In mathematics, Mittag-Leffler summation is any of several variations of the Borel summation method for summing possibly divergent formal power series, introduced by Gösta Mittag-Leffler (1908).
Let
$y(z) = \sum_{k=0}^{\infty} y_k z^k$
be a formal power series in $z$.
Define the transform $\mathcal{B}_{\alpha}y$ of $y$ by
$\mathcal{B}_{\alpha}y(t) = \sum_{k=0}^{\infty} \frac{y_k}{\Gamma(1+\alpha k)}\, t^k.$
Then the Mittag-Leffler sum of $y$ is given by
$\lim_{\alpha \to 0} \mathcal{B}_{\alpha}y(z),$
if each sum converges and the limit exists.
A closely related summation method, also called Mittag-Leffler summation, is given as follows (Sansone & Gerretsen 1960).
Suppose that the Borel transform $\mathcal{B}_1 y(z)$ converges to an analytic function near 0 that can be analytically continued along the positive real axis to a function growing sufficiently slowly that the following integral is well defined (as an improper integral). Then the Mittag-Leffler sum of $y$ is given by
$\int_0^{\infty} e^{-t}\, \mathcal{B}_{\alpha}y(t^{\alpha}z)\, dt.$
When $\alpha = 1$ this is the same as Borel summation.
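As a concrete illustration of the $\alpha = 1$ (Borel) case, the following sketch (not from the article; the function name and numerical parameters are ours) evaluates the Borel sum of the geometric series $\sum_k z^k$ at $z = -2$, where the series itself diverges. Its Borel transform is $\mathcal{B}_1 y(t) = \sum_k (zt)^k/k! = e^{zt}$, so the sum reduces to an elementary improper integral:

```python
import math

def borel_sum_geometric(z, t_max=40.0, steps=200_000):
    """Numerically evaluate the alpha = 1 (Borel) Mittag-Leffler sum of the
    geometric series sum_k z^k.  Its Borel transform is
    B_1 y(t) = sum_k (z t)^k / k! = exp(z t), so the sum is the improper
    integral of exp(-t) * exp(z t) over [0, inf), truncated at t_max and
    computed with the composite trapezoidal rule."""
    h = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0   # trapezoid endpoint weights
        total += w * math.exp(-t) * math.exp(z * t)
    return total * h

# The series 1 + z + z^2 + ... diverges at z = -2, but its Borel sum
# recovers the analytic continuation 1/(1 - z) = 1/3.
print(borel_sum_geometric(-2.0))  # ~ 0.3333333
```

The truncation at `t_max = 40` is harmless here because the integrand decays like $e^{-3t}$; for other $z$ the decay rate, and hence a suitable cutoff, would differ.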
Source: https://en.wikipedia.org/wiki/Mittag-Leffler_summation
In complex analysis, the Phragmén–Lindelöf principle (or method), first formulated by Lars Edvard Phragmén (1863–1937) and Ernst Leonard Lindelöf (1870–1946) in 1908, is a technique which employs an auxiliary, parameterized function to prove the boundedness of a holomorphic function $f$ (i.e., $|f(z)| < M$ for $z \in \Omega$) on an unbounded domain $\Omega$ when an additional (usually mild) condition constraining the growth of $|f|$ on $\Omega$ is given. It is a generalization of the maximum modulus principle, which is only applicable to bounded domains.
In the theory of complex functions, it is known that the modulus (absolute value) of a holomorphic (complex differentiable) function in the interior of a bounded region is bounded by its modulus on the boundary of the region. More precisely, if a non-constant function $f : \mathbb{C} \to \mathbb{C}$ is holomorphic in a bounded region[1] $\Omega$ and continuous on its closure $\overline{\Omega} = \Omega \cup \partial\Omega$, then $|f(z_0)| < \sup_{z \in \partial\Omega} |f(z)|$ for all $z_0 \in \Omega$. This is known as the maximum modulus principle. (In fact, since $\overline{\Omega}$ is compact and $|f|$ is continuous, there actually exists some $w_0 \in \partial\Omega$ such that $|f(w_0)| = \sup_{z \in \Omega} |f(z)|$.) The maximum modulus principle is generally used to conclude that a holomorphic function is bounded in a region after showing that it is bounded on its boundary.
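A quick numerical sketch (illustrative only; the sampling scheme is ours) of the maximum modulus principle for $f(z) = e^z$ on the closed unit disk: since $|e^z| = e^{\operatorname{Re} z}$, the maximum over the disk is attained at the boundary point $z = 1$, and every interior sample is strictly smaller.

```python
import cmath, math

f = lambda z: cmath.exp(z)  # holomorphic on all of C

# modulus of f on the boundary circle |z| = 1
boundary = [abs(f(cmath.rect(1.0, 2 * math.pi * k / 720))) for k in range(720)]

# modulus of f at interior sample points with |z| = r < 1
interior = [abs(f(cmath.rect(r, 2 * math.pi * k / 72)))
            for r in (0.0, 0.3, 0.6, 0.9) for k in range(72)]

print(max(boundary))                  # 2.718281828... = e, attained at z = 1
print(max(interior) < max(boundary))  # True: the interior max is strictly smaller
```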
However, the maximum modulus principle cannot be applied to an unbounded region of the complex plane. As a concrete example, let us examine the behavior of the holomorphic function $f(z) = \exp(\exp(z))$ in the unbounded strip
$S = \{z : |\operatorname{Im}(z)| < \pi/2\}.$
Although $|f(x \pm \pi i/2)| = 1$, so that $|f|$ is bounded on the boundary $\partial S$, $|f|$ grows rapidly without bound when $|z| \to \infty$ along the positive real axis. The difficulty here stems from the extremely fast growth of $|f|$ along the positive real axis. If the growth rate of $|f|$ is guaranteed to not be "too fast," as specified by an appropriate growth condition, the Phragmén–Lindelöf principle can be applied to show that boundedness of $f$ on the region's boundary implies that $f$ is in fact bounded in the whole region, effectively extending the maximum modulus principle to unbounded regions.
Suppose we are given a holomorphic function $f$ and an unbounded region $S$, and we want to show that $|f| \leq M$ on $S$. In a typical Phragmén–Lindelöf argument, we introduce a certain multiplicative factor $h_\epsilon$ satisfying $\lim_{\epsilon \to 0} h_\epsilon = 1$ to "subdue" the growth of $f$. In particular, $h_\epsilon$ is chosen such that (i): $fh_\epsilon$ is holomorphic for all $\epsilon > 0$ and $|fh_\epsilon| \leq M$ on the boundary $\partial S_{\mathrm{bdd}}$ of an appropriate bounded subregion $S_{\mathrm{bdd}} \subset S$; and (ii): the asymptotic behavior of $fh_\epsilon$ allows us to establish that $|fh_\epsilon| \leq M$ for $z \in S \setminus \overline{S_{\mathrm{bdd}}}$ (i.e., the unbounded part of $S$ outside the closure of the bounded subregion). This allows us to apply the maximum modulus principle to first conclude that $|fh_\epsilon| \leq M$ on $\overline{S_{\mathrm{bdd}}}$ and then extend the conclusion to all $z \in S$. Finally, we let $\epsilon \to 0$ so that $f(z)h_\epsilon(z) \to f(z)$ for every $z \in S$ in order to conclude that $|f| \leq M$ on $S$.
In the literature of complex analysis, there are many examples of the Phragmén–Lindelöf principle applied to unbounded regions of differing types, and a version of this principle may also be applied in a similar fashion to subharmonic and superharmonic functions.
To continue the example above, we can impose a growth condition on a holomorphic function $f$ that prevents it from "blowing up" and allows the Phragmén–Lindelöf principle to be applied. To this end, we now include the condition that
$|f(z)| \leq \exp\!\left(A e^{c\,|\operatorname{Re}(z)|}\right)$
for some real constants $c < 1$ and $A < \infty$, for all $z \in S$. It can then be shown that $|f(z)| \leq 1$ for all $z \in \partial S$ implies that $|f(z)| \leq 1$ in fact holds for all $z \in S$. Thus, we have the following proposition:
Proposition. Let
$S = \{z \in \mathbb{C} : |\operatorname{Im}(z)| < \pi/2\}.$
Let $f$ be holomorphic on $S$ and continuous on $\overline{S}$, and suppose there exist real constants $c < 1$, $A < \infty$ such that
$|f(z)| \leq \exp\!\left(A e^{c\,|\operatorname{Re}(z)|}\right)$
for all $z \in S$, and $|f(z)| \leq 1$ for all $z \in \overline{S} \setminus S = \partial S$. Then $|f(z)| \leq 1$ for all $z \in S$.
Note that this conclusion fails when $c = 1$, precisely as the motivating counterexample in the previous section demonstrates. The proof of this statement employs a typical Phragmén–Lindelöf argument:[2]
Proof: (Sketch) We fix $b \in (c, 1)$ and define for each $\epsilon > 0$ the auxiliary function $h_\epsilon$ by $h_\epsilon(z) = e^{-\epsilon(e^{bz} + e^{-bz})}$. Moreover, for a given $a > 0$, we define $S_a$ to be the open rectangle in the complex plane with vertices $\{a \pm i\pi/2,\ -a \pm i\pi/2\}$. Now, fix $\epsilon > 0$ and consider the function $fh_\epsilon$. Because one can show that $|h_\epsilon(z)| \leq 1$ for all $z \in \overline{S}$, it follows that $|f(z)h_\epsilon(z)| \leq 1$ for $z \in \partial S$. Moreover, one can show for $z \in \overline{S}$ that $|f(z)h_\epsilon(z)| \to 0$ uniformly as $|\operatorname{Re}(z)| \to \infty$. This allows us to find an $x_0$ such that $|f(z)h_\epsilon(z)| \leq 1$ whenever $z \in \overline{S}$ and $|\operatorname{Re}(z)| \geq x_0$. Now consider the bounded rectangular region $S_{x_0}$. We have established that $|f(z)h_\epsilon(z)| \leq 1$ for all $z \in \partial S_{x_0}$. Hence, the maximum modulus principle implies that $|f(z)h_\epsilon(z)| \leq 1$ for all $z \in \overline{S_{x_0}}$. Since $|f(z)h_\epsilon(z)| \leq 1$ also holds whenever $z \in S$ and $|\operatorname{Re}(z)| > x_0$, we have in fact shown that $|f(z)h_\epsilon(z)| \leq 1$ holds for all $z \in S$.
Finally, because $fh_\epsilon \to f$ as $\epsilon \to 0$, we conclude that $|f(z)| \leq 1$ for all $z \in S$. Q.E.D.
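The two estimates invoked in the sketch can be verified directly (a supplementary derivation, assuming the growth bound $|f(z)| \leq \exp(Ae^{c|\operatorname{Re}(z)|})$ with $c < 1$ from the proposition). Writing $z = x + iy$ with $|y| \leq \pi/2$ and $b \in (c,1)$,
\[
|h_\epsilon(z)| = \left|e^{-\epsilon\left(e^{bz}+e^{-bz}\right)}\right|
= e^{-\epsilon\,\operatorname{Re}\left(e^{bz}+e^{-bz}\right)}
= e^{-2\epsilon \cosh(bx)\cos(by)} \leq 1,
\]
since $|by| \leq b\pi/2 < \pi/2$ gives $\cos(by) \geq \cos(b\pi/2) > 0$. The same estimate yields the uniform decay: because $b > c$ and $2\cosh(bx) \geq e^{b|x|}$,
\[
|f(z)h_\epsilon(z)| \leq \exp\!\left(Ae^{c|x|} - 2\epsilon\cosh(bx)\cos(b\pi/2)\right) \to 0
\quad \text{as } |x| \to \infty,
\]
since the negative term in the exponent grows like $e^{b|x|}$, strictly faster than $e^{c|x|}$.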
A particularly useful statement proved using the Phragmén–Lindelöf principle bounds a holomorphic function on a sector of the complex plane if it is bounded on the sector's boundary. This statement can be used to give a complex-analytic proof of Hardy's uncertainty principle, which states that a function and its Fourier transform cannot both decay faster than exponentially.[3]
Proposition. Let $F$ be a function that is holomorphic in a sector
$S = \{z : \alpha < \arg z < \beta\}$
of central angle $\beta - \alpha = \pi/\lambda$, and continuous on its boundary. If
(1) $|F(z)| \leq 1$
for $z \in \partial S$, and
(2) $|F(z)| \leq C e^{|z|^{\rho}}$
for all $z \in S$, where $\rho \in [0, \lambda)$ and $C > 0$, then $|F(z)| \leq 1$ holds also for all $z \in S$.
Condition (2) can be relaxed to a weaker growth assumption, with the same conclusion.
In practice the point 0 is often transformed into the point ∞ of the Riemann sphere. This gives a version of the principle that applies to strips, for example bounded by two lines of constant real part in the complex plane. This special case is sometimes known as Lindelöf's theorem.
Carlson's theorem is an application of the principle to functions bounded on the imaginary axis.
Source: https://en.wikipedia.org/wiki/Phragm%C3%A9n%E2%80%93Lindel%C3%B6f_principle
In mathematics, Abelian and Tauberian theorems are theorems giving conditions for two methods of summing divergent series to give the same result, named after Niels Henrik Abel and Alfred Tauber. The original examples are Abel's theorem, showing that if a series converges to some limit then its Abel sum is the same limit, and Tauber's theorem, showing that if the Abel sum of a series exists and the coefficients are sufficiently small (o(1/n)) then the series converges to the Abel sum. More general Abelian and Tauberian theorems give similar results for more general summation methods.
There is not yet a clear distinction between Abelian and Tauberian theorems, and no generally accepted definition of what these terms mean. Often, a theorem is called "Abelian" if it shows that some summation method gives the usual sum for convergent series, and is called "Tauberian" if it gives conditions for a series summable by some method to be summable in the usual sense.
In the theory of integral transforms, Abelian theorems give the asymptotic behaviour of the transform based on properties of the original function. Conversely, Tauberian theorems give the asymptotic behaviour of the original function based on properties of the transform, but usually require some restrictions on the original function.[1]
For any summation method $L$, its Abelian theorem is the result that if $c = (c_n)$ is a convergent sequence, with limit $C$, then $L(c) = C$.
An example is given by the Cesàro method, in which $L$ is defined as the limit of the arithmetic means of the first $N$ terms of $c$, as $N$ tends to infinity. One can prove that if $c$ does converge to $C$, then so does the sequence $(d_N)$ where
$d_N = \frac{c_1 + c_2 + \cdots + c_N}{N}.$
To see that, subtract $C$ everywhere to reduce to the case $C = 0$. Then divide the sequence into an initial segment, and a tail of small terms: given any $\varepsilon > 0$ we can take $N$ large enough to make the initial segment of terms up to $c_N$ average to at most $\varepsilon/2$, while each term in the tail is bounded by $\varepsilon/2$ so that the average is also necessarily bounded.
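The Cesàro method can be illustrated with a short sketch (helper name ours): while the Abelian theorem above says convergent sequences keep their limit under averaging, the same averaging also assigns a value to a divergent case. The partial sums of Grandi's series $1 - 1 + 1 - \cdots$ oscillate between 1 and 0 and never converge, yet their arithmetic means tend to $1/2$:

```python
def cesaro_means(c):
    """Return the running arithmetic means d_N = (c_1 + ... + c_N) / N."""
    total, means = 0.0, []
    for n, x in enumerate(c, start=1):
        total += x
        means.append(total / n)
    return means

# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ...: they oscillate
# 1, 0, 1, 0, ..., but the Cesàro means converge to 1/2.
partial_sums = [(n + 1) % 2 for n in range(10_000)]
d = cesaro_means(partial_sums)
print(d[-1])  # 0.5
```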
The name derives from Abel's theorem on power series. In that case $L$ is the radial limit (thought of within the complex unit disk), where we let $r$ tend to the limit 1 from below along the real axis in the power series with term
$a_n z^n$
and set $z = r \cdot e^{i\theta}$. That theorem has its main interest in the case that the power series has radius of convergence exactly 1: if the radius of convergence is greater than one, the convergence of the power series is uniform for $r$ in $[0, 1]$ so that the sum is automatically continuous and it follows directly that the limit as $r$ tends up to 1 is simply the sum of the $a_n$. When the radius is 1 the power series will have some singularity on $|z| = 1$; the assertion is that, nonetheless, if the sum of the $a_n$ exists, it is equal to the limit over $r$. This therefore fits exactly into the abstract picture.
Partial converses to Abelian theorems are called Tauberian theorems. The original result of Alfred Tauber (1897)[2] stated that if we assume also
$a_n = o(1/n)$
(see Little o notation) and the radial limit exists, then the series obtained by setting $z = 1$ is actually convergent. This was strengthened by John Edensor Littlewood: we need only assume $a_n = O(1/n)$. A sweeping generalization is the Hardy–Littlewood Tauberian theorem.
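The radial-limit (Abel) method itself can be sketched numerically (illustrative code, names ours) on Grandi's series $\sum (-1)^n$: the power series $\sum (-1)^n r^n = 1/(1+r)$ converges for $|r| < 1$, and its radial limit as $r \to 1^-$ is $1/2$, the Abel sum of the divergent series.

```python
def abel_partial(r, terms=100_000):
    """Evaluate sum (-1)^n r^n by brute-force truncation; for |r| < 1 the
    tail is negligible at this many terms, and the closed form is 1/(1+r)."""
    return sum((-r) ** n for n in range(terms))

for r in (0.9, 0.99, 0.999):
    print(r, abel_partial(r))  # values approach 0.5 as r -> 1 from below
```

Note that no Tauberian condition holds here: the coefficients $(-1)^n$ are not $o(1/n)$ (nor $O(1/n)$), and indeed the series itself diverges even though its Abel sum exists.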
In the abstract setting, therefore, an Abelian theorem states that the domain of $L$ contains the convergent sequences, and its values there are equal to those of the $\lim$ functional. A Tauberian theorem states, under some growth condition, that the domain of $L$ is exactly the convergent sequences and no more.
If one thinks of $L$ as some generalised type of weighted average, taken to the limit, a Tauberian theorem allows one to discard the weighting, under the correct hypotheses. There are many applications of this kind of result in number theory, in particular in handling Dirichlet series.
The development of the field of Tauberian theorems received a fresh turn with Norbert Wiener's very general results, namely Wiener's Tauberian theorem and its large collection of corollaries.[3] The central theorem can now be proved by Banach algebra methods, and contains much, though not all, of the previous theory.
Source: https://en.wikipedia.org/wiki/Abelian_and_tauberian_theorems
In mathematics and numerical analysis, the van Wijngaarden transformation is a variant on the Euler transform used to accelerate the convergence of an alternating series.
One algorithm to compute Euler's transform runs as follows:
Compute a row of partial sums
$s_{0,k} = \sum_{n=0}^{k} (-1)^n a_n$
and form rows of averages between neighbors,
$s_{j+1,k} = \frac{s_{j,k} + s_{j,k+1}}{2}.$
The first column $s_{j,0}$ then contains the partial sums of the Euler transform.
Adriaan van Wijngaarden's contribution was to point out that it is better not to carry this procedure through to the very end, but to stop two-thirds of the way.[1] If $a_0, a_1, \ldots, a_{12}$ are available, then $s_{8,4}$ is almost always a better approximation to the sum than $s_{12,0}$. In many cases the diagonal terms do not converge in one cycle, so the process of averaging is to be repeated with the diagonal terms by bringing them into a row. (For example, this will be needed in a geometric series with ratio $-4$.) This process of successively averaging the averages of partial sums can be replaced by using a formula to calculate the diagonal terms directly.
For a simple but concrete example, recall the Leibniz formula for $\pi$:
$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots$
The algorithm described above produces a triangular table of partial sums and their successive averages; the resulting algorithmic outputs converge to $\pi$ far faster than the raw partial sums do.
Source: https://en.wikipedia.org/wiki/Van_Wijngaarden_transformation
In mathematics, complex geometry is the study of geometric structures and constructions arising out of, or described by, the complex numbers. In particular, complex geometry is concerned with the study of spaces such as complex manifolds and complex algebraic varieties, functions of several complex variables, and holomorphic constructions such as holomorphic vector bundles and coherent sheaves. Application of transcendental methods to algebraic geometry falls in this category, together with more geometric aspects of complex analysis.
Complex geometry sits at the intersection of algebraic geometry, differential geometry, and complex analysis, and uses tools from all three areas. Because of the blend of techniques and ideas from various areas, problems in complex geometry are often more tractable or concrete than in general. For example, the classification of complex manifolds and complex algebraic varieties through the minimal model program and the construction of moduli spaces sets the field apart from differential geometry, where the classification of possible smooth manifolds is a significantly harder problem. Additionally, the extra structure of complex geometry allows, especially in the compact setting, for global analytic results to be proven with great success, including Shing-Tung Yau's proof of the Calabi conjecture, the Hitchin–Kobayashi correspondence, the nonabelian Hodge correspondence, and existence results for Kähler–Einstein metrics and constant scalar curvature Kähler metrics. These results often feed back into complex algebraic geometry; for example, the recent classification of Fano manifolds using K-stability has benefited tremendously both from techniques in analysis and in pure birational geometry.
Complex geometry has significant applications to theoretical physics, where it is essential in understanding conformal field theory, string theory, and mirror symmetry. It is often a source of examples in other areas of mathematics: in representation theory, where generalized flag varieties may be studied using complex geometry, leading to the Borel–Weil–Bott theorem; in symplectic geometry, where Kähler manifolds are symplectic; in Riemannian geometry, where complex manifolds provide examples of exotic metric structures such as Calabi–Yau manifolds and hyperkähler manifolds; and in gauge theory, where holomorphic vector bundles often admit solutions to important differential equations arising out of physics such as the Yang–Mills equations. Complex geometry is additionally impactful in pure algebraic geometry, where analytic results in the complex setting, such as Hodge theory of Kähler manifolds, inspire understanding of Hodge structures for varieties and schemes as well as p-adic Hodge theory; deformation theory for complex manifolds inspires understanding of the deformation theory of schemes; and results about the cohomology of complex manifolds inspired the formulation of the Weil conjectures and Grothendieck's standard conjectures. On the other hand, results and techniques from many of these fields often feed back into complex geometry: for example, developments in the mathematics of string theory and mirror symmetry have revealed much about the nature of Calabi–Yau manifolds, which string theorists predict should have the structure of Lagrangian fibrations through the SYZ conjecture, and the development of Gromov–Witten theory of symplectic manifolds has led to advances in enumerative geometry of complex varieties.
The Hodge conjecture, one of the millennium prize problems, is a problem in complex geometry.[1]
Broadly, complex geometry is concerned with spaces and geometric objects which are modelled, in some sense, on the complex plane. Features of the complex plane and complex analysis of a single variable, such as an intrinsic notion of orientability (that is, being able to consistently rotate 90 degrees counterclockwise at every point in the complex plane), and the rigidity of holomorphic functions (that is, the existence of a single complex derivative implies complex differentiability to all orders) are seen to manifest in all forms of the study of complex geometry. As an example, every complex manifold is canonically orientable, and a form of Liouville's theorem holds on compact complex manifolds or projective complex algebraic varieties.
Complex geometry is different in flavour to what might be called real geometry, the study of spaces based around the geometric and analytical properties of the real number line. For example, whereas smooth manifolds admit partitions of unity, collections of smooth functions which can be identically equal to one on some open set, and identically zero elsewhere, complex manifolds admit no such collections of holomorphic functions. Indeed, this is a manifestation of the identity theorem, a typical result in complex analysis of a single variable. In some sense, the novelty of complex geometry may be traced back to this fundamental observation.
It is true that every complex manifold is in particular a real smooth manifold. This is because the complex plane $\mathbb{C}$ is, after forgetting its complex structure, isomorphic to the real plane $\mathbb{R}^2$. However, complex geometry is not typically seen as a particular sub-field of differential geometry, the study of smooth manifolds. In particular, Serre's GAGA theorem says that every projective analytic variety is actually an algebraic variety, and the study of holomorphic data on an analytic variety is equivalent to the study of algebraic data.
This equivalence indicates that complex geometry is in some sense closer to algebraic geometry than to differential geometry. Another example of this which links back to the nature of the complex plane is that, in complex analysis of a single variable, singularities of meromorphic functions are readily describable. In contrast, the possible singular behaviour of a continuous real-valued function is much more difficult to characterise. As a result of this, one can readily study singular spaces in complex geometry, such as singular complex analytic varieties or singular complex algebraic varieties, whereas in differential geometry the study of singular spaces is often avoided.
In practice, complex geometry sits in the intersection of differential geometry, algebraic geometry, and analysis in several complex variables, and a complex geometer uses tools from all three fields to study complex spaces. Typical directions of interest in complex geometry involve classification of complex spaces, the study of holomorphic objects attached to them (such as holomorphic vector bundles and coherent sheaves), and the intimate relationships between complex geometric objects and other areas of mathematics and physics.
Complex geometry is concerned with the study of complex manifolds, and complex algebraic and complex analytic varieties. In this section, these types of spaces are defined and the relationships between them presented.
A complex manifold is a topological space $X$ such that: $X$ is Hausdorff and second countable, and $X$ is locally homeomorphic to open subsets of $\mathbb{C}^n$ by charts whose transition functions are biholomorphic.
Notice that since every biholomorphism is a diffeomorphism, and $\mathbb{C}^n$ is isomorphic as a real vector space to $\mathbb{R}^{2n}$, every complex manifold of dimension $n$ is in particular a smooth manifold of dimension $2n$, which is always an even number.
In contrast to complex manifolds, which are always smooth, complex geometry is also concerned with possibly singular spaces. An affine complex analytic variety is a subset $X \subseteq \mathbb{C}^n$ such that about each point $p \in X$, there is an open neighbourhood $U$ of $p$ and a collection of finitely many holomorphic functions $f_1, \dots, f_k : U \to \mathbb{C}$ such that $X \cap U = \{z \in U \mid f_1(z) = \cdots = f_k(z) = 0\} = Z(f_1, \dots, f_k)$. By convention we also require the set $X$ to be irreducible. A point $p \in X$ is singular if the Jacobian matrix of the vector of holomorphic functions $(f_1, \dots, f_k)$ does not have full rank at $p$, and non-singular otherwise. A projective complex analytic variety is a subset $X \subseteq \mathbb{CP}^n$ of complex projective space that is, in the same way, locally given by the zeroes of a finite collection of holomorphic functions on open subsets of $\mathbb{CP}^n$.
One may similarly define an affine complex algebraic variety to be a subset $X \subseteq \mathbb{C}^n$ which is locally given as the zero set of finitely many polynomials in $n$ complex variables. To define a projective complex algebraic variety, one requires the subset $X \subseteq \mathbb{CP}^n$ to locally be given by the zero set of finitely many homogeneous polynomials.
In order to define a general complex algebraic or complex analytic variety, one requires the notion of a locally ringed space. A complex algebraic/analytic variety is a locally ringed space $(X, \mathcal{O}_X)$ which is locally isomorphic as a locally ringed space to an affine complex algebraic/analytic variety. In the analytic case, one typically allows $X$ to have a topology that is locally equivalent to the subspace topology due to the identification with open subsets of $\mathbb{C}^n$, whereas in the algebraic case $X$ is often equipped with a Zariski topology. Again we also by convention require this locally ringed space to be irreducible.
Since the definition of a singular point is local, the definition given for an affine analytic/algebraic variety applies to the points of any complex analytic or algebraic variety. The set of points of a variety $X$ which are singular is called the singular locus, denoted $X^{\mathrm{sing}}$, and the complement is the non-singular or smooth locus, denoted $X^{\mathrm{nonsing}}$. We say a complex variety is smooth or non-singular if its singular locus is empty, that is, if it is equal to its non-singular locus.
By the implicit function theorem for holomorphic functions, every complex manifold is in particular a non-singular complex analytic variety, but is not in general affine or projective. By Serre's GAGA theorem, every projective complex analytic variety is actually a projective complex algebraic variety. When a complex variety is non-singular, it is a complex manifold. More generally, the non-singular locus of any complex variety is a complex manifold.
Complex manifolds may be studied from the perspective of differential geometry, whereby they are equipped with extra geometric structures such as a Riemannian metric or symplectic form. In order for this extra structure to be relevant to complex geometry, one should ask for it to be compatible with the complex structure in a suitable sense. A Kähler manifold is a complex manifold with a Riemannian metric and symplectic structure compatible with the complex structure. Every complex submanifold of a Kähler manifold is Kähler, and so in particular every non-singular affine or projective complex variety is Kähler, after restricting the standard Hermitian metric on $\mathbb{C}^n$ or the Fubini–Study metric on $\mathbb{CP}^n$ respectively.
Other important examples of Kähler manifolds include Riemann surfaces, K3 surfaces, and Calabi–Yau manifolds.
Serre's GAGA theorem asserts that projective complex analytic varieties are actually algebraic. Whilst this is not strictly true for affine varieties, there is a class of complex manifolds that act very much like affine complex algebraic varieties, called Stein manifolds. A manifold $X$ is Stein if it is holomorphically convex and holomorphically separable (see the article on Stein manifolds for the technical definitions). It can be shown, however, that this is equivalent to $X$ being a closed complex submanifold of $\mathbb{C}^n$ for some $n$. Another way in which Stein manifolds are similar to affine complex algebraic varieties is that Cartan's theorems A and B hold for Stein manifolds.
Examples of Stein manifolds include non-compact Riemann surfaces and non-singular affine complex algebraic varieties.
A special class of complex manifolds is hyper-Kähler manifolds, which are Riemannian manifolds admitting three distinct compatible integrable almost complex structures $I, J, K$ which satisfy the quaternionic relations $I^2 = J^2 = K^2 = IJK = -\operatorname{Id}$. Thus, hyper-Kähler manifolds are Kähler manifolds in three different ways, and subsequently have a rich geometric structure.
Examples of hyper-Kähler manifolds include ALE spaces, K3 surfaces, Higgs bundle moduli spaces, quiver varieties, and many other moduli spaces arising out of gauge theory and representation theory.
As mentioned, a particular class of Kähler manifolds is given by Calabi–Yau manifolds. These are Kähler manifolds with trivial canonical bundle $K_X = \Lambda^n T^*_{1,0}X$. Typically the definition of a Calabi–Yau manifold also requires $X$ to be compact. In this case Yau's proof of the Calabi conjecture implies that $X$ admits a Kähler metric with vanishing Ricci curvature, and this may be taken as an equivalent definition of Calabi–Yau.
Calabi–Yau manifolds have found use in string theory and mirror symmetry, where they are used to model the extra 6 dimensions of spacetime in 10-dimensional models of string theory. Examples of Calabi–Yau manifolds are given by elliptic curves, K3 surfaces, and complex Abelian varieties.
A complex Fano variety is a complex algebraic variety with ample anti-canonical line bundle (that is, $K_X^*$ is ample). Fano varieties are of considerable interest in complex algebraic geometry, and in particular birational geometry, where they often arise in the minimal model program. Fundamental examples of Fano varieties are given by projective space $\mathbb{CP}^n$, where $K = \mathcal{O}(-n-1)$, and smooth hypersurfaces of $\mathbb{CP}^n$ of degree less than $n+1$.
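The degree bound for hypersurfaces can be checked with the adjunction formula (a standard computation, sketched here): for a smooth hypersurface $X \subset \mathbb{CP}^n$ of degree $d$,
\[
K_X = \left(K_{\mathbb{CP}^n} \otimes \mathcal{O}(d)\right)\big|_X = \mathcal{O}(d - n - 1)\big|_X,
\]
so the anti-canonical bundle $K_X^* = \mathcal{O}(n + 1 - d)|_X$ is ample precisely when $d < n + 1$.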
Toric varieties are complex algebraic varieties of dimension $n$ containing an open dense subset biholomorphic to $(\mathbb{C}^*)^n$, equipped with an action of $(\mathbb{C}^*)^n$ which extends the action on the open dense subset. A toric variety may be described combinatorially by its toric fan, and at least when it is non-singular, by a moment polytope. This is a polytope in $\mathbb{R}^n$ with the property that any vertex may be put into the standard form of the vertex of the positive orthant by the action of $\operatorname{GL}(n, \mathbb{Z})$. The toric variety can be obtained as a suitable space which fibres over the polytope.
Many constructions that are performed on toric varieties admit alternate descriptions in terms of the combinatorics and geometry of the moment polytope or its associated toric fan. This makes toric varieties a particularly attractive test case for many constructions in complex geometry. Examples of toric varieties include complex projective spaces and bundles over them.
Due to the rigidity of holomorphic functions and complex manifolds, the techniques typically used to study complex manifolds and complex varieties differ from those used in regular differential geometry, and are closer to techniques used in algebraic geometry. For example, in differential geometry, many problems are approached by taking local constructions and patching them together globally using partitions of unity. Partitions of unity do not exist in complex geometry, and so the problem of when local data may be glued into global data is more subtle. Precisely when local data may be patched together is measured by sheaf cohomology, and sheaves and their cohomology groups are major tools.
For example, famous problems in the analysis of several complex variables preceding the introduction of modern definitions are the Cousin problems, asking precisely when local meromorphic data may be glued to obtain a global meromorphic function. These old problems can be solved simply after the introduction of sheaves and cohomology groups.
Special examples of sheaves used in complex geometry include holomorphicline bundles(and thedivisorsassociated to them),holomorphic vector bundles, andcoherent sheaves. Since sheaf cohomology measures obstructions in complex geometry, one technique that is used is to prove vanishing theorems. Examples of vanishing theorems in complex geometry include theKodaira vanishing theoremfor the cohomology of line bundles on compact Kähler manifolds, andCartan's theorems A and Bfor the cohomology of coherent sheaves on affine complex varieties.
Complex geometry also makes use of techniques arising out of differential geometry and analysis. For example, theHirzebruch-Riemann-Roch theorem, a special case of theAtiyah-Singer index theorem, computes theholomorphic Euler characteristicof a holomorphic vector bundle in terms of characteristic classes of the underlying smooth complex vector bundle.
One major theme in complex geometry isclassification. Due to the rigid nature of complex manifolds and varieties, the problem of classifying these spaces is often tractable. Classification in complex and algebraic geometry often occurs through the study ofmoduli spaces, which themselves are complex manifolds or varieties whose points classify other geometric objects arising in complex geometry.
The termmoduliwas coined byBernhard Riemannduring his original work on Riemann surfaces. The classification theory is most well-known for compact Riemann surfaces. By theclassification of closed oriented surfaces, compact Riemann surfaces come in a countable number of discrete types, measured by theirgenusg{\displaystyle g}, which is a non-negative integer counting the number of holes in the given compact Riemann surface.
The classification essentially follows from theuniformization theorem, and is as follows:[2][3][4]
Complex geometry is concerned not only with complex spaces, but other holomorphic objects attached to them. The classification of holomorphic line bundles on a complex varietyX{\displaystyle X}is given by thePicard varietyPic(X){\displaystyle \operatorname {Pic} (X)}ofX{\displaystyle X}.
The Picard variety can be easily described in the case where X{\displaystyle X} is a compact Riemann surface of genus g{\displaystyle g}. Namely, in this case the Picard variety is a disjoint union of complex Abelian varieties, each of which is isomorphic to the Jacobian variety of the curve, classifying divisors of degree zero up to linear equivalence. In differential-geometric terms, these Abelian varieties are complex tori: complex manifolds diffeomorphic to (S1)2g{\displaystyle (S^{1})^{2g}}, possibly with one of many different complex structures.
By the Torelli theorem, a compact Riemann surface is determined by its Jacobian variety; this demonstrates one reason why the study of structures on complex spaces can be useful, in that it can allow one to classify the spaces themselves.
|
https://en.wikipedia.org/wiki/Complex_geometry
|
Inmathematics,hypercomplex analysisis the extension ofcomplex analysisto thehypercomplex numbers. The first instance is functions of aquaternion variable, where the argument is aquaternion(in this case, the sub-field of hypercomplex analysis is calledquaternionic analysis). A second instance involves functions of amotor variablewhere arguments aresplit-complex numbers.
Inmathematical physics, there are hypercomplex systems calledClifford algebras. The study of functions with arguments from a Clifford algebra is calledClifford analysis.
Amatrixmay be considered a hypercomplex number. For example, the study of functions of 2 × 2realmatrices shows that thetopologyof thespaceof hypercomplex numbers determines the function theory. Functions such assquare root of a matrix,matrix exponential, andlogarithm of a matrixare basic examples of hypercomplex analysis.[1]The function theory ofdiagonalizable matricesis particularly transparent since they haveeigendecompositions.[2]SupposeT=∑i=1NλiEi{\displaystyle \textstyle T=\sum _{i=1}^{N}\lambda _{i}E_{i}}where theEiareprojections. Then for anypolynomialf{\displaystyle f},f(T)=∑i=1Nf(λi)Ei.{\displaystyle f(T)=\sum _{i=1}^{N}f(\lambda _{i})E_{i}.}
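The spectral formula above can be checked numerically. The following is a minimal sketch (not from the source) using numpy: for a symmetric, hence diagonalizable, matrix T, the eigendecomposition yields projections E_i, and evaluating a polynomial f on the eigenvalues reproduces f(T) computed by matrix arithmetic.

```python
import numpy as np

# A symmetric (hence diagonalizable) matrix T.
T = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Eigendecomposition T = sum_i lambda_i E_i, where E_i = v_i v_i^T is the
# orthogonal projection onto the i-th eigenspace (eigh gives orthonormal v_i).
eigvals, eigvecs = np.linalg.eigh(T)
projections = [np.outer(v, v) for v in eigvecs.T]

def f(x):
    # Any polynomial works; here f(x) = x^2 + 2x + 3.
    return x**2 + 2*x + 3

# f(T) via the spectral formula f(T) = sum_i f(lambda_i) E_i ...
f_T_spectral = sum(f(lam) * E for lam, E in zip(eigvals, projections))

# ... agrees with evaluating the polynomial directly in the matrix.
f_T_direct = T @ T + 2*T + 3*np.eye(2)

assert np.allclose(f_T_spectral, f_T_direct)
```

The projections also resolve the identity, which is what makes the formula work for any f.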
The modern terminology for a "system of hypercomplex numbers" is analgebraover the real numbers, and the algebras used in applications are oftenBanach algebrassinceCauchy sequencescan be taken to beconvergent. Then the function theory is enriched bysequencesandseries. In this context the extension ofholomorphic functionsof acomplexvariable is developed as theholomorphic functional calculus. Hypercomplex analysis on Banach algebras is calledfunctional analysis.
|
https://en.wikipedia.org/wiki/Hypercomplex_analysis
|
Vector calculusorvector analysisis a branch of mathematics concerned with thedifferentiationandintegrationofvector fields, primarily in three-dimensionalEuclidean space,R3.{\displaystyle \mathbb {R} ^{3}.}[1]The termvector calculusis sometimes used as a synonym for the broader subject ofmultivariable calculus, which spans vector calculus as well aspartial differentiationandmultiple integration. Vector calculus plays an important role indifferential geometryand in the study ofpartial differential equations. It is used extensively in physics and engineering, especially in the description ofelectromagnetic fields,gravitational fields, andfluid flow.
Vector calculus was developed from the theory ofquaternionsbyJ. Willard GibbsandOliver Heavisidenear the end of the 19th century, and most of the notation and terminology was established by Gibbs andEdwin Bidwell Wilsonin their 1901 book,Vector Analysis, though earlier mathematicians such asIsaac Newtonpioneered the field.[2]In its standard form using thecross product, vector calculus does not generalize to higher dimensions, but the alternative approach ofgeometric algebra, which uses theexterior product, does (see§ Generalizationsbelow for more).
Ascalar fieldassociates ascalarvalue to every point in a space. The scalar is a mathematical number representing aphysical quantity. Examples of scalar fields in applications include thetemperaturedistribution throughout space, thepressuredistribution in a fluid, and spin-zero quantum fields (known asscalar bosons), such as theHiggs field. These fields are the subject ofscalar field theory.
Avector fieldis an assignment of avectorto each point in aspace.[3]A vector field in the plane, for instance, can be visualized as a collection of arrows with a givenmagnitudeand direction each attached to a point in the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout space, or the strength and direction of someforce, such as themagneticorgravitationalforce, as it changes from point to point. This can be used, for example, to calculateworkdone over a line.
In more advanced treatments, one further distinguishespseudovectorfields andpseudoscalarfields, which are identical to vector fields and scalar fields, except that they change sign under an orientation-reversing map: for example, thecurlof a vector field is a pseudovector field, and if one reflects a vector field, the curl points in the opposite direction. This distinction is clarified and elaborated ingeometric algebra, as described below.
The algebraic (non-differential) operations in vector calculus are referred to asvector algebra, being defined for a vector space and then appliedpointwiseto a vector field. The basic algebraic operations consist of:
Also commonly used are the twotriple products:
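The two standard triple products are the scalar triple product a · (b × c) and the vector triple product a × (b × c). A short numpy sketch (ours, not from the source) verifies their defining identities: the scalar triple product equals the determinant of the matrix with rows a, b, c, and the vector triple product satisfies the "BAC–CAB" rule.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 4.0])
c = np.array([2.0, 0.0, 1.0])

# Scalar triple product: a . (b x c) is the signed volume of the
# parallelepiped spanned by a, b, c, i.e. det([a; b; c]).
stp = a @ np.cross(b, c)
assert np.isclose(stp, np.linalg.det(np.array([a, b, c])))

# Vector triple product ("BAC-CAB" identity): a x (b x c) = b(a.c) - c(a.b).
vtp = np.cross(a, np.cross(b, c))
assert np.allclose(vtp, b * (a @ c) - c * (a @ b))
```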
Vector calculus studies variousdifferential operatorsdefined on scalar or vector fields, which are typically expressed in terms of thedeloperator (∇{\displaystyle \nabla }), also known as "nabla". The three basicvector operatorsare:[4]
Also commonly used are the two Laplace operators:
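The basic vector operators obey identities such as curl(grad f) = 0, which can be checked numerically with central finite differences. The sketch below (our illustration, assuming numpy; the `curl` helper is ours) samples a scalar field on a 3D grid, takes its gradient with `np.gradient`, and confirms the curl of that gradient vanishes up to discretization error.

```python
import numpy as np

# Sample f(x, y, z) = x*y + z**2 on a uniform grid.
n, h = 40, 0.1
coords = np.arange(n) * h
X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")
f = X * Y + Z**2

# Gradient of f: three scalar fields (df/dx, df/dy, df/dz).
fx, fy, fz = np.gradient(f, h)

def curl(Fx, Fy, Fz, h):
    # curl F = (dFz/dy - dFy/dz, dFx/dz - dFz/dx, dFy/dx - dFx/dy)
    return (np.gradient(Fz, h, axis=1) - np.gradient(Fy, h, axis=2),
            np.gradient(Fx, h, axis=2) - np.gradient(Fz, h, axis=0),
            np.gradient(Fy, h, axis=0) - np.gradient(Fx, h, axis=1))

cx, cy, cz = curl(fx, fy, fz, h)

# The curl of a gradient vanishes; for this low-degree polynomial the
# finite differences are exact, so the result is zero to rounding error.
assert max(np.abs(cx).max(), np.abs(cy).max(), np.abs(cz).max()) < 1e-8
```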
A quantity called theJacobian matrixis useful for studying functions when both the domain and range of the function are multivariable, such as achange of variablesduring integration.
The three basic vector operators have corresponding theorems which generalize thefundamental theorem of calculusto higher dimensions:
In two dimensions, the divergence and curl theorems reduce to Green's theorem:
Linear approximations are used to replace complicated functions with linear functions that are almost the same. Given a differentiable functionf(x,y)with real values, one can approximatef(x,y)for(x,y)close to(a,b)by the formula
The right-hand side is the equation of the plane tangent to the graph ofz=f(x,y)at(a,b).
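As a numerical illustration of the tangent-plane formula (our example, not from the source), take f(x, y) = eˣ sin y near (a, b) = (0, π/2); the linear approximation agrees with f to second order in the step.

```python
import math

# Linear approximation f(x,y) ~ f(a,b) + fx(a,b)(x-a) + fy(a,b)(y-b)
# for f(x, y) = exp(x) * sin(y) near (a, b) = (0, pi/2).
a, b = 0.0, math.pi / 2

def f(x, y):
    return math.exp(x) * math.sin(y)

# Partial derivatives at (a, b), computed by hand:
# fx = exp(x) sin(y) -> 1,   fy = exp(x) cos(y) -> 0.
fx, fy = 1.0, 0.0

def linear_approx(x, y):
    return f(a, b) + fx * (x - a) + fy * (y - b)

x, y = 0.05, math.pi / 2 + 0.05
# The error of the tangent-plane approximation is second order in the step.
assert abs(f(x, y) - linear_approx(x, y)) < 1e-2
```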
For a continuously differentiablefunction of several real variables, a pointP(that is, a set of values for the input variables, which is viewed as a point inRn) iscriticalif all of thepartial derivativesof the function are zero atP, or, equivalently, if itsgradientis zero. The critical values are the values of the function at the critical points.
If the function issmooth, or, at least twice continuously differentiable, a critical point may be either alocal maximum, alocal minimumor asaddle point. The different cases may be distinguished by considering theeigenvaluesof theHessian matrixof second derivatives.
ByFermat's theorem, all localmaxima and minimaof a differentiable function occur at critical points. Therefore, to find the local maxima and minima, it suffices, theoretically, to compute the zeros of the gradient and the eigenvalues of the Hessian matrix at these zeros.
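The Hessian test can be illustrated with a short numpy sketch (ours, not from the source): f(x, y) = x² − y² has a critical point at the origin, and the mixed signs of the Hessian's eigenvalues identify it as a saddle point.

```python
import numpy as np

# f(x, y) = x**2 - y**2: the gradient (2x, -2y) vanishes at the origin,
# so the origin is a critical point. Its constant Hessian is:
hessian = np.array([[2.0, 0.0],
                    [0.0, -2.0]])
eigenvalues = np.linalg.eigvalsh(hessian)

# All eigenvalues positive => local minimum; all negative => local
# maximum; mixed signs => saddle point. Here the signs are mixed.
assert eigenvalues.min() < 0 < eigenvalues.max()
```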
Vector calculus can also be generalized to other3-manifoldsandhigher-dimensionalspaces.
Vector calculus is initially defined forEuclidean 3-space,R3,{\displaystyle \mathbb {R} ^{3},}which has additional structure beyond simply being a 3-dimensional real vector space, namely: anorm(giving a notion of length) defined via aninner product(thedot product), which in turn gives a notion of angle, and anorientation, which gives a notion of left-handed and right-handed. These structures give rise to avolume form, and also thecross product, which is used pervasively in vector calculus.
The gradient and divergence require only the inner product, while the curl and the cross product also require the handedness of the coordinate system to be taken into account (see Cross product § Handedness for more detail).
Vector calculus can be defined on other 3-dimensional real vector spaces if they have an inner product (or more generally a symmetricnondegenerate form) and an orientation; this is less data than an isomorphism to Euclidean space, as it does not require a set of coordinates (a frame of reference), which reflects the fact that vector calculus is invariant under rotations (thespecial orthogonal groupSO(3)).
More generally, vector calculus can be defined on any 3-dimensional orientedRiemannian manifold, or more generallypseudo-Riemannian manifold. This structure simply means that thetangent spaceat each point has an inner product (more generally, a symmetric nondegenerate form) and an orientation, or more globally that there is a symmetric nondegeneratemetric tensorand an orientation, and works because vector calculus is defined in terms of tangent vectors at each point.
Most of the analytic results are easily understood, in a more general form, using the machinery ofdifferential geometry, of which vector calculus forms a subset. Grad and div generalize immediately to other dimensions, as do the gradient theorem, divergence theorem, and Laplacian (yieldingharmonic analysis), while curl and cross product do not generalize as directly.
From a general point of view, the various fields in (3-dimensional) vector calculus are uniformly seen as being k-vector fields: scalar fields are 0-vector fields, vector fields are 1-vector fields, pseudovector fields are 2-vector fields, and pseudoscalar fields are 3-vector fields. In higher dimensions there are additional types of fields (scalar, vector, pseudovector and pseudoscalar fields correspond to 0, 1, n − 1 and n dimensions respectively, which is exhaustive only in dimension 3), so one can no longer work only with (pseudo)scalars and (pseudo)vectors.
In any dimension, assuming a nondegenerate form, grad of a scalar function is a vector field, and div of a vector field is a scalar function, but only in dimension 3 or 7[5] (and, trivially, in dimension 0 or 1) is the curl of a vector field a vector field, and only in 3 or 7 dimensions can a cross product be defined (generalizations in other dimensionalities either require n − 1 vectors to yield 1 vector, or are alternative Lie algebras, which are more general antisymmetric bilinear products). The generalization of grad and div, and how curl may be generalized, is elaborated at Curl § Generalizations; in brief, the curl of a vector field is a bivector field, which may be interpreted as the special orthogonal Lie algebra of infinitesimal rotations; however, this cannot be identified with a vector field because the dimensions differ – there are 3 dimensions of rotations in 3 dimensions, but 6 dimensions of rotations in 4 dimensions (and more generally (n choose 2) = n(n − 1)/2 dimensions of rotations in n dimensions).
There are two important alternative generalizations of vector calculus. The first,geometric algebra, usesk-vectorfields instead of vector fields (in 3 or fewer dimensions, everyk-vector field can be identified with a scalar function or vector field, but this is not true in higher dimensions). This replaces the cross product, which is specific to 3 dimensions, taking in two vector fields and giving as output a vector field, with theexterior product, which exists in all dimensions and takes in two vector fields, giving as output a bivector (2-vector) field. This product yieldsClifford algebrasas the algebraic structure on vector spaces (with an orientation and nondegenerate form). Geometric algebra is mostly used in generalizations of physics and other applied fields to higher dimensions.
The second generalization usesdifferential forms(k-covector fields) instead of vector fields ork-vector fields, and is widely used in mathematics, particularly indifferential geometry,geometric topology, andharmonic analysis, in particular yieldingHodge theoryon oriented pseudo-Riemannian manifolds. From this point of view, grad, curl, and div correspond to theexterior derivativeof 0-forms, 1-forms, and 2-forms, respectively, and the key theorems of vector calculus are all special cases of the general form ofStokes' theorem.
From the point of view of both of these generalizations, vector calculus implicitly identifies mathematically distinct objects, which makes the presentation simpler but the underlying mathematical structure and generalizations less clear.
From the point of view of geometric algebra, vector calculus implicitly identifiesk-vector fields with vector fields or scalar functions: 0-vectors and 3-vectors with scalars, 1-vectors and 2-vectors with vectors. From the point of view of differential forms, vector calculus implicitly identifiesk-forms with scalar fields or vector fields: 0-forms and 3-forms with scalar fields, 1-forms and 2-forms with vector fields. Thus for example the curl naturally takes as input a vector field or 1-form, but naturally has as output a 2-vector field or 2-form (hence pseudovector field), which is then interpreted as a vector field, rather than directly taking a vector field to a vector field; this is reflected in the curl of a vector field in higher dimensions not having as output a vector field.
|
https://en.wikipedia.org/wiki/Vector_calculus
|
Complex analysis, traditionally known as thetheory of functions of a complex variable, is the branch ofmathematicsthat investigatesfunctionsofcomplex numbers. It is useful in many branches of mathematics, includingnumber theoryandapplied mathematics; as well as inphysics, includinghydrodynamics,thermodynamics, andelectrical engineering.
See also:glossary of real and complex analysis.
|
https://en.wikipedia.org/wiki/List_of_complex_analysis_topics
|
Incomplex analysis, themonodromy theoremis an important result aboutanalytic continuationof acomplex-analytic functionto a larger set. The idea is that one can extend a complex-analytic function (from here on called simplyanalytic function) along curves starting in the original domain of the function and ending in the larger set. A potential problem of thisanalytic continuation along a curvestrategy is there are usually many curves which end up at the same point in the larger set. The monodromy theorem gives sufficient conditions for analytic continuation to give the same value at a given point regardless of the curve used to get there, so that the resulting extended analytic function is well-defined and single-valued.
Before stating this theorem it is necessary to define analytic continuation along a curve and study its properties.
The definition of analytic continuation along a curve is a bit technical, but the basic idea is that one starts with an analytic function defined around a point, and one extends that function along a curve via analytic functions defined on small overlapping disks covering that curve.
Formally, consider a curve (acontinuous function)γ:[0,1]→C.{\displaystyle \gamma :[0,1]\to \mathbb {C} .}Letf{\displaystyle f}be an analytic function defined on anopen diskU{\displaystyle U}centered atγ(0).{\displaystyle \gamma (0).}Ananalytic continuationof the pair(f,U){\displaystyle (f,U)}alongγ{\displaystyle \gamma }is a collection of pairs(ft,Ut){\displaystyle (f_{t},U_{t})}for0≤t≤1{\displaystyle 0\leq t\leq 1}such that
Analytic continuation along a curve is essentially unique, in the sense that given two analytic continuations(ft,Ut){\displaystyle (f_{t},U_{t})}and(gt,Vt){\displaystyle (g_{t},V_{t})}(0≤t≤1){\displaystyle (0\leq t\leq 1)}of(f,U){\displaystyle (f,U)}alongγ,{\displaystyle \gamma ,}the functionsf1{\displaystyle f_{1}}andg1{\displaystyle g_{1}}coincide onU1∩V1.{\displaystyle U_{1}\cap V_{1}.}Informally, this says that any two analytic continuations of(f,U){\displaystyle (f,U)}alongγ{\displaystyle \gamma }will end up with the same values in a neighborhood ofγ(1).{\displaystyle \gamma (1).}
If the curveγ{\displaystyle \gamma }is closed (that is,γ(0)=γ(1){\displaystyle \gamma (0)=\gamma (1)}), one need not havef0{\displaystyle f_{0}}equalf1{\displaystyle f_{1}}in a neighborhood ofγ(0).{\displaystyle \gamma (0).}For example, if one starts at a point(a,0){\displaystyle (a,0)}witha>0{\displaystyle a>0}and thecomplex logarithmdefined in a neighborhood of this point, and one letsγ{\displaystyle \gamma }be the circle of radiusa{\displaystyle a}centered at the origin (traveled counterclockwise from(a,0){\displaystyle (a,0)}), then by doing an analytic continuation along this curve one will end up with a value of the logarithm at(a,0){\displaystyle (a,0)}which is2πi{\displaystyle 2\pi i}plus the original value (see the second illustration on the right).
As noted earlier, two analytic continuations along the same curve yield the same result at the curve's endpoint. However, given two different curves branching out from the same point around which an analytic function is defined, with the curves reconnecting at the end, it is not true in general that the analytic continuations of that function along the two curves will yield the same value at their common endpoint.
Indeed, one can consider, as in the previous section, the complex logarithm defined in a neighborhood of a point(a,0){\displaystyle (a,0)}and the circle centered at the origin and radiusa.{\displaystyle a.}Then, it is possible to travel from(a,0){\displaystyle (a,0)}to(−a,0){\displaystyle (-a,0)}in two ways, counterclockwise, on the upper half-plane arc of this circle, and clockwise, on the lower half-plane arc. The values of the logarithm at(−a,0){\displaystyle (-a,0)}obtained by analytic continuation along these two arcs will differ by2πi.{\displaystyle 2\pi i.}
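This monodromy of the logarithm can be reproduced numerically. The sketch below (our illustration, not from the source) continues log z from (a, 0) along both half-circle arcs by summing the small local increments log(z_{k+1}/z_k), which uses only the principal branch on nearby points; the two continuations at (−a, 0) differ by 2πi.

```python
import cmath

a, n = 2.0, 10000  # radius of the circle; number of steps per arc

def continue_log(start_value, path):
    # Analytic continuation of log along a discretized curve: add the
    # increment log(z1/z0) for consecutive points. Since consecutive
    # points are close, z1/z0 is near 1 and the principal branch of
    # cmath.log is the correct local branch at every step.
    value = start_value
    for z0, z1 in zip(path, path[1:]):
        value += cmath.log(z1 / z0)
    return value

# Upper arc (counterclockwise) and lower arc (clockwise) from a to -a.
upper = [a * cmath.exp(1j * cmath.pi * k / n) for k in range(n + 1)]
lower = [a * cmath.exp(-1j * cmath.pi * k / n) for k in range(n + 1)]

log_upper = continue_log(cmath.log(a), upper)  # ends at log(a) + i*pi
log_lower = continue_log(cmath.log(a), lower)  # ends at log(a) - i*pi

# The two continuations at (-a, 0) differ by 2*pi*i, as the text states.
assert abs(log_upper - log_lower - 2j * cmath.pi) < 1e-9
```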
If, however, one can continuously deform one of the curves into another while keeping the starting points and ending points fixed, and analytic continuation is possible on each of the intermediate curves, then the analytic continuations along the two curves will yield the same results at their common endpoint. This is called themonodromy theoremand its statement is made precise below.
The monodromy theorem makes it possible to extend an analytic function to a larger set via curves connecting a point in the original domain of the function to points in the larger set. The theorem below, which states this, is also called the monodromy theorem.
|
https://en.wikipedia.org/wiki/Monodromy_theorem
|
TheRiemann–Roch theoremis an important theorem inmathematics, specifically incomplex analysisandalgebraic geometry, for the computation of the dimension of the space ofmeromorphic functionswith prescribed zeros and allowedpoles. It relates the complex analysis of a connectedcompactRiemann surfacewith the surface's purely topologicalgenusg, in a way that can be carried over into purely algebraic settings.
Initially proved asRiemann's inequalitybyRiemann (1857), the theorem reached its definitive form for Riemann surfaces after work ofRiemann's short-lived studentGustav Roch(1865). It was later generalized toalgebraic curves, to higher-dimensionalvarietiesand beyond.
ARiemann surfaceX{\displaystyle X}is atopological spacethat is locally homeomorphic to an open subset ofC{\displaystyle \mathbb {C} }, the set ofcomplex numbers. In addition, thetransition mapsbetween these open subsets are required to beholomorphic. The latter condition allows one to transfer the notions and methods ofcomplex analysisdealing with holomorphic andmeromorphic functionsonC{\displaystyle \mathbb {C} }to the surfaceX{\displaystyle X}. For the purposes of the Riemann–Roch theorem, the surfaceX{\displaystyle X}is always assumed to becompact. Colloquially speaking, thegenusg{\displaystyle g}of a Riemann surface is its number ofhandles; for example the genus of the Riemann surface shown at the right is three. More precisely, the genus is defined as half of the firstBetti number, i.e., half of theC{\displaystyle \mathbb {C} }-dimension of the firstsingular homologygroupH1(X,C){\displaystyle H_{1}(X,\mathbb {C} )}with complex coefficients. The genusclassifiescompact Riemann surfacesup tohomeomorphism, i.e., two such surfaces are homeomorphic if and only if their genus is the same. Therefore, the genus is an important topological invariant of a Riemann surface. On the other hand,Hodge theoryshows that the genus coincides with theC{\displaystyle \mathbb {C} }-dimension of the space of holomorphic one-forms onX{\displaystyle X}, so the genus also encodes complex-analytic information about the Riemann surface.[1]
AdivisorD{\displaystyle D}is an element of thefree abelian groupon the points of the surface. Equivalently, a divisor is a finite linear combination of points of the surface with integer coefficients.
Any meromorphic functionf{\displaystyle f}gives rise to a divisor denoted(f){\displaystyle (f)}defined as
whereR(f){\displaystyle R(f)}is the set of all zeroes and poles off{\displaystyle f}, andsν{\displaystyle s_{\nu }}is given by
The setR(f){\displaystyle R(f)}is known to be finite; this is a consequence ofX{\displaystyle X}being compact and the fact that the zeros of a (non-zero) holomorphic function do not have anaccumulation point. Therefore,(f){\displaystyle (f)}is well-defined. Any divisor of this form is called aprincipal divisor. Two divisors that differ by a principal divisor are calledlinearly equivalent. The divisor of a meromorphic1-formis defined similarly. A divisor of a global meromorphic 1-form is called thecanonical divisor(usually denotedK{\displaystyle K}). Any two meromorphic 1-forms will yield linearly equivalent divisors, so the canonical divisor is uniquely determined up to linear equivalence (hence "the" canonical divisor).
The symboldeg(D){\displaystyle \deg(D)}denotes thedegree(occasionally also called index) of the divisorD{\displaystyle D}, i.e. the sum of the coefficients occurring inD{\displaystyle D}. It can be shown that the divisor of a global meromorphic function always has degree 0, so the degree of a divisor depends only on its linear equivalence class.
The numberℓ(D){\displaystyle \ell (D)}is the quantity that is of primary interest: thedimension(overC{\displaystyle \mathbb {C} }) of the vector space of meromorphic functionsh{\displaystyle h}on the surface, such that all the coefficients of(h)+D{\displaystyle (h)+D}are non-negative. Intuitively, we can think of this as being all meromorphic functions whose poles at every point are no worse than the corresponding coefficient inD{\displaystyle D}; if the coefficient inD{\displaystyle D}atz{\displaystyle z}is negative, then we require thath{\displaystyle h}has a zero of at least thatmultiplicityatz{\displaystyle z}– if the coefficient inD{\displaystyle D}is positive,h{\displaystyle h}can have a pole of at most that order. The vector spaces for linearly equivalent divisors are naturally isomorphic through multiplication with the global meromorphic function (which is well-defined up to a scalar).
The Riemann–Roch theorem for a compact Riemann surface of genusg{\displaystyle g}with canonical divisorK{\displaystyle K}states
Typically, the numberℓ(D){\displaystyle \ell (D)}is the one of interest, whileℓ(K−D){\displaystyle \ell (K-D)}is thought of as a correction term (also called index of speciality[2][3]) so the theorem may be roughly paraphrased by saying
Because it is the dimension of a vector space, the correction termℓ(K−D){\displaystyle \ell (K-D)}is always non-negative, so that
This is calledRiemann's inequality.Roch's partof the statement is the description of the possible difference between the sides of the inequality. On a general Riemann surface of genusg{\displaystyle g},K{\displaystyle K}has degree2g−2{\displaystyle 2g-2}, independently of the meromorphic form chosen to represent the divisor. This follows from puttingD=K{\displaystyle D=K}in the theorem. In particular, as long asD{\displaystyle D}has degree at least2g−1{\displaystyle 2g-1}, the correction term is 0, so that
The theorem will now be illustrated for surfaces of low genus. There are also a number other closely related theorems: an equivalent formulation of this theorem usingline bundlesand a generalization of the theorem toalgebraic curves.
The theorem will be illustrated by picking a pointP{\displaystyle P}on the surface in question and regarding the sequence of numbers
i.e., the dimension of the space of functions that are holomorphic everywhere except atP{\displaystyle P}where the function is allowed to have a pole of order at mostn{\displaystyle n}. Forn=0{\displaystyle n=0}, the functions are thus required to beentire, i.e., holomorphic on the whole surfaceX{\displaystyle X}. ByLiouville's theorem, such a function is necessarily constant. Therefore,ℓ(0)=1{\displaystyle \ell (0)=1}. In general, the sequenceℓ(n⋅P){\displaystyle \ell (n\cdot P)}is an increasing sequence.
TheRiemann sphere(also calledcomplex projective line) issimply connectedand hence its first singular homology is zero. In particular its genus is zero. The sphere can be covered by two copies ofC{\displaystyle \mathbb {C} }, withtransition mapbeing given by
Therefore, the formω=dz{\displaystyle \omega =dz}on one copy ofC{\displaystyle \mathbb {C} }extends to a meromorphic form on the Riemann sphere: it has a double pole at infinity, since
Thus, its canonical divisor isK:=div(ω)=−2P{\displaystyle K:=\operatorname {div} (\omega )=-2P}(whereP{\displaystyle P}is the point at infinity).
Therefore, the theorem says that the sequenceℓ(n⋅P){\displaystyle \ell (n\cdot P)}reads
This sequence can also be read off from the theory ofpartial fractions. Conversely if this sequence starts this way, theng{\displaystyle g}must be zero.
The next case is a Riemann surface of genusg=1{\displaystyle g=1}, such as atorusC/Λ{\displaystyle \mathbb {C} /\Lambda }, whereΛ{\displaystyle \Lambda }is a two-dimensionallattice(a group isomorphic toZ2{\displaystyle \mathbb {Z} ^{2}}). Its genus is one: its first singular homology group is freely generated by two loops, as shown in the illustration at the right. The standard complex coordinatez{\displaystyle z}onC{\displaystyle C}yields a one-formω=dz{\displaystyle \omega =dz}onX{\displaystyle X}that is everywhere holomorphic, i.e., has no poles at all. Therefore,K{\displaystyle K}, the divisor ofω{\displaystyle \omega }is zero.
On this surface, this sequence is
and this characterises the caseg=1{\displaystyle g=1}. Indeed, forD=0{\displaystyle D=0},ℓ(K−D)=ℓ(0)=1{\displaystyle \ell (K-D)=\ell (0)=1}, as was mentioned above. ForD=n⋅P{\displaystyle D=n\cdot P}withn>0{\displaystyle n>0}, the degree ofK−D{\displaystyle K-D}is strictly negative, so that the correction term is 0. The sequence of dimensions can also be derived from the theory ofelliptic functions.
Forg=2{\displaystyle g=2}, the sequence mentioned above is
It is shown from this that the term ℓ(2⋅P){\displaystyle \ell (2\cdot P)} of degree 2 is either 1 or 2, depending on the point. It can be proven that in any genus 2 curve there are exactly six points whose sequences are 1, 1, 2, 2, ... and the rest of the points have the generic sequence 1, 1, 1, 2, ... In particular, a genus 2 curve is a hyperelliptic curve. For g>2{\displaystyle g>2} it is always true that at most points the sequence starts with g+1{\displaystyle g+1} ones and there are finitely many points with other sequences (see Weierstrass points).
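The stable part of these sequences follows directly from the theorem: once deg D = n ≥ 2g − 1, the correction term ℓ(K − D) vanishes and ℓ(nP) = n − g + 1. The following small sketch (our code; the helper name is ours) checks this against the low-genus sequences quoted above.

```python
def ell_stable(n, g):
    # Riemann-Roch for D = n*P once the correction term vanishes:
    # for n >= 2g - 1, l(nP) = deg(D) - g + 1 = n - g + 1.
    assert n >= 2 * g - 1, "formula only valid in the stable range"
    return n - g + 1

# Genus 0 (Riemann sphere): valid for all n >= 0, giving 1, 2, 3, 4, ...
assert [ell_stable(n, 0) for n in range(5)] == [1, 2, 3, 4, 5]

# Genus 1 (torus): valid for n >= 1, giving 1, 2, 3, ...
# Together with l(0) = 1 this yields the sequence 1, 1, 2, 3, ...
assert [ell_stable(n, 1) for n in range(1, 5)] == [1, 2, 3, 4]

# Genus 2: valid for n >= 3; the generic sequence 1, 1, 1, 2, ...
# continues 2, 3, 4, ... in the stable range.
assert [ell_stable(n, 2) for n in range(3, 6)] == [2, 3, 4]
```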
Using the close correspondence between divisors andholomorphic line bundleson a Riemann surface, the theorem can also be stated in a different, yet equivalent way: letLbe a holomorphic line bundle onX. LetH0(X,L){\displaystyle H^{0}(X,L)}denote the space of holomorphic sections ofL. This space will be finite-dimensional; its dimension is denotedh0(X,L){\displaystyle h^{0}(X,L)}. LetKdenote thecanonical bundleonX. Then, the Riemann–Roch theorem states that
The theorem of the previous section is the special case when L is a point bundle.
The theorem can be applied to show that there areglinearly independent holomorphic sections ofK, orone-formsonX, as follows. TakingLto be the trivial bundle,h0(X,L)=1{\displaystyle h^{0}(X,L)=1}since the only holomorphic functions onXare constants. The degree ofLis zero, andL−1{\displaystyle L^{-1}}is the trivial bundle. Thus,
Therefore,h0(X,K)=g{\displaystyle h^{0}(X,K)=g}, proving that there aregholomorphic one-forms.
Since the canonical bundleK{\displaystyle K}hash0(X,K)=g{\displaystyle h^{0}(X,K)=g}, applying Riemann–Roch toL=K{\displaystyle L=K}gives
which can be rewritten as
hence the degree of the canonical bundle isdeg(K)=2g−2{\displaystyle \deg(K)=2g-2}.
Every item in the above formulation of the Riemann–Roch theorem for divisors on Riemann surfaces has an analogue inalgebraic geometry. The analogue of a Riemann surface is anon-singularalgebraic curveCover a fieldk. The difference in terminology (curve vs. surface) is because the dimension of a Riemann surface as a realmanifoldis two, but one as a complex manifold. The compactness of a Riemann surface is paralleled by the condition that the algebraic curve becomplete, which is equivalent to beingprojective. Over a general fieldk, there is no good notion of singular (co)homology. The so-calledgeometric genusis defined as
i.e., as the dimension of the space of globally defined (algebraic) one-forms (seeKähler differential). Finally, meromorphic functions on a Riemann surface are locally represented as fractions of holomorphic functions. Hence they are replaced byrational functionswhich are locally fractions ofregular functions. Thus, writingℓ(D){\displaystyle \ell (D)}for the dimension (overk) of the space of rational functions on the curve whose poles at every point are not worse than the corresponding coefficient inD, the very same formula as above holds:
whereCis a projective non-singular algebraic curve over analgebraically closed fieldk. In fact, the same formula holds for projective curves over any field, except that the degree of a divisor needs to take into accountmultiplicitiescoming from the possible extensions of the base field and theresidue fieldsof the points supporting the divisor.[4]Finally, for a proper curve over anArtinian ring, the Euler characteristic of the line bundle associated to a divisor is given by the degree of the divisor (appropriately defined) plus the Euler characteristic of the structural sheafO{\displaystyle {\mathcal {O}}}.[5]
The smoothness assumption in the theorem can be relaxed, as well: for a (projective) curve over an algebraically closed field, all of whose local rings areGorenstein rings, the same statement as above holds, provided that the geometric genus as defined above is replaced by thearithmetic genusga, defined as
(For smooth curves, the geometric genus agrees with the arithmetic one.) The theorem has also been extended to general singular curves (and higher-dimensional varieties).[7]
One of the important consequences of Riemann–Roch is it gives a formula for computing theHilbert polynomialof line bundles on a curve. If a line bundleL{\displaystyle {\mathcal {L}}}is ample, then the Hilbert polynomial will give the first degreeL⊗n{\displaystyle {\mathcal {L}}^{\otimes n}}giving an embedding into projective space. For example, the canonical sheafωC{\displaystyle \omega _{C}}has degree2g−2{\displaystyle 2g-2}, which gives an ample line bundle for genusg≥2{\displaystyle g\geq 2}.[8]If we setωC(n)=ωC⊗n{\displaystyle \omega _{C}(n)=\omega _{C}^{\otimes n}}then the Riemann–Roch formula reads
This gives the degree-one Hilbert polynomial ofωC{\displaystyle \omega _{C}}.
Because the tri-canonical sheafωC⊗3{\displaystyle \omega _{C}^{\otimes 3}}is used to embed the curve, the Hilbert polynomial
HC(t)=HωC⊗3(t){\displaystyle H_{C}(t)=H_{\omega _{C}^{\otimes 3}}(t)}
is generally considered while constructing theHilbert scheme of curves(and themoduli space of algebraic curves). This polynomial is
HC(t)=(6t−1)(g−1)=6(g−1)t+(1−g){\displaystyle {\begin{aligned}H_{C}(t)&=(6t-1)(g-1)\\&=6(g-1)t+(1-g)\end{aligned}}}
and is called theHilbert polynomial of a genus g curve.
Analyzing this equation further, the Euler characteristic reads as
Sincedeg(ωC⊗n)=n(2g−2){\displaystyle \deg(\omega _{C}^{\otimes n})=n(2g-2)}, and sinceωC⊗(1−n){\displaystyle \omega _{C}^{\otimes (1-n)}}has negative degree for alln≥2{\displaystyle n\geq 2}andg≥2{\displaystyle g\geq 2}(hence no global sections), the Euler characteristic equalsh0(ωC⊗n)=n(2g−2)−g+1{\displaystyle h^{0}(\omega _{C}^{\otimes n})=n(2g-2)-g+1}. Forn≥3{\displaystyle n\geq 3}there is an embedding into some projective space from the global sections ofωC⊗n{\displaystyle \omega _{C}^{\otimes n}}. In particular,ωC⊗3{\displaystyle \omega _{C}^{\otimes 3}}gives an embedding intoPN≅P(H0(C,ωC⊗3)){\displaystyle \mathbb {P} ^{N}\cong \mathbb {P} (H^{0}(C,\omega _{C}^{\otimes 3}))}whereN=5g−5−1=5g−6{\displaystyle N=5g-5-1=5g-6}, sinceh0(ωC⊗3)=6g−6−g+1=5g−5{\displaystyle h^{0}(\omega _{C}^{\otimes 3})=6g-6-g+1=5g-5}. This is useful in the construction of themoduli space of algebraic curvesbecause it can be used as the projective space to construct theHilbert schemewith Hilbert polynomialHC(t){\displaystyle H_{C}(t)}.[9]
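As a sanity check on the arithmetic above, here is a short Python sketch (the function names are ours, purely illustrative) verifying thath0(ωC⊗n) =n(2g−2) −g+ 1 reproduces the Hilbert polynomialHC(t) = (6t−1)(g−1) and the ambient dimensionN= 5g−6:

```python
# Plain-Python check of the genus-g curve arithmetic: with h^1 = 0,
# Riemann-Roch gives h^0(omega^n) = n(2g-2) - g + 1 for n >= 2, so the
# tri-canonical Hilbert polynomial is H_C(t) = (6t-1)(g-1) and N = 5g-6.

def h0_pluricanonical(g: int, n: int) -> int:
    """h^0(C, omega_C^{tensor n}) for a smooth curve of genus g >= 2, n >= 2."""
    assert g >= 2 and n >= 2
    return n * (2 * g - 2) - g + 1

def hilbert_polynomial(g: int, t: int) -> int:
    """H_C(t) = chi(omega_C^{tensor 3t}) = (6t - 1)(g - 1)."""
    return (6 * t - 1) * (g - 1)

for g in range(2, 10):
    # Dimension of the ambient projective space of the tri-canonical embedding.
    N = h0_pluricanonical(g, 3) - 1
    assert N == 5 * g - 6
    # H_C(t) agrees with h^0(omega^{tensor 3t}) for t >= 1.
    for t in range(1, 5):
        assert hilbert_polynomial(g, t) == h0_pluricanonical(g, 3 * t)
```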
An irreducible plane algebraic curve of degreedhas (d− 1)(d− 2)/2 −gsingularities, when properly counted. It follows that, if a curve has (d− 1)(d− 2)/2 different singularities, it is arational curveand, thus, admits a rational parameterization.
TheRiemann–Hurwitz formulaconcerning (ramified) maps between Riemann surfaces or algebraic curves is a consequence of the Riemann–Roch theorem.
Clifford's theorem on special divisorsis also a consequence of the Riemann–Roch theorem. It states that for a special divisor (i.e., such thatℓ(K−D)>0{\displaystyle \ell (K-D)>0}) satisfyingℓ(D)>0{\displaystyle \ell (D)>0}, the following inequality holds:[10]
The statement for algebraic curves can be proved usingSerre duality. The integerℓ(D){\displaystyle \ell (D)}is the dimension of the space of global sections of theline bundleL(D){\displaystyle {\mathcal {L}}(D)}associated toD(cf.Cartier divisor). In terms ofsheaf cohomology, we therefore haveℓ(D)=dimH0(X,L(D)){\displaystyle \ell (D)=\mathrm {dim} H^{0}(X,{\mathcal {L}}(D))}, and likewiseℓ(KX−D)=dimH0(X,ωX⊗L(D)∨){\displaystyle \ell ({\mathcal {K}}_{X}-D)=\dim H^{0}(X,\omega _{X}\otimes {\mathcal {L}}(D)^{\vee })}. But Serre duality for non-singular projective varieties in the particular case of a curve states thatH0(X,ωX⊗L(D)∨){\displaystyle H^{0}(X,\omega _{X}\otimes {\mathcal {L}}(D)^{\vee })}is isomorphic to the dualH1(X,L(D))∨{\displaystyle H^{1}(X,{\mathcal {L}}(D))^{\vee }}. The left hand side thus equals theEuler characteristicof the divisorD. WhenD= 0, the Euler characteristic of the structure sheaf is1−g{\displaystyle 1-g}by definition. To prove the theorem for a general divisor, one can then proceed by adding points one by one to the divisor and checking that the Euler characteristic transforms according to the right hand side.
The theorem for compact Riemann surfaces can be deduced from the algebraic version usingChow's Theoremand theGAGAprinciple: in fact, every compact Riemann surface is defined by algebraic equations in some complex projective space. (Chow's Theorem says that any closed analytic subvariety of projective space is defined by algebraic equations, and the GAGA principle says that sheaf cohomology of an algebraic variety is the same as the sheaf cohomology of the analytic variety defined by the same equations).
One may avoid the use of Chow's theorem by arguing identically to the proof in the case of algebraic curves, but replacingL(D){\displaystyle {\mathcal {L}}(D)}with the sheafOD{\displaystyle {\mathcal {O}}_{D}}of meromorphic functionshsuch that all coefficients of the divisor(h)+D{\displaystyle (h)+D}are nonnegative. Here the fact that the Euler characteristic transforms as desired when one adds a point to the divisor can be read off from the long exact sequence induced by the short exact sequence
whereCP{\displaystyle \mathbb {C} _{P}}is theskyscraper sheafatP, and the mapOD+P→CP{\displaystyle {\mathcal {O}}_{D+P}\to \mathbb {C} _{P}}returns the−k−1{\displaystyle -k-1}th Laurent coefficient, wherek=D(P){\displaystyle k=D(P)}.[11]
A version of thearithmetic Riemann–Roch theoremstates that ifkis aglobal field, andfis a suitably admissible function of theadelesofk, then for everyidelea, one has aPoisson summation formula:
In the special case whenkis the function field of an algebraic curve over a finite field andfis any character that is trivial onk, this recovers the geometric Riemann–Roch theorem.[12]
Other versions of the arithmetic Riemann–Roch theorem make use ofArakelov theoryto resemble the traditional Riemann–Roch theorem more exactly.
TheRiemann–Roch theorem for curveswas proved for Riemann surfaces by Riemann and Roch in the 1850s and for algebraic curves byFriedrich Karl Schmidtin 1931 as he was working onperfect fieldsoffinite characteristic. As stated byPeter Roquette,[13]
The first main achievement of F. K. Schmidt is the discovery that the classical theorem of Riemann–Roch on compact Riemann surfaces can be transferred to function fields with finite base field. Actually, his proof of the Riemann–Roch theorem works for arbitrary perfect base fields, not necessarily finite.
It is foundational in the sense that the subsequent theory for curves tries to refine the information it yields (for example in theBrill–Noether theory).
There are versions in higher dimensions (for the appropriate notion ofdivisor, orline bundle). Their general formulation depends on splitting the theorem into two parts. One, which would now be calledSerre duality, interprets theℓ(K−D){\displaystyle \ell (K-D)}term as a dimension of a firstsheaf cohomologygroup; withℓ(D){\displaystyle \ell (D)}the dimension of a zeroth cohomology group, or space of sections, the left-hand side of the theorem becomes anEuler characteristic, and the right-hand side a computation of it as adegreecorrected according to the topology of the Riemann surface.
Inalgebraic geometryof dimension two such a formula was found by thegeometers of the Italian school; aRiemann–Roch theorem for surfaceswas proved (there are several versions, with the first possibly being due toMax Noether).
Ann-dimensional generalisation, theHirzebruch–Riemann–Roch theorem, was found and proved byFriedrich Hirzebruch, as an application ofcharacteristic classesinalgebraic topology; he was much influenced by the work ofKunihiko Kodaira. At about the same timeJean-Pierre Serrewas giving the general form of Serre duality, as we now know it.
Alexander Grothendieckproved a far-reaching generalization in 1957, now known as theGrothendieck–Riemann–Roch theorem. His work reinterprets Riemann–Roch not as a theorem about a variety, but about a morphism between two varieties. The details of the proofs were published byArmand BorelandJean-Pierre Serrein 1958.[14]Later, Grothendieck and his collaborators simplified and generalized the proof.[15]
Finally a general version was found inalgebraic topology, too. These developments were essentially all carried out between 1950 and 1960. After that theAtiyah–Singer index theoremopened another route to generalization. Consequently, the Euler characteristic of acoherent sheafis reasonably computable. For just one summand within the alternating sum, further arguments such asvanishing theoremsmust be used.
|
https://en.wikipedia.org/wiki/Riemann%E2%80%93Roch_theorem
|
Incomplex analysis,Runge's theorem(also known asRunge's approximation theorem) is named after the German mathematicianCarl Rungewho first proved it in the year 1885. It states the following:
Denoting byCthe set ofcomplex numbers, letKbe acompact subsetofCand letfbe afunctionwhich isholomorphicon an open set containingK. IfAis a set containingat least onecomplex number from everyboundedconnected componentofC\Kthen there exists asequence(rn)n∈N{\displaystyle (r_{n})_{n\in \mathbb {N} }}ofrational functionswhichconverges uniformlytofonKand such that all thepolesof the functions(rn)n∈N{\displaystyle (r_{n})_{n\in \mathbb {N} }}are inA.
Note that not every complex number inAneeds to be a pole of every rational function of the sequence(rn)n∈N{\displaystyle (r_{n})_{n\in \mathbb {N} }}; we merely know that for those members of the sequence thatdohave poles, the poles lie inA.
One aspect that makes this theorem so powerful is that one can choose the setAarbitrarily. In other words, one can chooseanycomplex numbers from the bounded connected components ofC\Kand the theorem guarantees the existence of a sequence of rational functions with poles only amongst those chosen numbers.
For the special case in whichC\Kis a connected set (in particular whenKis simply-connected), the setAin the theorem will clearly be empty. Since rational functions with no poles are simplypolynomials, we get the followingcorollary: IfKis a compact subset ofCsuch thatC\Kis a connected set, andfis a holomorphic function on an open set containingK, then there exists a sequence of polynomials(pn){\displaystyle (p_{n})}that approachesfuniformly onK(the assumptions can be relaxed, seeMergelyan's theorem).
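The polynomial corollary can be illustrated numerically. The sketch below is illustrative only: it assumesKis the closed unit disk (soC\Kis connected) andf(z) =ez, and checks that the Taylor partial sums, which are polynomials, approximatefuniformly onK:

```python
import math
import numpy as np

# Illustrative numeric sketch (not part of the theorem): K is the closed
# unit disk, so C \ K is connected and polynomials suffice. The Taylor
# partial sums of f(z) = e^z converge to f uniformly on K.
theta = np.linspace(0.0, 2.0 * np.pi, 200)
radii = np.linspace(0.0, 1.0, 40)
K = (radii[:, None] * np.exp(1j * theta[None, :])).ravel()  # samples of the disk

def taylor_partial_sum(z, n):
    """p_n(z) = sum_{k=0}^{n} z^k / k!, a polynomial approximant of e^z."""
    return sum(z**k / math.factorial(k) for k in range(n + 1))

errors = [np.max(np.abs(np.exp(K) - taylor_partial_sum(K, n)))
          for n in range(1, 16)]
assert errors[0] > errors[-1]   # the uniform error shrinks as the degree grows
assert errors[-1] < 1e-10       # a degree-15 polynomial is already very close
```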
Runge's theorem generalises as follows: one can takeAto be a subset of theRiemann sphereC∪{∞} and require thatAalso intersect the unbounded connected component ofC\K(which now contains ∞). That is, in the formulation given above, the rational functions may turn out to have a pole at infinity, while in the more general formulation the pole can be chosen instead anywhere in the unbounded connected component ofC\K.
An elementary proof, inspired bySarason (1998), proceeds as follows. There is a closed piecewise-linear contour Γ in the open set, containingKin its interior, such that all the chosen distinguished points are in its exterior. ByCauchy's integral formula
forwinK. Riemann approximating sums can be used to approximate the contour integral uniformly overK(there is a similar formula for the derivative). Each term in the sum is a scalar multiple of (z−w)−1for some pointzon the contour. This gives a uniform approximation by a rational function with poles on Γ.
To modify this to an approximation with poles at specified points in each component of the complement ofK, it is enough to check this for terms of the form (z−w)−1. Ifz0is the chosen point in the same component asz, take a path fromztoz0.
If two points are sufficiently close on the path, we may use the formula
valid on the circle-complement|z0−w0|<|z−w|{\displaystyle |z_{0}-w_{0}|<|z-w|}; note that the chosen path has a positive distance to K by compactness. That series can be truncated to give a rational function with poles only at the second point uniformly close to the original function onK. Proceeding by steps along the path fromztoz0the original function (z−w)−1can be successively modified to give a rational function with poles only atz0.
Ifz0is the point at infinity, then by the above procedure the rational function (z−w)−1can first be approximated by a rational functiongwith poles at a pointR> 0, whereRis so large thatKlies in |w| <R. TheTaylor seriesexpansion ofgabout 0 can then be truncated to give a polynomial approximation onK.
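The pole-pushing step can be made concrete numerically. The following toy sketch (our own choices:K= [−1, 1], old polez= 2i, new polez0= 3i) truncates the geometric series 1/(z−w) = Σk(z0−z)k/(z0−w)k+1, which converges whenever |z0−z| < |z0−w| for allwinK:

```python
import numpy as np

# Numeric sketch of the pole-pushing step: approximate 1/(z - w), w in K,
# by a rational function whose only pole is the new point z0, using
#   1/(z - w) = sum_k (z0 - z)^k / (z0 - w)^{k+1},
# valid whenever |z0 - z| < |z0 - w| for every w in K.
K = np.linspace(-1.0, 1.0, 201)              # K = [-1, 1] on the real axis
z, z0 = 2.0j, 3.0j                           # old pole z, nearby new pole z0
assert abs(z0 - z) < np.min(np.abs(z0 - K))  # the series converges on K

def pushed(w, n):
    """Truncated series: a rational function of w with its only pole at z0."""
    return sum((z0 - z)**k / (z0 - w)**(k + 1) for k in range(n))

err = np.max(np.abs(1.0 / (z - K) - pushed(K, 40)))
assert err < 1e-12                           # uniformly close on K
```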
|
https://en.wikipedia.org/wiki/Runge%27s_theorem
|
In themathematicalstudy ofpartial differential equations, theBateman transformis a method for solving theLaplace equationin four dimensions andwave equationin three by using aline integralof aholomorphic functionin threecomplex variables. It is named after the mathematicianHarry Bateman, who first published the result in (Bateman 1904).
The formula asserts that ifƒis a holomorphic function of three complex variables, then
is a solution of the Laplace equation, which follows by differentiation under the integral. Furthermore, Bateman asserted that the most general solution of the Laplace equation arises in this way.
|
https://en.wikipedia.org/wiki/Bateman_transform
|
Inmathematics(in particular,functional analysis),convolutionis amathematical operationon twofunctionsf{\displaystyle f}andg{\displaystyle g}that produces a third functionf∗g{\displaystyle f*g}, as theintegralof the product of the two functions after one is reflected about the y-axis and shifted. The termconvolutionrefers to both the resulting function and to the process of computing it. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the integral result (seecommutativity). Graphically, it expresses how the 'shape' of one function is modified by the other.
Some features of convolution are similar tocross-correlation: for real-valued functions, of a continuous or discrete variable, convolutionf∗g{\displaystyle f*g}differs from cross-correlationf⋆g{\displaystyle f\star g}only in that eitherf(x){\displaystyle f(x)}org(x){\displaystyle g(x)}is reflected about the y-axis in convolution; thus it is a cross-correlation ofg(−x){\displaystyle g(-x)}andf(x){\displaystyle f(x)}, orf(−x){\displaystyle f(-x)}andg(x){\displaystyle g(x)}.[A]For complex-valued functions, the cross-correlation operator is theadjointof the convolution operator.
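For finite real sequences the reflection relation can be checked directly with NumPy, whose `correlate` and `convolve` implement the discrete analogues:

```python
import numpy as np

# Discrete check of the reflection relation: for real sequences,
# cross-correlation equals convolution with one argument reversed.
f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])

assert np.allclose(np.correlate(f, g, mode="full"),
                   np.convolve(f, g[::-1], mode="full"))
```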
Convolution has applications that includeprobability,statistics,acoustics,spectroscopy,signal processingandimage processing,geophysics,engineering,physics,computer visionanddifferential equations.[1]
The convolution can be defined for functions onEuclidean spaceand othergroups(asalgebraic structures). For example,periodic functions, such as thediscrete-time Fourier transform, can be defined on acircleand convolved byperiodic convolution. (See row 18 atDTFT § Properties.) Adiscrete convolutioncan be defined for functions on the set ofintegers.
Generalizations of convolution have applications in the field ofnumerical analysisandnumerical linear algebra, and in the design and implementation offinite impulse responsefilters in signal processing.
Computing theinverseof the convolution operation is known asdeconvolution.
The convolution off{\displaystyle f}andg{\displaystyle g}is writtenf∗g{\displaystyle f*g}, denoting the operator with the symbol∗{\displaystyle *}.[B]It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. As such, it is a particular kind ofintegral transform:
An equivalent definition is (seecommutativity):
While the symbolt{\displaystyle t}is used above, it need not represent the time domain. At eacht{\displaystyle t}, the convolution formula can be described as the area under the functionf(τ){\displaystyle f(\tau )}weighted by the functiong(−τ){\displaystyle g(-\tau )}shifted by the amountt{\displaystyle t}. Ast{\displaystyle t}changes, the weighting functiong(t−τ){\displaystyle g(t-\tau )}emphasizes different parts of the input functionf(τ){\displaystyle f(\tau )}: ift{\displaystyle t}is positive, theng(t−τ){\displaystyle g(t-\tau )}equalsg(−τ){\displaystyle g(-\tau )}shifted along theτ{\displaystyle \tau }-axis toward the right (toward+∞{\displaystyle +\infty }) by the amountt{\displaystyle t}, while ift{\displaystyle t}is negative, it equalsg(−τ){\displaystyle g(-\tau )}shifted toward the left (toward−∞{\displaystyle -\infty }) by the amount|t|{\displaystyle |t|}.
For functionsf{\displaystyle f},g{\displaystyle g}supportedon only[0,∞){\displaystyle [0,\infty )}(i.e., zero for negative arguments), the integration limits can be truncated, resulting in:
For the multi-dimensional formulation of convolution, seedomain of definition(below).
A common engineering notational convention is:[2]
which has to be interpreted carefully to avoid confusion. For instance,f(t)∗g(t−t0){\displaystyle f(t)*g(t-t_{0})}is equivalent to(f∗g)(t−t0){\displaystyle (f*g)(t-t_{0})}, butf(t−t0)∗g(t−t0){\displaystyle f(t-t_{0})*g(t-t_{0})}is in fact equivalent to(f∗g)(t−2t0){\displaystyle (f*g)(t-2t_{0})}.[3]
Given two functionsf(t){\displaystyle f(t)}andg(t){\displaystyle g(t)}withbilateral Laplace transforms(two-sided Laplace transform)
and
respectively, the convolution operation(f∗g)(t){\displaystyle (f*g)(t)}can be defined as theinverse Laplace transformof the product ofF(s){\displaystyle F(s)}andG(s){\displaystyle G(s)}.[4][5]More precisely,
Lett=u+v{\displaystyle t=u+v}, then
Note thatF(s)⋅G(s){\displaystyle F(s)\cdot G(s)}is the bilateral Laplace transform of(f∗g)(t){\displaystyle (f*g)(t)}. A similar derivation can be done using theunilateral Laplace transform(one-sided Laplace transform).
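A numeric sketch of this identity, with the particular pairf(t) =e−t,g(t) =e−2t(t≥ 0) chosen by us: the sampled convolution should matche−t−e−2t, the inverse transform ofF(s)G(s) = 1/((s+1)(s+2)):

```python
import numpy as np

# Numeric sketch: for f(t) = e^{-t} and g(t) = e^{-2t} on t >= 0, the
# convolution is e^{-t} - e^{-2t}, whose Laplace transform 1/((s+1)(s+2))
# is the product of the individual transforms 1/(s+1) and 1/(s+2).
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
f, g = np.exp(-t), np.exp(-2.0 * t)

conv = np.convolve(f, g)[: len(t)] * dt      # Riemann-sum approximation of f*g
exact = np.exp(-t) - np.exp(-2.0 * t)        # inverse transform of F(s)G(s)
assert np.max(np.abs(conv - exact)) < 5e-3   # agreement up to discretization error
```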
The convolution operation also describes the output (in terms of the input) of an important class of operations known aslinear time-invariant(LTI). SeeLTI system theoryfor a derivation of convolution as the result of LTI constraints. In terms of theFourier transformsof the input and output of an LTI operation, no new frequency components are created. The existing ones are only modified (amplitude and/or phase). In other words, the output transform is the pointwise product of the input transform with a third transform (known as atransfer function). SeeConvolution theoremfor a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms.
The resultingwaveform(not shown here) is the convolution of functionsf{\displaystyle f}andg{\displaystyle g}.
Iff(t){\displaystyle f(t)}is aunit impulse, the result of this process is simplyg(t){\displaystyle g(t)}. Formally:
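A discrete analogue of the impulse identity can be checked with NumPy: convolving any sequence with a unit impulse returns that sequence unchanged.

```python
import numpy as np

# Discrete analogue of f * delta = f: the unit impulse is the identity
# element for discrete convolution.
delta = np.array([1.0, 0.0, 0.0])            # discrete unit impulse at n = 0
g = np.array([3.0, 1.0, 4.0, 1.0, 5.0])

out = np.convolve(delta, g)                  # length len(delta) + len(g) - 1
assert np.allclose(out[: len(g)], g)         # g reproduced...
assert np.allclose(out[len(g):], 0.0)        # ...followed by zeros
```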
One of the earliest uses of the convolution integral appeared inD'Alembert's derivation ofTaylor's theoreminRecherches sur différents points importants du système du monde,published in 1754.[6]
Also, an expression of the type:
is used bySylvestre François Lacroixon page 505 of his book entitledTreatise on differences and series, which is the last of 3 volumes of the encyclopedic series:Traité du calcul différentiel et du calcul intégral, Chez Courcier, Paris, 1797–1800.[7]Soon thereafter, convolution operations appear in the works ofPierre Simon Laplace,Jean-Baptiste Joseph Fourier,Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1950s or 1960s. Prior to that it was sometimes known asFaltung(which meansfoldinginGerman),composition product,superposition integral, andCarson's integral.[8]Yet it appears as early as 1903, though the definition is rather unfamiliar in older uses.[9][10]
The operation:
is a particular case of composition products considered by the Italian mathematicianVito Volterrain 1913.[11]
When a functiongT{\displaystyle g_{T}}is periodic, with periodT{\displaystyle T}, then for functions,f{\displaystyle f}, such thatf∗gT{\displaystyle f*g_{T}}exists, the convolution is also periodic and identical to:
wheret0{\displaystyle t_{0}}is an arbitrary choice. The summation is called aperiodic summationof the functionf{\displaystyle f}.
WhengT{\displaystyle g_{T}}is a periodic summation of another function,g{\displaystyle g}, thenf∗gT{\displaystyle f*g_{T}}is known as acircularorcyclicconvolution off{\displaystyle f}andg{\displaystyle g}.
And if the periodic summation above is replaced byfT{\displaystyle f_{T}}, the operation is called aperiodicconvolution offT{\displaystyle f_{T}}andgT{\displaystyle g_{T}}.
For complex-valued functionsf{\displaystyle f}andg{\displaystyle g}defined on the setZ{\displaystyle \mathbb {Z} }of integers, thediscrete convolutionoff{\displaystyle f}andg{\displaystyle g}is given by:[12]
or equivalently (seecommutativity) by:
The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of twopolynomials, then the coefficients of theordinary product of the two polynomialsare the convolution of the original two sequences. This is known as theCauchy productof the coefficients of the sequences.
Thus whenghas finite support in the set{−M,−M+1,…,M−1,M}{\displaystyle \{-M,-M+1,\dots ,M-1,M\}}(representing, for instance, afinite impulse response), a finite summation may be used:[13]
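The finite summation can be implemented directly and compared with NumPy's built-in convolution (a sketch using zero-based array indices rather than supports centred at 0):

```python
import numpy as np

# Direct evaluation of (f*g)(n) = sum_m f(n-m) g(m) for finitely supported
# sequences, compared against numpy's built-in discrete convolution.
def conv_direct(f, g):
    n_out = len(f) + len(g) - 1
    out = np.zeros(n_out)
    for n in range(n_out):
        for m in range(len(g)):
            if 0 <= n - m < len(f):          # f is zero outside its support
                out[n] += f[n - m] * g[m]
    return out

f = np.array([1.0, 2.0, 0.5])
g = np.array([1.0, -1.0, 3.0, 0.25])
assert np.allclose(conv_direct(f, g), np.convolve(f, g))
```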
When a functiongN{\displaystyle g_{_{N}}}is periodic, with periodN,{\displaystyle N,}then for functions,f,{\displaystyle f,}such thatf∗gN{\displaystyle f*g_{_{N}}}exists, the convolution is also periodic and identical to:
The summation onk{\displaystyle k}is called aperiodic summationof the functionf.{\displaystyle f.}
IfgN{\displaystyle g_{_{N}}}is a periodic summation of another function,g,{\displaystyle g,}thenf∗gN{\displaystyle f*g_{_{N}}}is known as acircular convolutionoff{\displaystyle f}andg.{\displaystyle g.}
When the non-zero durations of bothf{\displaystyle f}andg{\displaystyle g}are limited to the interval[0,N−1],{\displaystyle [0,N-1],}f∗gN{\displaystyle f*g_{_{N}}}reduces to these common forms:
The notationf∗Ng{\displaystyle f*_{N}g}forcyclic convolutiondenotes convolution over thecyclic groupofintegers moduloN.
Circular convolution arises most often in the context of fast convolution with afast Fourier transform(FFT) algorithm.
In many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation. For example, convolution of digit sequences is the kernel operation inmultiplicationof multi-digit numbers, which can therefore be efficiently implemented with transform techniques (Knuth 1997, §4.3.3.C;von zur Gathen & Gerhard 2003, §8.2).
Eq.1requiresNarithmetic operations per output value andN2operations forNoutputs. That can be significantly reduced with any of several fast algorithms.Digital signal processingand other applications typically use fast convolution algorithms to reduce the cost of the convolution to O(NlogN) complexity.
The most common fast convolution algorithms usefast Fourier transform(FFT) algorithms via thecircular convolution theorem. Specifically, thecircular convolutionof two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of the output. Other fast convolution algorithms, such as theSchönhage–Strassen algorithmor the Mersenne transform,[14]use fast Fourier transforms in otherrings. The Winograd method is used as an alternative to the FFT.[15]It significantly speeds up 1D,[16]2D,[17]and 3D[18]convolution.
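A minimal sketch of FFT-based circular convolution (illustrative, not an optimized implementation): transform both sequences, multiply pointwise, inverse-transform, and compare against the direct cyclic sum.

```python
import numpy as np

# Fast circular convolution via the circular convolution theorem:
# FFT both sequences, multiply pointwise, then inverse FFT.
def circular_conv_fft(f, g):
    assert len(f) == len(g)
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

def circular_conv_direct(f, g):
    N = len(f)                               # indices taken modulo N
    return np.array([sum(f[m] * g[(n - m) % N] for m in range(N))
                     for n in range(N)])

rng = np.random.default_rng(0)
f, g = rng.standard_normal(64), rng.standard_normal(64)
assert np.allclose(circular_conv_fft(f, g), circular_conv_direct(f, g))
```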
If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available.[19]Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as theoverlap–save methodandoverlap–add method.[20]A hybrid convolution method that combines block andFIRalgorithms allows for a zero input-output latency that is useful for real-time convolution computations.[21]
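A minimal overlap–add sketch (illustrative; a production version would convolve each block with an FFT rather than `np.convolve`): the long input is split into blocks, each block is convolved with the short filter, and the overlapping tails are summed.

```python
import numpy as np

# Minimal overlap-add: convolve a long signal x with a short FIR filter h
# block by block, summing the overlapping tails. Exact by linearity.
def overlap_add(x, h, block=128):
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = np.convolve(x[start:start + block], h)  # full convolution of one block
        y[start:start + len(seg)] += seg              # tails overlap and add
    return y

rng = np.random.default_rng(1)
x, h = rng.standard_normal(1000), rng.standard_normal(16)
assert np.allclose(overlap_add(x, h), np.convolve(x, h))
```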
The convolution of two complex-valued functions onRdis itself a complex-valued function onRd, defined by:
and is well-defined only iffandgdecay sufficiently rapidly at infinity in order for the integral to exist. Conditions for the existence of the convolution may be tricky, since a blow-up ingat infinity can be easily offset by sufficiently rapid decay inf. The question of existence thus may involve different conditions onfandg:
Iffandgarecompactly supportedcontinuous functions, then their convolution exists, and is also compactly supported and continuous (Hörmander 1983, Chapter 1). More generally, if either function (sayf) is compactly supported and the other islocally integrable, then the convolutionf∗gis well-defined and continuous.
Convolution offandgis also well defined when both functions are locally square integrable onRand supported on an interval of the form[a, +∞)(or both supported on(−∞,a]).
The convolution offandgexists iffandgare bothLebesgue integrable functionsinL1(Rd), and in this casef∗gis also integrable (Stein & Weiss 1971, Theorem 1.3). This is a consequence ofTonelli's theorem. This is also true for functions inL1, under the discrete convolution, or more generally for theconvolution on any group.
Likewise, iff∈L1(Rd) andg∈Lp(Rd) where1 ≤p≤ ∞, thenf*g∈Lp(Rd), and
In the particular casep= 1, this shows thatL1is aBanach algebraunder the convolution (and equality of the two sides holds iffandgare non-negative almost everywhere).
More generally,Young's inequalityimplies that the convolution is a continuous bilinear map between suitableLpspaces. Specifically, if1 ≤p,q,r≤ ∞satisfy:
then
so that the convolution is a continuous bilinear mapping fromLp×LqtoLr.
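A discrete spot-check of the inequality (onZ, where Young's inequality holds with constant 1; the exponentsp=q= 4/3,r= 2 satisfy 1/p+ 1/q= 1 + 1/r):

```python
import numpy as np

# Numeric spot-check of Young's inequality on Z:
# ||f*g||_r <= ||f||_p ||g||_q with 1/p + 1/q = 1 + 1/r (p = q = 4/3, r = 2).
def lp_norm(x, p):
    return np.sum(np.abs(x)**p) ** (1 / p)

rng = np.random.default_rng(2)
for _ in range(100):
    f, g = rng.standard_normal(20), rng.standard_normal(30)
    lhs = lp_norm(np.convolve(f, g), 2)
    rhs = lp_norm(f, 4 / 3) * lp_norm(g, 4 / 3)
    assert lhs <= rhs * (1 + 1e-9)           # slack only for float rounding
```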
The Young inequality for convolution is also true in other contexts (circle group, convolution onZ). The preceding inequality is not sharp on the real line: when1 <p,q,r< ∞, there exists a constantBp,q< 1such that:
The optimal value ofBp,qwas discovered in 1975[22]and independently in 1976,[23]seeBrascamp–Lieb inequality.
A stronger estimate is true provided1 <p,q,r< ∞:
where‖g‖q,w{\displaystyle \|g\|_{q,w}}is theweakLqnorm. Convolution also defines a bilinear continuous mapLp,w×Lq,w→Lr,w{\displaystyle L^{p,w}\times L^{q,w}\to L^{r,w}}for1<p,q,r<∞{\displaystyle 1<p,q,r<\infty }, owing to the weak Young inequality:[24]
In addition to compactly supported functions and integrable functions, functions that have sufficiently rapid decay at infinity can also be convolved. An important feature of the convolution is that iffandgboth decay rapidly, thenf∗galso decays rapidly. In particular, iffandgarerapidly decreasing functions, then so is the convolutionf∗g. Combined with the fact that convolution commutes with differentiation (see#Properties), it follows that the class ofSchwartz functionsis closed under convolution (Stein & Weiss 1971, Theorem 3.3).
Iffis a smooth function that iscompactly supportedandgis a distribution, thenf∗gis a smooth function defined by
More generally, it is possible to extend the definition of the convolution in a unique way withφ{\displaystyle \varphi }the same asfabove, so that the associative law
remains valid in the case wherefis a distribution, andga compactly supported distribution (Hörmander 1983, §4.2).
The convolution of any twoBorel measuresμandνofbounded variationis the measureμ∗ν{\displaystyle \mu *\nu }defined by (Rudin 1962)
In particular,
whereA⊂Rd{\displaystyle A\subset \mathbf {R} ^{d}}is a measurable set and1A{\displaystyle 1_{A}}is theindicator functionofA{\displaystyle A}.
This agrees with the convolution defined above when μ and ν are regarded as distributions, as well as the convolution of L1functions when μ and ν are absolutely continuous with respect to the Lebesgue measure.
The convolution of measures also satisfies the following version of Young's inequality
where the norm is thetotal variationof a measure. Because the space of measures of bounded variation is aBanach space, convolution of measures can be treated with standard methods offunctional analysisthat may not apply for the convolution of distributions.
The convolution defines a product on thelinear spaceof integrable functions. This product satisfies the following algebraic properties, which formally mean that the space of integrable functions with the product given by convolution is a commutativeassociative algebrawithoutidentity(Strichartz 1994, §3.3). Other linear spaces of functions, such as the space of continuous functions of compact support, areclosedunder the convolution, and so also form commutative associative algebras.
Proof (usingconvolution theorem):
q(t)⟺FQ(f)=R(f)S(f){\displaystyle q(t)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ Q(f)=R(f)S(f)}
q(−t)⟺FQ(−f)=R(−f)S(−f){\displaystyle q(-t)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ Q(-f)=R(-f)S(-f)}
q(−t)=F−1{R(−f)S(−f)}=F−1{R(−f)}∗F−1{S(−f)}=r(−t)∗s(−t){\displaystyle {\begin{aligned}q(-t)&={\mathcal {F}}^{-1}{\bigg \{}R(-f)S(-f){\bigg \}}\\&={\mathcal {F}}^{-1}{\bigg \{}R(-f){\bigg \}}*{\mathcal {F}}^{-1}{\bigg \{}S(-f){\bigg \}}\\&=r(-t)*s(-t)\end{aligned}}}
Iffandgare integrable functions, then the integral of their convolution on the whole space is simply obtained as the product of their integrals:[25]
This follows fromFubini's theorem. The same result holds iffandgare only assumed to be nonnegative measurable functions, byTonelli's theorem.
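The discrete counterpart of this identity is exact for finite sequences (Fubini reduces to reordering a finite double sum) and is easy to check:

```python
import numpy as np

# The sum of a (full) discrete convolution equals the product of the sums.
f = np.array([1.0, -2.0, 3.5])
g = np.array([0.5, 4.0, -1.0, 2.0])
assert np.isclose(np.convolve(f, g).sum(), f.sum() * g.sum())
```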
In the one-variable case,
whereddx{\displaystyle {\frac {d}{dx}}}is thederivative. More generally, in the case of functions of several variables, an analogous formula holds with thepartial derivative:
A particular consequence of this is that the convolution can be viewed as a "smoothing" operation: the convolution offandgis differentiable as many times asfandgare in total.
These identities hold for example under the condition thatfandgare absolutely integrable and at least one of them has an absolutely integrable (L1) weak derivative, as a consequence ofYoung's convolution inequality. For instance, whenfis continuously differentiable with compact support, andgis an arbitrary locally integrable function,
These identities also hold much more broadly in the sense of tempered distributions if one offorgis arapidly decreasing tempered distribution, a compactly supported tempered distribution or a Schwartz function and the other is a tempered distribution. On the other hand, two positive integrable and infinitely differentiable functions may have a nowhere continuous convolution.
In the discrete case, thedifference operatorDf(n) =f(n+ 1) −f(n) satisfies an analogous relationship:
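The relationD(f∗g) = (Df) ∗gcan be checked for finitely supported sequences (the zero-padding convention below is ours, used to capture the boundary differences of the support):

```python
import numpy as np

# Check D(f*g) = (Df)*g for finitely supported sequences, where
# Df(n) = f(n+1) - f(n). Padding a zero on each side of the support
# captures the boundary differences, keeping both sides aligned on Z.
def D(x):
    return np.diff(np.concatenate(([0.0], x, [0.0])))

f = np.array([1.0, 3.0, -2.0])
g = np.array([2.0, 0.5, 1.0, -1.0])
assert np.allclose(D(np.convolve(f, g)), np.convolve(D(f), g))
```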
Theconvolution theoremstates that[26]
whereF{f}{\displaystyle {\mathcal {F}}\{f\}}denotes theFourier transformoff{\displaystyle f}.
Versions of this theorem also hold for theLaplace transform,two-sided Laplace transform,Z-transformandMellin transform.
IfW{\displaystyle {\mathcal {W}}}is theFourier transform matrix, then
where∙{\displaystyle \bullet }is theface-splitting product,[27][28][29][30][31]⊗{\displaystyle \otimes }denotes theKronecker product, and∘{\displaystyle \circ }denotes theHadamard product(this result follows from properties of thecount sketch[32]).
This can be generalized for appropriate matricesA,B{\displaystyle \mathbf {A} ,\mathbf {B} }:
from the properties of theface-splitting product.
The convolution commutes with translations, meaning that
where τxf is the translation of the function f by x defined by
If f is a Schwartz function, then τxf is the convolution with a translated Dirac delta function, τxf = f ∗ τxδ. So translation invariance of the convolution of Schwartz functions is a consequence of the associativity of convolution.
Furthermore, under certain conditions, convolution is the most general translation invariant operation. Informally speaking, the following holds
Thus some translation invariant operations can be represented as convolution. Convolutions play an important role in the study of time-invariant systems, and especially LTI system theory. The representing function gS is the impulse response of the transformation S.
A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every continuous translation invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on Lp for 1 ≤ p < ∞ is the convolution with a tempered distribution whose Fourier transform is bounded. To wit, they are all given by bounded Fourier multipliers.
If G is a suitable group endowed with a measure λ, and if f and g are real or complex valued integrable functions on G, then we can define their convolution by
It is not commutative in general. In typical cases of interest, G is a locally compact Hausdorff topological group and λ is a (left-) Haar measure. In that case, unless G is unimodular, the convolution defined in this way is not the same as {\textstyle \int f\left(xy^{-1}\right)g(y)\,d\lambda (y)}. The preference of one over the other is made so that convolution with a fixed function g commutes with left translation in the group:
Furthermore, the convention is also required for consistency with the definition of the convolution of measures given below. However, with a right instead of a left Haar measure, the latter integral is preferred over the former.
On locally compact abelian groups, a version of the convolution theorem holds: the Fourier transform of a convolution is the pointwise product of the Fourier transforms. The circle group T with the Lebesgue measure is an immediate example. For a fixed g in L1(T), we have the following familiar operator acting on the Hilbert space L2(T):
The operator T is compact. A direct calculation shows that its adjoint T* is convolution with
By the commutativity property cited above, T is normal: T*T = TT*. Also, T commutes with the translation operators. Consider the family S of operators consisting of all such convolutions and the translation operators. Then S is a commuting family of normal operators. According to spectral theory, there exists an orthonormal basis {hk} that simultaneously diagonalizes S. This characterizes convolutions on the circle. Specifically, we have
which are precisely the characters of T. Each convolution is a compact multiplication operator in this basis. This can be viewed as a version of the convolution theorem discussed above.
A discrete example is a finite cyclic group of order n. Convolution operators are here represented by circulant matrices, and can be diagonalized by the discrete Fourier transform.
A similar result holds for compact groups (not necessarily abelian): the matrix coefficients of finite-dimensional unitary representations form an orthonormal basis in L2 by the Peter–Weyl theorem, and an analog of the convolution theorem continues to hold, along with many other aspects of harmonic analysis that depend on the Fourier transform.
Let G be a (multiplicatively written) topological group.
If μ and ν are Radon measures on G, then their convolution μ∗ν is defined as the pushforward measure of the group action and can be written as[33]
for each measurable subset E of G. The convolution is also a Radon measure, whose total variation satisfies
In the case when G is locally compact with (left-) Haar measure λ, and μ and ν are absolutely continuous with respect to λ, so that each has a density function, then the convolution μ∗ν is also absolutely continuous, and its density function is just the convolution of the two separate density functions. In fact, if either measure is absolutely continuous with respect to the Haar measure, then so is their convolution.[34]
If μ and ν are probability measures on the topological group (R, +), then the convolution μ∗ν is the probability distribution of the sum X + Y of two independent random variables X and Y whose respective distributions are μ and ν.
In convex analysis, the infimal convolution of proper (not identically {\displaystyle +\infty }) convex functions {\displaystyle f_{1},\dots ,f_{m}} on {\displaystyle \mathbb {R} ^{n}} is defined by:[35] {\displaystyle (f_{1}*\cdots *f_{m})(x)=\inf\{f_{1}(x_{1})+\cdots +f_{m}(x_{m})\mid x_{1}+\cdots +x_{m}=x\}.} It can be shown that the infimal convolution of convex functions is convex. Furthermore, it satisfies an identity analogous to that of the Fourier transform of a traditional convolution, with the role of the Fourier transform played instead by the Legendre transform: {\displaystyle \varphi ^{*}(x)=\sup _{y}(x\cdot y-\varphi (y)).} We have: {\displaystyle (f_{1}*\cdots *f_{m})^{*}(x)=f_{1}^{*}(x)+\cdots +f_{m}^{*}(x).}
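A small numerical sketch of the definition (grid-based, so the infimum is only approximated over sampled points; the specific functions are illustrative, not from the text): the infimal convolution of f1(x) = |x| and f2(x) = x²/2 is the Huber function, equal to x²/2 for |x| ≤ 1 and to |x| − 1/2 otherwise.

```python
import numpy as np

def inf_conv(f1, f2, grid, x):
    # (f1 inf-conv f2)(x) = inf_y { f1(y) + f2(x - y) },
    # with y restricted to the sample grid (an approximation).
    return np.min(f1(grid) + f2(x - grid))

grid = np.linspace(-4.0, 4.0, 2001)       # step 0.004, includes 0 and 1
f1 = np.abs
f2 = lambda t: t * t / 2.0

inside = inf_conv(f1, f2, grid, 0.5)       # Huber value: 0.5**2 / 2 = 0.125
outside = inf_conv(f1, f2, grid, 2.0)      # Huber value: 2 - 0.5 = 1.5
```

Because both functions are convex, the grid minimum sits at the true minimizer (y = 0 and y = 1 here), so the approximation is essentially exact at these points.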
Let (X, Δ, ∇, ε, η) be a bialgebra with comultiplication Δ, multiplication ∇, unit η, and counit ε. The convolution is a product defined on the endomorphism algebra End(X) as follows. Let φ, ψ ∈ End(X), that is, φ, ψ: X → X are functions that respect all algebraic structure of X; then the convolution φ∗ψ is defined as the composition
The convolution appears notably in the definition of Hopf algebras (Kassel 1995, §III.3). A bialgebra is a Hopf algebra if and only if it has an antipode: an endomorphism S such that
Convolution and related operations are found in many applications in science, engineering and mathematics.
|
https://en.wikipedia.org/wiki/Convolution_kernel
|
Circular convolution, also known as cyclic convolution, is a special case of periodic convolution, which is the convolution of two periodic functions that have the same period. Periodic convolution arises, for example, in the context of the discrete-time Fourier transform (DTFT). In particular, the DTFT of the product of two discrete sequences is the periodic convolution of the DTFTs of the individual sequences. And each DTFT is a periodic summation of a continuous Fourier transform function (see Discrete-time Fourier transform § Relation to Fourier Transform). Although DTFTs are usually continuous functions of frequency, the concepts of periodic and circular convolution are also directly applicable to discrete sequences of data. In that context, circular convolution plays an important role in maximizing the efficiency of a certain kind of common filtering operation.
The periodic convolution of two T-periodic functions, {\displaystyle h_{_{T}}(t)} and {\displaystyle x_{_{T}}(t)}, can be defined as:
where {\displaystyle t_{o}} is an arbitrary parameter. An alternative definition, in terms of the notation of normal linear or aperiodic convolution, follows from expressing {\displaystyle h_{_{T}}(t)} and {\displaystyle x_{_{T}}(t)} as periodic summations of aperiodic components h and x, i.e.:
Then:
∫toto+ThT(τ)⋅xT(t−τ)dτ=∫−∞∞h(τ)⋅xT(t−τ)dτ≜(h∗xT)(t)=(x∗hT)(t).{\displaystyle \int _{t_{o}}^{t_{o}+T}h_{_{T}}(\tau )\cdot x_{_{T}}(t-\tau )\,d\tau =\int _{-\infty }^{\infty }h(\tau )\cdot x_{_{T}}(t-\tau )\,d\tau \ \triangleq \ (h*x_{_{T}})(t)=(x*h_{_{T}})(t).}
Both forms can be called periodic convolution.[a] The term circular convolution[2][3] arises from the important special case of constraining the non-zero portions of both h and x to the interval {\displaystyle [0,T].} Then the periodic summation becomes a periodic extension,[b] which can also be expressed as a circular function:
And the limits of integration reduce to the length of function h:
Similarly, for discrete sequences, and a parameter N, we can write a circular convolution of aperiodic functions h and x as:
This function is N-periodic. It has at most N unique values. For the special case that the non-zero extent of both x and h is ≤ N, it is reducible to matrix multiplication where the kernel of the integral transform is a circulant matrix.
A case of great practical interest is illustrated in the figure. The duration of the x sequence is N (or less), and the duration of the h sequence is significantly less. Then many of the values of the circular convolution are identical to values of x∗h, which is actually the desired result when the h sequence is a finite impulse response (FIR) filter. Furthermore, the circular convolution is very efficient to compute, using a fast Fourier transform (FFT) algorithm and the circular convolution theorem.
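A minimal NumPy sketch of this computation (the sequence values are illustrative; N is chosen at least as large as the combined support length, so the circular result also coincides with linear convolution):

```python
import numpy as np

def circ_conv_direct(h, x, N):
    # Definition: y(n) = sum_m h(m) x((n - m) mod N), with h and x
    # zero-padded to one full period of length N.
    h = np.concatenate([h, np.zeros(N - len(h))])
    x = np.concatenate([x, np.zeros(N - len(x))])
    return np.array([sum(h[m] * x[(n - m) % N] for m in range(N))
                     for n in range(N)])

def circ_conv_fft(h, x, N):
    # Circular convolution theorem: the DFT turns circular
    # convolution into pointwise multiplication.
    return np.fft.ifft(np.fft.fft(h, N) * np.fft.fft(x, N)).real

h = np.array([1.0, 2.0])
x = np.array([3.0, -1.0, 0.5])
N = 4                                # >= len(h) + len(x) - 1, so no wrap
```

With a smaller N the two circular results would still agree with each other, but would differ from linear convolution because of wrap-around.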
There are also methods for dealing with an x sequence that is longer than a practical value for N. The sequence is divided into segments (blocks) and processed piecewise. Then the filtered segments are carefully pieced back together. Edge effects are eliminated by overlapping either the input blocks or the output blocks. To help explain and compare the methods, we discuss them both in the context of an h sequence of length 201 and an FFT size of N = 1024.
This method uses a block size equal to the FFT size (1024). We describe it first in terms of normal or linear convolution. When a normal convolution is performed on each block, there are start-up and decay transients at the block edges, due to the filter latency (200 samples). Only 824 of the convolution outputs are unaffected by edge effects. The others are discarded, or simply not computed. That would cause gaps in the output if the input blocks are contiguous. The gaps are avoided by overlapping the input blocks by 200 samples. In a sense, 200 elements from each input block are "saved" and carried over to the next block. This method is referred to as overlap-save,[4] although the method we describe next requires a similar "save" with the output samples.
When an FFT is used to compute the 824 unaffected DFT samples, we don't have the option of not computing the affected samples, but the leading and trailing edge-effects are overlapped and added because of circular convolution. Consequently, the 1024-point inverse FFT (IFFT) output contains only 200 samples of edge effects (which are discarded) and the 824 unaffected samples (which are kept). To illustrate this, the fourth frame of the figure at right depicts a block that has been periodically (or "circularly") extended, and the fifth frame depicts the individual components of a linear convolution performed on the entire sequence. The edge effects are where the contributions from the extended blocks overlap the contributions from the original block. The last frame is the composite output, and the section colored green represents the unaffected portion.
This method is known as overlap-add.[4] In our example, it uses contiguous input blocks of size 824 and pads each one with 200 zero-valued samples. Then it overlaps and adds the 1024-element output blocks. Nothing is discarded, but 200 values of each output block must be "saved" for the addition with the next block. Both methods advance only 824 samples per 1024-point IFFT, but overlap-save avoids the initial zero-padding and final addition.
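The overlap-add procedure can be sketched as follows in NumPy. The FFT size 16 and filter length 5 are small illustrative values, not the 1024/201 example above, so each block here contributes 16 − 5 + 1 = 12 fresh input samples.

```python
import numpy as np

def overlap_add(x, h, nfft=16):
    L = nfft - len(h) + 1                 # input samples consumed per block
    H = np.fft.fft(h, nfft)               # filter spectrum, computed once
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), L):
        block = x[start:start + L]        # contiguous block, zero-padded by fft
        seg = np.fft.ifft(np.fft.fft(block, nfft) * H).real
        stop = min(start + nfft, len(y))
        y[start:stop] += seg[:stop - start]   # overlapped tails add up
    return y

x = np.arange(40.0)
h = np.array([1.0, -2.0, 0.5, 3.0, 1.0])
```

Because each zero-padded block convolution has length L + len(h) − 1 = nfft, no circular wrap occurs, and the summed shifted segments reproduce the full linear convolution.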
|
https://en.wikipedia.org/wiki/Circular_convolution
|
In linear algebra, a circulant matrix is a square matrix in which all rows are composed of the same elements and each row is rotated one element to the right relative to the preceding row. It is a particular kind of Toeplitz matrix.
In numerical analysis, circulant matrices are important because they are diagonalized by a discrete Fourier transform, and hence linear equations that contain them may be quickly solved using a fast Fourier transform.[1] They can be interpreted analytically as the integral kernel of a convolution operator on the cyclic group {\displaystyle C_{n}} and hence frequently appear in formal descriptions of spatially invariant linear operations. This property is also critical in modern software-defined radios, which utilize Orthogonal Frequency Division Multiplexing to spread the symbols (bits) using a cyclic prefix. This enables the channel to be represented by a circulant matrix, simplifying channel equalization in the frequency domain.
In cryptography, a circulant matrix is used in the MixColumns step of the Advanced Encryption Standard.
An {\displaystyle n\times n} circulant matrix {\displaystyle C} takes the form {\displaystyle C={\begin{bmatrix}c_{0}&c_{n-1}&\cdots &c_{2}&c_{1}\\c_{1}&c_{0}&c_{n-1}&&c_{2}\\\vdots &c_{1}&c_{0}&\ddots &\vdots \\c_{n-2}&&\ddots &\ddots &c_{n-1}\\c_{n-1}&c_{n-2}&\cdots &c_{1}&c_{0}\\\end{bmatrix}}} or the transpose of this form (by choice of notation). If each {\displaystyle c_{i}} is a {\displaystyle p\times p} square matrix, then the {\displaystyle np\times np} matrix {\displaystyle C} is called a block-circulant matrix.
A circulant matrix is fully specified by one vector, c, which appears as the first column (or row) of C. The remaining columns (and rows, resp.) of C are each cyclic permutations of the vector c with offset equal to the column (or row, resp.) index, if lines are indexed from 0 to n − 1. (Cyclic permutation of rows has the same effect as cyclic permutation of columns.) The last row of C is the vector c shifted by one in reverse.
Different sources define the circulant matrix in different ways, for example as above, or with the vector c corresponding to the first row rather than the first column of the matrix; and possibly with a different direction of shift (which is sometimes called an anti-circulant matrix).
The polynomial {\displaystyle f(x)=c_{0}+c_{1}x+\dots +c_{n-1}x^{n-1}} is called the associated polynomial of the matrix C.
The normalized eigenvectors of a circulant matrix are the Fourier modes, namely, {\displaystyle v_{j}={\frac {1}{\sqrt {n}}}\left(1,\omega ^{j},\omega ^{2j},\ldots ,\omega ^{(n-1)j}\right)^{T},\quad j=0,1,\ldots ,n-1,} where {\displaystyle \omega =\exp \left({\tfrac {2\pi i}{n}}\right)} is a primitive n-th root of unity and i is the imaginary unit.
(This can be understood by realizing that multiplication with a circulant matrix implements a convolution. In Fourier space, convolutions become multiplication. Hence the product of a circulant matrix with a Fourier mode yields a multiple of that Fourier mode, i.e. it is an eigenvector.)
The corresponding eigenvalues are given by {\displaystyle \lambda _{j}=c_{0}+c_{1}\omega ^{-j}+c_{2}\omega ^{-2j}+\dots +c_{n-1}\omega ^{-(n-1)j},\quad j=0,1,\dots ,n-1.}
As a consequence of the explicit formula for the eigenvalues above,
the determinant of a circulant matrix can be computed as: {\displaystyle \det C=\prod _{j=0}^{n-1}(c_{0}+c_{n-1}\omega ^{j}+c_{n-2}\omega ^{2j}+\dots +c_{1}\omega ^{(n-1)j}).} Since taking the transpose does not change the eigenvalues of a matrix, an equivalent formulation is {\displaystyle \det C=\prod _{j=0}^{n-1}(c_{0}+c_{1}\omega ^{j}+c_{2}\omega ^{2j}+\dots +c_{n-1}\omega ^{(n-1)j})=\prod _{j=0}^{n-1}f(\omega ^{j}).}
The rank of a circulant matrix C is equal to n − d, where d is the degree of the polynomial {\displaystyle \gcd(f(x),x^{n}-1)}.[2]
There are important connections between circulant matrices and the DFT matrix {\displaystyle F_{n}=(f_{jk}){\text{ with }}f_{jk}=e^{-2\pi i/n\cdot jk},\,{\text{for }}0\leq j,k\leq n-1.} In fact, it can be shown that {\displaystyle C=F_{n}^{-1}\operatorname {diag} (F_{n}c)F_{n},} where c is the first column of C. The eigenvalues of C are given by the product {\displaystyle F_{n}c}. This product can be readily calculated by a fast Fourier transform.[3]
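These relations are easy to verify numerically. In this NumPy sketch the first column c is arbitrary illustrative data, and np.fft.fft uses the same sign convention as the DFT matrix above, so F applied to a vector is exactly the DFT.

```python
import numpy as np

c = np.array([2.0, 3.0, 5.0, 7.0])      # first column of C (illustrative)
n = len(c)
# Build the circulant matrix: column k is c cyclically shifted down by k.
C = np.column_stack([np.roll(c, k) for k in range(n)])

F = np.fft.fft(np.eye(n))               # DFT matrix F_n (symmetric)
C_from_dft = np.linalg.inv(F) @ np.diag(F @ c) @ F   # C = F^{-1} diag(Fc) F

eigs = np.fft.fft(c)                    # eigenvalues of C = DFT of c
v1 = np.exp(2j * np.pi * np.arange(n) / n)   # (unnormalized) Fourier mode, j = 1
```

Multiplying C into the Fourier mode v1 returns eigs[1] times v1, and the determinant of C equals the product of the DFT coefficients, matching the formulas above.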
Circulant matrices can be interpretedgeometrically, which explains the connection with the discrete Fourier transform.
Consider vectors in {\displaystyle \mathbb {R} ^{n}} as functions on the integers with period n (i.e., as periodic bi-infinite sequences: {\displaystyle \dots ,a_{0},a_{1},\dots ,a_{n-1},a_{0},a_{1},\dots }) or, equivalently, as functions on the cyclic group of order n (denoted {\displaystyle C_{n}} or {\displaystyle \mathbb {Z} /n\mathbb {Z} }), geometrically, on (the vertices of) the regular n-gon: this is a discrete analog to periodic functions on the real line or circle.
Then, from the perspective of operator theory, a circulant matrix is the kernel of a discrete integral transform, namely the convolution operator for the function {\displaystyle (c_{0},c_{1},\dots ,c_{n-1})}; this is a discrete circular convolution. The formula for the convolution of the functions {\displaystyle (b_{i}):=(c_{i})*(a_{i})} is
(recall that the sequences are periodic)
which is the product of the vector {\displaystyle (a_{i})} by the circulant matrix for {\displaystyle (c_{i})}.
The discrete Fourier transform then converts convolution into multiplication, which in the matrix setting corresponds to diagonalization.
The {\displaystyle C^{*}}-algebra of all circulant matrices with complex entries is isomorphic to the group {\displaystyle C^{*}}-algebra of {\displaystyle \mathbb {Z} /n\mathbb {Z} .}
For a symmetric circulant matrix C one has the extra condition that {\displaystyle c_{n-i}=c_{i}}.
Thus it is determined by {\displaystyle \lfloor n/2\rfloor +1} elements. {\displaystyle C={\begin{bmatrix}c_{0}&c_{1}&\cdots &c_{2}&c_{1}\\c_{1}&c_{0}&c_{1}&&c_{2}\\\vdots &c_{1}&c_{0}&\ddots &\vdots \\c_{2}&&\ddots &\ddots &c_{1}\\c_{1}&c_{2}&\cdots &c_{1}&c_{0}\\\end{bmatrix}}.}
The eigenvalues of anyrealsymmetric matrix are real.
The corresponding eigenvaluesλ→=n⋅Fn†c{\displaystyle {\vec {\lambda }}={\sqrt {n}}\cdot F_{n}^{\dagger }c}become:λk=c0+cn/2e−πi⋅k+2∑j=1n2−1cjcos(−2πn⋅kj)=c0+cn/2ωkn/2+2c1ℜωk+2c2ℜωk2+⋯+2cn/2−1ℜωkn/2−1{\displaystyle {\begin{array}{lcl}\lambda _{k}&=&c_{0}+c_{n/2}e^{-\pi i\cdot k}+2\sum _{j=1}^{{\frac {n}{2}}-1}c_{j}\cos {(-{\frac {2\pi }{n}}\cdot kj)}\\&=&c_{0}+c_{n/2}\omega _{k}^{n/2}+2c_{1}\Re \omega _{k}+2c_{2}\Re \omega _{k}^{2}+\dots +2c_{n/2-1}\Re \omega _{k}^{n/2-1}\end{array}}}forn{\displaystyle n}even, andλk=c0+2∑j=1n−12cjcos(−2πn⋅kj)=c0+2c1ℜωk+2c2ℜωk2+⋯+2c(n−1)/2ℜωk(n−1)/2{\displaystyle {\begin{array}{lcl}\lambda _{k}&=&c_{0}+2\sum _{j=1}^{\frac {n-1}{2}}c_{j}\cos {(-{\frac {2\pi }{n}}\cdot kj)}\\&=&c_{0}+2c_{1}\Re \omega _{k}+2c_{2}\Re \omega _{k}^{2}+\dots +2c_{(n-1)/2}\Re \omega _{k}^{(n-1)/2}\end{array}}}forn{\displaystyle n}odd, whereℜz{\displaystyle \Re z}denotes thereal partofz{\displaystyle z}.
This can be further simplified by using the facts that {\displaystyle \Re \omega _{k}^{j}=\Re e^{-{\frac {2\pi i}{n}}\cdot kj}=\cos(-{\frac {2\pi }{n}}\cdot kj)} and {\displaystyle \omega _{k}^{n/2}=e^{-{\frac {2\pi i}{n}}\cdot k{\frac {n}{2}}}=e^{-\pi i\cdot k}}, which equals 1 or −1 depending on whether k is even or odd.
Symmetric circulant matrices belong to the class ofbisymmetric matrices.
The complex version of the circulant matrix, ubiquitous in communications theory, is usually Hermitian. In this case {\displaystyle c_{n-i}=c_{i}^{*},\;i\leq n/2} and its determinant and all eigenvalues are real.
If n is even, the first two rows necessarily take the form {\displaystyle {\begin{bmatrix}r_{0}&z_{1}&z_{2}&r_{3}&z_{2}^{*}&z_{1}^{*}\\z_{1}^{*}&r_{0}&z_{1}&z_{2}&r_{3}&z_{2}^{*}\\\dots \\\end{bmatrix}}.} in which the first element {\displaystyle r_{3}} in the top second half-row is real.
If n is odd, we get {\displaystyle {\begin{bmatrix}r_{0}&z_{1}&z_{2}&z_{2}^{*}&z_{1}^{*}\\z_{1}^{*}&r_{0}&z_{1}&z_{2}&z_{2}^{*}\\\dots \\\end{bmatrix}}.}
Tee[5] has discussed constraints on the eigenvalues for the Hermitian condition.
Given a matrix equation
where C is a circulant matrix of size n, we can write the equation as the circular convolution {\displaystyle \mathbf {c} \star \mathbf {x} =\mathbf {b} ,} where c is the first column of C, and the vectors c, x and b are cyclically extended in each direction. Using the circular convolution theorem, we can use the discrete Fourier transform to transform the cyclic convolution into component-wise multiplication {\displaystyle {\mathcal {F}}_{n}(\mathbf {c} \star \mathbf {x} )={\mathcal {F}}_{n}(\mathbf {c} ){\mathcal {F}}_{n}(\mathbf {x} )={\mathcal {F}}_{n}(\mathbf {b} )} so that {\displaystyle \mathbf {x} ={\mathcal {F}}_{n}^{-1}\left[\left({\frac {({\mathcal {F}}_{n}(\mathbf {b} ))_{\nu }}{({\mathcal {F}}_{n}(\mathbf {c} ))_{\nu }}}\right)_{\!\nu \in \mathbb {Z} }\,\right]^{\rm {T}}.}
This algorithm is much faster than standard Gaussian elimination, especially if a fast Fourier transform is used.
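A sketch of the solver in NumPy (it assumes C is invertible, i.e. no DFT coefficient of c vanishes, and real data; the vectors are illustrative):

```python
import numpy as np

def solve_circulant(c, b):
    # x = F^{-1}[ F(b) / F(c) ]: the component-wise division above.
    return np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real  # real data assumed

c = np.array([4.0, 1.0, 0.0, 1.0])   # first column of C; its DFT has no zeros
b = np.array([1.0, 2.0, 3.0, 4.0])
x = solve_circulant(c, b)

# Full matrix, for checking: column k is c cyclically shifted down by k.
C = np.column_stack([np.roll(c, k) for k in range(len(c))])
```

SciPy ships the same idea as scipy.linalg.solve_circulant; the O(n log n) FFT route beats the O(n³) of general Gaussian elimination for large n.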
In graph theory, a graph or digraph whose adjacency matrix is circulant is called a circulant graph (or digraph). Equivalently, a graph is circulant if its automorphism group contains a full-length cycle. The Möbius ladders are examples of circulant graphs, as are the Paley graphs for fields of prime order.
|
https://en.wikipedia.org/wiki/Circulant_matrix
|
In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives.[1] In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common in mathematical models and scientific laws; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.
The study of differential equations consists mainly of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are solvable by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly.
Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers, and many numerical methods have been developed to determine solutions with a given degree of accuracy. The theory of dynamical systems analyzes the qualitative aspects of solutions, such as their average behavior over a long time interval.
Differential equations came into existence with the invention of calculus by Isaac Newton and Gottfried Leibniz. In Chapter 2 of his 1671 work Methodus fluxionum et Serierum Infinitarum,[2] Newton listed three kinds of differential equations:
In all these cases, y is an unknown function of x (or of x1 and x2), and f is a given function.
He solves these examples and others using infinite series and discusses the non-uniqueness of solutions.
Jacob Bernoulli proposed the Bernoulli differential equation in 1695.[3] This is an ordinary differential equation of the form
for which the following year Leibniz obtained solutions by simplifying it.[4]
Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange.[5][6][7][8] In 1746, d'Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation.[9]
TheEuler–Lagrange equationwas developed in the 1750s by Euler and Lagrange in connection with their studies of thetautochroneproblem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it tomechanics, which led to the formulation ofLagrangian mechanics.
In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat),[10] in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now a common part of the mathematical physics curriculum.
In classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time.
In some cases, this differential equation (called anequation of motion) may be solved explicitly.
An example of modeling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity.
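A sketch of this model in Python. The constants g, k, m, the explicit Euler scheme, and the step size are all illustrative assumptions, not from the text: with drag proportional to velocity, dv/dt = g − (k/m)v, whose exact solution v(t) = (gm/k)(1 − e^(−kt/m)) approaches the terminal velocity gm/k.

```python
import math

g, k, m = 9.81, 0.5, 2.0        # gravity, drag coefficient, mass (assumed units)
dt, steps = 0.001, 40_000       # integrate to t = 40 s (ten time constants m/k)

v = 0.0                          # ball released from rest
for _ in range(steps):
    v += dt * (g - (k / m) * v)  # explicit Euler step for dv/dt = g - (k/m) v

v_exact = (g * m / k) * (1.0 - math.exp(-k * steps * dt / m))
v_terminal = g * m / k           # 39.24 in these units
```

Verifying validity here means checking the numerical solution against the known closed form; after ten time constants both agree with the terminal velocity to well under 0.1%.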
Differential equations can be classified several different ways. Besides describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or heterogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts.
An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.
Linear differential equations are the differential equations that are linear in the unknown function and its derivatives. Their theory is well developed, and in many cases one may express their solutions in terms of integrals.
Most ODEs that are encountered in physics are linear. Therefore, most special functions may be defined as solutions of linear differential equations (see Holonomic function).
As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer.
A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. Stochastic partial differential equations generalize partial differential equations for modeling randomness.
A non-linear differential equation is a differential equation that is not a linear equation in the unknown function and its derivatives (the linearity or non-linearity in the arguments of the function is not considered here). There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behaviour over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs, are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.[11]
Linear differential equations frequently appear asapproximationsto nonlinear equations. These approximations are only valid under restricted conditions. For example, theharmonic oscillatorequation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations.
The order of the differential equation is the highest order of derivative of the unknown function that appears in the differential equation.
For example, an equation containing only first-order derivatives is a first-order differential equation, an equation containing a second-order derivative is a second-order differential equation, and so on.[12][13]
When it is written as a polynomial equation in the unknown function and its derivatives, the degree of the differential equation is, depending on the context, the polynomial degree in the highest derivative of the unknown function,[14] or its total degree in the unknown function and its derivatives. In particular, a linear differential equation has degree one for both meanings, but the non-linear differential equation {\displaystyle y'+y^{2}=0} is of degree one for the first meaning but not for the second one.
Differential equations that describe natural phenomena almost always have only first and second order derivatives in them, but there are some exceptions, such as the thin-film equation, which is a fourth-order partial differential equation.
In the first group of examples, u is an unknown function of x, and c and ω are constants that are supposed to be known. Two broad classifications of both ordinary and partial differential equations consist of distinguishing between linear and nonlinear differential equations, and between homogeneous differential equations and heterogeneous ones.
In the next group of examples, the unknown function u depends on two variables x and t or x and y.
Solving differential equations is not like solving algebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all is also a notable subject of interest.
For first order initial value problems, the Peano existence theorem gives one set of circumstances in which a solution exists. Given any point {\displaystyle (a,b)} in the xy-plane, define some rectangular region Z, such that {\displaystyle Z=[l,m]\times [n,p]} and {\displaystyle (a,b)} is in the interior of Z. If we are given a differential equation {\textstyle {\frac {dy}{dx}}=g(x,y)} and the condition that y = b when x = a, then there is locally a solution to this problem if g(x, y) and {\textstyle {\frac {\partial g}{\partial x}}} are both continuous on Z. This solution exists on some interval with its center at a. The solution may not be unique. (See Ordinary differential equation for other results.)
However, this only helps us with first order initial value problems. Suppose we had a linear initial value problem of the nth order:
such that
For any nonzero fₙ(x), if f₀, f₁, …, fₙ and g are continuous on some interval containing x₀, then a solution y exists and is unique.[15]
The theory of differential equations is closely related to the theory ofdifference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation.
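The simplest such approximation is the forward Euler method, which replaces the differential equation y′ = y by the difference equation yₙ₊₁ = yₙ + h·yₙ on a grid of step h. A minimal sketch (the equation and step count are chosen here for illustration):

```python
# Forward Euler: replace y'(t) = y(t) by the difference equation
# y[n+1] = y[n] + h*y[n], whose solution (1 + h)**n approximates e**t.
def euler_exponential(t_end=1.0, steps=100000):
    h = t_end / steps
    y = 1.0  # initial condition y(0) = 1
    for _ in range(steps):
        y += h * y
    return y

approx = euler_exponential()
exact = 2.718281828459045  # e = exp(1), the true value y(1)
assert abs(approx - exact) < 1e-4
```

The global error of this scheme shrinks proportionally to h, so doubling the number of steps roughly halves the discrepancy from e.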
The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modeling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not have closed form solutions. Instead, solutions can be approximated using numerical methods.
Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, the mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation.
The number of differential equations that have received a name in various scientific areas is a witness to the importance of the topic. See List of named differential equations.
SomeCASsoftware can solve differential equations. These are the commands used in the leading programs:
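As one illustration of CAS support (using SymPy, a Python computer algebra system, as a stand-in for the programs referred to above), the nonlinear equation y′ + y² = 0 mentioned earlier can be solved symbolically and the answer verified by substitution:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The nonlinear first-order equation y' + y**2 = 0 discussed earlier.
ode = sp.Eq(y(x).diff(x) + y(x)**2, 0)

sol = sp.dsolve(ode, y(x))        # general solution, e.g. y(x) = 1/(C1 + x)
ok, _ = sp.checkodesol(ode, sol)  # substitute the solution back into the ODE
assert ok
```

`checkodesol` confirms that the returned expression really satisfies the equation, which is a useful sanity check regardless of which CAS is used.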
|
https://en.wikipedia.org/wiki/Differential_equations
|
This is alist oftransformsinmathematics.
These transforms have a continuous frequency domain:
|
https://en.wikipedia.org/wiki/List_of_transforms
|
In mathematics, an operator or transform is a function from one space of functions to another. Operators occur commonly in engineering, physics and mathematics. Many are integral operators and differential operators.
In the followingLis an operator
which takes a functiony∈F{\displaystyle y\in {\mathcal {F}}}to another functionL[y]∈G{\displaystyle L[y]\in {\mathcal {G}}}. Here,F{\displaystyle {\mathcal {F}}}andG{\displaystyle {\mathcal {G}}}are some unspecifiedfunction spaces, such asHardy space,Lpspace,Sobolev space, or, more vaguely, the space ofholomorphic functions.
|
https://en.wikipedia.org/wiki/List_of_operators
|
In mathematics, a nonlocal operator is a mapping which maps functions on a topological space to functions, in such a way that the value of the output function at a given point cannot be determined solely from the values of the input function in any neighbourhood of any point. An example of a nonlocal operator is the Fourier transform.
LetX{\displaystyle X}be atopological space,Y{\displaystyle Y}aset,F(X){\displaystyle F(X)}afunction spacecontaining functions withdomainX{\displaystyle X}, andG(Y){\displaystyle G(Y)}a function space containing functions with domainY{\displaystyle Y}. Two functionsu{\displaystyle u}andv{\displaystyle v}inF(X){\displaystyle F(X)}are called equivalent atx∈X{\displaystyle x\in X}if there exists aneighbourhoodN{\displaystyle N}ofx{\displaystyle x}such thatu(x′)=v(x′){\displaystyle u(x')=v(x')}for allx′∈N{\displaystyle x'\in N}. An operatorA:F(X)→G(Y){\displaystyle A:F(X)\to G(Y)}is said to be local if for everyy∈Y{\displaystyle y\in Y}there exists anx∈X{\displaystyle x\in X}such thatAu(y)=Av(y){\displaystyle Au(y)=Av(y)}for all functionsu{\displaystyle u}andv{\displaystyle v}inF(X){\displaystyle F(X)}which are equivalent atx{\displaystyle x}. A nonlocal operator is an operator which is not local.
For a local operator it is possible (in principle) to compute the valueAu(y){\displaystyle Au(y)}using only knowledge of the values ofu{\displaystyle u}in an arbitrarily small neighbourhood of a pointx{\displaystyle x}. For a nonlocal operator this is not possible.
Differential operatorsare examples of local operators. A large class of (linear) nonlocal operators is given by theintegral transforms, such as the Fourier transform and theLaplace transform. For an integral transform of the form
whereK{\displaystyle K}is some kernel function, it is necessary to know the values ofu{\displaystyle u}almost everywhere on thesupportofK(⋅,y){\displaystyle K(\cdot ,y)}in order to compute the value ofAu{\displaystyle Au}aty{\displaystyle y}.
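This nonlocality can be seen numerically. A sketch, assuming the sample input u(t) = e^(−t): the Laplace transform (Au)(s) = ∫₀^∞ e^(−st) u(t) dt at a single point s draws on the values of u over the whole half-line, and for this u it equals 1/(s + 1):

```python
import math

def laplace_transform(u, s, t_max=40.0, n=40000):
    """Approximate (Au)(s) = ∫_0^∞ e^{-s t} u(t) dt by the trapezoid rule.

    Evaluating Au at the single point s requires u on all of [0, t_max]:
    the operator is nonlocal.
    """
    h = t_max / n
    total = 0.5 * (u(0.0) + math.exp(-s * t_max) * u(t_max))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * u(t)
    return total * h

u = lambda t: math.exp(-t)          # sample input function (an assumption here)
val = laplace_transform(u, s=2.0)   # exact value is 1/(s + 1) = 1/3
assert abs(val - 1.0 / 3.0) < 1e-6
```

By contrast, a local operator such as differentiation could be approximated at s using only values of u in an arbitrarily small neighbourhood.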
An example of asingular integral operatoris thefractional Laplacian
The prefactorcd,s:=4sΓ(d/2+s)πd/2|Γ(−s)|{\displaystyle c_{d,s}:={\frac {4^{s}\Gamma (d/2+s)}{\pi ^{d/2}|\Gamma (-s)|}}}involves theGamma functionand serves as a normalizing factor. The fractional Laplacian plays a role in, for example, the study of nonlocalminimal surfaces.[1]
Some examples of applications of nonlocal operators are:
|
https://en.wikipedia.org/wiki/Nonlocal_operator
|
Infunctional analysis, areproducing kernel Hilbert space(RKHS) is aHilbert spaceof functions in which point evaluation is a continuouslinear functional. Specifically, a Hilbert spaceH{\displaystyle H}of functions from a setX{\displaystyle X}(toR{\displaystyle \mathbb {R} }orC{\displaystyle \mathbb {C} }) is an RKHS if the point-evaluation functionalLx:H→C{\displaystyle L_{x}:H\to \mathbb {C} },Lx(f)=f(x){\displaystyle L_{x}(f)=f(x)}, is continuous for everyx∈X{\displaystyle x\in X}. Equivalently,H{\displaystyle H}is an RKHS if there exists a functionKx∈H{\displaystyle K_{x}\in H}such that, for allf∈H{\displaystyle f\in H},⟨f,Kx⟩=f(x).{\displaystyle \langle f,K_{x}\rangle =f(x).}The functionKx{\displaystyle K_{x}}is then called thereproducing kernel, and it reproduces the value off{\displaystyle f}atx{\displaystyle x}via the inner product.
An immediate consequence of this property is that convergence in norm implies uniform convergence on any subset of X on which ‖Kₓ‖ is bounded; the converse does not necessarily hold. Often the set X carries a topology and ‖Kₓ‖ depends continuously on x ∈ X, in which case convergence in norm implies uniform convergence on compact subsets of X.
It is not entirely straightforward to construct natural examples of a Hilbert space which are not an RKHS in a non-trivial fashion.[1]Some examples, however, have been found.[2][3]
While, formally, L² spaces are defined as Hilbert spaces of equivalence classes of functions, this definition can trivially be extended to a Hilbert space of functions by choosing a (total) function as a representative for each equivalence class. However, no choice of representatives can make this space an RKHS (K₀ would need to be the non-existent Dirac delta function). There are, nevertheless, RKHSs in which the norm is an L²-norm, such as the space of band-limited functions (see the example below).
An RKHS is associated with a kernel that reproduces every function in the space in the sense that for everyx{\displaystyle x}in the set on which the functions are defined, "evaluation atx{\displaystyle x}" can be performed by taking an inner product with a function determined by the kernel. Such areproducing kernelexists if and only if every evaluation functional is continuous.
The reproducing kernel was first introduced in the 1907 work ofStanisław Zarembaconcerningboundary value problemsforharmonicandbiharmonic functions.James Mercersimultaneously examinedfunctionswhich satisfy the reproducing property in the theory ofintegral equations. The idea of the reproducing kernel remained untouched for nearly twenty years until it appeared in the dissertations ofGábor Szegő,Stefan Bergman, andSalomon Bochner. The subject was eventually systematically developed in the early 1950s byNachman Aronszajnand Stefan Bergman.[4]
These spaces have wide applications, includingcomplex analysis,harmonic analysis, andquantum mechanics. Reproducing kernel Hilbert spaces are particularly important in the field ofstatistical learning theorybecause of the celebratedrepresenter theoremwhich states that every function in an RKHS that minimises an empirical risk functional can be written as alinear combinationof the kernel function evaluated at the training points. This is a practically useful result as it effectively simplifies theempirical risk minimizationproblem from an infinite dimensional to a finite dimensional optimization problem.
For ease of understanding, we provide the framework for real-valued Hilbert spaces. The theory can be easily extended to spaces of complex-valued functions and hence include the many important examples of reproducing kernel Hilbert spaces that are spaces ofanalytic functions.[5]
LetX{\displaystyle X}be an arbitrarysetandH{\displaystyle H}aHilbert spaceofreal-valued functionsonX{\displaystyle X}, equipped with pointwise addition and pointwise scalar multiplication. Theevaluationfunctional over the Hilbert space of functionsH{\displaystyle H}is a linear functional that evaluates each function at a pointx{\displaystyle x},
We say thatHis areproducing kernel Hilbert spaceif, for allx{\displaystyle x}inX{\displaystyle X},Lx{\displaystyle L_{x}}iscontinuousat everyf{\displaystyle f}inH{\displaystyle H}or, equivalently, ifLx{\displaystyle L_{x}}is abounded operatoronH{\displaystyle H}, i.e. there exists someMx>0{\displaystyle M_{x}>0}such that
AlthoughMx<∞{\displaystyle M_{x}<\infty }is assumed for allx∈X{\displaystyle x\in X}, it might still be the case thatsupxMx=∞{\textstyle \sup _{x}M_{x}=\infty }.
While property (1) is the weakest condition that ensures both the existence of an inner product and the evaluation of every function inH{\displaystyle H}at every point in the domain, it does not lend itself to easy application in practice. A more intuitive definition of the RKHS can be obtained by observing that this property guarantees that the evaluation functional can be represented by taking the inner product off{\displaystyle f}with a functionKx{\displaystyle K_{x}}inH{\displaystyle H}. This function is the so-calledreproducing kernel[citation needed]for the Hilbert spaceH{\displaystyle H}from which the RKHS takes its name. More formally, theRiesz representation theoremimplies that for allx{\displaystyle x}inX{\displaystyle X}there exists a unique elementKx{\displaystyle K_{x}}ofH{\displaystyle H}with the reproducing property,
SinceKx{\displaystyle K_{x}}is itself a function defined onX{\displaystyle X}with values in the fieldR{\displaystyle \mathbb {R} }(orC{\displaystyle \mathbb {C} }in the case of complex Hilbert spaces) and asKx{\displaystyle K_{x}}is inH{\displaystyle H}we have that
whereKy∈H{\displaystyle K_{y}\in H}is the element inH{\displaystyle H}associated toLy{\displaystyle L_{y}}.
This allows us to define the reproducing kernel ofH{\displaystyle H}as a functionK:X×X→R{\displaystyle K:X\times X\to \mathbb {R} }(orC{\displaystyle \mathbb {C} }in the complex case) by
From this definition it is easy to see thatK:X×X→R{\displaystyle K:X\times X\to \mathbb {R} }(orC{\displaystyle \mathbb {C} }in the complex case) is both symmetric (resp. conjugate symmetric) andpositive definite, i.e.
for everyn∈N,x1,…,xn∈X,andc1,…,cn∈R.{\displaystyle n\in \mathbb {N} ,x_{1},\dots ,x_{n}\in X,{\text{ and }}c_{1},\dots ,c_{n}\in \mathbb {R} .}[6]The Moore–Aronszajn theorem (see below) is a sort of converse to this: if a functionK{\displaystyle K}satisfies these conditions then there is a Hilbert space of functions onX{\displaystyle X}for which it is a reproducing kernel.
The simplest example of a reproducing kernel Hilbert space is the spaceL2(X,μ){\displaystyle L^{2}(X,\mu )}whereX{\displaystyle X}is a set andμ{\displaystyle \mu }is thecounting measureonX{\displaystyle X}. Forx∈X{\displaystyle x\in X}, the reproducing kernelKx{\displaystyle K_{x}}is theindicator functionof the one point set{x}⊂X{\displaystyle \{x\}\subset X}.
Nontrivial reproducing kernel Hilbert spaces often involveanalytic functions, as we now illustrate by example. Consider the Hilbert space ofbandlimitedcontinuous functionsH{\displaystyle H}. Fix somecutoff frequency0<a<∞{\displaystyle 0<a<\infty }and define the Hilbert space
whereL2(R){\displaystyle L^{2}(\mathbb {R} )}is the set of square integrable functions, andF(ω)=∫−∞∞f(t)e−iωtdt{\textstyle F(\omega )=\int _{-\infty }^{\infty }f(t)e^{-i\omega t}\,dt}is theFourier transformoff{\displaystyle f}. As the inner product, we use
Since this is a closed subspace ofL2(R){\displaystyle L^{2}(\mathbb {R} )}, it is a Hilbert space. Moreover, the elements ofH{\displaystyle H}are smooth functions onR{\displaystyle \mathbb {R} }that tend to zero at infinity, essentially by theRiemann-Lebesgue lemma. In fact, the elements ofH{\displaystyle H}are the restrictions toR{\displaystyle \mathbb {R} }of entireholomorphic functions, by thePaley–Wiener theorem.
From theFourier inversion theorem, we have
It then follows by theCauchy–Schwarz inequalityandPlancherel's theoremthat, for allx{\displaystyle x},
This inequality shows that the evaluation functional is bounded, proving thatH{\displaystyle H}is indeed a RKHS.
The kernel functionKx{\displaystyle K_{x}}in this case is given by
The Fourier transform ofKx(y){\displaystyle K_{x}(y)}defined above is given by
which is a consequence of thetime-shifting property of the Fourier transform. Consequently, usingPlancherel's theorem, we have
Thus we obtain the reproducing property of the kernel.
Kₓ in this case is the "bandlimited version" of the Dirac delta function, and Kₓ(y) converges to δ(y − x) in the weak sense as the cutoff frequency a tends to infinity.
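The reproducing property of this sinc kernel can be checked numerically. The inner-product formula for this space did not survive extraction above; the sketch below assumes the plain L² inner product on the real line, under which the kernel Kₓ(y) = sin(a(y − x))/(π(y − x)) reproduces any function bandlimited to [−a, a]:

```python
import math

A = math.pi  # cutoff frequency a (chosen here for illustration)

def sinc_a(t):
    """Bandlimited kernel profile sin(a t)/(pi t), with the limit a/pi at t = 0."""
    if abs(t) < 1e-12:
        return A / math.pi
    return math.sin(A * t) / (math.pi * t)

def inner_l2(f, g, T=200.0, n=40000):
    """Riemann-sum approximation of the L2 inner product over [-T, T]."""
    h = 2.0 * T / n
    return sum(f(-T + k * h) * g(-T + k * h) for k in range(n)) * h

f = sinc_a                       # f itself is bandlimited with cutoff A
x = 0.5
K_x = lambda y: sinc_a(y - x)    # reproducing kernel centered at x

# Reproducing property: <f, K_x> = f(x), up to truncation of the integral.
assert abs(inner_l2(f, K_x) - f(x)) < 1e-2
```

The slow 1/t decay of the sinc functions means the truncated integral converges only modestly fast, which is why the tolerance above is loose.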
We have seen how a reproducing kernel Hilbert space defines a reproducing kernel function that is both symmetric andpositive definite. The Moore–Aronszajn theorem goes in the other direction; it states that every symmetric, positive definite kernel defines a unique reproducing kernel Hilbert space. The theorem first appeared in Aronszajn'sTheory of Reproducing Kernels, although he attributes it toE. H. Moore.
Proof. For allxinX, defineKx=K(x, ⋅ ). LetH0be the linear span of {Kx:x∈X}. Define an inner product onH0by
which impliesK(x,y)=⟨Kx,Ky⟩H0{\displaystyle K(x,y)=\left\langle K_{x},K_{y}\right\rangle _{H_{0}}}.
The symmetry of this inner product follows from the symmetry ofKand the non-degeneracy follows from the fact thatKis positive definite.
LetHbe thecompletionofH0with respect to this inner product. ThenHconsists of functions of the form
Now we can check the reproducing property (2):
To prove uniqueness, letGbe another Hilbert space of functions for whichKis a reproducing kernel. For everyxandyinX, (2) implies that
By linearity,⟨⋅,⋅⟩H=⟨⋅,⋅⟩G{\displaystyle \langle \cdot ,\cdot \rangle _{H}=\langle \cdot ,\cdot \rangle _{G}}on the span of{Kx:x∈X}{\displaystyle \{K_{x}:x\in X\}}. ThenH⊂G{\displaystyle H\subset G}becauseGis complete and containsH0and hence contains its completion.
Now we need to prove that every element ofGis inH. Letf{\displaystyle f}be an element ofG. SinceHis a closed subspace ofG, we can writef=fH+fH⊥{\displaystyle f=f_{H}+f_{H^{\bot }}}wherefH∈H{\displaystyle f_{H}\in H}andfH⊥∈H⊥{\displaystyle f_{H^{\bot }}\in H^{\bot }}. Now ifx∈X{\displaystyle x\in X}then, sinceKis a reproducing kernel ofGandH:
where we have used the fact thatKx{\displaystyle K_{x}}belongs toHso that its inner product withfH⊥{\displaystyle f_{H^{\bot }}}inGis zero.
This shows thatf=fH{\displaystyle f=f_{H}}inGand concludes the proof.
We may characterize a symmetric positive definite kernelK{\displaystyle K}via the integral operator usingMercer's theoremand obtain an additional view of the RKHS. LetX{\displaystyle X}be a compact space equipped with a strictly positive finiteBorel measureμ{\displaystyle \mu }andK:X×X→R{\displaystyle K:X\times X\to \mathbb {R} }a continuous, symmetric, and positive definite function. Define the integral operatorTK:L2(X)→L2(X){\displaystyle T_{K}:L_{2}(X)\to L_{2}(X)}as
whereL2(X){\displaystyle L_{2}(X)}is the space of square integrable functions with respect toμ{\displaystyle \mu }.
Mercer's theorem states that the spectral decomposition of the integral operatorTK{\displaystyle T_{K}}ofK{\displaystyle K}yields a series representation ofK{\displaystyle K}in terms of the eigenvalues and eigenfunctions ofTK{\displaystyle T_{K}}. This then implies thatK{\displaystyle K}is a reproducing kernel so that the corresponding RKHS can be defined in terms of these eigenvalues and eigenfunctions. We provide the details below.
Under these assumptionsTK{\displaystyle T_{K}}is a compact, continuous, self-adjoint, and positive operator. Thespectral theoremfor self-adjoint operators implies that there is an at most countable decreasing sequence(σi)i≥0{\displaystyle (\sigma _{i})_{i\geq 0}}such thatlimi→∞σi=0{\textstyle \lim _{i\to \infty }\sigma _{i}=0}andTKφi(x)=σiφi(x){\displaystyle T_{K}\varphi _{i}(x)=\sigma _{i}\varphi _{i}(x)}, where the{φi}{\displaystyle \{\varphi _{i}\}}form an orthonormal basis ofL2(X){\displaystyle L_{2}(X)}. By the positivity ofTK,σi>0{\displaystyle T_{K},\sigma _{i}>0}for alli.{\displaystyle i.}One can also show thatTK{\displaystyle T_{K}}maps continuously into the space of continuous functionsC(X){\displaystyle C(X)}and therefore we may choose continuous functions as the eigenvectors, that is,φi∈C(X){\displaystyle \varphi _{i}\in C(X)}for alli.{\displaystyle i.}Then by Mercer's theoremK{\displaystyle K}may be written in terms of the eigenvalues and continuous eigenfunctions as
for allx,y∈X{\displaystyle x,y\in X}such that
This above series representation is referred to as a Mercer kernel or Mercer representation ofK{\displaystyle K}.
Furthermore, it can be shown that the RKHSH{\displaystyle H}ofK{\displaystyle K}is given by
where the inner product of H is given by
This representation of the RKHS has application in probability and statistics, for example to theKarhunen-Loève representationfor stochastic processes andkernel PCA.
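A finite analogue of the Mercer representation can be computed by hand. Sketching with the Gaussian kernel K(x, y) = exp(−(x − y)²) sampled at the two points {0, 1} (an example chosen here, not from the source): the 2×2 Gram matrix [[1, b], [b, 1]] with b = e^(−1) has eigenvalues 1 ± b with orthonormal eigenvectors (1, ±1)/√2, and the kernel entries are recovered as Σₖ σₖ φₖ(xᵢ) φₖ(xⱼ):

```python
import math

# Two-point analogue of Mercer's theorem for the Gaussian kernel.
b = math.exp(-1.0)                     # K(0, 1) = exp(-(0 - 1)**2)
K = [[1.0, b], [b, 1.0]]               # Gram matrix at the sample points {0, 1}

sigmas = [1.0 + b, 1.0 - b]            # eigenvalues of K
s = 1.0 / math.sqrt(2.0)
phis = [[s, s], [s, -s]]               # orthonormal eigenvectors of K

# K(x_i, x_j) = sum_k sigma_k * phi_k(x_i) * phi_k(x_j), entry by entry.
for i in range(2):
    for j in range(2):
        mercer = sum(sigmas[k] * phis[k][i] * phis[k][j] for k in range(2))
        assert abs(K[i][j] - mercer) < 1e-12
```

In the continuum limit the sample points become all of X, the eigenvectors become the continuous eigenfunctions φᵢ, and this identity becomes the Mercer series above.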
Afeature mapis a mapφ:X→F{\displaystyle \varphi \colon X\rightarrow F}, whereF{\displaystyle F}is a Hilbert space which we will call the feature space. The first sections presented the connection between bounded/continuous evaluation functions, positive definite functions, and integral operators and in this section we provide another representation of the RKHS in terms of feature maps.
Every feature map defines a kernel via
Clearly K is symmetric, and positive definiteness follows from the properties of the inner product in F. Conversely, every positive definite function and corresponding reproducing kernel Hilbert space has infinitely many associated feature maps such that (3) holds.
For example, we can trivially takeF=H{\displaystyle F=H}andφ(x)=Kx{\displaystyle \varphi (x)=K_{x}}for allx∈X{\displaystyle x\in X}. Then (3) is satisfied by the reproducing property. Another classical example of a feature map relates to the previous section regarding integral operators by takingF=ℓ2{\displaystyle F=\ell ^{2}}andφ(x)=(σiφi(x))i{\displaystyle \varphi (x)=({\sqrt {\sigma _{i}}}\varphi _{i}(x))_{i}}.
This connection between kernels and feature maps provides us with a new way to understand positive definite functions and hence reproducing kernels as inner products inH{\displaystyle H}. Moreover, every feature map can naturally define a RKHS by means of the definition of a positive definite function.
Lastly, feature maps allow us to construct function spaces that reveal another perspective on the RKHS. Consider the linear space
We can define a norm onHφ{\displaystyle H_{\varphi }}by
It can be shown thatHφ{\displaystyle H_{\varphi }}is a RKHS with kernel defined byK(x,y)=⟨φ(x),φ(y)⟩F{\displaystyle K(x,y)=\langle \varphi (x),\varphi (y)\rangle _{F}}. This representation implies that the elements of the RKHS are inner products of elements in the feature space and can accordingly be seen as hyperplanes. This view of the RKHS is related to thekernel trickin machine learning.[7]
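The identity K(x, y) = ⟨φ(x), φ(y)⟩_F underlying the kernel trick can be made concrete with an explicit feature map. A sketch for the homogeneous polynomial kernel K(x, y) = (x · y)² on R² (an example chosen here): the map φ(x) = (x₁², √2·x₁x₂, x₂²) into F = R³ satisfies (3):

```python
import math

# Homogeneous polynomial kernel on R^2 and its explicit feature map.
def kernel(x, y):
    """K(x, y) = (x . y)**2."""
    return (x[0] * y[0] + x[1] * y[1]) ** 2

def phi(x):
    """Feature map into R^3 with <phi(x), phi(y)> = K(x, y)."""
    return (x[0] ** 2, math.sqrt(2.0) * x[0] * x[1], x[1] ** 2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, y = (1.0, 2.0), (3.0, 4.0)
assert abs(kernel(x, y) - dot(phi(x), phi(y))) < 1e-9   # both equal 121
```

Evaluating the kernel directly costs O(d) work, while forming the features costs O(d²); this gap is precisely what the kernel trick exploits.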
Useful properties of RKHSs:
For the linear kernel K(x, y) = ⟨x, y⟩, the corresponding RKHS H is the dual space, consisting of functions f(x) = ⟨x, β⟩ satisfying ‖f‖²_H = ‖β‖².
These are another common class of kernels which satisfyK(x,y)=K(‖x−y‖){\displaystyle K(x,y)=K(\|x-y\|)}. Some examples include:
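One standard radial kernel (named here as an illustration, since the original list of examples did not survive extraction) is the Gaussian kernel K(x, y) = exp(−‖x − y‖²/2σ²). The sketch below spot-checks that its Gram matrix yields a nonnegative quadratic form, as positive definiteness requires; this is evidence, not a proof:

```python
import math
import random

def gaussian_kernel(x, y, sigma=1.0):
    """Radial kernel: depends on x and y only through |x - y|."""
    return math.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

random.seed(0)
points = [random.uniform(-3.0, 3.0) for _ in range(8)]

# For any coefficients c, the quadratic form sum_ij c_i c_j K(x_i, x_j)
# must be nonnegative (a spot check over random c, not a proof).
for _ in range(100):
    c = [random.gauss(0.0, 1.0) for _ in points]
    q = sum(c[i] * c[j] * gaussian_kernel(points[i], points[j])
            for i in range(8) for j in range(8))
    assert q >= -1e-9
```

By the Moore–Aronszajn theorem discussed above, this positive definiteness is exactly what guarantees that the Gaussian kernel generates an RKHS.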
We also provide examples ofBergman kernels. LetXbe finite and letHconsist of all complex-valued functions onX. Then an element ofHcan be represented as an array of complex numbers. If the usualinner productis used, thenKxis the function whose value is 1 atxand 0 everywhere else, andK(x,y){\displaystyle K(x,y)}can be thought of as an identity matrix since
In this case,His isomorphic toCn{\displaystyle \mathbb {C} ^{n}}.
The case ofX=D{\displaystyle X=\mathbb {D} }(whereD{\displaystyle \mathbb {D} }denotes theunit disc) is more sophisticated. Here theBergman spaceA2(D){\displaystyle A^{2}(\mathbb {D} )}is the space ofsquare-integrableholomorphic functionsonD{\displaystyle \mathbb {D} }. It can be shown that the reproducing kernel forA2(D){\displaystyle A^{2}(\mathbb {D} )}is
Lastly, the space of band limited functions inL2(R){\displaystyle L^{2}(\mathbb {R} )}with bandwidth2a{\displaystyle 2a}is a RKHS with reproducing kernel
In this section we extend the definition of the RKHS to spaces of vector-valued functions as this extension is particularly important inmulti-task learningandmanifold regularization. The main difference is that the reproducing kernelΓ{\displaystyle \Gamma }is a symmetric function that is now a positive semi-definitematrixfor everyx,y{\displaystyle x,y}inX{\displaystyle X}. More formally, we define a vector-valued RKHS (vvRKHS) as a Hilbert space of functionsf:X→RT{\displaystyle f:X\to \mathbb {R} ^{T}}such that for allc∈RT{\displaystyle c\in \mathbb {R} ^{T}}andx∈X{\displaystyle x\in X}
and
This second property parallels the reproducing property for the scalar-valued case. This definition can also be connected to integral operators, bounded evaluation functions, and feature maps as we saw for the scalar-valued RKHS. We can equivalently define the vvRKHS as a vector-valued Hilbert space with a bounded evaluation functional and show that this implies the existence of a unique reproducing kernel by the Riesz Representation theorem. Mercer's theorem can also be extended to address the vector-valued setting and we can therefore obtain a feature map view of the vvRKHS. Lastly, it can also be shown that the closure of the span of{Γxc:x∈X,c∈RT}{\displaystyle \{\Gamma _{x}c:x\in X,c\in \mathbb {R} ^{T}\}}coincides withH{\displaystyle H}, another property similar to the scalar-valued case.
We can gain intuition for the vvRKHS by taking a component-wise perspective on these spaces. In particular, we find that every vvRKHS is isometricallyisomorphicto a scalar-valued RKHS on a particular input space. LetΛ={1,…,T}{\displaystyle \Lambda =\{1,\dots ,T\}}. Consider the spaceX×Λ{\displaystyle X\times \Lambda }and the corresponding reproducing kernel
As noted above, the RKHS associated to this reproducing kernel is given by the closure of the span of{γ(x,t):x∈X,t∈Λ}{\displaystyle \{\gamma _{(x,t)}:x\in X,t\in \Lambda \}}whereγ(x,t)(y,s)=γ((x,t),(y,s)){\displaystyle \gamma _{(x,t)}(y,s)=\gamma ((x,t),(y,s))}for every set of pairs(x,t),(y,s)∈X×Λ{\displaystyle (x,t),(y,s)\in X\times \Lambda }.
The connection to the scalar-valued RKHS can then be made by the fact that every matrix-valued kernel can be identified with a kernel of the form of (4) via
Moreover, every kernel with the form of (4) defines a matrix-valued kernel with the above expression. Now letting the mapD:HΓ→Hγ{\displaystyle D:H_{\Gamma }\to H_{\gamma }}be defined as
whereet{\displaystyle e_{t}}is thetth{\displaystyle t^{\text{th}}}component of the canonical basis forRT{\displaystyle \mathbb {R} ^{T}}, one can show thatD{\displaystyle D}is bijective and an isometry betweenHΓ{\displaystyle H_{\Gamma }}andHγ{\displaystyle H_{\gamma }}.
While this view of the vvRKHS can be useful in multi-task learning, this isometry does not reduce the study of the vector-valued case to that of the scalar-valued case. In fact, this isometry procedure can make both the scalar-valued kernel and the input space too difficult to work with in practice as properties of the original kernels are often lost.[11][12][13]
An important class of matrix-valued reproducing kernels are separable kernels, which can be factorized as the product of a scalar-valued kernel and a T-dimensional symmetric positive semi-definite matrix. In light of our previous discussion these kernels are of the form
for all x, y in X and t, s in Λ. As the scalar-valued kernel encodes dependencies between the inputs, we can observe that the matrix-valued kernel encodes dependencies among both the inputs and the outputs.
We lastly remark that the above theory can be further extended to spaces of functions with values in function spaces but obtaining kernels for these spaces is a more difficult task.[14]
TheReLU functionis commonly defined asf(x)=max{0,x}{\displaystyle f(x)=\max\{0,x\}}and is a mainstay in the architecture of neural networks where it is used as an activation function. One can construct a ReLU-like nonlinear function using the theory of reproducing kernel Hilbert spaces. Below, we derive this construction and show how it implies the representation power of neural networks with ReLU activations.
We will work with the Hilbert spaceH=L21(0)[0,∞){\displaystyle {\mathcal {H}}=L_{2}^{1}(0)[0,\infty )}of absolutely continuous functions withf(0)=0{\displaystyle f(0)=0}and square integrable (i.e.L2{\displaystyle L_{2}}) derivative. It has the inner product
To construct the reproducing kernel it suffices to consider a dense subspace, so letf∈C1[0,∞){\displaystyle f\in C^{1}[0,\infty )}andf(0)=0{\displaystyle f(0)=0}.
The Fundamental Theorem of Calculus then gives
where
andKy′(x)=G(x,y),Ky(0)=0{\displaystyle K_{y}'(x)=G(x,y),\ K_{y}(0)=0}i.e.
This impliesKy=K(⋅,y){\displaystyle K_{y}=K(\cdot ,y)}reproducesf{\displaystyle f}.
Moreover, the minimum function on X × X = [0, ∞) × [0, ∞) has the following representations using the ReLU function:
Using this formulation, we can apply therepresenter theoremto the RKHS, letting one prove the optimality of using ReLU activations in neural network settings.[citation needed]
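Two facts from this derivation can be checked numerically: the kernel K(x, y) = min(x, y) is expressible through ReLU as x − ReLU(x − y), and the reproducing property ⟨f, K_y⟩_H = ∫₀^∞ f′(t) K_y′(t) dt = f(y) holds, since K_y′(t) is 1 for t < y and 0 for t > y. The function f(x) = 1 − e^(−x) below is an example chosen here (it satisfies f(0) = 0 with square-integrable derivative):

```python
import math

def relu(t):
    return max(0.0, t)

# The kernel of this space, K(x, y) = min(x, y), written via ReLU.
def K(x, y):
    return x - relu(x - y)

assert K(3.0, 5.0) == 3.0 and K(5.0, 3.0) == 3.0  # equals min(x, y)

# Reproducing property: <f, K_y>_H = ∫_0^∞ f'(t) K_y'(t) dt = f(y).
f = lambda t: 1.0 - math.exp(-t)   # f(0) = 0, f' = e^{-t} square integrable
df = lambda t: math.exp(-t)

def inner(df_, y, n=100000):
    """Integrate f' over [0, y], where K_y'(t) = 1; K_y' vanishes beyond y."""
    h = y / n
    return sum(df_(k * h) for k in range(n)) * h

y = 2.0
assert abs(inner(df, y) - f(y)) < 1e-3
```

The inner product thus "reads off" f(y) exactly as the Fundamental Theorem of Calculus step in the derivation predicts.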
|
https://en.wikipedia.org/wiki/Reproducing_kernel
|
Incalculus,symbolic integrationis the problem of finding aformulafor theantiderivative, orindefinite integral, of a givenfunctionf(x), i.e. to find a formula for adifferentiable functionF(x) such that
The family of all functions that satisfy this property can be denoted
The termsymbolicis used to distinguish this problem from that ofnumerical integration, where the value ofFis sought at a particular input or set of inputs, rather than a general formula forF.
Both problems were held to be of practical and theoretical importance long before the time of digital computers, but they are now generally considered the domain ofcomputer science, as computers are most often used currently to tackle individual instances.
Finding the derivative of an expression is a straightforward process for which it is easy to construct analgorithm. The reverse question of finding the integral is much more difficult. Many expressions that are relatively simple do not have integrals that can be expressed inclosed form. Seeantiderivativeandnonelementary integralfor more details.
A procedure called theRisch algorithmexists that is capable of determining whether the integral of anelementary function(function built from a finite number ofexponentials,logarithms,constants, andnth rootsthroughcompositionand combinations using the fourelementary operations) is elementary and returning it if it is. In its original form, the Risch algorithm was not suitable for a direct implementation, and its complete implementation took a long time. It was first implemented inReducein the case of purelytranscendental functions; the case of purelyalgebraic functionswas solved and implemented in Reduce byJames H. Davenport; the general case was solved by Manuel Bronstein, who implemented almost all of it inAxiom, though to date there is no implementation of the Risch algorithm that can deal with all of the special cases and branches in it.[1][2]
However, the Risch algorithm applies only toindefiniteintegrals, while most of the integrals of interest to physicists, theoretical chemists, and engineers aredefiniteintegrals often related toLaplace transforms,Fourier transforms, andMellin transforms. Lacking a general algorithm, the developers ofcomputer algebra systemshave implementedheuristicsbased on pattern-matching and the exploitation of special functions, in particular theincomplete gamma function.[3]Although this approach is heuristic rather than algorithmic, it is nonetheless an effective method for solving many definite integrals encountered by practical engineering applications. Earlier systems such asMacsymahad a few definite integrals related to special functions within a look-up table. However this particular method, involving differentiation of special functions with respect to its parameters, variable transformation,pattern matchingand other manipulations, was pioneered by developers of theMaple[4]system and then later emulated byMathematica,Axiom,MuPADand other systems.
The main problem with the classical approach to symbolic integration is that if a function is represented in closed form, then, in general, its antiderivative does not have a similar representation. In other words, the class of functions that can be represented in closed form is not closed under antiderivation.
Holonomic functionsare a large class of functions, which is closed under antiderivation and allows algorithmic implementation in computers of integration and many other operations of calculus.
More precisely, a holonomic function is a solution of a homogeneous linear differential equation with polynomial coefficients. Holonomic functions are closed under addition, multiplication, derivation, and antiderivation. They include algebraic functions, the exponential function, the logarithm, sine, cosine, inverse trigonometric functions, and inverse hyperbolic functions.
They include also most common special functions such asAiry function,error function,Bessel functions, and allhypergeometric functions.
A fundamental property of holonomic functions is that the coefficients of their Taylor series at any point satisfy a linear recurrence relation with polynomial coefficients, and that this recurrence relation may be computed from the differential equation defining the function. Conversely, given such a recurrence relation between the coefficients of a power series, the power series defines a holonomic function whose differential equation may be computed algorithmically. The recurrence relation allows fast computation of the Taylor series, and thus of the value of the function at any point, with an arbitrarily small certified error.

This makes most operations of calculus algorithmic when they are restricted to holonomic functions represented by their differential equations and initial conditions. These operations include the computation of antiderivatives and of definite integrals (which amounts to evaluating the antiderivative at the endpoints of the interval of integration), as well as the computation of the asymptotic behavior of the function at infinity, and thus of definite integrals over unbounded intervals.
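As a small self-contained sketch of this property (not tied to any of the libraries mentioned below): the exponential function satisfies the holonomic equation y′ − y = 0, which for y = Σ cₙxⁿ translates into the coefficient recurrence (n + 1)cₙ₊₁ = cₙ with c₀ = 1, so function values can be computed from the differential equation and initial condition alone.

```python
from math import exp, isclose

def exp_via_recurrence(x, terms=30):
    """Evaluate exp(x) using only the recurrence derived from y' - y = 0.

    For y = sum c_n x^n, the equation y' - y = 0 gives
    (n + 1) c_{n+1} - c_n = 0, with c_0 = y(0) = 1.
    """
    c = 1.0          # c_0, fixed by the initial condition
    total = c
    for n in range(terms):
        c = c / (n + 1)           # c_{n+1} = c_n / (n + 1)
        total += c * x ** (n + 1)
    return total

print(exp_via_recurrence(1.0))   # agrees with math.exp(1.0) to double precision
```

The certified-error aspect comes from the same recurrence: the size of the first omitted term bounds the tail for convergent series like this one.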
All these operations are implemented in thealgoliblibrary forMaple.[5]
See also the Dynamic Dictionary of Mathematical Functions.[6]
For example:
is a symbolic result for an indefinite integral (hereCis aconstant of integration),
is a symbolic result for a definite integral, and
is a numerical result for the same definite integral.
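As a concrete sketch of these three kinds of result, using the SymPy computer algebra system (an assumed illustration tool, not one of the systems discussed above):

```python
import sympy as sp

x = sp.symbols('x')
f = x * sp.cos(x)

# Symbolic antiderivative; SymPy omits the constant of integration C.
indefinite = sp.integrate(f, x)            # x*sin(x) + cos(x)

# Symbolic result for a definite integral over [0, pi].
definite = sp.integrate(f, (x, 0, sp.pi))  # -2

# Numerical result for the same definite integral.
numeric = float(definite)                  # -2.0
```

Note that the constant of integration C must be supplied by the reader; computer algebra systems conventionally return one particular antiderivative.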
|
https://en.wikipedia.org/wiki/Symbolic_integration
|
In mathematics, linearization (British English: linearisation) is finding the linear approximation to a function at a given point. The linear approximation of a function is the first-order Taylor expansion around the point of interest. In the study of dynamical systems, linearization is a method for assessing the local stability of an equilibrium point of a system of nonlinear differential equations or discrete dynamical systems.[1] This method is used in fields such as engineering, physics, economics, and ecology.

Linearizations of a function are lines, usually lines that can be used for purposes of calculation. Linearization is an effective method for approximating the output of a function y=f(x){\displaystyle y=f(x)} at any x=a{\displaystyle x=a} based on the value and slope of the function at x=b{\displaystyle x=b}, given that f(x){\displaystyle f(x)} is differentiable on [a,b]{\displaystyle [a,b]} (or [b,a]{\displaystyle [b,a]}) and that a{\displaystyle a} is close to b{\displaystyle b}. In short, linearization approximates the output of a function near x=a{\displaystyle x=a}.
For example,4=2{\displaystyle {\sqrt {4}}=2}. However, what would be a good approximation of4.001=4+.001{\displaystyle {\sqrt {4.001}}={\sqrt {4+.001}}}?
For any given functiony=f(x){\displaystyle y=f(x)},f(x){\displaystyle f(x)}can be approximated if it is near a known differentiable point. The most basic requisite is thatLa(a)=f(a){\displaystyle L_{a}(a)=f(a)}, whereLa(x){\displaystyle L_{a}(x)}is the linearization off(x){\displaystyle f(x)}atx=a{\displaystyle x=a}. Thepoint-slope formof an equation forms an equation of a line, given a point(H,K){\displaystyle (H,K)}and slopeM{\displaystyle M}. The general form of this equation is:y−K=M(x−H){\displaystyle y-K=M(x-H)}.
Using the point(a,f(a)){\displaystyle (a,f(a))},La(x){\displaystyle L_{a}(x)}becomesy=f(a)+M(x−a){\displaystyle y=f(a)+M(x-a)}. Because differentiable functions arelocally linear, the best slope to substitute in would be the slope of the linetangenttof(x){\displaystyle f(x)}atx=a{\displaystyle x=a}.
While the concept of local linearity applies best to points arbitrarily close to x=a{\displaystyle x=a}, points relatively close to it still yield reasonably good linear approximations. The slope M{\displaystyle M} should be, most accurately, the slope of the tangent line at x=a{\displaystyle x=a}.
Visually, the accompanying diagram shows the tangent line off(x){\displaystyle f(x)}atx{\displaystyle x}. Atf(x+h){\displaystyle f(x+h)}, whereh{\displaystyle h}is any small positive or negative value,f(x+h){\displaystyle f(x+h)}is very nearly the value of the tangent line at the point(x+h,L(x+h)){\displaystyle (x+h,L(x+h))}.
The final equation for the linearization of a function atx=a{\displaystyle x=a}is:y=(f(a)+f′(a)(x−a)){\displaystyle y=(f(a)+f'(a)(x-a))}
Forx=a{\displaystyle x=a},f(a)=f(x){\displaystyle f(a)=f(x)}. Thederivativeoff(x){\displaystyle f(x)}isf′(x){\displaystyle f'(x)}, and the slope off(x){\displaystyle f(x)}ata{\displaystyle a}isf′(a){\displaystyle f'(a)}.
To find4.001{\displaystyle {\sqrt {4.001}}}, we can use the fact that4=2{\displaystyle {\sqrt {4}}=2}. The linearization off(x)=x{\displaystyle f(x)={\sqrt {x}}}atx=a{\displaystyle x=a}isy=a+12a(x−a){\displaystyle y={\sqrt {a}}+{\frac {1}{2{\sqrt {a}}}}(x-a)}, because the functionf′(x)=12x{\displaystyle f'(x)={\frac {1}{2{\sqrt {x}}}}}defines the slope of the functionf(x)=x{\displaystyle f(x)={\sqrt {x}}}atx{\displaystyle x}. Substituting ina=4{\displaystyle a=4}, the linearization at 4 isy=2+x−44{\displaystyle y=2+{\frac {x-4}{4}}}. In this casex=4.001{\displaystyle x=4.001}, so4.001{\displaystyle {\sqrt {4.001}}}is approximately2+4.001−44=2.00025{\displaystyle 2+{\frac {4.001-4}{4}}=2.00025}. The true value is close to 2.00024998, so the linearization approximation has a relative error of less than 1 millionth of a percent.
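The worked example above can be reproduced in a few lines (a sketch; the helper `linearize` is not a standard library function):

```python
from math import sqrt

def linearize(f, fprime, a):
    """Return the tangent-line approximation L_a(x) = f(a) + f'(a)*(x - a)."""
    return lambda x: f(a) + fprime(a) * (x - a)

# Linearize sqrt at a = 4, where f(4) = 2 and f'(4) = 1/(2*sqrt(4)) = 1/4.
L = linearize(sqrt, lambda x: 1.0 / (2.0 * sqrt(x)), 4.0)

approx = L(4.001)      # 2.00025
exact = sqrt(4.001)    # 2.0002499843...
```

The gap between `approx` and `exact` is about 1.6e-8, matching the error estimate quoted above.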
The equation for the linearization of a functionf(x,y){\displaystyle f(x,y)}at a pointp(a,b){\displaystyle p(a,b)}is:
The general equation for the linearization of a multivariable functionf(x){\displaystyle f(\mathbf {x} )}at a pointp{\displaystyle \mathbf {p} }is:
where x{\displaystyle \mathbf {x} } is the vector of variables, ∇f{\displaystyle {\nabla f}} is the gradient, and p{\displaystyle \mathbf {p} } is the linearization point of interest.[2]
Linearization makes it possible to use tools for studyinglinear systemsto analyze the behavior of a nonlinear function near a given point. The linearization of a function is the first order term of itsTaylor expansionaround the point of interest. For a system defined by the equation
the linearized system can be written as
wherex0{\displaystyle \mathbf {x_{0}} }is the point of interest andDF(x0,t){\displaystyle D\mathbf {F} (\mathbf {x_{0}} ,t)}is thex{\displaystyle \mathbf {x} }-JacobianofF(x,t){\displaystyle \mathbf {F} (\mathbf {x} ,t)}evaluated atx0{\displaystyle \mathbf {x_{0}} }.
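As a numerical sketch of this step (stdlib-only, using a finite-difference Jacobian rather than a symbolic one), consider the undamped pendulum x₁′ = x₂, x₂′ = −sin x₁ linearized about its equilibrium at the origin:

```python
from math import sin

def F(x):
    # Undamped pendulum: x1' = x2, x2' = -sin(x1)
    return [x[1], -sin(x[0])]

def jacobian(F, x0, h=1e-6):
    """Central-difference approximation of the Jacobian of F at x0."""
    n = len(x0)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x0), list(x0)
        xp[j] += h
        xm[j] -= h
        Fp, Fm = F(xp), F(xm)
        for i in range(n):
            J[i][j] = (Fp[i] - Fm[i]) / (2 * h)
    return J

J = jacobian(F, [0.0, 0.0])
# Analytically the Jacobian at the origin is [[0, 1], [-1, 0]]; its
# eigenvalues +/- i indicate a center for the linearized system.
```

The eigenvalues of `J` are what the stability analysis in the next paragraph examines.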
Instabilityanalysis ofautonomous systems, one can use theeigenvaluesof theJacobian matrixevaluated at ahyperbolic equilibrium pointto determine the nature of that equilibrium. This is the content of thelinearization theorem. For time-varying systems, the linearization requires additional justification.[3]
Inmicroeconomics,decision rulesmay be approximated under the state-space approach to linearization.[4]Under this approach, theEuler equationsof theutility maximization problemare linearized around the stationary steady state.[4]A unique solution to the resulting system of dynamic equations then is found.[4]
In mathematical optimization, cost functions and the nonlinear components within them can be linearized in order to apply a linear solution method such as the simplex algorithm. The optimized result is then reached much more efficiently and is deterministic, being a global optimum.
In multiphysics systems, i.e. systems involving multiple physical fields that interact with one another, linearization with respect to each of the physical fields may be performed. This linearization of the system with respect to each of the fields results in a linearized monolithic equation system that can be solved using monolithic iterative solution procedures such as the Newton–Raphson method. Examples of this include MRI scanner systems, which result in a system of electromagnetic, mechanical, and acoustic fields.[5]
|
https://en.wikipedia.org/wiki/Linearization
|
Inmathematicsandapplied mathematics,perturbation theorycomprises methods for finding anapproximate solutionto a problem, by starting from the exactsolutionof a related, simpler problem.[1][2]A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts.[3]Inregular perturbation theory, the solution is expressed as apower seriesin a small parameterε{\displaystyle \varepsilon }.[1][2]The first term is the known solution to the solvable problem. Successive terms in the series at higher powers ofε{\displaystyle \varepsilon }usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, often keeping only the first two terms, the solution to the known problem and the 'first order' perturbation correction.
Perturbation theory is used in a wide range of fields and reaches its most sophisticated and advanced forms inquantum field theory.Perturbation theory (quantum mechanics)describes the use of this method inquantum mechanics. The field in general remains actively and heavily researched across multiple disciplines.
Perturbation theory develops an expression for the desired solution in terms of aformal power seriesknown as aperturbation seriesin some "small" parameter, that quantifies the deviation from the exactly solvable problem. The leading term in this power series is the solution of the exactly solvable problem, while further terms describe the deviation in the solution, due to the deviation from the initial problem. Formally, we have for the approximation to the full solutionA,{\displaystyle \ A\ ,}a series in the small parameter (here calledε), like the following:
In this example,A0{\displaystyle \ A_{0}\ }would be the known solution to the exactly solvable initial problem, and the termsA1,A2,A3,…{\displaystyle \ A_{1},A_{2},A_{3},\ldots \ }represent thefirst-order,second-order,third-order, andhigher-order terms, which may be found iteratively by a mechanistic but increasingly difficult procedure. For smallε{\displaystyle \ \varepsilon \ }these higher-order terms in the series generally (but not always) become successively smaller. An approximate "perturbative solution" is obtained by truncating the series, often by keeping only the first two terms, expressing the final solution as a sum of the initial (exact) solution and the "first-order" perturbative correction
Some authors usebig O notationto indicate the order of the error in the approximate solution:A=A0+εA1+O(ε2).{\displaystyle \;A=A_{0}+\varepsilon A_{1}+{\mathcal {O}}{\bigl (}\ \varepsilon ^{2}\ {\bigr )}~.}[2]
If the power series inε{\displaystyle \ \varepsilon \ }converges with a nonzero radius of convergence, the perturbation problem is called aregularperturbation problem.[1]In regular perturbation problems, the asymptotic solution smoothly approaches the exact solution.[1]However, the perturbation series can also diverge, and the truncated series can still be a good approximation to the true solution if it is truncated at a point at which its elements are minimum. This is called anasymptotic series. If the perturbation series is divergent or not a power series (for example, if the asymptotic expansion must include non-integer powersε(1/2){\displaystyle \ \varepsilon ^{\left(1/2\right)}\ }or negative powersε−2{\displaystyle \ \varepsilon ^{-2}\ }) then the perturbation problem is called asingularperturbation problem.[1]Many special techniques in perturbation theory have been developed to analyze singular perturbation problems.[1][2]
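A classic numerical illustration of a divergent-but-useful asymptotic series (a sketch, not tied to any particular physical system) is the Euler integral I(ε) = ∫₀^∞ e^(−t)/(1 + εt) dt, whose formal expansion Σ (−1)ⁿ n! εⁿ diverges for every ε ≠ 0, yet truncating near the smallest term approximates I(ε) extremely well:

```python
from math import exp, factorial

eps = 0.05

# "Exact" value by midpoint quadrature; the integrand decays like e^(-t),
# so truncating the domain at t = 40 is harmless.
N, tmax = 200_000, 40.0
dt = tmax / N
exact = sum(exp(-(i + 0.5) * dt) / (1.0 + eps * (i + 0.5) * dt)
            for i in range(N)) * dt

def truncated_series(eps, terms):
    # Formal expansion sum_{n=0}^{terms-1} (-1)^n n! eps^n (divergent!)
    return sum((-1) ** n * factorial(n) * eps ** n for n in range(terms))

errors = [abs(truncated_series(eps, k) - exact) for k in (2, 5, 10, 20, 40)]
# The error shrinks until roughly n ~ 1/eps = 20 terms, then grows again.
```

Truncating at the smallest term (around n ≈ 1/ε) leaves an error of roughly e^(−1/ε), which is the best a purely perturbative treatment can do here.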
The earliest use of what would now be calledperturbation theorywas to deal with the otherwise unsolvable mathematical problems ofcelestial mechanics: for example theorbit of the Moon, which moves noticeably differently from a simpleKeplerian ellipsebecause of the competing gravitation of the Earth and theSun.[4]
Perturbation methods start with a simplified form of the original problem, which issimple enoughto be solved exactly. Incelestial mechanics, this is usually aKeplerian ellipse. UnderNewtonian gravity, an ellipse is exactly correct when there are only two gravitating bodies (say, the Earth and theMoon) but not quite correct when there arethree or more objects(say, the Earth,Moon,Sun, and the rest of theSolar System) and not quite correct when the gravitational interaction is stated using formulations fromgeneral relativity.
Keeping the above example in mind, one follows a general recipe to obtain the perturbation series. Theperturbative expansionis created by adding successive corrections to the simplified problem. The corrections are obtained by forcing consistency between the unperturbed solution, and the equations describing the system in full. WriteD{\displaystyle \ D\ }for this collection of equations; that is, let the symbolD{\displaystyle \ D\ }stand in for the problem to be solved. Quite often, these are differential equations, thus, the letter "D".
The process is generally mechanical, if laborious. One begins by writing the equationsD{\displaystyle \ D\ }so that they split into two parts: some collection of equationsD0{\displaystyle \ D_{0}\ }which can be solved exactly, and some additional remaining partεD1{\displaystyle \ \varepsilon D_{1}\ }for some smallε≪1.{\displaystyle \ \varepsilon \ll 1~.}The solutionA0{\displaystyle \ A_{0}\ }(toD0{\displaystyle \ D_{0}\ }) is known, and one seeks the general solutionA{\displaystyle \ A\ }toD=D0+εD1.{\displaystyle \ D=D_{0}+\varepsilon D_{1}~.}
Next the approximationA≈A0+εA1{\displaystyle \ A\approx A_{0}+\varepsilon A_{1}\ }is inserted intoεD1{\displaystyle \ \varepsilon D_{1}}. This results in an equation forA1,{\displaystyle \ A_{1}\ ,}which, in the general case, can be written in closed form as a sum over integrals overA0.{\displaystyle \ A_{0}~.}Thus, one has obtained thefirst-order correctionA1{\displaystyle \ A_{1}\ }and thusA≈A0+εA1{\displaystyle \ A\approx A_{0}+\varepsilon A_{1}\ }is a good approximation toA.{\displaystyle \ A~.}It is a good approximation, precisely because the parts that were ignored were of sizeε2.{\displaystyle \ \varepsilon ^{2}~.}The process can then be repeated, to obtain correctionsA2,{\displaystyle \ A_{2}\ ,}and so on.
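The recipe can be carried out by hand for a toy algebraic problem (an illustrative sketch, not from the source): for the root of x² + εx − 1 = 0 near x = 1, substituting x = x₀ + εx₁ + ε²x₂ and matching powers of ε gives x₀ = 1, x₁ = −1/2, x₂ = 1/8.

```python
from math import sqrt

def perturbative_root(eps):
    # x = x0 + eps*x1 + eps^2*x2 with x0 = 1, x1 = -1/2, x2 = 1/8,
    # obtained by matching powers of eps in x^2 + eps*x - 1 = 0.
    return 1.0 - eps / 2.0 + eps ** 2 / 8.0

def exact_root(eps):
    # Positive root from the quadratic formula, for comparison.
    return (-eps + sqrt(eps ** 2 + 4.0)) / 2.0

eps = 0.1
# The two agree to roughly 1e-6 here: the ignored parts are O(eps^4),
# since the eps^3 coefficient happens to vanish for this problem.
```

This is exactly the pattern described above: an exactly solvable problem (ε = 0, root x₀ = 1) plus iteratively computed corrections forced by consistency with the full equation.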
In practice, this process rapidly explodes into a profusion of terms, which become extremely hard to manage by hand.Isaac Newtonis reported to have said, regarding the problem of theMoon's orbit, that"It causeth my head to ache."[5]This unmanageability has forced perturbation theory to develop into a high art of managing and writing out these higher order terms. One of the fundamental breakthroughs inquantum mechanicsfor controlling the expansion are theFeynman diagrams, which allowquantum mechanicalperturbation series to be represented by a sketch.
Perturbation theory has been used in a large number of different settings in physics and applied mathematics. Examples of the "collection of equations"D{\displaystyle D}includealgebraic equations,[6]differential equations[7](e.g., theequations of motion[8]and commonlywave equations),thermodynamic free energyinstatistical mechanics, radiative transfer,[9]andHamiltonian operatorsinquantum mechanics.
Examples of the kinds of solutions that are found perturbatively include the solution of the equation of motion (e.g., thetrajectoryof a particle), thestatistical averageof some physical quantity (e.g., average magnetization), and theground stateenergy of a quantum mechanical problem.
Examples of exactly solvable problems that can be used as starting points includelinear equations, including linear equations of motion (harmonic oscillator,linear wave equation), statistical or quantum-mechanical systems of non-interacting particles (or in general, Hamiltonians or free energies containing only terms quadratic in all degrees of freedom).
Examples of systems that can be solved with perturbations include systems with nonlinear contributions to the equations of motion,interactionsbetween particles, terms of higher powers in the Hamiltonian/free energy.
For physical problems involving interactions between particles, the terms of the perturbation series may be displayed (and manipulated) usingFeynman diagrams.
Perturbation theory was first devised to solve otherwise intractable problems in the calculation of the motions of planets in the solar system. For instance, Newton's law of universal gravitation explained the gravitation between two astronomical bodies, but when a third body is added, the problem was, "How does each body pull on each?" Kepler's orbital equations only solve Newton's gravitational equations when the latter are limited to just two interacting bodies. The gradually increasing accuracy of astronomical observations led to increasing demands on the accuracy of solutions to Newton's gravitational equations, which led many eminent 18th- and 19th-century mathematicians, notably Joseph-Louis Lagrange and Pierre-Simon Laplace, to extend and generalize the methods of perturbation theory.
These well-developed perturbation methods were adopted and adapted to solve new problems arising during the development ofquantum mechanicsin 20th century atomic and subatomic physics.Paul Diracdeveloped quantum perturbation theory in 1927 to evaluate when a particle would be emitted in radioactive elements. This was later namedFermi's golden rule.[10][11]Perturbation theory in quantum mechanics is fairly accessible, mainly because quantum mechanics is limited to linear wave equations, but also since the quantum mechanical notation allows expressions to be written in fairly compact form, thus making them easier to comprehend. This resulted in an explosion of applications, ranging from theZeeman effectto thehyperfine splittingin thehydrogen atom.
Despite the simpler notation, perturbation theory applied toquantum field theorystill easily gets out of hand.Richard Feynmandeveloped the celebratedFeynman diagramsby observing that many terms repeat in a regular fashion. These terms can be replaced by dots, lines, squiggles and similar marks, each standing for a term, a denominator, an integral, and so on; thus complex integrals can be written as simple diagrams, with absolutely no ambiguity as to what they mean. The one-to-one correspondence between the diagrams, and specific integrals is what gives them their power. Although originally developed for quantum field theory, it turns out the diagrammatic technique is broadly applicable to many other perturbative series (although not always worthwhile).
In the second half of the 20th century, as chaos theory developed, it became clear that unperturbed systems were in general completely integrable systems, while the perturbed systems were not. This promptly led to the study of "nearly integrable systems", of which the KAM torus is the canonical example. At the same time, it was also discovered that many (rather special) non-linear systems, which were previously approachable only through perturbation theory, are in fact completely integrable. This discovery was quite dramatic, as it allowed exact solutions to be given. This, in turn, helped clarify the meaning of the perturbative series, as one could now compare the results of the series to the exact solutions.
The improved understanding ofdynamical systemscoming from chaos theory helped shed light on what was termed thesmall denominator problemorsmall divisor problem. In the 19th centuryPoincaréobserved (as perhaps had earlier mathematicians) that sometimes 2nd and higher order terms in the perturbative series have "small denominators": That is, they have the general formψnVϕm(ωn−ωm){\displaystyle \ {\frac {\ \psi _{n}V\phi _{m}\ }{\ (\omega _{n}-\omega _{m})\ }}\ }whereψn,{\displaystyle \ \psi _{n}\ ,}V,{\displaystyle \ V\ ,}andϕm{\displaystyle \ \phi _{m}\ }are some complicated expressions pertinent to the problem to be solved, andωn{\displaystyle \ \omega _{n}\ }andωm{\displaystyle \ \omega _{m}\ }are real numbers; very often they are theenergyofnormal modes. The small divisor problem arises when the differenceωn−ωm{\displaystyle \ \omega _{n}-\omega _{m}\ }is small, causing the perturbative correction to "blow up", becoming as large or maybe larger than the zeroth order term. This situation signals a breakdown of perturbation theory: It stops working at this point, and cannot be expanded or summed any further. In formal terms, the perturbative series is anasymptotic series: A useful approximation for a few terms, but at some point becomeslessaccurate if even more terms are added. The breakthrough from chaos theory was an explanation of why this happened: The small divisors occur whenever perturbation theory is applied to a chaotic system. The one signals the presence of the other.[citation needed]
Since the planets are very remote from each other, and since their mass is small as compared to the mass of the Sun, the gravitational forces between the planets can be neglected, and the planetary motion is considered, to a first approximation, as taking place along Kepler's orbits, which are defined by the equations of thetwo-body problem, the two bodies being the planet and the Sun.[12]
Since astronomic data came to be known with much greater accuracy, it became necessary to consider how the motion of a planet around the Sun is affected by other planets. This was the origin of thethree-body problem; thus, in studying the system Moon-Earth-Sun, the mass ratio between the Moon and the Earth was chosen as the "small parameter". Lagrange and Laplace were the first to advance the view that the so-called "constants" which describe the motion of a planet around the Sun gradually change: They are "perturbed", as it were, by the motion of other planets and vary as a function of time; hence the name "perturbation theory".[12]
Perturbation theory was investigated by the classical scholars – Laplace, Siméon Denis Poisson, Carl Friedrich Gauss – as a result of which the computations could be performed with a very high accuracy. The discovery of the planet Neptune in 1846 by Urbain Le Verrier, based on the deviations in motion of the planet Uranus, was a triumph of perturbation theory: Le Verrier sent the predicted coordinates to J. G. Galle, who successfully observed Neptune through his telescope.[12]
The standard exposition of perturbation theory is given in terms of the order to which the perturbation is carried out: first-order perturbation theory or second-order perturbation theory, and whether the perturbed states are degenerate, which requiressingular perturbation. In the singular case extra care must be taken, and the theory is slightly more elaborate.
Many of theab initio quantum chemistry methodsuse perturbation theory directly or are closely related methods. Implicit perturbation theory[13]works with the complete Hamiltonian from the very beginning and never specifies a perturbation operator as such.Møller–Plesset perturbation theoryuses the difference between theHartree–FockHamiltonian and the exact non-relativistic Hamiltonian as the perturbation. The zero-order energy is the sum of orbital energies. The first-order energy is the Hartree–Fock energy and electron correlation is included at second-order or higher. Calculations to second, third or fourth order are very common and the code is included in mostab initioquantum chemistry programs. A related but more accurate method is thecoupled clustermethod.
Ashell-crossing(sc) occurs in perturbation theory when matter trajectories intersect, forming asingularity.[14]This limits the predictive power of physical simulations at small scales.
|
https://en.wikipedia.org/wiki/Perturbation_theory
|
Chapman–Enskog theoryprovides a framework in which equations ofhydrodynamicsfor a gas can be derived from theBoltzmann equation. The technique justifies the otherwise phenomenologicalconstitutive relationsappearing in hydrodynamical descriptions such as theNavier–Stokes equations. In doing so, expressions for various transport coefficients such asthermal conductivityandviscosityare obtained in terms of molecular parameters. Thus, Chapman–Enskog theory constitutes an important step in the passage from a microscopic, particle-based description to acontinuumhydrodynamical one.
The theory is named forSydney ChapmanandDavid Enskog, who introduced it independently in 1916 and 1917.[1]
The starting point of Chapman–Enskog theory is the Boltzmann equation for the 1-particle distribution functionf(r,v,t){\displaystyle f(\mathbf {r} ,\mathbf {v} ,t)}:
∂f∂t+v⋅∂f∂r+Fm⋅∂f∂v=C^f,{\displaystyle {\frac {\partial f}{\partial t}}+\mathbf {v} \cdot {\frac {\partial f}{\partial \mathbf {r} }}+{\frac {\mathbf {F} }{m}}\cdot {\frac {\partial f}{\partial \mathbf {v} }}={\hat {C}}f,}
whereC^{\displaystyle {\hat {C}}}is a nonlinear integral operator which models the evolution off{\displaystyle f}under interparticle collisions. This nonlinearity makes solving the full Boltzmann equation difficult, and motivates the development of approximate techniques such as the one provided by Chapman–Enskog theory.
Given this starting point, the various assumptions underlying the Boltzmann equation carry over to Chapman–Enskog theory as well. The most basic of these requires a separation of scale between the collision durationτc{\displaystyle \tau _{\mathrm {c} }}and the mean free time between collisionsτf{\displaystyle \tau _{\mathrm {f} }}:τc≪τf{\displaystyle \tau _{\mathrm {c} }\ll \tau _{\mathrm {f} }}. This condition ensures that collisions are well-defined events in space and time, and holds if the dimensionless parameterγ≡rc3n{\displaystyle \gamma \equiv r_{\mathrm {c} }^{3}n}is small, whererc{\displaystyle r_{\mathrm {c} }}is the range of interparticle interactions andn{\displaystyle n}is the number density.[2]In addition to this assumption, Chapman–Enskog theory also requires thatτf{\displaystyle \tau _{\mathrm {f} }}is much smaller than anyextrinsictimescalesτext{\displaystyle \tau _{\text{ext}}}. These are the timescales associated with the terms on the left hand side of the Boltzmann equation, which describe variations of the gas state over macroscopic lengths. Typically, their values are determined by initial/boundary conditions and/or external fields. This separation of scales implies that the collisional term on the right hand side of the Boltzmann equation is much larger than the streaming terms on the left hand side. Thus, an approximate solution can be found from
C^f=0.{\displaystyle {\hat {C}}f=0.}
It can be shown that the solution to this equation is aGaussian:
f=n(r,t)(m2πkBT(r,t))3/2exp[−m|v−v0(r,t)|22kBT(r,t)],{\displaystyle f=n(\mathbf {r} ,t)\left({\frac {m}{2\pi k_{\text{B}}T(\mathbf {r} ,t)}}\right)^{3/2}\exp \left[-{\frac {m{\left|\mathbf {v} -\mathbf {v} _{0}(\mathbf {r} ,t)\right|}^{2}}{2k_{\text{B}}T(\mathbf {r} ,t)}}\right],}
wherem{\displaystyle m}is the molecule mass andkB{\displaystyle k_{\text{B}}}is theBoltzmann constant.[3]A gas is said to be inlocal equilibriumif it satisfies this equation.[4]The assumption of local equilibrium leads directly to theEuler equations, which describe fluids without dissipation, i.e. with thermal conductivity and viscosity equal to0{\displaystyle 0}. The primary goal of Chapman–Enskog theory is to systematically obtain generalizations of the Euler equations which incorporate dissipation. This is achieved by expressing deviations from local equilibrium as a perturbative series inKnudsen numberKn{\displaystyle {\text{Kn}}}, which is small ifτf≪τext{\displaystyle \tau _{\mathrm {f} }\ll \tau _{\text{ext}}}. Conceptually, the resulting hydrodynamic equations describe the dynamical interplay between free streaming and interparticle collisions. The latter tend to drive the gastowardslocal equilibrium, while the former acts across spatial inhomogeneities to drive the gasawayfrom local equilibrium.[5]When the Knudsen number is of the order of 1 or greater, the gas in the system being considered cannot be described as a fluid.
To first order inKn{\displaystyle {\text{Kn}}}one obtains theNavier–Stokes equations. Second and third orders give rise, respectively, to theBurnett equationsand super-Burnett equations.
Since the Knudsen number does not appear explicitly in the Boltzmann equation, but rather implicitly in terms of the distribution function and boundary conditions, a dummy variableε{\displaystyle \varepsilon }is introduced to keep track of the appropriate orders in the Chapman–Enskog expansion:
∂f∂t+v⋅∂f∂r+Fm⋅∂f∂v=1εC^f.{\displaystyle {\frac {\partial f}{\partial t}}+\mathbf {v\cdot } {\frac {\partial f}{\partial \mathbf {r} }}+{\frac {\mathbf {F} }{m}}\cdot {\frac {\partial f}{\partial \mathbf {v} }}={\frac {1}{\varepsilon }}{\hat {C}}f.}
Smallε{\displaystyle \varepsilon }implies the collisional termC^f{\displaystyle {\hat {C}}f}dominates the streaming termv⋅∂f∂r+Fm⋅∂f∂v{\displaystyle \mathbf {v\cdot } {\frac {\partial f}{\partial \mathbf {r} }}+{\frac {\mathbf {F} }{m}}\cdot {\frac {\partial f}{\partial \mathbf {v} }}}, which is the same as saying the Knudsen number is small. Thus, the appropriate form for the Chapman–Enskog expansion is
f=f(0)+εf(1)+ε2f(2)+⋯.{\displaystyle f=f^{(0)}+\varepsilon f^{(1)}+\varepsilon ^{2}f^{(2)}+\cdots \ .}
Solutions that can be formally expanded in this way are known asnormalsolutions to the Boltzmann equation.[6]This class of solutions excludes non-perturbative contributions (such ase−1/ε{\displaystyle e^{-1/\varepsilon }}), which appear in boundary layers or near internalshock layers. Thus, Chapman–Enskog theory is restricted to situations in which such solutions are negligible.
Substituting this expansion and equating orders ofε{\displaystyle \varepsilon }leads to the hierarchy
J(f(0),f(0))=02J(f(0),f(n))=(∂∂t+v⋅∂∂r+Fm⋅∂∂v)f(n−1)−∑m=1n−1J(f(n),f(n−m)),n>0,{\displaystyle {\begin{aligned}J(f^{(0)},f^{(0)})&=0\\2J(f^{(0)},f^{(n)})&=\left({\frac {\partial }{\partial t}}+\mathbf {v\cdot } {\frac {\partial }{\partial \mathbf {r} }}+{\frac {\mathbf {F} }{m}}\cdot {\frac {\partial }{\partial \mathbf {v} }}\right)f^{(n-1)}-\sum _{m=1}^{n-1}J(f^{(n)},f^{(n-m)}),\qquad n>0,\end{aligned}}}
whereJ{\displaystyle J}is an integral operator, linear in both its arguments, which satisfiesJ(f,g)=J(g,f){\displaystyle J(f,g)=J(g,f)}andJ(f,f)=C^f{\displaystyle J(f,f)={\hat {C}}f}. The solution to the first equation is a Gaussian:
f(0)=n′(r,t)(m2πkBT′(r,t))3/2exp[−m|v−v0′(r,t)|22kBT′(r,t)].{\displaystyle f^{(0)}=n'(\mathbf {r} ,t)\left({\frac {m}{2\pi k_{\text{B}}T'(\mathbf {r} ,t)}}\right)^{3/2}\exp \left[-{\frac {m\left|\mathbf {v} -\mathbf {v} '_{0}(\mathbf {r} ,t)\right|^{2}}{2k_{\text{B}}T'(\mathbf {r} ,t)}}\right].}
for some functionsn′(r,t){\displaystyle n'(\mathbf {r} ,t)},v0′(r,t){\displaystyle \mathbf {v} '_{0}(\mathbf {r} ,t)}, andT′(r,t){\displaystyle T'(\mathbf {r} ,t)}. The expression forf(0){\displaystyle f^{(0)}}suggests a connection between these functions and the physical hydrodynamic fields defined as moments off(r,v,t){\displaystyle f(\mathbf {r} ,\mathbf {v} ,t)}:
n(r,t)=∫f(r,v,t)dvn(r,t)v0(r,t)=∫vf(r,v,t)dvn(r,t)T(r,t)=∫m3kBv2f(r,v,t)dv.{\displaystyle {\begin{aligned}n(\mathbf {r} ,t)&=\int f(\mathbf {r} ,\mathbf {v} ,t)\,d\mathbf {v} \\n(\mathbf {r} ,t)\mathbf {v} _{0}(\mathbf {r} ,t)&=\int \mathbf {v} f(\mathbf {r} ,\mathbf {v} ,t)\,d\mathbf {v} \\n(\mathbf {r} ,t)T(\mathbf {r} ,t)&=\int {\frac {m}{3k_{\text{B}}}}v^{2}f(\mathbf {r} ,\mathbf {v} ,t)\,d\mathbf {v} .\end{aligned}}}
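These moment relations can be checked numerically for the local-equilibrium Gaussian (a 1-D sketch in units where m = k_B = 1; the 3-D integrals factorize, and the temperature moment is taken here over the peculiar velocity v − v₀):

```python
from math import exp, pi, sqrt

n, v0, T = 2.0, 0.5, 1.5   # arbitrary test values of the hydrodynamic fields

def f(v):
    # 1-D local Maxwellian with m = kB = 1
    return n * sqrt(1.0 / (2.0 * pi * T)) * exp(-(v - v0) ** 2 / (2.0 * T))

# Midpoint quadrature on a wide velocity interval (f is negligible at +/-15)
M = 20_000
vs = [-15.0 + 30.0 * (i + 0.5) / M for i in range(M)]
dv = 30.0 / M

def moment(g):
    return sum(g(v) * f(v) for v in vs) * dv

n_num = moment(lambda v: 1.0)                          # recovers n
v0_num = moment(lambda v: v) / n_num                   # recovers v0
T_num = moment(lambda v: (v - v0_num) ** 2) / n_num    # recovers T
```

Each numerical moment reproduces the field value that was fed into the Gaussian, which is the consistency the Chapman–Enskog construction demands of f⁽⁰⁾.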
From a purely mathematical point of view, however, the two sets of functions are not necessarily the same forε>0{\displaystyle \varepsilon >0}(forε=0{\displaystyle \varepsilon =0}they are equal by definition). Indeed, proceeding systematically in the hierarchy, one finds that similarly tof(0){\displaystyle f^{(0)}}, eachf(n){\displaystyle f^{(n)}}also contains arbitrary functions ofr{\displaystyle \mathbf {r} }andt{\displaystyle t}whose relation to the physical hydrodynamic fields isa prioriunknown. One of the key simplifying assumptions of Chapman–Enskog theory is to assume that these otherwise arbitrary functions can be written in terms of the exact hydrodynamic fields and their spatial gradients. In other words, the space and time dependence off{\displaystyle f}enters only implicitly through the hydrodynamic fields. This statement is physically plausible because small Knudsen numbers correspond to the hydrodynamic regime, in which the state of the gas is determined solely by the hydrodynamic fields. In the case off(0){\displaystyle f^{(0)}}, the functionsn′(r,t){\displaystyle n'(\mathbf {r} ,t)},v0′(r,t){\displaystyle \mathbf {v} '_{0}(\mathbf {r} ,t)}, andT′(r,t){\displaystyle T'(\mathbf {r} ,t)}are assumed exactly equal to the physical hydrodynamic fields.
While these assumptions are physically plausible, there is the question of whether solutions which satisfy these properties actually exist. More precisely, one must show that solutions exist satisfying
∫∑n=1∞εnf(n)dv=0=∫∑n=1∞εnf(n)v2dv∫∑n=1∞εnf(n)vidv=0,i∈{x,y,z}.{\displaystyle {\begin{aligned}\int \sum _{n=1}^{\infty }\varepsilon ^{n}f^{(n)}\,d\mathbf {v} =0=\int \sum _{n=1}^{\infty }\varepsilon ^{n}f^{(n)}\mathbf {v} ^{2}\,d\mathbf {v} \\[1ex]\int \sum _{n=1}^{\infty }\varepsilon ^{n}f^{(n)}v_{i}\,d\mathbf {v} =0,\qquad i\in \{x,y,z\}.\end{aligned}}}
Moreover, even if such solutions exist, there remains the additional question of whether they span the complete set of normal solutions to the Boltzmann equation, i.e. do not represent an artificial restriction of the original expansion inε{\displaystyle \varepsilon }. One of the key technical achievements of Chapman–Enskog theory is to answer both of these questions in the positive.[6]Thus, at least at the formal level, there is no loss of generality in the Chapman–Enskog approach.
With these formal considerations established, one can proceed to calculatef(1){\displaystyle f^{(1)}}. The result is[1]
f(1)=[−1n(2kBTm)1/2A(v)⋅∇lnT−2nB(v):∇v0]f(0),{\displaystyle f^{(1)}=\left[-{\frac {1}{n}}\left({\frac {2k_{\text{B}}T}{m}}\right)^{1/2}\mathbf {A} (\mathbf {v} )\cdot \nabla \ln T-{\frac {2}{n}}\mathbb {B} (\mathbf {v} ):\nabla \mathbf {v} _{0}\right]f^{(0)},}
whereA(v){\displaystyle \mathbf {A} (\mathbf {v} )}is a vector andB(v){\displaystyle \mathbb {B} (\mathbf {v} )}atensor, each a solution of a linear inhomogeneousintegral equationthat can be solved explicitly by a polynomial expansion. Here, the colon denotes thedouble dot product,T:T′=∑i,jTijTji′{\textstyle \mathbb {T} :\mathbb {T'} =\sum _{i,j}T_{ij}T'_{ji}}for tensorsT{\displaystyle \mathbb {T} },T′{\displaystyle \mathbb {T'} }.
To first order in the Knudsen number, the heat fluxq=m2∫f(r,v,t)v2vdv{\textstyle \mathbf {q} ={\frac {m}{2}}\int f(\mathbf {r} ,\mathbf {v} ,t)\,v^{2}\mathbf {v} \,d\mathbf {v} }is found to obeyFourier's law of heat conduction,[7]
q=−λ∇T,{\displaystyle \mathbf {q} =-\lambda \nabla T,}
and the momentum-flux tensorσ=m∫(v−v0)(v−v0)Tf(r,v,t)dv{\textstyle \mathbf {\sigma } =m\int (\mathbf {v} -\mathbf {v} _{0})(\mathbf {v} -\mathbf {v} _{0})^{\mathsf {T}}f(\mathbf {r} ,\mathbf {v} ,t)\,d\mathbf {v} }is that of aNewtonian fluid,[7]
σ=pI−μ(∇v0+∇v0T)+23μ(∇⋅v0)I,{\displaystyle \mathbf {\sigma } =p\mathbb {I} -\mu \left(\nabla \mathbf {v_{0}} +\nabla \mathbf {v_{0}} ^{T}\right)+{\frac {2}{3}}\mu (\nabla \cdot \mathbf {v_{0}} )\mathbb {I} ,}
withI{\displaystyle \mathbb {I} }the identity tensor. Here,λ{\displaystyle \lambda }andμ{\displaystyle \mu }are the thermal conductivity and viscosity. They can be calculated explicitly in terms of molecular parameters by solving a linear integral equation; the table below summarizes the results for a few important molecular models (m{\displaystyle m}is the molecule mass andkB{\displaystyle k_{\text{B}}}is the Boltzmann constant).[8]
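The table itself does not survive in this text, but its rigid-sphere entry can be sketched. The code below assumes the standard first Chapman–Enskog approximation for rigid elastic spheres, μ = (5/(16σ²))(mkBT/π)^(1/2), together with rough literature values for argon's mass and effective diameter; both are assumptions of this sketch, not values taken from the article:

```python
import math

kB = 1.380649e-23            # Boltzmann constant, J/K
m = 39.948 * 1.66054e-27     # argon atomic mass, kg
sigma = 3.4e-10              # assumed effective hard-sphere diameter, m
T = 300.0                    # temperature, K

# First Chapman-Enskog approximation for rigid elastic spheres.
# Note that mu depends on T (as T**0.5) but not on the density n.
mu = (5.0 / (16.0 * sigma ** 2)) * math.sqrt(m * kB * T / math.pi)
print(mu)  # ~2.5e-5 Pa*s, the same order as the measured ~2.27e-5 Pa*s
```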
With these results, it is straightforward to obtain the Navier–Stokes equations. Taking velocity moments of the Boltzmann equation leads to theexactbalance equations for the hydrodynamic fieldsn(r,t){\displaystyle n(\mathbf {r} ,t)},v0(r,t){\displaystyle \mathbf {v} _{0}(\mathbf {r} ,t)}, andT(r,t){\displaystyle T(\mathbf {r} ,t)}:
∂n∂t+∇⋅(nv0)=0∂v0∂t+v0⋅∇v0−Fm+1n∇⋅σ=0∂T∂t+v0⋅∇T+23kBn(σ:∇v0+∇⋅q)=0.{\displaystyle {\begin{aligned}{\frac {\partial n}{\partial t}}+\nabla \cdot \left(n\mathbf {v} _{0}\right)&=0\\{\frac {\partial \mathbf {v} _{0}}{\partial t}}+\mathbf {v} _{0}\cdot \nabla \mathbf {v} _{0}-{\frac {\mathbf {F} }{m}}+{\frac {1}{n}}\nabla \cdot \mathbf {\sigma } &=0\\{\frac {\partial T}{\partial t}}+\mathbf {v} _{0}\cdot \nabla T+{\frac {2}{3k_{\text{B}}n}}\left(\mathbf {\sigma :} \nabla \mathbf {v} _{0}+\nabla \cdot \mathbf {q} \right)&=0.\end{aligned}}}
As in the previous section the colon denotes the doubledot product,T:T′=∑i,jTijTji′{\textstyle \mathbb {T} :\mathbb {T'} =\sum _{i,j}T_{ij}T'_{ji}}. Substituting the Chapman–Enskog expressions forq{\displaystyle \mathbf {q} }andσ{\displaystyle \sigma }, one arrives at the Navier–Stokes equations.
An important prediction of Chapman–Enskog theory is that viscosity,μ{\displaystyle \mu }, is independent of density (this can be seen for each molecular model in table 1, but is actually model-independent). This counterintuitive result traces back toJames Clerk Maxwell, who inferred it in 1860 on the basis of more elementary kinetic arguments.[11]It is well-verified experimentally for gases at ordinary densities.
On the other hand, the theory predicts thatμ{\displaystyle \mu }does depend on temperature. For rigid elastic spheres, the predicted scaling isμ∝T1/2{\displaystyle \mu \propto T^{1/2}}, while other models typically show greater variation with temperature. For instance, for molecules repelling each other with force∝r−ν{\displaystyle \propto r^{-\nu }}the predicted scaling isμ∝Ts{\displaystyle \mu \propto T^{s}}, wheres=1/2+2/(ν−1){\displaystyle s=1/2+2/(\nu -1)}. Takings=0.668{\displaystyle s=0.668}, corresponding toν≈12.9{\displaystyle \nu \approx 12.9}, shows reasonable agreement with the experimentally observed scaling for helium. For more complex gases the agreement is not as good, most likely due to the neglect of attractive forces.[13]Indeed, theLennard-Jones model, which does incorporate attractions, can be brought into closer agreement with experiment (albeit at the cost of a more opaqueT{\displaystyle T}dependence; see the Lennard-Jones entry in table 1).[14]To improve on the agreement obtained with the Lennard-Jones model, the more flexibleMie potentialhas been used;[15]its added flexibility allows for accurate prediction of the transport properties of mixtures of a variety of spherically symmetric molecules.
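The exponent arithmetic quoted above is easy to reproduce (a sketch; the helper name is ours):

```python
# Viscosity exponent s in mu ∝ T**s for molecules repelling with force
# proportional to r**(-nu), per the relation s = 1/2 + 2/(nu - 1) above.
def viscosity_exponent(nu: float) -> float:
    return 0.5 + 2.0 / (nu - 1.0)

print(viscosity_exponent(5.0))   # Maxwell molecules: s = 1.0
print(viscosity_exponent(12.9))  # s ≈ 0.668, the helium-like case above
print(viscosity_exponent(1e9))   # hard-sphere limit: s -> 1/2
```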
Chapman–Enskog theory also predicts a simple relation between thermal conductivity,λ{\displaystyle \lambda }, and viscosity,μ{\displaystyle \mu }, in the formλ=fμcv{\displaystyle \lambda =f\mu c_{v}}, wherecv{\displaystyle c_{v}}is thespecific heatat constant volume andf{\displaystyle f}is a purely numerical factor. For spherically symmetric molecules, its value is predicted to be very close to2.5{\displaystyle 2.5}in a slightly model-dependent way. For instance, rigid elastic spheres havef≈2.522{\displaystyle f\approx 2.522}, and molecules with repulsive force∝r−13{\displaystyle \propto r^{-13}}havef≈2.511{\displaystyle f\approx 2.511}(the latter deviation is ignored in table 1). The special case ofMaxwell molecules(repulsive force∝r−5{\displaystyle \propto r^{-5}}) hasf=2.5{\displaystyle f=2.5}exactly.[16]Sinceλ{\displaystyle \lambda },μ{\displaystyle \mu }, andcv{\displaystyle c_{v}}can be measured directly in experiments, a simple experimental test of Chapman–Enskog theory is to measuref{\displaystyle f}for the spherically symmetricnoble gases. Table 2 shows that there is reasonable agreement between theory and experiment.[12]
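The relation λ = fμcv can be tested directly against measured values. The numbers below are approximate literature values for argon near room temperature; they are assumptions of this sketch, not figures taken from the article's table 2:

```python
# Check f = lambda / (mu * c_v) ≈ 2.5 for a monatomic noble gas, using
# approximate measured values for argon near 300 K (assumed, illustrative).
lam = 0.0177   # thermal conductivity, W/(m*K)
mu = 2.27e-5   # viscosity, Pa*s
cv = 312.0     # specific heat at constant volume, J/(kg*K)

f = lam / (mu * cv)
print(f)  # ≈ 2.5, as predicted for spherically symmetric molecules
```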
The basic principles of Chapman–Enskog theory can be extended to more diverse physical models, including gas mixtures and molecules with internal degrees of freedom. In the high-density regime, the theory can be adapted to account for collisional transport of momentum and energy, i.e. transport over a molecular diameterduringa collision, rather than over a mean free path (in betweencollisions). Including this mechanism predicts a density dependence of the viscosity at high enough density, which is also observed experimentally. Obtaining the corrections used to account for transport during a collision for soft molecules (i.e.Lennard-JonesorMiemolecules) is in general non-trivial, but success has been achieved at applyingBarker-Henderson perturbation theoryto accurately describe these effects up to thecritical densityof various fluid mixtures.[15]
One can also carry out the theory to higher order in the Knudsen number. In particular, the second-order contributionf(2){\displaystyle f^{(2)}}has been calculated by Burnett.[17]In general circumstances, however, these higher-order corrections may not give reliable improvements to the first-order theory, due to the fact that the Chapman–Enskog expansion does not always converge.[18](On the other hand, the expansion is thought to be at least asymptotic to solutions of the Boltzmann equation, in which case truncating at low order still gives accurate results.)[19]Even if the higher order corrections do afford improvement in a given system, the interpretation of the corresponding hydrodynamical equations is still debated.[20]
The extension of Chapman–Enskog theory for multicomponent mixtures to elevated densities, in particular densities at which thecovolumeof the mixture is non-negligible, was carried out in a series of works byE. G. D. Cohenand others,[21][22][23][24][25]and came to be called Revised Enskog theory (RET). The successful derivation of RET followed several previous attempts at the same, which gave results that were shown to be inconsistent withirreversible thermodynamics. The starting point for developing the RET is a modified form of the Boltzmann equation for the velocity distribution functionfi{\displaystyle f_{i}}of each speciesi{\displaystyle i}in ans{\displaystyle s}-component mixture,
(∂∂t+vi⋅∂∂r+Fimi⋅∂∂vi)fi=∑jSij(fi,fj){\displaystyle \left({\frac {\partial }{\partial t}}+\mathbf {v} _{i}\cdot {\frac {\partial }{\partial \mathbf {r} }}+{\frac {\mathbf {F} _{i}}{m_{i}}}\cdot {\frac {\partial }{\partial \mathbf {v} _{i}}}\right)f_{i}=\sum _{j}S_{ij}(f_{i},f_{j})}
wherevi(r,t){\displaystyle \mathbf {v} _{i}(\mathbf {r} ,t)}is the velocity of particles of speciesi{\displaystyle i}, at positionr{\displaystyle \mathbf {r} }and timet{\displaystyle t},mi{\displaystyle m_{i}}is the particle mass,Fi{\displaystyle \mathbf {F} _{i}}is the external force, and
Sij(fi,fj)=∭[gij(σijk)fi′(r)fj′(r+σijk)−gij(−σijk)fi(r)fj(r−σijk)]dτ{\displaystyle S_{ij}(f_{i},f_{j})=\iiint \left[g_{ij}(\sigma _{ij}\mathbf {k} )\,f_{i}'(\mathbf {r} )\,f_{j}'(\mathbf {r} +\sigma _{ij}\mathbf {k} )-g_{ij}(-\sigma _{ij}\mathbf {k} )\,f_{i}(\mathbf {r} )\,f_{j}(\mathbf {r} -\sigma _{ij}\mathbf {k} )\right]d\tau }
The difference in this equation from classical Chapman–Enskog theory lies in the streaming operatorSij{\displaystyle S_{ij}}, within which the velocity distributions of the two particles are evaluated at different points in space, separated byσijk{\displaystyle \sigma _{ij}\mathbf {k} }, wherek{\displaystyle \mathbf {k} }is the unit vector along the line connecting the centres of the two particles. Another significant difference comes from the introduction of the factorsgij{\displaystyle g_{ij}}, which represent the enhanced probability of collisions due to excluded volume. The classical Chapman–Enskog equations are recovered by settingσij=0{\displaystyle \sigma _{ij}=0}andgij(σijk)=1{\displaystyle g_{ij}(\sigma _{ij}\mathbf {k} )=1}.
A point of significance for the success of the RET is the choice of the factorsgij{\displaystyle g_{ij}}, which are interpreted as the pair distribution function evaluated at the contact distanceσij{\displaystyle \sigma _{ij}}. An important point to note is that, in order to obtain results in agreement withirreversible thermodynamics, thegij{\displaystyle g_{ij}}must be treated as functionals of the density fields, rather than as functions of the local density.
One of the first results obtained from RET that deviates from the results from the classical Chapman–Enskog theory is theEquation of State. While from classical Chapman–Enskog theory the ideal gas law is recovered, RET developed for rigid elastic spheres yields the pressure equation
pnkT=1+2πn3∑i∑jxixjσij3gij,{\displaystyle {\frac {p}{nkT}}=1+{\frac {2\pi n}{3}}\sum _{i}\sum _{j}x_{i}x_{j}\sigma _{ij}^{3}g_{ij},}
which is consistent with theCarnahan-Starling Equation of State, and reduces to the ideal gas law in the limit of infinite dilution (i.e. whenn∑i,jxixjσij3≪1{\textstyle n\sum _{i,j}x_{i}x_{j}\sigma _{ij}^{3}\ll 1}).
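The consistency can be verified directly in the one-component case: with the Carnahan–Starling contact value g(σ) = (1 − η/2)/(1 − η)³ and packing fraction η = πnσ³/6, the RET pressure above, 1 + (2πn/3)σ³g = 1 + 4ηg, reduces algebraically to the Carnahan–Starling compressibility factor (1 + η + η² − η³)/(1 − η)³. A sketch in reduced units with σ = 1 (an assumption of this sketch):

```python
import math

def z_ret(n: float) -> float:
    """RET pressure p/(n k T) for one component with sigma = 1, using the
    Carnahan-Starling contact value of the pair distribution function."""
    eta = math.pi * n / 6.0                          # packing fraction
    g_contact = (1.0 - eta / 2.0) / (1.0 - eta) ** 3
    return 1.0 + (2.0 * math.pi * n / 3.0) * g_contact

def z_cs(n: float) -> float:
    """Carnahan-Starling compressibility factor."""
    eta = math.pi * n / 6.0
    return (1.0 + eta + eta ** 2 - eta ** 3) / (1.0 - eta) ** 3

for n in (0.001, 0.2, 0.6, 0.9):
    print(n, z_ret(n), z_cs(n))  # the two columns agree; both -> 1 as n -> 0
```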
For thetransport coefficients:viscosity,thermal conductivity,diffusionandthermal diffusion, RET provides expressions that exactly reduce to those obtained from classical Chapman–Enskog theory in the limit of infinite dilution. However, RET predicts a density dependence of thethermal conductivity, which can be expressed as
λ=(1+nαλ)λ0+n2T1/2λσ{\displaystyle \lambda =(1+n\alpha _{\lambda })\lambda _{0}+n^{2}T^{1/2}\lambda _{\sigma }}
whereαλ{\displaystyle \alpha _{\lambda }}andλσ{\displaystyle \lambda _{\sigma }}are relatively weak functions of the composition, temperature and density, andλ0{\displaystyle \lambda _{0}}is the thermal conductivity obtained from classical Chapman–Enskog theory.
Similarly, the expression obtained for viscosity can be written as
μ=(1+nTαμ)μ0+n2T1/2μσ{\displaystyle \mu =(1+nT\alpha _{\mu })\mu _{0}+n^{2}T^{1/2}\mu _{\sigma }}
withαμ{\displaystyle \alpha _{\mu }}andμσ{\displaystyle \mu _{\sigma }}weak functions of composition, temperature and density, andμ0{\displaystyle \mu _{0}}the value obtained from classical Chapman–Enskog theory.
Fordiffusion coefficientsandthermal diffusion coefficientsthe picture is somewhat more complex. However, one of the major advantages of RET over classical Chapman–Enskog theory is that the dependence of diffusion coefficients on the thermodynamic factors, i.e. the derivatives of thechemical potentialswith respect to composition, is predicted. In addition, RET does not predict a strict dependence of
D∼1n,DT∼1n{\displaystyle D\sim {\frac {1}{n}},\quad D_{T}\sim {\frac {1}{n}}}
for all densities, but rather predicts that the coefficients will decrease more slowly with density at high densities, which is in good agreement with experiments. These modified density dependencies also lead RET to predict a density dependence of theSoret coefficient,
ST=DTD,(∂ST∂n)T≠0,{\displaystyle S_{T}={\frac {D_{T}}{D}},\quad \left({\frac {\partial S_{T}}{\partial n}}\right)_{T}\neq 0,}
while classical Chapman–Enskog theory predicts that the Soret coefficient, like the viscosity and thermal conductivity, is independent of density.
While Revised Enskog theory provides many advantages over classical Chapman–Enskog theory, this comes at the price of being significantly more difficult to apply in practice. While classical Chapman–Enskog theory can be applied to arbitrarily complex spherical potentials, given sufficiently accurate and fast integration routines to evaluate the requiredcollision integrals, Revised Enskog Theory, in addition to this, requires knowledge of the contact value of the pair distribution function.
For mixtures ofhard spheres, this value can be computed without large difficulties, but for more complex intermolecular potentials it is generally non-trivial to obtain. However, some success has been achieved at estimating the contact value of the pair distribution function forMie fluids(which consists of particles interacting through a generalisedLennard-Jones potential) and using these estimates to predict the transport properties of dense gas mixtures and supercritical fluids.[15]
Applying RET to particles interacting through realistic potentials also exposes one to the issue of determining a reasonable "contact diameter" for the soft particles. While this diameter is unambiguously defined for hard spheres, there is still no generally agreed-upon choice for soft particles.
|
https://en.wikipedia.org/wiki/Chapman%E2%80%93Enskog_theory#Mathematical_Formulation
|
Arrow's impossibility theoremis a key result insocial choice theoryshowing that noranking-baseddecision rulefor a group can satisfy the requirements ofrational choice.[1]Specifically,Arrowshowed no such rule can satisfyindependence of irrelevant alternatives, the principle that a choice between two alternativesAandBshould not depend on the quality of some third, unrelated optionC.[2][3][4]
The result is often cited in discussions ofvoting rules,[5]where itimpliesnoranked votingrule can eliminate thespoiler effect,[6][7][8]though this was known before Arrow (dating back to theMarquis de Condorcet'svoting paradox, showing the impossibility ofmajority rule). Arrow's theorem generalizes Condorcet's findings, showing the same problems extend to everygroup decision procedurebased onrelative comparisons, including non-majoritarian rules likecollective leadershiporconsensus decision-making.[1]
While the impossibility theorem shows all ranked voting rules must have spoilers, the frequency of spoilers differs dramatically by rule.Plurality-rulemethods likechoose-oneandranked-choice (instant-runoff) votingare highly sensitive to spoilers,[9][10]creating them even in some situations (likecenter squeezes) where they are notmathematically necessary.[11][12]By contrast,majority-rule (Condorcet) methodsofranked votinguniquelyminimize the number of spoiled elections[12]by restricting them tovoting cycles,[11]which are rare in ideologically-driven elections.[13][14]Under somemodelsof voter preferences (like the left-right spectrum assumed in themedian voter theorem), spoilers disappear entirely for these methods.[15][16]
Rated voting rules, where voters assign a separate grade to each candidate, are not affected by Arrow's theorem.[17][18][19]Arrow initially asserted the information provided by these systems was meaningless and therefore could not be used to prevent paradoxes, leading him to overlook them.[20]However, Arrow would later describe this as a mistake,[21][22]stating rules based oncardinal utilities(such asscoreandapproval voting) are not subject to his theorem.[23][24]
WhenKenneth Arrowproved his theorem in 1950, it inaugurated the modern field ofsocial choice theory, a branch ofwelfare economicsstudying mechanisms to aggregatepreferencesandbeliefsacross a society.[25]Such a mechanism of study can be amarket,voting system,constitution, or even amoralorethicalframework.[1]
In the context of Arrow's theorem, citizens are assumed to haveordinal preferences, i.e.orderings of candidates. IfAandBare different candidates or alternatives, thenA≻B{\displaystyle A\succ B}meansAis preferred toB. Individual preferences (or ballots) are required to satisfy intuitive properties of orderings, e.g. they must betransitive—ifA⪰B{\displaystyle A\succeq B}andB⪰C{\displaystyle B\succeq C}, thenA⪰C{\displaystyle A\succeq C}. The social choice function is then amathematical functionthat maps the individual orderings to a new ordering that represents the preferences of all of society.
Arrow's theorem assumes as background that anynon-degeneratesocial choice rule will satisfy unrestricted domain (it accepts any profile of transitive individual rankings) and non-dictatorship (it does not simply reproduce a single fixed voter's ranking).[26]
Arrow's original statement of the theorem includednon-negative responsivenessas a condition, i.e., thatincreasingthe rank of an outcome should not make themlose—in other words, that a voting rule shouldn't penalize a candidate for being more popular.[2]However, this assumption is not needed or used in his proof (except to derive the weaker condition of Pareto efficiency), and Arrow later corrected his statement of the theorem to remove the inclusion of this condition.[3][29]
A commonly-considered axiom ofrational choiceisindependence of irrelevant alternatives(IIA), which says that when deciding betweenAandB, one's opinion about a third optionCshould not affect their decision.[2]
IIA is sometimes illustrated with a short joke by philosopherSidney Morgenbesser:[30]Morgenbesser, ordering dessert, is told by a waitress that he can choose between blueberry or apple pie. He orders apple. Soon the waitress comes back and explains cherry pie is also an option. Morgenbesser replies "In that case, I'll have blueberry."
Arrow's theorem shows that if a society wishes to make decisions while always avoiding such self-contradictions, it cannot use ranked information alone.[30]
Condorcet's exampleis already enough to see the impossibility of a fairranked voting system, given stronger conditions for fairness than Arrow's theorem assumes.[31]Suppose we have three candidates (A{\displaystyle A},B{\displaystyle B}, andC{\displaystyle C}) and three voters whose preferences are as follows: voter 1 ranksA{\displaystyle A}overB{\displaystyle B}overC{\displaystyle C}; voter 2 ranksB{\displaystyle B}overC{\displaystyle C}overA{\displaystyle A}; voter 3 ranksC{\displaystyle C}overA{\displaystyle A}overB{\displaystyle B}.
IfC{\displaystyle C}is chosen as the winner, it can be argued any fair voting system would sayB{\displaystyle B}should win instead, since two voters (1 and 2) preferB{\displaystyle B}toC{\displaystyle C}and only one voter (3) prefersC{\displaystyle C}toB{\displaystyle B}. However, by the same argumentA{\displaystyle A}is preferred toB{\displaystyle B}, andC{\displaystyle C}is preferred toA{\displaystyle A}, by a margin of two to one on each occasion. Thus, even though each individual voter has consistent preferences, the preferences of society are contradictory:A{\displaystyle A}is preferred overB{\displaystyle B}which is preferred overC{\displaystyle C}which is preferred overA{\displaystyle A}.
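The majority cycle can be tallied mechanically. A sketch using the cyclic three-voter profile consistent with the majorities described above (voter 1: A ≻ B ≻ C, voter 2: B ≻ C ≻ A, voter 3: C ≻ A ≻ B):

```python
# Pairwise-majority tally of Condorcet's cyclic example. Each ballot lists
# candidates from most to least preferred.
ballots = [("A", "B", "C"),  # voter 1
           ("B", "C", "A"),  # voter 2
           ("C", "A", "B")]  # voter 3

def majority_prefers(x: str, y: str) -> bool:
    """True if a strict majority of voters rank x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) - wins

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"{x} beats {y}:", majority_prefers(x, y))
# All three print True: A > B > C > A, so no transitive social ranking exists.
```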
Because of this example, some authors creditCondorcetwith having given an intuitive argument that presents the core of Arrow's theorem.[31]However, Arrow's theorem is substantially more general; it applies to methods of making decisions other than one-person-one-vote elections, such asmarketsorweighted voting, based onranked ballots.
LetA{\displaystyle A}be a set ofalternatives. A voter'spreferencesoverA{\displaystyle A}are acompleteandtransitivebinary relationonA{\displaystyle A}(sometimes called atotal preorder), that is, a subsetR{\displaystyle R}ofA×A{\displaystyle A\times A}satisfying completeness (for alla,b∈A{\displaystyle \mathbf {a} ,\mathbf {b} \in A}, either(a,b)∈R{\displaystyle (\mathbf {a} ,\mathbf {b} )\in R}or(b,a)∈R{\displaystyle (\mathbf {b} ,\mathbf {a} )\in R}) and transitivity (if(a,b)∈R{\displaystyle (\mathbf {a} ,\mathbf {b} )\in R}and(b,c)∈R{\displaystyle (\mathbf {b} ,\mathbf {c} )\in R}, then(a,c)∈R{\displaystyle (\mathbf {a} ,\mathbf {c} )\in R}).
The element(a,b){\displaystyle (\mathbf {a} ,\mathbf {b} )}being inR{\displaystyle R}is interpreted to mean that alternativea{\displaystyle \mathbf {a} }is preferred to alternativeb{\displaystyle \mathbf {b} }. This situation is often denoteda≻b{\displaystyle \mathbf {a} \succ \mathbf {b} }oraRb{\displaystyle \mathbf {a} R\mathbf {b} }. Denote the set of all preferences onA{\displaystyle A}byΠ(A){\displaystyle \Pi (A)}. LetN{\displaystyle N}be a positive integer. Anordinal (ranked)social welfare functionis a function[2]F:Π(A)N→Π(A){\displaystyle F\colon \Pi (A)^{N}\to \Pi (A)}
which aggregates voters' preferences into a single preference onA{\displaystyle A}. AnN{\displaystyle N}-tuple(R1,…,RN)∈Π(A)N{\displaystyle (R_{1},\ldots ,R_{N})\in \Pi (A)^{N}}of voters' preferences is called apreference profile.
Arrow's impossibility theorem: If there are at least three alternatives, then there is no social welfare function satisfying all three of the following conditions:[32]Pareto efficiency (if every voter strictly prefersa{\displaystyle \mathbf {a} }tob{\displaystyle \mathbf {b} }, then society strictly prefersa{\displaystyle \mathbf {a} }tob{\displaystyle \mathbf {b} }); independence of irrelevant alternatives (the social preference betweena{\displaystyle \mathbf {a} }andb{\displaystyle \mathbf {b} }depends only on the individual preferences betweena{\displaystyle \mathbf {a} }andb{\displaystyle \mathbf {b} }); and non-dictatorship (there is no single voter whose strict preferences are always adopted by society).
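A concrete illustration: the Borda count (assign each candidate one point per candidate ranked below it) satisfies Pareto efficiency and non-dictatorship, so by the theorem it must violate IIA. In the sketch below (an illustrative profile, not from the article), every voter's A-versus-B preference is identical in both profiles, yet moving only C flips the social ranking of A and B:

```python
# Borda count as a social welfare function: each candidate earns one point
# per candidate ranked below it on each ballot; sort by total points.
def borda_ranking(profile):
    scores = {}
    for ballot in profile:
        for points, c in enumerate(reversed(ballot)):
            scores[c] = scores.get(c, 0) + points
    return sorted(scores, key=scores.get, reverse=True)

# Five voters; only C's position differs (on the last two ballots).
p1 = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2
p2 = [("A", "B", "C")] * 3 + [("B", "A", "C")] * 2

print(borda_ranking(p1))  # ['B', 'A', 'C']: society ranks B above A
print(borda_ranking(p2))  # ['A', 'B', 'C']: now A above B -- IIA violated
```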
Arrow's proof used the concept ofdecisive coalitions.[3]
Definition: a coalitionG{\displaystyle G}isdecisiveover an ordered pair(x,y){\displaystyle (x,y)}if, whenever every voter inG{\displaystyle G}prefersx{\displaystyle x}overy{\displaystyle y}, society prefersx{\displaystyle x}overy{\displaystyle y}, regardless of the votes outsideG{\displaystyle G}.
Our goal is to prove that thedecisive coalitioncontains only one voter, who controls the outcome—in other words, adictator.
The following proof is a simplification taken fromAmartya Sen[33]andAriel Rubinstein.[34]The simplified proof uses an additional concept: a coalitionG{\displaystyle G}isweakly decisiveover(x,y){\displaystyle (x,y)}if, whenever every voter insideG{\displaystyle G}prefersx{\displaystyle x}overy{\displaystyle y}and every voter outsideG{\displaystyle G}prefersy{\displaystyle y}overx{\displaystyle x}, society prefersx{\displaystyle x}overy{\displaystyle y}.
Thenceforth assume that the social choice system satisfies unrestricted domain, Pareto efficiency, and IIA. Also assume that there are at least 3 distinct outcomes.
Field expansion lemma—if a coalitionG{\displaystyle G}is weakly decisive over(x,y){\displaystyle (x,y)}for somex≠y{\displaystyle x\neq y}, then it is decisive.
Letz{\displaystyle z}be an outcome distinct fromx,y{\displaystyle x,y}.
Claim:G{\displaystyle G}is decisive over(x,z){\displaystyle (x,z)}.
Let everyone inG{\displaystyle G}votex{\displaystyle x}overz{\displaystyle z}. By IIA, changing the votes ony{\displaystyle y}does not matter forx,z{\displaystyle x,z}. So change the votes such thatx≻iy≻iz{\displaystyle x\succ _{i}y\succ _{i}z}inG{\displaystyle G}andy≻ix{\displaystyle y\succ _{i}x}andy≻iz{\displaystyle y\succ _{i}z}outside ofG{\displaystyle G}.
By Pareto,y≻z{\displaystyle y\succ z}. By coalition weak-decisiveness over(x,y){\displaystyle (x,y)},x≻y{\displaystyle x\succ y}. Thusx≻z{\displaystyle x\succ z}.◻{\displaystyle \square }
Similarly,G{\displaystyle G}is decisive over(z,y){\displaystyle (z,y)}.
By iterating the above two claims (note that decisiveness implies weak-decisiveness), we find thatG{\displaystyle G}is decisive over all ordered pairs in{x,y,z}{\displaystyle \{x,y,z\}}. Then iterating that, we find thatG{\displaystyle G}is decisive over all ordered pairs inX{\displaystyle X}.
Group contraction lemma—If a coalition is decisive, and has size≥2{\displaystyle \geq 2}, then it has a proper subset that is also decisive.
LetG{\displaystyle G}be a coalition with size≥2{\displaystyle \geq 2}. Partition the coalition into nonempty subsetsG1,G2{\displaystyle G_{1},G_{2}}.
Fix distinctx,y,z{\displaystyle x,y,z}. Design the following voting pattern (notice that it is the cyclic voting pattern which causes the Condorcet paradox):
voters inG1:x≻iy≻izvoters inG2:z≻ix≻iyvoters outsideG:y≻iz≻ix{\displaystyle {\begin{aligned}{\text{voters in }}G_{1}&:x\succ _{i}y\succ _{i}z\\{\text{voters in }}G_{2}&:z\succ _{i}x\succ _{i}y\\{\text{voters outside }}G&:y\succ _{i}z\succ _{i}x\end{aligned}}}
(Items other thanx,y,z{\displaystyle x,y,z}are not relevant.)
SinceG{\displaystyle G}is decisive, we havex≻y{\displaystyle x\succ y}. So at least one is true:x≻z{\displaystyle x\succ z}orz≻y{\displaystyle z\succ y}.
Ifx≻z{\displaystyle x\succ z}, thenG1{\displaystyle G_{1}}is weakly decisive over(x,z){\displaystyle (x,z)}. Ifz≻y{\displaystyle z\succ y}, thenG2{\displaystyle G_{2}}is weakly decisive over(z,y){\displaystyle (z,y)}. Now apply the field expansion lemma.
By Pareto, the entire set of voters is decisive. Thus by the group contraction lemma, there is a size-one decisive coalition—a dictator.
Proofs using the concept of thepivotal voteroriginated from Salvador Barberá in 1980.[35]The proof given here is a simplified version based on two proofs published inEconomic Theory.[32][36]
Assume there arenvoters. We assign all of these voters an arbitrary ID number, ranging from1throughn, which we can use to keep track of each voter's identity as we consider what happens when they change their votes.Without loss of generality, we can say there are three candidates whom we callA,B, andC. (Because of IIA, including more than 3 candidates does not affect the proof.)
We will prove that any social choice rule respecting unanimity and independence of irrelevant alternatives (IIA) is a dictatorship. The proof is in three parts:
Consider the situation where everyone prefersAtoB, and everyone also prefersCtoB. By unanimity, society must also prefer bothAandCtoB. Call this situationprofile 0.
On the other hand, if everyone preferredBto everything else, then society would have to preferBto everything else by unanimity. Now arrange all the voters in some arbitrary but fixed order, and for eachiletprofile ibe the same asprofile 0, but moveBto the top of the ballots for voters 1 throughi. Soprofile 1hasBat the top of the ballot for voter 1, but not for any of the others.Profile 2hasBat the top for voters 1 and 2, but no others, and so on.
SinceBeventually moves to the top of the societal preference as the profile number increases, there must be some profile, numberk, for whichBfirstmovesaboveAin the societal rank. We call the voterkwhose ballot change causes this to happen thepivotal voter forBoverA. Note that the pivotal voter forBoverAis not,a priori, the same as the pivotal voter forAoverB. In part three of the proof we will show that these do turn out to be the same.
Also note that by IIA the same argument applies ifprofile 0is any profile in whichAis ranked aboveBby every voter, and the pivotal voter forBoverAwill still be voterk. We will use this observation below.
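The scan in part one can be simulated for any concrete social ranking rule. The sketch below uses the Borda count purely as a stand-in (the rule choice is our assumption; Borda is not dictatorial, and the proof itself applies to an arbitrary rule satisfying the axioms): B starts at the bottom of every ballot, is moved to the top voter by voter, and the first voter whose change flips the social B-versus-A ranking is the pivotal voter.

```python
# Locate the pivotal voter for B over A under an example rule (Borda count).
N = 5

def borda_scores(profile):
    """Total Borda points per candidate over a profile of ranked ballots."""
    scores = {}
    for ballot in profile:
        for points, c in enumerate(reversed(ballot)):
            scores[c] = scores.get(c, 0) + points
    return scores

def profile_i(i):
    """Voters 1..i have moved B to the top; the rest keep B at the bottom."""
    return [("B", "A", "C")] * i + [("A", "C", "B")] * (N - i)

for i in range(N + 1):
    s = borda_scores(profile_i(i))
    print(i, s["B"] > s["A"])  # the first True (here at i = 4) marks voter k
```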
In this part of the argument we refer to voterk, the pivotal voter forBoverA, as thepivotal voterfor simplicity. We will show that the pivotal voter dictates society's decision forBoverC. That is, we show that no matter how the rest of society votes, ifpivotal voterranksBoverC, then that is the societal outcome. Note again that the dictator forBoverCis not a priori the same as that forCoverB. In part three of the proof we will see that these turn out to be the same too.
In the following, we call voters 1 throughk − 1,segment one, and votersk + 1throughN,segment two. To begin, suppose that the ballots are as follows:
Then by the argument in part one (and the last observation in that part), the societal outcome must rankAaboveB. This is because, except for a repositioning ofC, this profile is the same asprofile k − 1from part one. Furthermore, by unanimity the societal outcome must rankBaboveC. Therefore, we know the outcome in this case completely.
Now suppose that pivotal voter movesBaboveA, but keepsCin the same position and imagine that any number (even all!) of the other voters change their ballots to moveBbelowC, without changing the position ofA. Then aside from a repositioning ofCthis is the same asprofile kfrom part one and hence the societal outcome ranksBaboveA. Furthermore, by IIA the societal outcome must rankAaboveC, as in the previous case. In particular, the societal outcome ranksBaboveC, even though Pivotal Voter may have been theonlyvoter to rankBaboveC.ByIIA, this conclusion holds independently of howAis positioned on the ballots, so pivotal voter is a dictator forBoverC.
In this part of the argument we refer back to the original ordering of voters, and compare the positions of the different pivotal voters (identified by applying parts one and two to the other pairs of candidates). First, the pivotal voter forBoverCmust appear earlier (or at the same position) in the line than the dictator forBoverC: As we consider the argument of part one applied toBandC, successively movingBto the top of voters' ballots, the pivot point where society ranksBaboveCmust come at or before we reach the dictator forBoverC. Likewise, reversing the roles ofBandC, the pivotal voter forCoverBmust be at or later in line than the dictator forBoverC. In short, ifkX/Ydenotes the position of the pivotal voter forXoverY(for any two candidatesXandY), then we have shownkB/C≤kB/A≤kC/B{\displaystyle k_{B/C}\leq k_{B/A}\leq k_{C/B}}.
Now repeating the entire argument above withBandCswitched, we also havekC/B≤kC/A≤kB/C{\displaystyle k_{C/B}\leq k_{C/A}\leq k_{B/C}}.
Therefore, we havekB/C=kB/A=kC/B=kC/A{\displaystyle k_{B/C}=k_{B/A}=k_{C/B}=k_{C/A}},
and the same argument for other pairs shows that all the pivotal voters (and hence all the dictators) occur at the same position in the list of voters. This voter is the dictator for the whole election.
Arrow's impossibility theorem still holds if Pareto efficiency is weakened to non-imposition, the condition that for any two alternatives there is some profile of ballots under which society prefers the first to the second.[4]
Arrow's theorem establishes that no ranked voting rule canalwayssatisfy independence of irrelevant alternatives, but it says nothing about the frequency of spoilers. This led Arrow to remark that "Most systems are not going to work badly all of the time. All I proved is that all can work badly at times."[37][38]
Attempts at dealing with the effects of Arrow's theorem take one of two approaches: either accepting his rule and searching for the least spoiler-prone methods, or dropping one or more of his assumptions, such as by focusing onrated votingrules.[30]
The first set of methods studied by economists are themajority-rule, orCondorcet, methods. These rules limit spoilers to situations where majority rule is self-contradictory, calledCondorcet cycles, and as a result uniquely minimize the possibility of a spoiler effect among ranked rules. (Indeed, many different social welfare functions can meet Arrow's conditions under such restrictions of the domain. It has been proven, however, that under any such restriction, if there exists any social welfare function that adheres to Arrow's criteria, then a Condorcet method will adhere to Arrow's criteria.[12]) Condorcet believed voting rules should satisfy both independence of irrelevant alternatives and themajority rule principle, i.e. if most voters rankAliceahead ofBob,Aliceshould defeatBobin the election.[31]
Unfortunately, as Condorcet proved, this rule can be intransitive on some preference profiles.[39]Thus, Condorcet proved a weaker form of Arrow's impossibility theorem long before Arrow, under the stronger assumption that a voting system in the two-candidate case will agree with a simple majority vote.[31]
Unlike pluralitarian rules such asranked-choice runoff (RCV)orfirst-preference plurality,[9]Condorcet methodsavoid the spoiler effect in non-cyclic elections, where candidates can be chosen by majority rule. Political scientists have found such cycles to be fairly rare, suggesting they may be of limited practical concern.[14]Spatial voting modelsalso suggest such paradoxes are likely to be infrequent[40][13]or even non-existent.[15]
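To make the pairwise-majority idea concrete, here is a minimal Python sketch (an illustration, not part of the original text) that computes head-to-head majorities over ranked ballots and returns the Condorcet winner when one exists, or `None` when a cycle or tie prevents one:

```python
def condorcet_winner(ballots):
    """Return the Condorcet winner of ranked ballots, or None if a cycle
    (or tie) prevents one. Each ballot lists candidates from most to
    least preferred."""
    candidates = set(ballots[0])

    def beats(a, b):
        # a beats b if a strict majority of ballots rank a above b
        wins = sum(1 for r in ballots if r.index(a) < r.index(b))
        return wins > len(ballots) / 2

    for c in candidates:
        if all(beats(c, other) for other in candidates - {c}):
            return c
    return None

# A majority prefers A to every rival, so A is the Condorcet winner.
assert condorcet_winner([["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]) == "A"
# Condorcet's paradox: A>B, B>C, C>A -- no winner exists.
assert condorcet_winner([["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]) is None
```

The second profile is exactly the self-contradictory majority ("Condorcet cycle") discussed above.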
Soon after Arrow published his theorem, Duncan Black showed his own remarkable result, the median voter theorem. The theorem proves that if voters and candidates are arranged on a left-right spectrum, Arrow's conditions are all fully compatible, and all will be met by any rule satisfying Condorcet's majority-rule principle.[15][16]
More formally, Black's theorem assumes preferences are single-peaked: a voter's happiness with a candidate goes up and then down as the candidate moves along some spectrum. For example, in a group of friends choosing a volume setting for music, each friend would likely have their own ideal volume; as the volume gets progressively too loud or too quiet, they would be increasingly dissatisfied. If the domain is restricted to profiles where every individual has a single-peaked preference with respect to the linear ordering, then social preferences are acyclic. In this situation, Condorcet methods satisfy a wide variety of highly desirable properties, including being fully spoilerproof.[15][16][12]
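The median voter theorem can be checked numerically. In this sketch (an illustration under the stated assumptions), voters have ideal points on a line and single-peaked preferences modeled as "prefer the closer position"; the position at the median voter's peak then defeats every rival head-to-head:

```python
import statistics

def pairwise_winner(voters, a, b):
    """Majority vote between positions a and b when each voter's
    preference is single-peaked: they prefer the closer candidate."""
    votes_a = sum(1 for v in voters if abs(v - a) < abs(v - b))
    votes_b = sum(1 for v in voters if abs(v - b) < abs(v - a))
    return a if votes_a > votes_b else b

voters = [0.1, 0.3, 0.5, 0.8, 0.9]    # ideal points on a left-right axis
median = statistics.median(voters)    # 0.5
candidates = [0.2, 0.5, 0.7, 1.0]
# The candidate at the median voter's peak beats every rival pairwise,
# so it is the Condorcet winner of this profile.
assert all(pairwise_winner(voters, median, c) == median
           for c in candidates if c != median)
```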
The rule does not fully generalize from the political spectrum to the political compass, a result related to the McKelvey–Schofield chaos theorem.[15][41] However, a well-defined Condorcet winner does exist if the distribution of voters is rotationally symmetric or otherwise has a uniquely-defined median.[42][43] In most realistic situations, where voters' opinions follow a roughly normal distribution or can be accurately summarized by one or two dimensions, Condorcet cycles are rare (though not unheard of).[40][11]
The Campbell–Kelly theorem shows that Condorcet methods are the most spoiler-resistant class of ranked voting systems: whenever it is possible for some ranked voting system to avoid a spoiler effect, a Condorcet method will do so.[12] In other words, replacing a ranked method with its Condorcet variant (i.e. elect a Condorcet winner if one exists, and otherwise run the method) will sometimes prevent a spoiler effect, but can never create a new one.[12]
In 1977, Ehud Kalai and Eitan Muller gave a full characterization of domain restrictions admitting a nondictatorial and strategyproof social welfare function. These correspond to preferences for which there is a Condorcet winner.[44]
Holliday and Pacuit devised a voting system that provably minimizes the number of candidates who are capable of spoiling an election, albeit at the cost of occasionally failing vote positivity (though at a much lower rate than seen in instant-runoff voting).[11]
As shown above, the proof of Arrow's theorem relies crucially on the assumption of ranked voting, and is not applicable to rated voting systems. This opens up the possibility of passing all of the criteria given by Arrow. These systems ask voters to rate candidates on a numerical scale (e.g. from 0–10), and then elect the candidate with the highest average (for score voting) or median (graduated majority judgment).[45]: 4–5
Because Arrow's theorem no longer applies, other results are required to determine whether rated methods are immune to the spoiler effect, and under what circumstances. Intuitively, cardinal information can only lead to such immunity if it's meaningful; simply providing cardinal data is not enough.[46]
Some rated systems, such as range voting and majority judgment, pass independence of irrelevant alternatives when the voters rate the candidates on an absolute scale. However, when they use relative scales, more general impossibility theorems show that the methods (within that context) still fail IIA.[47] As Arrow later suggested, relative ratings may provide more information than pure rankings,[48][49][50][37][51] but this information does not suffice to render the methods immune to spoilers.
While Arrow's theorem does not apply to graded systems, Gibbard's theorem still does: no voting game can be straightforward (i.e. have a single, clear, always-best strategy).[52]
Arrow's framework assumed individual and social preferences are orderings or rankings, i.e. statements about which outcomes are better or worse than others.[53] Taking inspiration from the strict behaviorism popular in psychology, some philosophers and economists rejected the idea of comparing internal human experiences of well-being.[54][30] Such philosophers claimed it was impossible to compare the strength of preferences across people who disagreed; Sen gives as an example that it would be impossible to know whether the Great Fire of Rome was good or bad, because despite killing thousands of Romans, it had the positive effect of letting Nero expand his palace.[50]
Arrow originally agreed with these positions and rejected cardinal utility, leading him to focus his theorem on preference rankings.[54][3] However, he later stated that cardinal methods can provide additional useful information, and that his theorem is not applicable to them.
John Harsanyi noted Arrow's theorem could be considered a weaker version of his own theorem[55] and other utility representation theorems like the VNM theorem, which generally show that rational behavior requires consistent cardinal utilities.[56]
Behavioral economists have shown individual irrationality involves violations of IIA (e.g. with decoy effects),[57] suggesting human behavior can cause IIA failures even if the voting method itself does not.[58] However, past research has typically found such effects to be fairly small,[59] and such psychological spoilers can appear regardless of electoral system. Balinski and Laraki discuss techniques of ballot design derived from psychometrics that minimize these psychological effects, such as asking voters to give each candidate a verbal grade (e.g. "bad", "neutral", "good", "excellent") and issuing instructions to voters that refer to their ballots as judgments of individual candidates.[45] Similar techniques are often discussed in the context of contingent valuation.[51]
In addition to the above practical resolutions, there exist unusual (less-than-practical) situations where Arrow's requirement of IIA can be satisfied.
Supermajority rules can avoid Arrow's theorem at the cost of being poorly decisive (i.e. frequently failing to return a result). In this case, a threshold that requires a 2/3 majority for ordering 3 outcomes, 3/4 for 4 outcomes, etc. does not produce voting paradoxes.[60]
In spatial (n-dimensional ideology) models of voting, this can be relaxed to require only 1 − 1/e (roughly 64%) of the vote to prevent cycles, so long as the distribution of voters is well-behaved (quasiconcave).[61] These results provide some justification for the common requirement of a two-thirds majority for constitutional amendments, which is sufficient to prevent cyclic preferences in most situations.[61]
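The 3-outcome case can be verified by brute force. The sketch below (an illustration; it reads the threshold as "strictly more than 2/3", and uses exact rational arithmetic to avoid float comparisons) checks every voter profile of up to 7 voters over the 6 strict rankings and confirms that no cycle survives the supermajority threshold, even though simple majority cycles on Condorcet's paradox profile:

```python
from fractions import Fraction
from itertools import combinations_with_replacement, permutations

orders = list(permutations("ABC"))  # the 6 strict rankings of 3 outcomes

def has_cycle_over(profile, threshold):
    """True if some preference cycle exists in which every step is backed
    by strictly more than `threshold` of the voters (exact arithmetic)."""
    n = len(profile)

    def support(x, y):
        return sum(1 for r in profile if r.index(x) < r.index(y))

    return any(all(support(x, y) > threshold * n
                   for x, y in [(a, b), (b, c), (c, a)])
               for a, b, c in permutations("ABC"))

# Condorcet's paradox profile cycles under simple majority (1/2)...
paradox = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]
assert has_cycle_over(paradox, Fraction(1, 2))
# ...but no profile of up to 7 voters can cycle past a 2/3 threshold:
# each voter's ranking is transitive, so a voter can back at most 2 of
# the 3 steps of any cycle, capping total support at 2n < 3 * (2n/3).
for n in range(1, 8):
    for profile in combinations_with_replacement(orders, n):
        assert not has_cycle_over(list(profile), Fraction(2, 3))
```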
Fishburn shows all of Arrow's conditions can be satisfied for uncountably infinite sets of voters given the axiom of choice;[62] however, Kirman and Sondermann demonstrated this requires disenfranchising almost all members of a society (eligible voters form a set of measure 0), leading them to refer to such societies as "invisible dictatorships".[63]
Arrow's theorem is not related to strategic voting, which does not appear in his framework,[3][1] though the theorem does have important implications for strategic voting (being used as a lemma to prove Gibbard's theorem[26]). The Arrovian framework of social welfare assumes all voter preferences are known and the only issue is in aggregating them.[1]
Monotonicity (called positive association by Arrow) is not a condition of Arrow's theorem.[3] This misconception is caused by a mistake by Arrow himself, who included the axiom in his original statement of the theorem but did not use it.[2] Dropping the assumption does not allow for constructing a social welfare function that meets his other conditions.[3]
Contrary to a common misconception, Arrow's theorem deals with the limited class of ranked-choice voting systems, rather than voting systems as a whole.[1][64]
Dr. Arrow: Well, I'm a little inclined to think that score systems where you categorize in maybe three or four classes (in spite of what I said about manipulation) is probably the best. [...] And some of these studies have been made. In France, [Michel] Balinski has done some studies of this kind which seem to give some support to these scoring methods.
|
https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem
|
SPORE, the Security Protocols Open Repository, is an online library of security protocols with comments and links to papers. Each protocol is downloadable in a variety of formats, including rules for use with automatic protocol verification tools. All protocols, together with their goals, are described using BAN logic or the style used by Clark and Jacob. The database includes details on formal proofs or known attacks, with references to comments, analyses, and papers. A large number of protocols are listed, including many which have been shown to be insecure.
It is a continuation of the seminal work by John Clark and Jeremy Jacob.[1]
The maintainers seek contributions of new protocols, links, and comments.
|
https://en.wikipedia.org/wiki/Security_Protocols_Open_Repository
|
Quantum key distribution (QKD) protocols are used in quantum key distribution. The first protocol of that kind was BB84, introduced in 1984 by Charles H. Bennett and Gilles Brassard. Since then, many other protocols have been defined.
|
https://en.wikipedia.org/wiki/Quantum_cryptographic_protocol
|
In computational complexity theory, an interactive proof system is an abstract machine that models computation as the exchange of messages between two parties: a prover and a verifier. The parties interact by exchanging messages in order to ascertain whether a given string belongs to a language or not. The prover is assumed to possess unlimited computational resources but cannot be trusted, while the verifier has bounded computation power but is assumed to be always honest. Messages are sent between the verifier and prover until the verifier has an answer to the problem and has "convinced" itself that it is correct.
All interactive proof systems have two requirements:
The specific nature of the system, and so the complexity class of languages it can recognize, depends on what sort of bounds are put on the verifier, as well as what abilities it is given—for example, most interactive proof systems depend critically on the verifier's ability to make random choices. It also depends on the nature of the messages exchanged—how many and what they can contain. Interactive proof systems have been found to have some important implications for traditional complexity classes defined using only one machine. The main complexity classes describing interactive proof systems are AM and IP.
Every interactive proof system defines a formal language of strings L. Soundness of the proof system refers to the property that no prover can make the verifier accept the wrong statement y ∉ L except with some small probability. The upper bound of this probability is referred to as the soundness error of a proof system. More formally, for every prover P̃ and every y ∉ L:
for some ε ≪ 1.
As long as the soundness error is bounded by a polynomial fraction of the potential running time of the verifier (i.e. ε ≤ 1/poly(|y|)), it is always possible to amplify soundness until the soundness error becomes a negligible function relative to the running time of the verifier. This is achieved by repeating the proof and accepting only if all repetitions verify. After ℓ repetitions, a soundness error ε is reduced to ε^ℓ.[1]
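The amplification argument can be checked numerically. This sketch (an illustration, not any particular proof system) simulates a verifier that wrongly accepts each independent repetition with probability ε and accepts overall only if every repetition accepts; the empirical error rate tracks ε^ℓ:

```python
import random

def repeated_verifier(eps, rounds, rng):
    """Simulate `rounds` independent runs of a verifier whose soundness
    error on a false statement is eps; accept only if every run accepts."""
    return all(rng.random() < eps for _ in range(rounds))

rng = random.Random(0)
eps, rounds, trials = 0.5, 10, 200_000
wrongly_accepted = sum(repeated_verifier(eps, rounds, rng)
                       for _ in range(trials))
rate = wrongly_accepted / trials
# The amplified soundness error is eps**rounds = 0.5**10, about 0.001.
assert abs(rate - eps**rounds) < 5e-4
```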
The complexity class NP may be viewed as a very simple proof system. In this system, the verifier is a deterministic, polynomial-time machine (a P machine). The protocol is:
In the case where a valid proof certificate exists, the prover is always able to make the verifier accept by giving it that certificate. In the case where there is no valid proof certificate, however, the input is not in the language, and no prover, however malicious it is, can convince the verifier otherwise, because any proof certificate will be rejected.
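As a concrete (illustrative) instance of this protocol, take 3-coloring: the prover sends a coloring as the certificate, and the deterministic polynomial-time verifier accepts exactly when it is valid, so no certificate can make it accept a graph that has no proper coloring:

```python
def verify_coloring(edges, coloring, k=3):
    """Deterministic polynomial-time NP verifier: accept iff the
    certificate `coloring` is a proper k-coloring of the graph."""
    return (all(c in range(k) for c in coloring.values())
            and all(coloring[u] != coloring[v] for u, v in edges))

# A 4-cycle: the prover sends a valid 2-coloring as the certificate.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert verify_coloring(edges, {0: 0, 1: 1, 2: 0, 3: 1})
# An invalid certificate is always rejected (vertices 0 and 1 clash).
assert not verify_coloring(edges, {0: 0, 1: 0, 2: 1, 3: 1})
```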
Although NP may be viewed as using interaction, it wasn't until 1985 that the concept of computation through interaction was conceived (in the context of complexity theory) by two independent groups of researchers. One approach, by László Babai, who published "Trading group theory for randomness",[2] defined the Arthur–Merlin (AM) class hierarchy. In this presentation, Arthur (the verifier) is a probabilistic, polynomial-time machine, while Merlin (the prover) has unbounded resources.
The class MA in particular is a simple generalization of the NP interaction above in which the verifier is probabilistic instead of deterministic. Also, instead of requiring that the verifier always accept valid certificates and reject invalid certificates, it is more lenient:
This machine is potentially more powerful than an ordinary NP interaction protocol, and the certificates are no less practical to verify, since BPP algorithms are considered as abstracting practical computation (see BPP).
In a public coin protocol, the random choices made by the verifier are made public. They remain private in a private coin protocol.
In the same conference where Babai defined his proof system for MA, Shafi Goldwasser, Silvio Micali and Charles Rackoff[3] published a paper defining the interactive proof system IP[f(n)]. This has the same machines as the MA protocol, except that f(n) rounds are allowed for an input of size n. In each round, the verifier performs computation and passes a message to the prover, and the prover performs computation and passes information back to the verifier. At the end the verifier must make its decision. For example, in an IP[3] protocol, the sequence would be VPVPVPV, where V is a verifier turn and P is a prover turn.
In Arthur–Merlin protocols, Babai defined a similar class AM[f(n)] which allowed f(n) rounds, but he put one extra condition on the machine: the verifier must show the prover all the random bits it uses in its computation. The result is that the verifier cannot "hide" anything from the prover, because the prover is powerful enough to simulate everything the verifier does if it knows what random bits it used. This is called a public coin protocol, because the random bits ("coin flips") are visible to both machines. The IP approach is called a private coin protocol by contrast.
The essential problem with public coins is that if the prover wishes to maliciously convince the verifier to accept a string which is not in the language, it seems the verifier might be able to thwart the prover's plans by hiding its internal state from it. This was a primary motivation in defining the IP proof systems.
In 1986, Goldwasser and Sipser[4] showed, perhaps surprisingly, that the verifier's ability to hide coin flips from the prover does it little good after all, in that an Arthur–Merlin public coin protocol with only two more rounds can recognize all the same languages. The result is that public-coin and private-coin protocols are roughly equivalent. In fact, as Babai showed in 1988, AM[k] = AM for all constant k, so the IP[k] have no advantage over AM.[5]
To demonstrate the power of these classes, consider the graph isomorphism problem, the problem of determining whether it is possible to permute the vertices of one graph so that it is identical to another graph. This problem is in NP, since the proof certificate is the permutation which makes the graphs equal. It turns out that the complement of the graph isomorphism problem, a co-NP problem not known to be in NP, has an AM algorithm, and the best way to see it is via a private coins algorithm.[6]
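The classic private-coin protocol for graph non-isomorphism can be sketched directly (an illustration: a brute-force isomorphism check stands in for the unbounded prover, and the verifier's coin flips stay hidden). The verifier secretly relabels one of the two graphs; if the graphs are non-isomorphic the prover can always identify the source, while if they are isomorphic even an all-powerful prover guesses right only half the time:

```python
import random
from itertools import permutations

def iso(g, h, n):
    """Brute-force isomorphism test, standing in for the unbounded prover."""
    hs = {frozenset(e) for e in h}
    return any({frozenset((p[u], p[v])) for u, v in g} == hs
               for p in permutations(range(n)))

def gni_round(g1, g2, n, rng):
    """One round of the private-coin protocol for graph NON-isomorphism:
    the verifier secretly picks i and a random relabeling of G_i; the
    prover must say which graph the scrambled copy came from."""
    i = rng.randint(1, 2)
    p = list(range(n)); rng.shuffle(p)
    h = [(p[u], p[v]) for u, v in (g1 if i == 1 else g2)]
    answer = 1 if iso(g1, h, n) else 2   # the prover's best possible reply
    return answer == i                   # accept iff the prover is right

rng = random.Random(0)
path = [(0, 1), (1, 2), (2, 3)]          # degree sequence 1,2,2,1
star = [(0, 1), (0, 2), (0, 3)]          # degree sequence 3,1,1,1
# Non-isomorphic inputs: the honest prover never fails.
assert all(gni_round(path, star, 4, rng) for _ in range(50))
# Isomorphic inputs: the prover can only be right about half the time.
relabeled = [(2, 1), (1, 3), (3, 0)]     # the same path, vertices renamed
hits = sum(gni_round(path, relabeled, 4, rng) for _ in range(400))
assert 140 < hits < 260
```

Repeating the round drives the verifier's error down exponentially, as in the soundness amplification discussed earlier.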
Private coins may not be helpful, but more rounds of interaction are helpful. If we allow the probabilistic verifier machine and the all-powerful prover to interact for a polynomial number of rounds, we get the class of problems called IP.
In 1992, Adi Shamir revealed in one of the central results of complexity theory that IP equals PSPACE, the class of problems solvable by an ordinary deterministic Turing machine in polynomial space.[7]
If we allow the elements of the system to use quantum computation, the system is called a quantum interactive proof system, and the corresponding complexity class is called QIP.[8] A series of results culminated in a 2010 breakthrough that QIP = PSPACE.[9][10]
Not only can interactive proof systems solve problems not believed to be in NP, but under assumptions about the existence of one-way functions, a prover can convince the verifier of the solution without ever giving the verifier information about the solution. This is important when the verifier cannot be trusted with the full solution. At first it seems impossible that the verifier could be convinced that there is a solution when the verifier has not seen a certificate, but such proofs, known as zero-knowledge proofs, are in fact believed to exist for all problems in NP and are valuable in cryptography. Zero-knowledge proofs were first mentioned in the original 1985 paper on IP by Goldwasser, Micali and Rackoff for specific number-theoretic languages. The extent of their power was, however, shown by Oded Goldreich, Silvio Micali and Avi Wigderson for all of NP,[6] and this was first extended by Russell Impagliazzo and Moti Yung to all of IP.[11]
One goal of IP's designers was to create the most powerful possible interactive proof system, and at first it seems like it cannot be made more powerful without making the verifier more powerful and so impractical. Goldwasser et al. overcame this in their 1988 "Multi prover interactive proofs: How to remove intractability assumptions", which defines a variant of IP called MIP in which there are two independent provers.[12] The two provers cannot communicate once the verifier has begun sending messages to them. Just as it's easier to tell if a criminal is lying if he and his partner are interrogated in separate rooms, it's considerably easier to detect a malicious prover trying to trick the verifier into accepting a string not in the language if there is another prover it can double-check with.
In fact, this is so helpful that Babai, Fortnow, and Lund were able to show that MIP = NEXPTIME, the class of all problems solvable by a nondeterministic machine in exponential time, a very large class.[13] NEXPTIME contains PSPACE, and is believed to strictly contain PSPACE. Adding a constant number of additional provers beyond two does not enable recognition of any more languages. This result paved the way for the celebrated PCP theorem, which can be considered to be a "scaled-down" version of this theorem.
MIP also has the helpful property that zero-knowledge proofs for every language in NP can be described without the assumption of one-way functions that IP must make. This has bearing on the design of provably unbreakable cryptographic algorithms.[12] Moreover, a MIP protocol can recognize all languages in IP in only a constant number of rounds, and if a third prover is added, it can recognize all languages in NEXPTIME in a constant number of rounds, showing again its power over IP.
It is known that for any constant k, a MIP system with k provers and polynomially many rounds can be turned into an equivalent system with only 2 provers and a constant number of rounds.[14]
While the designers of IP considered generalizations of Babai's interactive proof systems, others considered restrictions. A very useful interactive proof system is PCP(f(n), g(n)), which is a restriction of MA where Arthur can only use f(n) random bits and can only examine g(n) bits of the proof certificate sent by Merlin (essentially using random access).
There are a number of easy-to-prove results about various PCP classes. PCP(0, poly), the class of polynomial-time machines with no randomness but access to a certificate, is just NP. PCP(poly, 0), the class of polynomial-time machines with access to polynomially many random bits, is co-RP. Arora and Safra's first major result was that PCP(log, log) = NP; put another way, if the verifier in the NP protocol is constrained to choose only O(log n) bits of the proof certificate to look at, this won't make any difference as long as it has O(log n) random bits to use.[15]
Furthermore, the PCP theorem asserts that the number of proof accesses can be brought all the way down to a constant. That is, NP = PCP(log, O(1)).[16] They used this valuable characterization of NP to prove that approximation algorithms do not exist for the optimization versions of certain NP-complete problems unless P = NP. Such problems are now studied in the field known as hardness of approximation.
|
https://en.wikipedia.org/wiki/Soundness_(interactive_proof)
|
In cryptography, a ring signature is a type of digital signature that can be performed by any member of a set of users that each have keys. Therefore, a message signed with a ring signature is endorsed by someone in a particular set of people. One of the security properties of a ring signature is that it should be computationally infeasible to determine which of the set's members' keys was used to produce the signature. Ring signatures are similar to group signatures but differ in two key ways: first, there is no way to revoke the anonymity of an individual signature; and second, any set of users can be used as a signing set without additional setup.
Ring signatures were invented by Ron Rivest, Adi Shamir, and Yael Tauman Kalai, and introduced at ASIACRYPT in 2001.[1] The name, ring signature, comes from the ring-like structure of the signature algorithm.
Suppose that a set of entities each have public/private key pairs, (P1, S1), (P2, S2), ..., (Pn, Sn). Party i can compute a ring signature σ on a message m, on input (m, Si, P1, ..., Pn). Anyone can check the validity of a ring signature given σ, m, and the public keys involved, P1, ..., Pn. If a ring signature is properly computed, it should pass the check. On the other hand, it should be hard for anyone to create a valid ring signature on any message for any set without knowing any of the private keys for that set.[2]
In the original paper, Rivest, Shamir, and Tauman described ring signatures as a way to leak a secret. For instance, a ring signature could be used to provide an anonymous signature from "a high-ranking White House official", without revealing which official signed the message. Ring signatures are right for this application because the anonymity of a ring signature cannot be revoked, and because the group for a ring signature can be improvised.
Another application, also described in the original paper, is for deniable signatures. Here the sender and the recipient of a message form a group for the ring signature; the signature is then valid to the recipient, but anyone else will be unsure whether the recipient or the sender was the actual signer. Thus, such a signature is convincing, but cannot be transferred beyond its intended recipient.
There were various works, introducing new features and based on different assumptions:
Most of the proposed algorithms have asymptotic output size O(n); i.e., the size of the resulting signature increases linearly with the size of the input (number of public keys). That means that such schemes are impracticable for real use cases with sufficiently large n (for example, an e-voting with millions of participants). But for some applications with relatively small median input size, such an estimate may be acceptable. CryptoNote implements the O(n) ring signature scheme by Fujisaki and Suzuki[5] in p2p payments to achieve sender's untraceability.
More efficient algorithms have appeared recently. There are schemes with the sublinear size of the signature,[6]as well as with constant size.[7]
The original paper describes an RSA-based ring signature scheme, as well as one based on Rabin signatures. They define a keyed "combining function" C_{k,v}(y_1, y_2, …, y_n) which takes a key k, an initialization value v, and a list of arbitrary values y_1, …, y_n. y_i is defined as g_i(x_i), where g_i is a trap-door function (i.e. an RSA public key in the case of RSA-based ring signatures).
The function C_{k,v}(y_1, y_2, …, y_n) is called the ring equation, and is defined below. The equation is based on a symmetric encryption function E_k:
It outputs a single value z which is forced to be equal to v. The equation v = C_{k,v}(y_1, y_2, …, y_n) can be solved as long as at least one y_i, and by extension x_i, can be freely chosen. Under the assumptions of RSA, this implies knowledge of at least one of the inverses of the trap-door functions g_i^{-1} (i.e. a private key), since g_i^{-1}(y_i) = x_i.
Generating a ring signature involves six steps. The plaintext is signified by m, the ring's public keys by P_1, P_2, …, P_n.
Signature verification involves three steps.
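The signing and verification passes can be sketched end-to-end. This is an illustrative toy, not the article's PyCryptodome listing: the "symmetric cipher" is a hash-derived pad, the RSA keys are small, and nothing here is hardened for real use. It does follow the combining-function idea above: pick random x_i for every other member, walk the ring equation forward and backward from the glue value v, and close the gap at the signer's slot with the one private key:

```python
import hashlib, math, secrets

B = 1024                                 # common domain: values in [0, 2**B)

def pad(k):                              # hash-derived pad for the toy cipher
    return int.from_bytes(hashlib.shake_256(str(k).encode()).digest(B // 8), "big")

def E(k, m): return (m + pad(k)) % (1 << B)   # toy keyed permutation E_k ...
def D(k, c): return (c - pad(k)) % (1 << B)   # ... and its inverse

def is_prime(n, rounds=20):              # Miller-Rabin primality test
    if n < 2: return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31):
        if n % p == 0: return n == p
    d, s = n - 1, 0
    while d % 2 == 0: d, s = d // 2, s + 1
    for _ in range(rounds):
        x = pow(secrets.randbelow(n - 3) + 2, d, n)
        if x in (1, n - 1): continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1: break
        else:
            return False
    return True

def gen_prime(bits):
    while True:
        p = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(p): return p

def keygen(bits=256, e=65537):           # tiny RSA keys, for illustration only
    while True:
        p, q = gen_prime(bits // 2), gen_prime(bits // 2)
        phi = (p - 1) * (q - 1)
        if p != q and math.gcd(e, phi) == 1:
            return (p * q, e), pow(e, -1, phi)    # (public, private)

def g(x, n, e):                          # extended RSA trapdoor over [0, 2**B)
    q, r = divmod(x, n)
    return q * n + pow(r, e, n) if (q + 1) * n <= 1 << B else x

def g_inv(y, n, d):
    q, r = divmod(y, n)
    return q * n + pow(r, d, n) if (q + 1) * n <= 1 << B else y

def sign(msg, pubs, s, priv_d):
    k = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    v = secrets.randbits(B)
    xs = [secrets.randbits(B) for _ in pubs]
    ys = [g(x, n, e) for x, (n, e) in zip(xs, pubs)]
    a = v                                     # forward pass: slots 0 .. s-1
    for i in range(s):
        a = E(k, ys[i] ^ a)
    b = v                                     # backward pass: slots n-1 .. s+1
    for i in range(len(pubs) - 1, s, -1):
        b = D(k, b) ^ ys[i]
    ys[s] = D(k, b) ^ a                       # close the ring at slot s ...
    xs[s] = g_inv(ys[s], pubs[s][0], priv_d)  # ... using the one private key
    return v, xs

def verify(msg, pubs, v, xs):
    k = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    a = v
    for x, (n, e) in zip(xs, pubs):
        a = E(k, g(x, n, e) ^ a)
    return a == v                             # the ring must close at v

keys = [keygen() for _ in range(4)]
pubs = [pub for pub, _ in keys]
v, xs = sign(b"leaked memo", pubs, 2, keys[2][1])  # member 2 signs
assert verify(b"leaked memo", pubs, v, xs)
assert not verify(b"forged memo", pubs, v, xs)
```

Note how verification treats every slot identically, which is why nothing in (v, x_1, …, x_n) points to the signer's slot.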
Here is a Python implementation of the original paper using RSA. Requires the 3rd-party module PyCryptodome.
To sign and verify 2 messages in a ring of 4 users:
Monero[8] and several other cryptocurrencies use this technology.
This article incorporatestextavailable under theCC BY-SA 4.0license.
|
https://en.wikipedia.org/wiki/Ring_signature
|
The Secure Remote Password protocol (SRP) is an augmented password-authenticated key exchange (PAKE) protocol, specifically designed to work around existing patents.[1]
Like all PAKE protocols, an eavesdropper or man in the middle cannot obtain enough information to be able to brute-force guess a password or apply a dictionary attack without further interactions with the parties for each guess. Furthermore, being an augmented PAKE protocol, the server does not store password-equivalent data.[2] This means that an attacker who steals the server data cannot masquerade as the client unless they first perform a brute-force search for the password.
In layman's terms, during SRP (or any other PAKE protocol) authentication, one party (the "client" or "user") demonstrates to another party (the "server") that they know the password, without sending the password itself nor any other information from which the password can be derived. The password never leaves the client and is unknown to the server.
Furthermore, the server also needs to know about the password (but not the password itself) in order to instigate the secure connection. This means that the server also authenticates itself to the client, which prevents phishing without reliance on the user parsing complex URLs.
The only mathematically proven security property of SRP is that it is equivalent to Diffie–Hellman against a passive attacker.[3] Newer PAKEs such as AuCPace[4] and OPAQUE offer stronger guarantees.[5]
The SRP protocol has a number of desirable properties: it allows a user to authenticate themselves to a server, it is resistant to dictionary attacks mounted by an eavesdropper, and it does not require a trusted third party. It effectively conveys a zero-knowledge password proof from the user to the server. In revision 6 of the protocol only one password can be guessed per connection attempt. One of the interesting properties of the protocol is that even if one or two of the cryptographic primitives it uses are attacked, it is still secure. The SRP protocol has been revised several times, and is currently at revision 6a.
The SRP protocol creates a large private key shared between the two parties in a manner similar to Diffie–Hellman key exchange, based on the client side having the user password and the server side having a cryptographic verifier derived from the password. The shared public key is derived from two random numbers, one generated by the client, and the other generated by the server, which are unique to the login attempt. In cases where encrypted communications as well as authentication are required, the SRP protocol is more secure than the alternative SSH protocol and faster than using Diffie–Hellman key exchange with signed messages. It is also independent of third parties, unlike Kerberos.
The SRP protocol, version 3, is described in RFC 2945. SRP version 6a is also used for strong password authentication in SSL/TLS[6] (in TLS-SRP) and other standards such as EAP[7] and SAML, and is part of IEEE 1363.2 and ISO/IEC 11770-4.
The following notation is used in this description of the protocol, version 6:
All other variables are defined in terms of these.
First, to establish a password p with server Steve, client Carol picks a random salt s, and computes x = H(s, p), v = g^x. Steve stores v and s, indexed by I, as Carol's password verifier and salt. Carol must not share x with anybody, and must safely erase it at this step, because it is equivalent to the plaintext password p. This step is completed before the system is used as part of the user registration with Steve. Note that the salt s is shared and exchanged to negotiate a session key later, so the value could be chosen by either side, but is done by Carol so that she can register I, s and v in a single registration request. The transmission and authentication of the registration request is not covered in SRP.
Then to perform a proof of password at a later date the following exchange protocol occurs:
Now the two parties have a shared, strong session keyK. To complete authentication, they need to prove to each other that their keys match. One possible way is as follows:
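The registration and exchange described above can be sketched numerically. This is an illustration only: the group parameters below are toy values (real deployments use the large safe-prime groups from RFC 5054), and the hash-to-integer helper is an assumption of this sketch rather than the RFC's exact encoding. Both sides nonetheless arrive at the same session key without the password crossing the wire:

```python
import hashlib, secrets

# Toy group parameters for illustration only.
N = 1019          # a small safe prime (1019 = 2*509 + 1)
g = 2

def H(*parts):    # simplified hash-to-integer helper (an assumption)
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

k = H(N, g)                       # SRP-6a multiplier parameter

# --- Registration: Carol sends (I, s, v) to Steve ---
I, p = "carol", "hunter2"
s = secrets.randbits(64)          # random salt
x = H(s, p)                       # private key from salt + password
v = pow(g, x, N)                  # verifier stored by the server

# --- Login exchange ---
a = secrets.randbelow(N - 2) + 1          # client ephemeral secret
A = pow(g, a, N)                          # client -> server: I, A
b = secrets.randbelow(N - 2) + 1          # server ephemeral secret
B = (k * v + pow(g, b, N)) % N            # server -> client: s, B
u = H(A, B)                               # random scrambling parameter

# Client: (B - k*g^x)^(a + u*x) == g^(b*(a + u*x))  (since v = g^x)
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
# Server: (A * v^u)^b == g^(b*(a + u*x))
S_server = pow(A * pow(v, u, N) % N, b, N)

assert S_client == S_server               # both sides share S without
K = H(S_client)                           # the password ever being sent
```

The key-confirmation messages M1 and M2 would then be computed over K and the transcript, as outlined above.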
This method requires guessing more of the shared state to be successful in impersonation than just the key. While most of the additional state is public, private information could safely be added to the inputs to the hash function, like the server private key.
Alternatively, in a password-only proof the calculation of K can be skipped and the shared S proven with:
When using SRP to negotiate a shared key K which will be immediately used after the negotiation, it is tempting to skip the verification steps of M1 and M2. The server will reject the very first request from the client which it cannot decrypt. This can however be dangerous, as demonstrated in the Implementation Pitfalls section below.
The two parties also employ the following safeguards:
If the server sends an encrypted message without waiting for verification from the client then an attacker is able to mount an offline bruteforce attack similar to hash cracking. This can happen if the server sends an encrypted message in the second packet alongside the salt andBor if key verification is skipped and the server (rather than the client) sends the first encrypted message. This is tempting as after the very first packet, the server has every information to compute the shared keyK.
The attack goes as follow:
Carol doesn't knowxorv. But given any passwordpshe can compute:
K_p is the key that Steve would use if p were the expected password. All values required to compute K_p are either controlled by Carol or known from the first packet from Steve. Carol can now try to guess the password, generate the corresponding key, and attempt to decrypt Steve's encrypted message c to verify the key. As protocol messages tend to be structured, it is assumed that identifying that c was properly decrypted is easy. This allows offline recovery of the password.
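The structure of the offline attack can be illustrated with a deliberately simplified toy model. The hash-based stand-in for the SRP key derivation and the trivial XOR "cipher" below are illustrative assumptions, not part of SRP itself; the point is only the attack pattern of replaying the key derivation per password guess and testing the result against the captured ciphertext.

```python
import hashlib

def kdf(password: bytes, salt: bytes) -> bytes:
    # Toy stand-in for deriving K_p: in the real attack Carol computes
    # K_p from (s, B, u, a) and the password guess p.
    return hashlib.sha256(salt + password).digest()

def looks_decrypted(ciphertext: bytes, key: bytes) -> bool:
    # Protocol messages are structured, so a correct key is easy to spot.
    plain = bytes(c ^ k for c, k in zip(ciphertext, key))
    return plain.startswith(b"HELLO")

# Steve encrypts a structured message under the real key (toy XOR cipher).
salt = bytes(16)
real_key = kdf(b"hunter2", salt)
ciphertext = bytes(p ^ k for p, k in zip(b"HELLO CAROL", real_key))

# Carol's offline dictionary attack: no further interaction with Steve needed.
guesses = [b"letmein", b"password", b"hunter2", b"qwerty"]
recovered = next(p for p in guesses if looks_decrypted(ciphertext, kdf(p, salt)))
```

Because the whole loop runs locally against one captured packet, rate limiting on the server offers no protection, which is why the key-verification step matters.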
This attack would not be possible had Steve waited for Carol to prove she was able to compute the correct key before sending an encrypted message. Proper implementations of SRP are not affected by this attack as the attacker would be unable to pass the key verification step.
In 2021, Daniel De Almeida Braga, Pierre-Alain Fouque and Mohamed Sabt published PARASITE,[10] a paper in which they demonstrate practical exploitation of a timing attack over the network. The attack exploits non-constant-time implementations of modular exponentiation of big numbers, and impacted OpenSSL in particular.
The SRP project was started in 1997.[11] Two different approaches to fixing a security hole in SRP-1 resulted in SRP-2 and SRP-3.[12] SRP-3 was first published at a conference in 1998.[13] RFC 2945, which describes SRP-3 with SHA-1, was published in 2000.[14] SRP-6, which fixes "two-for-one" guessing and message-ordering attacks, was published in 2002.[8] SRP-6a appeared in the official "libsrp" in version 2.1.0, dated 2005.[15] SRP-6a is found in standards such as:
IEEE 1363.2 also includes a description of "SRP5", a variant replacing the discrete logarithm with an elliptic curve, contributed by Yongge Wang in 2001.[18] It also describes SRP-3 as found in RFC 2945.
|
https://en.wikipedia.org/wiki/Secure_Remote_Password_protocol
|
In probability theory, coupling is a proof technique that allows one to compare two unrelated random variables (distributions) X and Y by creating a random vector W whose marginal distributions correspond to X and Y respectively. The choice of W is generally not unique, and the whole idea of "coupling" is about making such a choice so that X and Y can be related in a particularly desirable way.
Using the standard formalism of probability theory, let X1{\displaystyle X_{1}} and X2{\displaystyle X_{2}} be two random variables defined on probability spaces (Ω1,F1,P1){\displaystyle (\Omega _{1},F_{1},P_{1})} and (Ω2,F2,P2){\displaystyle (\Omega _{2},F_{2},P_{2})}. Then a coupling of X1{\displaystyle X_{1}} and X2{\displaystyle X_{2}} is a new probability space (Ω,F,P){\displaystyle (\Omega ,F,P)} over which there are two random variables Y1{\displaystyle Y_{1}} and Y2{\displaystyle Y_{2}} such that Y1{\displaystyle Y_{1}} has the same distribution as X1{\displaystyle X_{1}} while Y2{\displaystyle Y_{2}} has the same distribution as X2{\displaystyle X_{2}}.
An interesting case is when Y1{\displaystyle Y_{1}} and Y2{\displaystyle Y_{2}} are not independent.
Assume two particles A and B perform a simple random walk in two dimensions, but they start from different points. The simplest way to couple them is simply to force them to walk together. On every step, if A walks up, so does B; if A moves to the left, so does B, etc. Thus, the difference between the two particles' positions stays fixed. As far as A is concerned, it is doing a perfect random walk, while B is the copycat. B holds the opposite view, i.e. that it is, in effect, the original and that A is the copy. And in a sense they both are right. In other words, any mathematical theorem or result that holds for a regular random walk will also hold for both A and B.
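The "walk together" coupling is easy to simulate. In the sketch below the step count, starting points and seed are arbitrary illustrative choices; the invariant being checked is that the displacement between A and B never changes, while each particle individually still performs an ordinary random walk.

```python
import random

random.seed(1)
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # right, left, up, down

ax, ay = 0, 0        # particle A
bx, by = 10, 10      # particle B, coupled to take the same steps
for _ in range(1000):
    dx, dy = random.choice(moves)    # one shared step drives both walks
    ax, ay = ax + dx, ay + dy
    bx, by = bx + dx, by + dy

offset = (bx - ax, by - ay)          # stays (10, 10) forever
```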
Consider now a more elaborate example. Assume that A starts from the point (0,0) and B from (10,10). First couple them so that they walk together in the vertical direction, i.e. if A goes up, so does B, etc., but are mirror images in the horizontal direction, i.e. if A goes left, B goes right and vice versa. We continue this coupling until A and B have the same horizontal coordinate, or in other words are on the vertical line x = 5. If they never meet, we continue this process forever (the probability of that is zero, though). After this event, we change the coupling rule. We let them walk together in the horizontal direction, but with a mirror-image rule in the vertical direction. We continue this rule until they meet in the vertical direction too (if they do), and from that point on, we just let them walk together.
This is a coupling in the sense that neither particle, taken on its own, can "feel" anything we did: neither the fact that the other particle follows it in one way or the other, nor the fact that we changed the coupling rule or when we did it. Each particle performs a simple random walk. And yet, our coupling rule forces them to meet almost surely and to continue from that point on together permanently. This allows one to prove many interesting results that say that "in the long run", it is not important where you started in order to obtain that particular result.
Assume two biased coins, the first with probability p of turning up heads and the second with probability q > p of turning up heads. Intuitively, if both coins are tossed the same number of times, we should expect the first coin to turn up fewer heads than the second one. More specifically, for any fixed k, the probability that the first coin produces at least k heads should be less than the probability that the second coin produces at least k heads. However, proving such a fact can be difficult with a standard counting argument.[1] Coupling easily circumvents this problem.
Let X1, X2, ..., Xn be indicator variables for heads in a sequence of flips of the first coin. For the second coin, define a new sequence Y1, Y2, ..., Yn such that Yi = 1 whenever Xi = 1, and otherwise Yi = 1 with probability (q − p)/(1 − p), so that each Yi is heads with probability q.
Then the sequence of Yi has exactly the probability distribution of tosses made with the second coin. However, because Yi depends on Xi, a toss-by-toss comparison of the two coins is now possible. That is, for any k ≤ n,

Pr(X1 + ··· + Xn ≥ k) ≤ Pr(Y1 + ··· + Yn ≥ k).
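A common way to realize such a coupling is to drive both coins with a single shared uniform draw per toss. This is a slightly different construction than the conditional one described above, and the values p = 0.3, q = 0.6, the toss count and the seed are illustrative choices; the key property is that Yi ≥ Xi holds pointwise, so every prefix count for the second coin dominates that of the first.

```python
import random

random.seed(0)
p, q, n = 0.3, 0.6, 10_000           # q > p, as in the text

# One shared uniform per toss: X_i = [U_i < p], Y_i = [U_i < q].
U = [random.random() for _ in range(n)]
X = [1 if u < p else 0 for u in U]
Y = [1 if u < q else 0 for u in U]

# Since p < q, X_i = 1 forces Y_i = 1: the second coin dominates toss by toss.
heads_X, heads_Y = sum(X), sum(Y)
```

Each sequence, viewed on its own, is a perfectly ordinary run of its coin; only the joint construction makes the toss-by-toss comparison possible.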
Initialize one process Xn{\displaystyle X_{n}} outside the stationary distribution and initialize another process Yn{\displaystyle Y_{n}} inside the stationary distribution. Couple these two independent processes together as (Xn,Yn){\displaystyle (X_{n},Y_{n})}. As time runs, the two processes evolve independently. Under certain conditions, they will eventually meet, and from that point on can be considered the same process. This means that the process started outside the stationary distribution converges to the stationary distribution.
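A minimal sketch of this argument on a two-state chain (the chain, starting states, seed and step budget are all illustrative choices): run the two copies with independent randomness until they first coincide, then drive them with shared randomness so they remain glued together.

```python
import random

random.seed(42)

def step(state: int, u: float) -> int:
    # Two-state chain: from either state, flip to the other with prob 1/2.
    return 1 - state if u < 0.5 else state

x, y = 0, 1          # X starts "outside", Y starts from the stationary law
met_at = None
for t in range(1000):
    if met_at is None and x == y:
        met_at = t
    if met_at is None:
        x = step(x, random.random())     # independent evolution...
        y = step(y, random.random())
    else:
        u = random.random()              # ...then shared randomness:
        x, y = step(x, u), step(y, u)    # once met, they never separate
```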
|
https://en.wikipedia.org/wiki/Coupling_(probability)
|
In probability theory, the central limit theorem (CLT) states that, under appropriate conditions, the distribution of a normalized version of the sample mean converges to a standard normal distribution. This holds even if the original variables themselves are not normally distributed. There are several versions of the CLT, each applying in the context of different conditions.
The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions.
This theorem has seen many changes during the formal development of probability theory. Previous versions of the theorem date back to 1811, but in its modern form it was only precisely stated as late as 1920.[1]
In statistics, the CLT can be stated as: let X1,X2,…,Xn{\displaystyle X_{1},X_{2},\dots ,X_{n}} denote a statistical sample of size n{\displaystyle n} from a population with expected value (average) μ{\displaystyle \mu } and finite positive variance σ2{\displaystyle \sigma ^{2}}, and let X¯n{\displaystyle {\bar {X}}_{n}} denote the sample mean (which is itself a random variable). Then the limit as n→∞{\displaystyle n\to \infty } of the distribution of (X¯n−μ)n{\displaystyle ({\bar {X}}_{n}-\mu ){\sqrt {n}}} is a normal distribution with mean 0{\displaystyle 0} and variance σ2{\displaystyle \sigma ^{2}}.[2]
In other words, suppose that a large sample of observations is obtained, each observation being randomly produced in a way that does not depend on the values of the other observations, and the average (arithmetic mean) of the observed values is computed. If this procedure is performed many times, resulting in a collection of observed averages, the central limit theorem says that if the sample size is large enough, the probability distribution of these averages will closely approximate a normal distribution.
The central limit theorem has several variants. In its common form, the random variables must be independent and identically distributed (i.i.d.). This requirement can be weakened; convergence of the mean to the normal distribution also occurs for non-identical distributions or for non-independent observations if they comply with certain conditions.
The earliest version of this theorem, that the normal distribution may be used as an approximation to the binomial distribution, is the de Moivre–Laplace theorem.
Let {X1,…,Xn}{\displaystyle \{X_{1},\ldots ,X_{n}\}} be a sequence of i.i.d. random variables having a distribution with expected value given by μ{\displaystyle \mu } and finite variance given by σ2.{\displaystyle \sigma ^{2}.} Suppose we are interested in the sample average
X¯n≡X1+⋯+Xnn.{\displaystyle {\bar {X}}_{n}\equiv {\frac {X_{1}+\cdots +X_{n}}{n}}.}
By the law of large numbers, the sample average converges almost surely (and therefore also converges in probability) to the expected value μ{\displaystyle \mu } as n→∞.{\displaystyle n\to \infty .}
The classical central limit theorem describes the size and the distributional form of the stochastic fluctuations around the deterministic number μ{\displaystyle \mu } during this convergence. More precisely, it states that as n{\displaystyle n} gets larger, the distribution of the normalized mean n(X¯n−μ){\displaystyle {\sqrt {n}}({\bar {X}}_{n}-\mu )}, i.e. the difference between the sample average X¯n{\displaystyle {\bar {X}}_{n}} and its limit μ,{\displaystyle \mu ,} scaled by the factor n{\displaystyle {\sqrt {n}}}, approaches the normal distribution with mean 0{\displaystyle 0} and variance σ2.{\displaystyle \sigma ^{2}.} For large enough n,{\displaystyle n,} the distribution of X¯n{\displaystyle {\bar {X}}_{n}} gets arbitrarily close to the normal distribution with mean μ{\displaystyle \mu } and variance σ2/n.{\displaystyle \sigma ^{2}/n.}
The usefulness of the theorem is that the distribution of n(X¯n−μ){\displaystyle {\sqrt {n}}({\bar {X}}_{n}-\mu )} approaches normality regardless of the shape of the distribution of the individual Xi.{\displaystyle X_{i}.} Formally, the theorem can be stated as follows:
Lindeberg–Lévy CLT—Suppose X1,X2,X3…{\displaystyle X_{1},X_{2},X_{3}\ldots } is a sequence of i.i.d. random variables with E[Xi]=μ{\displaystyle \operatorname {E} [X_{i}]=\mu } and Var[Xi]=σ2<∞.{\displaystyle \operatorname {Var} [X_{i}]=\sigma ^{2}<\infty .} Then, as n{\displaystyle n} approaches infinity, the random variables n(X¯n−μ){\displaystyle {\sqrt {n}}({\bar {X}}_{n}-\mu )} converge in distribution to a normal N(0,σ2){\displaystyle {\mathcal {N}}(0,\sigma ^{2})}:[4]
n(X¯n−μ)⟶dN(0,σ2).{\displaystyle {\sqrt {n}}\left({\bar {X}}_{n}-\mu \right)\mathrel {\overset {d}{\longrightarrow }} {\mathcal {N}}\left(0,\sigma ^{2}\right).}
In the case σ>0,{\displaystyle \sigma >0,} convergence in distribution means that the cumulative distribution functions of n(X¯n−μ){\displaystyle {\sqrt {n}}({\bar {X}}_{n}-\mu )} converge pointwise to the cdf of the N(0,σ2){\displaystyle {\mathcal {N}}(0,\sigma ^{2})} distribution: for every real number z,{\displaystyle z,}
limn→∞P[n(X¯n−μ)≤z]=limn→∞P[n(X¯n−μ)σ≤zσ]=Φ(zσ),{\displaystyle \lim _{n\to \infty }\mathbb {P} \left[{\sqrt {n}}({\bar {X}}_{n}-\mu )\leq z\right]=\lim _{n\to \infty }\mathbb {P} \left[{\frac {{\sqrt {n}}({\bar {X}}_{n}-\mu )}{\sigma }}\leq {\frac {z}{\sigma }}\right]=\Phi \left({\frac {z}{\sigma }}\right),}
where Φ(z){\displaystyle \Phi (z)} is the standard normal cdf evaluated at z.{\displaystyle z.} The convergence is uniform in z{\displaystyle z} in the sense that
limn→∞supz∈R|P[n(X¯n−μ)≤z]−Φ(zσ)|=0,{\displaystyle \lim _{n\to \infty }\;\sup _{z\in \mathbb {R} }\;\left|\mathbb {P} \left[{\sqrt {n}}({\bar {X}}_{n}-\mu )\leq z\right]-\Phi \left({\frac {z}{\sigma }}\right)\right|=0~,}
where sup{\displaystyle \sup } denotes the least upper bound (or supremum) of the set.[5]
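The convergence can be observed numerically. In the sketch below the sample size, number of replicates, seed, and the choice of Uniform(0,1) summands are all illustrative assumptions; it draws many normalized sample means and checks that their empirical mean and spread match N(0, σ²) with σ² = 1/12, the variance of Uniform(0,1).

```python
import math
import random
import statistics

random.seed(7)
n, reps = 400, 2000
mu, sigma = 0.5, math.sqrt(1 / 12)   # mean and sd of Uniform(0, 1)

# Each replicate: sqrt(n) * (sample mean - mu), which should look N(0, sigma^2).
Z = [
    math.sqrt(n) * (sum(random.random() for _ in range(n)) / n - mu)
    for _ in range(reps)
]

m, s = statistics.mean(Z), statistics.stdev(Z)
```

The individual summands are far from Gaussian (they are uniform), yet the normalized means already behave like draws from N(0, 1/12) at n = 400.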
In this variant of the central limit theorem the random variables Xi{\textstyle X_{i}} have to be independent, but not necessarily identically distributed. The theorem also requires that the random variables |Xi|{\textstyle \left|X_{i}\right|} have moments of some order (2+δ){\textstyle (2+\delta )}, and that the rate of growth of these moments is limited by the Lyapunov condition given below.
Lyapunov CLT[6]—Suppose {X1,…,Xn,…}{\textstyle \{X_{1},\ldots ,X_{n},\ldots \}} is a sequence of independent random variables, each with finite expected value μi{\textstyle \mu _{i}} and variance σi2{\textstyle \sigma _{i}^{2}}. Define
sn2=∑i=1nσi2.{\displaystyle s_{n}^{2}=\sum _{i=1}^{n}\sigma _{i}^{2}.}
If for someδ>0{\textstyle \delta >0},Lyapunov’s condition
limn→∞1sn2+δ∑i=1nE[|Xi−μi|2+δ]=0{\displaystyle \lim _{n\to \infty }\;{\frac {1}{s_{n}^{2+\delta }}}\,\sum _{i=1}^{n}\operatorname {E} \left[\left|X_{i}-\mu _{i}\right|^{2+\delta }\right]=0}
is satisfied, then the sum of Xi−μisn{\textstyle {\frac {X_{i}-\mu _{i}}{s_{n}}}} converges in distribution to a standard normal random variable, as n{\textstyle n} goes to infinity:
1sn∑i=1n(Xi−μi)⟶dN(0,1).{\displaystyle {\frac {1}{s_{n}}}\,\sum _{i=1}^{n}\left(X_{i}-\mu _{i}\right)\mathrel {\overset {d}{\longrightarrow }} {\mathcal {N}}(0,1).}
In practice, it is usually easiest to check Lyapunov's condition for δ=1{\textstyle \delta =1}.
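For i.i.d. fair Bernoulli trials this check is explicit. With δ = 1, each σ_i² = 1/4 and each E|X_i − μ_i|³ = 1/8, so the Lyapunov ratio reduces to 1/√n and visibly tends to zero; the choice of Bernoulli(1/2) here is just an illustrative example.

```python
# Lyapunov's condition with delta = 1 for i.i.d. Bernoulli(1/2) trials:
# s_n^2 = n/4 and sum_i E|X_i - mu_i|^3 = n/8, so the ratio is
# (n/8) / (n/4)^(3/2) = 1 / sqrt(n), which tends to zero.

def lyapunov_ratio(n: int) -> float:
    s_n2 = n * 0.25                  # s_n^2 = sum of variances
    third = n * 0.125                # sum of third absolute central moments
    return third / s_n2 ** 1.5

ratios = [lyapunov_ratio(n) for n in (10, 100, 1000, 10000)]
```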
If a sequence of random variables satisfies Lyapunov's condition, then it also satisfies Lindeberg's condition. The converse implication, however, does not hold.
In the same setting and with the same notation as above, the Lyapunov condition can be replaced with the following weaker one (from Lindeberg in 1920).
Suppose that for everyε>0{\textstyle \varepsilon >0},
limn→∞1sn2∑i=1nE[(Xi−μi)2⋅1{|Xi−μi|>εsn}]=0{\displaystyle \lim _{n\to \infty }{\frac {1}{s_{n}^{2}}}\sum _{i=1}^{n}\operatorname {E} \left[(X_{i}-\mu _{i})^{2}\cdot \mathbf {1} _{\left\{\left|X_{i}-\mu _{i}\right|>\varepsilon s_{n}\right\}}\right]=0}
where1{…}{\textstyle \mathbf {1} _{\{\ldots \}}}is theindicator function. Then the distribution of the standardized sums
1sn∑i=1n(Xi−μi){\displaystyle {\frac {1}{s_{n}}}\sum _{i=1}^{n}\left(X_{i}-\mu _{i}\right)}
converges towards the standard normal distributionN(0,1){\textstyle {\mathcal {N}}(0,1)}.
Rather than summing an integer number n{\displaystyle n} of random variables and taking n→∞{\displaystyle n\to \infty }, the sum can be of a random number N{\displaystyle N} of random variables, with conditions on N{\displaystyle N}.
Robbins CLT[7][8]—Let {Xi,i≥1}{\displaystyle \{X_{i},i\geq 1\}} be independent, identically distributed random variables with E(Xi)=μ{\displaystyle E(X_{i})=\mu } and Var(Xi)=σ2{\displaystyle {\text{Var}}(X_{i})=\sigma ^{2}}, and let {Nn,n≥1}{\displaystyle \{N_{n},n\geq 1\}} be a sequence of non-negative integer-valued random variables that are independent of {Xi,i≥1}{\displaystyle \{X_{i},i\geq 1\}}. Assume for each n=1,2,…{\displaystyle n=1,2,\dots } that E(Nn2)<∞{\displaystyle E(N_{n}^{2})<\infty } and
Nn−E(Nn)Var(Nn)→dN(0,1){\displaystyle {\frac {N_{n}-E(N_{n})}{\sqrt {{\text{Var}}(N_{n})}}}\xrightarrow {\quad d\quad } {\mathcal {N}}(0,1)}
where→d{\displaystyle \xrightarrow {\,d\,} }denotes convergence in distribution andN(0,1){\displaystyle {\mathcal {N}}(0,1)}is the normal distribution with mean 0, variance 1.
Then
∑i=1NnXi−μE(Nn)σ2E(Nn)+μ2Var(Nn)→dN(0,1){\displaystyle {\frac {\sum _{i=1}^{N_{n}}X_{i}-\mu E(N_{n})}{\sqrt {\sigma ^{2}E(N_{n})+\mu ^{2}{\text{Var}}(N_{n})}}}\xrightarrow {\quad d\quad } {\mathcal {N}}(0,1)}
Proofs that use characteristic functions can be extended to cases where each individual Xi{\textstyle \mathbf {X} _{i}} is a random vector in Rk{\textstyle \mathbb {R} ^{k}}, with mean vector μ=E[Xi]{\textstyle {\boldsymbol {\mu }}=\operatorname {E} [\mathbf {X} _{i}]} and covariance matrix Σ{\textstyle \mathbf {\Sigma } } (among the components of the vector), and these random vectors are independent and identically distributed. The multidimensional central limit theorem states that when scaled, sums converge to a multivariate normal distribution.[9] Summation of these vectors is done component-wise.
Fori=1,2,3,…,{\displaystyle i=1,2,3,\ldots ,}let
Xi=[Xi(1)⋮Xi(k)]{\displaystyle \mathbf {X} _{i}={\begin{bmatrix}X_{i}^{(1)}\\\vdots \\X_{i}^{(k)}\end{bmatrix}}}
be independent random vectors. The sum of the random vectorsX1,…,Xn{\displaystyle \mathbf {X} _{1},\ldots ,\mathbf {X} _{n}}is
∑i=1nXi=[X1(1)⋮X1(k)]+[X2(1)⋮X2(k)]+⋯+[Xn(1)⋮Xn(k)]=[∑i=1nXi(1)⋮∑i=1nXi(k)]{\displaystyle \sum _{i=1}^{n}\mathbf {X} _{i}={\begin{bmatrix}X_{1}^{(1)}\\\vdots \\X_{1}^{(k)}\end{bmatrix}}+{\begin{bmatrix}X_{2}^{(1)}\\\vdots \\X_{2}^{(k)}\end{bmatrix}}+\cdots +{\begin{bmatrix}X_{n}^{(1)}\\\vdots \\X_{n}^{(k)}\end{bmatrix}}={\begin{bmatrix}\sum _{i=1}^{n}X_{i}^{(1)}\\\vdots \\\sum _{i=1}^{n}X_{i}^{(k)}\end{bmatrix}}}
and their average is
X¯n=[X¯i(1)⋮X¯i(k)]=1n∑i=1nXi.{\displaystyle \mathbf {{\bar {X}}_{n}} ={\begin{bmatrix}{\bar {X}}_{i}^{(1)}\\\vdots \\{\bar {X}}_{i}^{(k)}\end{bmatrix}}={\frac {1}{n}}\sum _{i=1}^{n}\mathbf {X} _{i}.}
Therefore,
1n∑i=1n[Xi−E(Xi)]=1n∑i=1n(Xi−μ)=n(X¯n−μ).{\displaystyle {\frac {1}{\sqrt {n}}}\sum _{i=1}^{n}\left[\mathbf {X} _{i}-\operatorname {E} \left(\mathbf {X} _{i}\right)\right]={\frac {1}{\sqrt {n}}}\sum _{i=1}^{n}(\mathbf {X} _{i}-{\boldsymbol {\mu }})={\sqrt {n}}\left({\overline {\mathbf {X} }}_{n}-{\boldsymbol {\mu }}\right).}
The multivariate central limit theorem states that
n(X¯n−μ)⟶dNk(0,Σ),{\displaystyle {\sqrt {n}}\left({\overline {\mathbf {X} }}_{n}-{\boldsymbol {\mu }}\right)\mathrel {\overset {d}{\longrightarrow }} {\mathcal {N}}_{k}(0,{\boldsymbol {\Sigma }}),}where thecovariance matrixΣ{\displaystyle {\boldsymbol {\Sigma }}}is equal toΣ=[Var(X1(1))Cov(X1(1),X1(2))Cov(X1(1),X1(3))⋯Cov(X1(1),X1(k))Cov(X1(2),X1(1))Var(X1(2))Cov(X1(2),X1(3))⋯Cov(X1(2),X1(k))Cov(X1(3),X1(1))Cov(X1(3),X1(2))Var(X1(3))⋯Cov(X1(3),X1(k))⋮⋮⋮⋱⋮Cov(X1(k),X1(1))Cov(X1(k),X1(2))Cov(X1(k),X1(3))⋯Var(X1(k))].{\displaystyle {\boldsymbol {\Sigma }}={\begin{bmatrix}{\operatorname {Var} \left(X_{1}^{(1)}\right)}&\operatorname {Cov} \left(X_{1}^{(1)},X_{1}^{(2)}\right)&\operatorname {Cov} \left(X_{1}^{(1)},X_{1}^{(3)}\right)&\cdots &\operatorname {Cov} \left(X_{1}^{(1)},X_{1}^{(k)}\right)\\\operatorname {Cov} \left(X_{1}^{(2)},X_{1}^{(1)}\right)&\operatorname {Var} \left(X_{1}^{(2)}\right)&\operatorname {Cov} \left(X_{1}^{(2)},X_{1}^{(3)}\right)&\cdots &\operatorname {Cov} \left(X_{1}^{(2)},X_{1}^{(k)}\right)\\\operatorname {Cov} \left(X_{1}^{(3)},X_{1}^{(1)}\right)&\operatorname {Cov} \left(X_{1}^{(3)},X_{1}^{(2)}\right)&\operatorname {Var} \left(X_{1}^{(3)}\right)&\cdots &\operatorname {Cov} \left(X_{1}^{(3)},X_{1}^{(k)}\right)\\\vdots &\vdots &\vdots &\ddots &\vdots \\\operatorname {Cov} \left(X_{1}^{(k)},X_{1}^{(1)}\right)&\operatorname {Cov} \left(X_{1}^{(k)},X_{1}^{(2)}\right)&\operatorname {Cov} \left(X_{1}^{(k)},X_{1}^{(3)}\right)&\cdots &\operatorname {Var} \left(X_{1}^{(k)}\right)\\\end{bmatrix}}~.}
The multivariate central limit theorem can be proved using the Cramér–Wold theorem.[9]
The rate of convergence is given by the following Berry–Esseen-type result:
Theorem[10]—Let X1,…,Xn,…{\displaystyle X_{1},\dots ,X_{n},\dots } be independent Rd{\displaystyle \mathbb {R} ^{d}}-valued random vectors, each having mean zero. Write S=∑i=1nXi{\displaystyle S=\sum _{i=1}^{n}X_{i}} and assume Σ=Cov[S]{\displaystyle \Sigma =\operatorname {Cov} [S]} is invertible. Let Z∼N(0,Σ){\displaystyle Z\sim {\mathcal {N}}(0,\Sigma )} be a d{\displaystyle d}-dimensional Gaussian with the same mean and same covariance matrix as S{\displaystyle S}. Then for all convex sets U⊆Rd{\displaystyle U\subseteq \mathbb {R} ^{d}},
|P[S∈U]−P[Z∈U]|≤Cd1/4γ,{\displaystyle \left|\mathbb {P} [S\in U]-\mathbb {P} [Z\in U]\right|\leq C\,d^{1/4}\gamma ~,}whereC{\displaystyle C}is a universal constant,γ=∑i=1nE[‖Σ−1/2Xi‖23]{\displaystyle \gamma =\sum _{i=1}^{n}\operatorname {E} \left[\left\|\Sigma ^{-1/2}X_{i}\right\|_{2}^{3}\right]},and‖⋅‖2{\displaystyle \|\cdot \|_{2}}denotes the Euclidean norm onRd{\displaystyle \mathbb {R} ^{d}}.
It is unknown whether the factor d1/4{\textstyle d^{1/4}} is necessary.[11]
The generalized central limit theorem (GCLT) was an effort of multiple mathematicians (Bernstein, Lindeberg, Lévy, Feller, Kolmogorov, and others) over the period from 1920 to 1937.[12] The first published complete proof of the GCLT was given in 1937 by Paul Lévy in French.[13] An English-language version of the complete proof of the GCLT is available in the translation of Gnedenko and Kolmogorov's 1954 book.[14]
The statement of the GCLT is as follows:[15]
In other words, if sums of independent, identically distributed random variables converge in distribution to some Z, then Z must be a stable distribution.
A useful generalization of a sequence of independent, identically distributed random variables is a mixing random process in discrete time; "mixing" means, roughly, that random variables temporally far apart from one another are nearly independent. Several kinds of mixing are used in ergodic theory and probability theory. See especially strong mixing (also called α-mixing), defined by α(n)→0{\textstyle \alpha (n)\to 0} where α(n){\textstyle \alpha (n)} is the so-called strong mixing coefficient.
A simplified formulation of the central limit theorem under strong mixing is:[16]
Theorem—Suppose that {X1,…,Xn,…}{\textstyle \{X_{1},\ldots ,X_{n},\ldots \}} is stationary and α{\displaystyle \alpha }-mixing with αn=O(n−5){\textstyle \alpha _{n}=O\left(n^{-5}\right)} and that E[Xn]=0{\textstyle \operatorname {E} [X_{n}]=0} and E[Xn12]<∞{\textstyle \operatorname {E} [X_{n}^{12}]<\infty }. Denote Sn=X1+⋯+Xn{\textstyle S_{n}=X_{1}+\cdots +X_{n}}; then the limit
σ2=limn→∞E(Sn2)n{\displaystyle \sigma ^{2}=\lim _{n\rightarrow \infty }{\frac {\operatorname {E} \left(S_{n}^{2}\right)}{n}}}
exists, and if σ≠0{\textstyle \sigma \neq 0} then Snσn{\textstyle {\frac {S_{n}}{\sigma {\sqrt {n}}}}} converges in distribution to N(0,1){\textstyle {\mathcal {N}}(0,1)}.
In fact,
σ2=E(X12)+2∑k=1∞E(X1X1+k),{\displaystyle \sigma ^{2}=\operatorname {E} \left(X_{1}^{2}\right)+2\sum _{k=1}^{\infty }\operatorname {E} \left(X_{1}X_{1+k}\right),}
where the series converges absolutely.
The assumption σ≠0{\textstyle \sigma \neq 0} cannot be omitted, since the asymptotic normality fails for Xn=Yn−Yn−1{\textstyle X_{n}=Y_{n}-Y_{n-1}} where Yn{\textstyle Y_{n}} is another stationary sequence.
There is a stronger version of the theorem:[17] the assumption E[Xn12]<∞{\textstyle \operatorname {E} \left[X_{n}^{12}\right]<\infty } is replaced with E[|Xn|2+δ]<∞{\textstyle \operatorname {E} \left[{\left|X_{n}\right|}^{2+\delta }\right]<\infty }, and the assumption αn=O(n−5){\textstyle \alpha _{n}=O\left(n^{-5}\right)} is replaced with
∑nαnδ2(2+δ)<∞.{\displaystyle \sum _{n}\alpha _{n}^{\frac {\delta }{2(2+\delta )}}<\infty .}
Existence of suchδ>0{\textstyle \delta >0}ensures the conclusion. For encyclopedic treatment of limit theorems under mixing conditions see (Bradley 2007).
Theorem—Let a martingale Mn{\textstyle M_{n}} satisfy
then Mnn{\textstyle {\frac {M_{n}}{\sqrt {n}}}} converges in distribution to N(0,1){\textstyle {\mathcal {N}}(0,1)} as n→∞{\textstyle n\to \infty }.[18][19]
The central limit theorem has a proof using characteristic functions.[20] It is similar to the proof of the (weak) law of large numbers.
Assume {X1,…,Xn,…}{\textstyle \{X_{1},\ldots ,X_{n},\ldots \}} are independent and identically distributed random variables, each with mean μ{\textstyle \mu } and finite variance σ2{\textstyle \sigma ^{2}}. The sum X1+⋯+Xn{\textstyle X_{1}+\cdots +X_{n}} has mean nμ{\textstyle n\mu } and variance nσ2{\textstyle n\sigma ^{2}}. Consider the random variable
Zn=X1+⋯+Xn−nμnσ2=∑i=1nXi−μnσ2=∑i=1n1nYi,{\displaystyle Z_{n}={\frac {X_{1}+\cdots +X_{n}-n\mu }{\sqrt {n\sigma ^{2}}}}=\sum _{i=1}^{n}{\frac {X_{i}-\mu }{\sqrt {n\sigma ^{2}}}}=\sum _{i=1}^{n}{\frac {1}{\sqrt {n}}}Y_{i},}
where in the last step we defined the new random variables Yi=Xi−μσ{\textstyle Y_{i}={\frac {X_{i}-\mu }{\sigma }}}, each with zero mean and unit variance (var(Y)=1{\textstyle \operatorname {var} (Y)=1}). The characteristic function of Zn{\textstyle Z_{n}} is given by
φZn(t)=φ∑i=1n1nYi(t)=φY1(tn)φY2(tn)⋯φYn(tn)=[φY1(tn)]n,{\displaystyle \varphi _{Z_{n}}\!(t)=\varphi _{\sum _{i=1}^{n}{{\frac {1}{\sqrt {n}}}Y_{i}}}\!(t)\ =\ \varphi _{Y_{1}}\!\!\left({\frac {t}{\sqrt {n}}}\right)\varphi _{Y_{2}}\!\!\left({\frac {t}{\sqrt {n}}}\right)\cdots \varphi _{Y_{n}}\!\!\left({\frac {t}{\sqrt {n}}}\right)\ =\ \left[\varphi _{Y_{1}}\!\!\left({\frac {t}{\sqrt {n}}}\right)\right]^{n},}
where in the last step we used the fact that all of the Yi{\textstyle Y_{i}} are identically distributed. The characteristic function of Y1{\textstyle Y_{1}} is, by Taylor's theorem, φY1(tn)=1−t22n+o(t2n),(tn)→0{\displaystyle \varphi _{Y_{1}}\!\left({\frac {t}{\sqrt {n}}}\right)=1-{\frac {t^{2}}{2n}}+o\!\left({\frac {t^{2}}{n}}\right),\quad \left({\frac {t}{\sqrt {n}}}\right)\to 0}
where o(t2/n){\textstyle o(t^{2}/n)} is "little o notation" for some function of t{\textstyle t} that goes to zero more rapidly than t2/n{\textstyle t^{2}/n}. By the limit of the exponential function (ex=limn→∞(1+xn)n{\textstyle e^{x}=\lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}}), the characteristic function of Zn{\displaystyle Z_{n}} equals
φZn(t)=(1−t22n+o(t2n))n→e−12t2,n→∞.{\displaystyle \varphi _{Z_{n}}(t)=\left(1-{\frac {t^{2}}{2n}}+o\left({\frac {t^{2}}{n}}\right)\right)^{n}\rightarrow e^{-{\frac {1}{2}}t^{2}},\quad n\to \infty .}
All of the higher-order terms vanish in the limit n→∞{\textstyle n\to \infty }. The right-hand side equals the characteristic function of a standard normal distribution N(0,1){\textstyle {\mathcal {N}}(0,1)}, which implies through Lévy's continuity theorem that the distribution of Zn{\textstyle Z_{n}} will approach N(0,1){\textstyle {\mathcal {N}}(0,1)} as n→∞{\textstyle n\to \infty }. Therefore, the sample average
X¯n=X1+⋯+Xnn{\displaystyle {\bar {X}}_{n}={\frac {X_{1}+\cdots +X_{n}}{n}}}
is such that
nσ(X¯n−μ)=Zn{\displaystyle {\frac {\sqrt {n}}{\sigma }}({\bar {X}}_{n}-\mu )=Z_{n}}
converges to the normal distributionN(0,1){\textstyle {\mathcal {N}}(0,1)},from which the central limit theorem follows.
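The exponential limit invoked in this proof can be checked numerically for a fixed t; the value t = 1.5 and the grid of n below are arbitrary illustrative choices.

```python
import math

t = 1.5
exact = math.exp(-t**2 / 2)          # limiting characteristic-function value

# (1 - t^2 / (2n))^n should approach exp(-t^2 / 2) as n grows.
approx = [(1 - t**2 / (2 * n)) ** n for n in (10, 100, 1000, 10000)]
errors = [abs(a - exact) for a in approx]
```

The error shrinks roughly by a factor of ten for each tenfold increase in n, consistent with the o(t²/n) remainder in the Taylor expansion.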
The central limit theorem gives only an asymptotic distribution. As an approximation for a finite number of observations, it provides a reasonable approximation only when close to the peak of the normal distribution; it requires a very large number of observations to stretch into the tails.
The convergence in the central limit theorem is uniform because the limiting cumulative distribution function is continuous. If the third central moment E[(X1−μ)3]{\textstyle \operatorname {E} \left[(X_{1}-\mu )^{3}\right]} exists and is finite, then the speed of convergence is at least on the order of 1/n{\textstyle 1/{\sqrt {n}}} (see the Berry–Esseen theorem). Stein's method[21] can be used not only to prove the central limit theorem, but also to provide bounds on the rates of convergence for selected metrics.[22]
The convergence to the normal distribution is monotonic, in the sense that the entropy of Zn{\textstyle Z_{n}} increases monotonically to that of the normal distribution.[23]
The central limit theorem applies in particular to sums of independent and identically distributed discrete random variables. A sum of discrete random variables is still a discrete random variable, so that we are confronted with a sequence of discrete random variables whose cumulative probability distribution function converges towards a cumulative probability distribution function corresponding to a continuous variable (namely that of the normal distribution). This means that if we build a histogram of the realizations of the sum of n independent identical discrete variables, the piecewise-linear curve that joins the centers of the upper faces of the rectangles forming the histogram converges toward a Gaussian curve as n approaches infinity; this relation is known as the de Moivre–Laplace theorem. The binomial distribution article details such an application of the central limit theorem in the simple case of a discrete variable taking only two possible values.
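The de Moivre–Laplace approximation can be checked directly. The sketch below compares Binomial(100, 1/2) probabilities near the mean with the density of N(np, np(1−p)); the parameters n = 100, p = 1/2 and the window of k values are illustrative choices.

```python
import math

n, p = 100, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))    # mean 50, sd 5

def binom_pmf(k: int) -> float:
    # Exact binomial probability of k heads in n tosses.
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def normal_pdf(x: float) -> float:
    z = (x - mu) / sigma
    return math.exp(-z * z / 2) / (sigma * math.sqrt(2 * math.pi))

# Largest gap between the exact pmf and the Gaussian density near the mean.
max_err = max(abs(binom_pmf(k) - normal_pdf(k)) for k in range(40, 61))
```

Already at n = 100 the pointwise gap is tiny relative to the probabilities themselves (which are of order 0.08 at the mean).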
Studies have shown that the central limit theorem is subject to several common but serious misconceptions, some of which appear in widely used textbooks.[24][25][26]
The law of large numbers as well as the central limit theorem are partial solutions to a general problem: "What is the limiting behavior of Sn as n approaches infinity?" In mathematical analysis, asymptotic series are one of the most popular tools employed to approach such questions.
Suppose we have an asymptotic expansion of f(n){\textstyle f(n)}:
f(n)=a1φ1(n)+a2φ2(n)+O(φ3(n))(n→∞).{\displaystyle f(n)=a_{1}\varphi _{1}(n)+a_{2}\varphi _{2}(n)+O{\big (}\varphi _{3}(n){\big )}\qquad (n\to \infty ).}
Dividing both parts by φ1(n) and taking the limit will produce a1, the coefficient of the highest-order term in the expansion, which represents the rate at which f(n) changes in its leading term.
limn→∞f(n)φ1(n)=a1.{\displaystyle \lim _{n\to \infty }{\frac {f(n)}{\varphi _{1}(n)}}=a_{1}.}
Informally, one can say: "f(n) grows approximately as a1φ1(n)". Taking the difference between f(n) and its approximation and then dividing by the next term in the expansion, we arrive at a more refined statement about f(n):
limn→∞f(n)−a1φ1(n)φ2(n)=a2.{\displaystyle \lim _{n\to \infty }{\frac {f(n)-a_{1}\varphi _{1}(n)}{\varphi _{2}(n)}}=a_{2}.}
Here one can say that the difference between the function and its approximation grows approximately as a2φ2(n). The idea is that dividing the function by appropriate normalizing functions, and looking at the limiting behavior of the result, can tell us much about the limiting behavior of the original function itself.
Informally, something along these lines happens when the sum, Sn, of independent identically distributed random variables, X1, ..., Xn, is studied in classical probability theory. If each Xi has finite mean μ, then by the law of large numbers, Sn/n → μ.[28] If in addition each Xi has finite variance σ2, then by the central limit theorem,
Sn−nμn→ξ,{\displaystyle {\frac {S_{n}-n\mu }{\sqrt {n}}}\to \xi ,}
where ξ is distributed as N(0,σ2). This provides the values of the first two constants in the informal expansion
Sn≈μn+ξn.{\displaystyle S_{n}\approx \mu n+\xi {\sqrt {n}}.}
In the case where the Xi do not have finite mean or variance, convergence of the shifted and rescaled sum can also occur with different centering and scaling factors:
Sn−anbn→Ξ,{\displaystyle {\frac {S_{n}-a_{n}}{b_{n}}}\rightarrow \Xi ,}
or informally
Sn≈an+Ξbn.{\displaystyle S_{n}\approx a_{n}+\Xi b_{n}.}
Distributions Ξ which can arise in this way are called stable.[29] Clearly, the normal distribution is stable, but there are also other stable distributions, such as the Cauchy distribution, for which the mean or variance are not defined. The scaling factor bn may be proportional to nc, for any c ≥ 1/2; it may also be multiplied by a slowly varying function of n.[30][31]
The law of the iterated logarithm specifies what is happening "in between" the law of large numbers and the central limit theorem. Specifically it says that the normalizing function √(n log log n), intermediate in size between n of the law of large numbers and √n of the central limit theorem, provides a non-trivial limiting behavior.
The density of the sum of two or more independent variables is the convolution of their densities (if these densities exist). Thus the central limit theorem can be interpreted as a statement about the properties of density functions under convolution: the convolution of a number of density functions tends to the normal density as the number of density functions increases without bound. These theorems require stronger hypotheses than the forms of the central limit theorem given above. Theorems of this type are often called local limit theorems. See Petrov[32] for a particular local limit theorem for sums of independent and identically distributed random variables.
Since thecharacteristic functionof a convolution is the product of the characteristic functions of the densities involved, the central limit theorem has yet another restatement: the product of the characteristic functions of a number of density functions becomes close to the characteristic function of the normal density as the number of density functions increases without bound, under the conditions stated above. Specifically, an appropriate scaling factor needs to be applied to the argument of the characteristic function.
An equivalent statement can be made aboutFourier transforms, since the characteristic function is essentially a Fourier transform.
Let Sn be the sum of n random variables. Many central limit theorems provide conditions such that Sn/√Var(Sn) converges in distribution to N(0, 1) (the normal distribution with mean 0, variance 1) as n → ∞. In some cases, it is possible to find a constant σ2 and a function f(n) such that Sn/(σ√n⋅f(n)) converges in distribution to N(0, 1) as n → ∞.
Lemma[33] — Suppose X1, X2, ... is a sequence of real-valued and strictly stationary random variables with E(Xi) = 0 for all i, g : [0, 1] → R, and Sn = Σi=1..n g(i/n) Xi. Construct
σ2=E(X12)+2∑i=1∞E(X1X1+i){\displaystyle \sigma ^{2}=\operatorname {E} (X_{1}^{2})+2\sum _{i=1}^{\infty }\operatorname {E} (X_{1}X_{1+i})}
The logarithm of a product is simply the sum of the logarithms of the factors. Therefore, when the logarithm of a product of random variables that take only positive values approaches a normal distribution, the product itself approaches a log-normal distribution. Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the products of different random factors, so they follow a log-normal distribution. This multiplicative version of the central limit theorem is sometimes called Gibrat's law.
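A small simulation of this multiplicative version (the Uniform(0.9, 1.1) factors and sizes are illustrative): the log of a product of many positive factors behaves like a normal variable.

```python
import numpy as np

rng = np.random.default_rng(2)

# Product of many independent positive factors: log(product) is a sum of
# logs, hence approximately normal by the CLT, so the product itself is
# approximately log-normal.
n, trials = 500, 20_000
factors = rng.uniform(0.9, 1.1, size=(trials, n))
products = factors.prod(axis=1)

# Standardized log-products should behave like N(0, 1): the 84.1st
# percentile of a standard normal is about +1.
logs = np.log(products)
z = (logs - logs.mean()) / logs.std()
print(round(np.percentile(z, 84.1), 1))
```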
Whereas the central limit theorem for sums of random variables requires the condition of finite variance, the corresponding theorem for products requires that the density function be square-integrable.[34]
Asymptotic normality, that is,convergenceto the normal distribution after appropriate shift and rescaling, is a phenomenon much more general than the classical framework treated above, namely, sums of independent random variables (or vectors). New frameworks are revealed from time to time; no single unifying framework is available for now.
Theorem — There exists a sequence εn ↓ 0 for which the following holds. Let n ≥ 1, and let random variables X1, ..., Xn have a log-concave joint density f such that f(x1, ..., xn) = f(|x1|, ..., |xn|) for all x1, ..., xn, and E(Xk2) = 1 for all k = 1, ..., n. Then the distribution of
X1+⋯+Xnn{\displaystyle {\frac {X_{1}+\cdots +X_{n}}{\sqrt {n}}}}
is εn-close to N(0, 1) in the total variation distance.[35]
These two εn-close distributions have densities (in fact, log-concave densities); thus, the total variation distance between them is the integral of the absolute value of the difference between the densities. Convergence in total variation is stronger than weak convergence.
An important example of a log-concave density is a function constant inside a given convex body and vanishing outside; it corresponds to the uniform distribution on the convex body, which explains the term "central limit theorem for convex bodies".
Another example: f(x1, ..., xn) = const · exp(−(|x1|α + ⋯ + |xn|α)β), where α > 1 and αβ > 1. If β = 1, then f(x1, ..., xn) factorizes into const · exp(−|x1|α) ⋯ exp(−|xn|α), which means X1, ..., Xn are independent. In general, however, they are dependent.
The condition f(x1, ..., xn) = f(|x1|, ..., |xn|) ensures that X1, ..., Xn are of zero mean and uncorrelated; still, they need not be independent, nor even pairwise independent. By the way, pairwise independence cannot replace independence in the classical central limit theorem.[36]
Here is a Berry–Esseen-type result.
Theorem — Let X1, ..., Xn satisfy the assumptions of the previous theorem. Then[37]
|P(a≤X1+⋯+Xnn≤b)−12π∫abe−12t2dt|≤Cn{\displaystyle \left|\mathbb {P} \left(a\leq {\frac {X_{1}+\cdots +X_{n}}{\sqrt {n}}}\leq b\right)-{\frac {1}{\sqrt {2\pi }}}\int _{a}^{b}e^{-{\frac {1}{2}}t^{2}}\,dt\right|\leq {\frac {C}{n}}}
for all a < b; here C is a universal (absolute) constant. Moreover, for every c1, ..., cn ∈ R such that c12 + ⋯ + cn2 = 1,
|P(a≤c1X1+⋯+cnXn≤b)−12π∫abe−12t2dt|≤C(c14+⋯+cn4).{\displaystyle \left|\mathbb {P} \left(a\leq c_{1}X_{1}+\cdots +c_{n}X_{n}\leq b\right)-{\frac {1}{\sqrt {2\pi }}}\int _{a}^{b}e^{-{\frac {1}{2}}t^{2}}\,dt\right|\leq C\left(c_{1}^{4}+\dots +c_{n}^{4}\right).}
The distribution of (X1 + ⋯ + Xn)/√n need not be approximately normal (in fact, it can be uniform).[38] However, the distribution of c1X1 + ⋯ + cnXn is close to N(0, 1) (in the total variation distance) for most vectors (c1, ..., cn) according to the uniform distribution on the sphere c12 + ⋯ + cn2 = 1.
Theorem (Salem–Zygmund) — Let U be a random variable distributed uniformly on (0, 2π), and Xk = rk cos(nk U + ak), where
Then[39][40]
X1+⋯+Xkr12+⋯+rk2{\displaystyle {\frac {X_{1}+\cdots +X_{k}}{\sqrt {r_{1}^{2}+\cdots +r_{k}^{2}}}}}
converges in distribution toN(0,12){\textstyle {\mathcal {N}}{\big (}0,{\frac {1}{2}}{\big )}}.
Theorem — Let A1, ..., An be independent random points on the plane R2, each having the two-dimensional standard normal distribution. Let Kn be the convex hull of these points, and Xn the area of Kn. Then[41]
Xn−E(Xn)Var(Xn){\displaystyle {\frac {X_{n}-\operatorname {E} (X_{n})}{\sqrt {\operatorname {Var} (X_{n})}}}}converges in distribution toN(0,1){\textstyle {\mathcal {N}}(0,1)}asntends to infinity.
The same also holds in all dimensions greater than 2.
The polytope Kn is called a Gaussian random polytope.
A similar result holds for the number of vertices (of the Gaussian polytope), the number of edges, and in fact, faces of all dimensions.[42]
A linear function of a matrix M is a linear combination of its elements (with given coefficients): M ↦ tr(AM), where A is the matrix of the coefficients; see Trace (linear algebra) § Inner product.
A random orthogonal matrix is said to be distributed uniformly if its distribution is the normalized Haar measure on the orthogonal group O(n, R); see Rotation matrix § Uniform random rotation matrices.
Theorem — Let M be a random orthogonal n × n matrix distributed uniformly, and A a fixed n × n matrix such that tr(AA*) = n, and let X = tr(AM). Then[43] the distribution of X is close to N(0, 1) in the total variation metric up to 2√3/(n − 1).
Theorem — Let random variables X1, X2, ... ∈ L2(Ω) be such that Xn → 0 weakly in L2(Ω) and Xn2 → 1 weakly in L1(Ω). Then there exist integers n1 < n2 < ⋯ such that
Xn1+⋯+Xnkk{\displaystyle {\frac {X_{n_{1}}+\cdots +X_{n_{k}}}{\sqrt {k}}}}
converges in distribution toN(0,1){\textstyle {\mathcal {N}}(0,1)}asktends to infinity.[44]
The central limit theorem may be established for the simple random walk on a crystal lattice (an infinite-fold abelian covering graph over a finite graph), and is used for the design of crystal structures.[45][46]
A simple example of the central limit theorem is rolling many identical, unbiased dice. The distribution of the sum (or average) of the rolled numbers will be well approximated by a normal distribution. Since real-world quantities are often the balanced sum of many unobserved random events, the central limit theorem also provides a partial explanation for the prevalence of the normal probability distribution. It also justifies the approximation of large-samplestatisticsto the normal distribution in controlled experiments.
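The dice example is easy to check in a simulation (30 dice per trial and 50,000 trials are illustrative choices): the distribution of the sum is well approximated by a normal with matching mean and variance.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sum of 30 fair dice per trial: mean 30*3.5 = 105, variance 30*(35/12) = 87.5.
n_dice, trials = 30, 50_000
sums = rng.integers(1, 7, size=(trials, n_dice)).sum(axis=1)

# Under the normal approximation, roughly 68% of sums fall within one
# standard deviation of the mean (a little more here due to discreteness).
mu, sigma = n_dice * 3.5, np.sqrt(n_dice * 35 / 12)
within = np.mean(np.abs(sums - mu) <= sigma)
print(round(within, 2))
```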
Regression analysis, and in particularordinary least squares, specifies that adependent variabledepends according to some function upon one or moreindependent variables, with an additiveerror term. Various types of statistical inference on the regression assume that the error term is normally distributed. This assumption can be justified by assuming that the error term is actually the sum of many independent error terms; even if the individual error terms are not normally distributed, by the central limit theorem their sum can be well approximated by a normal distribution.
Given its importance to statistics, a number of papers and computer packages are available that demonstrate the convergence involved in the central limit theorem.[47]
Dutch mathematician Henk Tijms writes:[48]
The central limit theorem has an interesting history. The first version of this theorem was postulated by the French-born mathematicianAbraham de Moivrewho, in a remarkable article published in 1733, used the normal distribution to approximate the distribution of the number of heads resulting from many tosses of a fair coin. This finding was far ahead of its time, and was nearly forgotten until the famous French mathematicianPierre-Simon Laplacerescued it from obscurity in his monumental workThéorie analytique des probabilités, which was published in 1812. Laplace expanded De Moivre's finding by approximating the binomial distribution with the normal distribution. But as with De Moivre, Laplace's finding received little attention in his own time. It was not until the nineteenth century was at an end that the importance of the central limit theorem was discerned, when, in 1901, Russian mathematicianAleksandr Lyapunovdefined it in general terms and proved precisely how it worked mathematically. Nowadays, the central limit theorem is considered to be the unofficial sovereign of probability theory.
Sir Francis Galton described the Central Limit Theorem in this way:[49]
I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the "Law of Frequency of Error". The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement, amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along.
The actual term "central limit theorem" (in German: "zentraler Grenzwertsatz") was first used byGeorge Pólyain 1920 in the title of a paper.[50][51]Pólya referred to the theorem as "central" due to its importance in probability theory. According to Le Cam, the French school of probability interprets the wordcentralin the sense that "it describes the behaviour of the centre of the distribution as opposed to its tails".[51]The abstract of the paperOn the central limit theorem of calculus of probability and the problem of momentsby Pólya[50]in 1920 translates as follows.
The occurrence of the Gaussian probability density e−x2 in repeated experiments, in errors of measurements, which result in the combination of very many and very small elementary errors, in diffusion processes etc., can be explained, as is well-known, by the very same limit theorem, which plays a central role in the calculus of probability. The actual discoverer of this limit theorem is to be named Laplace; it is likely that its rigorous proof was first given by Tschebyscheff and its sharpest formulation can be found, as far as I am aware of, in an article by Liapounoff. ...
A thorough account of the theorem's history, detailing Laplace's foundational work, as well as Cauchy's, Bessel's and Poisson's contributions, is provided by Hald.[52] Two historical accounts, one covering the development from Laplace to Cauchy, the second the contributions by von Mises, Pólya, Lindeberg, Lévy, and Cramér during the 1920s, are given by Hans Fischer.[53] Le Cam describes a period around 1935.[51] Bernstein[54] presents a historical discussion focusing on the work of Pafnuty Chebyshev and his students Andrey Markov and Aleksandr Lyapunov that led to the first proofs of the CLT in a general setting.
A curious footnote to the history of the Central Limit Theorem is that a proof of a result similar to the 1922 Lindeberg CLT was the subject ofAlan Turing's 1934 Fellowship Dissertation forKing's Collegeat theUniversity of Cambridge. Only after submitting the work did Turing learn it had already been proved. Consequently, Turing's dissertation was not published.[55]
https://en.wikipedia.org/wiki/Central_limit_theorem
In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense "correlation" may indicate any type of association, in statistics it usually refers to the degree to which a pair of variables are linearly related.
Familiar examples of dependent phenomena include the correlation between the height of parents and their offspring, and the correlation between the price of a good and the quantity the consumers are willing to purchase, as it is depicted in the demand curve.
Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. In this example, there is acausal relationship, becauseextreme weathercauses people to use more electricity for heating or cooling. However, in general, the presence of a correlation is not sufficient to infer the presence of a causal relationship (i.e.,correlation does not imply causation).
Formally, random variables are dependent if they do not satisfy a mathematical property of probabilistic independence. In informal parlance, correlation is synonymous with dependence. However, when used in a technical sense, correlation refers to any of several specific types of mathematical relationship in which the conditional expectation of one variable given the other is not constant as the conditioning variable changes; broadly, correlation in this specific sense is used when E(Y | X = x) is related to x in some manner (such as linearly, monotonically, or perhaps according to some particular functional form such as logarithmic). Essentially, correlation is the measure of how two or more variables are related to one another. There are several correlation coefficients, often denoted ρ or r, measuring the degree of correlation. The most common of these is the Pearson correlation coefficient, which is sensitive only to a linear relationship between two variables (which may be present even when one variable is a nonlinear function of the other). Other correlation coefficients – such as Spearman's rank correlation coefficient – have been developed to be more robust than Pearson's and to detect less structured relationships between variables.[1][2][3] Mutual information can also be applied to measure dependence between two variables.
The most familiar measure of dependence between two quantities is the Pearson product-moment correlation coefficient (PPMCC), or "Pearson's correlation coefficient", commonly called simply "the correlation coefficient". It is obtained as the ratio of the covariance of the two variables to the product of their standard deviations; equivalently, the covariance divided by the square root of the product of the variances. Karl Pearson developed the coefficient from a similar but slightly different idea by Francis Galton.[4]
A Pearson product-moment correlation coefficient attempts to establish a line of best fit through a dataset of two variables; the resulting coefficient indicates how far the actual dataset deviates from that line. Depending on the sign of the coefficient, the relationship between the variables of the data set, if any, is negative or positive.
The population correlation coefficient ρX,Y between two random variables X and Y with expected values μX and μY and standard deviations σX and σY is defined as:
ρX,Y=corr(X,Y)=cov(X,Y)σXσY=E[(X−μX)(Y−μY)]σXσY,ifσXσY>0.{\displaystyle \rho _{X,Y}=\operatorname {corr} (X,Y)={\operatorname {cov} (X,Y) \over \sigma _{X}\sigma _{Y}}={\operatorname {E} [(X-\mu _{X})(Y-\mu _{Y})] \over \sigma _{X}\sigma _{Y}},\quad {\text{if}}\ \sigma _{X}\sigma _{Y}>0.}
where E is the expected value operator, cov means covariance, and corr is a widely used alternative notation for the correlation coefficient. The Pearson correlation is defined only if both standard deviations are finite and positive. An alternative formula purely in terms of moments is:
ρX,Y=E(XY)−E(X)E(Y)E(X2)−E(X)2⋅E(Y2)−E(Y)2{\displaystyle \rho _{X,Y}={\operatorname {E} (XY)-\operatorname {E} (X)\operatorname {E} (Y) \over {\sqrt {\operatorname {E} (X^{2})-\operatorname {E} (X)^{2}}}\cdot {\sqrt {\operatorname {E} (Y^{2})-\operatorname {E} (Y)^{2}}}}}
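The moment formula can be verified numerically (the data-generating model y = 2x + noise, with true correlation 2/√5 ≈ 0.894, and the sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)

# Compare the moment formula for rho with np.corrcoef on y = 2x + noise.
x = rng.normal(size=10_000)
y = 2 * x + rng.normal(size=10_000)

exy, ex, ey = (x * y).mean(), x.mean(), y.mean()
ex2, ey2 = (x ** 2).mean(), (y ** 2).mean()
rho = (exy - ex * ey) / (np.sqrt(ex2 - ex ** 2) * np.sqrt(ey2 - ey ** 2))

print(round(rho, 3), round(np.corrcoef(x, y)[0, 1], 3))
```

The two values agree exactly up to floating-point error, since the (n − 1) factors in np.corrcoef's covariance and standard deviations cancel.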
It is a corollary of theCauchy–Schwarz inequalitythat theabsolute valueof the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between −1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), −1 in the case of a perfect inverse (decreasing) linear relationship (anti-correlation),[5]and some value in theopen interval(−1,1){\displaystyle (-1,1)}in all other cases, indicating the degree oflinear dependencebetween the variables. As it approaches zero there is less of a relationship (closer to uncorrelated). The closer the coefficient is to either −1 or 1, the stronger the correlation between the variables.
If the variables are independent, Pearson's correlation coefficient is 0. However, because the correlation coefficient detects only linear dependencies between two variables, the converse is not necessarily true: a correlation coefficient of 0 does not imply that the variables are independent.
X,Yindependent⇒ρX,Y=0(X,Yuncorrelated)ρX,Y=0(X,Yuncorrelated)⇏X,Yindependent{\displaystyle {\begin{aligned}X,Y{\text{ independent}}\quad &\Rightarrow \quad \rho _{X,Y}=0\quad (X,Y{\text{ uncorrelated}})\\\rho _{X,Y}=0\quad (X,Y{\text{ uncorrelated}})\quad &\nRightarrow \quad X,Y{\text{ independent}}\end{aligned}}}
For example, suppose the random variableX{\displaystyle X}is symmetrically distributed about zero, andY=X2{\displaystyle Y=X^{2}}. ThenY{\displaystyle Y}is completely determined byX{\displaystyle X}, so thatX{\displaystyle X}andY{\displaystyle Y}are perfectly dependent, but their correlation is zero; they areuncorrelated. However, in the special case whenX{\displaystyle X}andY{\displaystyle Y}arejointly normal, uncorrelatedness is equivalent to independence.
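This example is easy to reproduce (the normal X and the sample size are illustrative; any distribution symmetric about zero works):

```python
import numpy as np

rng = np.random.default_rng(5)

# X symmetric about 0 and Y = X^2: Y is completely determined by X, yet
# cov(X, Y) = E[X^3] = 0, so the Pearson correlation is (near) zero.
x = rng.normal(size=200_000)
y = x ** 2

r = np.corrcoef(x, y)[0, 1]
r_dep = np.corrcoef(x ** 2, y)[0, 1]   # X^2 and Y, by contrast, match exactly
print(round(r, 2), round(r_dep, 2))
```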
Even though uncorrelated data does not necessarily imply independence, one can check whether random variables are independent using their mutual information: it is 0 if and only if they are independent.
Given a series of n measurements of the pair (Xi, Yi) indexed by i = 1, ..., n, the sample correlation coefficient can be used to estimate the population Pearson correlation ρX,Y between X and Y. The sample correlation coefficient is defined as

rxy = Σi=1..n (xi − x̄)(yi − ȳ) / ((n − 1) sx sy)
wherex¯{\displaystyle {\overline {x}}}andy¯{\displaystyle {\overline {y}}}are the samplemeansofX{\displaystyle X}andY{\displaystyle Y}, andsx{\displaystyle s_{x}}andsy{\displaystyle s_{y}}are thecorrected sample standard deviationsofX{\displaystyle X}andY{\displaystyle Y}.
Equivalent expressions for rxy are

rxy = (Σi xi yi − n x̄ ȳ) / (n s′x s′y) = (n Σ xi yi − Σ xi Σ yi) / (√(n Σ xi2 − (Σ xi)2) √(n Σ yi2 − (Σ yi)2))
wheresx′{\displaystyle s'_{x}}andsy′{\displaystyle s'_{y}}are theuncorrectedsample standard deviationsofX{\displaystyle X}andY{\displaystyle Y}.
If x and y are results of measurements that contain measurement error, the realistic limits on the correlation coefficient are not −1 to +1 but a smaller range.[6] For the case of a linear model with a single independent variable, the coefficient of determination (R squared) is the square of rxy, Pearson's product-moment coefficient.
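Both facts can be checked on a small made-up dataset (the numbers below are illustrative): computing r directly from its definition with corrected (ddof = 1) sample standard deviations, and confirming that R² from a simple linear fit equals r².

```python
import numpy as np

# Sample correlation coefficient from its definition.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])

n = len(x)
r = ((x - x.mean()) * (y - y.mean())).sum() / ((n - 1) * x.std(ddof=1) * y.std(ddof=1))

# Coefficient of determination of the least-squares line: R^2 = r^2.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
r2 = 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

print(r, r2)  # r = 0.8, r2 = 0.64
```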
Consider thejoint probability distributionofXandYgiven in the table below.
For this joint distribution, themarginal distributionsare:
This yields the following expectations and variances:
Therefore:
Rank correlation coefficients, such as Spearman's rank correlation coefficient and Kendall's rank correlation coefficient (τ), measure the extent to which, as one variable increases, the other variable tends to increase, without requiring that increase to be represented by a linear relationship. If, as the one variable increases, the other decreases, the rank correlation coefficients will be negative. It is common to regard these rank correlation coefficients as alternatives to Pearson's coefficient, used either to reduce the amount of calculation or to make the coefficient less sensitive to non-normality in distributions. However, this view has little mathematical basis, as rank correlation coefficients measure a different type of relationship than the Pearson product-moment correlation coefficient, and are best seen as measures of a different type of association, rather than as an alternative measure of the population correlation coefficient.[7][8]
To illustrate the nature of rank correlation, and its difference from linear correlation, consider the following four pairs of numbers (x, y):
As we go from each pair to the next pair, x increases, and so does y. This relationship is perfect, in the sense that an increase in x is always accompanied by an increase in y. This means that we have a perfect rank correlation, and both Spearman's and Kendall's correlation coefficients are 1, whereas in this example the Pearson product-moment correlation coefficient is 0.7544, indicating that the points are far from lying on a straight line. In the same way, if y always decreases when x increases, the rank correlation coefficients will be −1, while the Pearson product-moment correlation coefficient may or may not be close to −1, depending on how close the points are to a straight line. Although in the extreme cases of perfect rank correlation the two coefficients are both equal (being both +1 or both −1), this is not generally the case, and so values of the two coefficients cannot meaningfully be compared.[7] For example, for the three pairs (1, 1), (2, 3), (3, 2) Spearman's coefficient is 1/2, while Kendall's coefficient is 1/3.
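The three-pair example can be computed from first principles (no ties are present, so the simple formulas apply):

```python
import numpy as np
from itertools import combinations

# Spearman and Kendall coefficients for the pairs (1,1), (2,3), (3,2).
x = np.array([1, 2, 3])
y = np.array([1, 3, 2])

# Spearman: Pearson correlation of the ranks.
rx = x.argsort().argsort() + 1
ry = y.argsort().argsort() + 1
spearman = np.corrcoef(rx, ry)[0, 1]

# Kendall: (concordant pairs - discordant pairs) / total pairs.
pairs = list(combinations(range(len(x)), 2))
conc = sum(np.sign(x[j] - x[i]) * np.sign(y[j] - y[i]) for i, j in pairs)
kendall = conc / len(pairs)

print(spearman, kendall)  # 0.5 and 1/3
```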
The information given by a correlation coefficient is not enough to define the dependence structure between random variables. The correlation coefficient completely defines the dependence structure only in very particular cases, for example when the distribution is a multivariate normal distribution. (See diagram above.) In the case of elliptical distributions it characterizes the (hyper-)ellipses of equal density; however, it does not completely characterize the dependence structure (for example, a multivariate t-distribution's degrees of freedom determine the level of tail dependence).
For continuous variables, multiple alternative measures of dependence were introduced to address the deficiency of Pearson's correlation that it can be zero for dependent random variables (see[9] and references therein for an overview). They all share the important property that a value of zero implies independence. This led some authors[9][10] to recommend their routine usage, particularly of distance correlation.[11][12] Another alternative measure is the Randomized Dependence Coefficient.[13] The RDC is a computationally efficient, copula-based measure of dependence between multivariate random variables and is invariant with respect to non-linear scalings of random variables.
One important disadvantage of the alternative, more general measures is that, when used to test whether two variables are associated, they tend to have lower power compared to Pearson's correlation when the data follow a multivariate normal distribution.[9] This is an implication of the no free lunch theorem: to detect all kinds of relationships, these measures have to sacrifice power on other relationships, particularly for the important special case of a linear relationship with Gaussian marginals, for which Pearson's correlation is optimal. Another problem concerns interpretation. While Pearson's correlation can be interpreted for all values, the alternative measures can generally only be interpreted meaningfully at the extremes.[14]
For two binary variables, the odds ratio measures their dependence, and takes non-negative values, possibly infinity: [0, +∞]. Related statistics such as Yule's Y and Yule's Q normalize this to the correlation-like range [−1, 1]. The odds ratio is generalized by the logistic model to model cases where the dependent variables are discrete and there may be one or more independent variables.
The correlation ratio, entropy-based mutual information, total correlation, dual total correlation and polychoric correlation are all also capable of detecting more general dependencies, as is consideration of the copula between them, while the coefficient of determination generalizes the correlation coefficient to multiple regression.
The degree of dependence between variables X and Y does not depend on the scale on which the variables are expressed. That is, if we are analyzing the relationship between X and Y, most correlation measures are unaffected by transforming X to a + bX and Y to c + dY, where a, b, c, and d are constants (b and d being positive). This is true of some correlation statistics as well as their population analogues. Some correlation statistics, such as the rank correlation coefficient, are also invariant to monotone transformations of the marginal distributions of X and/or Y.
Most correlation measures are sensitive to the manner in whichXandYare sampled. Dependencies tend to be stronger if viewed over a wider range of values. Thus, if we consider the correlation coefficient between the heights of fathers and their sons over all adult males, and compare it to the same correlation coefficient calculated when the fathers are selected to be between 165 cm and 170 cm in height, the correlation will be weaker in the latter case. Several techniques have been developed that attempt to correct for range restriction in one or both variables, and are commonly used in meta-analysis; the most common are Thorndike's case II and case III equations.[15]
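The range-restriction effect is easy to simulate (the 175 cm mean, 7 cm spread, true correlation 0.5, and the 170–175 cm selection band are illustrative numbers, not estimates from real data):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated father/son heights with true correlation 0.5.
n = 200_000
father = 175 + 7 * rng.normal(size=n)
son = 175 + 0.5 * (father - 175) + 7 * np.sqrt(1 - 0.5 ** 2) * rng.normal(size=n)

r_all = np.corrcoef(father, son)[0, 1]

# Restrict fathers to a narrow height band: the observed correlation drops.
band = (father >= 170) & (father <= 175)
r_band = np.corrcoef(father[band], son[band])[0, 1]

print(round(r_all, 2), round(r_band, 2))
```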
Various correlation measures in use may be undefined for certain joint distributions of X and Y. For example, the Pearson correlation coefficient is defined in terms of moments, and hence will be undefined if the moments are undefined. Measures of dependence based on quantiles are always defined. Sample-based statistics intended to estimate population measures of dependence may or may not have desirable statistical properties such as being unbiased or asymptotically consistent, based on the spatial structure of the population from which the data were sampled.
Sensitivity to the data distribution can be used to an advantage. For example, scaled correlation is designed to use the sensitivity to the range in order to pick out correlations between fast components of time series.[16] By reducing the range of values in a controlled manner, the correlations on long time scales are filtered out and only the correlations on short time scales are revealed.
The correlation matrix of n random variables X1, ..., Xn is the n × n matrix C whose (i, j) entry is cij = corr(Xi, Xj).
Thus the diagonal entries are all identically one. If the measures of correlation used are product-moment coefficients, the correlation matrix is the same as the covariance matrix of the standardized random variables Xi/σ(Xi) for i = 1, ..., n. This applies both to the matrix of population correlations (in which case σ is the population standard deviation), and to the matrix of sample correlations (in which case σ denotes the sample standard deviation). Consequently, each is necessarily a positive-semidefinite matrix. Moreover, the correlation matrix is strictly positive definite if no variable can have all its values exactly generated as a linear function of the values of the others.
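These properties can be verified on simulated data (5 variables and 1000 observations are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)

# A sample correlation matrix equals the covariance matrix of the
# standardized variables, has a unit diagonal, and is positive semidefinite.
data = rng.normal(size=(5, 1000))   # 5 variables, 1000 observations
corr = np.corrcoef(data)

std_data = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
cov_of_std = np.cov(std_data, ddof=0)

print(np.allclose(corr, cov_of_std),            # same matrix
      np.allclose(np.diag(corr), 1.0),          # unit diagonal
      np.linalg.eigvalsh(corr).min() > -1e-8)   # positive semidefinite
```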
The correlation matrix is symmetric because the correlation betweenXi{\displaystyle X_{i}}andXj{\displaystyle X_{j}}is the same as the correlation betweenXj{\displaystyle X_{j}}andXi{\displaystyle X_{i}}.
A correlation matrix appears, for example, in one formula for thecoefficient of multiple determination, a measure of goodness of fit inmultiple regression.
Instatistical modelling, correlation matrices representing the relationships between variables are categorized into different correlation structures, which are distinguished by factors such as the number of parameters required to estimate them. For example, in anexchangeablecorrelation matrix, all pairs of variables are modeled as having the same correlation, so all non-diagonal elements of the matrix are equal to each other. On the other hand, anautoregressivematrix is often used when variables represent a time series, since correlations are likely to be greater when measurements are closer in time. Other examples include independent, unstructured, M-dependent, andToeplitz.
Inexploratory data analysis, theiconography of correlationsconsists in replacing a correlation matrix by a diagram where the "remarkable" correlations are represented by a solid line (positive correlation), or a dotted line (negative correlation).
In some applications (e.g., building data models from only partially observed data) one wants to find the "nearest" correlation matrix to an "approximate" correlation matrix (e.g., a matrix which typically lacks positive semi-definiteness due to the way it has been computed).
In 2002, Higham[17] formalized the notion of nearness using the Frobenius norm and provided a method for computing the nearest correlation matrix using Dykstra's projection algorithm, of which an implementation is available as an online Web API.[18]
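A minimal sketch in the spirit of this approach (simplified: the Dykstra correction step that the full algorithm uses is omitted here, so this alternating projection yields a valid correlation matrix near the input rather than the provably nearest one; the input matrix is a made-up indefinite example):

```python
import numpy as np

def near_corr(a, iters=200):
    """Alternately restore the unit diagonal and project onto the PSD cone."""
    x = (a + a.T) / 2
    for _ in range(iters):
        np.fill_diagonal(x, 1.0)           # unit-diagonal constraint
        w, v = np.linalg.eigh(x)
        x = (v * np.maximum(w, 0)) @ v.T   # PSD projection: clip eigenvalues
    return x

# An "approximate" correlation matrix that is not positive semidefinite.
a = np.array([[ 1.0,  0.9, -0.9],
              [ 0.9,  1.0,  0.9],
              [-0.9,  0.9,  1.0]])

x = near_corr(a)
print(np.linalg.eigvalsh(a).min() < 0,       # input is indefinite
      np.linalg.eigvalsh(x).min() > -1e-10)  # output is (numerically) PSD
```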
This sparked interest in the subject, with new theoretical (e.g., computing the nearest correlation matrix with factor structure[19]) and numerical (e.g., using Newton's method to compute the nearest correlation matrix[20]) results obtained in the subsequent years.
Similarly, for two stochastic processes {Xt}t∈T and {Yt}t∈T: if they are independent, then they are uncorrelated.[21]: p. 151 The converse is not true in general: even if two variables are uncorrelated, they need not be independent of each other.
The conventional dictum that "correlation does not imply causation" means that correlation cannot be used by itself to infer a causal relationship between the variables.[22]This dictum should not be taken to mean that correlations cannot indicate the potential existence of causal relations. However, the causes underlying the correlation, if any, may be indirect and unknown, and high correlations also overlap withidentityrelations (tautologies), where no causal process exists (e.g., between two variables measuring the same construct). Consequently, a correlation between two variables is not a sufficient condition to establish a causal relationship (in either direction).
A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so. Does improved mood lead to improved health, or does good health lead to good mood, or both? Or does some other factor underlie both? In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be.
The Pearson correlation coefficient indicates the strength of alinearrelationship between two variables, but its value generally does not completely characterize their relationship. In particular, if theconditional meanofY{\displaystyle Y}givenX{\displaystyle X}, denotedE(Y∣X){\displaystyle \operatorname {E} (Y\mid X)}, is not linear inX{\displaystyle X}, the correlation coefficient will not fully determine the form ofE(Y∣X){\displaystyle \operatorname {E} (Y\mid X)}.
The adjacent image showsscatter plotsofAnscombe's quartet, a set of four different pairs of variables created byFrancis Anscombe.[23]The foury{\displaystyle y}variables have the same mean (7.5), variance (4.12), correlation (0.816) and regression line (y=3+0.5x{\textstyle y=3+0.5x}). However, as can be seen on the plots, the distribution of the variables is very different. The first one (top left) seems to be distributed normally, and corresponds to what one would expect when considering two variables correlated and following the assumption of normality. The second one (top right) is not distributed normally; while an obvious relationship between the two variables can be observed, it is not linear. In this case the Pearson correlation coefficient does not indicate that there is an exact functional relationship: only the extent to which that relationship can be approximated by a linear relationship. In the third case (bottom left), the linear relationship is perfect, except for oneoutlierwhich exerts enough influence to lower the correlation coefficient from 1 to 0.816. Finally, the fourth example (bottom right) shows another example when one outlier is enough to produce a high correlation coefficient, even though the relationship between the two variables is not linear.
These examples indicate that the correlation coefficient, as asummary statistic, cannot replace visual examination of the data. The examples are sometimes said to demonstrate that the Pearson correlation assumes that the data follow anormal distribution, but this is only partially correct.[4]The Pearson correlation can be accurately calculated for any distribution that has a finitecovariance matrix, which includes most distributions encountered in practice. However, the Pearson correlation coefficient (taken together with the sample mean and variance) is only asufficient statisticif the data is drawn from amultivariate normal distribution. As a result, the Pearson correlation coefficient fully characterizes the relationship between variables if and only if the data are drawn from a multivariate normal distribution.
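The quartet's point can be reproduced numerically. The sketch below (plain Python, using the commonly published values of Anscombe's first dataset) computes the Pearson coefficient directly from its definition:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient from its definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Anscombe's first dataset (commonly published values)
x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]

r = pearson_r(x, y1)
assert abs(r - 0.816) < 0.001  # the value all four datasets share
```

Running the same function on the other three datasets of the quartet returns the same value to three decimal places, even though the scatter plots look entirely different.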
If a pair(X,Y){\displaystyle \ (X,Y)\ }of random variables follows abivariate normal distribution, the conditional meanE(X∣Y){\displaystyle \operatorname {\boldsymbol {\mathcal {E}}} (X\mid Y)}is a linear function ofY{\displaystyle Y}, and the conditional meanE(Y∣X){\displaystyle \operatorname {\boldsymbol {\mathcal {E}}} (Y\mid X)}is a linear function ofX.{\displaystyle \ X~.}The correlation coefficientρX,Y{\displaystyle \ \rho _{X,Y}\ }betweenX{\displaystyle \ X\ }andY,{\displaystyle \ Y\ ,}and themarginalmeans and variances ofX{\displaystyle \ X\ }andY{\displaystyle \ Y\ }determine this linear relationship:
whereE(X){\displaystyle \operatorname {\boldsymbol {\mathcal {E}}} (X)}andE(Y){\displaystyle \operatorname {\boldsymbol {\mathcal {E}}} (Y)}are the expected values ofX{\displaystyle \ X\ }andY,{\displaystyle \ Y\ ,}respectively, andσX{\displaystyle \ \sigma _{X}\ }andσY{\displaystyle \ \sigma _{Y}\ }are the standard deviations ofX{\displaystyle \ X\ }andY,{\displaystyle \ Y\ ,}respectively.
The empirical correlationr{\displaystyle r}is anestimateof the correlation coefficientρ.{\displaystyle \ \rho ~.}A distribution estimate forρ{\displaystyle \ \rho \ }is given by
whereFHyp{\displaystyle \ F_{\mathsf {Hyp}}\ }is theGaussian hypergeometric function.
This density is both a Bayesianposteriordensity and an exact optimalconfidence distributiondensity.[24][25]
|
https://en.wikipedia.org/wiki/Correlation_and_dependence
|
Inprobability theoryandstatistics, acentral momentis amomentof aprobability distributionof arandom variableabout the random variable'smean; that is, it is theexpected valueof a specified integer power of the deviation of the random variable from the mean. The various moments form one set of values by which the properties of a probability distribution can be usefully characterized. Central moments are used in preference to ordinary moments, computed in terms of deviations from the mean instead of from zero, because the higher-order central moments relate only to the spread and shape of the distribution, rather than also to itslocation.
Sets of central moments can be defined for both univariate and multivariate distributions.
Then-thmomentabout themean(orn-thcentral moment) of a real-valuedrandom variableXis the quantityμn:= E[(X− E[X])n], where E is theexpectation operator. For acontinuousunivariateprobability distributionwithprobability density functionf(x), then-th moment about the meanμis[1]μn=E[(X−E[X])n]=∫−∞+∞(x−μ)nf(x)dx.{\displaystyle \mu _{n}=\operatorname {E} \left[{\left(X-\operatorname {E} [X]\right)}^{n}\right]=\int _{-\infty }^{+\infty }(x-\mu )^{n}f(x)\,\mathrm {d} x.}
For random variables that have no mean, such as theCauchy distribution, central moments are not defined.
The first few central moments have intuitive interpretations:
For alln, then-th central moment ishomogeneousof degreen:
μn(cX)=cnμn(X).{\displaystyle \mu _{n}(cX)=c^{n}\mu _{n}(X).\,}
Only for n = 1, 2, or 3 do we have an additivity property for random variables X and Y that are independent:
μn(X+Y)=μn(X)+μn(Y){\displaystyle \mu _{n}(X+Y)=\mu _{n}(X)+\mu _{n}(Y)\,}providedn∈{1, 2, 3}.
A related functional that shares the translation-invariance and homogeneity properties with then-th central moment, but continues to have this additivity property even whenn≥ 4is then-thcumulantκn(X). Forn= 1, then-th cumulant is just theexpected value; forn= either 2 or 3, then-th cumulant is just then-th central moment; forn≥ 4, then-th cumulant is ann-th-degree monic polynomial in the firstnmoments (about zero), and is also a (simpler)n-th-degree polynomial in the firstncentral moments.
Sometimes it is convenient to convert moments about the origin to moments about the mean. The general equation for converting then-th-order moment about the origin to the moment about the mean is
μn=E[(X−E[X])n]=∑j=0n(nj)(−1)n−jμj′μn−j,{\displaystyle \mu _{n}=\operatorname {E} \left[\left(X-\operatorname {E} [X]\right)^{n}\right]=\sum _{j=0}^{n}{\binom {n}{j}}{\left(-1\right)}^{n-j}\mu '_{j}\mu ^{n-j},}
whereμis the mean of the distribution, and the moment about the origin is given by
μm′=∫−∞+∞xmf(x)dx=E[Xm]=∑j=0m(mj)μjμm−j.{\displaystyle \mu '_{m}=\int _{-\infty }^{+\infty }x^{m}f(x)\,dx=\operatorname {E} [X^{m}]=\sum _{j=0}^{m}{\binom {m}{j}}\mu _{j}\mu ^{m-j}.}
For the casesn= 2, 3, 4— which are of most interest because of the relations tovariance,skewness, andkurtosis, respectively — this formula becomes (noting thatμ=μ1′{\displaystyle \mu =\mu '_{1}}andμ0′=1{\displaystyle \mu '_{0}=1}):
μ2=μ2′−μ2{\displaystyle \mu _{2}=\mu '_{2}-\mu ^{2}\,}which is commonly referred to asVar(X)=E[X2]−(E[X])2{\displaystyle \operatorname {Var} (X)=\operatorname {E} [X^{2}]-\left(\operatorname {E} [X]\right)^{2}}
μ3=μ3′−3μμ2′+2μ3μ4=μ4′−4μμ3′+6μ2μ2′−3μ4.{\displaystyle {\begin{aligned}\mu _{3}&=\mu '_{3}-3\mu \mu '_{2}+2\mu ^{3}\\\mu _{4}&=\mu '_{4}-4\mu \mu '_{3}+6\mu ^{2}\mu '_{2}-3\mu ^{4}.\end{aligned}}}
... and so on,[2]followingPascal's triangle, i.e.
μ5=μ5′−5μμ4′+10μ2μ3′−10μ3μ2′+4μ5.{\displaystyle \mu _{5}=\mu '_{5}-5\mu \mu '_{4}+10\mu ^{2}\mu '_{3}-10\mu ^{3}\mu '_{2}+4\mu ^{5}.\,}
because5μ4μ1′−μ5μ0′=5μ4μ−μ5=5μ5−μ5=4μ5{\displaystyle 5\mu ^{4}\mu '_{1}-\mu ^{5}\mu '_{0}=5\mu ^{4}\mu -\mu ^{5}=5\mu ^{5}-\mu ^{5}=4\mu ^{5}}.
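The conversion formulas above are easy to check numerically. The following sketch (an illustrative Python check on an arbitrary small dataset) verifies the n = 2 and n = 3 identities:

```python
def raw_moment(xs, n):
    """n-th moment about the origin of an empirical distribution."""
    return sum(x ** n for x in xs) / len(xs)

def central_moment(xs, n):
    """n-th moment about the mean."""
    mu = raw_moment(xs, 1)
    return sum((x - mu) ** n for x in xs) / len(xs)

xs = [2.0, 3.0, 5.0, 7.0, 11.0]  # arbitrary data for illustration
mu = raw_moment(xs, 1)
m2p, m3p = raw_moment(xs, 2), raw_moment(xs, 3)

# mu_2 = mu'_2 - mu^2  (the familiar variance identity)
assert abs(central_moment(xs, 2) - (m2p - mu ** 2)) < 1e-9
# mu_3 = mu'_3 - 3 mu mu'_2 + 2 mu^3
assert abs(central_moment(xs, 3) - (m3p - 3 * mu * m2p + 2 * mu ** 3)) < 1e-9
```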
The following sum is a stochastic variable having acompound distribution
W=∑i=1MYi,{\displaystyle W=\sum _{i=1}^{M}Y_{i},}
where theYi{\displaystyle Y_{i}}are mutually independent random variables sharing the same common distribution andM{\displaystyle M}a random integer variable independent of theYk{\displaystyle Y_{k}}with its own distribution. The moments ofW{\displaystyle W}are obtained as
E[Wn]=∑i=0nE[(Mi)]∑j=0i(ij)(−1)i−jE[(∑k=1jYk)n],{\displaystyle \operatorname {E} [W^{n}]=\sum _{i=0}^{n}\operatorname {E} \left[{\binom {M}{i}}\right]\sum _{j=0}^{i}{\binom {i}{j}}{\left(-1\right)}^{i-j}\operatorname {E} \left[\left(\sum _{k=1}^{j}Y_{k}\right)^{n}\right],}
whereE[(∑k=1jYk)n]{\textstyle \operatorname {E} \left[{\left(\sum _{k=1}^{j}Y_{k}\right)}^{n}\right]}is defined as zero forj=0{\displaystyle j=0}.
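For n = 1 the moment formula above reduces to Wald's identity, E[W] = E[M] E[Y]. A small simulation sketch illustrates this (the choices of M uniform on {0, ..., 4} and Y uniform on (0, 1) are arbitrary, for illustration only):

```python
import random

random.seed(42)

def draw_w():
    m = random.randint(0, 4)  # random count M, with E[M] = 2
    # Y_i ~ Uniform(0, 1), so E[Y] = 0.5
    return sum(random.random() for _ in range(m))

n_sims = 200_000
mean_w = sum(draw_w() for _ in range(n_sims)) / n_sims
# Wald's identity (the n = 1 case of the moment formula): E[W] = E[M] E[Y] = 1.0
assert abs(mean_w - 1.0) < 0.02
```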
In distributions that aresymmetric about their means(unaffected by beingreflectedabout the mean), all odd central moments equal zero whenever they exist, because in the formula for then-th moment, each term involving a value ofXless than the mean by a certain amount exactly cancels out the term involving a value ofXgreater than the mean by the same amount.
For acontinuousbivariateprobability distributionwithprobability density functionf(x,y)the(j,k)moment about the meanμ= (μX,μY)isμj,k=E[(X−E[X])j(Y−E[Y])k]=∫−∞+∞∫−∞+∞(x−μX)j(y−μY)kf(x,y)dxdy.{\displaystyle {\begin{aligned}\mu _{j,k}&=\operatorname {E} \left[{\left(X-\operatorname {E} [X]\right)}^{j}{\left(Y-\operatorname {E} [Y]\right)}^{k}\right]\\[2pt]&=\int _{-\infty }^{+\infty }\int _{-\infty }^{+\infty }{\left(x-\mu _{X}\right)}^{j}{\left(y-\mu _{Y}\right)}^{k}f(x,y)\,dx\,dy.\end{aligned}}}
Then-th central moment for a complex random variableXis defined as[3]
αn=E[(X−E[X])n],{\displaystyle \alpha _{n}=\operatorname {E} \left[{\left(X-\operatorname {E} [X]\right)}^{n}\right],}
The absoluten-th central moment ofXis defined as
βn=E[|(X−E[X])|n].{\displaystyle \beta _{n}=\operatorname {E} \left[{\left|\left(X-\operatorname {E} [X]\right)\right|}^{n}\right].}
The 2nd-order central momentβ2is called thevarianceofXwhereas the 2nd-order central momentα2is thepseudo-varianceofX.
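A minimal numerical illustration of the distinction (the four-point distribution is an arbitrary choice): for X uniform on {1, i, −1, −i}, the variance is 1 while the pseudo-variance vanishes:

```python
# X uniform on the four points {1, i, -1, -i}: the mean is 0 by symmetry
xs = [1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j]
mean = sum(xs) / len(xs)

beta2 = sum(abs(x - mean) ** 2 for x in xs) / len(xs)   # variance: E|X - EX|^2
alpha2 = sum((x - mean) ** 2 for x in xs) / len(xs)     # pseudo-variance: E[(X - EX)^2]

assert mean == 0
assert beta2 == 1.0   # every point lies at distance 1 from the mean
assert alpha2 == 0    # the squares 1, -1, 1, -1 cancel exactly
```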
|
https://en.wikipedia.org/wiki/Central_moment
|
Instatistics, alocation parameterof aprobability distributionis a scalar- or vector-valuedparameterx0{\displaystyle x_{0}}, which determines the "location" or shift of the distribution. In the literature of location parameter estimation, probability distributions with such a parameter are formally defined in one of the following equivalent ways:
A direct example of a location parameter is the parameterμ{\displaystyle \mu }of thenormal distribution. To see this, note that the probability density functionf(x|μ,σ){\displaystyle f(x|\mu ,\sigma )}of a normal distributionN(μ,σ2){\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}can have the parameterμ{\displaystyle \mu }factored out and be written as:
thus fulfilling the first of the definitions given above.
The above definition indicates, in the one-dimensional case, that ifx0{\displaystyle x_{0}}is increased, the probability density or mass function shifts rigidly to the right, maintaining its exact shape.
A location parameter can also be found in families having more than one parameter, such aslocation–scale families. In this case, the probability density function or probability mass function will be a special case of the more general form
wherex0{\displaystyle x_{0}}is the location parameter,θrepresents additional parameters, andfθ{\displaystyle f_{\theta }}is a function parametrized on the additional parameters.
Source:[4]
Letf(x){\displaystyle f(x)}be any probability density function and letμ{\displaystyle \mu }andσ>0{\displaystyle \sigma >0}be any given constants. Then the function
g(x|μ,σ)=1σf(x−μσ){\displaystyle g(x|\mu ,\sigma )={\frac {1}{\sigma }}f\left({\frac {x-\mu }{\sigma }}\right)}
is a probability density function.
The location family is then defined as follows:
Letf(x){\displaystyle f(x)}be any probability density function. Then the family of probability density functionsF={f(x−μ):μ∈R}{\displaystyle {\mathcal {F}}=\{f(x-\mu ):\mu \in \mathbb {R} \}}is called the location family with standard probability density functionf(x){\displaystyle f(x)}, whereμ{\displaystyle \mu }is called thelocation parameterfor the family.
An alternative way of thinking of location families is through the concept ofadditive noise. Ifx0{\displaystyle x_{0}}is a constant andWis randomnoisewith probability densityfW(w),{\displaystyle f_{W}(w),}thenX=x0+W{\displaystyle X=x_{0}+W}has probability densityfx0(x)=fW(x−x0){\displaystyle f_{x_{0}}(x)=f_{W}(x-x_{0})}and its distribution is therefore part of a location family.
For the continuous univariate case, consider a probability density functionf(x|θ),x∈[a,b]⊂R{\displaystyle f(x|\theta ),x\in [a,b]\subset \mathbb {R} }, whereθ{\displaystyle \theta }is a vector of parameters. A location parameterx0{\displaystyle x_{0}}can be added by defining:
it can be proved thatg{\displaystyle g}is a p.d.f. by verifying that it satisfies the two conditions[5]g(x|θ,x0)≥0{\displaystyle g(x|\theta ,x_{0})\geq 0}and∫−∞∞g(x|θ,x0)dx=1{\displaystyle \int _{-\infty }^{\infty }g(x|\theta ,x_{0})dx=1}.g{\displaystyle g}integrates to 1 because:
now making the variable changeu=x−x0{\displaystyle u=x-x_{0}}and updating the integration interval accordingly yields:
becausef(x|θ){\displaystyle f(x|\theta )}is a p.d.f. by hypothesis.g(x|θ,x0)≥0{\displaystyle g(x|\theta ,x_{0})\geq 0}follows becauseg{\displaystyle g}shares the image off{\displaystyle f}, which is nonnegative sincef{\displaystyle f}is a p.d.f.
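The normalization argument above can also be checked numerically. The sketch below (a crude trapezoidal rule; the standard normal density and the shift x0 = 2.5 are arbitrary illustrative choices) confirms that the shifted density still integrates to 1:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Standard normal density used as the base p.d.f. f."""
    return math.exp(-((x - mu) / sigma) ** 2 / 2) / (sigma * math.sqrt(2 * math.pi))

def integrate(f, a, b, n=100_000):
    """Simple trapezoidal rule on [a, b]."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

x0 = 2.5  # arbitrary location shift
# g(x | x0) = f(x - x0) still integrates to 1
total = integrate(lambda x: normal_pdf(x - x0), x0 - 10, x0 + 10)
assert abs(total - 1.0) < 1e-6
```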
|
https://en.wikipedia.org/wiki/Location_parameter
|
Ameanis a quantity representing the "center" of a collection of numbers and is intermediate to the extreme values of the set of numbers.[1]There are several kinds ofmeans(or "measures ofcentral tendency") inmathematics, especially instatistics. Each attempts to summarize or typify a given group ofdata, illustrating themagnitudeandsignof thedata set. Which of these measures is most illuminating depends on what is being measured, and on context and purpose.[2]
Thearithmetic mean, also known as "arithmetic average", is the sum of the values divided by the number of values. The arithmetic mean of a set of numbersx1,x2, ..., xnis typically denoted using anoverhead bar,x¯{\displaystyle {\bar {x}}}.[note 1]If the numbers are from observing asampleof alarger group, the arithmetic mean is termed thesample mean(x¯{\displaystyle {\bar {x}}}) to distinguish it from thegroup mean(orexpected value) of the underlying distribution, denotedμ{\displaystyle \mu }orμx{\displaystyle \mu _{x}}.[note 2][3]
Outside probability and statistics, a wide range of other notions of mean are often used ingeometryandmathematical analysis; examples are given below.
In mathematics, the three classicalPythagorean meansare thearithmetic mean(AM), thegeometric mean(GM), and theharmonic mean(HM). These means were studied with proportions byPythagoreansand later generations of Greek mathematicians[4]because of their importance in geometry and music.
Thearithmetic mean(or simplymeanoraverage) of a list of numbers is the sum of all of the numbers divided by their count. Similarly, the mean of a samplex1,x2,…,xn{\displaystyle x_{1},x_{2},\ldots ,x_{n}}, usually denoted byx¯{\displaystyle {\bar {x}}}, is the sum of the sampled values divided by the number of items in the sample.
For example, the arithmetic mean of five values: 4, 36, 45, 50, 75 is:
Thegeometric meanis an average that is useful for sets of positive numbers that are interpreted according to their product (as is the case with rates of growth) and not their sum (as is the case with the arithmetic mean):
For example, the geometric mean of five values: 4, 36, 45, 50, 75 is:
Theharmonic meanis an average which is useful for sets of numbers which are defined in relation to someunit, as in the case ofspeed(i.e., distance per unit of time):
For example, the harmonic mean of the five values: 4, 36, 45, 50, 75 is
If we have five pumps that can each empty a tank of a certain size in respectively 4, 36, 45, 50, and 75 minutes, then the harmonic mean of15{\displaystyle 15}minutes tells us that these five different pumps working together pump at the same rate as five identical pumps that can each empty the tank in15{\displaystyle 15}minutes.
AM, GM, and HM ofnonnegativereal numberssatisfy these inequalities:[5]
Equality holds if all the elements of the given sample are equal.
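The three means and the inequality chain can be verified directly for the running example; the values 4, 36, 45, 50, 75 happen to make all three means integers (42, 30, and 15):

```python
import math

xs = [4, 36, 45, 50, 75]
n = len(xs)

am = sum(xs) / n                    # arithmetic mean
gm = math.prod(xs) ** (1 / n)       # geometric mean
hm = n / sum(1 / x for x in xs)     # harmonic mean

assert am == 42.0
assert abs(gm - 30.0) < 1e-9
assert abs(hm - 15.0) < 1e-9
assert hm <= gm <= am   # HM <= GM <= AM for nonnegative reals
```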
Indescriptive statistics, the mean may be confused with themedian,modeormid-range, as any of these may incorrectly be called an "average" (more formally, a measure ofcentral tendency). The mean of a set of observations is the arithmetic average of the values; however, forskewed distributions, the mean is not necessarily the same as the middle value (median), or the most likely value (mode). For example, mean income is typically skewed upwards by a small number of people with very large incomes, so that the majority have an income lower than the mean. By contrast, the median income is the level at which half the population is below and half is above. The mode income is the most likely income and favors the larger number of people with lower incomes. While the median and mode are often more intuitive measures for such skewed data, many skewed distributions are in fact best described by their mean, including theexponentialandPoissondistributions.
The mean of aprobability distributionis the long-run arithmetic average value of arandom variablehaving that distribution. If the random variable is denoted byX{\displaystyle X}, then the mean is also known as theexpected valueofX{\displaystyle X}(denotedE(X){\displaystyle E(X)}). For adiscrete probability distribution, the mean is given by∑xP(x){\displaystyle \textstyle \sum xP(x)}, where the sum is taken over all possible values of the random variable andP(x){\displaystyle P(x)}is theprobability mass function. For acontinuous distribution, the mean is∫−∞∞xf(x)dx{\displaystyle \textstyle \int _{-\infty }^{\infty }xf(x)\,dx}, wheref(x){\displaystyle f(x)}is theprobability density function.[7]In all cases, including those in which the distribution is neither discrete nor continuous, the mean is theLebesgue integralof the random variable with respect to itsprobability measure. The mean need not exist or be finite; for some probability distributions the mean is infinite (+∞or−∞), while for others the mean isundefined.
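Both the discrete and continuous formulas can be illustrated with a short sketch (a fair die and an exponential density with rate 2 are arbitrary example distributions):

```python
import math

# Discrete: fair six-sided die, E[X] = sum over x of x P(x)
die_mean = sum(x for x in range(1, 7)) / 6
assert die_mean == 3.5

# Continuous: exponential with rate lam, E[X] = integral of x f(x) dx = 1/lam
lam = 2.0
def integrand(x):
    return x * lam * math.exp(-lam * x)

h, n_steps = 0.001, 20_000  # integrate out to x = 20, far past the effective support
total = sum(integrand(i * h) * h for i in range(n_steps))
assert abs(total - 1 / lam) < 1e-3
```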
Thegeneralized mean, also known as the power mean or Hölder mean, is an abstraction of thequadratic, arithmetic, geometric, and harmonic means. It is defined for a set ofnpositive numbersxiby
x¯(m)=(1n∑i=1nxim)1m{\displaystyle {\bar {x}}(m)=\left({\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{m}\right)^{\frac {1}{m}}}[1]
By choosing different values for the parameterm, the following types of means are obtained:
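A sketch of the power mean, treating m = 0 as the geometric-mean limit exp(mean of logs); the test values reuse the article's running example:

```python
import math

def power_mean(xs, m):
    """Generalized (power/Hoelder) mean; m = 0 is the geometric-mean limit."""
    n = len(xs)
    if m == 0:
        return math.exp(sum(math.log(x) for x in xs) / n)
    return (sum(x ** m for x in xs) / n) ** (1 / m)

xs = [4, 36, 45, 50, 75]
assert abs(power_mean(xs, 1) - 42) < 1e-9    # arithmetic mean
assert abs(power_mean(xs, 0) - 30) < 1e-9    # geometric mean
assert abs(power_mean(xs, -1) - 15) < 1e-9   # harmonic mean
assert power_mean(xs, 2) > power_mean(xs, 1) # the power mean increases with m
```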
This can be generalized further as thegeneralizedf-mean
and again a suitable choice of an invertiblefwill give
Theweighted arithmetic mean(or weighted average) is used if one wants to combine average values from different sized samples of the same population:
Wherexi¯{\displaystyle {\bar {x_{i}}}}andwi{\displaystyle w_{i}}are the mean and size of samplei{\displaystyle i}respectively. In other applications, they represent a measure for the reliability of the influence upon the mean by the respective values.
Sometimes, a set of numbers might contain outliers (i.e., data values which are much lower or much higher than the others). Often, outliers are erroneous data caused byartifacts. In this case, one can use atruncated mean. It involves discarding given parts of the data at the top or the bottom end, typically an equal amount at each end and then taking the arithmetic mean of the remaining data. The number of values removed is indicated as a percentage of the total number of values.
Theinterquartile meanis a specific example of a truncated mean. It is simply the arithmetic mean after removing the lowest and the highest quarter of values.
assuming the values have been ordered, so is simply a specific example of a weighted mean for a specific set of weights.
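A minimal sketch of the interquartile mean (the dataset is an arbitrary illustration; for simplicity the function assumes the sample size is divisible by 4):

```python
def interquartile_mean(xs):
    """Drop the lowest and highest quarter, average the middle half."""
    s = sorted(xs)
    q = len(s) // 4   # assumes len(xs) divisible by 4 for simplicity
    middle = s[q:len(s) - q]
    return sum(middle) / len(middle)

data = [5, 8, 4, 38, 8, 6, 9, 7, 7, 3, 1, 6]  # 12 values, one outlier (38)
assert interquartile_mean(data) == 6.5
assert sum(data) / len(data) == 8.5  # the outlier pulls the plain mean up
```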
In some circumstances, mathematicians may calculate a mean of an infinite (or even anuncountable) set of values. This can happen when calculating the mean valueyavg{\displaystyle y_{\text{avg}}}of a functionf(x){\displaystyle f(x)}. Intuitively, a mean of a function can be thought of as calculating the area under a section of a curve, and then dividing by the length of that section. This can be done crudely by counting squares on graph paper, or more precisely byintegration. The integration formula is written as:
In this case, care must be taken to make sure that the integral converges. But the mean may be finite even if the function itself tends to infinity at some points.
Angles, times of day, and other cyclical quantities requiremodular arithmeticto add and otherwise combine numbers. These quantities can be averaged using thecircular mean. In all these situations, it is possible that no mean exists, for example if all points being averaged are equidistant. Consider acolor wheel—there is no mean to the set of all colors. Additionally, there may not be auniquemean for a set of values: for example, when averaging points on a clock, the mean of the locations of 11:00 and 13:00 is 12:00, but this location is equivalent to that of 00:00.
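The circular mean can be computed by averaging unit vectors and taking the angle of the result. A sketch (angles in degrees, chosen to straddle the 0°/360° wraparound):

```python
import math

def circular_mean_deg(angles_deg):
    """Mean direction via the angle of the average unit vector."""
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360

# A naive arithmetic mean of 350 deg and 10 deg gives 180 deg;
# the circular mean is 0 deg (up to floating-point rounding).
m = circular_mean_deg([350, 10])
assert min(m, 360 - m) < 1e-6
```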
TheFréchet meangives a manner for determining the "center" of a mass distribution on asurfaceor, more generally,Riemannian manifold. Unlike many other means, the Fréchet mean is defined on a space whose elements cannot necessarily be added together or multiplied by scalars.
It is sometimes also known as theKarcher mean(named after Hermann Karcher).
In geometry, there are thousands of different definitions forthe center of a trianglethat can all be interpreted as the mean of a triangular set of points in the plane.[8]
This is an approximation to the mean for a moderately skewed distribution.[9]It is used inhydrocarbon explorationand is defined as:
whereP10{\textstyle P_{10}},P50{\textstyle P_{50}}andP90{\textstyle P_{90}}are the 10th, 50th and 90th percentiles of the distribution, respectively.
|
https://en.wikipedia.org/wiki/Mean
|
Thesample mean(sample average) orempirical mean(empirical average), and thesample covarianceorempirical covariancearestatisticscomputed from asampleof data on one or morerandom variables.
The sample mean is theaveragevalue (ormean value) of asampleof numbers taken from a largerpopulationof numbers, where "population" indicates not number of people but the entirety of relevant data, whether collected or not. A sample of 40 companies' sales from theFortune 500might be used for convenience instead of looking at the population, all 500 companies' sales. The sample mean is used as anestimatorfor the population mean, the average value in the entire population, where the estimate is more likely to be close to the population mean if the sample is large and representative. The reliability of the sample mean is estimated using thestandard error, which in turn is calculated using thevarianceof the sample. If the sample is random, the standard error falls with the size of the sample and the sample mean's distribution approaches the normal distribution as the sample size increases.
The term "sample mean" can also be used to refer to avectorof average values when the statistician is looking at the values of several variables in the sample, e.g. the sales, profits, and employees of a sample of Fortune 500 companies. In this case, there is not just a sample variance for each variable but a samplevariance-covariance matrix(or simplycovariance matrix) showing also the relationship between each pair of variables. This would be a 3×3 matrix when 3 variables are being considered. The sample covariance is useful in judging the reliability of the sample means as estimators and is also useful as an estimate of the population covariance matrix.
Due to their ease of calculation and other desirable characteristics, the sample mean and sample covariance are widely used in statistics to represent thelocationanddispersionof thedistributionof values in the sample, and to estimate the values for the population.
The sample mean is the average of the values of a variable in a sample, which is the sum of those values divided by the number of values. Using mathematical notation, if a sample ofNobservations on variableXis taken from the population, the sample mean is:
Under this definition, if the sample (1, 4, 1) is taken from the population (1,1,3,4,0,2,1,0), then the sample mean isx¯=(1+4+1)/3=2{\displaystyle {\bar {x}}=(1+4+1)/3=2}, as compared to the population mean ofμ=(1+1+3+4+0+2+1+0)/8=12/8=1.5{\displaystyle \mu =(1+1+3+4+0+2+1+0)/8=12/8=1.5}. Even if a sample is random, it is rarely perfectly representative, and other samples would have other sample means even if the samples were all from the same population. The sample (2, 1, 0), for example, would have a sample mean of 1.
If the statistician is interested inKvariables rather than one, each observation having a value for each of thoseKvariables, the overall sample mean consists ofKsample means for individual variables. Letxij{\displaystyle x_{ij}}be theithindependently drawn observation (i=1,...,N) on thejthrandom variable (j=1,...,K). These observations can be arranged intoNcolumn vectors, each withKentries, with theK×1 column vector giving thei-th observations of all variables being denotedxi{\displaystyle \mathbf {x} _{i}}(i=1,...,N).
Thesample mean vectorx¯{\displaystyle \mathbf {\bar {x}} }is a column vector whosej-th elementx¯j{\displaystyle {\bar {x}}_{j}}is the average value of theNobservations of thejthvariable:
Thus, the sample mean vector contains the average of the observations for each variable, and is written
Thesample covariance matrixis aK-by-KmatrixQ=[qjk]{\displaystyle \textstyle \mathbf {Q} =\left[q_{jk}\right]}with entries
whereqjk{\displaystyle q_{jk}}is an estimate of thecovariancebetween thejthvariable and thekthvariable of the population underlying the data.
In terms of the observation vectors, the sample covariance is
Alternatively, arranging the observation vectors as the columns of a matrix, so that
which is a matrix ofKrows andNcolumns.
Here, the sample covariance matrix can be computed as
where1N{\displaystyle \mathbf {1} _{N}}is anNby1vector of ones.
If the observations are arranged as rows instead of columns, sox¯{\displaystyle \mathbf {\bar {x}} }is now a 1×Krow vector andM=FT{\displaystyle \mathbf {M} =\mathbf {F} ^{\mathrm {T} }}is anN×Kmatrix whose columnjis the vector ofNobservations on variablej, then applying transposes
in the appropriate places yields
Like covariance matrices forrandom vectors, sample covariance matrices arepositive semi-definite. To prove it, note that for any matrixA{\displaystyle \mathbf {A} }the matrixATA{\displaystyle \mathbf {A} ^{T}\mathbf {A} }is positive semi-definite. Furthermore, a covariance matrix is positive definite if and only if the rank of thexi−x¯{\displaystyle \mathbf {x} _{i}-\mathbf {\bar {x}} }vectors is K.
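The definitions above can be sketched in a few lines of plain Python (the 2-variable dataset is an arbitrary illustration; the positive-semi-definiteness check uses the 2×2 criterion of nonnegative trace and determinant):

```python
def sample_covariance(obs):
    """obs: list of N observation vectors, each of length K. Returns the K x K matrix Q."""
    n, k = len(obs), len(obs[0])
    mean = [sum(row[j] for row in obs) / n for j in range(k)]
    return [[sum((row[j] - mean[j]) * (row[l] - mean[l]) for row in obs) / (n - 1)
             for l in range(k)] for j in range(k)]

obs = [[1, 2], [2, 4], [3, 6], [4, 8]]  # second variable is exactly 2x the first
q = sample_covariance(obs)

assert abs(q[0][0] - 5 / 3) < 1e-9    # sample variance of the first variable
assert abs(q[0][1] - 10 / 3) < 1e-9   # sample covariance
assert q[0][1] == q[1][0]             # the matrix is symmetric
# positive semi-definite: for a 2x2 matrix, nonnegative trace and determinant
det = q[0][0] * q[1][1] - q[0][1] * q[1][0]
assert q[0][0] + q[1][1] >= 0 and det >= -1e-9
```

Because the second variable is an exact multiple of the first, the determinant is zero: the matrix is semi-definite but not positive definite, matching the rank condition above.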
The sample mean and the sample covariance matrix areunbiased estimatesof themeanand thecovariance matrixof therandom vectorX{\displaystyle \textstyle \mathbf {X} }, a row vector whosejthelement (j = 1, ..., K) is one of the random variables.[1]The sample covariance matrix hasN−1{\displaystyle \textstyle N-1}in the denominator rather thanN{\displaystyle \textstyle N}due to a variant ofBessel's correction: In short, the sample covariance relies on the difference between each observation and the sample mean, but the sample mean is slightly correlated with each observation since it is defined in terms of all observations. If the population meanE(X){\displaystyle \operatorname {E} (\mathbf {X} )}is known, the analogous unbiased estimate
using the population mean, hasN{\displaystyle \textstyle N}in the denominator. This is an example of why in probability and statistics it is essential to distinguish betweenrandom variables(upper case letters) andrealizationsof the random variables (lower case letters).
Themaximum likelihoodestimate of the covariance
for theGaussian distributioncase hasNin the denominator as well. The ratio of 1/Nto 1/(N− 1) approaches 1 for largeN, so the maximum likelihood estimate approximately equals the unbiased estimate when the sample is large.
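The bias of the 1/N estimator can be shown exactly by enumerating every equally likely sample; the tiny population {0, 1, 2} and sample size 2 are arbitrary illustrative choices:

```python
from fractions import Fraction
from itertools import product

population = [0, 1, 2]
mu = Fraction(sum(population), len(population))                               # 1
sigma2 = sum((Fraction(x) - mu) ** 2 for x in population) / len(population)  # 2/3

# Average each variance estimator over all size-2 samples drawn with replacement
n = 2
samples = list(product(population, repeat=n))
biased, unbiased = Fraction(0), Fraction(0)
for s in samples:
    m = Fraction(sum(s), n)
    ss = sum((Fraction(x) - m) ** 2 for x in s)
    biased += ss / n          # maximum-likelihood style: divide by N
    unbiased += ss / (n - 1)  # Bessel-corrected: divide by N - 1
biased /= len(samples)
unbiased /= len(samples)

assert unbiased == sigma2          # E[s^2] = sigma^2: the corrected estimator is unbiased
assert biased == Fraction(1, 3)    # underestimates sigma^2 = 2/3
```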
For each random variable, the sample mean is a goodestimatorof the population mean, where a "good" estimator is defined as beingefficientand unbiased. Of course the estimator will likely not be the true value of thepopulationmean since different samples drawn from the same distribution will give different sample means and hence different estimates of the true mean. Thus the sample mean is arandom variable, not a constant, and consequently has its own distribution. For a random sample ofNobservations on thejthrandom variable, the sample mean's distribution itself has mean equal to the population meanE(Xj){\displaystyle E(X_{j})}and variance equal toσj2/N{\displaystyle \sigma _{j}^{2}/N}, whereσj2{\displaystyle \sigma _{j}^{2}}is the population variance.
The arithmetic mean of apopulation, or population mean, is often denotedμ.[2]The sample meanx¯{\displaystyle {\bar {x}}}(the arithmetic mean of a sample of values drawn from the population) makes a goodestimatorof the population mean, as its expected value is equal to the population mean (that is, it is anunbiased estimator). The sample mean is arandom variable, not a constant, since its calculated value will randomly differ depending on which members of the population are sampled, and consequently it will have its own distribution. For a random sample ofnindependentobservations, the expected value of the sample mean is
and thevarianceof the sample mean is
If the samples are not independent, butcorrelated, then special care has to be taken in order to avoid the problem ofpseudoreplication.
If the population isnormally distributed, then the sample mean is normally distributed as follows:
If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed ifnis large andσ2/n< +∞. This is a consequence of thecentral limit theorem.
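The identities E(x̄) = μ and Var(x̄) = σ²/n can be checked exactly by enumerating all equally likely samples drawn with replacement (population {1, 2, 3, 4} and n = 2 are arbitrary choices):

```python
from fractions import Fraction
from itertools import product

population = [1, 2, 3, 4]
N = len(population)
mu = Fraction(sum(population), N)                               # 5/2
sigma2 = sum((Fraction(x) - mu) ** 2 for x in population) / N   # 5/4

n = 2
means = [Fraction(sum(s), n) for s in product(population, repeat=n)]
e_mean = sum(means) / len(means)
var_mean = sum((m - e_mean) ** 2 for m in means) / len(means)

assert e_mean == mu              # E[xbar] = mu: the sample mean is unbiased
assert var_mean == sigma2 / n    # Var(xbar) = sigma^2 / n
```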
In a weighted sample, each vectorxi{\displaystyle \textstyle {\textbf {x}}_{i}}(each set of single observations on each of theKrandom variables) is assigned a weightwi≥0{\displaystyle \textstyle w_{i}\geq 0}. Without loss of generality, assume that the weights arenormalized:
(If they are not, divide the weights by their sum).
Then theweighted meanvectorx¯{\displaystyle \textstyle \mathbf {\bar {x}} }is given by
and the elementsqjk{\displaystyle q_{jk}}of the weighted covariance matrixQ{\displaystyle \textstyle \mathbf {Q} }are[3]
If all weights are the same,wi=1/N{\displaystyle \textstyle w_{i}=1/N}, the weighted mean and covariance reduce to the (biased) sample mean and covariance mentioned above.
The sample mean and sample covariance are notrobust statistics, meaning that they are sensitive tooutliers. As robustness is often a desired trait, particularly in real-world applications, robust alternatives may prove desirable, notablyquantile-based statistics such as thesample medianfor location,[4]andinterquartile range(IQR) for dispersion. Other alternatives includetrimmingandWinsorising, as in thetrimmed meanand theWinsorized mean.
|
https://en.wikipedia.org/wiki/Sample_mean
|
Beliefs depend on the available information. This idea is formalized inprobability theorybyconditioning.Conditional probabilities,conditional expectations, andconditional probability distributionsare treated on three levels:discrete probabilities,probability density functions, andmeasure theory. Conditioning leads to a non-random result if the condition is completely specified; otherwise, if the condition is left random, the result of conditioning is also random.
Example: A fair coin is tossed 10 times; the random variable X is the number of heads in these 10 tosses, and Y is the number of heads in the first 3 tosses. In spite of the fact that Y emerges before X, it may happen that someone knows X but not Y.
Given thatX= 1, the conditional probability of the eventY= 0 is
More generally,
One may also treat the conditional probability as a random variable, — a function of the random variableX, namely,
Theexpectationof this random variable is equal to the (unconditional) probability,
namely,
which is an instance of thelaw of total probabilityE(P(A|X))=P(A).{\displaystyle \mathbb {E} (\mathbb {P} (A|X))=\mathbb {P} (A).}
Thus,P(Y=0|X=1){\displaystyle \mathbb {P} (Y=0|X=1)}may be treated as the value of the random variableP(Y=0|X){\displaystyle \mathbb {P} (Y=0|X)}corresponding toX= 1.On the other hand,P(Y=0|X=1){\displaystyle \mathbb {P} (Y=0|X=1)}is well-defined irrespective of other possible values ofX.
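The discrete conditional probability above can be checked by exact enumeration of the 2^10 equally likely outcomes, a brute-force sketch rather than a closed-form derivation:

```python
from itertools import product
from fractions import Fraction

# All 2^10 equally likely sequences of 10 fair coin tosses.
# X = total heads in 10 tosses, Y = heads in the first 3 tosses.
outcomes = list(product((0, 1), repeat=10))

def cond_prob(event, given):
    hits = [o for o in outcomes if given(o)]
    return Fraction(sum(event(o) for o in hits), len(hits))

p = cond_prob(lambda o: sum(o[:3]) == 0, lambda o: sum(o) == 1)
print(p)  # 7/10: the single head must fall among the last 7 tosses
```

Exact rational arithmetic via `Fraction` avoids any floating-point doubt about the value 7/10.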
Given thatX= 1, the conditional expectation of the random variableYisE(Y|X=1)=310{\displaystyle \mathbb {E} (Y|X=1)={\tfrac {3}{10}}}More generally,
(In this example it appears to be a linear function, but in general it is nonlinear.) One may also treat the conditional expectation as a random variable, — a function of the random variableX, namely,
The expectation of this random variable is equal to the (unconditional) expectation ofY,
namely,
or simply
which is an instance of thelaw of total expectationE(E(Y|X))=E(Y).{\displaystyle \mathbb {E} (\mathbb {E} (Y|X))=\mathbb {E} (Y).}
The random variableE(Y|X){\displaystyle \mathbb {E} (Y|X)}is the best predictor ofYgivenX. That is, it minimizes the mean square errorE(Y−f(X))2{\displaystyle \mathbb {E} (Y-f(X))^{2}}on the class of all random variables of the formf(X). This class of random variables remains intact ifXis replaced, say, with 2X. Thus,E(Y|2X)=E(Y|X).{\displaystyle \mathbb {E} (Y|2X)=\mathbb {E} (Y|X).}It does not mean thatE(Y|2X)=310×2X;{\displaystyle \mathbb {E} (Y|2X)={\tfrac {3}{10}}\times 2X;}rather,E(Y|2X)=320×2X=310X.{\displaystyle \mathbb {E} (Y|2X)={\tfrac {3}{20}}\times 2X={\tfrac {3}{10}}X.}In particular,E(Y|2X=2)=310.{\displaystyle \mathbb {E} (Y|2X=2)={\tfrac {3}{10}}.}More generally,E(Y|g(X))=E(Y|X){\displaystyle \mathbb {E} (Y|g(X))=\mathbb {E} (Y|X)}for every functiongthat is one-to-one on the set of all possible values ofX. The values ofXare irrelevant; what matters is the partition (denote it αX)
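The identities E(Y | X) = (3/10) X and the invariance of conditioning under the one-to-one map X ↦ 2X can both be verified by the same exhaustive enumeration, a sketch in exact arithmetic:

```python
from itertools import product
from fractions import Fraction

outcomes = list(product((0, 1), repeat=10))  # 10 fair tosses, equally likely

def cond_exp_Y(pred):
    """E(Y | event), Y = heads in the first 3 tosses."""
    hits = [o for o in outcomes if pred(o)]
    return Fraction(sum(sum(o[:3]) for o in hits), len(hits))

# E(Y | X = x) = (3/10) x for every x, hence E(Y | X) = (3/10) X ...
assert all(cond_exp_Y(lambda o, x=x: sum(o) == x) == Fraction(3, 10) * x
           for x in range(11))
# ... and conditioning on 2X = 2 (a one-to-one function of X) changes nothing:
assert cond_exp_Y(lambda o: 2 * sum(o) == 2) == Fraction(3, 10)
print("E(Y|X) = 3X/10 verified")
```

This makes concrete the point that E(Y | 2X = 2) = 3/10, not (3/10) × 2.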
of thesample spaceΩ into disjoint sets {X=xn}. (Herex1,x2,…{\displaystyle x_{1},x_{2},\ldots }are all possible values ofX.) Given an arbitrary partition α of Ω, one may define the random variableE (Y| α ).Still,E ( E (Y| α)) = E (Y).
Conditional probability may be treated as a special case of conditional expectation. Namely,P (A|X) = E (Y|X)ifYis theindicatorofA. Therefore the conditional probability also depends on the partition αXgenerated byXrather than onXitself;P (A|g(X) ) = P (A|X) = P (A| α),α = αX= αg(X).
On the other hand, conditioning on an eventBis well-defined, provided thatP(B)≠0,{\displaystyle \mathbb {P} (B)\neq 0,}irrespective of any partition that may containBas one of several parts.
GivenX= x, the conditional distribution ofYis
for 0 ≤ y ≤ min(3, x). It is the hypergeometric distribution H(x; 3, 7), or equivalently, H(3; x, 10 − x). The corresponding expectation 0.3x, obtained from the general formula
for H(n; R, W), is nothing but the conditional expectation E(Y | X = x) = 0.3x.
Treating H(X; 3, 7) as a random distribution (a random vector in the four-dimensional space of all measures on {0, 1, 2, 3}), one may take its expectation, getting the unconditional distribution of Y, the binomial distribution Bin(3, 0.5). This fact amounts to the equality
fory= 0,1,2,3; which is an instance of thelaw of total probability.
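The mixture claim above — averaging the hypergeometric conditional laws H(x; 3, 7) over X ~ Bin(10, 1/2) recovers Bin(3, 1/2) — can be checked exactly:

```python
from math import comb
from fractions import Fraction

def hyper(y, x):
    """P(Y = y | X = x) for the coin example: hypergeometric H(x; 3, 7)."""
    return Fraction(comb(3, y) * comb(7, x - y), comb(10, x))

# Mix the conditional distributions over X ~ Bin(10, 1/2):
for y in range(4):
    mix = sum(Fraction(comb(10, x), 2**10) * hyper(y, x)
              for x in range(11) if 0 <= x - y <= 7)
    assert mix == Fraction(comb(3, y), 2**3)  # Bin(3, 1/2) pmf
print("mixture of H(x; 3, 7) over X equals Bin(3, 1/2)")
```

The binomial coefficients C(10, x) cancel against the hypergeometric denominators, which is why the identity holds term by term.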
Example. A point of the sphere x2 + y2 + z2 = 1 is chosen at random according to the uniform distribution on the sphere.[1] The random variables X, Y, Z are the coordinates of the random point. The joint density of X, Y, Z does not exist (since the sphere is of zero volume), but the joint density fX,Y of X, Y exists,
(The density is non-constant because of a non-constant angle between the sphere and the plane.) The density of X may be calculated by integration,
surprisingly, the result does not depend on x in (−1,1),
which means that X is distributed uniformly on (−1,1). The same holds for Y and Z (and in fact, for aX + bY + cZ whenever a2 + b2 + c2 = 1).
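The perhaps surprising uniformity of the marginal can be checked by a Monte Carlo sketch, sampling uniformly on the sphere via normalized Gaussian vectors (a standard device) and testing P(X ≤ t) = (t + 1)/2:

```python
import random
import math

# Uniform points on the unit sphere via normalized 3D Gaussian vectors;
# the marginal of X should be uniform on (-1, 1).
random.seed(1)
N = 200_000
xs = []
for _ in range(N):
    gx, gy, gz = (random.gauss(0, 1) for _ in range(3))
    r = math.sqrt(gx * gx + gy * gy + gz * gz)
    xs.append(gx / r)

print(sum(x <= 0.0 for x in xs) / N)   # close to 0.5  = (0.0 + 1)/2
print(sum(x <= 0.5 for x in xs) / N)   # close to 0.75 = (0.5 + 1)/2
```

This is Archimedes' hat-box theorem in disguise: equal-width slabs cut equal areas from the sphere.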
Example. A different method of calculating the marginal distribution function is provided below.[2][3]
fX,Y,Z(x,y,z)=34π{\displaystyle f_{X,Y,Z}(x,y,z)={\frac {3}{4\pi }}}
fX(x)=∫−1−x2+1−x2∫−1−x2−y2+1−x2−y23dzdy4π=3(1−x2)/4;{\displaystyle f_{X}(x)=\int _{-{\sqrt {1-x^{2}}}}^{+{\sqrt {1-x^{2}}}}\int _{-{\sqrt {1-x^{2}-y^{2}}}}^{+{\sqrt {1-x^{2}-y^{2}}}}{\frac {3\,\mathrm {d} z\,\mathrm {d} y}{4\pi }}=3(1-x^{2})/4\,;}
Given thatX= 0.5, the conditional probability of the eventY≤ 0.75 is the integral of the conditional density,
More generally,
for allxandysuch that −1 <x< 1 (otherwise the denominatorfX(x) vanishes) and−1−x2<y<1−x2{\displaystyle \textstyle -{\sqrt {1-x^{2}}}<y<{\sqrt {1-x^{2}}}}(otherwise the conditional probability degenerates to 0 or 1). One may also treat the conditional probability as a random variable, — a function of the random variableX, namely,
The expectation of this random variable is equal to the (unconditional) probability,
which is an instance of thelaw of total probabilityE ( P (A|X) ) = P (A).
The conditional probability P(Y ≤ 0.75 | X = 0.5) cannot be interpreted as P(Y ≤ 0.75, X = 0.5) / P(X = 0.5), since the latter gives 0/0. Accordingly, P(Y ≤ 0.75 | X = 0.5) cannot be interpreted via empirical frequencies, since the exact value X = 0.5 has no chance to appear at random, not even once during an infinite sequence of independent trials.
The conditional probability can be interpreted as a limit,
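For the uniform sphere the limit can be evaluated in closed form: given X = x, the conditional density of Y is proportional to 1/√(1 − x2 − y2), so the conditional CDF is of arcsin type. A short check that this gives 5/6:

```python
import math

# P(Y <= y | X = x) for a uniform point on the unit sphere:
# given X = x, Y / sqrt(1 - x^2) has the (rescaled) arcsin law, so
# the conditional CDF is 1/2 + arcsin(y / sqrt(1 - x^2)) / pi.
def cond_cdf(y, x):
    return 0.5 + math.asin(y / math.sqrt(1 - x * x)) / math.pi

p = cond_cdf(0.75, 0.5)
print(p)        # 0.8333... = 5/6
print(5 / 6)
```

Here arcsin(0.75/√0.75) = arcsin(√3/2) = π/3, so the value is 1/2 + 1/3 = 5/6, agreeing with the geometric arc-length argument given below.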
The conditional expectation E(Y | X = 0.5) is of little interest; it vanishes just by symmetry. It is more interesting to calculate E(|Z| | X = 0.5), treating |Z| as a function of X, Y:
More generally,
for −1 <x< 1. One may also treat the conditional expectation as a random variable, — a function of the random variableX, namely,
The expectation of this random variable is equal to the (unconditional) expectation of |Z|,
namely,
which is an instance of thelaw of total expectationE ( E (Y|X) ) = E (Y).
The random variableE(|Z| |X)is the best predictor of |Z| givenX. That is, it minimizes the mean square errorE ( |Z| -f(X) )2on the class of all random variables of the formf(X). Similarly to the discrete case,E ( |Z| |g(X) ) = E ( |Z| |X)for every measurable functiongthat is one-to-one on (-1,1).
GivenX= x, the conditional distribution ofY, given by the densityfY|X=x(y), is the (rescaled) arcsin distribution; its cumulative distribution function is
for allxandysuch thatx2+y2< 1.The corresponding expectation ofh(x,Y) is nothing but the conditional expectationE (h(X,Y) |X=x).Themixtureof these conditional distributions, taken for allx(according to the distribution ofX) is the unconditional distribution ofY. This fact amounts to the equalities
the latter being the instance of the law of total probabilitymentioned above.
On the discrete level, conditioning is possible only if the condition is of nonzero probability (one cannot divide by zero). On the level of densities, conditioning on X = x is possible even though P(X = x) = 0. This success may create the illusion that conditioning is always possible. Regretfully, it is not, for several reasons presented below.
The result P(Y ≤ 0.75 | X = 0.5) = 5/6, mentioned above, is geometrically evident in the following sense. The points (x, y, z) of the sphere x2 + y2 + z2 = 1 satisfying the condition x = 0.5 form a circle y2 + z2 = 0.75 of radius 0.75{\displaystyle {\sqrt {0.75}}} on the plane x = 0.5. The inequality y ≤ 0.75 holds on an arc. The length of the arc is 5/6 of the length of the circle, which is why the conditional probability is equal to 5/6.
This successful geometric explanation may create the illusion that the following question is trivial.
It may seem evident that the conditional distribution must be uniform on the given circle (the intersection of the given sphere and the given plane). Sometimes it really is, but in general it is not. In particular, Z is distributed uniformly on (−1, +1) and independent of the ratio Y/X; thus, P(Z ≤ 0.5 | Y/X) = 0.75. On the other hand, the inequality z ≤ 0.5 holds on an arc of the circle x2 + y2 + z2 = 1, y = cx (for any given c). The length of the arc is 2/3 of the length of the circle. However, the conditional probability is 3/4, not 2/3. This is a manifestation of the classical Borel paradox.[4][5]
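The Borel paradox figures can be probed numerically: conditioning on the ratio Y/X lying in a thin wedge (one natural way to approximate the zero-probability event Y/X = c) drives the estimate of P(Z ≤ 0.5) to 3/4, not to the arc fraction 2/3. This is a Monte Carlo sketch, with the wedge width 0.01 an arbitrary choice:

```python
import random
import math

# Uniform points on the unit sphere; condition on Y/X being close to 1.
random.seed(2)
num = den = 0
for _ in range(1_000_000):
    gx, gy, gz = (random.gauss(0, 1) for _ in range(3))
    r = math.sqrt(gx * gx + gy * gy + gz * gz)
    x, y, z = gx / r, gy / r, gz / r
    if abs(y / x - 1.0) < 0.01:      # thin wedge around Y/X = 1
        den += 1
        num += (z <= 0.5)

print(num / den)   # close to 0.75, not 2/3
```

Since Z is exactly independent of Y/X for the uniform sphere, the wedge width does not bias the answer; only the sampling noise shrinks as the sample grows.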
Appeals to symmetry can be misleading if not formalized as invariance arguments.
Another example. Arandom rotationof the three-dimensional space is a rotation by a random angle around a random axis. Geometric intuition suggests that the angle is independent of the axis and distributed uniformly. However, the latter is wrong; small values of the angle are less probable.
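The non-uniformity of the rotation angle can be seen by sampling. For a Haar-uniform random rotation the angle has density (1 − cos t)/π on [0, π], so its mean is π/2 + 2/π ≈ 2.207 and P(angle ≤ π/2) = 1/2 − 1/π ≈ 0.182, far below the 0.5 a uniform angle would give. A sketch using uniform random unit quaternions (angle = 2·arccos|q0|):

```python
import random
import math

# Haar-uniform random rotations via uniform unit quaternions.
random.seed(3)
angles = []
for _ in range(200_000):
    q = [random.gauss(0, 1) for _ in range(4)]
    n = math.sqrt(sum(c * c for c in q))
    angles.append(2 * math.acos(abs(q[0]) / n))

print(sum(angles) / len(angles))                           # ~ pi/2 + 2/pi
print(sum(a <= math.pi / 2 for a in angles) / len(angles)) # ~ 0.5 - 1/pi
```

Small angles are indeed rare: the density (1 − cos t)/π vanishes quadratically at t = 0.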
Given an eventBof zero probability, the formulaP(A|B)=P(A∩B)/P(B){\displaystyle \textstyle \mathbb {P} (A|B)=\mathbb {P} (A\cap B)/\mathbb {P} (B)}is useless, however, one can tryP(A|B)=limn→∞P(A∩Bn)/P(Bn){\displaystyle \textstyle \mathbb {P} (A|B)=\lim _{n\to \infty }\mathbb {P} (A\cap B_{n})/\mathbb {P} (B_{n})}for an appropriate sequence of eventsBnof nonzero probability such thatBn↓B(that is,B1⊃B2⊃…{\displaystyle \textstyle B_{1}\supset B_{2}\supset \dots }andB1∩B2∩⋯=B{\displaystyle \textstyle B_{1}\cap B_{2}\cap \dots =B}). One example is givenabove. Two more examples areBrownian bridge and Brownian excursion.
In the latter two examples the law of total probability is irrelevant, since only a single event (the condition) is given. By contrast, in the exampleabovethe law of total probabilityapplies, since the eventX= 0.5 is included into a family of eventsX=xwherexruns over (−1,1), and these events are a partition of the probability space.
In order to avoid paradoxes (such as theBorel's paradox), the following important distinction should be taken into account. If a given event is of nonzero probability then conditioning on it is well-defined (irrespective of any other events), as was notedabove. By contrast, if the given event is of zero probability then conditioning on it is ill-defined unless some additional input is provided. Wrong choice of this additional input leads to wrong conditional probabilities (expectations, distributions). In this sense, "the concept of a conditional probability with regard to an isolated hypothesis whose probability equals 0 is inadmissible." (Kolmogorov[6])
The additional input may be (a) a symmetry (invariance group); (b) a sequence of eventsBnsuch thatBn↓B, P (Bn) > 0; (c) a partition containing the given event. Measure-theoretic conditioning (below) investigates Case (c), discloses its relation to (b) in general and to (a) when applicable.
Some events of zero probability are beyond the reach of conditioning. An example: let Xn be independent random variables distributed uniformly on (0,1), and B the event "Xn → 0 as n → ∞"; what about P(Xn < 0.5 | B)? Does it tend to 1, or not? Another example: let X be a random variable distributed uniformly on (0,1), and B the event "X is a rational number"; what about P(X = 1/n | B)? The only answer is that, once again,
the concept of a conditional probability with regard to an isolated hypothesis whose probability equals 0 is inadmissible.
Example. Let Y be a random variable distributed uniformly on (0,1), and X = f(Y) where f is a given function. Two cases are treated below: f = f1 and f = f2, where f1 is the continuous piecewise-linear function
and f2 is the Weierstrass function.
Given X = 0.75, two values of Y are possible, 0.25 and 0.5. It may seem evident that both values are of conditional probability 0.5 just because one point is congruent to another point. However, this is an illusion; see below.
The conditional probability P(Y ≤ 1/3 | X) may be defined as the best predictor of the indicator
given X. That is, it minimizes the mean square error E(I − g(X))2 on the class of all random variables of the form g(X).
In the case f = f1 the corresponding function g = g1 may be calculated explicitly,[details 1]
Alternatively, the limiting procedure may be used,
giving the same result.
Thus, P(Y ≤ 1/3 | X) = g1(X). The expectation of this random variable is equal to the (unconditional) probability, E(P(Y ≤ 1/3 | X)) = P(Y ≤ 1/3), namely,
which is an instance of thelaw of total probabilityE ( P (A|X) ) = P (A).
In the case f = f2 the corresponding function g = g2 probably cannot be calculated explicitly. Nevertheless it exists, and can be computed numerically. Indeed, the space L2(Ω) of all square integrable random variables is a Hilbert space; the indicator I is a vector of this space; and random variables of the form g(X) are a (closed, linear) subspace. The orthogonal projection of this vector to this subspace is well-defined. It can be computed numerically, using finite-dimensional approximations to the infinite-dimensional Hilbert space.
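One crude finite-dimensional approximation is a binned regression: estimate g2(x) = E(1{Y ≤ 1/3} | X ≈ x) by averaging within small bins of X. The sketch below replaces the Weierstrass function by a truncated series (an assumption; any bounded measurable f is handled the same way) and checks the law of total probability E(g2(X)) = P(Y ≤ 1/3) = 1/3:

```python
import random
import math

def f2(y, terms=12, a=0.5, b=7):
    """Truncated Weierstrass-type series; values lie in (-2, 2)."""
    return sum(a**n * math.cos(b**n * math.pi * y) for n in range(terms))

random.seed(4)
N, bins = 100_000, 400
lo, hi = -2.0, 2.0
num = [0.0] * bins
den = [0] * bins

for _ in range(N):
    y = random.random()
    x = min(max(f2(y), lo), hi - 1e-9)     # clip into the binned range
    k = int((x - lo) / (hi - lo) * bins)
    den[k] += 1
    num[k] += (y <= 1 / 3)

g = [num[k] / den[k] for k in range(bins) if den[k]]  # binned estimate of g2
total = sum(num) / N                                   # estimates E(g2(X))
print(total)   # close to P(Y <= 1/3) = 1/3
```

The bin averages are exactly the finite-dimensional orthogonal projection onto functions constant on each bin, so refining the bins approximates the Hilbert-space projection described above.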
Once again, the expectation of the random variableP (Y≤ 1/3 |X) =g2(X)is equal to the (unconditional) probability,E ( P (Y≤ 1/3 |X) ) = P (Y≤ 1/3 ),namely,
However, the Hilbert space approach treats g2 as an equivalence class of functions rather than an individual function. Measurability of g2 is ensured, but continuity (or even Riemann integrability) is not. The value g2(0.5) is determined uniquely, since the point 0.5 is an atom of the distribution of X. Other values x are not atoms; thus, the corresponding values g2(x) are not determined uniquely. Once again, "the concept of a conditional probability with regard to an isolated hypothesis whose probability equals 0 is inadmissible" (Kolmogorov[6]).
Alternatively, the same functiong(be itg1org2) may be defined as theRadon–Nikodym derivative
where measures μ, ν are defined by
for all Borel setsB⊂R.{\displaystyle B\subset \mathbb {R} .}That is, μ is the (unconditional) distribution ofX, while ν is one third of its conditional distribution,
Both approaches (via the Hilbert space, and via the Radon–Nikodym derivative) treatgas an equivalence class of functions; two functionsgandg′are treated as equivalent, ifg(X) =g′(X) almost surely. Accordingly, the conditional probabilityP (Y≤ 1/3 |X)is treated as an equivalence class of random variables; as usual, two random variables are treated as equivalent if they are equal almost surely.
The conditional expectationE(Y|X){\displaystyle \mathbb {E} (Y|X)}may be defined as the best predictor ofYgivenX. That is, it minimizes the mean square errorE(Y−h(X))2{\displaystyle \mathbb {E} (Y-h(X))^{2}}on the class of all random variables of the formh(X).
In the casef=f1the corresponding functionh=h1may be calculated explicitly,[details 2]
Alternatively, the limiting procedure may be used,
giving the same result.
Thus,E(Y|X)=h1(X).{\displaystyle \mathbb {E} (Y|X)=h_{1}(X).}The expectation of this random variable is equal to the (unconditional) expectation,E(E(Y|X))=E(Y),{\displaystyle \mathbb {E} (\mathbb {E} (Y|X))=\mathbb {E} (Y),}namely,
which is an instance of thelaw of total expectationE(E(Y|X))=E(Y).{\displaystyle \mathbb {E} (\mathbb {E} (Y|X))=\mathbb {E} (Y).}
In the case f = f2 the corresponding function h = h2 probably cannot be calculated explicitly. Nevertheless it exists, and can be computed numerically in the same way as g2 above, as the orthogonal projection in the Hilbert space. The law of total expectation holds, since the projection cannot change the scalar product with the constant 1 belonging to the subspace.
Alternatively, the same functionh(be ith1orh2) may be defined as theRadon–Nikodym derivative
where measures μ, ν are defined by
for all Borel setsB⊂R.{\displaystyle B\subset \mathbb {R} .}HereE(Y;A){\displaystyle \mathbb {E} (Y;A)}is the restricted expectation, not to be confused with the conditional expectationE(Y|A)=E(Y;A)/P(A).{\displaystyle \mathbb {E} (Y|A)=\mathbb {E} (Y;A)/\mathbb {P} (A).}
In the casef=f1the conditionalcumulative distribution functionmay be calculated explicitly, similarly tog1. The limiting procedure gives:
which cannot be correct, since a cumulative distribution function must be right-continuous!
This paradoxical result is explained by measure theory as follows. For a givenythe correspondingFY|X=x(y)=P(Y⩽y|X=x){\displaystyle F_{Y|X=x}(y)=\mathbb {P} (Y\leqslant y|X=x)}is well-defined (via the Hilbert space or the Radon–Nikodym derivative) as an equivalence class of functions (ofx). Treated as a function ofyfor a givenxit is ill-defined unless some additional input is provided. Namely, a function (ofx) must be chosen within every (or at least almost every) equivalence class. Wrong choice leads to wrong conditional cumulative distribution functions.
A right choice can be made as follows. First,FY|X=x(y)=P(Y⩽y|X=x){\displaystyle F_{Y|X=x}(y)=\mathbb {P} (Y\leqslant y|X=x)}is considered for rational numbersyonly. (Any other dense countable set may be used equally well.) Thus, only a countable set of equivalence classes is used; all choices of functions within these classes are mutually equivalent, and the corresponding function of rationalyis well-defined (for almost everyx). Second, the function is extended from rational numbers to real numbers by right continuity.
In general the conditional distribution is defined for almost allx(according to the distribution ofX), but sometimes the result is continuous inx, in which case individual values are acceptable. In the considered example this is the case; the correct result forx= 0.75,
shows that the conditional distribution ofYgivenX= 0.75 consists of two atoms, at 0.25 and 0.5, of probabilities 1/3 and 2/3 respectively.
Similarly, the conditional distribution may be calculated for allxin (0, 0.5) or (0.5, 1).
The valuex= 0.5 is an atom of the distribution ofX, thus, the corresponding conditional distribution is well-defined and may be calculated by elementary means (the denominator does not vanish); the conditional distribution ofYgivenX= 0.5 is uniform on (2/3, 1). Measure theory leads to the same result.
The mixture of all conditional distributions is the (unconditional) distribution ofY.
The conditional expectationE(Y|X=x){\displaystyle \mathbb {E} (Y|X=x)}is nothing but the expectation with respect to the conditional distribution.
In the casef=f2the correspondingFY|X=x(y)=P(Y⩽y|X=x){\displaystyle F_{Y|X=x}(y)=\mathbb {P} (Y\leqslant y|X=x)}probably cannot be calculated explicitly. For a givenyit is well-defined (via the Hilbert space or the Radon–Nikodym derivative) as an equivalence class of functions (ofx). The right choice of functions within these equivalence classes may be made as above; it leads to correct conditional cumulative distribution functions, thus, conditional distributions. In general, conditional distributions need not beatomicorabsolutely continuous(nor mixtures of both types). Probably, in the considered example they aresingular(like theCantor distribution).
Once again, the mixture of all conditional distributions is the (unconditional) distribution, and the conditional expectation is the expectation with respect to the conditional distribution.
|
https://en.wikipedia.org/wiki/Conditioning_(probability)
|
In probability theory, the Doob–Dynkin lemma, named after Joseph L. Doob and Eugene Dynkin (also known as the factorization lemma), characterizes the situation when one random variable is a function of another by the inclusion of the σ{\displaystyle \sigma }-algebras generated by the random variables. The usual statement of the lemma is formulated in terms of one random variable being measurable with respect to the σ{\displaystyle \sigma }-algebra generated by the other.
The lemma plays an important role in the conditional expectation in probability theory, where it allows replacement of the conditioning on a random variable by conditioning on the σ{\displaystyle \sigma }-algebra that is generated by the random variable.
In the lemma below,B[0,1]{\displaystyle {\mathcal {B}}[0,1]}is theσ{\displaystyle \sigma }-algebra ofBorel setson[0,1].{\displaystyle [0,1].}IfT:X→Y,{\displaystyle T\colon X\to Y,}and(Y,Y){\displaystyle (Y,{\mathcal {Y}})}is a measurable space, then
is the smallestσ{\displaystyle \sigma }-algebra onX{\displaystyle X}such thatT{\displaystyle T}isσ(T)/Y{\displaystyle \sigma (T)/{\mathcal {Y}}}-measurable.
LetT:Ω→Ω′{\displaystyle T\colon \Omega \rightarrow \Omega '}be a function, and(Ω′,A′){\displaystyle (\Omega ',{\mathcal {A}}')}a measurable space. A functionf:Ω→[0,1]{\displaystyle f\colon \Omega \rightarrow [0,1]}isσ(T)/B[0,1]{\displaystyle \sigma (T)/{\mathcal {B}}[0,1]}-measurable if and only iff=g∘T,{\displaystyle f=g\circ T,}for someA′/B[0,1]{\displaystyle {\mathcal {A}}'/{\mathcal {B}}[0,1]}-measurableg:Ω′→[0,1].{\displaystyle g\colon \Omega '\to [0,1].}[1]
Remark.The "if" part simply states that the composition of two measurable functions is measurable. The "only if" part is proven below.
Letf{\displaystyle f}beσ(T)/B[0,1]{\displaystyle \sigma (T)/{\mathcal {B}}[0,1]}-measurable.
First, note that, by the above descriptive definition ofσ(T){\displaystyle \sigma (T)}as the set of preimages ofA′{\displaystyle {\mathcal {A}}'}-measurable sets underT{\displaystyle T}, we know that ifA∈σ(T){\displaystyle A\in \sigma (T)}, then there exists someA′∈A′{\displaystyle A'\in {\mathcal {A}}'}such thatA=T−1(A′){\displaystyle A=T^{-1}(A')}.
Now, assume thatf=1A{\displaystyle f=\mathbf {1} _{A}}is anindicatorof some setA∈σ(T){\displaystyle A\in \sigma (T)}. If we identifyA′∈A′{\displaystyle A'\in {\mathcal {A}}'}such thatA=T−1(A′){\displaystyle A=T^{-1}(A')}, then the functiong=1A′{\displaystyle g=\mathbf {1} _{A'}}suits the requirement, and sinceA∈σ(T){\displaystyle A\in \sigma (T)}, such a setA′∈A′{\displaystyle A'\in {\mathcal {A}}'}always exists. By linearity, the claim extends to anysimple measurable functionf.{\displaystyle f.}
Letf{\displaystyle f}bemeasurablebut not necessarily simple. As explained in the article onsimple functions,f{\displaystyle f}is a pointwise limit of a monotonically non-decreasing sequencefn≥0{\displaystyle f_{n}\geq 0}of simple functions. The previous step guarantees thatfn=gn∘T,{\displaystyle f_{n}=g_{n}\circ T,}for some measurablegn.{\displaystyle g_{n}.}The supremumg(x)=supn≥1gn(x){\displaystyle \textstyle g(x)=\sup _{n\geq 1}g_{n}(x)}exists on the entireΩ′{\displaystyle \Omega '}and is measurable. (The article onmeasurable functionsexplains why supremum of a sequence of measurable functions is measurable). For everyx∈ImT,{\displaystyle x\in \operatorname {Im} T,}the sequencegn(x){\displaystyle g_{n}(x)}is non-decreasing, sog|ImT(x)=limn→∞gn|ImT(x){\displaystyle \textstyle g|_{\operatorname {Im} T}(x)=\lim _{n\to \infty }g_{n}|_{\operatorname {Im} T}(x)}which shows thatf=g∘T.{\displaystyle f=g\circ T.}
Remark.The lemma remains valid if the space([0,1],B[0,1]){\displaystyle ([0,1],{\mathcal {B}}[0,1])}is replaced with(S,B(S)),{\displaystyle (S,{\mathcal {B}}(S)),}whereS⊆[−∞,∞],{\displaystyle S\subseteq [-\infty ,\infty ],}S{\displaystyle S}is bijective with[0,1],{\displaystyle [0,1],}and the bijection is measurable in both directions.
By definition, the measurability off{\displaystyle f}means thatf−1(S)∈σ(T){\displaystyle f^{-1}(S)\in \sigma (T)}for every Borel setS⊆[0,1].{\displaystyle S\subseteq [0,1].}Thereforeσ(f)⊆σ(T),{\displaystyle \sigma (f)\subseteq \sigma (T),}and the lemma may be restated as follows.
Lemma.LetT:Ω→Ω′,{\displaystyle T\colon \Omega \rightarrow \Omega ',}f:Ω→[0,1],{\displaystyle f\colon \Omega \rightarrow [0,1],}and(Ω′,A′){\displaystyle (\Omega ',{\mathcal {A}}')}is a measurable space. Thenf=g∘T,{\displaystyle f=g\circ T,}for someA′/B[0,1]{\displaystyle {\mathcal {A}}'/{\mathcal {B}}[0,1]}-measurableg:Ω′→[0,1],{\displaystyle g\colon \Omega '\to [0,1],}if and only ifσ(f)⊆σ(T){\displaystyle \sigma (f)\subseteq \sigma (T)}.
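On a finite sample space the lemma is elementary and can be made concrete: σ(f) ⊆ σ(T) means f is constant on each fiber of T, and then g is simply read off the fibers. A toy illustration with hand-picked values:

```python
# Finite Doob-Dynkin factorization: if f is constant on the fibers of T
# (equivalently, sigma(f) is contained in sigma(T)), then f = g o T.
Omega = ["a", "b", "c", "d"]
T = {"a": 0, "b": 0, "c": 1, "d": 2}            # T : Omega -> Omega'
f = {"a": 0.5, "b": 0.5, "c": 0.25, "d": 1.0}   # constant on each T-fiber

# g is well-defined on the image of T exactly because f is fiber-constant:
g = {T[w]: f[w] for w in Omega}
assert all(f[w] == g[T[w]] for w in Omega)
print("f = g o T with g =", g)
```

If f took different values on the fiber {a, b}, the dictionary comprehension would silently overwrite one value and the assertion would fail — mirroring the failure of measurability in the general statement.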
|
https://en.wikipedia.org/wiki/Doob%E2%80%93Dynkin_lemma
|
In mathematics, non-commutative conditional expectation is a generalization of the notion of conditional expectation in classical probability. The space of essentially bounded measurable functions on a σ{\displaystyle \sigma }-finite measure space (X,μ){\displaystyle (X,\mu )} is the canonical example of a commutative von Neumann algebra. For this reason, the theory of von Neumann algebras is sometimes referred to as noncommutative measure theory. The intimate connections of probability theory with measure theory suggest that one may be able to extend the classical ideas in probability to a noncommutative setting by studying those ideas on general von Neumann algebras.
For von Neumann algebras with a faithful normal tracial state, for example finite von Neumann algebras, the notion of conditional expectation is especially useful.
LetR⊆S{\displaystyle {\mathcal {R}}\subseteq {\mathcal {S}}}be von Neumann algebras (S{\displaystyle {\mathcal {S}}}andR{\displaystyle {\mathcal {R}}}may be generalC*-algebrasas well), a positive, linear mappingΦ{\displaystyle \Phi }ofS{\displaystyle {\mathcal {S}}}ontoR{\displaystyle {\mathcal {R}}}is said to be aconditional expectation(ofS{\displaystyle {\mathcal {S}}}ontoR{\displaystyle {\mathcal {R}}}) whenΦ(I)=I{\displaystyle \Phi (I)=I}andΦ(R1SR2)=R1Φ(S)R2{\displaystyle \Phi (R_{1}SR_{2})=R_{1}\Phi (S)R_{2}}ifR1,R2∈R{\displaystyle R_{1},R_{2}\in {\mathcal {R}}}andS∈S{\displaystyle S\in {\mathcal {S}}}.
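A minimal finite-dimensional sketch of this definition: on the 3×3 matrices, taking the diagonal part is a unital positive map onto the commutative subalgebra of diagonal matrices, and it satisfies the bimodule property Φ(R1 S R2) = R1 Φ(S) R2 for diagonal R1, R2 (the sizes and random test matrices are arbitrary choices):

```python
import numpy as np

# Phi(S) = diagonal part of S: a conditional expectation from the 3x3
# matrices onto the subalgebra of diagonal matrices.
def phi(S):
    return np.diag(np.diag(S))

rng = np.random.default_rng(0)
S = rng.normal(size=(3, 3))
R1 = np.diag(rng.normal(size=3))
R2 = np.diag(rng.normal(size=3))

assert np.allclose(phi(np.eye(3)), np.eye(3))           # Phi(I) = I
assert np.allclose(phi(R1 @ S @ R2), R1 @ phi(S) @ R2)  # bimodule property
print("Phi is a conditional expectation onto the diagonal subalgebra")
```

Entrywise, (R1 S R2)_{ii} = (R1)_{ii} S_{ii} (R2)_{ii} for diagonal R1, R2, which is why the property holds exactly rather than approximately.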
LetB{\displaystyle {\mathcal {B}}}be a C*-subalgebra of the C*-algebraA,φ0{\displaystyle {\mathfrak {A}},\varphi _{0}}an idempotent linear mapping ofA{\displaystyle {\mathfrak {A}}}ontoB{\displaystyle {\mathcal {B}}}such that‖φ0‖=1,A{\displaystyle \|\varphi _{0}\|=1,{\mathfrak {A}}}acting onH{\displaystyle {\mathcal {H}}}the universal representation ofA{\displaystyle {\mathfrak {A}}}. Thenφ0{\displaystyle \varphi _{0}}extends uniquely to an ultraweakly continuous idempotent linear mappingφ{\displaystyle \varphi }ofA−{\displaystyle {\mathfrak {A}}^{-}}, the weak-operator closure ofA{\displaystyle {\mathfrak {A}}}, ontoB−{\displaystyle {\mathcal {B}}^{-}}, the weak-operator closure ofB{\displaystyle {\mathcal {B}}}.
In the above setting, a result[1]first proved by Tomiyama may be formulated in the following manner.
Theorem.LetA,B,φ,φ0{\displaystyle {\mathfrak {A}},{\mathcal {B}},\varphi ,\varphi _{0}}be as described above. Thenφ{\displaystyle \varphi }is a conditional expectation fromA−{\displaystyle {\mathfrak {A}}^{-}}ontoB−{\displaystyle {\mathcal {B}}^{-}}andφ0{\displaystyle \varphi _{0}}is a conditional expectation fromA{\displaystyle {\mathfrak {A}}}ontoB{\displaystyle {\mathcal {B}}}.
With the aid of Tomiyama's theorem an elegant proof ofSakai's resulton the characterization of those C*-algebras that are *-isomorphic to von Neumann algebras may be given.
|
https://en.wikipedia.org/wiki/Non-commutative_conditional_expectation
|
The fundamental theorem of poker is a principle first articulated by David Sklansky[1] that he believes expresses the essential nature of poker as a game of decision-making in the face of incomplete information.
Every time you play a hand differently from the way you would have played it if you could see all your opponents' cards, they gain; and every time you play your hand the same way you would have played it if you could see all their cards, they lose. Conversely, every time opponents play their hands differently from the way they would have if they could see all your cards, you gain; and every time they play their hands the same way they would have played if they could see all your cards, you lose.
The fundamental theorem is stated in common language, but its formulation is based on mathematical reasoning. Each decision that is made in poker can be analyzed in terms of theexpected valueof the payoff of a decision. The correct decision to make in a given situation is the decision that has the largest expected value. If a player could see all of their opponents' cards, they would always be able to calculate the correct decision with mathematical certainty, and the less they deviate from these correct decisions, the better their expected long-term results. This is certainly trueheads-up, butMorton's theorem, in which an opponent's correct decision can benefit a player, may apply in multi-way pots.
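The expected-value comparison that underlies each decision can be sketched in a few lines. Every win probability and bet size below is a hypothetical stand-in, chosen only to illustrate the calculation, not taken from any actual hand.

```python
# Hedged sketch: choosing the action with the largest expected value.
# All win probabilities and amounts are hypothetical illustration values.

def expected_value(win_prob, pot, cost):
    """EV of risking `cost` to win a pot of size `pot` with chance `win_prob`."""
    return win_prob * pot - (1 - win_prob) * cost

pot = 10.0
ev_fold = 0.0                               # folding risks nothing, wins nothing
ev_call = expected_value(0.15, pot, 2.0)    # assumed 15% equity on a 2-unit call
ev_raise = expected_value(0.60, pot, 4.0)   # assumed 60% equity on a 4-unit raise

best_action = max([("fold", ev_fold), ("call", ev_call), ("raise", ev_raise)],
                  key=lambda t: t[1])[0]
```

Under these assumed numbers the call is slightly losing while the raise is strongly profitable, so the EV-maximizing action is to raise.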
Suppose Bob is playing limitTexas hold 'emand is dealt9♣ 9♠under the gunbefore theflop. Hecalls, and everyone elsefoldsto Carol in thebig blindwhochecks. The flop comesA♣K♦ 10♦, and Carol bets.
Bob now has a decision to make based upon incomplete information. In this particular circumstance, the correct decision is almost certainly to fold. There are too manyturnandriver cardsthat could kill his hand. Even if Carol does not have anAor aK, there are 3 cards to astraightand 2 cards to aflushon the flop, and she could easily be on a straight or flushdraw. Bob is essentially drawing to 2outs(another9), and even if he catches one of these outs, his set may not hold up.
However, suppose Bob knew (with 100% certainty) that Carol held8♦ 7♦. In this case, it would be correct toraise. Even though Carol would still be getting the correctpot oddsto call, the best decision for Bob is to raise. Therefore, by folding (or even calling), Bob has played his hand differently from the way he would have played it if he could see his opponent's cards, and so by the fundamental theorem of poker, his opponent has gained. Bob has made a "mistake", in the sense that he has played differently from the way he would have played if he knew Carol held8♦ 7♦, even though this "mistake" is almost certainly the best decision given the incomplete information available to him.
This example also illustrates that one of the most important goals in poker is to induce the opponents to make mistakes. In this particular hand, Carol has practiced deception by employing asemi-bluff— she has bet a hand, hoping Bob will fold, but she still has outs even if he calls or raises. Carol has induced Bob to make a mistake.
The Fundamental Theorem of Poker applies to allheads-updecisions, but it does not apply to all multi-way decisions. This is because each opponent of a player can make an incorrect decision, but the "collective decision" of all the opponents works against the player.
This type of situation occurs mostly in games with multi-way pots, when a player has a strong hand, but several opponents are chasing withdrawsor other weaker hands. Also, a good example is a player with a deep stack making a play that favors ashort-stackedopponent because he can extract moreexpected valuefrom the other deep-stacked opponents. Such a situation is sometimes referred to asimplicit collusion.
The fundamental theorem of poker is simply expressed and appears axiomatic, yet its proper application to the countless varieties of circumstances that a poker player may face requires a great deal of knowledge, skill, and experience.
|
https://en.wikipedia.org/wiki/Fundamental_theorem_of_poker
|
Inprobability theory, thelaw(orformula)of total probabilityis a fundamental rule relatingmarginal probabilitiestoconditional probabilities. It expresses the total probability of an outcome which can be realized via several distinctevents, hence the name.
The law of total probability is[1]atheoremthat states, in its discrete case, if{Bn:n=1,2,3,…}{\displaystyle \left\{{B_{n}:n=1,2,3,\ldots }\right\}}is a finite orcountably infiniteset ofmutually exclusiveandcollectively exhaustiveevents, then for any eventA{\displaystyle A}
or, alternatively,[1]
where, for anyn{\displaystyle n}, ifP(Bn)=0{\displaystyle P(B_{n})=0}, then these terms are simply omitted from the summation sinceP(A∣Bn){\displaystyle P(A\mid B_{n})}is finite.
The summation can be interpreted as aweighted average, and consequently the marginal probability,P(A){\displaystyle P(A)}, is sometimes called "average probability";[2]"overall probability" is sometimes used in less formal writings.[3]
The law of total probability can also be stated for conditional probabilities:
Taking theBn{\displaystyle B_{n}}as above, and assumingC{\displaystyle C}is an eventindependentof any of theBn{\displaystyle B_{n}}:
The law of total probability extends to the case of conditioning on events generated by continuous random variables. Let(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}be aprobability space. SupposeX{\displaystyle X}is a random variable with distribution functionFX{\displaystyle F_{X}}, andA{\displaystyle A}an event on(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}. Then the law of total probability states
P(A)=∫−∞∞P(A|X=x)dFX(x).{\displaystyle P(A)=\int _{-\infty }^{\infty }P(A|X=x)dF_{X}(x).}
IfX{\displaystyle X}admits a density functionfX{\displaystyle f_{X}}, then the result is
P(A)=∫−∞∞P(A|X=x)fX(x)dx.{\displaystyle P(A)=\int _{-\infty }^{\infty }P(A|X=x)f_{X}(x)dx.}
Moreover, for the specific case whereA={Y∈B}{\displaystyle A=\{Y\in B\}}, whereB{\displaystyle B}is a Borel set, then this yields
P(Y∈B)=∫−∞∞P(Y∈B|X=x)fX(x)dx.{\displaystyle P(Y\in B)=\int _{-\infty }^{\infty }P(Y\in B|X=x)f_{X}(x)dx.}
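As a numerical sanity check of this continuous form (a sketch, with the example distributions chosen here purely for convenience): take X ~ N(0,1) and A = {X + E ≤ 0} with independent noise E ~ N(0,1), so that P(A | X = x) = Φ(−x) and, by symmetry, P(A) = 1/2.

```python
import math

# Quadrature check of P(A) = ∫ P(A | X = x) f_X(x) dx for X ~ N(0,1) and
# A = {X + E <= 0} with independent E ~ N(0,1): here P(A | X = x) = Phi(-x).

def phi(x):   # standard normal pdf
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(x):   # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

a, b, n = -8.0, 8.0, 20000            # trapezoidal rule on [-8, 8]
h = (b - a) / n
vals = [Phi(-(a + i * h)) * phi(a + i * h) for i in range(n + 1)]
p_A = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))   # should be close to 0.5
```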
Suppose that two factories supplylight bulbsto the market. FactoryX's bulbs work for over 5000 hours in 99% of cases, whereas factoryY's bulbs work for over 5000 hours in 95% of cases. It is known that factoryXsupplies 60% of the total bulbs available and Y supplies 40% of the total bulbs available. What is the chance that a purchased bulb will work for longer than 5000 hours?
Applying the law of total probability, we have:
where
Thus each purchased light bulb has a 97.4% chance to work for more than 5000 hours.
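The calculation above is a direct weighted average of the conditional probabilities and can be written out as:

```python
# The light-bulb example as a direct application of the law of total probability.
p_factory = {"X": 0.60, "Y": 0.40}        # P(B_X), P(B_Y): market shares
p_ok_given = {"X": 0.99, "Y": 0.95}       # P(A | B_X), P(A | B_Y)

p_ok = sum(p_ok_given[f] * p_factory[f] for f in p_factory)   # 0.974
```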
The termlaw of total probabilityis sometimes taken to mean thelaw of alternatives, which is a special case of the law of total probability applying todiscrete random variables.[citation needed]One author uses the terminology of the "Rule of Average Conditional Probabilities",[4]while another refers to it as the "continuous law of alternatives" in the continuous case.[5]This result is given by Grimmett and Welsh[6]as thepartition theorem, a name that they also give to the relatedlaw of total expectation.
|
https://en.wikipedia.org/wiki/Law_of_total_probability
|
Inprobability theory, thelaw of total covariance,[1]covariance decomposition formula, orconditional covariance formulastates that ifX,Y, andZarerandom variableson the sameprobability space, and thecovarianceofXandYis finite, then
The nomenclature in this article's title parallels the phraselaw of total variance. Some writers on probability call this the "conditional covarianceformula"[2]or use other names.
Note: Theconditional expected valuesE(X|Z) and E(Y|Z) are random variables whose values depend on the value ofZ. Note that the conditional expected value ofXgiven theeventZ=zis a function ofz. If we write E(X|Z=z) =g(z) then the random variable E(X|Z) isg(Z). Similar comments apply to the conditional covariance.
The law of total covariance can be proved using thelaw of total expectation: First,
from a simple standard identity on covariances. Then we apply the law of total expectation by conditioning on the random variableZ:
Now we rewrite the term inside the first expectation using the definition of covariance:
Since expectation of a sum is the sum of expectations, we can regroup the terms:
Finally, we recognize the final two terms as the covariance of the conditional expectations E[X|Z] and E[Y|Z]:
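The identity can be verified exactly on a small discrete example (the distributions below are chosen purely for illustration): conditional on Z, the pair (X, Y) is either perfectly correlated or perfectly anti-correlated, and both sides of the decomposition come out equal.

```python
# Exact check of Cov(X,Y) = E[Cov(X,Y|Z)] + Cov(E[X|Z], E[Y|Z]) on a toy model.
p_z = {0: 0.4, 1: 0.6}
p_xy_given_z = {
    0: {(0, 0): 0.5, (1, 1): 0.5},   # given Z=0: X and Y move together
    1: {(0, 1): 0.5, (1, 0): 0.5},   # given Z=1: X and Y move oppositely
}

def cond_mean(z, coord):
    return sum(xy[coord] * p for xy, p in p_xy_given_z[z].items())

def cond_cov(z):
    e_xy_z = sum(x * y * p for (x, y), p in p_xy_given_z[z].items())
    return e_xy_z - cond_mean(z, 0) * cond_mean(z, 1)

e_x = sum(p_z[z] * cond_mean(z, 0) for z in p_z)
e_y = sum(p_z[z] * cond_mean(z, 1) for z in p_z)
e_xy = sum(p_z[z] * sum(x * y * p for (x, y), p in p_xy_given_z[z].items())
           for z in p_z)
lhs = e_xy - e_x * e_y                                   # Cov(X, Y)

rhs = (sum(p_z[z] * cond_cov(z) for z in p_z)            # E[Cov(X,Y|Z)]
       + sum(p_z[z] * cond_mean(z, 0) * cond_mean(z, 1) for z in p_z)
       - e_x * e_y)                                      # + Cov(E[X|Z], E[Y|Z])
```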
|
https://en.wikipedia.org/wiki/Law_of_total_covariance
|
Inprobability theoryandmathematicalstatistics, thelaw of total cumulanceis a generalization tocumulantsof thelaw of total probability, thelaw of total expectation, and thelaw of total variance. It has applications in the analysis oftime series. It was introduced byDavid Brillinger.[1]
It is most transparent when stated in its most general form, forjointcumulants, rather than for cumulants of a specified order for just onerandom variable. In general, we have
where
Only forn= 2 or 3 is thenth cumulant the same as thenthcentral moment. The casen= 2 is well-known (seelaw of total variance). Below is the casen= 3. The notationμ3means the third central moment.
For general 4th-order cumulants, the rule gives a sum of 15 terms, as follows:
SupposeYhas aPoisson distributionwithexpected valueλ, andXis the sum ofYcopies ofWthat areindependentof each other and ofY.
All of the cumulants of the Poisson distribution are equal to each other, and so in this case are equal toλ. Also recall that if random variablesW1, ...,Wmareindependent, then thenth cumulant is additive:
We will find the 4th cumulant ofX. We have:
We recognize the last sum as the sum over all partitions of the set { 1, 2, 3, 4 }, of the product over all blocks of the partition, of cumulants ofWof order equal to the size of the block. That is precisely the 4th rawmomentofW(seecumulantfor a more leisurely discussion of this fact). Hence the cumulants ofXare the moments ofWmultiplied byλ.
In this way we see that every moment sequence is also a cumulant sequence (the converse cannot be true, since cumulants of even order ≥ 4 are in some cases negative, and also because the cumulant sequence of thenormal distributionis not a moment sequence of any probability distribution).
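The compound-Poisson statement above can be checked exactly in the simplest case (assumed here for illustration): take W ~ Bernoulli(p), so E[Wⁿ] = p for every n, and every cumulant of X should equal λp — indeed X is then a thinned Poisson variable, X ~ Poisson(λp).

```python
import math

# Exact check of "cumulants of X equal lambda times moments of W" for the
# compound Poisson sum X = W_1 + ... + W_Y, in the assumed special case
# W ~ Bernoulli(p): every moment of W is p, so every cumulant of X is lam*p.

lam, p = 3.0, 0.4
Y_MAX = 120                       # truncation point; Poisson(3) tail is ~0 here

def pois(y, m):
    return math.exp(-m) * m ** y / math.factorial(y)

def binom_pmf(y, k, q):
    return math.comb(y, k) * q ** k * (1.0 - q) ** (y - k)

# pmf of X from the compound definition: P(X=k) = sum_y P(Y=y) * Bin(y, p)(k)
pmf = [sum(pois(y, lam) * binom_pmf(y, k, p) for y in range(k, Y_MAX))
       for k in range(Y_MAX)]

mean = sum(k * q for k, q in enumerate(pmf))               # 1st cumulant
var = sum((k - mean) ** 2 * q for k, q in enumerate(pmf))  # 2nd cumulant
mu3 = sum((k - mean) ** 3 * q for k, q in enumerate(pmf))  # 3rd cumulant
# all three should equal lam * p = 1.2
```

Note that the third cumulant being checked here equals the third central moment, consistent with the n = 2, 3 cases discussed above.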
SupposeY= 1 with probabilitypandY= 0 with probabilityq= 1 −p. Suppose the conditional probability distribution ofXgivenYisFifY= 1 andGifY= 0. Then we have
whereπ<1^{\displaystyle \pi <{\widehat {1}}}meansπis a partition of the set { 1, ...,n} that is finer than the coarsest partition – the sum is over all partitions except that one. For example, ifn= 3, then we have
|
https://en.wikipedia.org/wiki/Law_of_total_cumulance
|
Aproduct distributionis aprobability distributionconstructed as the distribution of theproductofrandom variableshaving two other known distributions. Given twostatistically independentrandom variablesXandY, the distribution of the random variableZthat is formed as the productZ=XY{\displaystyle Z=XY}is aproduct distribution.
The product distribution is the PDF of the product of sample values. This is not the same as the product of their PDFs, yet the two concepts are often conflated under the ambiguous phrase "product of Gaussians".
The product is one type of algebra for random variables: Related to the product distribution are theratio distribution, sum distribution (seeList of convolutions of probability distributions) and difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios.
Many of these distributions are described in Melvin D. Springer's book from 1979The Algebra of Random Variables.[1]
IfX{\displaystyle X}andY{\displaystyle Y}are two independent, continuous random variables, described by probability density functionsfX{\displaystyle f_{X}}andfY{\displaystyle f_{Y}}then the probability density function ofZ=XY{\displaystyle Z=XY}is[2]
We first write thecumulative distribution functionofZ{\displaystyle Z}starting with its definition
We find the desired probability density function by taking the derivative of both sides with respect toz{\displaystyle z}. Since on the right hand side,z{\displaystyle z}appears only in the integration limits, the derivative is easily performed using thefundamental theorem of calculusand thechain rule. (Note the negative sign that is needed when the variable occurs in the lower limit of the integration.)
where the absolute value is used to conveniently combine the two terms.[3]
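As a numerical check (with Uniform(0,1) factors assumed here as the simplest example): for X, Y ~ Uniform(0,1) the formula reduces to f_Z(z) = ∫_z^1 dx/x = −ln z.

```python
import math

# Numerical check of f_Z(z) = ∫ f_X(x) f_Y(z/x) / |x| dx for two independent
# Uniform(0,1) factors: f_Y(z/x) = 1 exactly when x >= z, giving -ln z.

def product_pdf(z, steps=20000):
    h = (1.0 - z) / steps                 # trapezoid rule over x in [z, 1]
    vals = [1.0 / (z + i * h) for i in range(steps + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

approx = product_pdf(0.5)
exact = -math.log(0.5)                    # = ln 2
```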
A faster more compact proof begins with the same step of writing the cumulative distribution ofZ{\displaystyle Z}starting with its definition:
whereu(⋅){\displaystyle u(\cdot )}is theHeaviside step functionand serves to limit the region of integration to values ofx{\displaystyle x}andy{\displaystyle y}satisfyingxy≤z{\displaystyle xy\leq z}.
We find the desired probability density function by taking the derivative of both sides with respect toz{\displaystyle z}.
where we utilize the translation and scaling properties of theDirac delta functionδ{\displaystyle \delta }.
A more intuitive description of the procedure is illustrated in the figure below. The joint pdffX(x)fY(y){\displaystyle f_{X}(x)f_{Y}(y)}exists in thex{\displaystyle x}-y{\displaystyle y}plane and an arc of constantz{\displaystyle z}value is shown as the shaded line. To find the marginal probabilityfZ(z){\displaystyle f_{Z}(z)}on this arc, integrate over increments of areadxdyf(x,y){\displaystyle dx\,dy\;f(x,y)}on this contour.
Starting withy=zx{\displaystyle y={\frac {z}{x}}}, we havedy=−zx2dx=−yxdx{\displaystyle dy=-{\frac {z}{x^{2}}}\,dx=-{\frac {y}{x}}\,dx}. So the probability increment isδp=f(x,y)dx|dy|=fX(x)fY(z/x)y|x|dxdx{\displaystyle \delta p=f(x,y)\,dx\,|dy|=f_{X}(x)f_{Y}(z/x){\frac {y}{|x|}}\,dx\,dx}. Sincez=yx{\displaystyle z=yx}impliesdz=ydx{\displaystyle dz=y\,dx}, we can relate the probability increment to thez{\displaystyle z}-increment, namelyδp=fX(x)fY(z/x)1|x|dxdz{\displaystyle \delta p=f_{X}(x)f_{Y}(z/x){\frac {1}{|x|}}\,dx\,dz}. Then integration overx{\displaystyle x}, yieldsfZ(z)=∫fX(x)fY(z/x)1|x|dx{\displaystyle f_{Z}(z)=\int f_{X}(x)f_{Y}(z/x){\frac {1}{|x|}}\,dx}.
LetX∼f(x){\displaystyle X\sim f(x)}be a random sample drawn from probability distributionfx(x){\displaystyle f_{x}(x)}. ScalingX{\displaystyle X}byθ{\displaystyle \theta }generates a sample from scaled distributionθX∼1|θ|fX(xθ){\displaystyle \theta X\sim {\frac {1}{|\theta |}}f_{X}\left({\frac {x}{\theta }}\right)}which can be written as a conditional distributiongx(x|θ)=1|θ|fx(xθ){\displaystyle g_{x}(x|\theta )={\frac {1}{|\theta |}}f_{x}\left({\frac {x}{\theta }}\right)}.
Lettingθ{\displaystyle \theta }be a random variable with pdffθ(θ){\displaystyle f_{\theta }(\theta )}, the distribution of the scaled sample becomesfX(θx)=gX(x∣θ)fθ(θ){\displaystyle f_{X}(\theta x)=g_{X}(x\mid \theta )f_{\theta }(\theta )}and integrating outθ{\displaystyle \theta }we gethx(x)=∫−∞∞gX(x|θ)fθ(θ)dθ{\displaystyle h_{x}(x)=\int _{-\infty }^{\infty }g_{X}(x|\theta )f_{\theta }(\theta )d\theta }soθX{\displaystyle \theta X}is drawn from this distributionθX∼hX(x){\displaystyle \theta X\sim h_{X}(x)}. However, substituting the definition ofg{\displaystyle g}we also havehX(x)=∫−∞∞1|θ|fx(xθ)fθ(θ)dθ{\displaystyle h_{X}(x)=\int _{-\infty }^{\infty }{\frac {1}{|\theta |}}f_{x}\left({\frac {x}{\theta }}\right)f_{\theta }(\theta )\,d\theta }which has the same form as the product distribution above. Thus the Bayesian posterior distributionhX(x){\displaystyle h_{X}(x)}is the distribution of the product of the two independent random samplesθ{\displaystyle \theta }andX{\displaystyle X}.
For the case of one variable being discrete, letθ{\displaystyle \theta }have probabilityPi{\displaystyle P_{i}}at levelsθi{\displaystyle \theta _{i}}with∑iPi=1{\displaystyle \sum _{i}P_{i}=1}. The conditional density isfX(x∣θi)=1|θi|fx(xθi){\displaystyle f_{X}(x\mid \theta _{i})={\frac {1}{|\theta _{i}|}}f_{x}\left({\frac {x}{\theta _{i}}}\right)}. ThereforefX(θx)=∑iPi|θi|fX(xθi){\displaystyle f_{X}(\theta x)=\sum _{i}{\frac {P_{i}}{|\theta _{i}|}}f_{X}\left({\frac {x}{\theta _{i}}}\right)}.
When two random variables are statistically independent,the expectation of their product is the product of their expectations. This can be proved from thelaw of total expectation:
In the inner expression,Yis a constant. Hence:
This is true even ifXandYare statistically dependent in which caseE[X∣Y]{\displaystyle \operatorname {E} [X\mid Y]}is a function ofY. In the special case in whichXandYare statistically
independent, it is a constant independent ofY. Hence:
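Hence E[XY] = E[X] E[Y] for independent X and Y. A brute-force enumeration with two fair dice (an arbitrary illustration) confirms this exactly:

```python
from itertools import product as cartesian

# E[XY] = E[X] E[Y] for two independent fair dice, by exhaustive enumeration.
faces = range(1, 7)
e_x = sum(faces) / 6                                        # 3.5
e_xy = sum(x * y for x, y in cartesian(faces, faces)) / 36  # 12.25 = 3.5^2
```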
LetX,Y{\displaystyle X,Y}be uncorrelated random variables with meansμX,μY,{\displaystyle \mu _{X},\mu _{Y},}and variancesσX2,σY2{\displaystyle \sigma _{X}^{2},\sigma _{Y}^{2}}.
If, additionally, the random variablesX2{\displaystyle X^{2}}andY2{\displaystyle Y^{2}}are uncorrelated, then the variance of the productXYis[4]
In the case of the product of more than two variables, ifX1⋯Xn,n>2{\displaystyle X_{1}\cdots X_{n},\;\;n>2}are statistically independent then[5]the variance of their product is
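For two such variables the formula reads Var(XY) = σX²σY² + σX²μY² + σY²μX², which can be checked exactly by enumerating two small discrete distributions (the values below are arbitrary illustration choices):

```python
# Exact check of Var(XY) = sx2*sy2 + sx2*my^2 + sy2*mx^2 for independent X, Y.
px = {0: 1/3, 1: 1/3, 2: 1/3}     # distribution of X (illustration values)
py = {1: 1/2, 3: 1/2}             # distribution of Y (illustration values)

def mean(p):
    return sum(v * q for v, q in p.items())

def var(p):
    m = mean(p)
    return sum((v - m) ** 2 * q for v, q in p.items())

mx, my, sx2, sy2 = mean(px), mean(py), var(px), var(py)

pz = {}                            # distribution of Z = XY by enumeration
for x, qx in px.items():
    for y, qy in py.items():
        pz[x * y] = pz.get(x * y, 0.0) + qx * qy

lhs = var(pz)
rhs = sx2 * sy2 + sx2 * my ** 2 + sy2 * mx ** 2   # both equal 13/3
```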
AssumeX,Yare independent random variables. The characteristic function ofXisφX(t){\displaystyle \varphi _{X}(t)}, and the distribution ofYis known. Then from thelaw of total expectation, we have[6]
If the characteristic functions and distributions of bothXandYare known, then alternatively,φZ(t)=E(φY(tX)){\displaystyle \varphi _{Z}(t)=\operatorname {E} (\varphi _{Y}(tX))}also holds.
TheMellin transformof a distributionf(x){\displaystyle f(x)}with supportonlyonx≥0{\displaystyle x\geq 0}and having a random sampleX{\displaystyle X}is
The inverse transform is
ifXandY{\displaystyle X{\text{ and }}Y}are two independent random samples from different distributions, then the Mellin transform of their product is equal to the product of their Mellin transforms:
Ifsis restricted to integer values, a simpler result is
Thus the moments of the random productXY{\displaystyle XY}are the product of the corresponding moments ofXandY{\displaystyle X{\text{ and }}Y}and this extends to non-integer moments, for example
The pdf of a function can be reconstructed from its moments using thesaddlepoint approximation method.
A further result is that for independentX,Y
Gamma distribution exampleTo illustrate how the product of moments yields a much simpler result than finding the moments of the distribution of the product, letX,Y{\displaystyle X,Y}be sampled from twoGamma distributions,fGamma(x;θ,1)=Γ(θ)−1xθ−1e−x{\displaystyle f_{Gamma}(x;\theta ,1)=\Gamma (\theta )^{-1}x^{\theta -1}e^{-x}}with parametersθ=α,β{\displaystyle \theta =\alpha ,\beta }whose moments are
Multiplying the corresponding moments gives the Mellin transform result
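Here the Gamma(θ, 1) moments are E[Xⁿ] = Γ(θ+n)/Γ(θ), so E[(XY)ⁿ] = Γ(α+n)Γ(β+n)/(Γ(α)Γ(β)). A sketch that recovers each factor's moment by direct quadrature and multiplies them (α, β, n below are arbitrary illustration values):

```python
import math

# Check E[X^n] = Gamma(alpha+n)/Gamma(alpha) for X ~ Gamma(alpha, 1) by
# quadrature, then form the product moment E[(XY)^n] = E[X^n] * E[Y^n].

def gamma_moment_numeric(alpha, n, upper=60.0, steps=60000):
    # trapezoid rule for ∫ x^(n+alpha-1) e^-x dx / Gamma(alpha)
    h = upper / steps
    vals = [(i * h) ** (n + alpha - 1) * math.exp(-i * h)
            for i in range(steps + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1])) / math.gamma(alpha)

alpha, beta, n = 2.0, 3.0, 2
mx = gamma_moment_numeric(alpha, n)    # should be Gamma(4)/Gamma(2) = 6
my = gamma_moment_numeric(beta, n)     # should be Gamma(5)/Gamma(3) = 12
product_moment = mx * my               # E[(XY)^2] = 72
```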
Independently, it is known that the product of two independent Gamma-distributed samples (~Gamma(α,1) and Gamma(β,1)) has aK-distribution:
To find the moments of this, make the change of variabley=2z{\displaystyle y=2{\sqrt {z}}}, simplifying similar integrals to:
thus
The definite integral
which, after some manipulation, agrees with the moment product result above.
IfX,Yare drawn independently from Gamma distributions with shape parametersα,β{\displaystyle \alpha ,\;\beta }then
This type of result is universally true, since for bivariate independent variablesfX,Y(x,y)=fX(x)fY(y){\displaystyle f_{X,Y}(x,y)=f_{X}(x)f_{Y}(y)}thus
or equivalently it is clear thatXpandYq{\displaystyle X^{p}{\text{ and }}Y^{q}}are independent variables.
The distribution of the product of two random variables which havelognormal distributionsis again lognormal. This is itself a special case of a more general set of results where the logarithm of the product can be written as the sum of the logarithms. Thus, in cases where a simple result can be found in thelist of convolutions of probability distributions, where the distributions to be convolved are those of the logarithms of the components of the product, the result might be transformed to provide the distribution of the product. However this approach is only useful where the logarithms of the components of the product are in some standard families of distributions.
LetZ{\displaystyle Z}be the product of two independent variablesZ=X1X2{\displaystyle Z=X_{1}X_{2}}each uniformly distributed on the interval [0,1], possibly the outcome of acopulatransformation. As noted in "Lognormal Distributions" above, PDF convolution operations in the Log domain correspond to the product of sample values in the original domain. Thus, making the transformationu=ln(x){\displaystyle u=\ln(x)}, such thatpU(u)|du|=pX(x)|dx|{\displaystyle p_{U}(u)\,|du|=p_{X}(x)\,|dx|}, each variate is distributed independently onuas
and the convolution of the two distributions is the autoconvolution
Next retransform the variable toz=ey{\displaystyle z=e^{y}}yielding the distribution
For the product of multiple (> 2) independent samples thecharacteristic functionroute is favorable. If we definey~=−y{\displaystyle {\tilde {y}}=-y}thenc(y~){\displaystyle c({\tilde {y}})}above is aGamma distributionof shape 1 and scale factor 1,c(y~)=y~e−y~{\displaystyle c({\tilde {y}})={\tilde {y}}e^{-{\tilde {y}}}}, and its known CF is(1−it)−1{\displaystyle (1-it)^{-1}}. Note that|dy~|=|dy|{\displaystyle |d{\tilde {y}}|=|dy|}so the Jacobian of the transformation is unity.
The convolution ofn{\displaystyle n}independent samples fromY~{\displaystyle {\tilde {Y}}}therefore has CF(1−it)−n{\displaystyle (1-it)^{-n}}which is known to be the CF of a Gamma distribution of shapen{\displaystyle n}:
Make the inverse transformationz=ey{\displaystyle z=e^{y}}to extract the PDF of the product of thensamples:
The following, more conventional, derivation from Stackexchange[7]is consistent with this result.
First of all, lettingZ2=X1X2{\displaystyle Z_{2}=X_{1}X_{2}}its CDF is
The density ofz2is thenf(z2)=−log(z2){\displaystyle z_{2}{\text{ is then }}f(z_{2})=-\log(z_{2})}
Multiplying by a third independent sample gives distribution function
Taking the derivative yieldsfZ3(z)=12log2(z),0<z≤1.{\displaystyle f_{Z_{3}}(z)={\frac {1}{2}}\log ^{2}(z),\;\;0<z\leq 1.}
The author of the note conjectures that, in general,fZn(z)=(−logz)n−1(n−1)!,0<z≤1{\displaystyle f_{Z_{n}}(z)={\frac {(-\log z)^{n-1}}{(n-1)!\;\;\;}},\;\;0<z\leq 1}
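The conjectured density can be sanity-checked numerically: with the substitution u = −ln z it should integrate to 1, and it should reproduce the known mean E[Z_n] = 2⁻ⁿ of a product of n independent Uniform(0,1) samples.

```python
import math

# Check f_{Z_n}(z) = (-ln z)^(n-1)/(n-1)! via the substitution u = -ln z:
# normalization becomes ∫ u^(n-1) e^-u / (n-1)! du = 1 on [0, ∞), and the
# mean becomes ∫ u^(n-1) e^(-2u) / (n-1)! du = 2^-n.

def trapz(f, a, b, steps):
    h = (b - a) / steps
    vals = [f(a + i * h) for i in range(steps + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

n = 4
norm = trapz(lambda u: u ** (n - 1) * math.exp(-u) / math.factorial(n - 1),
             0.0, 50.0, 50000)          # should be ~1
mean = trapz(lambda u: u ** (n - 1) * math.exp(-2 * u) / math.factorial(n - 1),
             0.0, 50.0, 50000)          # should be ~2^-4 = 0.0625
```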
The figure illustrates the nature of the integrals above. The area of the selection within the unit square and below the line z = xy, represents the CDF of z. This divides into two parts. The first is for 0 < x < z where the increment of area in the vertical slot is just equal todx. The second part lies below thexyline, hasy-heightz/x, and incremental areadx z/x.
The density of the product of two independent Normal samples involves amodified Bessel function. Letx,y{\displaystyle x,y}be independent samples from a Normal(0,1) distribution andz=xy{\displaystyle z=xy}.
Then
The variance of this distribution could be determined, in principle, by a definite integral from Gradsheyn and Ryzhik,[8]
thusE[Z2]=∫−∞∞z2K0(|z|)πdz=4πΓ2(32)=1{\displaystyle \operatorname {E} [Z^{2}]=\int _{-\infty }^{\infty }{\frac {z^{2}K_{0}(|z|)}{\pi }}\,dz={\frac {4}{\pi }}\;\Gamma ^{2}{\Big (}{\frac {3}{2}}{\Big )}=1}
A much simpler result, stated in a section above, is that the variance of the product of zero-mean independent samples is equal to the product of their variances. Since the variance of each Normal sample is one, the variance of the product is also one.
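A seeded Monte Carlo sketch of this fact (sample size and seed below are arbitrary choices):

```python
import random
import statistics

# Monte Carlo check: the product of two independent N(0,1) samples has mean 0
# and variance 1, matching "variance of the product = product of variances"
# for zero-mean independent factors.  Fixed seed keeps the run reproducible.
random.seed(12345)
N = 100_000
products = [random.gauss(0, 1) * random.gauss(0, 1) for _ in range(N)]

m = statistics.fmean(products)
v = statistics.fmean([p * p for p in products]) - m * m
```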
The product of two Gaussian samples is often confused with the product of two Gaussian PDFs. The latter simply results in a bivariate Gaussian distribution.
The case of the product of correlated Normal samples was addressed by Nadarajah and Pogány.[9]LetX,Y{\displaystyle X{\text{, }}Y}be zero mean, unit variance, normally distributed variates with correlation coefficientρand letZ=XY{\displaystyle \rho {\text{ and let }}Z=XY}
Then
Mean and variance: For the mean we haveE[Z]=ρ{\displaystyle \operatorname {E} [Z]=\rho }from the definition of correlation coefficient. The variance can be found by transforming from two unit variance zero mean uncorrelated variablesU, V. Let
ThenX, Yare unit variance variables with correlation coefficientρ{\displaystyle \rho }and
Removing odd-power terms, whose expectations are obviously zero, we get
Since(E[Z])2=ρ2{\displaystyle (\operatorname {E} [Z])^{2}=\rho ^{2}}we have
High correlation asymptoteIn the highly correlated case,ρ→1{\displaystyle \rho \rightarrow 1}the product converges on the square of one sample. In this case theK0{\displaystyle K_{0}}asymptote isK0(x)→π2xe−xin the limit asx=|z|1−ρ2→∞{\displaystyle K_{0}(x)\rightarrow {\sqrt {\tfrac {\pi }{2x}}}e^{-x}{\text{ in the limit as }}x={\frac {|z|}{1-\rho ^{2}}}\rightarrow \infty }and
which is aChi-squared distributionwith one degree of freedom.
Multiple correlated samples. Nadarajah et al. further show that ifZ1,Z2,..Znaren{\displaystyle Z_{1},Z_{2},..Z_{n}{\text{ are }}n}iid random variables sampled fromfZ(z){\displaystyle f_{Z}(z)}andZ¯=1n∑Zi{\displaystyle {\bar {Z}}={\tfrac {1}{n}}\sum Z_{i}}is their mean then
whereWis the Whittaker function whileβ=n1−ρ,γ=n1+ρ{\displaystyle \beta ={\frac {n}{1-\rho }},\;\;\gamma ={\frac {n}{1+\rho }}}.
Using the identityW0,ν(x)=xπKν(x/2),x≥0{\displaystyle W_{0,\nu }(x)={\sqrt {\frac {x}{\pi }}}K_{\nu }(x/2),\;\;x\geq 0}, see for example the DLMF compilation. eqn(13.13.9),[10]this expression can be somewhat simplified to
The pdf gives the marginal distribution of a sample bivariate normal covariance, a result also shown in the Wishart Distribution article. The approximate distribution of a correlation coefficient can be found via theFisher transformation.
Multiple non-central correlated samples. The distribution of the product of correlated non-central normal samples was derived by Cui et al.[11]and takes the form of an infinite series of modified Bessel functions of the first kind.
Moments of product of correlated central normal samples
For a centralnormal distributionN(0,1) the moments are
wheren!!{\displaystyle n!!}denotes thedouble factorial.
IfX,Y∼Norm(0,1){\displaystyle X,Y\sim {\text{Norm}}(0,1)}are central correlated variables, the simplest bivariate case of the multivariate normal moment problem described by Kan,[12]then
where
[needs checking]
The distribution of the product of non-central correlated normal samples was derived by Cui et al.[11]and takes the form of an infinite series.
These product distributions are somewhat comparable to theWishart distribution. The latter is thejointdistribution of the four elements (actually only three independent elements) of a sample covariance matrix. Ifxt,yt{\displaystyle x_{t},y_{t}}are samples from a bivariate time series thenW=∑t=1K(xtyt)(xtyt)T{\displaystyle W=\sum _{t=1}^{K}{\dbinom {x_{t}}{y_{t}}}{\dbinom {x_{t}}{y_{t}}}^{T}}is a Wishart matrix withKdegrees of freedom. The product distributions above are the unconditional distribution of the aggregate ofK> 1 samples ofW2,1{\displaystyle W_{2,1}}.
Letu1,v1,u2,v2{\displaystyle u_{1},v_{1},u_{2},v_{2}}be independent samples from a normal(0,1) distribution.Settingz1=u1+iv1andz2=u2+iv2thenz1,z2{\displaystyle z_{1}=u_{1}+iv_{1}{\text{ and }}z_{2}=u_{2}+iv_{2}{\text{ then }}z_{1},z_{2}}are independent zero-mean complex normal samples with circular symmetry. Their complex variances areVar|zi|=2.{\displaystyle \operatorname {Var} |z_{i}|=2.}
The density functions of
The variableyi≡ri2{\displaystyle y_{i}\equiv r_{i}^{2}}is clearly Chi-squared with two degrees of freedom and has PDF
Wells et al.[13]show that the density function ofs≡|z1z2|{\displaystyle s\equiv |z_{1}z_{2}|}is
and the cumulative distribution function ofs{\displaystyle s}is
Thus the polar representation of the product of two uncorrelated complex Gaussian samples is
The first and second moments of this distribution can be found from the integral inNormal Distributionsabove
Thus its variance isVar(s)=m2−m12=4−π24{\displaystyle \operatorname {Var} (s)=m_{2}-m_{1}^{2}=4-{\frac {\pi ^{2}}{4}}}.
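These moments also follow directly from each |z_i| being Rayleigh-distributed, with E|z_i| = √(π/2) and E|z_i|² = 2, so that by independence m₁ = π/2 and m₂ = 4:

```python
import math

# Moments of s = |z1 z2| from the Rayleigh moments of each factor:
# E|z_i| = sqrt(pi/2), E|z_i|^2 = Var(u_i) + Var(v_i) = 2.
e_abs = math.sqrt(math.pi / 2.0)   # E|z_i|
m1 = e_abs * e_abs                 # E[s]   = pi/2, by independence
m2 = 2.0 * 2.0                     # E[s^2] = 4
variance = m2 - m1 ** 2            # = 4 - pi^2/4
```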
Further, the density ofz≡s2=|r1r2|2=|r1|2|r2|2=y1y2{\displaystyle z\equiv s^{2}={|r_{1}r_{2}|}^{2}={|r_{1}|}^{2}{|r_{2}|}^{2}=y_{1}y_{2}}corresponds to the product of two independent Chi-square samplesyi{\displaystyle y_{i}}each with two DoF. Writing these as scaled Gamma distributionsfy(yi)=1θΓ(1)e−yi/θwithθ=2{\displaystyle f_{y}(y_{i})={\tfrac {1}{\theta \Gamma (1)}}e^{-y_{i}/\theta }{\text{ with }}\theta =2}then, from the Gamma products below, the density of the product is
Letu1,v1,u2,v2,…,u2N,v2N,{\displaystyle u_{1},v_{1},u_{2},v_{2},\ldots ,u_{2N},v_{2N},}be4N{\displaystyle 4N}independent samples from a normal(0,1) distribution.Settingz1=u1+iv1,z2=u2+iv2,…,andz2N=u2N+iv2N,{\displaystyle z_{1}=u_{1}+iv_{1},z_{2}=u_{2}+iv_{2},\ldots ,{\text{ and }}z_{2N}=u_{2N}+iv_{2N},}thenz1,z2,…,z2N{\displaystyle z_{1},z_{2},\ldots ,z_{2N}}are independent zero-mean complex normal samples with circular symmetry.
Fors≡∑i=1Nz2i−1z2i{\displaystyle s\equiv \sum _{i=1}^{N}z_{2i-1}z_{2i}}, Heliot et al.[14]show that the joint density function of the real and imaginary parts ofs{\displaystyle s}, denotedsR{\displaystyle s_{\textrm {R}}}andsI{\displaystyle s_{\textrm {I}}}, respectively, is given by
psR,sI(sR,sI)=2(sR2+sI2)N−12πΓ(n)σsN+1Kn−1(2sR2+sI2σs),{\displaystyle p_{s_{\textrm {R}},s_{\textrm {I}}}(s_{\textrm {R}},s_{\textrm {I}})={\frac {2\left(s_{\textrm {R}}^{2}+s_{\textrm {I}}^{2}\right)^{\frac {N-1}{2}}}{\pi \Gamma (n)\sigma _{s}^{N+1}}}K_{n-1}\!\left(\!2{\frac {\sqrt {s_{\textrm {R}}^{2}+s_{\textrm {I}}^{2}}}{\sigma _{s}}}\right),}whereσs{\displaystyle \sigma _{s}}is the standard deviation ofs{\displaystyle s}. Note thatσs=1{\displaystyle \sigma _{s}=1}if all theui,vi{\displaystyle u_{i},v_{i}}variables are normal(0,1).
Besides, they also prove that the density function of the magnitude ofs{\displaystyle s},|s|{\displaystyle |s|}, is
p|s|(s)=4Γ(N)σsN+1sNKN−1(2sσs),{\displaystyle p_{|s|}(s)={\frac {4}{\Gamma (N)\sigma _{s}^{N+1}}}s^{N}K_{N-1}\left({\frac {2s}{\sigma _{s}}}\right),}wheres=sR2+sI2{\displaystyle s={\sqrt {s_{\textrm {R}}^{2}+s_{\textrm {I}}^{2}}}}.
The first moment of this distribution, i.e. the mean of|s|{\displaystyle |s|}, can be expressed as
E{|s|}=πσsΓ(N+12)2Γ(N),{\displaystyle E\{|s|\}={\sqrt {\pi }}\sigma _{s}{\frac {\Gamma (N+{\frac {1}{2}})}{2\Gamma (N)}},}which further simplifies asE{|s|}∼σsπN2,{\displaystyle E\{|s|\}\sim {\frac {\sigma _{s}{\sqrt {\pi N}}}{2}},}whenN{\displaystyle N}is asymptotically large (i.e.,N→∞{\displaystyle N\rightarrow \infty }) .
The product of non-central independent complex Gaussians is described by O’Donoughue and Moura[15]and forms a double infinite series ofmodified Bessel functionsof the first and second types.
The product of two independent Gamma samples,z=x1x2{\displaystyle z=x_{1}x_{2}}, definingΓ(x;ki,θi)=xki−1e−x/θiΓ(ki)θiki{\displaystyle \Gamma (x;k_{i},\theta _{i})={\frac {x^{k_{i}-1}e^{-x/\theta _{i}}}{\Gamma (k_{i})\theta _{i}^{k_{i}}}}}, follows[16]
Nagar et al.[17]define a correlated bivariate beta distribution
where
Then the pdf ofZ=XYis given by
where2F1{\displaystyle {_{2}F_{1}}}is the Gauss hypergeometric function defined by the Euler integral
Note that multivariate distributions are not generally unique, apart from the Gaussian case, and there may be alternatives.
The distribution of the product of a random variable having auniform distributionon (0,1) with a random variable having agamma distributionwith shape parameter equal to 2, is anexponential distribution.[18]A more general case of this concerns the distribution of the product of a random variable having abeta distributionwith a random variable having agamma distribution: for some cases where the parameters of the two component distributions are related in a certain way, the result is again a gamma distribution but with a changed shape parameter.[18]
TheK-distributionis an example of a non-standard distribution that can be defined as a product distribution (where both components have a gamma distribution).
The product ofnGamma andmPareto independent samples was derived by Nadarajah.[19]
|
https://en.wikipedia.org/wiki/Product_distribution#expectation
|
Instatistics, theHorvitz–Thompson estimator, named afterDaniel G. Horvitzand Donovan J. Thompson,[1]is a method for estimating the total[2]and mean of apseudo-populationin astratified sampleby applyinginverse probability weightingto account for the difference in the sampling distribution between the collected data and the target population. The Horvitz–Thompson estimator is frequently applied insurvey analysesand can be used to account formissing data, as well as manysources of unequal selection probabilities.
Formally, let Yi,i=1,2,…,n{\displaystyle Y_{i},i=1,2,\ldots ,n} be an independent sample from n{\displaystyle n} of N≥n{\displaystyle N\geq n} distinct strata with an overall mean μ{\displaystyle \mu }. Suppose further that πi{\displaystyle \pi _{i}} is the inclusion probability that a randomly sampled individual in a superpopulation belongs to the i{\displaystyle i}th stratum. The Horvitz–Thompson estimator of the total is given by:[3]: 51
Y^HT=∑i=1nYiπi{\displaystyle {\hat {Y}}_{HT}=\sum _{i=1}^{n}{\frac {Y_{i}}{\pi _{i}}}}
and the Horvitz–Thompson estimate of the mean is given by:
μ^HT=Y^HTN=1N∑i=1nYiπi{\displaystyle {\hat {\mu }}_{HT}={\frac {{\hat {Y}}_{HT}}{N}}={\frac {1}{N}}\sum _{i=1}^{n}{\frac {Y_{i}}{\pi _{i}}}}
In a Bayesian probabilistic framework πi{\displaystyle \pi _{i}} is considered the proportion of individuals in a target population belonging to the i{\displaystyle i}th stratum. Hence, Yi/πi{\displaystyle Y_{i}/\pi _{i}} could be thought of as an estimate of the complete sample of persons within the i{\displaystyle i}th stratum. The Horvitz–Thompson estimator can also be expressed as the limit of a weighted bootstrap resampling estimate of the mean. It can also be viewed as a special case of multiple imputation approaches.[4]
For post-stratified study designs, estimation of π{\displaystyle \pi } and μ{\displaystyle \mu } are done in distinct steps. In such cases, computing the variance of μ^HT{\displaystyle {\hat {\mu }}_{HT}} is not straightforward. Resampling techniques such as the bootstrap or the jackknife can be applied to gain consistent estimates of the variance of the Horvitz–Thompson estimator.[5] The "survey" package for R conducts analyses for post-stratified data using the Horvitz–Thompson estimator.[6]
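As a concrete illustration of the estimators just described, here is a minimal sketch (the function name and the toy numbers are illustrative, not from the source):

```python
import numpy as np

def horvitz_thompson(y, pi, N=None):
    """Horvitz-Thompson estimates from sampled values y and their known
    inclusion probabilities pi; the mean estimate requires the population size N."""
    y, pi = np.asarray(y, dtype=float), np.asarray(pi, dtype=float)
    total = float(np.sum(y / pi))          # inverse-probability-weighted total
    mean = total / N if N is not None else None
    return total, mean

# Toy check: two sampled units, each included with probability 1/2,
# drawn from a population of N = 4 units.
total, mean = horvitz_thompson([10.0, 20.0], [0.5, 0.5], N=4)
print(total, mean)   # total = 60.0, mean = 15.0
```

Each observed value is scaled up by the reciprocal of its inclusion probability, so a unit sampled with probability 1/2 "stands in" for two population units.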
For this proof it will be useful to represent the sample as a random subset S⊆{1,…,N}{\displaystyle S\subseteq \{1,\ldots ,N\}} of size n{\displaystyle n}. We can then define indicator random variables Ij=1[j∈S]{\displaystyle I_{j}=\mathbf {1} [j\in S]} representing whether each j{\displaystyle j} in {1,…,N}{\displaystyle \{1,\ldots ,N\}} is present in the sample. Note that for any observation in the sample, the expectation is the definition of the inclusion probability: πi=E(Ii)=Pr(i∈S){\displaystyle \pi _{i}=\operatorname {\mathbb {E} } \left(I_{i}\right)=\Pr(i\in S)}.[a]
Taking the expectation of the estimator, we can prove it is unbiased as follows:
E(Y^HT)=E(∑j=1NIjYjπj)=∑j=1NYjπjE(Ij)=∑j=1NYj{\displaystyle \operatorname {\mathbb {E} } ({\hat {Y}}_{HT})=\operatorname {\mathbb {E} } \left(\sum _{j=1}^{N}I_{j}{\frac {Y_{j}}{\pi _{j}}}\right)=\sum _{j=1}^{N}{\frac {Y_{j}}{\pi _{j}}}\operatorname {\mathbb {E} } (I_{j})=\sum _{j=1}^{N}Y_{j}}
which is the population total.
The Hansen–Hurwitz (1943) estimator is known to be inferior to the Horvitz–Thompson (1952) strategy, which is associated with a number of Inclusion Probabilities Proportional to Size (IPPS) sampling procedures.[7]
|
https://en.wikipedia.org/wiki/Horvitz%E2%80%93Thompson_estimator
|
In statistics, quality assurance, and survey methodology, sampling is the selection of a subset or a statistical sample (termed sample for short) of individuals from within a statistical population to estimate characteristics of the whole population. The subset is meant to reflect the whole population, and statisticians attempt to collect samples that are representative of the population. Sampling has lower costs and faster data collection compared to recording data from the entire population (in many cases, collecting the whole population is impossible, like getting sizes of all stars in the universe), and thus, it can provide insights in cases where it is infeasible to measure an entire population.
Each observation measures one or more properties (such as weight, location, colour or mass) of independent objects or individuals. In survey sampling, weights can be applied to the data to adjust for the sample design, particularly in stratified sampling.[1] Results from probability theory and statistical theory are employed to guide the practice. In business and medical research, sampling is widely used for gathering information about a population.[2] Acceptance sampling is used to determine if a production lot of material meets the governing specifications.
Random sampling by using lots is an old idea, mentioned several times in the Bible. In 1786, Pierre Simon Laplace estimated the population of France by using a sample, along with a ratio estimator. He also computed probabilistic estimates of the error. These were not expressed as modern confidence intervals but as the sample size that would be needed to achieve a particular upper bound on the sampling error with probability 1000/1001. His estimates used Bayes' theorem with a uniform prior probability and assumed that his sample was random. Alexander Ivanovich Chuprov introduced sample surveys to Imperial Russia in the 1870s.[3]
In the US, the 1936 Literary Digest prediction of a Republican win in the presidential election went badly awry, due to severe bias.[1] More than two million people responded to the study, with their names obtained through magazine subscription lists and telephone directories. It was not appreciated that these lists were heavily biased towards Republicans, and the resulting sample, though very large, was deeply flawed.[4][5]
Elections in Singapore have adopted this practice since the 2015 election, in the form of sample counts. According to the Elections Department (ELD), the country's election commission, sample counts help reduce speculation and misinformation, while helping election officials to check against the election result for that electoral division. While the reported sample counts yield a fairly accurate indicative result with a 4% margin of error at a 95% confidence level, ELD reminded the public that sample counts are separate from official results, and only the returning officer will declare the official results once vote counting is complete.[6][7]
Successful statistical practice is based on focused problem definition. In sampling, this includes defining the "population" from which our sample is drawn. A population can be defined as including all people or items with the characteristics one wishes to understand. Because there is very rarely enough time or money to gather information from everyone or everything in a population, the goal becomes finding a representative sample (or subset) of that population.
Sometimes what defines a population is obvious. For example, a manufacturer needs to decide whether a batch of material from production is of high enough quality to be released to the customer or should be scrapped or reworked due to poor quality. In this case, the batch is the population.
Although the population of interest often consists of physical objects, sometimes it is necessary to sample over time, space, or some combination of these dimensions. For instance, an investigation of supermarket staffing could examine checkout line length at various times, or a study on endangered penguins might aim to understand their usage of various hunting grounds over time. For the time dimension, the focus may be on periods or discrete occasions.
In other cases, the examined 'population' may be even less tangible. For example, Joseph Jagger studied the behaviour of roulette wheels at a casino in Monte Carlo, and used this to identify a biased wheel. In this case, the 'population' Jagger wanted to investigate was the overall behaviour of the wheel (i.e. the probability distribution of its results over infinitely many trials), while his 'sample' was formed from observed results from that wheel. Similar considerations arise when taking repeated measurements of properties of materials such as the electrical conductivity of copper.
This situation often arises when seeking knowledge about the cause system of which the observed population is an outcome. In such cases, sampling theory may treat the observed population as a sample from a larger 'superpopulation'. For example, a researcher might study the success rate of a new 'quit smoking' program on a test group of 100 patients, in order to predict the effects of the program if it were made available nationwide. Here the superpopulation is "everybody in the country, given access to this treatment" – a group that does not yet exist since the program is not yet available to all.
The population from which the sample is drawn may not be the same as the population from which information is desired. Often there is a large but not complete overlap between these two groups due to frame issues etc. (see below). Sometimes they may be entirely separate – for instance, one might study rats in order to get a better understanding of human health, or one might study records from people born in 2008 in order to make predictions about people born in 2009.
Time spent in making the sampled population and population of concern precise is often well spent because it raises many issues, ambiguities, and questions that would otherwise have been overlooked at this stage.
In the most straightforward case, such as the sampling of a batch of material from production (acceptance sampling by lots), it would be most desirable to identify and measure every single item in the population and to include any one of them in our sample. However, in the more general case this is not usually possible or practical. There is no way to identify all rats in the set of all rats. Where voting is not compulsory, there is no way to identify which people will vote at a forthcoming election (in advance of the election). These imprecise populations are not amenable to sampling in any of the ways below and to which we could apply statistical theory.
As a remedy, we seek a sampling frame which has the property that we can identify every single element and include any in our sample.[8][9][10][11] The most straightforward type of frame is a list of elements of the population (preferably the entire population) with appropriate contact information. For example, in an opinion poll, possible sampling frames include an electoral register and a telephone directory.
A probability sample is a sample in which every unit in the population has a chance (greater than zero) of being selected in the sample, and this probability can be accurately determined. The combination of these traits makes it possible to produce unbiased estimates of population totals, by weighting sampled units according to their probability of selection.
Example: We want to estimate the total income of adults living in a given street. We visit each household in that street, identify all adults living there, and randomly select one adult from each household. (For example, we can allocate each person a random number, generated from a uniform distribution between 0 and 1, and select the person with the highest number in each household). We then interview the selected person and find their income.
People living on their own are certain to be selected, so we simply add their income to our estimate of the total. But a person living in a household of two adults has only a one-in-two chance of selection. To reflect this, when we come to such a household, we would count the selected person's income twice towards the total. (The person who is selected from that household can be loosely viewed as also representing the person who isn't selected.)
In the above example, not everybody has the same probability of selection; what makes it a probability sample is the fact that each person's probability is known. When every element in the population does have the same probability of selection, this is known as an 'equal probability of selection' (EPS) design. Such designs are also referred to as 'self-weighting' because all sampled units are given the same weight.
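The street-survey example can be simulated to show that this inverse-probability weighting is unbiased: averaging the weighted estimate over many hypothetical survey passes recovers the true total. A sketch under assumed household incomes (all numbers hypothetical):

```python
import random

# Hypothetical street: each household is a list of its adults' incomes.
households = [
    [30_000],                       # single adult: selected with probability 1
    [25_000, 45_000],               # two adults: each selected with probability 1/2
    [20_000, 30_000, 40_000],       # three adults: probability 1/3 each
]

true_total = sum(sum(h) for h in households)

def ht_total_estimate(households, rng):
    """One survey pass: pick one adult per household, then weight that
    person's income by the inverse of their selection probability."""
    est = 0.0
    for h in households:
        person = rng.choice(h)       # equal chance within the household
        est += person * len(h)       # weight = 1 / (1/len(h))
    return est

# Average over many simulated passes: the estimator is unbiased.
rng = random.Random(0)
runs = 20_000
avg = sum(ht_total_estimate(households, rng) for _ in range(runs)) / runs
print(true_total, round(avg))
```

Any single pass can miss high or low, but the long-run average of the weighted estimates converges on the true street total.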
Probability sampling includes: simple random sampling, systematic sampling, stratified sampling, probability-proportional-to-size sampling, and cluster or multistage sampling. These various ways of probability sampling have two things in common:
Nonprobability sampling is any sampling method where some elements of the population have no chance of selection (these are sometimes referred to as 'out of coverage'/'undercovered'), or where the probability of selection cannot be accurately determined. It involves the selection of elements based on assumptions regarding the population of interest, which forms the criteria for selection. Hence, because the selection of elements is nonrandom, nonprobability sampling does not allow the estimation of sampling errors. These conditions give rise to exclusion bias, placing limits on how much information a sample can provide about the population. Information about the relationship between sample and population is limited, making it difficult to extrapolate from the sample to the population.
Example: We visit every household in a given street, and interview the first person to answer the door. In any household with more than one occupant, this is a nonprobability sample, because some people are more likely to answer the door (e.g. an unemployed person who spends most of their time at home is more likely to answer than an employed housemate who might be at work when the interviewer calls) and it's not practical to calculate these probabilities.
Nonprobability sampling methods include convenience sampling, quota sampling, and purposive sampling. In addition, nonresponse effects may turn any probability design into a nonprobability design if the characteristics of nonresponse are not well understood, since nonresponse effectively modifies each element's probability of being sampled.
Within any of the types of frames identified above, a variety of sampling methods can be employed individually or in combination. Factors commonly influencing the choice between these designs include:
In a simple random sample (SRS) of a given size, all subsets of a sampling frame have an equal probability of being selected. Each element of the frame thus has an equal probability of selection: the frame is not subdivided or partitioned. Furthermore, any given pair of elements has the same chance of selection as any other such pair (and similarly for triples, and so on). This minimizes bias and simplifies analysis of results. In particular, the variance between individual results within the sample is a good indicator of variance in the overall population, which makes it relatively easy to estimate the accuracy of results.
Simple random sampling can be vulnerable to sampling error because the randomness of the selection may result in a sample that does not reflect the makeup of the population. For instance, a simple random sample of ten people from a given country will on average produce five men and five women, but any given trial is likely to overrepresent one sex and underrepresent the other. Systematic and stratified techniques attempt to overcome this problem by "using information about the population" to choose a more "representative" sample.
Also, simple random sampling can be cumbersome and tedious when sampling from a large target population. In some cases, investigators are interested in research questions specific to subgroups of the population. For example, researchers might be interested in examining whether cognitive ability as a predictor of job performance is equally applicable across racial groups. Simple random sampling cannot accommodate the needs of researchers in this situation, because it does not provide subsamples of the population, and other sampling strategies, such as stratified sampling, can be used instead.
Systematic sampling (also known as interval sampling) relies on arranging the study population according to some ordering scheme, and then selecting elements at regular intervals through that ordered list. Systematic sampling involves a random start and then proceeds with the selection of every kth element from then onwards. In this case, k = (population size / sample size). It is important that the starting point is not automatically the first in the list, but is instead randomly chosen from within the first to the kth element in the list. A simple example would be to select every 10th name from the telephone directory (an 'every 10th' sample, also referred to as 'sampling with a skip of 10').
As long as the starting point is randomized, systematic sampling is a type of probability sampling. It is easy to implement and the stratification induced can make it efficient, if the variable by which the list is ordered is correlated with the variable of interest. 'Every 10th' sampling is especially useful for efficient sampling from databases.
For example, suppose we wish to sample people from a long street that starts in a poor area (house No. 1) and ends in an expensive district (house No. 1000). A simple random selection of addresses from this street could easily end up with too many from the high end and too few from the low end (or vice versa), leading to an unrepresentative sample. Selecting (e.g.) every 10th street number along the street ensures that the sample is spread evenly along the length of the street, representing all of these districts. (If we always start at house #1 and end at #991, the sample is slightly biased towards the low end; by randomly selecting the start between #1 and #10, this bias is eliminated.)
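The every-kth selection with a random start, as described for the street of 1000 houses, can be sketched as follows (the function name is illustrative):

```python
import random

def systematic_sample(population, n, rng=random):
    """Select n elements at interval k = len(population) // n,
    starting from a random position within the first interval."""
    k = len(population) // n          # the sampling interval ('skip')
    start = rng.randrange(k)          # random start in [0, k)
    return [population[start + i * k] for i in range(n)]

random.seed(1)
houses = list(range(1, 1001))         # house numbers 1..1000
sample = systematic_sample(houses, 100)
print(sample[:5])
```

Every selected house number differs from the next by exactly k = 10, so the sample is spread evenly along the street, while the random start keeps each house's selection probability equal.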
However, systematic sampling is especially vulnerable to periodicities in the list. If periodicity is present and the period is a multiple or factor of the interval used, the sample is especially likely to be unrepresentative of the overall population, making the scheme less accurate than simple random sampling.
For example, consider a street where the odd-numbered houses are all on the north (expensive) side of the road, and the even-numbered houses are all on the south (cheap) side. Under the sampling scheme given above, it is impossible to get a representative sample; either the houses sampled will all be from the odd-numbered, expensive side, or they will all be from the even-numbered, cheap side, unless the researcher has previous knowledge of this bias and avoids it by using a skip which ensures jumping between the two sides (any odd-numbered skip).
Another drawback of systematic sampling is that even in scenarios where it is more accurate than SRS, its theoretical properties make it difficult to quantify that accuracy. (In the two examples of systematic sampling that are given above, much of the potential sampling error is due to variation between neighbouring houses – but because this method never selects two neighbouring houses, the sample will not give us any information on that variation.)
As described above, systematic sampling is an EPS method, because all elements have the same probability of selection (in the example given, one in ten). It is not 'simple random sampling' because different subsets of the same size have different selection probabilities – e.g. the set {4,14,24,...,994} has a one-in-ten probability of selection, but the set {4,13,24,34,...} has zero probability of selection.
Systematic sampling can also be adapted to a non-EPS approach; for an example, see discussion of PPS samples below.
When the population embraces a number of distinct categories, the frame can be organized by these categories into separate "strata." Each stratum is then sampled as an independent sub-population, out of which individual elements can be randomly selected.[8] The ratio of the size of this random selection (or sample) to the size of the population is called a sampling fraction.[12] There are several potential benefits to stratified sampling.[12]
First, dividing the population into distinct, independent strata can enable researchers to draw inferences about specific subgroups that may be lost in a more generalized random sample.
Second, utilizing a stratified sampling method can lead to more efficient statistical estimates (provided that strata are selected based upon relevance to the criterion in question, instead of availability of the samples). Even if a stratified sampling approach does not lead to increased statistical efficiency, such a tactic will not result in less efficiency than would simple random sampling, provided that each stratum is proportional to the group's size in the population.
Third, it is sometimes the case that data are more readily available for individual, pre-existing strata within a population than for the overall population; in such cases, using a stratified sampling approach may be more convenient than aggregating data across groups (though this may potentially be at odds with the previously noted importance of utilizing criterion-relevant strata).
Finally, since each stratum is treated as an independent population, different sampling approaches can be applied to different strata, potentially enabling researchers to use the approach best suited (or most cost-effective) for each identified subgroup within the population.
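A minimal sketch of proportional-allocation stratified sampling, one common way to realize the benefits above (the function name and stratum sizes are hypothetical; note that rounding can make the realized total differ slightly from the requested size):

```python
import random

def stratified_sample(strata, total_n, rng=random):
    """Proportional allocation: draw from each stratum a simple random
    sample whose size is proportional to the stratum's share of the population."""
    N = sum(len(s) for s in strata)
    return [rng.sample(s, round(total_n * len(s) / N)) for s in strata]

random.seed(0)
# Three strata of sizes 50, 30, and 20 (shares 50%, 30%, 20%).
strata = [list(range(50)), list(range(50, 80)), list(range(80, 100))]
sample = stratified_sample(strata, 10)
print([len(s) for s in sample])   # proportional allocation: 5, 3, 2
```

Because each stratum is sampled independently, a different design (or sampling fraction) could be substituted per stratum without changing the others.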
There are, however, some potential drawbacks to using stratified sampling. First, identifying strata and implementing such an approach can increase the cost and complexity of sample selection, as well as leading to increased complexity of population estimates. Second, when examining multiple criteria, stratifying variables may be related to some, but not to others, further complicating the design, and potentially reducing the utility of the strata. Finally, in some cases (such as designs with a large number of strata, or those with a specified minimum sample size per group), stratified sampling can potentially require a larger sample than would other methods (although in most cases, the required sample size would be no larger than would be required for simple random sampling).
Stratification is sometimes introduced after the sampling phase in a process called "poststratification".[8]This approach is typically implemented due to a lack of prior knowledge of an appropriate stratifying variable or when the experimenter lacks the necessary information to create a stratifying variable during the sampling phase. Although the method is susceptible to the pitfalls of post hoc approaches, it can provide several benefits in the right situation. Implementation usually follows a simple random sample. In addition to allowing for stratification on an ancillary variable, poststratification can be used to implement weighting, which can improve the precision of a sample's estimates.[8]
Choice-based sampling or oversampling is one of the stratified sampling strategies. In choice-based sampling,[13] the data are stratified on the target and a sample is taken from each stratum so that rarer target classes will be more represented in the sample. The model is then built on this biased sample. The effects of the input variables on the target are often estimated with more precision with the choice-based sample even when a smaller overall sample size is taken, compared to a random sample. The results usually must be adjusted to correct for the oversampling.
In some cases the sample designer has access to an "auxiliary variable" or "size measure", believed to be correlated to the variable of interest, for each element in the population. These data can be used to improve accuracy in sample design. One option is to use the auxiliary variable as a basis for stratification, as discussed above.
Another option is probability proportional to size ('PPS') sampling, in which the selection probability for each element is set to be proportional to its size measure, up to a maximum of 1. In a simple PPS design, these selection probabilities can then be used as the basis for Poisson sampling. However, this has the drawback of variable sample size, and different portions of the population may still be over- or under-represented due to chance variation in selections.
Systematic sampling theory can be used to create a probability proportionate to size sample. This is done by treating each count within the size variable as a single sampling unit. Samples are then identified by selecting at even intervals among these counts within the size variable. This method is sometimes called PPS-sequential or monetary unit sampling in the case of audits or forensic sampling.
Example: Suppose we have six schools with populations of 150, 180, 200, 220, 260, and 490 students respectively (total 1500 students), and we want to use student population as the basis for a PPS sample of size three. To do this, we could allocate the first school numbers 1 to 150, the second school 151 to 330 (= 150 + 180), the third school 331 to 530, and so on to the last school (1011 to 1500). We then generate a random start between 1 and 500 (equal to 1500/3) and count through the school populations by multiples of 500. If our random start was 137, we would select the schools which have been allocated numbers 137, 637, and 1137, i.e. the first, fourth, and sixth schools.
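The school example can be reproduced with a short sketch of PPS-sequential selection (the function name is my own; school indices are 0-based, so the "first, fourth, and sixth" schools come out as 0, 3, and 5):

```python
from itertools import accumulate

def pps_systematic(sizes, n, start):
    """PPS-sequential selection: lay the size counts end to end, then take
    every (total // n)-th count beginning at `start` (1-based, as in the text)."""
    bounds = list(accumulate(sizes))   # cumulative upper bounds per unit
    interval = bounds[-1] // n         # 1500 // 3 = 500 in the example
    targets = [start + j * interval for j in range(n)]
    # Map each target count to the unit whose cumulative range contains it.
    return [next(i for i, b in enumerate(bounds) if t <= b) for t in targets]

schools = [150, 180, 200, 220, 260, 490]
print(pps_systematic(schools, 3, start=137))   # [0, 3, 5]: 1st, 4th, 6th schools
```

A school's chance of selection is proportional to how many of the 1500 counts it covers, so the largest school (490 students) is roughly three times as likely to be drawn as the smallest (150).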
The PPS approach can improve accuracy for a given sample size by concentrating sample on large elements that have the greatest impact on population estimates. PPS sampling is commonly used for surveys of businesses, where element size varies greatly and auxiliary information is often available – for instance, a survey attempting to measure the number of guest-nights spent in hotels might use each hotel's number of rooms as an auxiliary variable. In some cases, an older measurement of the variable of interest can be used as an auxiliary variable when attempting to produce more current estimates.[14]
Sometimes it is more cost-effective to select respondents in groups ('clusters'). Sampling is often clustered by geography, or by time periods. (Nearly all samples are in some sense 'clustered' in time – although this is rarely taken into account in the analysis.) For instance, if surveying households within a city, we might choose to select 100 city blocks and then interview every household within the selected blocks.
Clustering can reduce travel and administrative costs. In the example above, an interviewer can make a single trip to visit several households in one block, rather than having to drive to a different block for each household.
It also means that one does not need a sampling frame listing all elements in the target population. Instead, clusters can be chosen from a cluster-level frame, with an element-level frame created only for the selected clusters. In the example above, the sample only requires a block-level city map for initial selections, and then a household-level map of the 100 selected blocks, rather than a household-level map of the whole city.
Cluster sampling (also known as clustered sampling) generally increases the variability of sample estimates above that of simple random sampling, depending on how the clusters differ between one another as compared to the within-cluster variation. For this reason, cluster sampling requires a larger sample than SRS to achieve the same level of accuracy – but cost savings from clustering might still make this a cheaper option.
Cluster sampling is commonly implemented as multistage sampling. This is a complex form of cluster sampling in which two or more levels of units are embedded one in the other. The first stage consists of constructing the clusters that will be used to sample from. In the second stage, a sample of primary units is randomly selected from each cluster (rather than using all units contained in all selected clusters). In following stages, in each of those selected clusters, additional samples of units are selected, and so on. All ultimate units (individuals, for instance) selected at the last step of this procedure are then surveyed. This technique, thus, is essentially the process of taking random subsamples of preceding random samples.
Multistage sampling can substantially reduce sampling costs, where the complete population list would need to be constructed (before other sampling methods could be applied). By eliminating the work involved in describing clusters that are not selected, multistage sampling can reduce the large costs associated with traditional cluster sampling.[14]However, each sample may not be a full representative of the whole population.
In quota sampling, the population is first segmented into mutually exclusive sub-groups, just as in stratified sampling. Then judgement is used to select the subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the age of 45 and 60.
It is this second step which makes the technique one of non-probability sampling. In quota sampling the selection of the sample is non-random. For example, interviewers might be tempted to interview those who look most helpful. The problem is that these samples may be biased because not everyone gets a chance of selection. This random element is its greatest weakness and quota versus probability has been a matter of controversy for several years.
In imbalanced datasets, where the sampling ratio does not follow the population statistics, one can resample the dataset in a conservative manner called minimax sampling. Minimax sampling has its origin in the Anderson minimax ratio, whose value is proved to be 0.5: in a binary classification, the class-sample sizes should be chosen equally. This ratio can be proved to be the minimax ratio only under the assumption of an LDA classifier with Gaussian distributions. The notion of minimax sampling has recently been developed for a general class of classification rules, called class-wise smart classifiers. In this case, the sampling ratio of classes is selected so that the worst-case classifier error, over all possible population statistics for class prior probabilities, is minimized.[12]
Accidental sampling (sometimes known as grab, convenience or opportunity sampling) is a type of nonprobability sampling which involves the sample being drawn from that part of the population which is close to hand. That is, a population is selected because it is readily available and convenient. The sample may be gathered by meeting people in person, or by finding them through technological means such as the internet or the phone. The researcher using such a sample cannot scientifically make generalizations about the total population from this sample because it would not be representative enough. For example, if the interviewer were to conduct such a survey at a shopping center early in the morning on a given day, the people that they could interview would be limited to those present there at that given time, and would not represent the views of other members of society who could have been reached had the survey been conducted at different times of day and several times per week. This type of sampling is most useful for pilot testing. Several important considerations for researchers using convenience samples include:
In social science research, snowball sampling is a similar technique, where existing study subjects are used to recruit more subjects into the sample. Some variants of snowball sampling, such as respondent-driven sampling, allow calculation of selection probabilities and are probability sampling methods under certain conditions.
The voluntary sampling method is a type of non-probability sampling. Volunteers choose to complete a survey.
Volunteers may be invited through advertisements in social media.[15]The target population for advertisements can be selected by characteristics like location, age, sex, income, occupation, education, or interests using tools provided by the social medium. The advertisement may include a message about the research and link to a survey. After following the link and completing the survey, the volunteer submits the data to be included in the sample population. This method can reach a global population but is limited by the campaign budget. Volunteers outside the invited population may also be included in the sample.
It is difficult to make generalizations from this sample because it may not represent the total population. Often, volunteers have a strong interest in the main topic of the survey.
Line-intercept sampling is a method of sampling elements in a region whereby an element is sampled if a chosen line segment, called a "transect", intersects the element.
Panel sampling is the method of first selecting a group of participants through a random sampling method and then asking that group for (potentially the same) information several times over a period of time. Therefore, each participant is interviewed at two or more time points; each period of data collection is called a "wave". The method was developed by sociologist Paul Lazarsfeld in 1938 as a means of studying political campaigns.[16] This longitudinal sampling method allows estimates of changes in the population, for example with regard to chronic illness, job stress, or weekly food expenditures. Panel sampling can also be used to inform researchers about within-person health changes due to age or to help explain changes in continuous dependent variables such as spousal interaction.[17] There have been several proposed methods of analyzing panel data, including MANOVA, growth curves, and structural equation modeling with lagged effects.
Snowball sampling involves finding a small group of initial respondents and using them to recruit more respondents. It is particularly useful in cases where the population is hidden or difficult to enumerate.
Theoretical sampling[18] occurs when samples are selected on the basis of the results of the data collected so far, with the goal of developing a deeper understanding of the area or developing theories. An initial, general sample is collected to investigate broad trends; in further sampling, extreme or very specific cases may be selected in order to maximize the likelihood that a phenomenon will actually be observable.
In active sampling, the samples used for training a machine learning algorithm are actively selected; compare active learning (machine learning).
Judgement sampling, also known as expert or purposive sampling, is a type of non-random sampling in which samples are selected based on the opinion of an expert, who chooses participants according to how valuable the information they can provide is.
Haphazard sampling refers to the idea of using human judgement to simulate randomness. Despite samples being hand-picked, the goal is to ensure that no conscious bias exists in the choice of samples, but this often fails due to selection bias.[19] Haphazard sampling is generally opted for due to its convenience, when the tools or capacity to perform other sampling methods may not exist.
The major weakness of such samples is that they often do not represent the characteristics of the entire population, but just a segment of the population. Because of this unbalanced representation, results from haphazard sampling are often biased.[20]
Sampling schemes may be without replacement ('WOR' – no element can be selected more than once in the same sample) or with replacement ('WR' – an element may appear multiple times in one sample). For example, if we catch fish, measure them, and immediately return them to the water before continuing with the sample, this is a WR design, because we might end up catching and measuring the same fish more than once. However, if we do not return the fish to the water, or tag and release each fish after catching it, this becomes a WOR design.
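The WR/WOR distinction maps directly onto Python's standard-library samplers; a minimal sketch (the population of 100 "fish" is hypothetical):

```python
import random

random.seed(0)
population = list(range(1, 101))  # e.g. 100 fish we could catch

# With replacement (WR): the same element may be drawn more than once,
# like measuring a fish and returning it to the water.
wr_sample = random.choices(population, k=10)

# Without replacement (WOR): every element in the sample is distinct,
# like keeping (or tagging) each fish after it is caught.
wor_sample = random.sample(population, k=10)
```

`random.choices` can repeat elements, while `random.sample` guarantees all drawn elements are distinct.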
Formulas, tables, and power function charts are well-known approaches for determining sample size.
Steps for using sample size tables:
Good data collection involves:
Sampling enables the selection of right data points from within the larger data set to estimate the characteristics of the whole population. For example, there are about 600 million tweets produced every day. It is not necessary to look at all of them to determine the topics that are discussed during the day, nor is it necessary to look at all the tweets to determine the sentiment on each of the topics. A theoretical formulation for sampling Twitter data has been developed.[22]
In manufacturing, different types of sensory data such as acoustics, vibration, pressure, current, voltage, and controller data are available at short time intervals. To predict down-time it may not be necessary to look at all the data; a sample may be sufficient.
Survey results are typically subject to some error. Total errors can be classified into sampling errors and non-sampling errors. The term "error" here includes systematic biases as well as random errors.
Sampling errors and biases are induced by the sample design. They include:
Non-sampling errors are other errors which can impact final survey estimates, caused by problems in data collection, processing, or sample design. Such errors may include:
After sampling, a review is held of the exact process followed in sampling, rather than that intended, in order to study any effects that any divergences might have on subsequent analysis.
A particular problem involves non-response. Two major types of non-response exist:[23][24]
In survey sampling, many of the individuals identified as part of the sample may be unwilling to participate, may not have the time to participate (opportunity cost),[25] or survey administrators may not have been able to contact them. In this case, there is a risk of differences between respondents and nonrespondents, leading to biased estimates of population parameters. This is often addressed by improving survey design, offering incentives, and conducting follow-up studies which make a repeated attempt to contact the unresponsive and to characterize their similarities and differences with the rest of the frame.[26] The effects can also be mitigated by weighting the data (when population benchmarks are available) or by imputing data based on answers to other questions. Nonresponse is a particular problem in internet sampling. Reasons for this problem may include improperly designed surveys,[24] over-surveying (survey fatigue),[17][27] and the fact that potential participants may have multiple e-mail addresses which they no longer use or do not check regularly.
In many situations, the sample fraction may be varied by stratum and data will have to be weighted to correctly represent the population. Thus for example, a simple random sample of individuals in the United Kingdom might not include some in remote Scottish islands who would be inordinately expensive to sample. A cheaper method would be to use a stratified sample with urban and rural strata. The rural sample could be under-represented in the sample, but weighted up appropriately in the analysis to compensate.
More generally, data should usually be weighted if the sample design does not give each individual an equal chance of being selected. For instance, when households have equal selection probabilities but one person is interviewed from within each household, this gives people from large households a smaller chance of being interviewed. This can be accounted for using survey weights. Similarly, households with more than one telephone line have a greater chance of being selected in a random digit dialing sample, and weights can adjust for this.
Weights can also serve other purposes, such as helping to correct for non-response.
The textbook by Groves et alia provides an overview of survey methodology, including recent literature on questionnaire development (informed by cognitive psychology):
The other books focus on the statistical theory of survey sampling and require some knowledge of basic statistics, as discussed in the following textbooks:
The elementary book by Scheaffer et alia uses quadratic equations from high-school algebra:
More mathematical statistics is required for Lohr, for Särndal et alia, and for Cochran:[28]
The historically important books by Deming and Kish remain valuable for insights for social scientists (particularly about the U.S. census and the Institute for Social Research at the University of Michigan):
https://en.wikipedia.org/wiki/Sample_(statistics)
In statistics, stratified sampling is a method of sampling from a population which can be partitioned into subpopulations.
In statistical surveys, when subpopulations within an overall population vary, it can be advantageous to sample each subpopulation (stratum) independently.
Stratification is the process of dividing members of the population into homogeneous subgroups before sampling. The strata should define a partition of the population: that is, they should be collectively exhaustive and mutually exclusive, so that every element in the population is assigned to one and only one stratum. Sampling is then done within each stratum, for example by simple random sampling. The objective is to improve the precision of the sample by reducing sampling error. It can produce a weighted mean that has less variability than the arithmetic mean of a simple random sample of the population.
In computational statistics, stratified sampling is a method of variance reduction when Monte Carlo methods are used to estimate population statistics from a known population.[1]
Assume that we need to estimate the average number of votes for each candidate in an election. Assume that a country has 3 towns: Town A has 1 million factory workers, Town B has 2 million office workers and Town C has 3 million retirees. We can choose to get a random sample of size 60 over the entire population but there is some chance that the resulting random sample is poorly balanced across these towns and hence is biased, causing a significant error in estimation (when the outcome of interest has a different distribution, in terms of the parameter of interest, between the towns). Instead, if we choose to take a random sample of 10, 20 and 30 from Town A, B and C respectively, then we can produce a smaller error in estimation for the same total sample size. This method is generally used when a population is not a homogeneous group.
A real-world example of using stratified sampling would be for a political survey. If the respondents needed to reflect the diversity of the population, the researcher would specifically seek to include participants of various minority groups such as race or religion, based on their proportionality to the total population as mentioned above. A stratified survey could thus claim to be more representative of the population than a survey using simple random sampling or systematic sampling. Both mean and variance can be corrected for disproportionate sampling costs using stratified sample sizes.
The reasons to use stratified sampling rather than simple random sampling include:[2]
If the population density varies greatly within a region, stratified sampling will ensure that estimates can be made with equal accuracy in different parts of the region, and that comparisons of sub-regions can be made with equal statistical power. For example, in Ontario a survey taken throughout the province might use a larger sampling fraction in the less populated north, since the disparity in population between north and south is so great that a sampling fraction based on the provincial sample as a whole might result in the collection of only a handful of data from the north.
It would be a misapplication of the technique to make subgroups' sample sizes proportional to the amount of data available from the subgroups, rather than scaling sample sizes to subgroup sizes (or to their variances, if known to vary significantly—e.g. using an F test). Data representing each subgroup are taken to be of equal importance if suspected variation among them warrants stratified sampling. If subgroup variances differ significantly and the data needs to be stratified by variance, it is not possible to simultaneously make each subgroup sample size proportional to subgroup size within the total population. For an efficient way to partition sampling resources among groups that vary in their means, variance and costs, see "optimum allocation".
The problem of stratified sampling in the case of unknown class priors (ratio of subpopulations in the entire population) can have a deleterious effect on the performance of any analysis on the dataset, e.g. classification.[3] In that regard, the minimax sampling ratio can be used to make the dataset robust with respect to uncertainty in the underlying data generating process.[3]
Combining sub-strata to ensure adequate numbers can lead to Simpson's paradox, where trends that exist in different groups of data disappear or even reverse when the groups are combined.
The mean and variance of stratified random sampling are given by:[2]

x̄ = (1/N) ∑_{h=1}^{L} N_h x̄_h

Var(x̄) = ∑_{h=1}^{L} (N_h/N)² · (N_h − n_h)/(N_h − 1) · σ_h²/n_h

where L is the number of strata, N the size of the population, N_h the size of stratum h, n_h the sample size taken in stratum h, x̄_h the sample mean of stratum h, and σ_h² the variance of stratum h.
Note that the term (N_h − n_h)/(N_h − 1), which equals 1 − (n_h − 1)/(N_h − 1), is a finite population correction, and N_h must be expressed in "sample units". Forgoing the finite population correction gives:

Var(x̄) = ∑_{h=1}^{L} w_h² · σ_h²/n_h
where w_h = N_h/N is the population weight of stratum h.
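As a sketch of these estimators, the snippet below computes the stratified mean and its variance without the finite population correction; the two strata and their sizes are hypothetical:

```python
import statistics

# Hypothetical strata: (sampled values, stratum population size N_h)
strata = {
    "A": ([2.0, 3.0, 2.5, 3.5], 1000),
    "B": ([5.0, 6.0, 5.5], 500),
}
N = sum(N_h for _, N_h in strata.values())

# Stratified mean: stratum means weighted by population shares w_h = N_h / N
mean_st = sum(N_h / N * statistics.mean(x) for x, N_h in strata.values())

# Variance of the stratified mean, forgoing the finite population
# correction: sum over strata of w_h^2 * s_h^2 / n_h
var_st = sum((N_h / N) ** 2 * statistics.variance(x) / len(x)
             for x, N_h in strata.values())
```

Here `statistics.variance` is the sample variance s_h², standing in for the stratum variance σ_h².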
For the proportional allocation strategy, the size of the sample in each stratum is taken in proportion to the size of the stratum. Suppose that in a company there are the following staff:[4]
and we are asked to take a sample of 40 staff, stratified according to the above categories.
The first step is to calculate the percentage of each group of the total.
This tells us that of our sample of 40,
Another easy way without having to calculate the percentage is to multiply each group size by the sample size and divide by the total population size (size of entire staff):
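The "multiply each group size by the sample size and divide by the total population size" rule can be sketched in a couple of lines; the stratum sizes below are hypothetical, with a total sample of 40:

```python
# Proportional allocation: n_h = n * N_h / N for each stratum h.
# Hypothetical stratum sizes (the original staff table is not reproduced here).
strata_sizes = {"full-time": 120, "part-time": 80}
n = 40
N = sum(strata_sizes.values())

allocation = {h: round(n * size / N) for h, size in strata_sizes.items()}
```

With these numbers the allocation is 24 full-time and 16 part-time staff; with real data, rounding may require a final adjustment so the stratum sizes sum exactly to n.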
https://en.wikipedia.org/wiki/Stratum_(statistics)
Bootstrapping is a procedure for estimating the distribution of an estimator by resampling (often with replacement) one's data or a model estimated from the data.[1] Bootstrapping assigns measures of accuracy (bias, variance, confidence intervals, prediction error, etc.) to sample estimates.[2][3] This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods.[1]
Bootstrapping estimates the properties of an estimand (such as its variance) by measuring those properties when sampling from an approximating distribution. One standard choice for an approximating distribution is the empirical distribution function of the observed data. In the case where a set of observations can be assumed to be from an independent and identically distributed population, this can be implemented by constructing a number of resamples with replacement of the observed data set (and of equal size to the observed data set). A key result in Efron's seminal paper that introduced the bootstrap[4] is the favorable performance of bootstrap methods using sampling with replacement compared to prior methods like the jackknife that sample without replacement. However, since its introduction, numerous variants on the bootstrap have been proposed, including methods that sample without replacement or that create bootstrap samples larger or smaller than the original data.
The bootstrap may also be used for constructing hypothesis tests.[5] It is often used as an alternative to statistical inference based on the assumption of a parametric model when that assumption is in doubt, or where parametric inference is impossible or requires complicated formulas for the calculation of standard errors.
The bootstrap[a] was first described by Bradley Efron in "Bootstrap methods: another look at the jackknife" (1979),[4] inspired by earlier work on the jackknife.[6][7][8] Improved estimates of the variance were developed later.[9][10] A Bayesian extension was developed in 1981.[11] The bias-corrected and accelerated (BCa) bootstrap was developed by Efron in 1987,[12] and the approximate bootstrap confidence interval (ABC, or approximate BCa) procedure in 1992.[13]
The basic idea of bootstrapping is that inference about a population from sample data (sample → population) can be modeled by resampling the sample data and performing inference about a sample from resampled data (resampled → sample).[14] As the population is unknown, the true error in a sample statistic against its population value is unknown. In bootstrap resamples, the 'population' is in fact the sample, and this is known; hence the quality of inference of the 'true' sample from resampled data (resampled → sample) is measurable.
More formally, the bootstrap works by treating inference of the trueprobability distributionJ, given the original data, as being analogous to an inference of the empirical distributionĴ, given the resampled data. The accuracy of inferences regardingĴusing the resampled data can be assessed because we knowĴ. IfĴis a reasonable approximation toJ, then the quality of inference onJcan in turn be inferred.
As an example, assume we are interested in the average (or mean) height of people worldwide. We cannot measure all the people in the global population, so instead we sample only a tiny part of it, and measure that. Assume the sample is of size N; that is, we measure the heights of N individuals. From that single sample, only one estimate of the mean can be obtained. In order to reason about the population, we need some sense of the variability of the mean that we have computed. The simplest bootstrap method involves taking the original data set of heights, and, using a computer, sampling from it to form a new sample (called a 'resample' or bootstrap sample) that is also of size N. The bootstrap sample is taken from the original by using sampling with replacement (e.g. we might 'resample' 5 times from [1,2,3,4,5] and get [2,5,4,4,1]), so, assuming N is sufficiently large, for all practical purposes there is virtually zero probability that it will be identical to the original "real" sample. This process is repeated a large number of times (typically 1,000 or 10,000 times), and for each of these bootstrap samples, we compute its mean (each of these is called a "bootstrap estimate"). We can now create a histogram of bootstrap means. This histogram provides an estimate of the shape of the distribution of the sample mean, from which we can answer questions about how much the mean varies across samples. (The method here, described for the mean, can be applied to almost any other statistic or estimator.)
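The procedure just described fits in a few lines of Python; the eight heights, the seed, and the choice of 1,000 resamples are all hypothetical:

```python
import random
import statistics

random.seed(42)
heights = [172.0, 168.5, 181.2, 175.3, 169.9, 178.1, 165.4, 174.8]  # hypothetical sample
N = len(heights)
B = 1000  # number of bootstrap resamples

boot_means = []
for _ in range(B):
    resample = random.choices(heights, k=N)  # sample with replacement
    boot_means.append(statistics.mean(resample))

# The spread of the bootstrap means estimates the variability (standard
# error) of the sample mean; a histogram of boot_means shows its shape.
se_hat = statistics.stdev(boot_means)
```

The same loop works for any other statistic: replace `statistics.mean` with the estimator of interest.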
A great advantage of the bootstrap is its simplicity. It is a straightforward way to derive estimates of standard errors and confidence intervals for complex estimators of the distribution, such as percentile points, proportions, odds ratios, and correlation coefficients. Despite its simplicity, bootstrapping can be applied to complex sampling designs (e.g. for a population divided into s strata with n_s observations per stratum; one example is a dose-response experiment, where bootstrapping can be applied within each stratum).[15] The bootstrap is also an appropriate way to control and check the stability of the results. Although for most problems it is impossible to know the true confidence interval, the bootstrap is asymptotically more accurate than the standard intervals obtained using sample variance and assumptions of normality.[16] Bootstrapping is also a convenient method that avoids the cost of repeating the experiment to get other groups of sample data.
Bootstrapping depends heavily on the estimator used and, though simple, naive use of bootstrapping will not always yield asymptotically valid results and can lead to inconsistency.[17] Although bootstrapping is (under some conditions) asymptotically consistent, it does not provide general finite-sample guarantees. The result may depend on the representative sample. The apparent simplicity may conceal the fact that important assumptions are being made when undertaking the bootstrap analysis (e.g. independence of samples or a large enough sample size), where these would be more formally stated in other approaches. Also, bootstrapping can be time-consuming, and there are few software packages for bootstrapping as it is difficult to automate using traditional statistical computer packages.[15]
Scholars have recommended more bootstrap samples as available computing power has increased. If the results may have substantial real-world consequences, then one should use as many samples as is reasonable, given available computing power and time. Increasing the number of samples cannot increase the amount of information in the original data; it can only reduce the effects of random sampling errors which can arise from a bootstrap procedure itself. Moreover, there is evidence that numbers of samples greater than 100 lead to negligible improvements in the estimation of standard errors.[18] In fact, according to the original developer of the bootstrapping method, even setting the number of samples at 50 is likely to lead to fairly good standard error estimates.[19]
Adèr et al. recommend the bootstrap procedure for the following situations:[20]
However, Athreya has shown[21] that if one performs a naive bootstrap on the sample mean when the underlying population lacks a finite variance (for example, a power law distribution), then the bootstrap distribution will not converge to the same limit as the sample mean. As a result, confidence intervals on the basis of a Monte Carlo simulation of the bootstrap could be misleading. Athreya states that "Unless one is reasonably sure that the underlying distribution is not heavy tailed, one should hesitate to use the naive bootstrap".
In univariate problems, it is usually acceptable to resample the individual observations with replacement ("case resampling" below), unlike subsampling, in which resampling is without replacement and is valid under much weaker conditions than the bootstrap. In small samples, a parametric bootstrap approach might be preferred. For other problems, a smooth bootstrap will likely be preferred.
For regression problems, various other alternatives are available.[2]
The bootstrap is generally useful for estimating the distribution of a statistic (e.g. mean, variance) without using normality assumptions (as required, e.g., for a z-statistic or a t-statistic). In particular, the bootstrap is useful when there is no analytical form or asymptotic theory (e.g., an applicable central limit theorem) to help estimate the distribution of the statistic of interest. This is because bootstrap methods can apply to most random quantities, e.g., the ratio of variance and mean. There are at least two ways of performing case resampling.
Consider a coin-flipping experiment. We flip the coin and record whether it lands heads or tails. Let X = x1, x2, …, x10 be 10 observations from the experiment, with xi = 1 if the ith flip lands heads and 0 otherwise. By invoking the assumption that the average of the coin flips is normally distributed, we can use the t-statistic to estimate the distribution of the sample mean,
Such a normality assumption can be justified either as an approximation of the distribution of each individual coin flip or as an approximation of the distribution of the average of a large number of coin flips. The former is a poor approximation because the true distribution of the coin flips is Bernoulli instead of normal. The latter is a valid approximation in infinitely large samples due to the central limit theorem.
However, if we are not ready to make such a justification, then we can use the bootstrap instead. Using case resampling, we can derive the distribution of the sample mean x̄. We first resample the data to obtain a bootstrap resample. An example of the first resample might look like this: X1* = x2, x1, x10, x10, x3, x4, x6, x7, x1, x9. There are some duplicates, since a bootstrap resample comes from sampling with replacement from the data. Also, the number of data points in a bootstrap resample is equal to the number of data points in our original observations. Then we compute the mean of this resample and obtain the first bootstrap mean: μ1*. We repeat this process to obtain the second resample X2* and compute the second bootstrap mean μ2*. If we repeat this 100 times, then we have μ1*, μ2*, ..., μ100*. This represents an empirical bootstrap distribution of the sample mean. From this empirical distribution, one can derive a bootstrap confidence interval for the purpose of hypothesis testing.
In regression problems, case resampling refers to the simple scheme of resampling individual cases – often rows of a data set. For regression problems, as long as the data set is fairly large, this simple scheme is often acceptable. However, the method is open to criticism.[15]
In regression problems, the explanatory variables are often fixed, or at least observed with more control than the response variable. Also, the range of the explanatory variables defines the information available from them. Therefore, to resample cases means that each bootstrap sample will lose some information. As such, alternative bootstrap procedures should be considered.
Bootstrapping can be interpreted in a Bayesian framework using a scheme that creates new data sets through reweighting the initial data. Given a set of N data points, the weight assigned to data point i in a new data set D^J is w_i^J = x_i^J − x_{i−1}^J, where x^J is a low-to-high ordered list of N − 1 uniformly distributed random numbers on [0, 1], preceded by 0 and succeeded by 1. The distributions of a parameter inferred from considering many such data sets D^J are then interpretable as posterior distributions on that parameter.[23]
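The reweighting scheme can be sketched directly: draw N − 1 uniforms, sort them, pad with 0 and 1, and take successive gaps as one replicate's weights (the data values here are hypothetical):

```python
import random

random.seed(0)
data = [4.2, 5.1, 3.8, 6.0, 4.9]  # hypothetical observations
N = len(data)

# Draw N - 1 uniforms on [0, 1], sort them, pad with 0 and 1;
# successive gaps are one replicate's Bayesian bootstrap weights.
u = sorted(random.random() for _ in range(N - 1))
edges = [0.0] + u + [1.0]
weights = [edges[i] - edges[i - 1] for i in range(1, N + 1)]

# The statistic of interest under this replicate's weights,
# here a weighted mean.
weighted_mean = sum(w * x for w, x in zip(weights, data))
```

Repeating this many times yields a set of weighted statistics whose spread is read as a posterior distribution.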
Under this scheme, a small amount of (usually normally distributed) zero-centered random noise is added to each resampled observation. This is equivalent to sampling from a kernel density estimate of the data. Let K be a symmetric kernel density function with unit variance. The standard kernel estimator f̂_h(x) of f(x) is

f̂_h(x) = (1/(nh)) ∑_{i=1}^{n} K((x − X_i)/h),

where h is the smoothing parameter, and the corresponding distribution function estimator F̂_h(x) is

F̂_h(x) = ∫_{−∞}^{x} f̂_h(t) dt.
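With a Gaussian kernel, a smooth bootstrap replicate is simply a with-replacement resample plus kernel noise; a minimal sketch, with hypothetical data and a hand-picked bandwidth h:

```python
import random

random.seed(1)
data = [2.1, 3.4, 1.8, 2.9, 3.1, 2.5]  # hypothetical observations
h = 0.2                                 # bandwidth (smoothing parameter), chosen by hand

def smooth_resample(data, h):
    # Resample with replacement, then perturb each draw with Gaussian
    # kernel noise -- equivalent to drawing from a kernel density estimate.
    return [x + random.gauss(0.0, h) for x in random.choices(data, k=len(data))]

sample = smooth_resample(data, h)
```

In practice the bandwidth would be chosen by a rule of thumb or cross-validation rather than fixed by hand.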
Based on the assumption that the original data set is a realization of a random sample from a distribution of a specific parametric type, a parametric model is fitted by a parameter θ, often by maximum likelihood, and samples of random numbers are drawn from this fitted model. Usually the sample drawn has the same sample size as the original data. Then the estimate of the original function F can be written as F̂ = F_θ̂. This sampling process is repeated many times, as for other bootstrap methods. Considering the centered sample mean in this case, the original distribution function F_θ is replaced by a bootstrap random sample with function F_θ̂, and the probability distribution of X̄_n − μ_θ is approximated by that of X̄*_n − μ*, where μ* = μ_θ̂ is the expectation corresponding to F_θ̂.[25] The use of a parametric model at the sampling stage of the bootstrap methodology leads to procedures which are different from those obtained by applying basic statistical theory to inference for the same model.
Another approach to bootstrapping in regression problems is to resample residuals. The method proceeds as follows.

1. Fit the model and retain the fitted values ŷ_i and the residuals ε̂_i = y_i − ŷ_i.
2. For each pair (x_i, y_i), in which x_i is the (possibly multivariate) explanatory variable, add a randomly resampled residual ε̂_j to the fitted value ŷ_i, obtaining a synthetic response y*_i = ŷ_i + ε̂_j.
3. Refit the model using the synthetic responses y*_i and retain the quantities of interest (often the estimated parameters).
4. Repeat steps 2 and 3 a large number of times.
This scheme has the advantage that it retains the information in the explanatory variables. However, a question arises as to which residuals to resample. Raw residuals are one option; another is studentized residuals (in linear regression). Although there are arguments in favor of using studentized residuals, in practice it often makes little difference, and it is easy to compare the results of both schemes.
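A minimal sketch of residual resampling for a simple linear fit, using raw residuals; the data, the seed, and the 500 replicates are hypothetical:

```python
import random
import statistics

random.seed(3)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]  # hypothetical explanatory variable
ys = [2.1, 4.3, 5.8, 8.2, 9.9]  # hypothetical response

def fit(xs, ys):
    # Ordinary least squares for y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

a, b = fit(xs, ys)
fitted = [a + b * x for x in xs]
resid = [y - f for y, f in zip(ys, fitted)]

# Attach resampled residuals to the fitted values and refit each time.
boot_slopes = []
for _ in range(500):
    y_star = [f + e for f, e in zip(fitted, random.choices(resid, k=len(resid)))]
    boot_slopes.append(fit(xs, y_star)[1])

se_slope = statistics.stdev(boot_slopes)
```

Note that the xs are held fixed throughout, which is exactly how the scheme retains the information in the explanatory variables.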
When data are temporally correlated, straightforward bootstrapping destroys the inherent correlations. This method uses Gaussian process regression (GPR) to fit a probabilistic model from which replicates may then be drawn. GPR is a Bayesian non-linear regression method. A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian (normal) distribution. A GP is defined by a mean function and a covariance function, which specify the mean vectors and covariance matrices for each finite collection of the random variables.[26]
Regression model:
Gaussian process prior:
For any finite collection of variables x1, ..., xn, the function outputs f(x1), …, f(xn) are jointly distributed according to a multivariate Gaussian with mean m = [m(x1), …, m(xn)]ᵀ and covariance matrix (K)ij = k(xi, xj).
Assume f(x) ∼ GP(m, k). Then y(x) ∼ GP(m, l),

where l(xi, xj) = k(xi, xj) + σ²δ(xi, xj), and δ(xi, xj) is the standard Kronecker delta function.[26]
Gaussian process posterior:
According to the GP prior, we can get

[y(x1), …, y(xr)]ᵀ ∼ N(m0, K0),

where m0 = [m(x1), …, m(xr)]ᵀ and (K0)ij = k(xi, xj) + σ²δ(xi, xj).
Let x1*, ..., xs* be another finite collection of variables; it follows that
where m* = [m(x1*), …, m(xs*)]ᵀ, (K**)ij = k(xi*, xj*), and (K*)ij = k(xi, xj*).
According to the equations above, the outputs y are also jointly distributed according to a multivariate Gaussian. Thus, the posterior over the function values is

f(x1*), …, f(xs*) ∣ y ∼ N(m_post, K_post),
where y = [y1, ..., yr]ᵀ, m_post = m* + K*ᵀ(K0 + σ²Ir)⁻¹(y − m0), K_post = K** − K*ᵀ(K0 + σ²Ir)⁻¹K*, and Ir is the r × r identity matrix.[26]
The wild bootstrap, proposed originally by Wu (1986),[27] is suited to models that exhibit heteroskedasticity. The idea is, as with the residual bootstrap, to leave the regressors at their sample values, but to resample the response variable based on the residual values. That is, for each replicate, one computes a new y based on

yi* = ŷi + ε̂i vi,
so the residuals are randomly multiplied by a random variable vi with mean 0 and variance 1. For most distributions of vi (but not Mammen's), this method assumes that the 'true' residual distribution is symmetric and can offer advantages over simple residual sampling for smaller sample sizes. Different forms are used for the random variable vi, such as
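One common choice of vi is the Rademacher distribution (+1 or −1 with equal probability, giving mean 0 and variance 1). A sketch of one wild bootstrap replicate; the fitted values and residuals are hypothetical:

```python
import random

random.seed(7)
fitted = [2.0, 3.5, 5.0, 6.5]   # hypothetical fitted values from a regression
resid = [0.3, -0.6, 0.1, 0.2]   # corresponding residuals

def wild_replicate(fitted, resid):
    # Rademacher weights: v_i = +1 or -1 with probability 1/2 each,
    # so each residual keeps its magnitude but gets a random sign.
    return [f + r * random.choice([-1.0, 1.0]) for f, r in zip(fitted, resid)]

y_star = wild_replicate(fitted, resid)
```

Because each residual stays attached to its own observation, heteroskedastic residual variances are preserved across replicates.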
The block bootstrap is used when the data, or the errors in a model, are correlated. In this case, a simple case or residual resampling will fail, as it is not able to replicate the correlation in the data. The block bootstrap tries to replicate the correlation by resampling inside blocks of data (see Blocking (statistics)). The block bootstrap has been used mainly with data correlated in time (i.e. time series) but can also be used with data correlated in space, or among groups (so-called cluster data).
In the (simple) block bootstrap, the variable of interest is split into non-overlapping blocks.
In the moving block bootstrap, introduced by Künsch (1989),[29] the data is split into n − b + 1 overlapping blocks of length b: observations 1 to b form block 1, observations 2 to b + 1 form block 2, and so on. Then from these n − b + 1 blocks, n/b blocks are drawn at random with replacement. Aligning these n/b blocks in the order they were picked gives the bootstrap observations.
This bootstrap works with dependent data; however, the bootstrapped observations will no longer be stationary by construction. It was shown that varying the block length randomly can avoid this problem;[30] this method is known as the stationary bootstrap. Other related modifications of the moving block bootstrap are the Markovian bootstrap and a stationary bootstrap method that matches subsequent blocks based on standard deviation matching.
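The moving block construction can be sketched as follows; the 12-point series and block length b = 3 are hypothetical:

```python
import random

random.seed(5)
series = list(range(1, 13))  # 12 hypothetical time-series observations
b = 3                        # block length

# n - b + 1 overlapping blocks: [1,2,3], [2,3,4], ..., [10,11,12]
blocks = [series[i:i + b] for i in range(len(series) - b + 1)]

# Draw n/b blocks with replacement and concatenate them in the order drawn.
boot_series = []
for _ in range(len(series) // b):
    boot_series.extend(random.choice(blocks))
```

Within each block the original temporal ordering, and hence the short-range correlation, is preserved; only the joins between blocks break it.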
Vinod (2006)[31] presents a method that bootstraps time series data using maximum entropy principles satisfying the ergodic theorem with mean-preserving and mass-preserving constraints. There is an R package, meboot,[32] that utilizes the method, which has applications in econometrics and computer science.
Cluster data describes data where many observations per unit are observed. This could be observing many firms in many states or observing students in many classes. In such cases, the correlation structure is simplified, and one usually makes the assumption that data is correlated within a group/cluster but independent between groups/clusters. The structure of the block bootstrap is easily obtained (the block just corresponds to the group), and usually only the groups are resampled, while the observations within the groups are left unchanged. Cameron et al. (2008) discuss this for clustered errors in linear regression.[33]
The bootstrap is a powerful technique, although it may require substantial computing resources in both time and memory. Some techniques have been developed to reduce this burden. They can generally be combined with many of the different types of bootstrap schemes and various choices of statistic.
Most bootstrap methods are embarrassingly parallel algorithms: the statistic of interest for each bootstrap sample does not depend on other bootstrap samples. Such computations can therefore be performed on separate CPUs or compute nodes, with the results from the separate nodes eventually aggregated for final analysis.
The nonparametric bootstrap samples items from a list of size n with counts drawn from a multinomial distribution. If W_i denotes the number of times element i is included in a given bootstrap sample, then each W_i follows a binomial distribution with n trials and mean 1, but W_i is not independent of W_j for i ≠ j.
The Poisson bootstrap instead draws samples assuming all W_i are independently and identically distributed as Poisson variables with mean 1.
The rationale is that the limit of the binomial distribution is Poisson: Binomial(n, 1/n) → Poisson(1) as n → ∞.
The Poisson bootstrap had been proposed by Hanley and MacGibbon as potentially useful for non-statisticians using software like SAS and SPSS, which lacked the bootstrap packages of the R and S-Plus programming languages.[34] The same authors report that for large enough n, the results are relatively similar to the nonparametric bootstrap estimates, but go on to note that the Poisson bootstrap has seen minimal use in applications.
Another proposed advantage of the Poisson bootstrap is that the independence of the W_i makes the method easier to apply for large datasets that must be processed as streams.[35]
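A minimal sketch of the Poisson bootstrap for the sample mean, assuming numpy; the data and replication counts are invented for illustration:

```python
import numpy as np

def poisson_bootstrap_mean(x, n_boot, rng=None):
    """Bootstrap the sample mean with i.i.d. Poisson(1) weights W_i in
    place of multinomial counts. Each W_i can be drawn independently per
    element, which is what makes the scheme stream-friendly."""
    rng = np.random.default_rng(rng)
    n = len(x)
    means = np.empty(n_boot)
    for b in range(n_boot):
        w = rng.poisson(1.0, size=n)               # W_i ~ Poisson(1), independent
        means[b] = np.dot(w, x) / max(w.sum(), 1)  # weighted resample mean
    return means

rng = np.random.default_rng(42)
x = rng.normal(10.0, 2.0, size=500)
boot = poisson_bootstrap_mean(x, n_boot=2000, rng=7)
se = boot.std()   # close to the classical s / sqrt(n)
```

In a streaming setting the inner loop would instead draw one weight per incoming element and update n_boot running sums, never holding the full data set in memory.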
A way to improve on the Poisson bootstrap, termed the "sequential bootstrap", is to take the first samples so that the proportion of unique values is ≈ 0.632 of the original sample size n. This provides a distribution with main empirical characteristics being within a distance of O(n^(3/4)).[36] Empirical investigation has shown this method can yield good results.[37] It is related to the reduced bootstrap method.[38]
For massive data sets, it is often computationally prohibitive to hold all the sample data in memory and resample from the sample data. The Bag of Little Bootstraps (BLB)[39] provides a method of pre-aggregating data before bootstrapping to reduce computational constraints. It works by partitioning the data set into b equal-sized buckets and aggregating the data within each bucket. This pre-aggregated data set becomes the new sample data over which to draw samples with replacement. This method is similar to the block bootstrap, but the motivations and definitions of the blocks are very different. Under certain assumptions, the sample distribution should approximate the full bootstrapped scenario. One constraint is the number of buckets b = n^γ, where γ ∈ [0.5, 1]; the authors recommend b = n^0.7 as a general choice.
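The idea can be sketched for the standard error of the mean, assuming numpy; this is a simplified illustration, not the authors' reference implementation, and the data are synthetic:

```python
import numpy as np

def blb_stderr_of_mean(x, gamma=0.7, n_subsets=10, n_boot=50, rng=None):
    """Bag of Little Bootstraps sketch: estimate the standard error of the
    mean. Each 'little bootstrap' represents a resample of size n as
    multinomial counts over a subset of only b = n**gamma points, so a
    full-size resample never has to be materialized."""
    rng = np.random.default_rng(rng)
    n = len(x)
    b = int(n ** gamma)
    ses = []
    for _ in range(n_subsets):
        subset = rng.choice(x, size=b, replace=False)
        stats = []
        for _ in range(n_boot):
            counts = rng.multinomial(n, np.full(b, 1.0 / b))  # n draws over b points
            stats.append(np.dot(counts, subset) / n)          # resampled mean
        ses.append(np.std(stats))
    return float(np.mean(ses))   # average the per-subset estimates

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10_000)
se = blb_stderr_of_mean(x, rng=1)   # theory predicts about 1/sqrt(10_000) = 0.01
```

Only b ≈ n^0.7 distinct points are touched per subset, which is the computational saving the method is built around.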
The bootstrap distribution of a point estimator of a population parameter has been used to produce a bootstrapped confidence interval for the parameter's true value if the parameter can be written as a function of the population's distribution.
Population parameters are estimated with many point estimators. Popular families of point estimators include mean-unbiased minimum-variance estimators, median-unbiased estimators, Bayesian estimators (for example, the posterior distribution's mode, median, or mean), and maximum-likelihood estimators.
A Bayesian point estimator and a maximum-likelihood estimator have good performance when the sample size is infinite, according to asymptotic theory. For practical problems with finite samples, other estimators may be preferable. Asymptotic theory suggests techniques that often improve the performance of bootstrapped estimators; the bootstrapping of a maximum-likelihood estimator may often be improved using transformations related to pivotal quantities.[40]
The bootstrap distribution of a parameter estimator is often used to calculate confidence intervals for its population parameter.[2] A variety of methods for constructing the confidence intervals have been proposed, although there is disagreement about which method is best.
The survey of bootstrap confidence interval methods by DiCiccio and Efron, and the subsequent discussion, list several desired properties of confidence intervals, which generally are not all simultaneously met.
There are several methods for constructing confidence intervals from the bootstrap distribution of a real parameter:
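One of the simplest such constructions, the percentile interval, takes the α/2 and 1 − α/2 quantiles of the bootstrap distribution as the interval endpoints. A sketch assuming numpy, with invented skewed data:

```python
import numpy as np

def percentile_ci(x, stat, alpha=0.05, n_boot=5000, rng=None):
    """Percentile bootstrap confidence interval: resample with replacement,
    evaluate the statistic on each resample, and return the alpha/2 and
    1 - alpha/2 quantiles of the bootstrap distribution."""
    rng = np.random.default_rng(rng)
    n = len(x)
    boot = np.array([stat(rng.choice(x, size=n, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(3)
x = rng.exponential(2.0, size=200)       # skewed data; true median is 2 ln 2
lo, hi = percentile_ci(x, np.median, rng=4)
```

Other constructions (basic, studentized, BCa) differ in how they map the bootstrap quantiles back to the parameter scale, but all start from the same resampled distribution.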
Efron and Tibshirani[2] suggest the following algorithm for comparing the means of two independent samples:
Let x_1, …, x_n be a random sample from distribution F with sample mean x̄ and sample variance σ_x². Let y_1, …, y_m be another, independent random sample from distribution G with mean ȳ and variance σ_y².
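A sketch of this shifted-sample comparison, assuming numpy; the samples are synthetic and the resampling count is illustrative:

```python
import numpy as np

def bootstrap_two_sample_test(x, y, n_boot=5000, rng=None):
    """Bootstrap test for equality of means: recenter both samples on the
    pooled mean so the null hypothesis holds, resample each sample with
    replacement, and count how often the resampled t-statistic is at least
    as extreme as the observed one."""
    rng = np.random.default_rng(rng)
    n, m = len(x), len(y)
    t = lambda a, b: (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    t_obs = t(x, y)
    pooled = np.concatenate([x, y]).mean()
    xs = x - x.mean() + pooled          # both shifted samples now share
    ys = y - y.mean() + pooled          # a common mean (null is true)
    count = 0
    for _ in range(n_boot):
        xb = rng.choice(xs, size=n, replace=True)
        yb = rng.choice(ys, size=m, replace=True)
        count += abs(t(xb, yb)) >= abs(t_obs)
    return count / n_boot               # achieved significance level

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, size=60)
y = rng.normal(1.0, 1.0, size=60)       # means differ by one s.d.
p = bootstrap_two_sample_test(x, y, rng=6)
```

With a one-standard-deviation difference and 60 observations per group, the returned significance level is essentially zero.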
In 1878, Simon Newcomb took observations on the speed of light.[46] The data set contains two outliers, which greatly influence the sample mean. (The sample mean need not be a consistent estimator for any population mean, because no mean need exist for a heavy-tailed distribution.) A well-defined and robust statistic for the central tendency is the sample median, which is consistent and median-unbiased for the population median.
The bootstrap distribution for Newcomb's data appears below. We can reduce the discreteness of the bootstrap distribution by adding a small amount of random noise to each bootstrap sample. A conventional choice is to add noise with a standard deviation of σ/√n for a sample size n; this noise is often drawn from a Student-t distribution with n − 1 degrees of freedom.[47] This results in an approximately unbiased estimator for the variance of the sample mean.[48] This means that samples taken from the bootstrap distribution will have a variance which is, on average, equal to the variance of the total population.
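The smoothing step can be sketched as follows, assuming numpy; the data here are synthetic stand-ins (66 observations, as in Newcomb's series), not the historical measurements:

```python
import numpy as np

def smoothed_bootstrap_median(x, n_boot=4000, rng=None):
    """Smoothed bootstrap of the median: after each resample, add noise
    with standard deviation s/sqrt(n), drawn from a Student-t distribution
    with n - 1 degrees of freedom, to break up the discreteness of the
    plain bootstrap distribution."""
    rng = np.random.default_rng(rng)
    n = len(x)
    h = x.std(ddof=1) / np.sqrt(n)      # noise scale sigma / sqrt(n)
    meds = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=n, replace=True)
        meds[b] = np.median(xb + rng.standard_t(n - 1, size=n) * h)
    return meds

rng = np.random.default_rng(8)
x = rng.normal(27.0, 5.0, size=66)      # synthetic sample of size 66
meds = smoothed_bootstrap_median(x, rng=9)
```

Unlike the plain bootstrap of the median, which concentrates on a handful of order statistics, the smoothed version produces an essentially continuous distribution.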
Histograms of the bootstrap distribution and the smooth bootstrap distribution appear below. The bootstrap distribution of the sample median has only a small number of values. The smoothed bootstrap distribution has a richer support. However, whether the smoothed or standard bootstrap procedure is favorable is case-by-case and has been shown to depend on both the underlying distribution function and on the quantity being estimated.[49]
In this example, the bootstrapped 95% (percentile) confidence interval for the population median is (26, 28.5), which is close to the interval (25.98, 28.46) from the smoothed bootstrap.
The bootstrap is distinguished from:
Bootstrap aggregating (bagging) is a meta-algorithm based on averaging model predictions obtained from models trained on multiple bootstrap samples.
In situations where an obvious statistic can be devised to measure a required characteristic using only a small number, r, of data items, a corresponding statistic based on the entire sample can be formulated. Given an r-sample statistic, one can create an n-sample statistic by something similar to bootstrapping (taking the average of the statistic over all subsamples of size r). This procedure is known to have certain good properties and the result is a U-statistic. The sample mean and sample variance are of this form, for r = 1 and r = 2.
The bootstrap has, under certain conditions, desirable asymptotic properties. The asymptotic properties most often described are weak convergence / consistency of the sample paths of the bootstrap empirical process and the validity of confidence intervals derived from the bootstrap. This section describes the convergence of the empirical bootstrap.
This paragraph summarizes more complete descriptions of stochastic convergence in van der Vaart and Wellner[50] and Kosorok.[51] The bootstrap defines a stochastic process, a collection of random variables indexed by some set T, where T is typically the real line (ℝ) or a family of functions. Processes of interest are those with bounded sample paths, i.e., sample paths in L-infinity (ℓ^∞(T)), the set of all uniformly bounded functions from T to ℝ. When equipped with the uniform distance, ℓ^∞(T) is a metric space, and when T = ℝ, two subspaces of ℓ^∞(T) are of particular interest: C[0,1], the space of all continuous functions from T to the unit interval [0,1], and D[0,1], the space of all cadlag functions from T to [0,1]. This is because C[0,1] contains the distribution functions for all continuous random variables, and D[0,1] contains the distribution functions for all random variables. Statements about the consistency of the bootstrap are statements about the convergence of the sample paths of the bootstrap process as random elements of the metric space ℓ^∞(T) or some subspace thereof, especially C[0,1] or D[0,1].
Horowitz, in a recent review,[1] defines consistency as follows: the bootstrap estimator G_n(·, F_n) is consistent [for a statistic T_n] if, for each F_0, sup_τ |G_n(τ, F_n) − G_∞(τ, F_0)| converges in probability to 0 as n → ∞, where F_n is the distribution of the statistic of interest in the original sample, F_0 is the true but unknown distribution of the statistic, G_∞(τ, F_0) is the asymptotic distribution function of T_n, and τ is the indexing variable in the distribution function, i.e., P(T_n ≤ τ) = G_n(τ, F_0). This is sometimes more specifically called consistency relative to the Kolmogorov–Smirnov distance.[52]
Horowitz goes on to recommend using a theorem from Mammen[53] that provides easier-to-check necessary and sufficient conditions for consistency for statistics of a certain common form. In particular, let {X_i : i = 1, …, n} be the random sample. If T_n = (Σ_{i=1}^n g_n(X_i) − t_n) / σ_n for sequences of numbers t_n and σ_n, then the bootstrap estimate of the distribution function of T_n is consistent if and only if T_n converges in distribution to the standard normal distribution.
Convergence in (outer) probability as described above is also called weak consistency. It can also be shown, with slightly stronger assumptions, that the bootstrap is strongly consistent, where convergence in (outer) probability is replaced by (outer) almost sure convergence. When only one type of consistency is described, it is typically weak consistency. This is adequate for most statistical applications since it implies that confidence bands derived from the bootstrap are asymptotically valid.[51]
In simpler cases, it is possible to use the central limit theorem directly to show the consistency of the bootstrap procedure for estimating the distribution of the sample mean.
Specifically, consider X_{n1}, …, X_{nn}, independent identically distributed random variables with E[X_{n1}] = μ and Var[X_{n1}] = σ² < ∞ for each n ≥ 1. Let X̄_n = n⁻¹(X_{n1} + ⋯ + X_{nn}). In addition, for each n ≥ 1, conditional on X_{n1}, …, X_{nn}, let X*_{n1}, …, X*_{nn} be independent random variables with distribution equal to the empirical distribution of X_{n1}, …, X_{nn}. This is the sequence of bootstrap samples.
Then it can be shown that

sup_{τ ∈ ℝ} | P*( √n (X̄*_n − X̄_n) / σ̂_n ≤ τ ) − P( √n (X̄_n − μ) / σ ≤ τ ) | → 0 in probability as n → ∞,

where P* represents probability conditional on X_{n1}, …, X_{nn}, n ≥ 1, X̄*_n = n⁻¹(X*_{n1} + ⋯ + X*_{nn}), and σ̂_n² = n⁻¹ Σ_{i=1}^n (X_{ni} − X̄_n)².
To see this, note that (X*_{ni} − X̄_n)/(√n σ̂_n) satisfies the Lindeberg condition, so the CLT holds.[54]
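This convergence can be illustrated numerically; the following Monte Carlo sketch (assuming numpy, with a deliberately skewed exponential F_0 and illustrative sample sizes) estimates the Kolmogorov distance between the bootstrap distribution of the standardized mean and the standard normal CDF:

```python
import numpy as np
from math import erf, sqrt

def ks_distance_to_normal(n, n_boot=3000, rng=None):
    """Estimate the Kolmogorov distance between the bootstrap distribution
    of sqrt(n)(mean* - mean)/sigma_hat and the standard normal CDF; weak
    consistency predicts this distance shrinks as n grows."""
    rng = np.random.default_rng(rng)
    x = rng.exponential(1.0, size=n)          # skewed underlying distribution
    s = x.std()                               # sigma_hat from the original sample
    t = np.sort([np.sqrt(n) * (rng.choice(x, n, replace=True).mean() - x.mean()) / s
                 for _ in range(n_boot)])
    phi = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in t])  # N(0,1) CDF
    emp = np.arange(1, n_boot + 1) / n_boot   # bootstrap empirical CDF at t
    return float(np.max(np.abs(emp - phi)))

d_small = ks_distance_to_normal(10, rng=0)    # small n: visible skewness
d_large = ks_distance_to_normal(2000, rng=0)  # large n: close to normal
```

The distance at n = 2000 is dominated by Monte Carlo noise, while at n = 10 the skewness of the exponential is still visible in the bootstrap distribution.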
The Glivenko–Cantelli theorem provides theoretical background for the bootstrap method.
Finite populations and drawing without replacement require adaptations of the bootstrap due to the violation of the i.i.d. assumption. One example is the "population bootstrap".[55]
https://en.wikipedia.org/wiki/Bootstrap_world
In statistics, and in particular in regression analysis, a design matrix, also known as a model matrix or regressor matrix and often denoted by X, is a matrix of values of explanatory variables of a set of objects. Each row represents an individual object, with the successive columns corresponding to the variables and their specific values for that object. The design matrix is used in certain statistical models, e.g., the general linear model.[1][2][3] It can contain indicator variables (ones and zeros) that indicate group membership in an ANOVA, or it can contain values of continuous variables.
The design matrix contains data on the independent variables (also called explanatory variables) in a statistical model that is intended to explain observed data on a response variable (often called a dependent variable). The theory relating to such models uses the design matrix as input to some linear algebra: see for example linear regression. A notable feature of the concept of a design matrix is that it is able to represent a number of different experimental designs and statistical models, e.g., ANOVA, ANCOVA, and linear regression.[citation needed]
The design matrix is defined to be a matrix X such that X_ij (the entry in the i-th row and j-th column of X) represents the value of the j-th variable associated with the i-th object.
A regression model may be represented via matrix multiplication as

y = Xβ + e,

where X is the design matrix, β is a vector of the model's coefficients (one for each variable), e is a vector of random errors with mean zero, and y is the vector of observed outputs for each object.
The design matrix has dimension n-by-p, where n is the number of samples observed, and p is the number of variables (features) measured in all samples.[4][5]
In this representation different rows typically represent different repetitions of an experiment, while columns represent different types of data (say, the results from particular probes). For example, suppose an experiment is run where 10 people are pulled off the street and asked 4 questions. The data matrix M would be a 10×4 matrix (meaning 10 rows and 4 columns). The datum in row i and column j of this matrix would be the answer of the i-th person to the j-th question.
The design matrix for an arithmetic mean is a column vector of ones.
This section gives an example of simple linear regression (that is, regression with only a single explanatory variable) with seven observations.
The seven data points are {y_i, x_i}, for i = 1, 2, …, 7. The simple linear regression model is

y_i = β_0 + β_1 x_i + ε_i,

where β_0 is the y-intercept and β_1 is the slope of the regression line. This model can be represented in matrix form as y = Xβ + ε, with y = (y_1, …, y_7)ᵀ, X the 7×2 matrix whose i-th row is (1, x_i), β = (β_0, β_1)ᵀ, and ε = (ε_1, …, ε_7)ᵀ,
where the first column of 1s in the design matrix allows estimation of the y-intercept while the second column contains the x-values associated with the corresponding y-values. The matrix whose columns are 1's and x's in this example is the design matrix.
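Building the design matrix and fitting by least squares can be sketched as follows, assuming numpy; the seven data values are invented for illustration:

```python
import numpy as np

# Seven observations; the y-values roughly follow y = 2x (invented data).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9])

# Design matrix: a column of ones (for the intercept) next to the x-values.
X = np.column_stack([np.ones_like(x), x])          # 7x2

# Least-squares solution of y = X beta: beta = [intercept, slope].
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The fitted slope comes out close to 2 and the intercept close to 0, as the synthetic data were constructed.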
This section contains an example of multiple regression with two covariates (explanatory variables): w and x.
Again suppose that the data consist of seven observations, and that for each observed value to be predicted (y_i), values w_i and x_i of the two covariates are also observed. The model to be considered is

y_i = β_0 + β_1 w_i + β_2 x_i + ε_i.
This model can be written in matrix terms as y = Xβ + ε, where the i-th row of X is (1, w_i, x_i).
Here the 7×3 matrix on the right side is the design matrix.
This section contains an example with a one-way analysis of variance (ANOVA) with three groups and seven observations. The given data set has the first three observations belonging to the first group, the following two observations belonging to the second group and the final two observations belonging to the third group.
If the model to be fit is just the mean of each group, then the model is

y_ij = μ_i + ε_ij,

which can be written in matrix form with one indicator column per group.
In this model μ_i represents the mean of the i-th group.
The ANOVA model could be equivalently written as each group parameter τ_i being an offset from some overall reference. Typically this reference point is taken to be one of the groups under consideration. This makes sense in the context of comparing multiple treatment groups to a control group, where the control group is considered the "reference". In this example, group 1 was chosen to be the reference group. As such the model to be fit is

y_ij = μ + τ_i + ε_ij,

with the constraint that τ_1 is zero.
In this model μ is the mean of the reference group and τ_i is the difference from group i to the reference group. τ_1 is not included in the matrix because its difference from the reference group (itself) is necessarily zero.
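The two codings can be constructed explicitly, assuming numpy; the group sizes match the example (3 + 2 + 2 observations) and the response values are invented:

```python
import numpy as np

# Seven observations in three groups: 3 in group 1, 2 in group 2, 2 in group 3.
groups = np.array([0, 0, 0, 1, 1, 2, 2])
y = np.array([5.0, 6.0, 7.0, 10.0, 12.0, 1.0, 3.0])   # invented responses

# Cell-means coding: one indicator column per group; coefficient i is mu_i.
X_means = (groups[:, None] == np.arange(3)).astype(float)

# Reference coding: intercept plus indicators for groups 2 and 3;
# coefficients are [mu, tau_2, tau_3] with group 1 as reference (tau_1 = 0).
X_ref = np.column_stack([np.ones(7), groups == 1, groups == 2]).astype(float)

mu = np.linalg.lstsq(X_means, y, rcond=None)[0]   # the three group means
ref = np.linalg.lstsq(X_ref, y, rcond=None)[0]    # [mu_1, mu_2 - mu_1, mu_3 - mu_1]
```

Both design matrices span the same column space, so the fitted values agree; only the interpretation of the coefficients differs.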
https://en.wikipedia.org/wiki/Design_matrix#Simple_linear_regression
Linear trend estimation is a statistical technique used to analyze data patterns. Data patterns, or trends, occur when the information gathered tends to increase or decrease over time or is influenced by changes in an external factor. Linear trend estimation essentially creates a straight line on a graph of data that models the general direction that the data is heading.
Given a set of data, there are a variety of functions that can be chosen to fit the data. The simplest function is a straight line with the dependent variable (typically the measured data) on the vertical axis and the independent variable (often time) on the horizontal axis.
The least-squares fit is a common method to fit a straight line through the data. This method minimizes the sum of the squared errors in the data series y. Given a set of points in time t and data values y_t observed for those points in time, values of â and b̂ are chosen to minimize the sum of squared errors

S = Σ_t (y_t − (â t + b̂))².
This formula first calculates the difference between the observed data y_t and the estimate â t + b̂; the difference at each data point is squared, and then all are added together, giving the "sum of squares" measurement of error. The values of â and b̂ derived from the data parameterize the simple linear estimator ŷ = â t + b̂. The term "trend" refers to the slope â in the least squares estimator.
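The least-squares trend fit can be sketched as follows, assuming numpy; the series is synthetic, with a known slope of 0.3 plus noise:

```python
import numpy as np

# Synthetic series: trend 0.3 per time step, intercept 2, unit-variance noise.
rng = np.random.default_rng(0)
t = np.arange(50.0)
y = 0.3 * t + 2.0 + rng.normal(0.0, 1.0, size=50)

# np.polyfit with degree 1 minimizes the same sum of squared errors
# described above; it returns the slope (the "trend") and the intercept.
a_hat, b_hat = np.polyfit(t, y, deg=1)
```

With 50 points and unit noise, the estimated slope lands close to the true value of 0.3.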
To analyze a (time) series of data, it can be assumed that it may be represented as trend plus noise:

y_t = a t + b + e_t,
where a and b are unknown constants and the e's are randomly distributed errors. If one can reject the null hypothesis that the errors are non-stationary, then the non-stationary series {y_t} is called trend-stationary. The least-squares method assumes the errors are independently distributed with a normal distribution. If this is not the case, hypothesis tests about the unknown parameters a and b may be inaccurate. It is simplest if the e's all have the same distribution, but if not (if some have higher variance, meaning that those data points are effectively less certain), then this can be taken into account during the least-squares fitting by weighting each point by the inverse of the variance of that point.
Commonly, where only a single time series exists to be analyzed, the variance of the e's is estimated by fitting a trend to obtain the estimated parameter values â and b̂, thus allowing the predicted values

ŷ = â t + b̂

to be subtracted from the data y_t (thus detrending the data), leaving the residuals ê_t as the detrended data, and estimating the variance of the e_t's from the residuals; this is often the only way of estimating the variance of the e_t's.
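Detrending and estimating the noise variance from the residuals can be sketched as follows, assuming numpy, with a synthetic series whose true noise variance is 4:

```python
import numpy as np

# Synthetic series: slope 0.5 plus Gaussian noise with standard deviation 2.
rng = np.random.default_rng(1)
t = np.arange(100.0)
y = 0.5 * t + rng.normal(0.0, 2.0, size=100)

# Fit the trend, subtract the fitted values, and estimate the noise
# variance from the residuals (ddof=2 because two parameters were fitted).
a_hat, b_hat = np.polyfit(t, y, deg=1)
residuals = y - (a_hat * t + b_hat)     # the detrended data
noise_var = residuals.var(ddof=2)
```

Because the fit includes an intercept, the residuals sum to zero by construction, and the variance estimate recovers the true noise variance up to sampling error.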
Once the "noise" of the series is known, the significance of the trend can be assessed by making the null hypothesis that the trend, a, is not different from 0. From the above discussion of trends in random data with known variance, the distribution of trends to be expected from random (trendless) data is known. If the estimated trend, â, is larger than the critical value for a certain significance level, then the estimated trend is deemed significantly different from zero at that significance level, and the null hypothesis of a zero underlying trend is rejected.
The use of a linear trend line has been the subject of criticism, leading to a search for alternative approaches to avoid its use in model estimation. One of the alternative approaches involvesunit roottests and thecointegrationtechnique in econometric studies.
The estimated coefficient associated with a linear trend variable such as time is interpreted as a measure of the impact of a number of unknown or known but immeasurable factors on the dependent variable over one unit of time. Strictly speaking, this interpretation is applicable for the estimation time frame only. Outside of this time frame, it cannot be determined how these immeasurable factors behave both qualitatively and quantitatively.
Research results by mathematicians, statisticians, econometricians, and economists have been published in response to those questions. For example, detailed notes on the meaning of linear time trends in the regression model are given in Cameron (2005);[1] Granger, Engle, and many other econometricians have written on stationarity, unit root testing, co-integration, and related issues (a summary of some of the works in this area can be found in an information paper[2] by the Royal Swedish Academy of Sciences (2003)); and Ho-Trieu & Tucker (1990) have written on logarithmic time trends, with results indicating linear time trends are special cases of cycles.
It is harder to see a trend in a noisy time series. For example, if the true series is 0, 1, 2, 3, all plus some independent normally distributed "noise" e of standard deviation E, and a sample series of length 50 is given, then if E = 0.1 the trend will be obvious; if E = 100 the trend will probably be visible; but if E = 10000 the trend will be buried in the noise.
Consider a concrete example, such as the global surface temperature record of the past 140 years as presented by the IPCC.[3] The interannual variation is about 0.2 °C, and the trend is about 0.6 °C over 140 years, with 95% confidence limits of 0.2 °C (by coincidence, about the same value as the interannual variation). Hence, the trend is statistically different from 0. However, as noted elsewhere,[4] this time series doesn't conform to the assumptions necessary for least squares to be valid.
The least-squares fitting process produces a value, r-squared (r²), which is 1 minus the ratio of the variance of the residuals to the variance of the dependent variable. It says what fraction of the variance of the data is explained by the fitted trend line. It does not relate to the statistical significance of the trend line (see graph); the statistical significance of the trend is determined by its t-statistic. Often, filtering a series increases r² while making little difference to the fitted trend.
Thus far, the data have been assumed to consist of the trend plus noise, with the noise at each data point being independent and identically distributed random variables with a normal distribution. Real data (for example, climate data) may not fulfill these criteria. This is important, as it makes an enormous difference to the ease with which the statistics can be analyzed so as to extract maximum information from the data series. If there are other non-linear effects that have a correlation to the independent variable (such as cyclic influences), the use of least-squares estimation of the trend is not valid. Also, where the variations are significantly larger than the resulting straight-line trend, the choice of start and end points can significantly change the result. That is, the model is mathematically misspecified. Statistical inferences (tests for the presence of a trend, confidence intervals for the trend, etc.) are invalid unless departures from the standard assumptions are properly accounted for, for example, as follows:
In R, the linear trend in data can be estimated by using the 'tslm' function of the 'forecast' package.
Medical and biomedical studies often seek to determine a link between sets of data, such as of a clinical or scientific metric in three different diseases. But data may also be linked in time (such as change in the effect of a drug from baseline, to month 1, to month 2), or by an external factor that may or may not be determined by the researcher and/or their subject (such as no pain, mild pain, moderate pain, or severe pain). In these cases, one would expect the effect test statistic (e.g., influence of a statin on levels of cholesterol, an analgesic on the degree of pain, or increasing doses of different strengths of a drug on a measurable index, i.e. a dose–response effect) to change in direct order as the effect develops. Suppose the mean level of cholesterol before and after the prescription of a statin falls from 5.6 mmol/L at baseline to 3.4 mmol/L at one month and to 3.7 mmol/L at two months. Given sufficient power, an ANOVA (analysis of variance) would most likely find a significant fall at one and two months, but the fall is not linear. Furthermore, a post-hoc test may be required. An alternative test may be a repeated measures (two-way) ANOVA or Friedman test, depending on the nature of the data. Nevertheless, because the groups are ordered, a standard ANOVA is inappropriate. Should the cholesterol fall from 5.4 to 4.1 to 3.7, there is a clear linear trend. The same principle may be applied to the effects of allele/genotype frequency, where it could be argued that a single-nucleotide polymorphism in nucleotides XX, XY, YY are in fact a trend of no Y's, one Y, and then two Y's.[3]
The mathematics of linear trend estimation is a variant of the standard ANOVA, giving different information, and would be the most appropriate test if the researchers hypothesize a trend effect in their test statistic. One example is levels of serum trypsin in six groups of subjects ordered by age decade (10–19 years up to 60–69 years). Levels of trypsin (ng/mL) rise in a direct linear trend of 128, 152, 194, 207, 215, 218 (data from Altman). Unsurprisingly, a 'standard' ANOVA gives p < 0.0001, whereas linear trend estimation gives p = 0.00006. Incidentally, it could be reasonably argued that as age is a naturally continuous variable, it should not be categorized into decades, and an effect of age on serum trypsin should be sought by correlation (assuming the raw data is available). A further example is of a substance measured at four time points in different groups:
This is a clear trend. ANOVA gives p = 0.091, because the overall variance exceeds the means, whereas linear trend estimation gives p = 0.012. However, should the data have been collected at four time points in the same individuals, linear trend estimation would be inappropriate, and a two-way (repeated measures) ANOVA would have been applied.
https://en.wikipedia.org/wiki/Linear_trend_estimation
Segmented regression, also known as piecewise regression or broken-stick regression, is a method in regression analysis in which the independent variable is partitioned into intervals and a separate line segment is fit to each interval. Segmented regression analysis can also be performed on multivariate data by partitioning the various independent variables. Segmented regression is useful when the independent variables, clustered into different groups, exhibit different relationships between the variables in these regions. The boundaries between the segments are breakpoints.
Segmented linear regression is segmented regression whereby the relations in the intervals are obtained by linear regression.
Segmented linear regression with two segments separated by a breakpoint can be useful to quantify an abrupt change of the response function (Yr) of a varying influential factor (x). The breakpoint can be interpreted as a critical, safe, or threshold value beyond or below which (un)desired effects occur. The breakpoint can be important in decision making.[1]
The figures illustrate some of the results and regression types obtainable.
A segmented regression analysis is based on the presence of a set of (y, x) data, in which y is the dependent variable and x the independent variable.
The least squares method applied separately to each segment, by which the two regression lines are made to fit the data set as closely as possible while minimizing the sum of squares of the differences (SSD) between observed (y) and calculated (Yr) values of the dependent variable, results in the following two equations:

Yr = A1·x + K1 for x ≤ BP
Yr = A2·x + K2 for x > BP

where:
BP is the breakpoint, A1 and A2 are the regression coefficients (slopes) of the two segments, and K1 and K2 are the corresponding regression constants (intercepts).
The data may show many types of trends;[2] see the figures.
The method also yields two correlation coefficients (R):
and
where:
and
In the determination of the most suitable trend, statistical tests must be performed to ensure that this trend is reliable (significant).
When no significant breakpoint can be detected, one must fall back on a regression without breakpoint.
For the blue figure at the right, which gives the relation between yield of mustard (Yr = Ym, t/ha) and soil salinity (x = Ss, expressed as electric conductivity of the soil solution EC in dS/m), it is found that:[3]
BP = 4.93, A1 = 0, K1 = 1.74, A2 = −0.129, K2 = 2.38, R1² = 0.0035 (insignificant), R2² = 0.395 (significant), and:
indicating that soil salinities < 4.93 dS/m are safe and soil salinities > 4.93 dS/m reduce the yield at 0.129 t/ha per unit increase of soil salinity.
The figure also shows confidence intervals and uncertainty as elaborated hereunder.
The following statistical tests are used to determine the type of trend:
In addition, use is made of the correlation coefficient of all data (Ra), the coefficient of determination or coefficient of explanation, confidence intervals of the regression functions, and ANOVA analysis.[5]
The coefficient of determination for all data (Cd), which is to be maximized under the conditions set by the significance tests, is found from:

Cd = 1 − Σ(y − Yr)² / Σ(y − Ya)²
where Yr is the expected (predicted) value ofyaccording to the former regression equations and Ya is the average of allyvalues.
The Cd coefficient ranges between 0 (no explanation at all) to 1 (full explanation, perfect match).In a pure, unsegmented, linear regression, the values of Cd and Ra2are equal. In a segmented regression, Cd needs to be significantly larger than Ra2to justify the segmentation.
Theoptimalvalue of the breakpoint may be found such that the Cd coefficient ismaximum.
Segmented regression is often used to detect over which range an explanatory variable (X) has no effect on the dependent variable (Y), while beyond the reach there is a clear response, be it positive or negative.
The reach of no effect may be found at the initial part of X domain or conversely at its last part. For the "no effect" analysis, application of theleast squaresmethod for the segmented regression analysis[6]may not be the most appropriate technique because the aim is rather to find the longest stretch over which the Y-X relation can be considered to possess zero slope while beyond the reach the slope is significantly different from zero but knowledge about the best value of this slope is not material. The method to find the no-effect range is progressive partial regression[7]over the range, extending the range with small steps until the regression coefficient gets significantly different from zero.
In the next figure the break point is found at X=7.9 while for the same data (see blue figure above for mustard yield), the least squares method yields a break point only at X=4.9. The latter value is lower, but the fit of the data beyond the break point is better. Hence, it will depend on the purpose of the analysis which method needs to be employed.
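As a concrete illustration of maximizing Cd, the sketch below brute-forces the breakpoint by fitting a separate least-squares line to each segment and keeping the candidate with the smallest total SSD (equivalently, the largest Cd, since Cd = 1 − SSD/SStot with SStot fixed by the data). The function names and candidate grid are illustrative choices, not part of the cited methodology.

```python
def fit_line(pts):
    """Ordinary least squares fit y = a*x + k for one segment; returns (a, k, ssd)."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    a = sxy / sxx if sxx > 0 else 0.0
    k = my - a * mx
    ssd = sum((y - (a * x + k)) ** 2 for x, y in pts)
    return a, k, ssd

def best_breakpoint(data, candidates):
    """Return (breakpoint, total_ssd) minimizing the combined SSD of both segments."""
    best = None
    for bp in candidates:
        left = [(x, y) for x, y in data if x <= bp]
        right = [(x, y) for x, y in data if x > bp]
        if len(left) < 2 or len(right) < 2:
            continue  # each segment needs at least two points
        ssd = fit_line(left)[2] + fit_line(right)[2]
        if best is None or ssd < best[1]:
            best = (bp, ssd)
    return best
```

In practice the candidate grid would be refined around the best value, and the significance tests described above would be applied before accepting the segmentation.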
|
https://en.wikipedia.org/wiki/Segmented_regression
|
The purpose of this page is to provide supplementary materials for the ordinary least squares article, reducing the load of the main article with mathematics and improving its accessibility, while at the same time retaining the completeness of exposition.
Define the i-th residual to be
Then the objective S can be rewritten
Given that S is convex, it is minimized when its gradient vector is zero. (This follows by definition: if the gradient vector is not zero, there is a direction in which we can move to reduce it further – see maxima and minima.) The elements of the gradient vector are the partial derivatives of S with respect to the parameters:
The derivatives are
Substitution of the expressions for the residuals and the derivatives into the gradient equations gives
Thus if β̂ minimizes S, we have
Upon rearrangement, we obtain the normal equations:
The normal equations are written in matrix notation as
The solution of the normal equations yields the vector β̂ of the optimal parameter values.
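The normal equations can be checked numerically. As a minimal sketch (all names and data values are illustrative), the code below solves the 2×2 normal equations for a straight-line model y = b0 + b1·x by Cramer's rule and verifies that the residual vector is orthogonal to each column of X, which is exactly the statement that the gradient equations hold.

```python
def normal_equations_fit(xs, ys):
    """Solve the 2x2 normal equations for (b0, b1) in y = b0 + b1*x by Cramer's rule."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Normal equations:  n*b0  + sx*b1  = sy
    #                    sx*b0 + sxx*b1 = sxy
    det = n * sxx - sx * sx
    b0 = (sy * sxx - sx * sxy) / det
    b1 = (n * sxy - sx * sy) / det
    return b0, b1

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # exactly y = 1 + 2x
b0, b1 = normal_equations_fit(xs, ys)

# The residual vector should be orthogonal to both columns of X (ones and xs).
res = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
```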
The normal equations can be derived directly from a matrix representation of the problem as follows. The objective is to minimize
Here (βᵀXᵀy)ᵀ = yᵀXβ has dimension 1×1, so it is a scalar and equal to its own transpose; hence βᵀXᵀy = yᵀXβ and the quantity to minimize becomes
Differentiating this with respect to β and equating to zero to satisfy the first-order conditions gives
which is equivalent to the normal equations given above. A sufficient condition for satisfaction of the second-order conditions for a minimum is that X have full column rank, in which case XᵀX is positive definite.
When XᵀX is positive definite, the formula for the minimizing value of β can be derived without the use of derivatives. The quantity
can be written as
where C depends only on y and X, and ⟨·,·⟩ is the inner product defined by
It follows that S(β) is equal to
and is therefore minimized exactly when
In general, the coefficients of the matrices X, β and y can be complex. By using a Hermitian transpose instead of a simple transpose, it is possible to find a vector β̂ that minimizes S(β), just as in the real-matrix case. To obtain the normal equations we follow a path similar to that of the previous derivations:
where † stands for the Hermitian transpose.
We should now take derivatives of S(β) with respect to each of the coefficients β_j, but first we separate the real and imaginary parts to deal with the conjugate factors in the expression above. For β_j we have
and the derivatives change into
After rewriting S(β) in summation form and writing β_j explicitly, we can calculate both partial derivatives, with the result:
which, after adding together and setting equal to zero (the minimization condition for β̂), yields
In matrix form:
Using matrix notation, the sum of squared residuals is given by
Since this is a quadratic expression, the vector that gives the global minimum may be found via matrix calculus by differentiating with respect to the vector β (using denominator layout) and setting equal to zero:
By assumption matrix X has full column rank, and therefore XᵀX is invertible and the least squares estimator for β is given by
Plug y = Xβ + ε into the formula for β̂ and then use the law of total expectation:
where E[ε | X] = 0 by the assumptions of the model. Since the expected value of β̂ equals the parameter it estimates, β, it is an unbiased estimator of β.
For the variance, let the covariance matrix of ε be E[εεᵀ] = σ²I (where I is the m×m identity matrix), and let X be a known constant.
Then,
where we used the fact that β̂ − β is just an affine transformation of ε by the matrix (XᵀX)⁻¹Xᵀ.
For a simple linear regression model, where β = [β0, β1]ᵀ (β0 is the y-intercept and β1 is the slope), one obtains
First we plug the expression for y into the estimator, and use the fact that X′M = MX = 0 (the matrix M projects onto the space orthogonal to X):
Now we can recognize ε′Mε as a 1×1 matrix; such a matrix equals its own trace. This is useful because, by a property of the trace operator, tr(AB) = tr(BA), and we can use this to separate the disturbance ε from the matrix M, which is a function of the regressors X:
Using the law of iterated expectations this can be written as
Recall that M = I − P, where P is the projection onto the linear space spanned by the columns of the matrix X. By the properties of a projection matrix, it has p = rank(X) eigenvalues equal to 1, and all other eigenvalues equal to 0. The trace of a matrix is equal to the sum of its eigenvalues; thus tr(P) = p and tr(M) = n − p. Therefore,
Since the expected value of σ̂² does not equal the parameter it estimates, σ², it is a biased estimator of σ². Note that in the later section "Maximum likelihood" we show that, under the additional assumption that the errors are distributed normally, the estimator σ̂² is proportional to a chi-squared distribution with n − p degrees of freedom, from which the formula for the expected value would immediately follow. However, the result we have shown in this section is valid regardless of the distribution of the errors, and thus is important in its own right.
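The trace argument can be verified numerically for a small single-regressor design (p = 1), where P = xxᵀ/(xᵀx) and M = I − P, so that tr(M) should equal n − 1 and Mx should vanish. The column x below is arbitrary illustrative data.

```python
# Numeric check of tr(M) = n - p for the p = 1 case, with P = x x' / (x'x).
x = [1.0, 2.0, 3.0, 4.0]
n = len(x)
sxx = sum(v * v for v in x)

# Build P and M = I - P as nested lists.
P = [[x[i] * x[j] / sxx for j in range(n)] for i in range(n)]
M = [[(1.0 if i == j else 0.0) - P[i][j] for j in range(n)] for i in range(n)]

trace_M = sum(M[i][i] for i in range(n))                      # should be n - 1
Mx = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]  # should be the zero vector
```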
The estimator β̂ can be written as
We can use the law of large numbers to establish that
By Slutsky's theorem and the continuous mapping theorem these results can be combined to establish consistency of the estimator β̂:
The central limit theorem tells us that
Applying Slutsky's theorem again, we have
Maximum likelihood estimation is a generic technique for estimating the unknown parameters in a statistical model by constructing a log-likelihood function corresponding to the joint distribution of the data, then maximizing this function over all possible parameter values. In order to apply this method, we have to make an assumption about the distribution of y given X so that the log-likelihood function can be constructed. The connection of maximum likelihood estimation to OLS arises when this distribution is modeled as a multivariate normal.
Specifically, assume that the errors ε have a multivariate normal distribution with mean 0 and variance matrix σ²I. Then the distribution of y conditional on X is
and the log-likelihood function of the data is
Differentiating this expression with respect to β and σ², we find the ML estimates of these parameters:
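As a sketch of the first-order condition for σ², the code below fits a toy data set by OLS (which is also the ML fit for β under normal errors) and checks that the Gaussian log-likelihood, viewed as a function of σ² at the fitted β, peaks at SSR/n. All data values are made up for illustration.

```python
import math

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]   # roughly y = 1 + 2x plus small fixed "noise"

# OLS fit (= ML fit for beta under normal errors)
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
b0 = ybar - b1 * xbar
ssr = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
s2_ml = ssr / n   # the ML estimate of sigma^2

def loglik(s2):
    """Gaussian log-likelihood as a function of sigma^2, at the fitted beta."""
    return -0.5 * n * math.log(2 * math.pi * s2) - ssr / (2 * s2)
```

Because the derivative of the log-likelihood in σ² is −n/(2σ²) + SSR/(2σ⁴), its unique zero is σ² = SSR/n, so the log-likelihood at s2_ml strictly dominates nearby values.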
We can check that this is indeed a maximum by looking at theHessian matrixof the log-likelihood function.
Since we have assumed in this section that the distribution of the error terms is known to be normal, it becomes possible to derive explicit expressions for the distributions of the estimators β̂ and σ̂²:
so that by the affine transformation properties of the multivariate normal distribution
Similarly, the distribution of σ̂² follows from
where M = I − X(X′X)⁻¹X′ is the symmetric projection matrix onto the subspace orthogonal to X, and thus MX = X′M = 0. We have argued before that this matrix has rank n − p, and thus by the properties of the chi-squared distribution,
Moreover, the estimators β̂ and σ̂² turn out to be independent (conditional on X), a fact which is fundamental for the construction of the classical t- and F-tests. The independence can easily be seen from the following: the estimator β̂ represents the coefficients of the vector decomposition of ŷ = Xβ̂ = Py = Xβ + Pε in the basis of the columns of X; as such, β̂ is a function of Pε. At the same time, the estimator σ̂² is the squared norm of the vector Mε divided by n, and thus is a function of Mε. Now, the random variables (Pε, Mε) are jointly normal as linear transformations of ε, and they are also uncorrelated because PM = 0. By the properties of the multivariate normal distribution, this means that Pε and Mε are independent, and therefore the estimators β̂ and σ̂² are independent as well.
We look for α̂ and β̂ that minimize the sum of squared errors (SSE):
To find a minimum, take the partial derivatives with respect to α̂ and β̂:
Before taking the partial derivative with respect to β̂, substitute the previous result for α̂.
Now, take the derivative with respect to β̂:
And finally substitute β̂ to determine α̂:
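The resulting closed-form estimates, β̂ = Σ(x − x̄)(y − ȳ)/Σ(x − x̄)² and α̂ = ȳ − β̂x̄, can be sketched directly; the helper name and data points below are illustrative.

```python
def simple_ols(xs, ys):
    """Closed-form simple linear regression: returns (alpha_hat, beta_hat)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    beta = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
           / sum((x - xbar) ** 2 for x in xs)
    alpha = ybar - beta * xbar    # substitute beta_hat back, as in the derivation
    return alpha, beta

alpha, beta = simple_ols([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0])  # data on y = 1 + 2x
```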
|
https://en.wikipedia.org/wiki/Proofs_involving_ordinary_least_squares
|
A Newey–West estimator is used in statistics and econometrics to provide an estimate of the covariance matrix of the parameters of a regression-type model where the standard assumptions of regression analysis do not apply.[1] It was devised by Whitney K. Newey and Kenneth D. West in 1987, although there are a number of later variants.[2][3][4][5] The estimator is used to try to overcome autocorrelation (also called serial correlation) and heteroskedasticity in the error terms of the model, often for regressions applied to time series data. The abbreviation "HAC", sometimes used for the estimator, stands for "heteroskedasticity and autocorrelation consistent".[2] A number of HAC estimators are described in the literature,[6] and the term does not refer uniquely to Newey–West. One version, Newey–West Bartlett, requires the user to specify the bandwidth and uses the Bartlett kernel from kernel density estimation.[6]
Regression models estimated with time series data often exhibit autocorrelation; that is, the error terms are correlated over time. The heteroskedasticity-consistent estimator of the error covariance is constructed from a term XᵀΣX, where X is the design matrix for the regression problem and Σ is the covariance matrix of the residuals. The least squares estimator b is a consistent estimator of β. This implies that the least squares residuals eᵢ are "point-wise" consistent estimators of their population counterparts Eᵢ. The general approach, then, is to use X and e to devise an estimator of XᵀΣX.[7] The underlying assumption is that as the time between error terms increases, the correlation between them decreases. The estimator can thus be used to improve ordinary least squares (OLS) regression when the residuals are heteroskedastic and/or autocorrelated.
where T is the sample size, e_t is the t-th residual, x_t is the t-th row of the design matrix, and w_ℓ is the Bartlett kernel,[8] which can be thought of as a weight that decreases with increasing separation between samples. Disturbances that are farther apart from each other are given lower weight, while those with equal subscripts are given a weight of 1. This ensures that the second term converges (in some appropriate sense) to a finite matrix. The weighting scheme also ensures that the resulting covariance matrix is positive semi-definite.[2] Setting L = 0 reduces the Newey–West estimator to the Huber–White standard error.[9] L specifies the maximum lag considered for the control of autocorrelation. A common choice for L is T^(1/4).[9][10]
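For a single regressor the estimator's middle ("meat") term reduces to scalar arithmetic, which the sketch below implements with the Bartlett weights w_ℓ = 1 − ℓ/(L + 1). The residuals and regressor values are made-up illustrative numbers, and the function name is hypothetical.

```python
def newey_west_meat(x, e, L):
    """Newey-West middle term for a scalar regressor, with Bartlett weights."""
    T = len(x)
    s = sum(e[t] ** 2 * x[t] ** 2 for t in range(T))        # lag-0 (White) term
    for l in range(1, L + 1):
        w = 1.0 - l / (L + 1.0)                             # Bartlett kernel weight
        cross = sum(x[t] * e[t] * e[t - l] * x[t - l] for t in range(l, T))
        s += 2.0 * w * cross                                # both (t, t-l) orderings
    return s

x = [1.0, 2.0, 1.5, 0.5, 1.0, 2.5]   # illustrative regressor values
e = [0.3, -0.2, 0.4, -0.1, 0.2, -0.3]  # illustrative OLS residuals
```

With L = 0 the function returns the heteroskedasticity-only (Huber–White) term, consistent with the reduction noted above; the full sandwich covariance would wrap this term in (Σx²)⁻¹ on both sides.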
In Julia, the CovarianceMatrices.jl package[11] supports several types of heteroskedasticity and autocorrelation consistent covariance matrix estimation, including Newey–West, White, and Arellano.
In R, the packages sandwich[6] and plm[12] include a function for the Newey–West estimator.
In Stata, the command newey produces Newey–West standard errors for coefficients estimated by OLS regression.[13]
In MATLAB, the command hac in the Econometrics toolbox produces the Newey–West estimator (among others).[14]
In Python, the statsmodels[15] module includes functions for computing the covariance matrix using Newey–West.
In Gretl, the option --robust to several estimation commands (such as ols) in the context of a time-series dataset produces Newey–West standard errors.[16]
In SAS, Newey–West corrected standard errors can be obtained in PROC AUTOREG and PROC MODEL.[17]
|
https://en.wikipedia.org/wiki/Newey%E2%80%93West_estimator
|
In probability theory, Lorden's inequality is a bound for the moments of the overshoot for a stopped sum of random variables, first published by Gary Lorden in 1970.[1] Overshoots play a central role in renewal theory.[2]
Let X1, X2, ... be independent and identically distributed positive random variables and define the sums Sn = X1 + X2 + ... + Xn. Consider the first time Sn exceeds a given value b, and at that time compute Rb = Sn − b. Rb is called the overshoot or excess at b. Lorden's inequality states that the expectation of this overshoot is bounded as[2]
Three proofs are known, due to Lorden,[1] Carlsson and Nerman,[3] and Chang.[4]
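The bound E[Rb] ≤ E[X²]/E[X] can be illustrated by simulation. With Exp(1) increments, E[X²]/E[X] = 2, while memorylessness makes the overshoot itself Exp(1) with mean 1, comfortably inside the bound. The sample size and level b below are arbitrary choices.

```python
import random

random.seed(0)

def overshoot(b):
    """Run the renewal sum with Exp(1) increments until it first exceeds b."""
    s = 0.0
    while s <= b:
        s += random.expovariate(1.0)
    return s - b

samples = [overshoot(10.0) for _ in range(20000)]
mean_overshoot = sum(samples) / len(samples)
bound = 2.0   # E[X^2] / E[X] for Exp(1) increments
```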
|
https://en.wikipedia.org/wiki/Lorden%27s_inequality
|
In probability theory, Wald's martingale is the name sometimes given to a martingale used to study sums of i.i.d. random variables. It is named after the mathematician Abraham Wald, who used these ideas in a series of influential publications.[1][2][3]
Wald's martingale can be seen as the discrete-time equivalent of the Doléans-Dade exponential.
Let (Xn)n≥1 be a sequence of i.i.d. random variables whose moment generating function M : θ ↦ E(e^(θX1)) is finite for some θ > 0, and let Sn = X1 + ⋯ + Xn, with S0 = 0. Then the process (Wn)n≥0 defined by
is a martingale known as Wald's martingale.[4] In particular, E(Wn) = 1 for all n ≥ 0.
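The property E(Wn) = 1 can be checked exactly for a small example, using the standard form Wn = e^(θSn)/M(θ)^n. Taking Xi = ±1 with equal probability gives M(θ) = cosh θ, and enumerating all 2^n equally likely sign paths recovers the expectation; θ and n are illustrative choices.

```python
import math
from itertools import product

theta = 0.7
M = math.cosh(theta)   # moment generating function of a fair +/-1 step at theta
n = 6

# Enumerate all 2^n equally likely sign paths and average W_n exactly.
expectation = 0.0
for signs in product((-1, 1), repeat=n):
    s_n = sum(signs)
    w_n = math.exp(theta * s_n) / M ** n
    expectation += w_n / 2 ** n
```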
|
https://en.wikipedia.org/wiki/Wald%27s_martingale
|
In probability theory, Spitzer's formula or Spitzer's identity gives the joint distribution of partial sums and maximal partial sums of a collection of random variables. The result was first published by Frank Spitzer in 1956.[1] The formula is regarded as "a stepping stone in the theory of sums of independent random variables".[2]
Let X1, X2, ... be independent and identically distributed random variables and define the partial sums Sn = X1 + X2 + ... + Xn. Define Rn = max(0, S1, S2, ..., Sn). Then[3]
where
and S± denotes (|S| ± S)/2.
Two proofs are known, due to Spitzer[1] and Wendel.[3]
|
https://en.wikipedia.org/wiki/Spitzer%27s_formula
|
A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG).[1] While it is one of several forms of causal notation, causal networks are special cases of Bayesian networks. Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
Formally, Bayesian networks are directed acyclic graphs (DAGs) whose nodes represent variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Each edge represents a direct conditional dependency. Any pair of nodes that are not connected (i.e. no path connects one node to the other) represents variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables, and gives (as output) the probability (or probability distribution, if applicable) of the variable represented by the node. For example, if m parent nodes represent m Boolean variables, then the probability function could be represented by a table of 2^m entries, one for each of the 2^m possible parent combinations. Similar ideas may be applied to undirected, and possibly cyclic, graphs such as Markov networks.
Suppose we want to model the dependencies between three variables: the sprinkler (or more precisely, its state: whether it is on or not), the presence or absence of rain, and whether the grass is wet or not. Observe that two events can cause the grass to become wet: an active sprinkler or rain. Rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler usually is not active). This situation can be modeled with a Bayesian network (shown to the right). Each variable has two possible values, T (for true) and F (for false).
The joint probability function is, by the chain rule of probability,
where G = "Grass wet (true/false)", S = "Sprinkler turned on (true/false)", and R = "Raining (true/false)".
The model can answer questions about the presence of a cause given the presence of an effect (so-called inverse probability), such as "What is the probability that it is raining, given the grass is wet?", by using the conditional probability formula and summing over all nuisance variables:
Using the expansion for the joint probability function Pr(G, S, R) and the conditional probabilities from the conditional probability tables (CPTs) stated in the diagram, one can evaluate each term in the sums in the numerator and denominator. For example,
Then the numerical results (subscripted by the associated variable values) are
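The inverse-probability computation can be sketched by brute-force enumeration of the joint distribution. The CPT numbers below follow the classic version of this sprinkler example; since the diagram itself is not reproduced here, treat them as illustrative.

```python
from itertools import product

T, F = True, False
p_R = {T: 0.2, F: 0.8}                                        # Pr(R)
p_S_given_R = {T: {T: 0.01, F: 0.99}, F: {T: 0.4, F: 0.6}}    # p_S_given_R[r][s] = Pr(S=s | R=r)
p_G_given_SR = {(T, T): 0.99, (T, F): 0.9,                    # Pr(G=T | S=s, R=r), keyed by (s, r)
                (F, T): 0.8, (F, F): 0.0}

def joint(g, s, r):
    """Pr(G=g, S=s, R=r) via the chain-rule factorization of the network."""
    pg = p_G_given_SR[(s, r)]
    return p_R[r] * p_S_given_R[r][s] * (pg if g else 1.0 - pg)

num = sum(joint(T, s, T) for s in (T, F))                          # Pr(G=T, R=T)
den = sum(joint(T, s, r) for s, r in product((T, F), repeat=2))    # Pr(G=T)
p_rain_given_wet = num / den   # approximately 0.3577 with these CPTs
```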
To answer an interventional question, such as "What is the probability that it would rain, given that we wet the grass?", the answer is governed by the post-intervention joint distribution function
obtained by removing the factor Pr(G ∣ S, R) from the pre-intervention distribution. The do operator forces the value of G to be true. The probability of rain is unaffected by the action:
To predict the impact of turning the sprinkler on:
with the term Pr(S = T ∣ R) removed, showing that the action affects the grass but not the rain.
These predictions may not be feasible given unobserved variables, as in most policy evaluation problems. The effect of the action do(x) can still be predicted, however, whenever the back-door criterion is satisfied.[2][3] It states that, if a set Z of nodes can be observed that d-separates[4] (or blocks) all back-door paths from X to Y, then
A back-door path is one that ends with an arrow into X. Sets that satisfy the back-door criterion are called "sufficient" or "admissible". For example, the set Z = R is admissible for predicting the effect of S = T on G, because R d-separates the (only) back-door path S ← R → G. However, if S is not observed, no other set d-separates this path, and the effect of turning the sprinkler on (S = T) on the grass (G) cannot be predicted from passive observations. In that case P(G | do(S = T)) is not "identified". This reflects the fact that, lacking interventional data, one cannot tell whether the observed dependence between S and G is due to a causal connection or is spurious (an apparent dependence arising from a common cause, R). (See Simpson's paradox.)
To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "do-calculus"[2][5] and test whether all do terms can be removed from the expression of that relation, thus confirming that the desired quantity is estimable from frequency data.[6]
Using a Bayesian network can save considerable amounts of memory over exhaustive probability tables, if the dependencies in the joint distribution are sparse. For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for 2^10 = 1024 values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation stores at most 10·2^3 = 80 values.
One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than complete joint distributions.
Bayesian networks perform three main inference tasks:
Because a Bayesian network is a complete model for its variables and their relationships, it can be used to answer probabilistic queries about them. For example, the network can be used to update knowledge of the state of a subset of variables when other variables (the evidence variables) are observed. This process of computing the posterior distribution of variables given evidence is called probabilistic inference. The posterior gives a universal sufficient statistic for detection applications, when choosing values for the variable subset that minimize some expected loss function, for instance the probability of decision error. A Bayesian network can thus be considered a mechanism for automatically applying Bayes' theorem to complex problems.
The most common exact inference methods are: variable elimination, which eliminates (by integration or summation) the non-observed non-query variables one by one by distributing the sum over the product; clique tree propagation, which caches the computation so that many variables can be queried at one time and new evidence can be propagated quickly; and recursive conditioning and AND/OR search, which allow for a space–time tradeoff and match the efficiency of variable elimination when enough space is used. All of these methods have complexity that is exponential in the network's treewidth. The most common approximate inference algorithms are importance sampling, stochastic MCMC simulation, mini-bucket elimination, loopy belief propagation, generalized belief propagation and variational methods.
In order to fully specify the Bayesian network and thus fully represent the joint probability distribution, it is necessary to specify for each node X the probability distribution for X conditional upon X's parents. The distribution of X conditional upon its parents may have any form. It is common to work with discrete or Gaussian distributions, since that simplifies calculations. Sometimes only constraints on the distribution are known; one can then use the principle of maximum entropy to determine a single distribution, the one with the greatest entropy given the constraints. (Analogously, in the specific context of a dynamic Bayesian network, the conditional distribution for the hidden state's temporal evolution is commonly specified to maximize the entropy rate of the implied stochastic process.)
Often these conditional distributions include parameters that are unknown and must be estimated from data, e.g., via the maximum likelihood approach. Direct maximization of the likelihood (or of the posterior probability) is often complex given unobserved variables. A classical approach to this problem is the expectation-maximization algorithm, which alternates between computing expected values of the unobserved variables conditional on observed data and maximizing the complete likelihood (or posterior) assuming that previously computed expected values are correct. Under mild regularity conditions, this process converges to maximum likelihood (or maximum posterior) values for the parameters.
A more fully Bayesian approach to parameters is to treat them as additional unobserved variables and to compute a full posterior distribution over all nodes conditional upon observed data, then to integrate out the parameters. This approach can be expensive and lead to large-dimension models, making classical parameter-setting approaches more tractable.
In the simplest case, a Bayesian network is specified by an expert and is then used to perform inference. In other applications, the task of defining the network is too complex for humans. In this case, the network structure and the parameters of the local distributions must be learned from data.
Automatically learning the graph structure of a Bayesian network (BN) is a challenge pursued within machine learning. The basic idea goes back to a recovery algorithm developed by Rebane and Pearl[7] and rests on the distinction between the three possible patterns allowed in a 3-node DAG:
The first two represent the same dependencies (X and Z are independent given Y) and are, therefore, indistinguishable. The collider, however, can be uniquely identified, since X and Z are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when X and Z have common parents, except that one must first condition on those parents. Algorithms have been developed to systematically determine the skeleton of the underlying graph and then orient all arrows whose directionality is dictated by the observed conditional independences.[2][8][9][10]
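The identifiability of the collider can be illustrated by simulation: X and Z are (nearly) uncorrelated marginally, but become dependent once Y is conditioned on, here measured via the partial correlation. All distributional choices below are arbitrary.

```python
import math
import random

random.seed(1)
n = 5000
X = [random.gauss(0, 1) for _ in range(n)]
Z = [random.gauss(0, 1) for _ in range(n)]
Y = [x + z + random.gauss(0, 1) for x, z in zip(X, Z)]   # collider: X -> Y <- Z

def corr(a, b):
    """Sample Pearson correlation coefficient."""
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    return cov / math.sqrt(sum((u - ma) ** 2 for u in a) * sum((v - mb) ** 2 for v in b))

r_xz, r_xy, r_zy = corr(X, Z), corr(X, Y), corr(Z, Y)
# Partial correlation of X and Z given Y (should be strongly negative here):
partial = (r_xz - r_xy * r_zy) / math.sqrt((1 - r_xy ** 2) * (1 - r_zy ** 2))
```

With these parameters the population values are corr(X, Z) = 0 and a partial correlation of −0.5, which is why conditioning on the collider creates the dependence that makes the pattern identifiable.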
An alternative method of structural learning uses optimization-based search. It requires a scoring function and a search strategy. A common scoring function is the posterior probability of the structure given the training data, such as BIC or BDeu. The time requirement of an exhaustive search returning a structure that maximizes the score is superexponential in the number of variables. A local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm like Markov chain Monte Carlo can avoid getting trapped in local minima. Friedman et al.[11][12] discuss using mutual information between variables and finding a structure that maximizes this. They do this by restricting the parent candidate set to k nodes and exhaustively searching therein.
A particularly fast method for exact BN learning is to cast the problem as an optimization problem and solve it using integer programming. Acyclicity constraints are added to the integer program (IP) during solving in the form of cutting planes.[13] Such a method can handle problems with up to 100 variables.
In order to deal with problems with thousands of variables, a different approach is necessary. One is to first sample one ordering, and then find the optimal BN structure with respect to that ordering. This implies working on the search space of the possible orderings, which is convenient as it is smaller than the space of network structures. Multiple orderings are then sampled and evaluated. This method has been proven to be the best available in the literature when the number of variables is huge.[14]
Another method consists of focusing on the sub-class of decomposable models, for which the MLE has a closed form. It is then possible to discover a consistent structure for hundreds of variables.[15]
Learning Bayesian networks with bounded treewidth is necessary to allow exact, tractable inference, since the worst-case inference complexity is exponential in the treewidth k (under the exponential time hypothesis). Yet, as a global property of the graph, it considerably increases the difficulty of the learning process. In this context it is possible to use K-trees for effective learning.[16]
Given datax{\displaystyle x\,\!}and parameterθ{\displaystyle \theta }, a simpleBayesian analysisstarts with aprior probability(prior)p(θ){\displaystyle p(\theta )}andlikelihoodp(x∣θ){\displaystyle p(x\mid \theta )}to compute aposterior probabilityp(θ∣x)∝p(x∣θ)p(θ){\displaystyle p(\theta \mid x)\propto p(x\mid \theta )p(\theta )}.
Often the prior on θ depends in turn on other parameters φ that are not mentioned in the likelihood. So, the prior p(θ) must be replaced by a likelihood p(θ ∣ φ), and a prior p(φ) on the newly introduced parameters φ is required, resulting in a posterior probability
This is the simplest example of ahierarchical Bayes model.
The process may be repeated; for example, the parameters φ may depend in turn on additional parameters ψ, which require their own prior. Eventually the process must terminate, with priors that do not depend on unmentioned parameters.
Given the measured quantities x1, …, xn, each with normally distributed errors of known standard deviation σ,
Suppose we are interested in estimating the θi. An approach would be to estimate the θi using a maximum likelihood approach; since the observations are independent, the likelihood factorizes and the maximum likelihood estimate is simply
However, if the quantities are related, so that for example the individual θi have themselves been drawn from an underlying distribution, then this relationship destroys the independence and suggests a more complex model, e.g.,
with improper priors φ ∼ flat, τ ∼ flat ∈ (0, ∞). When n ≥ 3, this is an identified model (i.e. there exists a unique solution for the model's parameters), and the posterior distributions of the individual θi will tend to move, or shrink, away from the maximum likelihood estimates towards their common mean. This shrinkage is a typical behavior in hierarchical Bayes models.
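The shrinkage behavior can be seen numerically. The sketch below uses a plug-in (empirical Bayes) estimate of the between-group variance τ² rather than the full posterior over τ, and the observed values are made up; both are assumptions for illustration only. Each posterior mean is a precision-weighted average of the observation and the common mean, so it lands between the two.

```python
sigma = 1.0
x = [2.0, 4.0, 6.0, 8.0]                 # observations = MLEs of theta_i
mean = sum(x) / len(x)                   # common mean, here 5.0

# Plug-in estimate of the between-group variance tau^2:
# sample variance of the x_i minus the known observation noise.
tau2 = max(sum((xi - mean) ** 2 for xi in x) / (len(x) - 1) - sigma**2, 0.0)

# Posterior mean of theta_i: precision-weighted average of x_i and mean.
w = tau2 / (tau2 + sigma**2)
post = [mean + w * (xi - mean) for xi in x]

for xi, pi in zip(x, post):
    # Each posterior mean is pulled from the MLE toward the common mean.
    assert abs(pi - mean) < abs(xi - mean)
print(post)
```

Note that the shrinkage factor w shrinks everything harder when the observations cluster tightly (small τ²) and barely at all when they are widely spread.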
Some care is needed when choosing priors in a hierarchical model, particularly on scale variables at higher levels of the hierarchy such as the variable τ in the example. The usual priors such as the Jeffreys prior often do not work, because the posterior distribution will not be normalizable and estimates made by minimizing the expected loss will be inadmissible.
Several equivalent definitions of a Bayesian network have been offered. For the following, let G = (V, E) be a directed acyclic graph (DAG) and let X = (Xv), v ∈ V be a set of random variables indexed by V.
X is a Bayesian network with respect to G if its joint probability density function (with respect to a product measure) can be written as a product of the individual density functions, conditional on their parent variables:[17]
where pa(v) is the set of parents of v (i.e. those vertices pointing directly to v via a single edge).
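This factorization can be made concrete with the classic rain/sprinkler/wet-grass network; the CPT numbers below are the usual illustrative values, not specified in this article. The joint probability of a full assignment is simply the product of one CPT entry per node.

```python
from itertools import product

p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {  # P(Sprinkler | Rain)
    (True,): {True: 0.01, False: 0.99},
    (False,): {True: 0.4, False: 0.6},
}
p_wet = {        # P(WetGrass | Sprinkler, Rain)
    (True, True): {True: 0.99, False: 0.01},
    (True, False): {True: 0.9, False: 0.1},
    (False, True): {True: 0.8, False: 0.2},
    (False, False): {True: 0.0, False: 1.0},
}

def joint(rain, sprinkler, wet):
    """P(R, S, W) = P(R) * P(S | R) * P(W | S, R): one CPT entry per node."""
    return (p_rain[rain]
            * p_sprinkler[(rain,)][sprinkler]
            * p_wet[(sprinkler, rain)][wet])

total = sum(joint(r, s, w) for r, s, w in product([True, False], repeat=3))
assert abs(total - 1.0) < 1e-12   # a valid joint distribution sums to 1
```

The storage saving is the point: three CPTs (2 + 4 + 8 entries) replace the 2³-entry joint table, and the gap widens exponentially with more variables.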
For any set of random variables, the probability of any member of a joint distribution can be calculated from conditional probabilities using the chain rule (given a topological ordering of X) as follows:[17]
Using the definition above, this can be written as:
The difference between the two expressions is the conditional independence of the variables from any of their non-descendants, given the values of their parent variables.
X is a Bayesian network with respect to G if it satisfies the local Markov property: each variable is conditionally independent of its non-descendants given its parent variables:[18]
where de(v) is the set of descendants and V \ de(v) is the set of non-descendants of v.
This can be expressed in terms similar to the first definition, as
The set of parents is a subset of the set of non-descendants because the graph is acyclic.
In general, learning a Bayesian network from data is known to be NP-hard.[19] This is due in part to the combinatorial explosion of enumerating DAGs as the number of variables increases. Nevertheless, insights about an underlying Bayesian network can be learned from data in polynomial time by focusing on its marginal independence structure:[20] while the conditional independence statements of a distribution modeled by a Bayesian network are encoded by a DAG (according to the factorization and Markov properties above), its marginal independence statements (the conditional independence statements in which the conditioning set is empty) are encoded by a simple undirected graph with special properties such as equal intersection and independence numbers.
Developing a Bayesian network often begins with creating a DAG G such that X satisfies the local Markov property with respect to G. Sometimes this is a causal DAG. The conditional probability distributions of each variable given its parents in G are assessed. In many cases, in particular in the case where the variables are discrete, if the joint distribution of X is the product of these conditional distributions, then X is a Bayesian network with respect to G.[21]
The Markov blanket of a node is the set of nodes consisting of its parents, its children, and any other parents of its children. The Markov blanket renders the node independent of the rest of the network; the joint distribution of the variables in the Markov blanket of a node is sufficient knowledge for calculating the distribution of the node. X is a Bayesian network with respect to G if every node is conditionally independent of all other nodes in the network, given its Markov blanket.[18]
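Reading the blanket off a DAG is mechanical; the short sketch below does it for a small hypothetical graph (node names and edges are made up for the example), with the DAG stored as a node-to-parents map.

```python
# A hypothetical DAG: A -> C, B -> C, C -> D, C -> E, B -> E.
dag = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"], "E": ["C", "B"]}

def markov_blanket(dag, v):
    """Parents of v, children of v, and the children's other parents."""
    parents = set(dag[v])
    children = {c for c, ps in dag.items() if v in ps}
    coparents = {p for c in children for p in dag[c]} - {v}
    return parents | children | coparents

print(markov_blanket(dag, "C"))   # {'A', 'B', 'D', 'E'}
```

For node C the blanket is everything else in this tiny graph, but in a large sparse network the blanket is typically a small neighborhood, which is what makes blanket-based conditioning cheap.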
This definition can be made more general by defining the "d"-separation of two nodes, where d stands for directional.[2] We first define the "d"-separation of a trail and then we will define the "d"-separation of two nodes in terms of that.
Let P be a trail from node u to v. A trail is a loop-free, undirected (i.e. all edge directions are ignored) path between two nodes. Then P is said to be d-separated by a set of nodes Z if any of the following conditions holds:

1. P contains a directed chain, u ⋯ ← m ← ⋯ v or u ⋯ → m → ⋯ v, such that the middle node m is in Z,
2. P contains a fork, u ⋯ ← m → ⋯ v, such that the middle node m is in Z, or
3. P contains an inverted fork (or collider), u ⋯ → m ← ⋯ v, such that the middle node m is not in Z and no descendant of m is in Z.
The nodes u and v are d-separated by Z if all trails between them are d-separated. If u and v are not d-separated, they are d-connected.
X is a Bayesian network with respect to G if, for any two nodes u, v:
where Z is a set which d-separates u and v. (The Markov blanket is the minimal set of nodes which d-separates node v from all other nodes.)
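A standard way to test d-separation algorithmically (an equivalent criterion, not the trail-based definition above) is via the moralized ancestral graph: u and v are d-separated by Z exactly when they are disconnected after restricting to the ancestors of {u, v} ∪ Z, "marrying" all co-parents, dropping edge directions, and deleting Z. A sketch, with the DAG stored as a node-to-parents map:

```python
from itertools import combinations

def ancestors(dag, nodes):
    """All nodes with a directed path into `nodes`, plus `nodes` itself."""
    out, frontier = set(nodes), list(nodes)
    while frontier:
        for p in dag[frontier.pop()]:
            if p not in out:
                out.add(p)
                frontier.append(p)
    return out

def d_separated(dag, u, v, z):
    """True iff u and v are d-separated by the set z in the DAG."""
    keep = ancestors(dag, {u, v} | set(z))
    # Moralize the ancestral subgraph: connect each node to its parents and
    # marry every pair of parents of a common child; drop edge directions.
    adj = {n: set() for n in keep}
    for child in keep:
        ps = [p for p in dag[child] if p in keep]
        for p in ps:
            adj[child].add(p)
            adj[p].add(child)
        for p, q in combinations(ps, 2):
            adj[p].add(q)
            adj[q].add(p)
    # Delete z and test whether u can still reach v.
    seen, stack = set(), [u]
    while stack:
        n = stack.pop()
        if n == v:
            return False
        if n in seen or n in z:
            continue
        seen.add(n)
        stack.extend(adj[n] - seen)
    return True

# Collider A -> C <- B: marginally d-separated, but conditioning on the
# collider C d-connects A and B (the moralization marries A and B).
dag = {"A": [], "B": [], "C": ["A", "B"]}
assert d_separated(dag, "A", "B", set())
assert not d_separated(dag, "A", "B", {"C"})
```

The collider case is the one that surprises newcomers: adding C to the conditioning set opens the path rather than blocking it, exactly as condition 3 in the trail definition says.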
Although Bayesian networks are often used to represent causal relationships, this need not be the case: a directed edge from u to v does not require that Xv be causally dependent on Xu. This is demonstrated by the fact that Bayesian networks on the graphs:
are equivalent: that is, they impose exactly the same conditional independence requirements.
A causal network is a Bayesian network with the requirement that the relationships be causal. The additional semantics of causal networks specify that if a node X is actively caused to be in a given state x (an action written as do(X = x)), then the probability density function changes to that of the network obtained by cutting the links from the parents of X to X, and setting X to the caused value x.[2] Using these semantics, the impact of external interventions from data obtained prior to intervention can be predicted.
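These link-cutting semantics can be made concrete with the usual sprinkler example: under do(Sprinkler = s) the Rain → Sprinkler CPT is discarded and the sprinkler's value is clamped, so the interventional probability of wet grass is a plain sum over the remaining parent, rain. The CPT numbers are illustrative, not from the article.

```python
p_rain = {True: 0.2, False: 0.8}
p_wet = {  # P(WetGrass = True | Sprinkler, Rain)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def p_wet_do_sprinkler(s):
    """P(WetGrass = True | do(Sprinkler = s)): the Rain -> Sprinkler link is
    cut, so rain keeps its marginal and the sprinkler is simply set to s."""
    return sum(p_rain[r] * p_wet[(s, r)] for r in (True, False))

print(p_wet_do_sprinkler(True))   # 0.2*0.99 + 0.8*0.9 = 0.918
```

Note that this differs from observational conditioning P(WetGrass | Sprinkler = True), where seeing the sprinkler on would also shift our belief about rain through the (now-cut) Rain → Sprinkler edge.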
In 1990, while working at Stanford University on large bioinformatic applications, Cooper proved that exact inference in Bayesian networks is NP-hard.[22] This result prompted research on approximation algorithms with the aim of developing a tractable approximation to probabilistic inference. In 1993, Paul Dagum and Michael Luby proved two surprising results on the complexity of approximation of probabilistic inference in Bayesian networks.[23] First, they proved that no tractable deterministic algorithm can approximate probabilistic inference to within an absolute error ɛ < 1/2. Second, they proved that no tractable randomized algorithm can approximate probabilistic inference to within an absolute error ɛ < 1/2 with confidence probability greater than 1/2.
At about the same time, Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a conjunctive normal form formula (CNF)) and that approximate inference within a factor 2^{n^{1−ɛ}} for every ɛ > 0, even for Bayesian networks with restricted architecture, is NP-hard.[24][25]
In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as naïve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm[26] developed by Dagum and Luby was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation. This powerful algorithm required a minor restriction: the conditional probabilities of the Bayesian network must be bounded away from zero and one by 1/p(n), where p(n) is any polynomial in the number of nodes in the network, n.
Notable software for Bayesian networks include:
The term Bayesian network was coined by Judea Pearl in 1985 to emphasize:[28]
In the late 1980s Pearl's Probabilistic Reasoning in Intelligent Systems[30] and Neapolitan's Probabilistic Reasoning in Expert Systems[31] summarized their properties and established them as a field of study.
https://en.wikipedia.org/wiki/Bayesian_network
Knowledge representation (KR) aims to model information in a structured manner to formally represent it as knowledge in knowledge-based systems, whereas knowledge representation and reasoning (KRR, KR&R, or KR²) also aims to understand, reason about, and interpret knowledge. KRR is widely used in the field of artificial intelligence (AI) with the goal to represent information about the world in a form that a computer system can use to solve complex tasks, such as diagnosing a medical condition or having a natural-language dialog. KR incorporates findings from psychology[1] about how humans solve problems and represent knowledge, in order to design formalisms that make complex systems easier to design and build. KRR also incorporates findings from logic to automate various kinds of reasoning.
Traditional KRR focuses more on the declarative representation of knowledge. Related knowledge representation formalisms mainly include vocabularies, thesauri, semantic networks, axiom systems, frames, rules, logic programs, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, model generators, and classifiers.
In a broader sense, parameterized models in machine learning (including neural network architectures such as convolutional neural networks and transformers) can also be regarded as a family of knowledge representation formalisms. The question of which formalism is most appropriate for knowledge-based systems has long been a subject of extensive debate. For instance, Frank van Harmelen et al. discussed the suitability of logic as a knowledge representation formalism and reviewed arguments presented by anti-logicists.[2] Paul Smolensky criticized the limitations of symbolic formalisms and explored the possibilities of integrating them with connectionist approaches.[3]
More recently, Heng Zhang et al. have demonstrated that all universal (or equally expressive and natural) knowledge representation formalisms are recursively isomorphic.[4] This finding indicates a theoretical equivalence among mainstream knowledge representation formalisms with respect to their capacity for supporting artificial general intelligence (AGI). They further argue that while diverse technical approaches may draw insights from one another via recursive isomorphisms, the fundamental challenges remain inherently shared.
The earliest work in computerized knowledge representation was focused on general problem-solvers such as the General Problem Solver (GPS) system developed by Allen Newell and Herbert A. Simon in 1959 and the Advice Taker proposed by John McCarthy, also in 1959. GPS featured data structures for planning and decomposition. The system would begin with a goal. It would then decompose that goal into sub-goals and then set out to construct strategies that could accomplish each subgoal. The Advice Taker, on the other hand, proposed the use of the predicate calculus to implement common sense reasoning.
Many of the early approaches to knowledge representation in Artificial Intelligence (AI) used graph representations and semantic networks, similar to knowledge graphs today. In such approaches, problem solving was a form of graph traversal[5] or path-finding, as in the A* search algorithm. Typical applications included robot plan-formation and game-playing.
Other researchers focused on developing automated theorem-provers for first-order logic, motivated by the use of mathematical logic to formalise mathematics and to automate the proof of mathematical theorems. A major step in this direction was the development of the resolution method by John Alan Robinson.
In the meantime, John McCarthy and Pat Hayes developed the situation calculus as a logical representation of common sense knowledge about the laws of cause and effect. Cordell Green, in turn, showed how to do robot plan-formation by applying resolution to the situation calculus. He also showed how to use resolution for question-answering and automatic programming.[6]
In contrast, researchers at Massachusetts Institute of Technology (MIT) rejected the resolution uniform proof procedure paradigm and advocated the procedural embedding of knowledge instead.[7] The resulting conflict between the use of logical representations and the use of procedural representations was resolved in the early 1970s with the development of logic programming and Prolog, using SLD resolution to treat Horn clauses as goal-reduction procedures.
The early development of logic programming was largely a European phenomenon. In North America, AI researchers such as Ed Feigenbaum and Frederick Hayes-Roth advocated the representation of domain-specific knowledge rather than general-purpose reasoning.[8]
These efforts led to the cognitive revolution in psychology and to the phase of AI focused on knowledge representation that resulted in expert systems in the 1970s and 80s, production systems, frame languages, etc. Rather than general problem solvers, AI changed its focus to expert systems that could match human competence on a specific task, such as medical diagnosis.[9]
Expert systems gave us the terminology still in use today where AI systems are divided into a knowledge base, which includes facts and rules about a problem domain, and an inference engine, which applies the knowledge in the knowledge base to answer questions and solve problems in the domain. In these early systems the facts in the knowledge base tended to be a fairly flat structure, essentially assertions about the values of variables used by the rules.[10]
Meanwhile, Marvin Minsky developed the concept of the frame in the mid-1970s.[11] A frame is similar to an object class: it is an abstract description of a category describing things in the world, problems, and potential solutions. Frames were originally used in systems geared toward human interaction, e.g. understanding natural language and the social settings in which various default expectations, such as ordering food in a restaurant, narrow the search space and allow the system to choose appropriate responses to dynamic situations.
It was not long before the frame communities and the rule-based researchers realized that there was a synergy between their approaches. Frames were good for representing the real world, described as classes, subclasses, and slots (data values) with various constraints on possible values. Rules were good for representing and utilizing complex logic such as the process to make a medical diagnosis. Integrated systems were developed that combined frames and rules. One of the most powerful and well-known was the 1983 Knowledge Engineering Environment (KEE) from Intellicorp. KEE had a complete rule engine with forward and backward chaining. It also had a complete frame-based knowledge base with triggers, slots (data values), inheritance, and message passing. Although message passing originated in the object-oriented community rather than AI, it was quickly embraced by AI researchers as well in environments such as KEE and in the operating systems for Lisp machines from Symbolics, Xerox, and Texas Instruments.[12]
The integration of frames, rules, and object-oriented programming was significantly driven by commercial ventures such as KEE and Symbolics spun off from various research projects. At the same time, there was another strain of research that was less commercially focused and was driven by mathematical logic and automated theorem proving.[citation needed] One of the most influential languages in this research was the KL-ONE language of the mid-'80s. KL-ONE was a frame language that had a rigorous semantics, with formal definitions for concepts such as an Is-A relation.[13] KL-ONE and languages that were influenced by it, such as Loom, had an automated reasoning engine that was based on formal logic rather than on IF-THEN rules. This reasoner is called the classifier. A classifier can analyze a set of declarations and infer new assertions, for example, redefine a class to be a subclass or superclass of some other class that wasn't formally specified. In this way the classifier can function as an inference engine, deducing new facts from an existing knowledge base. The classifier can also provide consistency checking on a knowledge base (which in the case of KL-ONE languages is also referred to as an ontology).[14]
Another area of knowledge representation research was the problem of common-sense reasoning. One of the first realizations learned from trying to make software that can function with human natural language was that humans regularly draw on an extensive foundation of knowledge about the real world that we simply take for granted but that is not at all obvious to an artificial agent, such as basic principles of common-sense physics, causality, intentions, etc. An example is the frame problem: in an event-driven logic there need to be axioms stating that things maintain position from one moment to the next unless they are moved by some external force. In order to make a true artificial intelligence agent that can converse with humans using natural language and can process basic statements and questions about the world, it is essential to represent this kind of knowledge.[15] In addition to McCarthy and Hayes' situation calculus, one of the most ambitious programs to tackle this problem was Doug Lenat's Cyc project. Cyc established its own frame language and had large numbers of analysts document various areas of common-sense reasoning in that language. The knowledge recorded in Cyc included common-sense models of time, causality, physics, intentions, and many others.[16]
The starting point for knowledge representation is the knowledge representation hypothesis first formalized by Brian C. Smith in 1985:[17]
Any mechanically embodied intelligent process will be comprised of structural ingredients that a) we as external observers naturally take to represent a propositional account of the knowledge that the overall process exhibits, and b) independent of such external semantic attribution, play a formal but causal and essential role in engendering the behavior that manifests that knowledge.
One of the most active areas of knowledge representation research is the Semantic Web.[citation needed] The Semantic Web seeks to add a layer of semantics (meaning) on top of the current Internet. Rather than indexing web sites and pages via keywords, the Semantic Web creates large ontologies of concepts. Searching for a concept will be more effective than traditional text-only searches. Frame languages and automatic classification play a big part in the vision for the future Semantic Web. The automatic classification gives developers technology to provide order on a constantly evolving network of knowledge. Defining ontologies that are static and incapable of evolving on the fly would be very limiting for Internet-based systems. The classifier technology provides the ability to deal with the dynamic environment of the Internet.
Recent projects funded primarily by the Defense Advanced Research Projects Agency (DARPA) have integrated frame languages and classifiers with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capability to define classes, subclasses, and properties of objects. The Web Ontology Language (OWL) provides additional levels of semantics and enables integration with classification engines.[18][19]
Knowledge representation is a field of artificial intelligence that focuses on designing computer representations that capture information about the world for use in solving complex problems.
The justification for knowledge representation is that conventional procedural code is not the best formalism to use to solve complex problems. Knowledge representation makes complex software easier to define and maintain than procedural code and can be used in expert systems.
For example, talking to experts in terms of business rules rather than code lessens the semantic gap between users and developers and makes development of complex systems more practical.
Knowledge representation goes hand in hand with automated reasoning because one of the main purposes of explicitly representing knowledge is to be able to reason about that knowledge, to make inferences, assert new knowledge, etc. Virtually all knowledge representation languages have a reasoning or inference engine as part of the system.[20]
A key trade-off in the design of knowledge representation formalisms is that between expressivity and tractability.[21] First Order Logic (FOL), with its high expressive power and ability to formalise much of mathematics, is a standard for comparing the expressibility of knowledge representation languages.
Arguably, FOL has two drawbacks as a knowledge representation formalism in its own right, namely ease of use and efficiency of implementation. Firstly, because of its high expressive power, FOL allows many ways of expressing the same information, and this can make it hard for users to formalise or even to understand knowledge expressed in complex, mathematically-oriented ways. Secondly, because of its complex proof procedures, it can be difficult for users to understand complex proofs and explanations, and it can be hard for implementations to be efficient. As a consequence, unrestricted FOL can be intimidating for many software developers.
One of the key discoveries of AI research in the 1970s was that languages that do not have the full expressive power of FOL can still provide close to the same expressive power of FOL, but can be easier for both the average developer and for the computer to understand. Many of the early AI knowledge representation formalisms, from databases to semantic nets to production systems, can be viewed as making various design decisions about how to balance expressive power with naturalness of expression and efficiency.[22] In particular, this balancing act was a driving motivation for the development of IF-THEN rules in rule-based expert systems.
A similar balancing act was also a motivation for the development of logic programming (LP) and the logic programming language Prolog. Logic programs have a rule-based syntax, which is easily confused with the IF-THEN syntax of production rules. But logic programs have a well-defined logical semantics, whereas production systems do not.
The earliest form of logic programming was based on the Horn clause subset of FOL. But later extensions of LP included the negation as failure inference rule, which turns LP into a non-monotonic logic for default reasoning. The resulting extended semantics of LP is a variation of the standard semantics of Horn clauses and FOL, and is a form of database semantics,[23] which includes the unique name assumption and a form of closed world assumption. These assumptions are much harder to state and reason with explicitly using the standard semantics of FOL.
In a key 1993 paper on the topic, Randall Davis of MIT outlined five distinct roles to analyze a knowledge representation framework:[24]
Knowledge representation and reasoning are a key enabling technology for the Semantic Web. Languages based on the Frame model with automatic classification provide a layer of semantics on top of the existing Internet. Rather than searching via text strings as is typical today, it will be possible to define logical queries and find pages that map to those queries.[18] The automated reasoning component in these systems is an engine known as the classifier. Classifiers focus on the subsumption relations in a knowledge base rather than rules. A classifier can infer new classes and dynamically change the ontology as new information becomes available. This capability is ideal for the ever-changing and evolving information space of the Internet.[25]
The Semantic Web integrates concepts from knowledge representation and reasoning with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capabilities to define knowledge-based objects on the Internet with basic features such as Is-A relations and object properties. The Web Ontology Language (OWL) adds additional semantics and integrates with automatic classification reasoners.[19]
In 1985, Ron Brachman categorized the core issues for knowledge representation as follows:[26]
In the early years of knowledge-based systems the knowledge bases were fairly small. The knowledge bases that were meant to actually solve real problems rather than do proof-of-concept demonstrations needed to focus on well-defined problems. So for example, not just medical diagnosis as a whole topic, but medical diagnosis of certain kinds of diseases.
As knowledge-based technology scaled up, the need for larger knowledge bases and for modular knowledge bases that could communicate and integrate with each other became apparent. This gave rise to the discipline of ontology engineering: designing and building large knowledge bases that could be used by multiple projects. One of the leading research projects in this area was the Cyc project. Cyc was an attempt to build a huge encyclopedic knowledge base that would contain not just expert knowledge but common-sense knowledge. In designing an artificial intelligence agent, it was soon realized that representing common-sense knowledge, knowledge that humans simply take for granted, was essential to make an AI that could interact with humans using natural language. Cyc was meant to address this problem. The language they defined was known as CycL.
After CycL, a number of ontology languages have been developed. Most are declarative languages, and are either frame languages or are based on first-order logic. Modularity, the ability to define boundaries around specific domains and problem spaces, is essential for these languages because, as stated by Tom Gruber, "Every ontology is a treaty–a social agreement among people with common motive in sharing." There are always many competing and differing views that make any general-purpose ontology impossible: a general-purpose ontology would have to be applicable in any domain, and different areas of knowledge would need to be unified.[30]
There is a long history of work attempting to build ontologies for a variety of task domains, e.g., an ontology for liquids,[31] the lumped element model widely used in representing electronic circuits (e.g.[32]), as well as ontologies for time, belief, and even programming itself. Each of these offers a way to see some part of the world.
The lumped element model, for instance, suggests that we think of circuits in terms of components with connections between them, with signals flowing instantaneously along the connections. This is a useful view, but not the only possible one. A different ontology arises if we need to attend to the electrodynamics in the device: Here signals propagate at finite speed and an object (like a resistor) that was previously viewed as a single component with an I/O behavior may now have to be thought of as an extended medium through which an electromagnetic wave flows.
Ontologies can of course be written down in a wide variety of languages and notations (e.g., logic, LISP, etc.); the essential information is not the form of that language but the content, i.e., the set of concepts offered as a way of thinking about the world. Simply put, the important part is notions like connections and components, not the choice between writing them as predicates or LISP constructs.
The commitment made in selecting one or another ontology can produce a sharply different view of the task at hand. Consider the difference that arises in selecting the lumped element view of a circuit rather than the electrodynamic view of the same device. As a second example, medical diagnosis viewed in terms of rules (e.g., MYCIN) looks substantially different from the same task viewed in terms of frames (e.g., INTERNIST). Where MYCIN sees the medical world as made up of empirical associations connecting symptom to disease, INTERNIST sees a set of prototypes, in particular prototypical diseases, to be matched against the case at hand.
https://en.wikipedia.org/wiki/Knowledge_representation
In the mathematical theory of probability, the Ionescu-Tulcea theorem, sometimes called the Ionescu-Tulcea extension theorem, deals with the existence of probability measures for probabilistic events consisting of a countably infinite number of individual probabilistic events. In particular, the individual events may be independent or dependent with respect to each other. Thus, the statement goes beyond the mere existence of countable product measures. The theorem was proved by Cassius Ionescu-Tulcea in 1949.[1][2]
Suppose that (Ω0, A0, P0) is a probability space and (Ωi, Ai) for i ∈ ℕ is a sequence of measurable spaces. For each i ∈ ℕ let
be the Markov kernel derived from (Ω^(i−1), A^(i−1)) and (Ωi, Ai), where
Then there exists a sequence of probability measures
and there exists a uniquely defined probability measure P on (∏_{k=0}^∞ Ωk, ⊗_{k=0}^∞ Ak), so that
is satisfied for each A ∈ A^i and i ∈ ℕ. (The measure P has conditional probabilities equal to the stochastic kernels.)[3]
The construction used in the proof of the Ionescu-Tulcea theorem is often used in the theory of Markov decision processes, and, in particular, the theory of Markov chains.[3]
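The constructive flavor of the theorem (build the process one coordinate at a time, with each kernel allowed to condition on the entire history so far) can be mimicked by a sampler. The particular history-dependent kernel below, a Polya-urn-like rule on {0, 1}, is an illustrative choice, not part of the theorem.

```python
import random

def kernel(history):
    """Probability of drawing 1 next, given the whole history so far
    (a Polya-urn-like rule: 1s become more likely the more 1s were seen)."""
    return (1 + sum(history)) / (2 + len(history))

def sample_path(n, seed=0):
    """Sample the first n coordinates of the process, one kernel at a time."""
    rng = random.Random(seed)
    path = []
    for _ in range(n):
        path.append(1 if rng.random() < kernel(path) else 0)
    return path

path = sample_path(10)
assert len(path) == 10 and set(path) <= {0, 1}
```

The theorem's content is precisely that this sequential recipe determines one consistent probability measure on the space of infinite paths, so finite samples like the one above are draws from a well-defined process.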
https://en.wikipedia.org/wiki/Ionescu-Tulcea_theorem
In probability theory, the Borel–Kolmogorov paradox (sometimes known as Borel's paradox) is a paradox relating to conditional probability with respect to an event of probability zero (also known as a null set). It is named after Émile Borel and Andrey Kolmogorov.
Suppose that a random variable has a uniform distribution on a unit sphere. What is its conditional distribution on a great circle? Because of the symmetry of the sphere, one might expect that the distribution is uniform and independent of the choice of coordinates. However, two analyses give contradictory results. First, note that choosing a point uniformly on the sphere is equivalent to choosing the longitude λ uniformly from [−π, π] and choosing the latitude φ from [−π/2, π/2] with density (1/2) cos φ.[1] Then we can look at two different great circles:
One distribution is uniform on the circle, the other is not. Yet both seem to be referring to the same great circle in different coordinate systems.
Many quite futile arguments have raged — between otherwise competent probabilists — over which of these results is 'correct'.
In case (1) above, the conditional probability that the longitudeλlies in a setEgiven thatφ= 0 can be writtenP(λ∈E|φ= 0). Elementary probability theory suggests this can be computed asP(λ∈Eandφ= 0)/P(φ= 0), but that expression is not well-defined sinceP(φ= 0) = 0.Measure theoryprovides a way to define a conditional probability, using the limit of eventsRab= {φ:a<φ<b} which are horizontal rings (curved surface zones ofspherical segments) consisting of all points with latitude betweenaandb.
The resolution of the paradox is to notice that in case (2),P(φ∈F|λ= 0) is defined using a limit of the eventsLcd= {λ:c<λ<d}, which arelunes(vertical wedges), consisting of all points whose longitude varies betweencandd. So althoughP(λ∈E|φ= 0) andP(φ∈F|λ= 0) each provide a probability distribution on a great circle, one of them is defined using limits of rings, and the other using limits of lunes. Since rings and lunes have different shapes, it should be less surprising thatP(λ∈E|φ= 0) andP(φ∈F|λ= 0) have different distributions.
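The disagreement between the two limiting operations can be made concrete by Monte Carlo (a NumPy sketch; the quarter-arc events below are illustrative choices). Conditioning on the equator through shrinking rings leaves the longitude uniform, so a quarter arc gets probability 1/4; conditioning on a meridian through shrinking lunes leaves the latitude with density (1/2)cos φ, so the quarter arc nearest the pole gets probability (1 − sin(π/4))/2 ≈ 0.146:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
eps = 0.05

# Uniform points on the unit sphere.
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
lon = np.arctan2(v[:, 1], v[:, 0])
lat = np.arcsin(v[:, 2])

# Case (1): condition on the equator via shrinking rings {|phi| < eps}.
# Longitude stays uniform: a quarter arc gets probability 1/4.
ring = np.abs(lat) < eps
p_ring = np.mean(lon[ring] > np.pi / 2)

# Case (2): condition on the meridian lambda = 0 via shrinking lunes
# {|lambda| < eps}.  Latitude keeps density (1/2) cos(phi): the quarter
# arc nearest the pole gets probability (1 - sin(pi/4)) / 2 ~ 0.146.
lune = np.abs(lon) < eps
p_lune = np.mean(lat[lune] > np.pi / 4)

print(round(p_ring, 2), round(p_lune, 2))   # two different conditional laws
```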
The concept of a conditional probability with regard to an isolated hypothesis whose probability equals 0 is inadmissible. For we can obtain a probability distribution for [the latitude] on the meridian circle only if we regard this circle as an element of the decomposition of the entire spherical surface onto meridian circles with the given poles
… the term 'great circle' is ambiguous until we specify what limiting operation is to produce it. The intuitive symmetry argument presupposes the equatorial limit; yet one eating slices of an orange might presuppose the other.
To understand the problem we need to recognize that a distribution on a continuous random variable is described by a densityfonly with respect to some measureμ. Both are important for the full description of the probability distribution. Or, equivalently, we need to fully define the space on which we want to definef.
Let Φ and Λ denote two random variables taking values inΩ1=[−π2,π2]{\textstyle \left[-{\frac {\pi }{2}},{\frac {\pi }{2}}\right]}andΩ2= [−π,π], respectively. An event {Φ =φ, Λ =λ} gives a point on the sphereS(r) with radiusr. We define thecoordinate transform
for which we obtain thevolume element
Furthermore, if eitherφorλis fixed, we get the volume elements
Let
denote the joint measure onB(Ω1×Ω2){\displaystyle {\mathcal {B}}(\Omega _{1}\times \Omega _{2})}, which has a densityfΦ,Λ{\displaystyle f_{\Phi ,\Lambda }}with respect toωr(φ,λ)dφdλ{\displaystyle \omega _{r}(\varphi ,\lambda )\,d\varphi \,d\lambda }and let
If we assume that the densityfΦ,Λ{\displaystyle f_{\Phi ,\Lambda }}is uniform, then
Hence,μΦ∣Λ{\displaystyle \mu _{\Phi \mid \Lambda }}has a uniform density with respect toωr(φ)dφ{\displaystyle \omega _{r}(\varphi )\,d\varphi }but not with respect to theLebesgue measure. On the other hand,μΛ∣Φ{\displaystyle \mu _{\Lambda \mid \Phi }}has a uniform density with respect toωr(λ)dλ{\displaystyle \omega _{r}(\lambda )\,d\lambda }and the Lebesgue measure.
Consider a random vector(X,Y,Z){\displaystyle (X,Y,Z)}that is uniformly distributed on the unit sphereS2{\displaystyle S^{2}}.
We begin by parametrizing the sphere with the usualspherical polar coordinates:
where−π2≤φ≤π2{\textstyle -{\frac {\pi }{2}}\leq \varphi \leq {\frac {\pi }{2}}}and−π≤θ≤π{\displaystyle -\pi \leq \theta \leq \pi }.
We can define random variablesΦ{\displaystyle \Phi },Θ{\displaystyle \Theta }as the values of(X,Y,Z){\displaystyle (X,Y,Z)}under the inverse of this parametrization, or more formally using thearctan2 function:
Using the formulas for the surface areas of aspherical capand aspherical wedge, the surface area of a spherical cap wedge is given by
Since(X,Y,Z){\displaystyle (X,Y,Z)}is uniformly distributed, the probability is proportional to the surface area, giving thejoint cumulative distribution function
Thejoint probability density functionis then given by
Note thatΦ{\displaystyle \Phi }andΘ{\displaystyle \Theta }are independent random variables.
For simplicity, we won't calculate the full conditional distribution on a great circle, only the probability that the random vector lies in the first octant. That is to say, we will attempt to calculate the conditional probabilityP(A|B){\displaystyle \mathbb {P} (A|B)}with
We attempt to evaluate the conditional probability as a limit of conditioning on the events
AsΦ{\displaystyle \Phi }andΘ{\displaystyle \Theta }are independent, so are the eventsA{\displaystyle A}andBε{\displaystyle B_{\varepsilon }}, therefore
Now we repeat the process with a different parametrization of the sphere:
This is equivalent to the previous parametrizationrotated by 90 degrees around the y axis.
Define new random variables
Rotation ismeasure preservingso the density ofΦ′{\displaystyle \Phi '}andΘ′{\displaystyle \Theta '}is the same:
The expressions forAandBare:
Attempting again to evaluate the conditional probability as a limit of conditioning on the events
UsingL'Hôpital's ruleanddifferentiation under the integral sign:
This shows that the conditional density cannot be treated as conditioning on an event of probability zero, as explained inConditional probability#Conditioning on an event of probability zero.
|
https://en.wikipedia.org/wiki/Borel%E2%80%93Kolmogorov_paradox
|
Inprobability theory,regular conditional probabilityis a concept that formalizes the notion of conditioning on the outcome of arandom variable. The resultingconditional probability distributionis a parametrized family of probability measures called aMarkov kernel.
Consider two random variablesX,Y:Ω→R{\displaystyle X,Y:\Omega \to \mathbb {R} }. Theconditional probability distributionofYgivenXis a two variable functionκY∣X:R×B(R)→[0,1]{\displaystyle \kappa _{Y\mid X}:\mathbb {R} \times {\mathcal {B}}(\mathbb {R} )\to [0,1]}
If the random variableXis discrete
If the random variablesX,Yare continuous with densityfX,Y(x,y){\displaystyle f_{X,Y}(x,y)}.
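In the discrete case the kernel is just the elementary conditional probability P(Y ∈ A | X = x), computed from the joint distribution. A small sketch with a hypothetical joint pmf:

```python
# Hypothetical joint pmf of two discrete random variables (X, Y).
joint = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}

def kappa(x, a):
    """Markov kernel kappa(x, A) = P(Y in A | X = x), for P(X = x) > 0."""
    p_x = sum(p for (xi, _), p in joint.items() if xi == x)
    p_xa = sum(p for (xi, yi), p in joint.items() if xi == x and yi in a)
    return p_xa / p_x

print(kappa(0, {1}))       # P(Y = 1 | X = 0) = 0.3 / 0.4 = 0.75
print(kappa(1, {0, 1}))    # for fixed x, kappa(x, .) is a probability measure
```

For each fixed x, the map A ↦ κ(x, A) is a probability measure, which is exactly the first regularity condition discussed below.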
A more general definition can be given in terms ofconditional expectation. Consider a functioneY∈A:R→[0,1]{\displaystyle e_{Y\in A}:\mathbb {R} \to [0,1]}satisfying
for almost allω{\displaystyle \omega }.
Then the conditional probability distribution is given by
As with conditional expectation, this can be further generalized to conditioning on a sigma algebraF{\displaystyle {\mathcal {F}}}. In that case the conditional distribution is a functionΩ×B(R)→[0,1]{\displaystyle \Omega \times {\mathcal {B}}(\mathbb {R} )\to [0,1]}:
For working withκY∣X{\displaystyle \kappa _{Y\mid X}}, it is important that it beregular, that is:
In other wordsκY∣X{\displaystyle \kappa _{Y\mid X}}is aMarkov kernel.
The second condition holds trivially, but the proof of the first is more involved. It can be shown that ifYis a random elementΩ→S{\displaystyle \Omega \to S}in aRadon spaceS, there exists aκY∣X{\displaystyle \kappa _{Y\mid X}}that satisfies the first condition.[1]It is possible to construct more general spaces where a regular conditional probability distribution does not exist.[2]
For discrete and continuous random variables, theconditional expectationcan be expressed as
wherefY∣X(x,y){\displaystyle f_{Y\mid X}(x,y)}is theconditional densityofYgivenX.
This result can be extended to measure theoretical conditional expectation using the regular conditional probability distribution:
Let(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}be aprobability space, and letT:Ω→E{\displaystyle T:\Omega \rightarrow E}be arandom variable, defined as aBorel-measurable functionfromΩ{\displaystyle \Omega }to itsstate space(E,E){\displaystyle (E,{\mathcal {E}})}.
One should think ofT{\displaystyle T}as a way to "disintegrate" the sample spaceΩ{\displaystyle \Omega }into{T−1(x)}x∈E{\displaystyle \{T^{-1}(x)\}_{x\in E}}.
Using thedisintegration theoremfrom measure theory, we can "disintegrate" the measureP{\displaystyle P}into a collection of measures,
one for eachx∈E{\displaystyle x\in E}. Formally, aregular conditional probabilityis defined as a functionν:E×F→[0,1],{\displaystyle \nu :E\times {\mathcal {F}}\rightarrow [0,1],}called a "transition probability", where:
whereP∘T−1{\displaystyle P\circ T^{-1}}is thepushforward measureT∗P{\displaystyle T_{*}P}of the distribution of the random elementT{\displaystyle T},x∈suppT,{\displaystyle x\in \operatorname {supp} T,}i.e. thesupportof theT∗P{\displaystyle T_{*}P}.
Specifically, if we takeB=E{\displaystyle B=E}, thenA∩T−1(E)=A{\displaystyle A\cap T^{-1}(E)=A}, and so
whereν(x,A){\displaystyle \nu (x,A)}can be denoted, using more familiar termsP(A|T=x){\displaystyle P(A\ |\ T=x)}.
Consider aRadon spaceΩ{\displaystyle \Omega }(that is, a probability measure defined on a Radon space endowed with the Borel sigma-algebra) and a real-valued random variableT. As discussed above, in this case there exists a regular conditional probability with respect toT. Moreover, we can alternatively define theregular conditional probabilityfor an eventAgiven a particular valuetof the random variableTin the following manner:
where thelimitis taken over thenetofopenneighborhoodsUoftas they becomesmaller with respect to set inclusion. This limit is defined if and only if the probability space isRadon, and only in the support ofT, as described in the article. This is the restriction of the transition probability to the support ofT. To describe this limiting process rigorously:
For everyε>0,{\displaystyle \varepsilon >0,}there exists an open neighborhoodUof the event {T=t}, such that for every openVwith{T=t}⊂V⊂U,{\displaystyle \{T=t\}\subset V\subset U,}
whereL=P(A∣T=t){\displaystyle L=P(A\mid T=t)}is the limit.
|
https://en.wikipedia.org/wiki/Regular_conditional_probability
|
Instatistics, sometimes thecovariance matrixof amultivariate random variableis not known but has to beestimated.Estimation of covariance matricesthen deals with the question of how to approximate the actual covariance matrix on the basis of a sample from themultivariate distribution. Simple cases, where observations are complete, can be dealt with by using thesample covariance matrix. The sample covariance matrix (SCM) is anunbiasedandefficient estimatorof the covariance matrix if the space of covariance matrices is viewed as anextrinsicconvex coneinRp×p; however, measured using theintrinsic geometryofpositive-definite matrices, the SCM is abiasedand inefficient estimator.[1]In addition, if the random variable has anormal distribution, the sample covariance matrix has aWishart distributionand a slightly differently scaled version of it is themaximum likelihood estimate. Cases involvingmissing data,heteroscedasticity, or autocorrelated residuals require deeper considerations. Another issue is therobustnesstooutliers, to which sample covariance matrices are highly sensitive.[2][3][4]
Statistical analyses of multivariate data often involve exploratory studies of the way in which the variables change in relation to one another and this may be followed up by explicit statistical models involving the covariance matrix of the variables. Thus the estimation of covariance matrices directly from observational data plays two roles:
Estimates of covariance matrices are required at the initial stages ofprincipal component analysisandfactor analysis, and are also involved in versions ofregression analysisthat treat thedependent variablesin a data-set, jointly with theindependent variable, as the outcome of a random sample.
Given asampleconsisting ofnindependent observationsx1,...,xnof ap-dimensionalrandom vectorX∈Rp×1(ap×1 column-vector), anunbiasedestimatorof the (p×p)covariance matrix
is thesample covariance matrix
wherexi{\displaystyle x_{i}}is thei-th observation of thep-dimensional random vector, and the vector
is thesample mean.
This is true regardless of the distribution of the random variableX, provided of course that the theoretical means and covariances exist. The reason for the factorn− 1 rather thannis essentially the same as the reason for the same factor appearing in unbiased estimates ofsample variancesandsample covariances, which relates to the fact that the mean is not known and is replaced by the sample mean (seeBessel's correction).
In cases where the distribution of therandom variableXis known to be within a certain family of distributions, other estimates may be derived on the basis of that assumption. A well-known instance is when therandom variableXisnormally distributed: in this case themaximum likelihoodestimatorof the covariance matrix is slightly different from the unbiased estimate, and is given by
A derivation of this result is given below. Clearly, the difference between the unbiased estimator and the maximum likelihood estimator diminishes for largen.
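The two estimators differ only in the divisor, as a short NumPy sketch (with a hypothetical 3-dimensional normal sample) shows; note that `np.cov` uses the unbiased n − 1 convention by default:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3

# Hypothetical sample from a 3-dimensional normal distribution.
sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
x = rng.multivariate_normal(np.zeros(p), sigma, size=n)

centered = x - x.mean(axis=0)
q_unbiased = centered.T @ centered / (n - 1)   # unbiased: divides by n - 1
q_mle = centered.T @ centered / n              # maximum likelihood: divides by n

# np.cov uses the unbiased n - 1 convention by default.
print(np.allclose(q_unbiased, np.cov(x, rowvar=False)))
print(np.allclose(q_mle, q_unbiased * (n - 1) / n))
```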
In the general case, the unbiased estimate of the covariance matrix provides an acceptable estimate when the data vectors in the observed data set are all complete: that is they contain nomissing elements. One approach to estimating the covariance matrix is to treat the estimation of each variance or pairwise covariance separately, and to use all the observations for which both variables have valid values. Assuming the missing data aremissing at randomthis results in an estimate for the covariance matrix which is unbiased. However, for many applications this estimate may not be acceptable because the estimated covariance matrix is not guaranteed to be positive semi-definite. This could lead to estimated correlations having absolute values which are greater than one, and/or a non-invertible covariance matrix.
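The failure of positive semi-definiteness under pairwise deletion is easy to exhibit with contrived data (a NumPy sketch; the data and helper below are hypothetical). Three variables are observed in pairs only, with pairwise correlations +1, +1, and −1, and the resulting matrix has a negative eigenvalue:

```python
import numpy as np

nan = np.nan
# Three variables with different missingness patterns (nan = missing).
data = np.array([
    [1.0, 1.0, nan],   # rows where only (x1, x2) are observed: corr = +1
    [2.0, 2.0, nan],
    [3.0, 3.0, nan],
    [1.0, nan, 1.0],   # rows where only (x1, x3) are observed: corr = +1
    [2.0, nan, 2.0],
    [3.0, nan, 3.0],
    [nan, 1.0, 3.0],   # rows where only (x2, x3) are observed: corr = -1
    [nan, 2.0, 2.0],
    [nan, 3.0, 1.0],
])

def pairwise_corr(a):
    """Correlation matrix using all pairwise-complete observations."""
    p = a.shape[1]
    r = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            mask = ~np.isnan(a[:, i]) & ~np.isnan(a[:, j])
            r[i, j] = r[j, i] = np.corrcoef(a[mask, i], a[mask, j])[0, 1]
    return r

r = pairwise_corr(data)
print(np.linalg.eigvalsh(r).min())   # negative: not positive semi-definite
```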
When estimating thecross-covarianceof a pair of signals that arewide-sense stationary, missing samples do not need to be random (e.g., sub-sampling by an arbitrary factor is valid).[citation needed]
A random vectorX∈Rp(ap×1 "column vector") has a multivariate normal distribution with a nonsingular covariance matrix Σ precisely if Σ ∈Rp×pis apositive-definite matrixand theprobability density functionofXis
whereμ∈Rp×1is theexpected valueofX. Thecovariance matrixΣ is the multidimensional analog of what in one dimension would be thevariance, and
normalizes the densityf(x){\displaystyle f(x)}so that it integrates to 1.
Suppose now thatX1, ...,Xnareindependentand identically distributed samples from the distribution above. Based on theobserved valuesx1, ...,xnof thissample, we wish to estimate Σ.
The likelihood function is:
It is fairly readily shown that themaximum-likelihoodestimate of the mean vectorμis the "sample mean" vector:
Seethe section on estimation in the article on the normal distributionfor details; the process here is similar.
Since the estimatex¯{\displaystyle {\bar {x}}}does not depend on Σ, we can just substitute it forμin thelikelihood function, getting
and then seek the value of Σ that maximizes the likelihood of the data (in practice it is easier to work with logL{\displaystyle {\mathcal {L}}}).
Now we come to the first surprising step: regard thescalar(xi−x¯)TΣ−1(xi−x¯){\displaystyle (x_{i}-{\overline {x}})^{\mathrm {T} }\Sigma ^{-1}(x_{i}-{\overline {x}})}as thetraceof a 1×1 matrix. This makes it possible to use the identity tr(AB) = tr(BA) wheneverAandBare matrices so shaped that both products exist. We get
where
S{\displaystyle S}is sometimes called thescatter matrix, and is positive definite if there exists a subset of the data consisting ofp{\displaystyle p}affinely independent observations (which we will assume).
It follows from thespectral theoremoflinear algebrathat a positive-definite symmetric matrixShas a unique positive-definite symmetric square rootS1/2. We can again use the"cyclic property"of the trace to write
LetB=S1/2Σ−1S1/2. Then the expression above becomes
The positive-definite matrixBcan be diagonalized, and then the problem of finding the value ofBthat maximizes
Since the trace of a square matrix equals the sum of eigenvalues ("trace and eigenvalues"), the equation reduces to the problem of finding the eigenvaluesλ1, ...,λpthat maximize
This is just a calculus problem and we getλi=nfor alli. Thus, ifQis the matrix of eigenvectors, then
i.e.,ntimes thep×pidentity matrix.
Finally we get
i.e., thep×p"sample covariance matrix"
is the maximum-likelihood estimator of the "population covariance matrix" Σ. At this point we are using a capitalXrather than a lower-casexbecause we are thinking of it "as an estimator rather than as an estimate", i.e., as something random whose probability distribution we could profit by knowing. The random matrixScan be shown to have aWishart distributionwithn− 1 degrees of freedom.[5]That is:
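The conclusion that S/n maximizes the likelihood can be checked numerically. The sketch below (hypothetical data, NumPy) evaluates the profile log-likelihood, up to an additive constant, at the claimed maximizer and at two perturbations; both perturbations lower it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3

# Hypothetical data with a non-trivial covariance.
a = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.3],
              [0.0, 0.0, 1.0]])
x = rng.standard_normal((n, p)) @ a

s = (x - x.mean(axis=0)).T @ (x - x.mean(axis=0))   # scatter matrix S
sigma_mle = s / n                                    # claimed maximizer S / n

def log_lik(sigma):
    """Profile log-likelihood in Sigma (mean set to the sample mean),
    up to an additive constant."""
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (n * logdet + np.trace(np.linalg.solve(sigma, s)))

# Any other positive-definite matrix gives a strictly lower likelihood.
print(log_lik(sigma_mle) > log_lik(sigma_mle + 0.05 * np.eye(p)))
print(log_lik(sigma_mle) > log_lik(1.2 * sigma_mle))
```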
An alternative derivation of the maximum likelihood estimator can be performed viamatrix calculusformulae (see alsodifferential of a determinantanddifferential of the inverse matrix). It also verifies the aforementioned fact about the maximum likelihood estimate of the mean. Re-write the likelihood in the log form using the trace trick:
The differential of this log-likelihood is
It naturally breaks down into the part related to the estimation of the mean, and to the part related to the estimation of the variance. Thefirst order conditionfor maximum,dlnL(μ,Σ)=0{\displaystyle d\ln {\mathcal {L}}(\mu ,\Sigma )=0}, is satisfied when the terms multiplyingdμ{\displaystyle d\mu }anddΣ{\displaystyle d\Sigma }are identically zero. Assuming (the maximum likelihood estimate of)Σ{\displaystyle \Sigma }is non-singular, the first order condition for the estimate of the mean vector is
which leads to the maximum likelihood estimator
This lets us simplify
as defined above. Then the terms involvingdΣ{\displaystyle d\Sigma }indlnL{\displaystyle d\ln L}can be combined as
The first order conditiondlnL(μ,Σ)=0{\displaystyle d\ln {\mathcal {L}}(\mu ,\Sigma )=0}will hold when the term in the square bracket is (matrix-valued) zero. Pre-multiplying the latter byΣ{\displaystyle \Sigma }and dividing byn{\displaystyle n}gives
which of course coincides with the canonical derivation given earlier.
Dwyer[6]points out that decomposition into two terms such as appears above is "unnecessary" and derives the estimator in two lines of working. Note that it may not be trivial to show that such a derived estimator is the unique global maximizer of the likelihood function.
Given asampleofnindependent observationsx1,...,xnof ap-dimensional zero-mean Gaussian random variableXwith covarianceR, themaximum likelihoodestimatorofRis given by
The parameterR{\displaystyle R}belongs to the set ofpositive-definite matrices, which is aRiemannian manifold, not avector space, hence the usual vector-space notions ofexpectation, i.e. "E[R^]{\displaystyle \mathrm {E} [{\hat {\mathbf {R} }}]}", andestimator biasmust be generalized to manifolds to make sense of the problem of covariance matrix estimation. This can be done by defining the expectation of a manifold-valued estimatorR^{\displaystyle {\hat {\mathbf {R} }}}with respect to the manifold-valued pointR{\displaystyle R}as
where
are theexponential mapand inverse exponential map, respectively, "exp" and "log" denote the ordinarymatrix exponentialandmatrix logarithm, and E[·] is the ordinary expectation operator defined on a vector space, in this case thetangent spaceof the manifold.[1]
Theintrinsic biasvector fieldof the SCM estimatorR^{\displaystyle {\hat {\mathbf {R} }}}is defined to be
The intrinsic estimator bias is then given byexpRB(R^){\displaystyle \exp _{\mathbf {R} }\mathbf {B} ({\hat {\mathbf {R} }})}.
ForcomplexGaussian random variables, this bias vector field can be shown[1]to equal
where
and ψ(·) is thedigamma function. The intrinsic bias of the sample covariance matrix equals
and the SCM is asymptotically unbiased asn→ ∞.
Similarly, the intrinsicinefficiencyof the sample covariance matrix depends upon theRiemannian curvatureof the space of positive-definite matrices.
If the sample sizenis small and the number of considered variablespis large, the above empirical estimators of covariance and correlation are very unstable. Specifically, it is possible to furnish estimators that improve considerably upon the maximum likelihood estimate in terms of mean squared error. Moreover, forn<p(the number of observations is less than the number of random variables) the empirical estimate of the covariance matrix becomessingular, i.e. it cannot be inverted to compute theprecision matrix.
As an alternative, many methods have been suggested to improve the estimation of the covariance matrix. All of these approaches rely on the concept of shrinkage. This is implicit inBayesian methodsand in penalizedmaximum likelihoodmethods and explicit in theStein-type shrinkage approach.
A simple version of a shrinkage estimator of the covariance matrix is represented by the Ledoit-Wolf shrinkage estimator.[7][8][9][10]One considers aconvex combinationof the empirical estimator (A{\displaystyle A}) with a suitably chosen target (B{\displaystyle B}), e.g., the diagonal matrix. Subsequently, the mixing parameter (δ{\displaystyle \delta }) is selected to maximize the expected accuracy of the shrunken estimator. This can be done bycross-validation, or by using an analytic estimate of the shrinkage intensity. The resulting regularized estimator (δA+(1−δ)B{\displaystyle \delta A+(1-\delta )B}) can be shown to outperform the maximum likelihood estimator for small samples. For large samples, the shrinkage intensity will reduce to zero, hence in this case the shrinkage estimator will be identical to the empirical estimator. Apart from increased efficiency the shrinkage estimate has the additional advantage that it is always positive definite and well conditioned.
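A minimal NumPy sketch of shrinkage toward a scaled-identity target, with n < p so the empirical estimator is singular. The fixed intensity δ = 0.5 is illustrative only; the Ledoit-Wolf estimator instead chooses δ analytically:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 50                       # fewer observations than variables

# True covariance is the identity; the empirical estimator A is singular.
x = rng.standard_normal((n, p))
a = np.cov(x, rowvar=False)

# Shrink toward the scaled-identity target B with a fixed illustrative
# intensity delta (the Ledoit-Wolf estimator chooses delta analytically).
delta = 0.5
b = np.trace(a) / p * np.eye(p)
shrunk = delta * a + (1 - delta) * b

print(np.linalg.matrix_rank(a) < p)              # A is singular (rank <= n - 1)
print(np.all(np.linalg.eigvalsh(shrunk) > 0))    # shrunk estimator is PD
err = lambda m: np.mean((m - np.eye(p)) ** 2)
print(err(shrunk) < err(a))                      # and more accurate here
```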
Various shrinkage targets have been proposed:
The shrinkage estimator can be generalized to a multi-target shrinkage estimator that utilizes several targets simultaneously.[11]
The Ledoit-Wolf shrinkage has been applied in many fields.[12]It is particularly useful to computepartial correlationsfrom high-dimensional data (n<p).[13]
Software for computing a covariance shrinkage estimator is available inR(packagescorpcor[14]andShrinkCovMat[15]), inPython(scikit-learnlibrary[1]), and inMATLAB.[16]
|
https://en.wikipedia.org/wiki/Estimation_of_covariance_matrices
|
This is a list ofpublicationsinstatistics, organized by field.
Some reasons why a particular publication might be regarded as important:
Mathematical Methods of Statistics
Statistical Decision Functions
Testing Statistical Hypotheses
An Essay Towards Solving a Problem in the Doctrine of Chances
On Small Differences in Sensation
Truth and Probability
Bayesian Inference in Statistical Analysis
Theory of Probability
Introduction to statistical decision theory
An Introduction to Multivariate Analysis
Time Series Analysis Forecasting and Control
Statistical Methods for Research Workers
Statistical Methods
Principles and Procedures of Statistics with Special Reference to the Biological Sciences.
Biometry: The Principles and Practices of Statistics in Biological Research
On the uniform convergence of relative frequencies of events to their probabilities
On the mathematical foundations of theoretical statistics
Estimation of variance and covariance components
Maximum-likelihood estimation for the mixed analysis of variance model
Recovery of inter-block information when block sizes are unequal
Estimation of Variance and Covariance Components in Linear Models
Nonparametric estimation from incomplete observations
A generalized Wilcoxon test for comparing arbitrarily singly-censored samples
Evaluation of survival data and two new rank order statistics arising in its consideration
Regression Models and Life Tables
The Statistical Analysis of Failure Time Data
Report on Certain Enteric Fever Inoculation Statistics
The Probability Integral Transformation for Testing Goodness of Fit and Combining Independent Tests of Significance
Combining Independent Tests of Significance
The combination of estimates from different experiments
On Small Differences in Sensation
The Design of Experiments
The Design and Analysis of Experiments
On the Experimental Attainment of Optimum Conditions (with discussion)
|
https://en.wikipedia.org/wiki/List_of_important_publications_in_statistics#Multivariate_analysis
|
Inmarketing,multivariate testingormulti-variable testingtechniques applystatistical hypothesis testingon multi-variable systems, typically consumers on websites. Techniques ofmultivariate statisticsare used.
Ininternet marketing, multivariate testing is a process by which more than one component of a website may be tested in a live environment.[1]It can be thought of in simple terms as numerousA/B testsperformed on one page at the same time. A/B tests are usually performed to determine the better of two content variations; multivariate testing uses multiple variables to find the ideal combination.[2]The only limits on the number of combinations and the number of variables in a multivariate test are the amount of time it will take to get a statistically valid sample of visitors and computational power.
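The number of variants in a full-factorial multivariate test is the product of the number of options for each element. A small sketch with a hypothetical page (3 headlines, 2 button colours, 2 hero images) enumerates all combinations:

```python
import itertools

# Hypothetical multivariate test: 3 headlines x 2 button colours x 2 images.
headlines = ["A", "B", "C"]
buttons = ["green", "red"]
images = ["photo", "illustration"]

# An A/B-style test compares variants of one element at a time; a
# full-factorial multivariate test serves every combination of elements.
combos = list(itertools.product(headlines, buttons, images))
print(len(combos))   # 12 page variants
```

This multiplicative growth is why the required sample size, rather than any software limit, is usually the binding constraint on test design.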
Multivariate testing is usually employed in order to ascertain which content or creative variation produces the best improvement in the defined goals of a website, whether that be user registrations or successful completion of a checkout process (that is,conversion rate).[3]Dramatic increases can be seen through testing different copy text, form layouts and even landing page images and background colours. However, not all elements produce the same increase in conversions, and by looking at the results from different tests, it is possible to identify those elements that consistently tend to produce the greatest increase in conversions.[4]
By setting up the server to display the different variations of content in equal proportions to incoming visitors, one can carry out testing on a dynamically generated website. Statistics on how each visitor went on to behave after seeing the content under test must then be gathered and presented. Websites with minor page coding changes can also utilize outsourced services for multivariate testing. These services insert their content into predefined areas of a site and monitor user behavior.
Multivariate testing essentially enables website visitors to express their preferences and determine which content is most likely to lead them towards a specific goal through their clicks. The testing is transparent to the visitor, with all commercial solutions capable of ensuring that each visitor is shown the same content on every visit.
Some websites benefit from constant, 24/7, continuous optimization as visitor response to creatives and layouts differs by time of day/week or even season.
Multivariate testing is currently an area of high growth in internet marketing as it helps website owners to ensure that they are getting the most from the visitors arriving at their site. Areas such assearch engine optimizationandpay per clickadvertising bring visitors to a site and have been extensively used by many organisations but multivariate testing allows internet marketeers to ensure that visitors are being shown the right offers, content and layout to convert them to sale, registration or the desired action once they arrive at the website.
There are two principal approaches used to achieve multivariate testing on websites. The first is page tagging, a process in which the website creator insertsJavaScriptinto the site to inject content variants and monitor visitor response. Page tagging typically tracks what a visitor viewed on the website and for how long that visitor remained on the site together with any click or conversion related actions performed. Page tagging is often done by a technical team rather than the online marketer who designs the test and interprets the results in the light of usability analysis.[5]Later refinements on this method allow for a single common tag to be deployed across all pages, reducing deployment time and removing the need for re-deployment between tests.
The second principal approach used does not require page tagging. By establishing a DNS-proxy or hosting within a website's owndatacenter, it is possible to intercept and process all web traffic to and from the site undergoing testing, insert variants and monitor visitor response. In this case, all logic sits on the server rather than browser-side, and after initial DNS changes are made, no further technical involvement is required from the website point of view. SiteSpect is known to employ this method of implementation.
Multivariate testing can also be applied to email body content and mobile web pages.
In addition to testing the efficacy of various creative/content executions on a website, the principles of multivariate testing can and often are used to test various offer combinations. Examples of this are testing various price points, purchase incentives, premiums, trial periods or other similar purchase incentives both individually and in combination with each other. The value of this is that marketers (both traditional and online) can use multivariate testing principles online to quickly ascertain and predict the effectiveness of offers without going through the more traditional multivariate testing methods which take significantly more time and money (focus groups, telephone surveys, etc.).
Statistical testing relies ondesign of experiments. Several methods in use for multivariate testing include:
|
https://en.wikipedia.org/wiki/Multivariate_testing_in_marketing
|
Structural equation modeling(SEM) is a diverse set of methods used by scientists for both observational and experimental research. SEM is used mostly in the social and behavioral science fields, but it is also used in epidemiology,[2]business,[3]and other fields. A common definition of SEM is, "...a class of methodologies that seeks to represent hypotheses about the means, variances, and covariances of observed data in terms of a smaller number of 'structural' parameters defined by a hypothesized underlying conceptual or theoretical model,".[4]
SEM involves a model representing how various aspects of somephenomenonare thought tocausallyconnect to one another. Structural equation models often contain postulated causal connections among some latent variables (variables thought to exist but which can't be directly observed). Additional causal connections link those latent variables to observed variables whose values appear in a data set. The causal connections are represented usingequationsbut the postulated structuring can also be presented using diagrams containing arrows as in Figures 1 and 2. The causal structures imply that specific patterns should appear among the values of the observed variables. This makes it possible to use the connections between the observed variables' values to estimate the magnitudes of the postulated effects, and to test whether or not the observed data are consistent with the requirements of the hypothesized causal structures.[5]
The boundary between what is and is not a structural equation model is not always clear but SE models often contain postulated causal connections among a set of latent variables (variables thought to exist but which can't be directly observed, like an attitude, intelligence or mental illness) and causal connections linking the postulated latent variables to variables that can be observed and whose values are available in some data set. Variations among the styles of latent causal connections, variations among the observed variables measuring the latent variables, and variations in the statistical estimation strategies result in the SEM toolkit includingconfirmatory factor analysis(CFA),confirmatory composite analysis,path analysis, multi-group modeling, longitudinal modeling,partial least squares path modeling,latent growth modelingand hierarchical or multilevel modeling.[6][7][8][9][10]
SEM researchers use computer programs to estimate the strength and sign of the coefficients corresponding to the modeled structural connections, for example the numbers connected to the arrows in Figure 1. Because a postulated model such as Figure 1 may not correspond to the worldly forces controlling the observed data measurements, the programs also provide model tests and diagnostic clues suggesting which indicators, or which model components, might introduce inconsistency between the model and observed data. Criticisms of SEM methods hint at: disregard of available model tests, problems in the model's specification, a tendency to accept models without considering external validity, and potential philosophical biases.[11]
A great advantage of SEM is that all of these measurements and tests occur simultaneously in one statistical estimation procedure, where all the model coefficients are calculated using all information from the observed variables. This means the estimates are more accurate than if a researcher were to calculate each part of the model separately.[12]
Structural equation modeling (SEM) began differentiating itself from correlation and regression when Sewall Wright provided explicit causal interpretations for a set of regression-style equations based on a solid understanding of the physical and physiological mechanisms producing direct and indirect effects among his observed variables.[13][14][15] The equations were estimated like ordinary regression equations, but the substantive context for the measured variables permitted clear causal, not merely predictive, understandings. O. D. Duncan introduced SEM to the social sciences in his 1975 book,[16] and SEM blossomed in the late 1970s and 1980s when increasing computing power permitted practical model estimation. In 1987 Hayduk[7] provided the first book-length introduction to structural equation modeling with latent variables, and this was soon followed by Bollen's popular text (1989).[17]
Different yet mathematically related modeling approaches developed in psychology, sociology, and economics. Early Cowles Commission work on simultaneous equations estimation centered on Koopman and Hood's (1953) algorithms from transport economics and optimal routing, with maximum likelihood estimation and closed-form algebraic calculations, as iterative solution search techniques were limited in the days before computers. The convergence of two of these developmental streams (factor analysis from psychology, and path analysis from sociology via Duncan) produced the current core of SEM. LISREL, one of several programs Karl Jöreskog developed at Educational Testing Service,[18][19][20] embedded latent variables (which psychologists knew as the latent factors from factor analysis) within path-analysis-style equations (which sociologists inherited from Wright and Duncan). The factor-structured portion of the model incorporated measurement errors, which permitted measurement-error adjustment, though not necessarily error-free estimation, of effects connecting different postulated latent variables.
Traces of the historical convergence of the factor analytic and path analytic traditions persist as the distinction between the measurement and structural portions of models, and as continuing disagreements over model testing and whether measurement should precede or accompany structural estimates.[21][22] Viewing factor analysis as a data-reduction technique deemphasizes testing, which contrasts with the path analytic appreciation for testing postulated causal connections – where the test result might signal model misspecification. The friction between the factor analytic and path analytic traditions continues to surface in the literature.
Wright's path analysis influenced Hermann Wold, Wold's student Karl Jöreskog, and Jöreskog's student Claes Fornell, but SEM never gained a large following among U.S. econometricians, possibly due to fundamental differences in modeling objectives and typical data structures. The prolonged separation of SEM's economic branch led to procedural and terminological differences, though deep mathematical and statistical connections remain.[23][24]Disciplinary differences in approaches can be seen in SEMNET discussions of endogeneity, and in discussions on causality via directed acyclic graphs (DAGs).[5]Discussions comparing and contrasting various SEM approaches are available[25][26]highlighting disciplinary differences in data structures and the concerns motivating economic models.
Judea Pearl[5] extended SEM from linear to nonparametric models, and proposed causal and counterfactual interpretations of the equations. Nonparametric SEMs permit estimating total, direct and indirect effects without making any commitment to linearity of effects or assumptions about the distributions of the error terms.[26]
SEM analyses are popular in the social sciences because these analytic techniques help us break down complex concepts and understand causal processes, but the complexity of the models can introduce substantial variability in the results depending on the presence or absence of conventional control variables, the sample size, and the variables of interest.[27]The use of experimental designs may address some of these doubts.[28]
Today, SEM forms the basis of machine learning and (interpretable) neural networks. Exploratory and confirmatory factor analyses in classical statistics mirror unsupervised and supervised machine learning.
The following considerations apply to the construction and assessment of many structural equation models.
Building or specifying a model requires attending to:
Structural equation models attempt to mirror the worldly forces operative for causally homogeneous cases – namely cases enmeshed in the same worldly causal structures but whose values on the causes differ and who therefore possess different values on the outcome variables. Causal homogeneity can be facilitated by case selection, or by segregating cases in a multi-group model. A model's specification is not complete until the researcher specifies:
The latent level of a model is composed of endogenous and exogenous variables. The endogenous latent variables are the true-score variables postulated as receiving effects from at least one other modeled variable. Each endogenous variable is modeled as the dependent variable in a regression-style equation. The exogenous latent variables are background variables postulated as causing one or more of the endogenous variables and are modeled like the predictor variables in regression-style equations. Causal connections among the exogenous variables are not explicitly modeled but are usually acknowledged by modeling the exogenous variables as freely correlating with one another. The model may include intervening variables – variables receiving effects from some variables but also sending effects to other variables. As in regression, each endogenous variable is assigned a residual or error variable encapsulating the effects of unavailable and usually unknown causes. Each latent variable, whether exogenous or endogenous, is thought of as containing the cases' true-scores on that variable, and these true-scores causally contribute valid/genuine variations into one or more of the observed/reported indicator variables.[29]
The LISREL program assigned Greek names to the elements in a set of matrices to keep track of the various model components. These names became relatively standard notation, though the notation has been extended and altered to accommodate a variety of statistical considerations.[20][7][17][30]Texts and programs "simplifying" model specification via diagrams or by using equations permitting user-selected variable names, re-convert the user's model into some standard matrix-algebra form in the background. The "simplifications" are achieved by implicitly introducing default program "assumptions" about model features with which users supposedly need not concern themselves. Unfortunately, these default assumptions easily obscure model components that leave unrecognized issues lurking within the model's structure, and underlying matrices.
Two main components of models are distinguished in SEM: the structural model showing potential causal dependencies between endogenous and exogenous latent variables, and the measurement model showing the causal connections between the latent variables and the indicators. Exploratory and confirmatory factor analysis models, for example, focus on the causal measurement connections, while path models more closely correspond to SEM's latent structural connections.
Modelers specify each coefficient in a model as being free to be estimated, or fixed at some value. The free coefficients may be postulated effects the researcher wishes to test, background correlations among the exogenous variables, or the variances of the residual or error variables providing additional variations in the endogenous latent variables. The fixed coefficients may be values like the 1.0 values in Figure 2 that provide scales for the latent variables, or values of 0.0 which assert causal disconnections, such as the assertion of no direct effects (no arrows) pointing from Academic Achievement to any of the four scales in Figure 1. SEM programs provide estimates and tests of the free coefficients, while the fixed coefficients contribute importantly to testing the overall model structure. Various kinds of constraints between coefficients can also be used.[30][7][17] The model specification depends on what is known from the literature, the researcher's experience with the modeled indicator variables, and the features being investigated by using the specific model structure.
There is a limit to how many coefficients can be estimated in a model. If there are fewer data points than the number of estimated coefficients, the resulting model is said to be "unidentified" and no coefficient estimates can be obtained. Reciprocal effects and other causal loops may also interfere with estimation.[31][32][30]
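The counting logic behind this limit can be sketched in a few lines (an illustrative fragment; the function name is ours): with p observed variables the data supply p(p+1)/2 unique variances and covariances, and the model's degrees of freedom are whatever remains after estimating q free coefficients.

```python
# Sketch of the counting rule for identification: with p observed variables
# the covariance matrix supplies p*(p+1)/2 unique data points, so a model
# estimating q free coefficients has df = p*(p+1)/2 - q.  A negative df
# guarantees the model is unidentified; df >= 0 is necessary but not
# sufficient for identification.

def degrees_of_freedom(p_observed: int, q_free: int) -> int:
    data_points = p_observed * (p_observed + 1) // 2
    return data_points - q_free

# Example: 6 indicators give 21 data points; a model with 13 free
# coefficients leaves 8 degrees of freedom for testing.
print(degrees_of_freedom(6, 13))   # 8
```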
Model coefficients fixed at zero, 1.0, or other values, do not require estimation because they already have specified values. Estimated values for free model coefficients are obtained by maximizing fit to, or minimizing difference from, the data relative to what the data's features would be if the free model coefficients took on the estimated values. The model's implications for what the data should look like for a specific set of coefficient values depend on:
a) the coefficients' locations in the model (e.g. which variables are connected/disconnected),
b) the nature of the connections between the variables (covariances or effects; with effects often assumed to be linear),
c) the nature of the error or residual variables (often assumed to be independent of, or causally-disconnected from, many variables),
and d) the measurement scales appropriate for the variables (interval level measurement is often assumed).
A stronger effect connecting two latent variables implies the indicators of those latents should be more strongly correlated. Hence, a reasonable estimate of a latent's effect will be whatever value best matches the correlations between the indicators of the corresponding latent variables – namely the estimate-value maximizing the match with the data, or minimizing the differences from the data. With maximum likelihood estimation, the numerical values of all the free model coefficients are individually adjusted (progressively increased or decreased from initial start values) until they maximize the likelihood of observing the sample data – whether the data are the variables' covariances/correlations, or the cases' actual values on the indicator variables. Ordinary least squares estimates are the coefficient values that minimize the squared differences between the data and what the data would look like if the model were correctly specified, namely if all the model's estimated features correspond to real worldly features.
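As an illustration of this estimate-by-matching logic, the following sketch (not the algorithm of any particular SEM program; all names and numbers are invented) computes the model-implied covariance matrix for a single standardized effect x → y and searches for the effect value minimizing the maximum-likelihood discrepancy from a sample covariance matrix:

```python
import numpy as np

# Minimal sketch: for a one-equation model y = b*x + e, the model-implied
# covariance matrix is
#   [[var_x,        b*var_x        ],
#    [b*var_x,  b^2*var_x + var_e  ]].
# Estimation adjusts the free coefficients until the implied matrix is as
# close as possible to the sample matrix S, here by minimizing the
# maximum-likelihood discrepancy F = ln|Sigma| - ln|S| + tr(S Sigma^-1) - p.

def implied_cov(b, var_x, var_e):
    return np.array([[var_x, b * var_x],
                     [b * var_x, b**2 * var_x + var_e]])

def ml_discrepancy(S, Sigma):
    p = S.shape[0]
    return (np.log(np.linalg.det(Sigma)) - np.log(np.linalg.det(S))
            + np.trace(S @ np.linalg.inv(Sigma)) - p)

S = np.array([[1.0, 0.5],
              [0.5, 1.0]])          # sample covariances (invented)

# Crude grid search over b alone (var_x fixed at 1 and var_e set to keep the
# implied variance of y at 1); real programs iterate over all free
# coefficients simultaneously.
best_b = min(np.linspace(-0.99, 0.99, 199),
             key=lambda b: ml_discrepancy(S, implied_cov(b, 1.0, 1.0 - b**2)))
print(round(best_b, 2))            # recovers b = 0.5
```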
The appropriate statistical feature to maximize or minimize to obtain estimates depends on the variables' levels of measurement (estimation is generally easier with interval level measurements than with nominal or ordinal measures), and where a specific variable appears in the model (e.g. endogenous dichotomous variables create more estimation difficulties than exogenous dichotomous variables). Most SEM programs provide several options for what is to be maximized or minimized to obtain estimates of the model's coefficients. The choices often include maximum likelihood estimation (MLE), full information maximum likelihood (FIML), ordinary least squares (OLS), weighted least squares (WLS), diagonally weighted least squares (DWLS), and two stage least squares.[30]
One common problem is that a coefficient's estimated value may be underidentified because it is insufficiently constrained by the model and data. No unique best-estimate exists unless the model and data together sufficiently constrain or restrict a coefficient's value. For example, the magnitude of a single data correlation between two variables is insufficient to provide estimates of a reciprocal pair of modeled effects between those variables. The correlation might be accounted for by one of the reciprocal effects being stronger than the other, by the other being the stronger, or by effects of equal magnitude. Underidentified effect estimates can be rendered identified by introducing additional model and/or data constraints. For example, reciprocal effects can be rendered identified by constraining one effect estimate to be double, triple, or equal to the other effect estimate,[32] but the resultant estimates will only be trustworthy if the additional model constraint corresponds to the world's structure. Data on a third variable that directly causes only one of a pair of reciprocally causally connected variables can also assist identification.[31] Constraining a third variable to not directly cause one of the reciprocally-causal variables breaks the symmetry otherwise plaguing the reciprocal effect estimates, because that third variable must be more strongly correlated with the variable it causes directly than with the variable at the "other" end of the reciprocal, which it impacts only indirectly.[31] Notice that this again presumes the properness of the model's causal specification – namely that there really is a direct effect leading from the third variable to the variable at this end of the reciprocal effects and no direct effect on the variable at the "other end" of the reciprocally connected pair of variables.
Theoretical demands for null/zero effects provide helpful constraints assisting estimation, though theories often fail to clearly report which effects are allegedly nonexistent.
Model assessment depends on the theory, the data, the model, and the estimation strategy. Hence model assessments consider:
Research claiming to test or "investigate" a theory requires attending to beyond-chance model-data inconsistency. Estimation adjusts the model's free coefficients to provide the best possible fit to the data. The output from SEM programs includes a matrix reporting the relationships among the observed variables that would be observed if the estimated model effects actually controlled the observed variables' values. The "fit" of a model reports match or mismatch between the model-implied relationships (often covariances) and the corresponding observed relationships among the variables. Large and significant differences between the data and the model's implications signal problems. The probability accompanying a χ² (chi-squared) test is the probability that the data could arise by random sampling variations if the estimated model constituted the real underlying population forces. A small χ² probability reports it would be unlikely for the current data to have arisen if the modeled structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations.
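The arithmetic of the χ² model test can be sketched as follows (a simplified illustration with invented numbers; real programs compute the minimized discrepancy F_min as part of estimation):

```python
from scipy.stats import chi2

# Hedged sketch of the chi-squared model test: the minimized maximum-
# likelihood discrepancy F_min, scaled by (N - 1), is asymptotically
# chi-squared distributed with the model's degrees of freedom if the model
# is correctly specified in the population.
def chi_square_test(f_min, n_cases, df):
    t = (n_cases - 1) * f_min      # the chi-squared test statistic
    p_value = chi2.sf(t, df)       # probability the data could arise by
    return t, p_value              # sampling variation under the model

# Illustrative (invented) numbers: F_min = 0.08, N = 300, df = 8.
t, p = chi_square_test(0.08, 300, 8)
print(round(t, 2), round(p, 4))    # a small p signals model-data inconsistency
```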
If a model remains inconsistent with the data despite selecting optimal coefficient estimates, an honest research response reports and attends to this evidence (often a significant model χ² test).[33] Beyond-chance model-data inconsistency challenges both the coefficient estimates and the model's capacity for adjudicating the model's structure, irrespective of whether the inconsistency originates in problematic data, inappropriate statistical estimation, or incorrect model specification.
Coefficient estimates in data-inconsistent ("failing") models are interpretable, as reports of how the world would appear to someone believing a model that conflicts with the available data. The estimates in data-inconsistent models do not necessarily become "obviously wrong" by becoming statistically strange, or wrongly signed according to theory. The estimates may even closely match a theory's requirements but the remaining data inconsistency renders the match between the estimates and theory unable to provide succor. Failing models remain interpretable, but only as interpretations that conflict with available evidence.
Replication is unlikely to detect misspecified models which inappropriately-fit the data. If the replicate data is within random variations of the original data, the same incorrect coefficient placements that provided inappropriate-fit to the original data will likely also inappropriately-fit the replicate data. Replication helps detect issues such as data mistakes (made by different research groups), but is especially weak at detecting misspecifications after exploratory model modification – as when confirmatory factor analysis is applied to a random second-half of data following exploratory factor analysis (EFA) of first-half data.
A modification index is an estimate of how much a model's fit to the data would "improve" (but not necessarily how much the model's structure would improve) if a specific currently-fixed model coefficient were freed for estimation. Researchers confronting data-inconsistent models can easily free the coefficients that the modification indices report as likely to produce substantial improvements in fit. This simultaneously introduces a substantial risk of moving from a causally-wrong-and-failing model to a causally-wrong-but-fitting model, because improved data fit does not provide assurance that the freed coefficients are substantively reasonable or world-matching. The original model may contain causal misspecifications such as incorrectly directed effects, or incorrect assumptions about unavailable variables, and such problems cannot be corrected by adding coefficients to the current model. Consequently, such models remain misspecified despite the closer fit provided by additional coefficients. Fitting yet worldly-inconsistent models are especially likely to arise if a researcher committed to a particular model (for example a factor model having a desired number of factors) gets an initially-failing model to fit by inserting measurement error covariances "suggested" by modification indices. MacCallum (1986) demonstrated that "even under favorable conditions, models arising from specification searches must be viewed with caution."[34] Model misspecification may sometimes be corrected by insertion of coefficients suggested by the modification indices, but many more corrective possibilities are raised by employing a few indicators of similar-yet-importantly-different latent variables.[35]
"Accepting" failing models as "close enough" is also not a reasonable alternative. A cautionary instance was provided by Browne, MacCallum, Kim, Anderson, and Glaser, who addressed the mathematics behind why the χ² test can have (though it does not always have) considerable power to detect model misspecification.[36] The probability accompanying a χ² test is the probability that the data could arise by random sampling variations if the current model, with its optimal estimates, constituted the real underlying population forces. A small χ² probability reports it would be unlikely for the current data to have arisen if the current model structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. Browne, MacCallum, Kim, Anderson, and Glaser presented a factor model they viewed as acceptable despite the model being significantly inconsistent with their data according to χ². The fallaciousness of their claim that close fit should be treated as good enough was demonstrated by Hayduk, Pazderka-Robinson, Cummings, Levers and Beres,[37] who obtained a fitting model for Browne et al.'s own data by incorporating an experimental feature Browne et al. overlooked. The fault was not in the math of the indices or in the over-sensitivity of χ² testing. The fault was in the authors forgetting, neglecting, or overlooking that the amount of ill fit cannot be trusted to correspond to the nature, location, or seriousness of problems in a model's specification.[38]
Many researchers tried to justify switching to fit indices, rather than testing their models, by claiming that χ² increases (and hence χ² probability decreases) with increasing sample size (N). There are two mistakes in discounting χ² on this basis. First, for proper models, χ² does not increase with increasing N,[33] so if χ² increases with N, that itself is a sign that something is detectably problematic. Second, for models that are detectably misspecified, χ² increase with N provides the good news of increasing statistical power to detect model misspecification (namely, a reduced risk of Type II error). Some kinds of important misspecifications cannot be detected by χ²,[38] so any amount of ill fit beyond what might reasonably be produced by random variations warrants report and consideration.[39][33] The χ² model test, possibly adjusted,[40] is the strongest available structural equation model test.
Numerous fit indices quantify how closely a model fits the data but all fit indices suffer from the logical difficulty that the size or amount of ill fit is not trustably coordinated with the severity or nature of the issues producing the data inconsistency.[38]Models with different causal structures which fit the data identically well, have been called equivalent models.[30]Such models are data-fit-equivalent though not causally equivalent, so at least one of the so-called equivalent models must be inconsistent with the world's structure. If there is a perfect 1.0 correlation between X and Y and we model this as X causes Y, there will be perfect fit and zero residual error. But the model may not match the world because Y may actually cause X, or both X and Y may be responding to a common cause Z, or the world may contain a mixture of these effects (e.g. like a common cause plus an effect of Y on X), or other causal structures. The perfect fit does not tell us the model's structure corresponds to the world's structure, and this in turn implies that getting closer to perfect fit does not necessarily correspond to getting closer to the world's structure – maybe it does, maybe it doesn't. This makes it incorrect for a researcher to claim that even perfect model fit implies the model is correctly causally specified. For even moderately complex models, precisely equivalently-fitting models are rare. Models almost-fitting the data, according to any index, unavoidably introduce additional potentially-important yet unknown model misspecifications. These models constitute a greater research impediment.
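The equivalent-models point can be made concrete with a small sketch (illustrative numbers of ours): the three causal structures described above – X causes Y, Y causes X, and a common cause Z – imply the identical covariance matrix for standardized X and Y, so fit alone cannot adjudicate among them.

```python
import numpy as np

# Sketch of data-fit-equivalent models: three causally different structures
# imply the identical covariance matrix for standardized X and Y.

r = 0.6  # observed correlation between X and Y (invented)

# Model A: X -> Y with effect r; Y's error variance is 1 - r^2.
cov_a = np.array([[1.0, r],
                  [r, r**2 + (1 - r**2)]])

# Model B: Y -> X with effect r; X's error variance is 1 - r^2.
cov_b = np.array([[r**2 + (1 - r**2), r],
                  [r, 1.0]])

# Model C: a common cause Z (variance 1) with effect sqrt(r) on both X and Y.
a = np.sqrt(r)
cov_c = np.array([[a**2 + (1 - a**2), a * a],
                  [a * a, a**2 + (1 - a**2)]])

print(np.allclose(cov_a, cov_b) and np.allclose(cov_b, cov_c))  # True
```

All three models fit any data with correlation 0.6 perfectly, yet at most one can match the world's causal structure.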
This logical weakness renders all fit indices "unhelpful" whenever a structural equation model is significantly inconsistent with the data,[39] but several forces continue to propagate fit-index use. For example, Dag Sörbom reported that when someone asked Karl Jöreskog, the developer of the first structural equation modeling program, why he had added GFI to LISREL, Jöreskog replied: "Well, users threaten us saying they would stop using LISREL if it always produces such large chi-squares. So we had to invent something to make people happy. GFI serves that purpose."[41] The χ² evidence of model-data inconsistency was too statistically solid to be dislodged or discarded, but people could at least be provided a way to distract from the "disturbing" evidence. Career profits can still be accrued by developing additional indices, reporting investigations of index behavior, and publishing models that intentionally bury evidence of model-data inconsistency under an MDI (a mound of distracting indices). There seems to be no general justification for why a researcher should "accept" a causally wrong model rather than attempting to correct detected misspecifications. And some portions of the literature seem not to have noticed that "accepting a model" (on the basis of "satisfying" an index value) suffers from an intensified version of the criticism applied to "acceptance" of a null hypothesis. Introductory statistics texts usually recommend replacing the term "accept" with "failed to reject the null hypothesis" to acknowledge the possibility of Type II error. A Type III error arises from "accepting" a model hypothesis when the current data are sufficient to reject the model.
Whether or not researchers are committed to seeking the world’s structure is a fundamental concern. Displacing test evidence of model-data inconsistency by hiding it behind index claims of acceptable fit introduces the discipline-wide cost of diverting attention away from whatever the discipline might have done to attain a structurally-improved understanding of the discipline’s substance. The discipline ends up paying a real cost for index-based displacement of evidence of model misspecification. The frictions created by disagreements over the necessity of correcting model misspecifications will likely increase with increasing use of non-factor-structured models, and with use of fewer, more-precise, indicators of similar yet importantly-different latent variables.[35]
The considerations relevant to using fit indices include checking:
Some of the more commonly used fit statistics include
The following table provides references documenting these, and other, features for some common indices: the RMSEA (Root Mean Square Error of Approximation), SRMR (Standardized Root Mean Squared Residual), CFI (Comparative Fit Index), and the TLI (the Tucker-Lewis Index). Additional indices such as the AIC (Akaike Information Criterion) can be found in most SEM introductions.[30] For each measure of fit, a decision as to what represents a good-enough fit between the model and the data reflects the researcher's modeling objective (perhaps challenging someone else's model, or improving measurement); whether or not the model is to be claimed as having been "tested"; and whether the researcher is comfortable "disregarding" evidence of the index-documented degree of ill fit.[33]
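For reference, the standard formulas for some of these indices can be sketched as follows (the formulas are the conventional definitions; the numeric inputs are invented for illustration):

```python
import math

# Hedged sketch of common fit-index formulas.  chi2_b and df_b come from
# the "baseline" (independence) model; chi2_m and df_m from the tested model.

def rmsea(chi2_m, df_m, n):
    # Root Mean Square Error of Approximation.
    return math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    # Comparative Fit Index: improvement over the baseline model.
    d_m = max(chi2_m - df_m, 0)
    d_b = max(chi2_b - df_b, d_m)
    return 1 - d_m / d_b

def tli(chi2_m, df_m, chi2_b, df_b):
    # Tucker-Lewis Index (non-normed fit index).
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)

# Invented illustrative values.
chi2_m, df_m, n = 23.92, 8, 300
chi2_b, df_b = 480.0, 15
print(round(rmsea(chi2_m, df_m, n), 3),
      round(cfi(chi2_m, df_m, chi2_b, df_b), 3),
      round(tli(chi2_m, df_m, chi2_b, df_b), 3))
```

Note that all three indices are computed from the same χ² quantities whose significance they are often used to "disregard", which is the logical weakness discussed above.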
Researchers agree samples should be large enough to provide stable coefficient estimates and reasonable testing power but there is no general consensus regarding specific required sample sizes, or even how to determine appropriate sample sizes. Recommendations have been based on the number of coefficients to be estimated, the number of modeled variables, and Monte Carlo simulations addressing specific model coefficients.[30]Sample size recommendations based on the ratio of the number of indicators to latents are factor oriented and do not apply to models employing single indicators having fixed nonzero measurement error variances.[35]Overall, for moderate sized models without statistically difficult-to-estimate coefficients, the required sample sizes (N’s) seem roughly comparable to the N’s required for a regression employing all the indicators.
The larger the sample size, the greater the likelihood of including cases that are not causally homogeneous. Consequently, increasing N to improve the likelihood of being able to report a desired coefficient as statistically significant, simultaneously increases the risk of model misspecification, and the power to detect the misspecification. Researchers seeking to learn from their modeling (including potentially learning their model requires adjustment or replacement) will strive for as large a sample size as permitted by funding and by their assessment of likely population-based causal heterogeneity/homogeneity. If the available N is huge, modeling sub-sets of cases can control for variables that might otherwise disrupt causal homogeneity. Researchers fearing they might have to report their model’s deficiencies are torn between wanting a larger N to provide sufficient power to detect structural coefficients of interest, while avoiding the power capable of signaling model-data inconsistency. The huge variation in model structures and data characteristics suggests adequate sample sizes might be usefully located by considering other researchers’ experiences (both good and bad) with models of comparable size and complexity that have been estimated with similar data.
Causal interpretations of SE models are the clearest and most understandable but those interpretations will be fallacious/wrong if the model’s structure does not correspond to the world’s causal structure. Consequently, interpretation should address the overall status and structure of the model, not merely the model’s estimated coefficients. Whether a model fits the data, and/or how a model came to fit the data, are paramount for interpretation. Data fit obtained by exploring, or by following successive modification indices, does not guarantee the model is wrong but raises serious doubts because these approaches are prone to incorrectly modeling data features. For example, exploring to see how many factors are required preempts finding the data are not factor structured, especially if the factor model has been “persuaded” to fit via inclusion of measurement error covariances. Data’s ability to speak against a postulated model is progressively eroded with each unwarranted inclusion of a “modification index suggested” effect or error covariance. It becomes exceedingly difficult to recover a proper model if the initial/base model contains several misspecifications.[49]
Direct-effect estimates are interpreted in parallel to the interpretation of coefficients in regression equations but with causal commitment. Each unit increase in a causal variable’s value is viewed as producing a change of the estimated magnitude in the dependent variable’s value given control or adjustment for all the other operative/modeled causal mechanisms. Indirect effects are interpreted similarly, with the magnitude of a specific indirect effect equaling the product of the series of direct effects comprising that indirect effect. The units involved are the real scales of observed variables’ values, and the assigned scale values for latent variables. A specified/fixed 1.0 effect of a latent on a specific indicator coordinates that indicator’s scale with the latent variable’s scale. The presumption that the remainder of the model remains constant or unchanging may require discounting indirect effects that might, in the real world, be simultaneously prompted by a real unit increase. And the unit increase itself might be inconsistent with what is possible in the real world because there may be no known way to change the causal variable’s value. If a model adjusts for measurement errors, the adjustment permits interpreting latent-level effects as referring to variations in true scores.[29]
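The product rule for indirect effects can be sketched for a simple chain X → M → Y (coefficients invented for illustration); the matrix form (I - B)^-1 - I recovers total effects for arbitrary recursive structures:

```python
import numpy as np

# Sketch of indirect-effect arithmetic: the indirect effect equals the
# product of the direct effects along the path, and total effects can be
# read from (I - B)^-1 - I, where B holds the direct effects.

b_xm = 0.5          # direct effect of X on M (invented)
b_my = 0.4          # direct effect of M on Y (invented)
b_xy = 0.2          # direct effect of X on Y (invented)

indirect = b_xm * b_my              # X -> M -> Y
total_xy = b_xy + indirect          # direct plus indirect

# The same totals via the matrix form; variable order: X, M, Y.
B = np.array([[0.0,  0.0,  0.0],
              [b_xm, 0.0,  0.0],
              [b_xy, b_my, 0.0]])
total = np.linalg.inv(np.eye(3) - B) - np.eye(3)
print(round(indirect, 2), round(total[2, 0], 2))  # indirect and total effect of X on Y
```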
SEM interpretations depart most radically from regression interpretations when a network of causal coefficients connects the latent variables because regressions do not contain estimates of indirect effects. SEM interpretations should convey the consequences of the patterns of indirect effects that carry effects from background variables through intervening variables to the downstream dependent variables. SEM interpretations encourage understanding how multiple worldly causal pathways can work in coordination, or independently, or even counteract one another. Direct effects may be counteracted (or reinforced) by indirect effects, or have their correlational implications counteracted (or reinforced) by the effects of common causes.[16]The meaning and interpretation of specific estimates should be contextualized in the full model.
SE model interpretation should connect specific model causal segments to their variance and covariance implications. A single direct effect reports that the variance in the independent variable produces a specific amount of variation in the dependent variable’s values, but the causal details of precisely what makes this happen remain unspecified because a single effect coefficient does not contain sub-components available for integration into a structured story of how that effect arises. A more fine-grained SE model incorporating variables intervening between the cause and effect would be required to provide features constituting a story about how any one effect functions. Until such a model arrives, each estimated direct effect retains a tinge of the unknown, thereby invoking the essence of a theory. A parallel essential unknownness would accompany each estimated coefficient in even the more fine-grained model, so the sense of fundamental mystery is never fully eradicated from SE models.
Even if each modeled effect is unknown beyond the identity of the variables involved and the estimated magnitude of the effect, the structures linking multiple modeled effects provide opportunities to express how things function to coordinate the observed variables – thereby providing useful interpretation possibilities. For example, a common cause contributes to the covariance or correlation between two affected variables, because if the value of the cause goes up, the values of both effects should also go up (assuming positive effects) even if we do not know the full story underlying each cause.[16](A correlation is the covariance between two variables that have both been standardized to have variance 1.0). Another interpretive contribution might be made by expressing how two causal variables can both explain variance in a dependent variable, as well as how covariance between two such causes can increase or decrease explained variance in the dependent variable. That is, interpretation may involve explaining how a pattern of effects and covariances can contribute to decreasing a dependent variable’s variance.[50]Understanding causal implications implicitly connects to understanding “controlling”, and potentially explaining why some variables, but not others, should be controlled.[5][51]As models become more complex these fundamental components can combine in non-intuitive ways, such as explaining how there can be no correlation (zero covariance) between two variables despite the variables being connected by a direct non-zero causal effect.[16][17][7][32]
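The common-cause mechanism described above is easy to verify by simulation. In the sketch below (all values illustrative), a latent common cause Z with effects b1 and b2 induces a covariance of roughly b1 · b2 · Var(Z) between two variables that have no direct effect linking them.

```python
import random

random.seed(0)

# Hypothetical common-cause model: Y1 = b1*Z + e1, Y2 = b2*Z + e2.
# With Var(Z) = 1, the implied Cov(Y1, Y2) is b1 * b2 = 0.35 even
# though neither Y1 nor Y2 causes the other.
b1, b2 = 0.7, 0.5
n = 50_000
z = [random.gauss(0, 1) for _ in range(n)]
y1 = [b1 * zi + random.gauss(0, 1) for zi in z]
y2 = [b2 * zi + random.gauss(0, 1) for zi in z]

mean1 = sum(y1) / n
mean2 = sum(y2) / n
cov = sum((a - mean1) * (b - mean2) for a, b in zip(y1, y2)) / (n - 1)
print(round(cov, 2))  # close to b1 * b2 * Var(Z) = 0.35
```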
The statistical insignificance of an effect estimate indicates the estimate could rather easily arise as a random sampling variation around a null/zero effect, so interpreting the estimate as a real effect becomes equivocal. As in regression, the proportion of each dependent variable’s variance explained by variations in the modeled causes is provided byR2, though the Blocked-ErrorR2should be used if the dependent variable is involved in reciprocal or looped effects, or if it has an error variable correlated with any predictor’s error variable.[52]
The caution appearing in the Model Assessment section warrants repeating. Interpretation should be possible whether a model is or is not consistent with the data. The estimates report how the world would appear to someone believing the model – even if that belief is unfounded because the model happens to be wrong. Interpretation should acknowledge that the model coefficients may or may not correspond to “parameters” – because the model’s coefficients may not have corresponding worldly structural features.
Adding new latent variables entering or exiting the original model at a few clear causal locations/variables contributes to detecting model misspecifications which could otherwise ruin coefficient interpretations. The correlations between the new latent’s indicators and all the original indicators contribute to testing the original model’s structure because the few new and focused effect coefficients must work in coordination with the model’s original direct and indirect effects to coordinate the new indicators with the original indicators. If the original model’s structure was problematic, the sparse new causal connections will be insufficient to coordinate the new indicators with the original indicators, thereby signaling the inappropriateness of the original model’s coefficients through model-data inconsistency.[32]The correlational constraints grounded in null/zero effect coefficients, and coefficients assigned fixed nonzero values, contribute to both model testing and coefficient estimation, and hence deserve acknowledgment as the scaffolding supporting the estimates and their interpretation.[32]
Interpretations become progressively more complex for models containing interactions, nonlinearities, multiple groups, multiple levels, and categorical variables.[30]Effects touching causal loops, reciprocal effects, or correlated residuals also require slightly revised interpretations.[7][32]
Careful interpretation of both failing and fitting models can provide research advancement. To be dependable, the model should investigate academically informative causal structures, fit applicable data with understandable estimates, and not include vacuous coefficients.[53]Dependable fitting models are rarer than failing models or models inappropriately bludgeoned into fitting, but appropriately-fitting models are possible.[37][54][55][56]
The multiple ways of conceptualizing PLS models[57]complicate interpretation of PLS models. Many of the above comments are applicable if a PLS modeler adopts a realist perspective by striving to ensure their modeled indicators combine in a way that matches some existing but unavailable latent variable. Non-causal PLS models, such as those focusing primarily onR2or out-of-sample predictive power, change the interpretation criteria by diminishing concern for whether or not the model’s coefficients have worldly counterparts. The fundamental features differentiating the five PLS modeling perspectives discussed by Rigdon, Sarstedt and Ringle[57]point to differences in PLS modelers’ objectives, and corresponding differences in model features warranting interpretation.
Caution should be taken when making claims of causality even when experiments or time-ordered investigations have been undertaken. The termcausal modelmust be understood to mean "a model that conveys causal assumptions", not necessarily a model that produces validated causal conclusions—maybe it does, maybe it does not. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses, but even a randomized experiment cannot fully rule out threats to causal claims. No research design can fully guarantee causal structures.[5]
Structural equation modeling is fraught with controversies. Researchers from the factor analytic tradition commonly attempt to reduce sets of multiple indicators to fewer, more manageable, scales or factor-scores for later use in path-structured models. This constitutes a stepwise process with the initial measurement step providing scales or factor-scores which are to be used later in a path-structured model. This stepwise approach seems obvious but actually confronts severe underlying deficiencies. The segmentation into steps interferes with thorough checking of whether the scales or factor-scores validly represent the indicators, and/or validly report on latent level effects. A structural equation model simultaneously incorporating both the measurement and latent-level structures not only checks whether the latent factors appropriately coordinate the indicators, it also checks whether each latent simultaneously appropriately coordinates its indicators with the indicators of theorized causes and/or consequences of that latent.[32]If a latent is unable to do both these styles of coordination, the validity of that latent is questioned, and any scale or factor-scores purporting to measure that latent are questioned. The disagreements swirled around respect for, or disrespect of, evidence challenging the validity of postulated latent factors. The simmering, sometimes boiling, discussions resulted in a special issue of the journal Structural Equation Modeling focused on a target article by Hayduk and Glaser[21]followed by several comments and a rejoinder,[22]all made freely available, thanks to the efforts of George Marcoulides.
These discussions fueled disagreement over whether or not structural equation models should be tested for consistency with the data, and model testing became the next focus of discussions. Scholars having path-modeling histories tended to defend careful model testing while those with factor-histories tended to defend fit-indexing rather than fit-testing. These discussions led to a target article in Personality and Individual Differences by Paul Barrett[39]who said: “In fact, I would now recommend banning ALL such indices from ever appearing in any paper as indicative of model “acceptability” or “degree of misfit”.”[39](page 821). Barrett’s article was also accompanied by commentary from both perspectives.[53][58]
The controversy over model testing declined as clear reporting of significant model-data inconsistency became mandatory. Scientists do not get to ignore, or fail to report, evidence just because they do not like what the evidence reports.[33]The requirement of attending to evidence pointing toward model mis-specification underpins more recent concern for addressing “endogeneity” – a style of model mis-specification that interferes with estimation due to lack of independence of error/residual variables. In general, the controversy over the causal nature of structural equation models, including factor-models, has also been declining. Stan Mulaik, a factor-analysis stalwart, has acknowledged the causal basis of factor models.[59]The comments by Bollen and Pearl regarding myths about causality in the context of SEM[26]reinforced the centrality of causal thinking in the context of SEM.
A briefer controversy focused on competing models. Comparing competing models can be very helpful but there are fundamental issues that cannot be resolved by creating two models and retaining the better fitting model. The statistical sophistication of presentations like Levy and Hancock (2007),[60]for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the structurally terrible model because some index reports it as better fitting than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016)[30]remain disturbingly weak in their presentation of model testing.[61]Overall, the contributions that can be made by structural equation modeling depend on careful and detailed model assessment, even if a failing model happens to be the best available.
An additional controversy that touched the fringes of the previous controversies awaits ignition.[citation needed]Factor models and theory-embedded factor structures having multiple indicators tend to fail, and dropping weak indicators tends to reduce the model-data inconsistency. Reducing the number of indicators leads to concern for, and controversy over, the minimum number of indicators required to support a latent variable in a structural equation model. Researchers tied to factor tradition can be persuaded to reduce the number of indicators to three per latent variable, but three or even two indicators may still be inconsistent with a proposed underlying factor common cause. Hayduk and Littvay (2012)[35]discussed how to think about, defend, and adjust for measurement error, when using only a single indicator for each modeled latent variable. Single indicators have been used effectively in SE models for a long time,[54]but controversy remains only as far away as a reviewer who has considered measurement from only the factor analytic perspective.
Though declining, traces of these controversies are scattered throughout the SEM literature, and you can easily incite disagreement by asking: What should be done with models that are significantly inconsistent with the data? Or by asking: Does model simplicity override respect for evidence of data inconsistency? Or, what weight should be given to indexes which show close or not-so-close data fit for some models? Or, should we be especially lenient toward, and “reward”, parsimonious models that are inconsistent with the data? Or, given that the RMSEA condones disregarding some real ill fit for each model degree of freedom, doesn’t that mean that people testing models with null-hypotheses of non-zero RMSEA are doing deficient model testing? Considerable variation in statistical sophistication is required to cogently address such questions, though responses will likely center on the non-technical matter of whether or not researchers are required to report and respect evidence.
Structural equation modeling programs differ widely in their capabilities and user requirements.[69]Below is a table of available software.
|
https://en.wikipedia.org/wiki/Structural_equation_modeling
|
In statistics, theRV coefficient[1]is amultivariategeneralization of thesquaredPearson correlation coefficient(because the RV coefficient takes values between 0 and 1).[2]It measures the closeness of two sets of points that may each be represented in amatrix.
The major approaches withinstatistical multivariate data analysiscan all be brought into a common framework in which the RV coefficient is maximised subject to relevant constraints. Specifically, these statistical methodologies include:[1]
One application of the RV coefficient is in functional neuroimaging, where it can measure the similarity between two subjects' series of brain scans[3]or between different scans of the same subject.[4]
The definition of the RV-coefficient makes use of ideas[5]concerning the definition of scalar-valued quantities which are called the "variance" and "covariance" of vector-valuedrandom variables. Note that standard usage is to have matrices for the variances and covariances of vector random variables.
Given these innovative definitions, the RV-coefficient is then just the correlation coefficient defined in the usual way.
Suppose thatXandYare matrices of centered random vectors (column vectors) with covariance matrix given by
then the scalar-valued covariance (denoted by COVV) is defined by[5]
The scalar-valued variance is defined correspondingly:
With these definitions, the variance and covariance have certain additive properties in relation to the formation of new vector quantities by extending an existing vector with the elements of another.[5]
Then the RV-coefficient is defined by[5]
Even though the coefficient takes values between 0 and 1 by construction, it seldom attains values close to 1 as the denominator is often too large with respect to the maximal attainable value of the numerator.[6]
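A minimal sketch of the computation may clarify the definition. For column-centered data matrices X (n×p) and Y (n×q), the RV coefficient is Tr(Σ_XY Σ_YX) divided by the square root of Tr(Σ_XX²)·Tr(Σ_YY²); any common scaling of the sample covariances cancels in the ratio, so cross-product matrices suffice. The helper names and example data below are illustrative.

```python
# Sketch of the RV coefficient for two data matrices (lists of rows),
# using sample cross-product matrices of the column-centered data.
def _matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def _transpose(A):
    return [list(row) for row in zip(*A)]

def _trace(A):
    return sum(A[i][i] for i in range(len(A)))

def _center(A):
    means = [sum(col) / len(col) for col in zip(*A)]
    return [[x - m for x, m in zip(row, means)] for row in A]

def rv_coefficient(X, Y):
    X, Y = _center(X), _center(Y)
    Xt, Yt = _transpose(X), _transpose(Y)
    sxy = _matmul(Xt, Y)  # p x q cross-covariance (unscaled)
    syx = _matmul(Yt, X)  # q x p
    sxx = _matmul(Xt, X)
    syy = _matmul(Yt, Y)
    num = _trace(_matmul(sxy, syx))
    den = (_trace(_matmul(sxx, sxx)) * _trace(_matmul(syy, syy))) ** 0.5
    return num / den

# Comparing a configuration with itself gives RV = 1 (up to rounding):
X = [[1.0, 2.0], [2.0, 4.0], [3.0, 5.0], [4.0, 9.0]]
print(rv_coefficient(X, X))
```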
Given known diagonal blocksΣXX{\displaystyle \Sigma _{XX}}andΣYY{\displaystyle \Sigma _{YY}}of dimensionsp×p{\displaystyle p\times p}andq×q{\displaystyle q\times q}respectively, assuming thatp≤q{\displaystyle p\leq q}without loss of generality, it has been proved[7]that the maximal attainable numerator isTr(ΛXΠΛY),{\displaystyle \operatorname {Tr} (\Lambda _{X}\Pi \Lambda _{Y}),}whereΛX{\displaystyle \Lambda _{X}}(resp.ΛY{\displaystyle \Lambda _{Y}}) denotes the diagonal matrix of the eigenvalues ofΣXX{\displaystyle \Sigma _{XX}}(resp.ΣYY{\displaystyle \Sigma _{YY}}) sorted decreasingly from the upper leftmost corner to the lower rightmost corner andΠ{\displaystyle \Pi }is thep×q{\displaystyle p\times q}matrix(Ip0p×(q−p)){\displaystyle (I_{p}\ 0_{p\times (q-p)})}.
In light of this, Mordant and Segers[7]proposed an adjusted version of the RV coefficient in which the denominator is the maximal value attainable by the numerator. It reads
The impact of this adjustment is clearly visible in practice.[7]
|
https://en.wikipedia.org/wiki/RV_coefficient
|
Bivariate analysisis one of the simplest forms ofquantitative (statistical) analysis.[1]It involves the analysis of twovariables(often denoted asX,Y), for the purpose of determining the empirical relationship between them.[1]
Bivariate analysis can be helpful in testing simplehypothesesofassociation. Bivariate analysis can help determine to what extent it becomes easier to know and predict a value for one variable (possibly adependent variable) if we know the value of the other variable (possibly theindependent variable) (see alsocorrelationandsimple linear regression).[2]
Bivariate analysis can be contrasted withunivariate analysisin which only one variable is analysed.[1]Like univariate analysis, bivariate analysis can bedescriptiveorinferential. It is the analysis of the relationship between the two variables.[1]Bivariate analysis is a simple (two variable) special case ofmultivariate analysis(where multiple relations between multiple variables are examined simultaneously).[1]
Regression is a statistical technique used to help investigate how variation in one or more variables predicts or explains variation in another variable. Bivariate regression aims to identify the equation representing the optimal line that defines the relationship between two variables based on a particular data set. This equation is subsequently applied to anticipate values of the dependent variable not present in the initial dataset. Through regression analysis, one can derive the equation for the curve or straight line and obtain the correlation coefficient.
Simple linear regression is a statistical method used to model the linear relationship between an independent variable and a dependent variable. It assumes a linear relationship between the variables and is sensitive to outliers. The best-fitting linear equation is often represented as a straight line to minimize the difference between the predicted values from the equation and the actual observed values of the dependent variable.
Equation:y=mx+b{\displaystyle y=mx+b}
x{\displaystyle x}: independent variable (predictor)
y{\displaystyle y}: dependent variable (outcome)
m{\displaystyle m}: slope of the line
b{\displaystyle b}:y{\displaystyle y}-intercept
The least squares regression line is a method in simple linear regression for modeling the linear relationship between two variables, and it serves as a tool for making predictions based on new values of the independent variable. The calculation is based on the method of theleast squarescriterion. The goal is to minimize the sum of the squared vertical distances (residuals) between the observed y-values and the corresponding predicted y-values of each data point.
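The closed-form least squares solution described above can be written compactly: the slope is the (co)deviation sum S_xy divided by S_xx, and the intercept follows from the means. The function name and data below are illustrative.

```python
# Minimal least-squares fit of y = m*x + b, minimising the sum of
# squared vertical residuals via the standard closed-form formulas.
def least_squares_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    m = sxy / sxx                 # slope
    b = mean_y - m * mean_x       # intercept
    return m, b

# A perfectly linear data set recovers its slope and intercept:
m, b = least_squares_line([1, 2, 3, 4], [3, 5, 7, 9])  # data on y = 2x + 1
print(m, b)  # 2.0 1.0
```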
A bivariate correlation is a measure of whether and how two variables covary linearly, that is, whether the variance of one changes in a linear fashion as the variance of the other changes.
Covariance can be difficult to interpret across studies because it depends on the scale or level of measurement used. For this reason, covariance is standardized by dividing by the product of the standard deviations of the two variables to produce the Pearson product–moment correlation coefficient (also referred to as thePearson correlation coefficientor correlation coefficient), which is usually denoted by the letter “r.”[3]
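The standardization described above can be shown directly: dividing the covariance by the product of the standard deviations yields a scale-free coefficient between −1 and 1. The sketch uses illustrative data; the n−1 factors cancel in the ratio, so raw deviation sums suffice.

```python
# Pearson's r as covariance divided by the product of standard
# deviations (normalisation constants cancel in the ratio).
def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [3, 5, 7, 9]))  # ~ 1.0, perfect positive
print(pearson_r([1, 2, 3, 4], [9, 7, 5, 3]))  # ~ -1.0, perfect negative
```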
Pearson’s correlation coefficient is used when both variables are measured on an interval or ratio scale. Other correlation coefficients or analyses are used when variables are not interval or ratio, or when they are not normally distributed. Examples areSpearman’s correlation coefficient,Kendall’s tau,Biserial correlation, and Chi-square analysis.
Three important notes should be highlighted with regard to correlation:
If thedependent variable—the one whose value is determined to some extent by the other,independent variable— is acategorical variable, such as the preferred brand of cereal, thenprobitorlogitregression (ormultinomial probitormultinomial logit) can be used. If both variables areordinal, meaning they are ranked in a sequence as first, second, etc., then arank correlationcoefficient can be computed. If just the dependent variable is ordinal,ordered probitorordered logitcan be used. If the dependent variable is continuous—either interval level or ratio level, such as a temperature scale or an income scale—thensimple regressioncan be used.
If both variables aretime series, a particular type of causality known asGranger causalitycan be tested for, andvector autoregressioncan be performed to examine the intertemporal linkages between the variables.
When neither variable can be regarded as dependent on the other, regression is not appropriate but some form ofcorrelationanalysis may be.[4]
Graphsthat are appropriate for bivariate analysis depend on the type of variable. For two continuous variables, ascatterplotis a common graph. When one variable is categorical and the other continuous, abox plotis common and when both are categorical amosaic plotis common. These graphs are part ofdescriptive statistics.
|
https://en.wikipedia.org/wiki/Bivariate_analysis
|
Thedesign of experiments(DOE),[1]also known asexperiment designorexperimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated withexperimentsin which the design introduces conditions that directly affect the variation, but may also refer to the design ofquasi-experiments, in whichnaturalconditions that influence the variation are selected for observation.
In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or moreindependent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or moredependent variables, also referred to as "output variables" or "response variables." The experimental design may also identifycontrol variablesthat must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment.
Main concerns in experimental design include the establishment ofvalidity,reliability, andreplicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels ofstatistical powerandsensitivity.
Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of aQuality by Design(QbD) framework.[2]Other applications include marketing and policy making. The study of the design of experiments is an important topic inmetascience.
A theory ofstatistical inferencewas developed byCharles S. Peircein "Illustrations of the Logic of Science" (1877–1878)[3]and "A Theory of Probable Inference" (1883),[4]two publications that emphasized the importance of randomization-based inference in statistics.[5]
Charles S. Peirce randomly assigned volunteers to ablinded,repeated-measures designto evaluate their ability to discriminate weights.[6][7][8][9]Peirce's experiment inspired other researchers in psychology and education, which developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s.[6][7][8][9]
Charles S. Peircealso contributed the first English-language publication on anoptimal designforregressionmodelsin 1876.[10]A pioneeringoptimal designforpolynomial regressionwas suggested byGergonnein 1815. In 1918,Kirstine Smithpublished optimal designs for polynomials of degree six (and less).[11][12]
The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope ofsequential analysis, a field that was pioneered[13]byAbraham Waldin the context of sequential tests of statistical hypotheses.[14]Herman Chernoffwrote an overview of optimal sequential designs,[15]whileadaptive designshave been surveyed by S. Zacks.[16]One specific type of sequential design is the "two-armed bandit", generalized to themulti-armed bandit, on which early work was done byHerbert Robbinsin 1952.[17]
A methodology for designing experiments was proposed byRonald Fisher, in his innovative books:The Arrangement of Field Experiments(1926) andThe Design of Experiments(1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test thelady tasting teahypothesis, that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research.[18]
This example of design experiments is attributed toHarold Hotelling, building on examples fromFrank Yates.[22][23][15]The experiments designed in this example involvecombinatorial designs.[24]
Weights of eight objects are measured using apan balanceand set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has arandom error. The average error is zero; thestandard deviationof theprobability distributionof the errors is the same number σ on different weighings; errors on different weighings areindependent. Denote the true weights by
We consider two different experiments:
The question of design of experiments is: which experiment is better?
The variance of the estimateX1ofθ1isσ2if we use the first experiment. But if we use the second experiment, the variance of the estimate given above isσ2/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously, with the same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items are weighed separately. However, note that the estimates for the items obtained in the second experiment have errors that correlate with each other.
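The σ²/8 figure can be checked algebraically. If the second experiment's weighings are encoded in a ±1 design matrix H of Hadamard type (each row one weighing, +1 for the left pan and −1 for the right), then H·Hᵀ = 8·I, the least squares estimate is θ̂ = Hᵀy/8, and each estimated weight has variance (σ²/64)·8 = σ²/8. The construction below is a sketch under that assumption, not the exact pan assignment elided above.

```python
# Sketch: build a Hadamard matrix of order 8 by Sylvester's recursive
# construction and verify the orthogonality H Hᵀ = 8 I that makes
# each weight estimate have variance sigma^2 / 8.
def sylvester_hadamard(k):
    """Hadamard matrix of order 2**k (entries +1/-1)."""
    H = [[1]]
    for _ in range(k):
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

H = sylvester_hadamard(3)  # order 8: one row per weighing

# Orthogonality check: the rows are mutually orthogonal with squared
# length 8, so (H^T H)^{-1} = I/8 and Var(theta_hat) = sigma^2/8 per item.
for i in range(8):
    for j in range(8):
        dot = sum(H[i][c] * H[j][c] for c in range(8))
        assert dot == (8 if i == j else 0)
print("H Ht = 8 I verified for order 8")
```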
Many problems of the design of experiments involvecombinatorial designs, as in this example and others.[24]
False positiveconclusions, often resulting from thepressure to publishor the author's ownconfirmation bias, are an inherent hazard in many fields.[25]
Use ofdouble-blind designscan preventbiasespotentially leading tofalse positivesin thedata collectionphase. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of what participants belong to which group. Therefore, the researcher can not affect the participants' response to the intervention.[26]
Experimental designs with undiscloseddegrees of freedomare a problem,[27]in that they can lead to conscious or unconscious "p-hacking": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconsciously – of the process ofstatistical analysisand thedegrees of freedomuntil they return a figure below thep<.05 levelofstatistical significance.[28][29]
P-hacking can be prevented bypreregisteringstudies, in which researchers have to send their data analysis plan to the journal they wish to publish their paper in before they even start their data collection, so no data manipulation is possible.[30][31]
Another way to prevent this is taking a double-blind design to the data-analysis phase, making the study triple-blind, where the data are sent to a data-analyst unrelated to the research who scrambles up the data so there is no way to know which group participants belong to before outliers are potentially removed.[26]
Clear and completedocumentationof the experimentalmethodologyis also important in order to supportreplication of results.[32]
An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment.[33]An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section:
The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, which is where their intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group, without the interventional element. Thus, when everything else except for one intervention is held constant, researchers can certify with some certainty that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used.
In the pure experimental design, the independent (predictor) variable is manipulated by the researcher – that is – every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to certify with high probability that the reason for the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must be careful not to make causal claims when their design does not allow it. For example, in observational designs, participants are not assigned randomly to conditions, and so if there are differences found in outcome variables between conditions, it is likely that there is something other than the differences between the conditions that causes the differences in outcomes, that is – a third variable. The same goes for studies with correlational design.
It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments.[34]To control for nuisance variables, researchers institutecontrol checksas additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. Amanipulation checkis one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned.
One of the most important requirements of experimental research designs is the necessity of eliminating the effects ofspurious, intervening, andantecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true forintervening variables(a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be azero orderrelationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time.
Some efficient designs for estimating several main effects were found independently and in near succession byRaj Chandra BoseandK. Kishenin 1940 at theIndian Statistical Institute, but remained little known until thePlackett–Burman designswere published inBiometrikain 1946. About the same time,C. R. Raointroduced the concepts oforthogonal arraysas experimental designs. This concept played a central role in the development ofTaguchi methodsbyGenichi Taguchi, which took place during his visit to Indian Statistical Institute in early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry albeit with some reservations.
In 1950,Gertrude Mary CoxandWilliam Gemmell Cochranpublished the bookExperimental Designs,which became the major reference work on the design of experiments for statisticians for years afterwards.
Developments of the theory oflinear modelshave encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics inlinear algebra,algebraandcombinatorics.
As with other branches of statistics, experimental design is pursued using bothfrequentistandBayesianapproaches: In evaluating statistical procedures like experimental designs,frequentist statisticsstudies thesampling distributionwhileBayesian statisticsupdates aprobability distributionon the parameter space.
Some important contributors to the field of experimental designs areC. S. Peirce,R. A. Fisher,F. Yates,R. C. Bose,A. C. Atkinson,R. A. Bailey,D. R. Cox,G. E. P. Box,W. G. Cochran,W. T. Federer,V. V. Fedorov,A. S. Hedayat,J. Kiefer,O. Kempthorne,J. A. Nelder,Andrej Pázman,Friedrich Pukelsheim,D. Raghavarao,C. R. Rao,Shrikhande S. S.,J. N. Srivastava,William J. Studden,G. TaguchiandH. P. Wynn.[35]
The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J. S. Hunter have reached generations of students and practitioners.[36][37][38][39][40] Furthermore, there is ongoing discussion of experimental design in the context of model building for either static or dynamic models, also known as system identification.[41][42]
Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints are dependent on jurisdiction. Constraints may involve institutional review boards, informed consent and confidentiality, affecting both clinical (medical) trials and behavioral and social science experiments.[43] In the field of toxicology, for example, experimentation is performed on laboratory animals with the goal of defining safe exposure limits for humans.[44] Balancing the constraints are views from the medical field.[45] Regarding the randomization of patients, "... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another." (p 380) Regarding experimental design, "...it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided...". (p 393)
|
https://en.wikipedia.org/wiki/Design_of_experiments
|
Instatistics,exploratory data analysis(EDA) is an approach ofanalyzingdata setsto summarize their main characteristics, often usingstatistical graphicsand otherdata visualizationmethods. Astatistical modelcan be used or not, but primarily EDA is for seeing what the data can tell beyond the formal modeling and thereby contrasts with traditional hypothesis testing, in which a model is supposed to be selected before the data is seen. Exploratory data analysis has been promoted byJohn Tukeysince 1970 to encourage statisticians to explore the data, and possibly formulate hypotheses that could lead to new data collection and experiments. EDA is different frominitial data analysis (IDA),[1][2]which focuses more narrowly on checking assumptions required for model fitting and hypothesis testing, and handling missing values and making transformations of variables as needed. EDA encompasses IDA.
Tukey defined data analysis in 1961 as: "Procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data."[3]
Exploratory data analysis is a technique to analyze and investigate a dataset and summarize its main characteristics. A main advantage of EDA is that it provides visualizations of the data as the analysis is conducted.
Tukey's championing of EDA encouraged the development of statistical computing packages, especially S at Bell Labs.[4] The S programming language inspired the systems S-PLUS and R. This family of statistical-computing environments featured vastly improved dynamic visualization capabilities, which allowed statisticians to identify outliers, trends and patterns in data that merited further study.
Tukey's EDA was related to two other developments in statistical theory: robust statistics and nonparametric statistics, both of which tried to reduce the sensitivity of statistical inferences to errors in formulating statistical models. Tukey promoted the use of the five number summary of numerical data—the two extremes (maximum and minimum), the median, and the quartiles—because the median and quartiles, being functions of the empirical distribution, are defined for all distributions, unlike the mean and standard deviation. Moreover, the quartiles and median are more robust to skewed or heavy-tailed distributions than traditional summaries (the mean and standard deviation). The packages S, S-PLUS, and R included routines using resampling statistics, such as Quenouille and Tukey's jackknife and Efron's bootstrap, which are nonparametric and robust (for many problems).
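Tukey's five-number summary can be computed with the Python standard library; the data values below are arbitrary illustrations, not from the source.

```python
import statistics

def five_number_summary(data):
    """Return Tukey-style (min, Q1, median, Q3, max) for a sample."""
    q1, med, q3 = statistics.quantiles(data, n=4)  # sample quartiles
    return (min(data), q1, med, q3, max(data))

print(five_number_summary([1, 2, 3, 4, 5, 6, 7, 8, 9]))
# -> (1, 2.5, 5.0, 7.5, 9)
```

Note that `statistics.quantiles` uses the "exclusive" method by default, so the quartiles may differ slightly from other conventions.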
Exploratory data analysis, robust statistics, nonparametric statistics, and the development of statistical programming languages facilitated statisticians' work on scientific and engineering problems. Such problems included the fabrication of semiconductors and the understanding of communications networks, both of which were of interest to Bell Labs. These statistical developments, all championed by Tukey, were designed to complement theanalytictheory oftesting statistical hypotheses, particularly theLaplaciantradition's emphasis onexponential families.[5]
John W. Tukeywrote the bookExploratory Data Analysisin 1977.[6]Tukey held that too much emphasis in statistics was placed onstatistical hypothesis testing(confirmatory data analysis); more emphasis needed to be placed on usingdatato suggest hypotheses to test. In particular, he held that confusing the two types of analyses and employing them on the same set of data can lead tosystematic biasowing to the issues inherent intesting hypotheses suggested by the data.
The objectives of EDA are to:
Many EDA techniques have been adopted intodata mining. They are also being taught to young students as a way to introduce them to statistical thinking.[8]
There are a number of tools that are useful for EDA, but EDA is characterized more by the attitude taken than by particular techniques.[9]
Typicalgraphical techniquesused in EDA are:
Dimensionality reduction:
Typicalquantitativetechniques are:
Many EDA ideas can be traced back to earlier authors, for example:
TheOpen UniversitycourseStatistics in Society(MDST 242), took the above ideas and merged them withGottfried Noether's work, which introducedstatistical inferencevia coin-tossing and themedian test.
Findings from EDA are orthogonal to the primary analysis task. To illustrate, consider an example from Cook et al. where the analysis task is to find the variables which best predict the tip that a dining party will give to the waiter.[12] The variables available in the data collected for this task are: the tip amount, total bill, payer gender, smoking/non-smoking section, time of day, day of the week, and size of the party. The primary analysis task is approached by fitting a regression model where the tip rate is the response variable. The fitted model takes the form

tip rate = β₀ − 0.01 × (party size),

which says that as the size of the dining party increases by one person (leading to a higher bill), the tip rate will decrease by 1%, on average.
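The original tips dataset is not reproduced in this copy, so the sketch below fits the same kind of single-predictor regression to made-up points constructed to lie on a line with slope −0.01; only the slope interpretation comes from the text, and the intercept 0.20 is invented for illustration.

```python
# Hypothetical (party size, tip rate) pairs lying exactly on a line with
# slope -0.01 (illustrative values only).
sizes = [1, 2, 3, 4, 5, 6]
tip_rates = [0.20 - 0.01 * s for s in sizes]

# Ordinary least squares slope for a single predictor:
#   slope = cov(x, y) / var(x)
n = len(sizes)
mx = sum(sizes) / n
my = sum(tip_rates) / n
slope = sum((x - mx) * (y - my) for x, y in zip(sizes, tip_rates)) \
        / sum((x - mx) ** 2 for x in sizes)
print(round(slope, 6))   # -0.01: one more diner lowers the tip rate by 1%
```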
However, exploring the data reveals other interesting features not described by this model.
What is learned from the plots is different from what is illustrated by the regression model, even though the experiment was not designed to investigate any of these other trends. The patterns found by exploring the data suggest hypotheses about tipping that may not have been anticipated in advance, and which could lead to interesting follow-up experiments where the hypotheses are formally stated and tested by collecting new data.
|
https://en.wikipedia.org/wiki/Exploratory_data_analysis
|
Soft independent modelling by class analogy (SIMCA) is a statistical method for supervised classification of data. The method requires a training data set consisting of samples (or objects) with a set of attributes and their class membership. The term soft refers to the fact that the classifier can identify samples as belonging to multiple classes, rather than necessarily producing a classification of samples into non-overlapping classes.
In order to build the classification models, the samples belonging to each class need to be analysed usingprincipal component analysis(PCA); only the significant components are retained.
For a given class, the resulting model then describes either a line (for one Principal Component or PC), plane (for two PCs) orhyper-plane(for more than two PCs). For each modelled class, the meanorthogonal distanceof training data samples from the line, plane, or hyper-plane (calculated as the residual standard deviation) is used to determine a critical distance for classification. This critical distance is based on theF-distributionand is usually calculated using 95% or 99% confidence intervals.
New observations are projected into each PC model and the residual distances calculated. An observation is assigned to the model class when its residual distance from the model is below the statistical limit for the class. The observation may be found to belong to multiple classes and a measure ofgoodness of the modelcan be found from the number of cases where the observations are classified into multiple classes. The classification efficiency is usually indicated byReceiver operating characteristics.
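As a rough illustration of the classification rule described above (not the original SIMCA implementation), the sketch below builds a one-PC model of a single class with NumPy and flags new observations by their orthogonal residual distance; the training data and the simple 3× cutoff are invented for the example, whereas real SIMCA uses an F-distribution-based critical distance.

```python
import numpy as np

# Invented training data for one class: points near the line t*(1,1,1) in 3-D.
rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, size=50)
X = np.outer(t, [1.0, 1.0, 1.0]) + rng.normal(scale=0.01, size=(50, 3))

# PCA via SVD on the mean-centred data; retain one principal component.
mean = X.mean(axis=0)
_, _, vt = np.linalg.svd(X - mean, full_matrices=False)
pc = vt[:1]                      # (1, 3) loading vector

def residual_distance(x):
    """Orthogonal distance of x from the one-PC class model."""
    centred = x - mean
    projected = centred @ pc.T @ pc   # part explained by the model
    return np.linalg.norm(centred - projected)

# Residual standard deviation of the training set sets the critical distance.
rsd = np.sqrt(np.mean([residual_distance(x) ** 2 for x in X]))
critical = 3 * rsd               # crude cutoff; real SIMCA uses an F-based limit

in_class = residual_distance(np.array([0.5, 0.5, 0.5])) < critical
out_of_class = residual_distance(np.array([1.0, -1.0, 0.0])) < critical
print(in_class, out_of_class)    # point on the line passes, off-line point fails
```

With several classes, one such model is built per class and an observation may pass the test for none, one, or several of them, which is the "soft" behaviour described above.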
In the original SIMCA method, the ends of the hyper-plane of each class are closed off by setting statistical control limits along the retained principal components axes (i.e., score value between plus and minus 0.5 times score standard deviation).
More recent adaptations of the SIMCA method close off the hyper-plane by construction of ellipsoids (e.g.Hotelling's T2orMahalanobis distance). With such modified SIMCA methods, classification of an object requires both that its orthogonal distance from the model and its projection within the model (i.e. score value within the region defined by the ellipsoid) are not significant.
SIMCA as a method of classification has gained widespread use especially in applied statistical fields such aschemometricsand spectroscopic data analysis.
|
https://en.wikipedia.org/wiki/Soft_independent_modelling_of_class_analogies
|
Univariateis a term commonly used in statistics to describe a type of data which consists of observations on only a single characteristic or attribute. A simple example of univariate data would be the salaries of workers in industry.[1]Like all the other data, univariate data can be visualized using graphs, images or other analysis tools after the data is measured, collected, reported, and analyzed.[2]
Some univariate data consists of numbers (such as the height of 65 inches or the weight of 100 pounds), while others are nonnumerical (such as eye colors of brown or blue). Generally, the termscategoricalunivariate data andnumericalunivariate data are used to distinguish between these types.
Categorical univariate data consists of non-numericalobservationsthat may be placed in categories. It includes labels or names used to identify an attribute of each element. Categorical univariate data usually use eithernominalorordinalscale of measurement.[3]
Numerical univariate data consists of observations that are numbers. They are obtained using either an interval or a ratio scale of measurement. This type of univariate data can be classified even further into two subcategories: discrete and continuous.[2] Numerical univariate data are discrete if the set of all possible values is finite or countably infinite. Discrete univariate data are usually associated with counting (such as the number of books read by a person). Numerical univariate data are continuous if the set of all possible values is an interval of numbers. Continuous univariate data are usually associated with measuring (such as the weights of people).
Univariate analysis is the simplest form of analyzing data. Uni means "one", so the data has only one variable (univariate).[4] Univariate analysis requires each variable to be analyzed separately. Data is gathered for the purpose of answering a question, or more specifically, a research question. Univariate data does not answer research questions about relationships between variables; rather, it is used to describe one characteristic or attribute that varies from observation to observation.[5] Usually there are two purposes a researcher can pursue: the first is to answer a research question with a descriptive study, and the second is to learn how an attribute varies with the individual effect of a variable in regression analysis. There are some ways to describe patterns found in univariate data, which include graphical methods, measures of central tendency and measures of variability.[6]
Like other forms of statistics, it can beinferentialordescriptive. The key fact is that only one variable is involved.
Univariate analysis can yield misleading results in cases in whichmultivariate analysisis more appropriate.
Central tendency is one of the most common numerical descriptive measures. It is used to estimate the central location of the univariate data by the calculation ofmean,medianandmode.[7]Each of these calculations has its own advantages and limitations. The mean has the advantage that its calculation includes each value of the data set, but it is particularly susceptible to the influence ofoutliers. The median is a better measure when the data set contains outliers. The mode is simple to locate.
One is not restricted to using only one of these measures of central tendency. If the data being analyzed is categorical, then the only measure of central tendency that can be used is the mode. However, if the data is numerical in nature (ordinalorinterval/ratio) then the mode, median, or mean can all be used to describe the data. Using more than one of these measures provides a more accurate descriptive summary of central tendency for the univariate.[8]
A measure of variability or dispersion (deviation from the mean) of a univariate data set can reveal the shape of a univariate data distribution more fully. It will provide some information about the variation among data values. The measures of variability together with the measures of central tendency give a better picture of the data than the measures of central tendency alone.[9] The three most frequently used measures of variability are range, variance and standard deviation.[10] The appropriateness of each measure depends on the type of data, the shape of the distribution of the data, and which measure of central tendency is being used. If the data is categorical, then there is no measure of variability to report. For data that is numerical, all three measures are possible. If the distribution of the data is symmetrical, then the measures of variability are usually the variance and standard deviation. However, if the data are skewed, then the measure of variability appropriate for that data set is the range.[3]
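The central-tendency and variability measures discussed above are all available in Python's `statistics` module; the sample values are arbitrary.

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

print(statistics.mean(data))    # 5    (uses every value; outlier-sensitive)
print(statistics.median(data))  # 4.5  (robust to outliers)
print(statistics.mode(data))    # 4    (most frequent value)
print(max(data) - min(data))    # 7    (range)
print(statistics.pstdev(data))  # 2.0  (population standard deviation)
```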
Descriptive statistics describe a sample or population. They can be part ofexploratory data analysis.[11]
The appropriate statistic depends on thelevel of measurement. For nominal variables, afrequency tableand a listing of themode(s)is sufficient. For ordinal variables themediancan be calculated as a measure ofcentral tendencyand therange(and variations of it) as a measure of dispersion. For interval level variables, thearithmetic mean(average) andstandard deviationare added to the toolbox and, for ratio level variables, we add thegeometric meanandharmonic meanas measures of central tendency and thecoefficient of variationas a measure of dispersion.
For interval and ratio level data, further descriptors include the variable's skewness andkurtosis.
Inferential methods allow us to infer from a sample to a population.[11]For a nominal variable a one-way chi-square (goodness of fit) test can help determine if our sample matches that of some population.[12]For interval and ratio level data, aone-sample t-testcan let us infer whether the mean in our sample matches some proposed number (typically 0). Other available tests of location include the one-samplesign testandWilcoxon signed rank test.
The most frequently used graphical illustrations for univariate data are:
Frequency is how many times a number occurs. The frequency of an observation in statistics tells us the number of times the observation occurs in the data. For example, in the following list of numbers {1, 2, 3, 4, 6, 9, 9, 8, 5, 1, 1, 9, 9, 0, 6, 9}, the frequency of the number 9 is 5 (because it occurs 5 times in this data set).
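The frequency count in the example above can be reproduced with `collections.Counter`:

```python
from collections import Counter

values = [1, 2, 3, 4, 6, 9, 9, 8, 5, 1, 1, 9, 9, 0, 6, 9]
freq = Counter(values)
print(freq[9])              # 5 occurrences of 9, matching the text
print(freq.most_common(2))  # [(9, 5), (1, 3)]
```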
Bar chart is a graph consisting of rectangular bars. These bars represent the number or percentage of observations in the existing categories of a variable. The length or height of the bars gives a visual representation of the proportional differences among categories.
Histogramsare used to estimate distribution of the data, with the frequency of values assigned to a value range called abin.[13]
Pie chart is a circle divided into portions that represent the relative frequencies or percentages of a population or a sample belonging to different categories.
Univariate distributionis a dispersal type of a single random variable described either with aprobability mass function(pmf) fordiscrete probability distribution, orprobability density function(pdf) forcontinuous probability distribution.[14]It is not to be confused withmultivariate distribution.
|
https://en.wikipedia.org/wiki/Univariate_analysis
|
Aninterference fit, also known as apressed fitorfriction fit, is a form of fastening between twotightfittingmating parts that produces a joint which is held together byfrictionafter the parts are pushed together.[1]
Depending on the amount of interference, parts may be joined using a tap from a hammer or forced together using a hydraulic press. Critical components that must not sustain damage during joining may also be cooled significantly below room temperature to shrink one of the components before fitting. This method allows the components to be joined without force and produces ashrink fitinterference when the component returns to normal temperature. Interference fits are commonly used with aircraft fasteners to improve thefatiguelife of a joint.
These fits, though applicable to shaft and hole assembly, are more often used for bearing-housing or bearing-shaft assembly. This is referred to as a 'press-in' mounting.
The tightness of fit is controlled by amount of interference; theallowance(planned difference from nominal size). Formulas exist[2]to compute allowance that will result in various strengths of fit such as loose fit, light interference fit, and interference fit. The value of the allowance depends on which material is being used, how big the parts are, and what degree of tightness is desired. Such values have already been worked out in the past for many standard applications, and they are available to engineers in the form oftables, obviating the need for re-derivation.
As an example, a 10 mm (0.394 in) shaft made of 303 stainless steel will form a tight fit with an allowance of 3–10 μm (0.00012–0.00039 in). A slip fit can be formed when the bore diameter is 12–20 μm (0.00047–0.00079 in) wider than the rod; or, if the rod is made 12–20 μm under the given bore diameter.[citation needed]
The allowance per inch of diameter usually ranges from 0.001 to 0.0025 inches (0.0254 to 0.0635 mm) (0.1–0.25%), with 0.0015 inches (0.0381 mm) (0.15%) being a fair average. Ordinarily the allowance per inch decreases as the diameter increases; thus the total allowance for a diameter of 2 inches (50.8 mm) might be 0.004 inches (0.1016 mm) (0.2%), whereas for a diameter of 8 inches (203.2 mm) the total allowance might not be over 0.009 or 0.010 inches (0.2286 or 0.2540 mm) (i.e., 0.11–0.12%). The parts to be assembled by forced fits are usually made cylindrical, although sometimes they are slightly tapered. Advantages of the taper form are: the possibility of abrasion of the fitted surfaces is reduced; less pressure is required in assembling; and parts are more readily separated when renewal is required. On the other hand, the taper fit is less reliable, because if it loosens, the entire fit is free with but little axial movement. Some lubricant, such as white lead and lard oil mixed to the consistency of paint, should be applied to the pin and bore before assembling, to reduce the tendency toward abrasion.[citation needed][3]
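The per-inch figures above can be turned into a quick calculation; the decreasing-rate behaviour described in the text is empirical, so this constant-rate helper is only a first approximation.

```python
def total_allowance(diameter_in, per_inch=0.0015):
    """Interference allowance at a constant rate per inch of diameter.

    0.0015 in/in is the 'fair average' quoted in the text; real practice
    uses a rate that decreases as diameter grows, so large diameters
    get less than this linear estimate suggests.
    """
    return diameter_in * per_inch

print(total_allowance(2.0))   # 0.003 in (text: ~0.004 in, since small
                              # diameters tend toward a higher rate)
```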
There are two basic methods for assembling an oversize shaft into an undersized hole, sometimes used in combination: force and thermal expansion or contraction.
There are at least three different terms used to describe an interference fit created via force: press fit, friction fit, and hydraulic dilation.[4][5]
Press fit is achieved with presses that can press the parts together with very large amounts of force. The presses are generallyhydraulic, although small hand-operated presses (such asarbor presses) may operate by means of the mechanical advantage supplied by ajackscrewor by a gear reduction driving arack and pinion. The amount of force applied in hydraulic presses may be anything from a few pounds for the tiniest parts to hundreds of tons for the largest parts.
The edges of shafts and holes arechamfered(beveled). The chamfer forms a guide for the pressing movement, helping to distribute the force evenly around the circumference of the hole, to allow the compression to occur gradually instead of all at once, thus helping the pressing operation to be smoother, to be more easily controlled, and to require less power (less force at any one instant of time), and to assist in aligning the shaft parallel with the hole it is being pressed into. In the case oftrain wheelsetsthewheelsare pressed onto theaxlesby force.
Most materials expand whenheatedand shrink when cooled. Enveloping parts are heated (e.g., with torches or gas ovens) and assembled into position while hot, then allowed to cool and contract back to their former size, except for the compression that results from each interfering with the other. This is also referred to asshrink-fitting. Railroad axles, wheels, andtiresare typically assembled in this way. Alternatively, the enveloped part may be cooled before assembly such that it slides easily into its mating part. Upon warming, it expands and interferes. Cooling is often preferable as it is less likely than heating to change material properties, e.g., assembling a hardened gear onto a shaft, where the risk exists of heating the gear too much and drawing itstemper.
|
https://en.wikipedia.org/wiki/Interference_fit
|
Instatistics,interval estimationis the use ofsample datatoestimateanintervalof possible values of aparameterof interest. This is in contrast topoint estimation, which gives a single value.[1]
The most prevalent forms of interval estimation areconfidence intervals(afrequentistmethod) andcredible intervals(aBayesian method).[2]Less common forms includelikelihood intervals,fiducial intervals,tolerance intervals,andprediction intervals. For a non-statistical method, interval estimates can be deduced fromfuzzy logic.
Confidence intervals are used to estimate the parameter of interest from a sampled data set, commonly the mean or standard deviation. A confidence interval states that there is 100γ% confidence that the parameter of interest is within a lower and an upper bound. A common misconception is that 100γ% of the data set fits within or above/below the bounds; an interval with that property is a tolerance interval, which is discussed below.
There are multiple methods used to build a confidence interval, and the correct choice depends on the data being analyzed. For a normal distribution with a known variance, one uses the z-table to create an interval centered on the sample mean of a data set of n measurements, x̄ ± z_{α/2} σ/√n, which attains a confidence level of 100γ%. For a binomial distribution, confidence intervals can be approximated using the Wald approximate method, the Jeffreys interval, and the Clopper–Pearson interval. The Jeffreys method can also be used to approximate intervals for a Poisson distribution.[3] If the underlying distribution is unknown, one can utilize bootstrapping to create bounds about the median of the data set.
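For the known-variance normal case just described, the interval is x̄ ± z_{α/2}·σ/√n; a stdlib sketch with made-up numbers:

```python
from math import sqrt
from statistics import NormalDist

def z_confidence_interval(sample_mean, sigma, n, gamma=0.95):
    """Two-sided CI for a normal mean with known standard deviation sigma."""
    z = NormalDist().inv_cdf((1 + gamma) / 2)   # upper-tail critical value
    half_width = z * sigma / sqrt(n)
    return sample_mean - half_width, sample_mean + half_width

# Illustrative numbers: mean 100, sigma 15, 36 observations, 95% confidence.
lo, hi = z_confidence_interval(sample_mean=100.0, sigma=15.0, n=36)
print(round(lo, 2), round(hi, 2))   # 95.1 104.9
```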
As opposed to a confidence interval, a credible interval requires apriorassumption, modifying the assumption utilizing aBayes factor, and determining aposterior distribution. Utilizing the posterior distribution, one can determine a 100γ%probabilitythe parameter of interest is included, as opposed to the confidence interval where one can be 100γ%confidentthat an estimate is included within an interval.[4]
While a prior assumption is helpful towards providing more data towards building an interval, it removes the objectivity of a confidence interval. A prior will be used to inform a posterior, if unchallenged this prior can lead to incorrect predictions.[5]
The credible interval's bounds are variable, unlike the confidence interval. There are multiple methods to determine where the correct upper and lower limits should be located. Common techniques to adjust the bounds of the interval includehighest posterior density interval(HPDI), equal-tailed interval, or choosing the center the interval around the mean.
Likelihood-based methods use the likelihood function to estimate the parameter of interest. With this approach, confidence intervals can be found for exponential, Weibull, and lognormal means. Likelihood-based approaches can also give confidence intervals for the standard deviation. It is also possible to create a prediction interval by combining the likelihood function and the future random variable.[3]
Fiducial inference utilizes a data set, carefully removes the noise, and recovers a distribution estimator, the Generalized Fiducial Distribution (GFD). Without the use of Bayes' theorem, there is no assumption of a prior, much as with confidence intervals.
Fiducial inference is a less common form of statistical inference. Its founder, R. A. Fisher, who had been developing inverse probability methods, had his own questions about the validity of the process. Although fiducial inference was developed in the early twentieth century, by the late twentieth century it was widely regarded as inferior to the frequentist and Bayesian approaches, while holding an important place in the historical context of statistical inference. However, modern approaches have generalized the fiducial interval into Generalized Fiducial Inference (GFI), which can be used to estimate discrete and continuous data sets.[6]
Tolerance intervals use a collected data set to obtain an interval, within tolerance limits, containing 100γ% of the population's values. Examples typically used to describe tolerance intervals include manufacturing. In this context, a percentage of an existing product set is evaluated to ensure that a percentage of the population is included within tolerance limits. When creating tolerance intervals, the bounds can be written in terms of an upper and a lower tolerance limit, utilizing the sample mean, μ, and the sample standard deviation, s:

μ ± k₂s for two-sided intervals,

and in the case of one-sided intervals, where the tolerance is required only above or below a critical value,

μ + k₁s or μ − k₁s.

kᵢ varies by distribution and the number of sides, i, in the interval estimate. In a normal distribution, k₂ can be approximated (e.g., by Howe's method) as[7]

k₂ ≈ √( ν(1 + 1/n) z²_{α/2} / χ²_{1−γ,ν} ),

where z_{α/2} is the critical value obtained from the normal distribution, χ²_{1−γ,ν} is the lower (1 − γ) quantile of the chi-squared distribution, and ν = n − 1 is the degrees of freedom.
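A common closed form for k₂ in the normal case is Howe's approximation; the sketch below uses it (with a Wilson–Hilferty chi-squared quantile so only the stdlib is needed) and should be checked against published tolerance-factor tables before serious use.

```python
from math import sqrt
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-squared quantile."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * sqrt(2 / (9 * df))) ** 3

def k2(n, coverage=0.95, confidence=0.95):
    """Howe's two-sided normal tolerance factor (approximate)."""
    nu = n - 1
    z = NormalDist().inv_cdf((1 + coverage) / 2)
    chi2 = chi2_quantile(1 - confidence, nu)   # lower chi-squared quantile
    return sqrt(nu * (1 + 1 / n) * z * z / chi2)

print(round(k2(20), 3))   # ~2.75, close to the tabulated value 2.752
```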
A prediction interval estimates the interval containing future samples with some confidence, γ. Prediction intervals can be used for bothBayesianandfrequentistcontexts. These intervals are typically used in regression data sets, but prediction intervals are not used for extrapolation beyond the previous data's experimentally controlled parameters.[8]
Fuzzy logic is used to handle decision-making in a non-binary fashion for artificial intelligence, medical decisions, and other fields. In general, it takes inputs, maps them throughfuzzy inference systems, and produces an output decision. This process involves fuzzification, fuzzy logic rule evaluation, and defuzzification. When looking at fuzzy logic rule evaluation,membership functionsconvert our non-binary input information into tangible variables. These membership functions are essential to predict the uncertainty of the system.
Two-sided intervals estimate a parameter of interest, Θ, with a level of confidence, γ, using a lower bound (l_b) and an upper bound (u_b). Examples may include estimating the average height of males in a geographic region or the lengths of a particular desk made by a manufacturer. These cases tend to estimate the central value of a parameter. Typically, this is presented in a form similar to

P(l_b ≤ Θ ≤ u_b) = γ.
Differentiating from the two-sided interval, the one-sided interval utilizes a level of confidence, γ, to construct a minimum or maximum bound which predicts the parameter of interest to γ·100% probability. Typically, a one-sided interval is required when one of the estimate's bounds (minimum or maximum) is not of interest. When concerned about the minimum predicted value of Θ, one is no longer required to find an upper bound of the estimate, leading to a reduced form of the two-sided interval.
As a result of removing the upper bound while maintaining the confidence, the lower bound (l_b) will increase. Likewise, when concerned with finding only an upper bound of a parameter's estimate, the upper bound will decrease. A one-sided interval is commonly found in quality assurance for material production, where the expected value of a material's strength, Θ, must be above a certain minimum value (l_b) with some confidence (100γ%). In this case, the manufacturer is not concerned with producing a product that is too strong, so there is no upper bound (u_b).
When determining the statistical significance of a parameter, it is best to understand the data and its collection methods. Before collecting data, an experiment should be planned such that the sampling error is statistical variability (a random error), as opposed to a statistical bias (a systematic error).[9] After experimenting, a typical first step in creating interval estimates is exploratory analysis, plotting using various graphical methods. From this, one can determine the distribution of samples from the data set. Producing interval boundaries under incorrect distributional assumptions makes a prediction faulty.[10]
When interval estimates are reported, they should have a commonly held interpretation within and beyond the scientific community. Interval estimates derived from fuzzy logic have much more application-specific meanings.
In commonly occurring situations there should be sets of standard procedures that can be used, subject to the checking and validity of any required assumptions. This applies for both confidence intervals and credible intervals. However, in more novel situations there should be guidance on how interval estimates can be formulated. In this regard confidence intervals and credible intervals have a similar standing, but there are two differences. First, credible intervals can readily deal with prior information, while confidence intervals cannot. Second, confidence intervals are more flexible and can be used practically in more situations than credible intervals: one area where credible intervals suffer in comparison is in dealing with non-parametric models.
There should be ways of testing the performance of interval estimation procedures. This arises because many such procedures involve approximations of various kinds and there is a need to check that the actual performance of a procedure is close to what is claimed. The use ofstochastic simulationsmakes this straightforward in the case of confidence intervals, but it is somewhat more problematic for credible intervals, where prior information must be taken properly into account. Credible intervals can be checked for situations representing no prior information, but such a check involves examining the long-run frequency properties of the procedures.
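The stochastic-simulation check described above can be sketched for a confidence interval: generate many samples from a known distribution and measure how often the nominal 95% interval actually covers the true mean. The distribution parameters and the known-sigma interval are illustrative assumptions:

```python
import math
import random
import statistics

random.seed(1)

def covers(true_mean, sigma, n, z=1.96):
    """Draw one sample of size n and report whether the two-sided
    95% z-interval around the sample mean covers the true mean
    (known-sigma interval, for simplicity)."""
    xs = [random.gauss(true_mean, sigma) for _ in range(n)]
    m = statistics.fmean(xs)
    half = z * sigma / math.sqrt(n)
    return (m - half) <= true_mean <= (m + half)

trials = 5000
coverage = sum(covers(50.0, 5.0, n=20) for _ in range(trials)) / trials
# long-run coverage should sit close to the nominal 0.95
```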
Severini (1993) discusses conditions under which credible intervals and confidence intervals will produce similar results, and also discusses both thecoverage probabilitiesof credible intervals and the posterior probabilities associated with confidence intervals.[11]
Indecision theory, which is a common approach to and justification for Bayesian statistics, interval estimation is not of direct interest. The outcome is a decision, not an interval estimate, and thus Bayesian decision theorists use aBayes action: they minimize expected loss of a loss function with respect to the entire posterior distribution, not a specific interval.
Interval estimates are used to solve a variety of problems involving uncertainty. Katz (1975) discusses challenges and benefits of using interval estimates in legal proceedings.[12]For use in medical research, Altman (1990) discusses the use of confidence intervals and guidelines for applying them.[13]In manufacturing, it is also common to find interval estimates of a product's life, or evaluations of a product's tolerances. Meeker and Escobar (1998) present methods to analyze reliability data under parametric and nonparametric estimation, including the prediction of future random variables (prediction intervals).[14]
|
https://en.wikipedia.org/wiki/Interval_estimation
|
Probabilistic designis a discipline withinengineering design. It deals primarily with the consideration and minimization of the effects ofrandom variabilityupon the performance of anengineering systemduring the design phase. Typically, the effects studied and optimized relate to quality and reliability. It differs from the classical approach to design by assuming a small probability of failure instead of using thesafety factor.[2][3]Probabilistic design is used in a variety of different applications to assess the likelihood of failure. Disciplines which extensively use probabilistic design principles include product design,quality control,systems engineering,machine design,civil engineering(particularly useful inlimit state design) and manufacturing.
When using a probabilistic approach to design, the designer no longer thinks of each variable as a single value or number. Instead, each variable is viewed as a continuous random variable with aprobability distribution. From this perspective, probabilistic design predicts the flow of variability (or distributions) through a system.[4]
Because there are so many sources of random and systemic variability when designing materials and structures, it is greatly beneficial for the designer to model the factors studied as random variables. By considering this model, a designer can make adjustments to reduce the flow of random variability, thereby improving engineering quality. Proponents of the probabilistic design approach contend that many quality problems can be predicted and rectified during the early design stages and at a much reduced cost.[4][5]
Typically, the goal of probabilistic design is to identify the design that will exhibit the smallest effects of random variability. Minimizing random variability is essential to probabilistic design because it limits uncontrollable factors, while also providing a much more precise determination of failure probability. This could be the one design option out of several that is found to be most robust. Alternatively, it could be the only design option available, but with the optimum combination of input variables and parameters. This second approach is sometimes referred to asrobustification, parameter design ordesign for six sigma.[4]
Though the laws of physics dictate the relationships between variables and measurable quantities such as force,stress,strain, anddeflection, there are still three primary sources of variability when considering these relationships.[6]
The first source of variability is statistical, arising from the limitation of having only a finite sample size with which to estimate parameters such as yield stress,Young's modulus, andtrue strain.[7]This statistical (measurement) uncertainty is the most easily minimized of the three sources, as the variance of an estimated mean is inversely proportional to the sample size.
We can represent variance due to measurement uncertainties as a corrective factorB{\displaystyle B}, which is multiplied by the true meanX{\displaystyle X}to yield the measured mean ofX¯{\displaystyle {\bar {X}}}. Equivalently,X¯=B¯X{\displaystyle {\bar {X}}={\bar {B}}X}.
This yields the resultB¯=X¯X{\displaystyle {\bar {B}}={\frac {\bar {X}}{X}}}, and the variance of the corrective factorB{\displaystyle B}is given as:
Var[B]=Var[X¯]X2=Var[X]nX2{\displaystyle Var[B]={\frac {Var[{\bar {X}}]}{X^{2}}}={\frac {Var[X]}{nX^{2}}}}
whereB{\displaystyle B}is the correction factor,X{\displaystyle X}is the true mean,X¯{\displaystyle {\bar {X}}}is the measured mean, andn{\displaystyle n}is the number of measurements made.[6]
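A quick simulation with made-up numbers illustrates why this source of variability is the easiest to minimize: the variance of the sample mean falls as 1/n, so taking more measurements directly shrinks the statistical uncertainty:

```python
import random
import statistics

random.seed(0)

def sample_mean_variance(true_mean, sigma, n, trials=2000):
    """Empirical variance of the sample mean X-bar over many repeated
    experiments, each taking n noisy measurements of true_mean."""
    means = [
        statistics.fmean(random.gauss(true_mean, sigma) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.pvariance(means)

sigma = 2.0
v5 = sample_mean_variance(100.0, sigma, n=5)
v50 = sample_mean_variance(100.0, sigma, n=50)
# Var[X-bar] is approximately sigma**2 / n, so a tenfold increase in the
# number of measurements cuts the variance roughly tenfold
```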
The second source of variability stems from the inaccuracies and uncertainties of the model used to calculate such parameters. These include the physical models we use to understand loading and their associated effects in materials. The uncertainty from the model of a physical measurable can be determined if both theoretical values according to the model and experimental results are available.
The measured valueH^(ω){\displaystyle {\hat {H}}(\omega )}is equivalent to the theoretical model predictionH(ω){\displaystyle H(\omega )}multiplied by a model error ofϕ(ω){\displaystyle \phi (\omega )}, plus the experimental errorε(ω){\displaystyle \varepsilon (\omega )}.[8]Equivalently,
H^(ω)=H(ω)ϕ(ω)+ε(ω){\displaystyle {\hat {H}}(\omega )=H(\omega )\phi (\omega )+\varepsilon (\omega )}
and the model error takes the general form:
ϕ(ω)=∑i=0naiωi{\displaystyle \phi (\omega )=\sum _{i=0}^{n}a_{i}\omega ^{i}}
whereai{\displaystyle a_{i}}are coefficients of regression determined from experimental data.[8]
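As an illustration of determining the regression coefficients, the sketch below fits a degree-1 model error φ(ω) = a0 + a1·ω by ordinary least squares; the frequencies and model-error ratios are hypothetical, not taken from any real experiment:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a0 + a1*x (degree-1 model error)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = (sy - a1 * sx) / n
    return a0, a1

# Hypothetical frequencies and model-error ratios phi = H_measured / H_model
omega = [1.0, 2.0, 3.0, 4.0, 5.0]
phi = [1.02, 1.05, 1.09, 1.11, 1.15]
a0, a1 = fit_line(omega, phi)
# phi(w) is then estimated as a0 + a1 * w
```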
Finally, the last variability source comes from the intrinsic variability of any physical measurable. There is a fundamental random uncertainty associated with all physical phenomena, and it is comparatively the most difficult to minimize this variability. Thus, each physical variable and measurable quantity can be represented as a random variable with a mean and a variability.
Consider the classical approach to performingtensile testingin materials. The stress experienced by a material is given as a singular value (i.e., force applied divided by the cross-sectional area perpendicular to the loading axis). The yield stress, which is the maximum stress a material can support before plastic deformation, is also given as a singular value. Under this approach, there is a 0% chance of material failure below the yield stress, and a 100% chance of failure above it. However, these assumptions break down in the real world.
The yield stress of a material is often only known to a certain precision, meaning that there is an uncertainty and therefore a probability distribution associated with the known value.[6][8]Let the probability distribution function of the yield strength be given asf(R){\displaystyle f(R)}.
Similarly, the applied load or predicted load can also only be known to a certain precision, and the range of stress which the material will undergo is unknown as well. Let this probability distribution be given asf(S){\displaystyle f(S)}.
The probability of failure is the probability that the applied stress exceeds the strength, i.e., the region where the two distributions interfere; mathematically:
Pf=P(R<S)=∫−∞∞f(S)[∫−∞Sf(R)dR]dS{\displaystyle P_{f}=P(R<S)=\int \limits _{-\infty }^{\infty }f(S)\left[\int \limits _{-\infty }^{S}f(R)\,dR\right]dS}
or equivalently, if we let the difference between yield stress and applied load be a third random variableQ=R−S{\displaystyle Q=R-S}, then:
Pf=P(Q<0)=∫−∞0f(Q)dQ{\displaystyle P_{f}=P(Q<0)=\int \limits _{-\infty }^{0}f(Q)\,dQ}
where thevarianceof the differenceQ{\displaystyle Q}is given byσQ2=σR2+σS2{\displaystyle \sigma _{Q}^{2}=\sigma _{R}^{2}+\sigma _{S}^{2}}.
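For the common special case where strength R and stress S are independent normal random variables, the failure probability has a closed form via the standard normal CDF; the sketch below uses hypothetical strength and load figures:

```python
import math

def failure_probability(mu_R, sigma_R, mu_S, sigma_S):
    """P(R < S) when strength R and stress S are independent normals.

    Q = R - S is then normal with mean mu_R - mu_S and variance
    sigma_R**2 + sigma_S**2, so P_f = P(Q < 0)."""
    mu_Q = mu_R - mu_S
    sigma_Q = math.sqrt(sigma_R**2 + sigma_S**2)
    # Standard normal CDF evaluated at -mu_Q / sigma_Q
    return 0.5 * (1.0 + math.erf((-mu_Q / sigma_Q) / math.sqrt(2.0)))

# Hypothetical values: yield strength 400 +/- 20 MPa, load 300 +/- 30 MPa
pf = failure_probability(400.0, 20.0, 300.0, 30.0)
```

Unlike the classical yes/no comparison of singular values, this assigns a small but nonzero failure probability even though the mean strength comfortably exceeds the mean load.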
The probabilistic design principles allow for precise determination of failure probability, whereas the classical model assumes absolutely no failure before yield strength.[9]It is clear that the classical applied load vs. yield stress model has limitations, so modeling these variables with a probability distribution to calculate failure probability is a more precise approach. The probabilistic design approach allows for the determination of material failure under all loading conditions, associating quantitative probabilities to failure chance in place of a definitive yes or no.
In essence, probabilistic design focuses upon the prediction of the effects of variability. In order to be able to predict and calculate variability associated with model uncertainty, many methods have been devised and utilized across different disciplines to determine theoretical values for parameters such as stress and strain. Examples of theoretical models used alongside probabilistic design include:
Additionally, there are many statistical methods used to quantify and predict the random variability in the desired measurable. Some methods that are used to predict the random variability of an output include:
|
https://en.wikipedia.org/wiki/Probabilistic_design
|
Theprocess capabilityis a measurable property of aprocessrelative to its specification, expressed as aprocess capability index(e.g., Cpkor Cpm) or as aprocess performance index(e.g., Ppkor Ppm). The output of this measurement is often illustrated by ahistogramand by calculations that predict how many parts will be produced out of specification (OOS).
Two parts of process capability are:
The input of a process usually has at least one or more measurable characteristics that are used to specify outputs. These can be analyzed statistically; where the output data shows anormal distributionthe process can be described by the processmean(average) and thestandard deviation.
A process needs to be established with appropriateprocess controlsin place. Acontrol chartanalysis is used to determine whether the process is "in statistical control". If the process is not in statistical control, then capability has no meaning. Therefore, process capability involves onlycommon cause variationand notspecial cause variation.
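A minimal sketch of such a control-chart check (an individuals chart with three-sigma limits; the measurement data are made up for illustration):

```python
import statistics

def control_limits(samples):
    """Individuals chart: mean +/- 3 standard deviations; points outside
    these limits suggest special-cause variation."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return mu - 3 * sigma, mu + 3 * sigma

data = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9, 12.5]  # last point suspect
lcl, ucl = control_limits(data[:-1])  # limits from the in-control history
out_of_control = [x for x in data if not (lcl <= x <= ucl)]
```

Points flagged here would be investigated for special causes before any capability calculation is attempted.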
A batch of data needs to be obtained from the measured output of the process. The more data that is included, the more precise the result; however, an estimate can be achieved with as few as 17 data points. The data should include the normal variety of production conditions, materials, and people in the process. With a manufactured product, it is common to include at least three different production runs, including start-ups.
The process mean (average) and standard deviation are calculated. With a normal distribution, the "tails" can extend well beyond plus and minus three standard deviations, but this interval should contain about 99.73% of production output. Therefore, for a normal distribution of data the process capability is often described as the relationship between six standard deviations and the required specification.
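That six-standard-deviation relationship is what the capability indices express: Cp compares the specification width to 6σ, and Cpk additionally penalizes an off-center mean. The measurements and specification limits below are hypothetical:

```python
import statistics

def process_capability(data, lsl, usl):
    """Cp = spec width / 6 sigma; Cpk = distance from mean to nearest
    spec limit / 3 sigma (values of about 1.33 or more are commonly
    considered capable)."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical shaft diameters (mm) with spec limits 9.85 to 10.15
diam = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00, 10.04, 9.96]
cp, cpk = process_capability(diam, lsl=9.85, usl=10.15)
```

Cpk can never exceed Cp; the gap between them measures how far the process has drifted off center.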
The output of a process is expected to meet customer requirements,specifications, orengineering tolerances. Engineers can conduct a process capability study to determine the extent to which the process can meet these expectations.
The ability of a process to meet specifications can be expressed as a single number using aprocess capability indexor it can be assessed usingcontrol charts. Either case requires running the process to obtain enough measurable output so that engineering is confident that the process is stable and so that the process mean and variability can be reliably estimated.Statistical process controldefines techniques to properly differentiate between stable processes, processes that are drifting (experiencing a long-term change in the mean of the output), and processes that are growing more variable.Process capability indicesare only meaningful for processes that are stable (in a state ofstatistical control).
|
https://en.wikipedia.org/wiki/Process_capability
|
Reliability engineeringis a sub-discipline ofsystems engineeringthat emphasizes the ability of equipment to function without failure. Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, or will operate in a defined environment without failure.[1]Reliability is closely related toavailability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
Thereliability functionis theoretically defined as theprobabilityof success. In practice, it is calculated using different techniques, and its value ranges between 0 and 1, where 0 indicates no probability of success while 1 indicates definite success. This probability is estimated from detailed (physics of failure) analysis, previous data sets, or through reliability testing and reliability modeling.Availability,testability,maintainability, andmaintenanceare often defined as a part of "reliability engineering" in reliability programs. Reliability often plays a key role in thecost-effectivenessof systems.
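One widely used parametric form of the reliability function (a modeling assumption for illustration here, not something the text prescribes) is the Weibull model, which covers infant mortality, constant failure rate, and wear-out with a single shape parameter:

```python
import math

def weibull_reliability(t, eta, beta):
    """R(t) = exp(-(t/eta)**beta): probability of surviving past time t.

    beta < 1 models infant mortality, beta = 1 a constant failure rate
    (the exponential case), and beta > 1 wear-out."""
    return math.exp(-((t / eta) ** beta))

# Exponential special case: with eta = MTBF and beta = 1,
# R(MTBF) = exp(-1), so only about 37% of units survive to the MTBF
r = weibull_reliability(t=1000.0, eta=1000.0, beta=1.0)
```

The function ranges over (0, 1], matching the text's description of reliability as a probability of success between 0 and 1.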
Reliability engineering deals with the prediction, prevention, and management of high levels of "lifetime" engineeringuncertaintyandrisksof failure. Althoughstochasticparameters define and affect reliability, reliability is not only achieved by mathematics and statistics.[2][3]"Nearly all teaching and literature on the subject emphasize these aspects and ignore the reality that the ranges of uncertainty involved largely invalidate quantitative methods forpredictionand measurement."[4]For example, it is easy to represent "probability of failure" as a symbol or value in an equation, but it is almost impossible to predict its true magnitude in practice, which is massivelymultivariate, so having the equation for reliability does not begin to equal having an accurate predictive measurement of reliability.
Reliability engineering relates closely to Quality Engineering,safety engineering, andsystem safety, in that they use common methods for their analysis and may require input from each other. It can be said that a system must be reliably safe.
Reliability engineering focuses on the costs of failure caused by system downtime, cost of spares, repair equipment, personnel, and cost of warranty claims.[5]
The wordreliabilitycan be traced back to 1816, first attested in the writing of the poetSamuel Taylor Coleridge.[6]Before World War II the term was linked mostly torepeatability; a test (in any type of science) was considered "reliable" if the same results would be obtained repeatedly. In the 1920s, product improvement through the use ofstatistical process controlwas promoted by Dr.Walter A. ShewhartatBell Labs,[7]around the time thatWaloddi Weibullwas working on statistical models for fatigue. The development of reliability engineering was here on a parallel path with quality. The modern use of the word reliability was defined by the U.S. military in the 1940s, characterizing a product that would operate when expected and for a specified period.
In World War II, many reliability issues were due to the inherent unreliability of electronic equipment available at the time, and to fatigue issues. In 1945, M.A. Miner published a seminal paper titled "Cumulative Damage in Fatigue" in an ASME journal. A main application for reliability engineering in the military was for the vacuum tube as used in radar systems and other electronics, for which reliability proved to be very problematic and costly. TheIEEEformed the Reliability Society in 1948. In 1950, theUnited States Department of Defenseformed a group called the "Advisory Group on the Reliability of Electronic Equipment" (AGREE) to investigate reliability methods for military equipment.[8]This group recommended three main ways of working:
In the 1960s, more emphasis was given to reliability testing on component and system levels. The famous military standard MIL-STD-781 was created at that time. Around this period also the much-used predecessor to military handbook 217 was published byRCAand was used for the prediction of failure rates of electronic components. The emphasis on component reliability and empirical research (e.g. Mil Std 217) alone slowly decreased. More pragmatic approaches, as used in the consumer industries, were being used. In the 1980s, televisions were increasingly made up of solid-state semiconductors. Automobiles rapidly increased their use of semiconductors with a variety of microcomputers under the hood and in the dash. Large air conditioning systems developed electronic controllers, as did microwave ovens and a variety of other appliances. Communications systems began to adopt
electronics to replace older mechanical switching systems.Bellcoreissued the first consumer prediction methodology for telecommunications, andSAEdeveloped a similar document SAE870050 for automotive applications. The nature of predictions evolved during the decade, and it became apparent that die complexity wasn't the only factor that determined failure rates for integrated circuits (ICs).
Kam Wong published a paper questioning the bathtub curve[9]—see alsoreliability-centered maintenance. During this decade, the failure rate of many components dropped by a factor of 10. Software became important to the reliability of systems. By the 1990s, the pace of IC development was picking up. Wider use of stand-alone microcomputers was common, and the PC market helped keep IC densities following Moore's law and doubling about every 18 months. Reliability engineering was now changing as it moved towards understanding thephysics of failure. Failure rates for components kept dropping, but system-level issues became more prominent.Systems thinkinghas become more and more important. For software, the CMM model (Capability Maturity Model) was developed, which gave a more qualitative approach to reliability. ISO 9000 added reliability measures as part of the design and development portion of certification. The expansion of the World Wide Web created new challenges of security and trust. The older problem of too little reliable information available had now been replaced by too much information of questionable value. Consumer reliability problems could now be discussed online in real-time using data. New technologies such as micro-electromechanical systems (MEMS), handheldGPS, and hand-held devices that combine cell phones and computers all represent challenges to maintaining reliability. Product development time continued to shorten through this
decade and what had been done in three years was being done in 18 months. This meant that reliability tools and tasks had to be more closely tied to the development process itself. In many ways, reliability has become part of everyday life and consumer expectations.
Reliability is the probability of a product performing its intended function under specified operating conditions in a manner that meets or exceeds customer expectations.[10]
The objectives of reliability engineering, in decreasing order of priority, are:[11]
The reason for the priority emphasis is that it is by far the most effective way of working, in terms of minimizing costs and generating reliable products. The primary skills that are required, therefore, are the ability to understand and anticipate the possible causes of failures, and knowledge of how to prevent them. It is also necessary to know the methods that can be used for analyzing designs and data.
Reliability engineering for "complex systems" requires a different, more elaborate systems approach than for non-complex systems. Reliability engineering may in that case involve:
Effective reliability engineering requires understanding of the basics offailure mechanismsfor which experience, broad engineering skills and good knowledge from many different special fields of engineering are required,[12]for example:
Reliability may be defined in the following ways:
Many engineering techniques are used in reliabilityrisk assessments, such as reliability block diagrams,hazard analysis,failure mode and effects analysis(FMEA),[13]fault tree analysis(FTA),Reliability Centered Maintenance, (probabilistic) load and material stress and wear calculations, (probabilistic) fatigue and creep analysis, human error analysis, manufacturing defect analysis, reliability testing, etc. These analyses must be done properly and with much attention to detail to be effective. Because of the large number of reliability techniques, their expense, and the varying degrees of reliability required for different situations, most projects develop a reliability program plan to specify the reliability tasks (statement of work(SoW) requirements) that will be performed for that specific system.
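Reliability block diagrams, one of the techniques listed above, reduce to simple probability arithmetic when component failures are assumed independent. The layout and reliability values below are hypothetical:

```python
def series(*rs):
    """All components must work: system reliability is the product."""
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel(*rs):
    """System works if any redundant component works:
    1 minus the probability that all fail."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# Two 0.9-reliability units in parallel, in series with a 0.99 unit
r_sys = series(parallel(0.9, 0.9), 0.99)
```

Note how redundancy lifts the weak 0.9 stage to 0.99, so the series system ends up near 0.98 rather than being dragged down to 0.89.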
Consistent with the creation ofsafety cases, for example perARP4761, the goal of reliability assessments is to provide a robust set of qualitative and quantitative evidence that the use of a component or system will not be associated with unacceptable risk. The basic steps to take[14]are to:
Theriskhere is the combination of probability and severity of the failure incident (scenario) occurring. The severity can be looked at from a system safety or a system availability point of view. Reliability for safety can be thought of as a very different focus from reliability for system availability. Availability and safety can exist in dynamic tension as keeping a system too available can be unsafe. Forcing an engineering system into a safe state too quickly can force false alarms that impede the availability of the system.
In ade minimisdefinition, the severity of failures includes the cost of spare parts, man-hours, logistics, damage (secondary failures), and downtime of machines which may cause production loss. A more complete definition of failure also can mean injury, dismemberment, and death of people within the system (witness mine accidents, industrial accidents, space shuttle failures) and the same to innocent bystanders (witness the citizenry of cities like Bhopal, Love Canal, Chernobyl, or Sendai, and other victims of the 2011 Tōhoku earthquake and tsunami)—in this case, reliability engineering becomes system safety. What is acceptable is determined by the managing authority or customers or the affected communities. Residual risk is the risk that is left over after all reliability activities have finished, and includes the unidentified risk—and is therefore not completely quantifiable.
Measures to manage the complexity of technical systems, such as improved design and materials, planned inspections, fool-proof design, and backup redundancy, decrease risk but increase cost. Risk can be decreased to ALARA (as low as reasonably achievable) or ALAPA (as low as practically achievable) levels.
Implementing a reliability program is not simply a software purchase; it is not just a checklist of items that must be completed that ensure one has reliable products and processes. A reliability program is a complex learning and knowledge-based system unique to one's products and processes. It is supported by leadership, built on the skills that one develops within a team, integrated into business processes, and executed by following proven standard work practices.[15]
A reliability program plan is used to document exactly what "best practices" (tasks, methods, tools, analysis, and tests) are required for a particular (sub)system, as well as clarify customer requirements for reliability assessment. For large-scale complex systems, the reliability program plan should be a separatedocument. Resource determination for manpower and budgets for testing and other tasks is critical for a successful program. In general, the amount of work required for an effective program for complex systems is large.
A reliability program plan is essential for achieving high levels of reliability, testability,maintainability, and the resulting systemavailability, and is developed early during system development and refined over the system's life cycle. It specifies not only what the reliability engineer does, but also the tasks performed by otherstakeholders. An effective reliability program plan must be approved by top program management, which is responsible for the allocation of sufficient resources for its implementation.
A reliability program plan may also be used to evaluate and improve the availability of a system by the strategy of focusing on increasing testability & maintainability and not on reliability. Improving maintainability is generally easier than improving reliability. Maintainability estimates (repair rates) are also generally more accurate. However, because the uncertainties in the reliability estimates are in most cases very large, they are likely to dominate the availability calculation (prediction uncertainty problem), even when maintainability levels are very high. When reliability is not under control, more complicated issues may arise, like manpower (maintainers/customer service capability) shortages, spare part availability, logistic delays, lack of repair facilities, extensive retrofit and complex configuration management costs, and others. The problem of unreliability may be increased also due to the "domino effect" of maintenance-induced failures after repairs. Focusing only on maintainability is therefore not enough. If failures are prevented, none of the other issues are of any importance, and therefore reliability is generally regarded as the most important part of availability. Reliability needs to be evaluated and improved related to both availability and thetotal cost of ownership(TCO) due to the cost of spare parts, maintenance man-hours, transport costs, storage costs, part obsolete risks, etc. But, as GM and Toyota have belatedly discovered, TCO also includes the downstream liability costs when reliability calculations have not sufficiently or accurately addressed customers' bodily risks. Often a trade-off is needed between the two. There might be a maximum ratio between availability and cost of ownership. The testability of a system should also be addressed in the plan, as this is the link between reliability and maintainability. 
The maintenance strategy can influence the reliability of a system (e.g., by preventive and/orpredictive maintenance), although it can never bring it above the inherent reliability.
The reliability plan should clearly provide a strategy for availability control. Whether only availability or also cost of ownership is more important depends on the use of the system. For example, a system that is a critical link in a production system—e.g., a big oil platform—is normally allowed to have a very high cost of ownership if that cost translates to even a minor increase in availability, as the unavailability of the platform results in a massive loss of revenue which can easily exceed the high cost of ownership. A proper reliability plan should always address RAMT analysis in its total context. RAMT stands for reliability, availability, maintainability/maintenance, and testability in the context of the customer's needs.
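The availability trade-offs discussed above can be made concrete with the standard steady-state (inherent) availability formula; the MTBF and MTTR figures below are illustrative assumptions:

```python
def steady_state_availability(mtbf, mttr):
    """Inherent availability: the fraction of time the system is up,
    A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

a_nominal = steady_state_availability(mtbf=1000.0, mttr=10.0)
# A tenfold error in the MTBF estimate (plausible, per the text's remarks
# on reliability uncertainty) swamps even a large repair-time improvement:
a_mtbf_off = steady_state_availability(mtbf=100.0, mttr=10.0)
a_fast_fix = steady_state_availability(mtbf=100.0, mttr=5.0)
```

This illustrates the text's point: maintainability estimates may be precise, but large uncertainty in the reliability term tends to dominate the availability prediction.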
For any system, one of the first tasks of reliability engineering is to adequately specify the reliability and maintainability requirements allocated from the overallavailabilityneeds and, more importantly, derived from proper design failure analysis or preliminary prototype test results. Clear requirements (able to be designed to) should constrain the designers from designing particular unreliable items/constructions/interfaces/systems. Setting only availability, reliability, testability, or maintainability targets (e.g., max. failure rates) is not appropriate. This is a broad misunderstanding about Reliability Requirements Engineering. Reliability requirements address the system itself, including test and assessment requirements, and associated tasks and documentation. Reliability requirements are included in the appropriate system or subsystem requirements specifications, test plans, and contract statements. The creation of proper lower-level requirements is critical.[16]The provision of only quantitative minimum targets (e.g.,Mean Time Between Failure(MTBF) values or failure rates) is not sufficient for different reasons. One reason is that a full validation (related to correctness and verifiability in time) of a quantitative reliability allocation (requirement spec) on lower levels for complex systems can (often) not be made as a consequence of (1) the fact that the requirements are probabilistic, (2) the extremely high level of uncertainties involved for showing compliance with all these probabilistic requirements, and because (3) reliability is a function of time, and accurate estimates of a (probabilistic) reliability number per item are available only very late in the project, sometimes even after many years of in-service use. Compare this problem with the continuous (re-)balancing of, for example, lower-level-system mass requirements in the development of an aircraft, which is already often a big undertaking. 
Notice that in this case, masses differ only by a few percent, are not a function of time, and the data are non-probabilistic and already available in CAD models. In the case of reliability, the levels of unreliability (failure rates) may change by factors of ten (orders of magnitude) as a result of very minor deviations in design, process, or anything else.[17]The information is often not available without huge uncertainties within the development phase. This makes this allocation problem almost impossible to do in a useful, practical, valid manner that does not result in massive over- or under-specification. A pragmatic approach is therefore needed—for example: the use of general levels/classes of quantitative requirements depending only on severity of failure effects. Also, the validation of results is a far more subjective task than for any other type of requirement. (Quantitative) reliability parameters—in terms of MTBF—are by far the most uncertain design parameters in any design.
Furthermore, reliability design requirements should drive a (system or part) design to incorporate features that prevent failures from occurring, or limit consequences from failure in the first place. Not only would this aid some predictions, it would also keep the engineering effort from being distracted into a kind of accounting work. A design requirement should be precise enough so that a designer can "design to" it and can also prove—through analysis or testing—that the requirement has been achieved and, if possible, within some stated confidence. Any type of reliability requirement should be detailed and could be derived from failure analysis (finite-element stress and fatigue analysis, reliability hazard analysis, FTA, FMEA, human factor analysis, functional hazard analysis, etc.) or any type of reliability testing. Also, requirements are needed for verification tests (e.g., required overload stresses) and the test time needed. To derive these requirements in an effective manner, asystems engineering-based risk assessment and mitigation logic should be used. Robust hazard log systems must be created that contain detailed information on why and how systems could or have failed. Requirements are to be derived and tracked in this way. These practical design requirements shall drive the design and not be used only for verification purposes. These requirements (often design constraints) are in this way derived from failure analysis or preliminary tests. Understanding of this difference compared with purely quantitative (logistic) requirement specification (e.g., a failure rate / MTBF target) is paramount in the development of successful (complex) systems.[18]
The maintainability requirements address the costs of repairs as well as repair time. Testability (not to be confused with test requirements) requirements provide the link between reliability and maintainability and should address detectability of failure modes (on a particular system level), isolation levels, and the creation of diagnostics (procedures).
As indicated above, reliability engineers should also address requirements for various reliability tasks and documentation during system development, testing, production, and operation. These requirements are generally specified in the contract statement of work and depend on how much leeway the customer wishes to provide to the contractor. Reliability tasks include various analyses, planning, and failure reporting. Task selection depends on the criticality of the system as well as cost. A safety-critical system may require a formal failure reporting and review process throughout development, whereas a non-critical system may rely on final test reports. The most common reliability program tasks are documented in reliability program standards, such as MIL-STD-785 and IEEE 1332. Failure reporting, analysis, and corrective action systems are a common approach for product/process reliability monitoring.
In practice, most failures can be traced back to some type of human error, for example in:
However, humans are also very good at detecting such failures, correcting them, and improvising when abnormal situations occur. Therefore, policies that completely rule out human actions in design and production processes to improve reliability may not be effective. Some tasks are better performed by humans and some are better performed by machines.[19]
Furthermore, human errors in management, the organization of data and information, or the misuse or abuse of items may also contribute to unreliability. This is the core reason why high levels of reliability for complex systems can only be achieved by following a robust systems engineering process with proper planning and execution of the validation and verification tasks. This also includes the careful organization of data and information sharing and creating a "reliability culture", in the same way that having a "safety culture" is paramount in the development of safety-critical systems.
Reliability prediction combines:
For existing systems, it is arguable that any attempt by a responsible program to correct the root cause of discovered failures may render the initial MTBF estimate invalid, as new assumptions (themselves subject to high error levels) about the effect of this correction must be made. Another practical issue is the general unavailability of detailed failure data, with those available often featuring inconsistent filtering of failure (feedback) data, and ignoring statistical errors (which are very high for rare events like reliability-related failures). Very clear guidelines must be present to count and compare failures related to different types of root causes (e.g. manufacturing-, maintenance-, transport-, system-induced or inherent design failures). Comparing different types of causes may lead to incorrect estimations and incorrect business decisions about the focus of improvement.
To perform a proper quantitative reliability prediction for systems may be difficult and very expensive if done by testing. At the individual part level, reliability results can often be obtained with comparatively high confidence, as testing of many sample parts might be possible using the available testing budget. Unfortunately, these tests may lack validity at a system level due to assumptions made at part-level testing. Several authors have emphasized the importance of initial part- or system-level testing until failure, and of learning from such failures to improve the system or part. The general conclusion is that an accurate and absolute prediction – by either field-data comparison or testing – of reliability is in most cases not possible. An exception might be failures due to wear-out problems such as fatigue failures. In the introduction of MIL-STD-785 it is written that reliability prediction should be used with great caution, if not used solely for comparison in trade-off studies.
Design for Reliability (DfR) is a process that encompasses tools and procedures to ensure that a product meets its reliability requirements, under its use environment, for the duration of its lifetime. DfR is implemented in the design stage of a product to proactively improve product reliability.[21]DfR is often used as part of an overall Design for Excellence (DfX) strategy.
Reliability design begins with the development of a (system) model. Reliability and availability models use block diagrams and Fault Tree Analysis to provide a graphical means of evaluating the relationships between different parts of the system. These models may incorporate predictions based on failure rates taken from historical data. While the (input data) predictions are often not accurate in an absolute sense, they are valuable to assess relative differences in design alternatives. Maintainability parameters, for example Mean time to repair (MTTR), can also be used as inputs for such models.
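The evaluation of simple series and parallel structures in such block diagrams can be sketched as follows; the component reliabilities used here are assumed, illustrative values, not data from any real system:

```python
# Minimal sketch of evaluating a reliability block diagram (RBD).
# All component reliabilities below are illustrative assumptions.

def series(*rs):
    """Series blocks: the system works only if every block works."""
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel(*rs):
    """Parallel (redundant) blocks: the system fails only if all blocks fail."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# A single pump (R = 0.95) in series with a dual-redundant controller
# (R = 0.90 per channel):
r_system = series(0.95, parallel(0.90, 0.90))
print(round(r_system, 4))  # 0.95 * (1 - 0.1 * 0.1) = 0.9405
```

In a series structure any single failure brings the system down, while a parallel structure fails only if all of its blocks fail; real RBD tools extend this with k-out-of-n blocks, repair models, and common cause factors.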
The most important fundamental initiating causes and failure mechanisms are to be identified and analyzed with engineering tools. A diverse set of practical guidance as to performance and reliability should be provided to designers so that they can generate low-stressed designs and products that protect, or are protected against, damage and excessive wear. Proper validation of input loads (requirements) may be needed, in addition to verification for reliability "performance" by testing.
One of the most important design techniques is redundancy. This means that if one part of the system fails, there is an alternate success path, such as a backup system. The reason this is such a powerful design choice is that high-confidence reliability evidence for new parts or systems is often unavailable, or extremely expensive to obtain. By combining redundancy with a high level of failure monitoring and the avoidance of common cause failures, even a system with relatively poor single-channel (part) reliability can be made highly reliable at a system level (up to mission-critical reliability), without any reliability testing being required. In conjunction with redundancy, the use of dissimilar designs or manufacturing processes (e.g. via different suppliers of similar parts) for single independent channels can reduce sensitivity to quality issues (e.g. early childhood failures at a single supplier), allowing very high levels of reliability to be achieved at all moments of the development cycle (from early life to long-term). Redundancy can also be applied in systems engineering by double-checking requirements, data, designs, calculations, software, and tests to overcome systematic failures.
Another effective way to deal with reliability issues is to perform analysis that predicts degradation, enabling the prevention of unscheduled downtime events/failures. RCM (Reliability Centered Maintenance) programs can be used for this.
For electronic assemblies, there has been an increasing shift towards a different approach called physics of failure. This technique relies on understanding the physical static and dynamic failure mechanisms. It accounts for variation in load, strength, and stress that lead to failure with a high level of detail, made possible with the use of modern finite element method (FEM) software programs that can handle complex geometries and mechanisms such as creep, stress relaxation, fatigue, and probabilistic design (Monte Carlo Methods/DOE). The material or component can be re-designed to reduce the probability of failure and to make it more robust against such variations. Another common design technique is component derating: i.e. selecting components whose specifications significantly exceed the expected stress levels, such as using heavier gauge electrical wire than might normally be specified for the expected electric current.
Many of the tasks, techniques, and analyses used in Reliability Engineering are specific to particular industries and applications, but can commonly include:
Results from these methods are presented during reviews of part or system design, and logistics. Reliability is just one requirement among many for a complex part or system. Engineering trade-off studies are used to determine the optimum balance between reliability requirements and other constraints.
Reliability engineers, whether using quantitative or qualitative methods to describe a failure or hazard, rely on language to pinpoint the risks and enable issues to be solved. The language used must help create an orderly description of the function/item/system and its complex surroundings as they relate to the failure of these functions/items/systems. Systems engineering is very much about finding the correct words to describe the problem (and related risks), so that they can be readily solved via engineering solutions. Jack Ring said that a systems engineer's job is to "language the project." (Ring et al. 2000)[23]For part/system failures, reliability engineers should concentrate more on the "why and how", rather than predicting "when". Understanding "why" a failure has occurred (e.g. due to over-stressed components or manufacturing issues) is far more likely to lead to improvement in the designs and processes used[4]than quantifying "when" a failure is likely to occur (e.g. via determining MTBF). To do this, first the reliability hazards relating to the part/system need to be classified and ordered (based on some form of qualitative and quantitative logic if possible) to allow for more efficient assessment and eventual improvement. This is partly done in pure language and proposition logic, but also based on experience with similar items. This can for example be seen in descriptions of events in fault tree analysis, FMEA analysis, and hazard (tracking) logs. In this sense language and proper grammar (part of qualitative analysis) play an important role in reliability engineering, just as they do in safety engineering or, in general, within systems engineering.
Correct use of language can also be key to identifying or reducing the risks of human error, which are often the root cause of many failures. This can include proper instructions in maintenance manuals, operation manuals, emergency procedures, and others to prevent systematic human errors that may result in system failures. These should be written by trained or experienced technical authors using so-called simplified English or Simplified Technical English, where words and structure are specifically chosen and created so as to reduce ambiguity or risk of confusion (e.g. a phrase like "replace the old part" could ambiguously refer to swapping a worn-out part with a non-worn-out part, or to replacing a part with one of a more recent and hopefully improved design).
Reliability modeling is the process of predicting or understanding the reliability of a component or system prior to its implementation. Two types of analysis that are often used to model a complete system's availability behavior, including effects from logistics issues like spare part provisioning, transport, and manpower, are fault tree analysis and reliability block diagrams. At a component level, the same types of analyses can be used together with others. The input for the models can come from many sources, including testing, prior operational experience, and field data, as well as data handbooks from similar or related industries. Regardless of source, all model input data must be used with great caution, as predictions are only valid in cases where the same product was used in the same context. As such, predictions are often only used to help compare alternatives.
For part level predictions, two separate fields of investigation are common:
Reliability is defined as the probability that a device will perform its intended function during a specified period of time under stated conditions. Mathematically, this may be expressed as
R(t) = \Pr\{T > t\} = \int_{t}^{\infty} f(x)\,dx,
where f(x) is the failure probability density function and t is the length of the period of time (which is assumed to start from time zero).
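For the common special case of a constant failure rate λ, the density is f(x) = λe^(−λx), giving R(t) = e^(−λt) and MTTF = 1/λ. A minimal numerical sketch, with an assumed, illustrative failure rate:

```python
import math

# Sketch assuming a constant failure rate (exponential model); the
# failure rate value is illustrative, not taken from any real dataset.
failure_rate = 1e-4           # lambda, failures per hour (assumed)
mttf = 1.0 / failure_rate     # for the exponential model, MTTF = 1/lambda

def reliability(t, lam=failure_rate):
    """R(t) = exp(-lambda * t): probability of surviving past time t."""
    return math.exp(-lam * t)

print(round(reliability(1000), 4))  # survive 1000 h: exp(-0.1) ~ 0.9048
print(round(reliability(mttf), 4))  # at t = MTTF: exp(-1) ~ 0.3679
```

Note that under this model a device has only about a 37% chance of surviving to its MTTF, which is one reason quoting MTTF alone can mislead.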
There are a few key elements of this definition:
Quantitative requirements are specified using reliability parameters. The most common reliability parameter is the mean time to failure (MTTF), which can also be specified as the failure rate (expressed as a frequency or conditional probability density function (PDF)) or the number of failures during a given period. These parameters may be useful for higher system levels and systems that are operated frequently (e.g. vehicles, machinery, and electronic equipment). Reliability increases as the MTTF increases. The MTTF is usually specified in hours, but can also be used with other units of measurement, such as miles or cycles. Using MTTF values on lower system levels can be very misleading, especially if they do not specify the associated Failure Modes and Mechanisms (the F in MTTF).[17]
In other cases, reliability is specified as the probability of mission success. For example, the reliability of a scheduled aircraft flight can be specified as a dimensionless probability or a percentage, as often used in system safety engineering.
A special case of mission success is the single-shot device or system. These are devices or systems that remain relatively dormant and only operate once. Examples include automobile airbags, thermal batteries, and missiles. Single-shot reliability is specified as a probability of one-time success or is subsumed into a related parameter. Single-shot missile reliability may be specified as a requirement for the probability of a hit. For such systems, the probability of failure on demand (PFD) is the reliability measure – this is actually an "unavailability" number. The PFD is derived from failure rate (a frequency of occurrence) and mission time for non-repairable systems.
For repairable systems, it is obtained from failure rate, mean-time-to-repair (MTTR), and test interval. This measure may not be unique for a given system as this measure depends on the kind of demand. In addition to system level requirements, reliability requirements may be specified for critical subsystems. In most cases, reliability parameters are specified with appropriate statisticalconfidence intervals.
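A hedged sketch of the widely used low-demand approximation PFD_avg ≈ λ_DU · T / 2 for a single, periodically proof-tested channel; the failure rate and test interval below are illustrative assumptions, not data for any particular system:

```python
# Sketch of the common low-demand approximation for average probability
# of failure on demand of one periodically proof-tested (1oo1) channel:
#   PFD_avg ~ lambda_du * T / 2
# Both input values are illustrative assumptions.
lambda_du = 2e-6               # dangerous undetected failure rate per hour
proof_test_interval = 8760.0   # hours between proof tests (one year)

pfd_avg = lambda_du * proof_test_interval / 2.0
print(f"{pfd_avg:.2e}")  # 8.76e-03
```

The approximation assumes failures are revealed only by the proof test and that λ_DU · T is small; redundant (e.g. 1oo2) architectures and imperfect proof-test coverage require more elaborate formulas.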
The purpose of reliability testing or reliability verification is to discover potential problems with the design as early as possible and, ultimately, to provide confidence that the system meets its reliability requirements. The reliability of the product in all environments, such as expected use, transportation, or storage during the specified lifespan, should be considered.[10]Reliability testing exposes the product to natural or artificial environmental conditions in order to evaluate its performance under the conditions of actual use, transportation, and storage, and to analyze and study the degree of influence of environmental factors and their mechanisms of action.[24]Environmental test equipment is used to simulate high temperature, low temperature, high humidity, and temperature cycling, accelerating the product's response to its use environment and verifying whether it reaches the expected quality in R&D, design, and manufacturing.[25]
Reliability verification is also called reliability testing and refers to the use of modeling, statistics, and other methods to evaluate the reliability of a product based on its life span and expected performance.[26]Most products on the market require reliability testing, including automotive products, integrated circuits, heavy machinery used to mine natural resources, and aircraft automation software.[27][28]
Reliability testing may be performed at several levels and there are different types of testing. Complex systems may be tested at component, circuit board, unit, assembly, subsystem, and system levels.[29](The test level nomenclature varies among applications.) For example, performing environmental stress screening tests at lower levels, such as piece parts or small assemblies, catches problems before they cause failures at higher levels. Testing proceeds during each level of integration through full-up system testing, developmental testing, and operational testing, thereby reducing program risk. However, testing does not mitigate unreliability risk.
With each test both statisticaltype I and type II errorscould be made, depending on sample size, test time, assumptions and the needed discrimination ratio. There is risk of incorrectly rejecting a good design (type I error) and the risk of incorrectly accepting a bad design (type II error).
It is not always feasible to test all system requirements. Some systems are prohibitively expensive to test; some failure modes may take years to observe; some complex interactions result in a huge number of possible test cases; and some tests require the use of limited test ranges or other resources. In such cases, different approaches to testing can be used, such as (highly) accelerated life testing, design of experiments, and simulations.
The desired level of statistical confidence also plays a role in reliability testing. Statistical confidence is increased by increasing either the test time or the number of items tested. Reliability test plans are designed to achieve the specified reliability at the specified confidence level with the minimum number of test units and test time. Different test plans result in different levels of risk to the producer and consumer. The desired reliability, statistical confidence, and risk levels for each side influence the ultimate test plan. The customer and developer should agree in advance on how reliability requirements will be tested.
A key aspect of reliability testing is to define "failure". Although this may seem obvious, there are many situations where it is not clear whether a failure is really the fault of the system. Variations in test conditions, operator differences, weather and unexpected situations create differences between the customer and the system developer. One strategy to address this issue is to use a scoring conference process. A scoring conference includes representatives from the customer, the developer, the test organization, the reliability organization, and sometimes independent observers. The scoring conference process is defined in the statement of work. Each test case is considered by the group and "scored" as a success or failure. This scoring is the official result used by the reliability engineer.
As part of the requirements phase, the reliability engineer develops a test strategy with the customer. The test strategy makes trade-offs between the needs of the reliability organization, which wants as much data as possible, and constraints such as cost, schedule and available resources. Test plans and procedures are developed for each reliability test, and results are documented.
Reliability testing is common in the photonics industry. Examples of reliability tests of lasers are life testing and burn-in. These tests consist of the highly accelerated aging, under controlled conditions, of a group of lasers. The data collected from these life tests are used to predict laser life expectancy under the intended operating characteristics.[30]
The criteria to be tested depend on the product or process under test; most commonly, five components are considered:[31][32]
The product life span can be split into distinct periods for analysis. Useful life is the estimated economic life of the product, defined as the time it can be used before the cost of repair no longer justifies its continued use. Warranty life is the period within which the product should perform its function. Design life is the lifetime the designer takes into consideration during the design of the product, accounting for the life time of competitive products and customer desires, to ensure that the product does not result in customer dissatisfaction.[34][35]
Reliability test requirements can follow from any analysis for which the first estimate of failure probability, failure mode or effect needs to be justified. Evidence can be generated with some level of confidence by testing. With software-based systems, the probability is a mix of software and hardware-based failures. Testing reliability requirements is problematic for several reasons. A single test is in most cases insufficient to generate enough statistical data. Multiple tests or long-duration tests are usually very expensive. Some tests are simply impractical, and environmental conditions can be hard to predict over a system's life-cycle.
Reliability engineering is used to design a realistic and affordable test program that provides empirical evidence that the system meets its reliability requirements. Statistical confidence levels are used to address some of these concerns. A certain parameter is expressed along with a corresponding confidence level: for example, an MTBF of 1000 hours at a 90% confidence level. From this specification, the reliability engineer can, for example, design a test with explicit criteria for the number of hours and number of failures until the requirement is met or failed. Different sorts of tests are possible.
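For the exponential (constant failure rate) model, the cumulative test time needed to demonstrate an MTBF target with zero observed failures follows from T = MTBF · (−ln(1 − C)), where C is the confidence level. A minimal sketch under that assumption, using the 1000-hour, 90%-confidence figures above:

```python
import math

# Sketch of a zero-failure MTBF demonstration test, assuming a constant
# failure rate (exponential model): T = MTBF * (-ln(1 - confidence)).
def zero_failure_test_time(mtbf_target, confidence):
    """Cumulative failure-free test hours needed to demonstrate
    mtbf_target at the given confidence level."""
    return mtbf_target * (-math.log(1.0 - confidence))

hours = zero_failure_test_time(1000.0, 0.90)
print(round(hours))  # 2303 hours of cumulative failure-free testing
```

Allowing failures during the test lengthens the required time (via the chi-squared distribution with 2(r + 1) degrees of freedom for r failures), which is one way different test plans trade producer risk against consumer risk.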
The combination of required reliability level and required confidence level greatly affects the development cost and the risk to both the customer and producer. Care is needed to select the best combination of requirements—e.g. cost-effectiveness. Reliability testing may be performed at various levels, such as component, subsystem, and system. Also, many factors must be addressed during testing and operation, such as extreme temperature and humidity, shock, vibration, or other environmental factors (like loss of signal, cooling or power; or other catastrophes such as fire, floods, excessive heat, physical or security violations or other myriad forms of damage or degradation). For systems that must last many years, accelerated life tests may be needed.
A systematic approach to reliability testing is to first determine the reliability goal, then perform tests that are linked to performance and determine the reliability of the product.[36]A reliability verification test in modern industries should clearly determine how it relates to the product's overall reliability performance and how individual tests impact the warranty cost and customer satisfaction.[37]
The purpose of accelerated life testing (ALT) is to induce field failure in the laboratory at a much faster rate by providing a harsher, but nonetheless representative, environment. In such a test, the product is expected to fail in the lab just as it would have failed in the field—but in much less time.
The main objective of an accelerated test is either of the following:
An accelerated testing program can be broken down into the following steps:
Common ways to determine a life stress relationship are:
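One widely used life–stress relationship for temperature is the Arrhenius model, in which the acceleration factor between use and stress temperatures is AF = exp[(Ea/k)(1/T_use − 1/T_stress)]. A minimal sketch, with an assumed, illustrative activation energy:

```python
import math

# Sketch of the Arrhenius life-stress model often used in accelerated
# life testing. The activation energy (0.7 eV) is an assumed,
# illustrative value; real values are fitted to failure-mechanism data.
BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev=0.7):
    """Acceleration factor between use and stress temperatures (Celsius)."""
    t_use = t_use_c + 273.15      # convert to kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# 40 C use temperature versus a 125 C stress test:
af = arrhenius_af(40.0, 125.0)
print(round(af))  # each test hour stands in for hundreds of use hours
```

The acceleration factor is very sensitive to the assumed activation energy, so extrapolating lab results to field life carries the same uncertainty caveats discussed above for reliability prediction generally.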
Software reliability is a special aspect of reliability engineering. It focuses on foundations and techniques to make software more reliable, i.e., resilient to faults. System reliability, by definition, includes all parts of the system, including hardware, software, supporting infrastructure (including critical external interfaces), operators, and procedures. Traditionally, reliability engineering focuses on critical hardware parts of the system. Since the widespread use of digital integrated circuit technology, software has become an increasingly critical part of most electronics and, hence, nearly all present-day systems. Therefore, software reliability has gained prominence within the field of system reliability.
There are significant differences, however, in how software and hardware behave.
Most hardware unreliability is the result of a component or material failure that results in the system not performing its intended function. Repairing or replacing the hardware component restores the system to its original operating state.
However, software does not fail in the same sense that hardware fails. Instead, software unreliability is the result of unanticipated results of software operations. Even relatively small software programs can have astronomically large combinations of inputs and states that are infeasible to exhaustively test. Restoring software to its original state only works until the same combination of inputs and states results in the same unintended result. Software reliability engineering must take this into account.
Despite this difference in the source of failure between software and hardware, several software reliability models based on statistics have been proposed to quantify what we experience with software: the longer software is run, the higher the probability that it will eventually be used in an untested manner and exhibit a latent defect that results in a failure (Shooman 1987), (Musa 2005), (Denney 2005).
As with hardware, software reliability depends on good requirements, design, and implementation. Software reliability engineering relies heavily on a disciplined software engineering process to anticipate and design against unintended consequences. There is more overlap between software quality engineering and software reliability engineering than between hardware quality and reliability. A good software development plan is a key aspect of the software reliability program. The software development plan describes the design and coding standards, peer reviews, unit tests, configuration management, software metrics, and software models to be used during software development.
A common reliability metric is the number of software faults per line of code (FLOC), usually expressed as faults per thousand lines of code. This metric, along with software execution time, is key to most software reliability models and estimates. The theory is that the software reliability increases as the number of faults (or fault density) decreases. Establishing a direct connection between fault density and mean-time-between-failure is difficult, however, because of the way software faults are distributed in the code, their severity, and the probability of the combination of inputs necessary to encounter the fault. Nevertheless, fault density serves as a useful indicator for the reliability engineer. Other software metrics, such as complexity, are also used. This metric remains controversial, since changes in software development and verification practices can have dramatic impact on overall defect rates.
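The fault density metric described above can be sketched trivially; the fault count and code size here are illustrative values, not measurements of any real program:

```python
# Sketch of the fault density metric: faults per thousand lines of
# code (KLOC). Inputs are illustrative, not real project data.
def faults_per_kloc(fault_count, lines_of_code):
    """Fault density expressed per thousand lines of code."""
    return fault_count / (lines_of_code / 1000.0)

print(faults_per_kloc(12, 48000))  # 12 faults in 48 KLOC -> 0.25
```

As the surrounding text notes, comparing such figures across projects is risky because counting conventions, severity, and code size measures all vary.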
Software testing is an important aspect of software reliability. Even the best software development process results in some software faults that are nearly undetectable until tested. Software is tested at several levels, starting with individual units, through integration and full-up system testing. During all phases of testing, software faults are discovered, corrected, and re-tested. Reliability estimates are updated based on the fault density and other metrics. At a system level, mean-time-between-failure data can be collected and used to estimate reliability. Unlike hardware, performing exactly the same test on exactly the same software configuration does not provide increased statistical confidence. Instead, software reliability uses different metrics, such as code coverage.
The Software Engineering Institute's capability maturity model is a common means of assessing the overall software development process for reliability and quality purposes.
Structural reliability, or the reliability of structures, is the application of reliability theory to the behavior of structures. It is used in both the design and maintenance of different types of structures, including concrete and steel structures.[38][39]In structural reliability studies, both loads and resistances are modeled as probabilistic variables. Using this approach, the probability of failure of a structure is calculated.
Reliability for safety and reliability for availability are often closely related. Lost availability of an engineering system can cost money. If a subway system is unavailable the subway operator will lose money for each hour the system is down. The subway operator will lose more money if safety is compromised. The definition of reliability is tied to a probability of not encountering a failure. A failure can cause loss of safety, loss of availability or both. It is undesirable to lose safety or availability in a critical system.
Reliability engineering is concerned with the overall minimisation of failures that could lead to financial losses for the responsible entity, whereas safety engineering focuses on minimising a specific set of failure types that in general could lead to loss of life, injury, or damage to equipment.
Reliability hazards could transform into incidents leading to a loss of revenue for the company or the customer, for example due to direct and indirect costs associated with: loss of production due to system unavailability; unexpected high or low demands for spares; repair costs; man-hours; re-designs or interruptions to normal production.[40]
Safety engineering is often highly specific, relating only to certain tightly regulated industries, applications, or areas. It primarily focuses on system safety hazards that could lead to severe accidents including: loss of life; destruction of equipment; or environmental damage. As such, the related system functional reliability requirements are often extremely high. Although it deals with unwanted failures in the same sense as reliability engineering, it, however, has less of a focus on direct costs, and is not concerned with post-failure repair actions. Another difference is the level of impact of failures on society, leading to a tendency for strict control by governments or regulatory bodies (e.g. nuclear, aerospace, defense, rail and oil industries).[40]
Safety can be increased using a 2oo2 cross-checked redundant system. Availability can be increased by using "1oo2" (1 out of 2) redundancy at a part or system level. If the two redundant elements disagree, the more permissive element will maximize availability; a 1oo2 system should never be relied on for safety. Fault-tolerant systems often rely on additional redundancy (e.g. 2oo3 voting logic) where multiple redundant elements must agree on a potentially unsafe action before it is performed. This increases both availability and safety at a system level. This is common practice in aerospace systems that need continued availability and do not have a fail-safe mode. For example, aircraft may use triple modular redundancy for flight computers and control surfaces (including occasionally different modes of operation, e.g. electrical/mechanical/hydraulic), as these need to always be operational, because there are no "safe" default positions for control surfaces such as rudders or ailerons when the aircraft is flying.
The above example of a 2oo3 fault-tolerant system increases both mission reliability as well as safety. However, the "basic" reliability of the system will in this case still be lower than that of a non-redundant (1oo1) or 2oo2 system. Basic reliability engineering covers all failures, including those that might not result in system failure but do result in additional cost due to maintenance repair actions, logistics, spare parts, etc. For example, replacement or repair of one faulty channel in a 2oo3 voting system (the system is still operating, although with one failed channel it has actually become a 2oo2 system) contributes to basic unreliability but not to mission unreliability. Similarly, the failure of the tail-light of an aircraft will not prevent the plane from flying (and so is not considered a mission failure), but it does need to be remedied (with a related cost, and so does contribute to basic unreliability levels).
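The mission reliability of such k-out-of-n voting architectures can be sketched with the binomial formula; the channel reliability used here is an illustrative value:

```python
from math import comb

# Sketch comparing voting architectures built from identical,
# independent channels of reliability r (an illustrative value).
def k_of_n(k, n, r):
    """Probability that at least k of n independent channels survive."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r = 0.9
print(round(k_of_n(1, 1, r), 4))  # 1oo1: 0.9
print(round(k_of_n(2, 3, r), 4))  # 2oo3 mission reliability: 0.972
print(round(r**3, 4))             # no channel fails at all: 0.729
```

Note how 2oo3 mission reliability (0.972) exceeds that of a single channel (0.9), while the probability that no channel fails at all, the "basic" reliability driving maintenance cost, is only 0.729, illustrating the trade-off described above.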
When using fault tolerant (redundant) systems or systems that are equipped with protection functions, detectability of failures and avoidance of common cause failures becomes paramount for safe functioning and/or mission reliability.
Quality often focuses on manufacturing defects during the warranty phase. Reliability looks at the failure intensity over the whole life of a product or engineering system from commissioning to decommissioning. Six Sigma has its roots in statistical control of manufacturing quality. Reliability engineering is a specialty part of systems engineering. The systems engineering process is a discovery process that is often unlike a manufacturing process. A manufacturing process is often focused on repetitive activities that achieve high-quality outputs with minimum cost and time.[41]
The everyday usage term "quality of a product" is loosely taken to mean its inherent degree of excellence. In industry, a more precise definition of quality as "conformance to requirements or specifications at the start of use" is used. Assuming the final product specification adequately captures the original requirements and customer/system needs, the quality level can be measured as the fraction of product units shipped that meet specifications.[42] Manufactured goods quality often focuses on the number of warranty claims during the warranty period.
Quality is a snapshot at the start of life through the warranty period and is related to the control of lower-level product specifications. This includes time-zero defects, i.e. where manufacturing mistakes escaped final quality control. In theory the quality level might be described by a single fraction of defective products. Reliability, as a part of systems engineering, acts as more of an ongoing assessment of failure rates over many years. Theoretically, all items will fail over an infinite period of time.[43] Defects that appear over time are referred to as reliability fallout. To describe reliability fallout, a probability model that describes the fraction of fallout over time is needed; this is known as the life distribution model.[42] Some of these reliability issues may be due to inherent design issues, which may exist even though the product conforms to specifications. Even items that are produced perfectly will fail over time due to one or more failure mechanisms (e.g. human error or mechanical, electrical, and chemical factors). These reliability issues can also be influenced by acceptable levels of variation during initial production.
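The simplest life distribution model is the exponential distribution (constant failure rate), under which the fraction of units failed by time t is 1 − e^(−t/MTBF). A minimal sketch, with a hypothetical 50,000-hour MTBF:

```python
import math

def fraction_failed(t_hours, mtbf_hours):
    """Fraction of units failed by time t under an exponential life
    distribution with constant failure rate lambda = 1/MTBF."""
    lam = 1.0 / mtbf_hours
    return 1.0 - math.exp(-lam * t_hours)

# Quality is a time-zero snapshot; reliability fallout accumulates in service:
print(round(fraction_failed(1_000, 50_000), 4))   # 0.0198 (after 1,000 h)
print(round(fraction_failed(50_000, 50_000), 4))  # 0.6321 (by t = MTBF)
```

Note that roughly 63% of units fail by the MTBF under this model; more realistic models (e.g. Weibull) allow the failure rate itself to change over life.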
Quality and reliability are, therefore, related to manufacturing. Reliability is more targeted towards clients who are focused on failures throughout the whole life of the product such as the military, airlines or railroads. Items that do not conform to product specification will generally do worse in terms of reliability (having a lower MTTF), but this does not always have to be the case. The full mathematical quantification (in statistical models) of this combined relation is in general very difficult or even practically impossible. In cases where manufacturing variances can be effectively reduced, six sigma tools have been shown to be useful to find optimal process solutions which can increase quality and reliability. Six Sigma may also help to design products that are more robust to manufacturing induced failures and infant mortality defects in engineering systems and manufactured product.
In contrast with Six Sigma, reliability engineering solutions are generally found by focusing on reliability testing and system design. Solutions are found in different ways, such as by simplifying a system to allow more of the mechanisms of failure involved to be understood; performing detailed calculations of material stress levels allowing suitable safety factors to be determined; finding possible abnormal system load conditions and using this to increase robustness of a design to manufacturing variance related failure mechanisms. Furthermore, reliability engineering uses system-level solutions, like designing redundant and fault-tolerant systems for situations with high availability needs (see Reliability engineering vs Safety engineering above).
Note: a "defect" in Six Sigma/quality literature is not the same as a "failure" in reliability (e.g. a field failure such as a fractured item). A Six Sigma/quality defect refers generally to non-conformance with a requirement (e.g. basic functionality or a key dimension). Items can, however, fail over time, even if these requirements are all fulfilled. Quality is generally not concerned with asking the crucial question "are the requirements actually correct?", whereas reliability is.
Once systems or parts are being produced, reliability engineering attempts to monitor, assess, and correct deficiencies. Monitoring includes electronic and visual surveillance of critical parameters identified during the fault tree analysis design stage. Data collection is highly dependent on the nature of the system. Most large organizations have quality control groups that collect failure data on vehicles, equipment and machinery. Consumer product failures are often tracked by the number of returns. For systems in dormant storage or on standby, it is necessary to establish a formal surveillance program to inspect and test random samples. Any changes to the system, such as field upgrades or recall repairs, require additional reliability testing to ensure the reliability of the modification. Since it is not possible to anticipate all the failure modes of a given system, especially ones with a human element, failures will occur. The reliability program also includes a systematic root cause analysis that identifies the causal relationships involved in the failure such that effective corrective actions may be implemented. When possible, system failures and corrective actions are reported to the reliability engineering organization.
Some of the most common methods to apply to a reliability operational assessment are failure reporting, analysis, and corrective action systems (FRACAS). This systematic approach develops a reliability, safety, and logistics assessment based on failure/incident reporting, management, analysis, and corrective/preventive actions. Organizations today are adopting this method and utilizing commercial systems (such as Web-based FRACAS applications) that enable them to create a failure/incident data repository from which statistics can be derived to view accurate and genuine reliability, safety, and quality metrics.
It is extremely important for an organization to adopt a common FRACAS system for all end items. Also, it should allow test results to be captured in a practical way. Failure to adopt one easy-to-use (in terms of ease of data-entry for field engineers and repair shop engineers) and easy-to-maintain integrated system is likely to result in a failure of the FRACAS program itself.
Some of the common outputs from a FRACAS system include Field MTBF, MTTR, spares consumption, reliability growth, failure/incidents distribution by type, location, part no., serial no., and symptom.
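As a sketch of how such outputs might be derived from a FRACAS repository (the figures and field names below are hypothetical, chosen only for illustration):

```python
def field_metrics(operating_hours, failures, repair_hours):
    """Field MTBF, MTTR, and steady-state availability from fleet data."""
    mtbf = operating_hours / failures          # mean time between failures
    mttr = sum(repair_hours) / len(repair_hours)  # mean time to repair
    availability = mtbf / (mtbf + mttr)
    return mtbf, mttr, availability

mtbf, mttr, avail = field_metrics(
    operating_hours=120_000,           # cumulative fleet operating hours
    failures=40,                       # relevant failures reported via FRACAS
    repair_hours=[2.0, 6.0, 4.0, 4.0]  # sampled repair durations
)
print(mtbf, mttr, round(avail, 5))     # 3000.0 4.0 0.99867
```

Real FRACAS tools additionally filter by failure relevance, location, part number and symptom before computing such statistics.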
The use of past data to predict the reliability of new comparable systems/items can be misleading as reliability is a function of the context of use and can be affected by small changes in design/manufacturing.
Systems of any significant complexity are developed by organizations of people, such as a commercial company or a government agency. The reliability engineering organization must be consistent with the company's organizational structure. For small, non-critical systems, reliability engineering may be informal. As complexity grows, the need arises for a formal reliability function. Because reliability is important to the customer, the customer may even specify certain aspects of the reliability organization.
There are several common types of reliability organizations. The project manager or chief engineer may employ one or more reliability engineers directly. In larger organizations, there is usually a product assurance or specialty engineering organization, which may include reliability, maintainability, quality, safety, human factors, logistics, etc. In such cases, the reliability engineer reports to the product assurance manager or specialty engineering manager.
In some cases, a company may wish to establish an independent reliability organization. This is desirable to ensure that the system reliability effort, which is often expensive and time-consuming, is not unduly slighted due to budget and schedule pressures. In such cases, the reliability engineer works for the project day-to-day, but is actually employed and paid by a separate organization within the company.
Because reliability engineering is critical to early system design, it has become common for reliability engineers, however the organization is structured, to work as part of an integrated product team.
Some universities offer graduate degrees in reliability engineering. Other reliability professionals typically have a physics degree from a university or college program. Many engineering programs offer reliability courses, and some universities have entire reliability engineering programs. A reliability engineer may be registered as a professional engineer by the state or province, but not all reliability professionals are engineers. Reliability engineers are required in systems where public safety is at risk. There are many professional conferences and industry training programs available for reliability engineers. Several professional organizations exist for reliability engineers, including the American Society for Quality Reliability Division (ASQ-RD),[44] the IEEE Reliability Society, the American Society for Quality (ASQ),[45] and the Society of Reliability Engineers (SRE).[46]
SAE JA1000/1 Reliability Program Standard Implementation Guide (http://standards.sae.org/ja1000/1_199903/)
In the UK, there are more up to date standards maintained under the sponsorship of UK MOD as Defence Standards. The relevant Standards include:
DEF STAN 00-40 Reliability and Maintainability (R&M)
DEF STAN 00-42 Reliability and Maintainability Assurance Guides
DEF STAN 00-43 Reliability and Maintainability Assurance Activity
DEF STAN 00-44 Reliability and Maintainability Data Collection and Classification
DEF STAN 00-45 Issue 1: Reliability Centered Maintenance
DEF STAN 00-49 Issue 1: Reliability and Maintainability MOD Guide to Terminology Definitions
These can be obtained from DSTAN. There are also many commercial standards, produced by many organisations including the SAE, MSG, ARP, and IEE.
|
https://en.wikipedia.org/wiki/Reliability_engineering
|
A specification often refers to a set of documented requirements to be satisfied by a material, design, product, or service.[1] A specification is often a type of technical standard.
There are different types of technical or engineering specifications (specs), and the term is used differently in different technical contexts. They often refer to particular documents, and/or particular information within them. The word specification is broadly defined as "to state explicitly or in detail" or "to be specific".
A requirement specification is a documented requirement, or set of documented requirements, to be satisfied by a given material, design, product, service, etc.[1] It is a common early part of engineering design and product development processes in many fields.
A functional specification is a kind of requirement specification, and may show functional block diagrams.
A design or product specification describes the features of the solutions for the requirement specification, referring to either a designed solution or final produced solution. It is often used to guide fabrication/production. Sometimes the term specification is used here in connection with a data sheet (or spec sheet), which may be confusing. A data sheet describes the technical characteristics of an item or product, often published by a manufacturer to help people choose or use the products. A data sheet is not a technical specification in the sense of informing how to produce.
An "in-service" or "maintained as" specification specifies the conditions of a system or object after years of operation, including the effects of wear and maintenance (configuration changes).
Specifications are a type of technical standard that may be developed by any of various kinds of organizations, in both the public and private sectors. Example organization types include a corporation, a consortium (a small group of corporations), a trade association (an industry-wide group of corporations), a national government (including its different public entities, regulatory agencies, and national laboratories and institutes), a professional association (society), a purpose-made standards organization such as ISO, or vendor-neutral developed generic requirements. It is common for one organization to refer to (reference, call out, cite) the standards of another. Voluntary standards may become mandatory if adopted by a government or business contract.
In engineering, manufacturing, and business, it is vital for suppliers, purchasers, and users of materials, products, or services to understand and agree upon all requirements.[2]
A specification may refer to a standard which is often referenced by a contract or procurement document, or an otherwise agreed upon set of requirements (though still often used in the singular). In any case, it provides the necessary details about the specific requirements.
Standards for specifications may be provided by government agencies, standards organizations (SAE, AWS, NIST, ASTM, ISO/IEC, CEN/CENELEC, DoD, etc.), trade associations, corporations, and others. A memorandum published by William J. Perry, U.S. Defense Secretary, on 29 June 1994 announced that a move to "greater use of performance and commercial specifications and standards" was to be introduced, which Perry saw as "one of the most important actions that [the Department of Defense] should take" at that time.[3] The following British standards apply to specifications:
A design/product specification does not necessarily prove a product to be correct or useful in every context. An item might be verified to comply with a specification or stamped with a specification number: this does not, by itself, indicate that the item is fit for other, non-validated uses. The people who use the item (engineers, trade unions, etc.) or specify the item (building codes, government, industry, etc.) have the responsibility to consider the choice of available specifications, specify the correct one, enforce compliance, and use the item correctly. Validation of suitability is necessary.
Public sector procurement rules in the European Union and United Kingdom require non-discriminatory technical specifications to be used to identify the purchasing organisation's requirements. The rules relating to public works contracts initially prohibited "technical specifications having a discriminatory effect" from 1971; this principle was extended to public supply contracts by the then European Communities' Directive 77/62/EEC coordinating procedures for the award of public supply contracts, adopted in 1976.[7] Some organisations provide guidance on specification-writing for their staff and partners.[8][9] In addition to identifying the specific attributes required of the goods or services being purchased, specifications in the public sector may also make reference to the organisation's current corporate objectives or priorities.[8]: 3
Sometimes a guide or a standard operating procedure is available to help write and format a good specification.[10][11][12] A specification might include:
Specifications in North America form part of the contract documents that accompany and govern the drawings for construction of building and infrastructure projects. Specifications describe the quality and performance of building materials, using code citations and published standards, whereas the drawings or building information model (BIM) illustrate quantity and location of materials. The guiding master document of names and numbers is the latest edition of MasterFormat. This is a consensus document that is jointly sponsored by two professional organizations, Construction Specifications Canada and the Construction Specifications Institute (based in the United States), and is updated every two years.
While there is a tendency to believe that "specifications overrule drawings" in the event of discrepancies between the text document and the drawings, the actual intent must be made explicit in the contract between the owner and the contractor. The standard AIA (American Institute of Architects) and EJCDC (Engineering Joint Contract Documents Committee) documents state that the drawings and specifications are complementary, together providing the information required for a complete facility. Many public agencies, such as the Naval Facilities Command (NAVFAC), state that the specifications overrule the drawings. This is based on the idea that words are easier for a jury (or mediator) to interpret than drawings in case of a dispute.
The standard listing of construction specifications falls into 50 divisions, or broad categories of work types and work results involved in construction. The divisions are subdivided into sections, each one addressing a specific material type (concrete) or a work product (steel door) of the construction work. A specific material may be covered in several locations, depending on the work result: stainless steel (for example) can be covered as a sheet material used in flashing and sheet metal in division 07; it can be part of a finished product, such as a handrail, covered in division 05; or it can be a component of building hardware, covered in division 08. The original listing of specification divisions was based on the time sequence of construction, working from exterior to interior, and this logic is still somewhat followed as new materials and systems make their way into the construction process.
Each section is subdivided into three distinct parts: "general", "products" and "execution". The MasterFormat and SectionFormat[20] systems can be successfully applied to residential, commercial, civil, and industrial construction, although many architects find the rather voluminous commercial style of specifications too lengthy for most residential projects and therefore either produce more abbreviated specifications of their own or use ArCHspec (which was specifically created for residential projects). Master specification systems are available from multiple vendors such as Arcom, Visispec, BSD, and Spectext. These systems were created to standardize language across the United States and are usually subscription based.
Specifications can be either "performance-based", whereby the specifier restricts the text to stating the performance that must be achieved by the completed work, "prescriptive" where the specifier states the specific criteria such as fabrication standards applicable to the item, or "proprietary", whereby the specifier indicates specific products, vendors and even contractors that are acceptable for each workscope. In addition, specifications can be "closed" with a specific list of products, or "open" allowing for substitutions made by the constructor. Most construction specifications are a combination of performance-based and proprietary types, naming acceptable manufacturers and products while also specifying certain standards and design criteria that must be met.
While North American specifications are usually restricted to broad descriptions of the work, European ones and civil works can include actual work quantities, including such things as area of drywall to be built in square meters, like a bill of materials. This type of specification is a collaborative effort between a specification writer and a quantity surveyor. This approach is unusual in North America, where each bidder performs a quantity survey on the basis of both drawings and specifications. In many countries on the European continent, content that might be described as "specifications" in the United States is covered under the building code or municipal code. Civil and infrastructure work in the United States often includes a quantity breakdown of the work to be performed as well.
Although specifications are usually issued by the architect's office, specification writing itself is undertaken by the architect and the various engineers or by specialist specification writers. Specification writing is often a distinct professional trade, with professional certifications such as "Certified Construction Specifier" (CCS) available through the Construction Specifications Institute and the Registered Specification Writer (RSW)[21] through Construction Specifications Canada. Specification writers may be separate entities such as sub-contractors or they may be employees of architects, engineers, or construction management companies. Specification writers frequently meet with manufacturers of building materials who seek to have their products specified on upcoming construction projects so that contractors can include their products in the estimates leading to their proposals.
In February 2015, ArCHspec went live, from ArCH (Architects Creating Homes), a nationwide American professional society of architects whose purpose is to improve residential architecture. ArCHspec was created specifically for use by licensed architects while designing SFR (Single Family Residential) architectural projects. Unlike the more commercial CSI/CSC (50+ division commercial specifications), ArCHspec utilizes the more concise 16 traditional Divisions, plus a Division 0 (Scope & Bid Forms) and Division 17 (low voltage). Many architects, up to this point, did not provide specifications for residential designs, which is one of the reasons ArCHspec was created: to fill a void in the industry with more compact specifications for residential projects. Shorter form specifications documents suitable for residential use are also available through Arcom, and follow the 50 division format, which was adopted in both the United States and Canada starting in 2004. The 16 division format is no longer considered standard, and is not supported by either CSI or CSC, or any of the subscription master specification services, data repositories, product lead systems, and the bulk of governmental agencies.
The United States' Federal Acquisition Regulation governing procurement for the federal government and its agencies stipulates that a copy of the drawings and specifications must be kept available on a construction site.[22]
Specifications in Egypt form part of contract documents. The Housing and Building National Research Center (HBRC) is responsible for developing construction specifications and codes. The HBRC has published more than 15 books which cover building activities like earthworks, plastering, etc.
Specifications in the UK are part of the contract documents that accompany and govern the construction of a building. They are prepared by construction professionals such as architects, architectural technologists, structural engineers, landscape architects and building services engineers. They are created from previous project specifications, in-house documents or master specifications such as the National Building Specification (NBS). The National Building Specification is owned by the Royal Institute of British Architects (RIBA) through their commercial group RIBA Enterprises (RIBAe). NBS master specifications provide content that is broad and comprehensive, and delivered using software functionality that enables specifiers to customize the content to suit the needs of the project and to keep up to date.
UK project specification types fall into two main categories: prescriptive and performance. Prescriptive specifications define the requirements using generic or proprietary descriptions of what is required, whereas performance specifications focus on the outcomes rather than the characteristics of the components.
Specifications are an integral part of Building Information Modeling and cover the non-geometric requirements.
Pharmaceutical products can usually be tested and qualified by various pharmacopoeias. Current existing pharmaceutical standards include:
If any pharmaceutical product is not covered by the above standards, it can be evaluated by the additional source of pharmacopoeias from other nations, from industrial specifications, or from a standardized formulary such as
A similar approach is adopted in food manufacturing, in which Codex Alimentarius ranks the highest among standards, followed by regional and national standards.[23]
ISO's coverage of food and drug standards is currently less developed and has not yet been put forward as an urgent agenda item, due to the tight restrictions of regional or national regulation.[24][25]
Specifications and other standards can be externally imposed, as discussed above, but internal manufacturing and quality specifications also exist. These exist not only for the food or pharmaceutical product but also for the processing machinery, quality processes, packaging, logistics (cold chain), etc., and are exemplified by ISO 14134 and ISO 15609.[26][27]
The converse of explicit statement of specifications is a process for dealing with observations that are out-of-specification. The United States Food and Drug Administration has published a non-binding recommendation that addresses just this point.[28]
At the present time, much of the information and regulations concerning food and food products remain in a form which makes it difficult to apply automated information processing, storage and transmission methods and techniques.
Data systems that can process, store and transfer information about food and food products need formal specifications for the representations of data about food and food products in order to operate effectively and efficiently.
Development of formal specifications for food and drug data, with the necessary and sufficient clarity and precision for use specifically by digital computing systems, has begun to emerge from some government agencies and standards organizations: the United States Food and Drug Administration has published specifications for a "Structured Product Label" which drug manufacturers must by mandate use to submit electronically the information on a drug label.[29] Recently, the ISO has made some progress in the area of food and drug standards and formal specifications for data about regulated substances through the publication of ISO 11238.[30]
In many contexts, particularly software, specifications are needed to avoid errors due to lack of compatibility, for instance, in interoperability issues.
For instance, when two applications share Unicode data but use different normal forms, or use them incorrectly, in an incompatible way or without sharing a minimum set of interoperability specifications, errors and data loss can result. For example, Mac OS X has many components that prefer or require only decomposed characters (thus decomposed-only Unicode encoded with UTF-8 is also known as "UTF8-MAC"). In one specific instance, the combination of OS X errors in handling composed characters and the samba file- and printer-sharing software (which replaces decomposed letters with composed ones when copying file names) has led to confusing and data-destroying interoperability problems.[31][32]
Applications may avoid such errors by preserving input code points, and normalizing them to only the application's preferred normal form for internal use.
Such errors may also be avoided with algorithms normalizing both strings before any binary comparison.
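In Python, for example, such a normalizing comparison can be written with the standard unicodedata module (an illustrative sketch; NFC is chosen here, though either normal form works as long as both sides use the same one):

```python
import unicodedata

def same_text(a: str, b: str) -> bool:
    """Compare strings after normalizing both to NFC, so composed and
    decomposed forms (e.g. from different filesystems) compare equal."""
    return unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b)

composed   = "caf\u00e9"       # precomposed e-acute (U+00E9)
decomposed = "cafe\u0301"      # 'e' + combining acute (the "UTF8-MAC" style)
print(composed == decomposed)          # False: raw binary comparison fails
print(same_text(composed, decomposed)) # True:  normalized comparison succeeds
```

The same idea underlies the recommendation above: preserve input code points, but normalize to one agreed form before any binary comparison.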
However, errors due to file name encoding incompatibilities have always existed, due to the lack of a minimum set of common specifications between software intended to be interoperable across various file system drivers, operating systems, network protocols, and thousands of software packages.
A formal specification is a mathematical description of software or hardware that may be used to develop an implementation. It describes what the system should do, not (necessarily) how the system should do it. Given such a specification, it is possible to use formal verification techniques to demonstrate that a candidate system design is correct with respect to that specification. This has the advantage that incorrect candidate system designs can be revised before a major investment has been made in actually implementing the design. An alternative approach is to use provably correct refinement steps to transform a specification into a design, and ultimately into an actual implementation, that is correct by construction.
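True formal verification uses proof tools rather than testing, but the what-versus-how distinction can be illustrated with a toy sketch: the specification states only the required result, and any candidate implementation can be checked against it (the names binary_search and satisfies_spec are hypothetical, not from any standard):

```python
def binary_search(xs, target):
    """One candidate implementation (the *how*)."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def satisfies_spec(xs, target, result):
    """The specification (the *what*): either a valid index of target,
    or -1 exactly when target is absent. Says nothing about the algorithm."""
    if result == -1:
        return target not in xs
    return 0 <= result < len(xs) and xs[result] == target

xs = [1, 3, 5, 7]
for t in [0, 1, 4, 7]:
    assert satisfies_spec(xs, t, binary_search(xs, t))
```

A formal-methods tool would prove this relation for all inputs, not merely the sampled ones shown here.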
In (hardware, software, or enterprise) systems development, an architectural specification is the set of documentation that describes the structure, behavior, and more views of that system.
A program specification is the definition of what a computer program is expected to do. It can be informal, in which case it can be considered as a user manual from a developer point of view, or formal, in which case it has a definite meaning defined in mathematical or programmatic terms. In practice, many successful specifications are written to understand and fine-tune applications that were already well-developed, although safety-critical software systems are often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable.
In software development, a functional specification (also functional spec, specs, or functional specifications document (FSD)) is the set of documentation that describes the behavior of a computer program or larger software system. The documentation typically describes various inputs that can be provided to the software system and how the system responds to those inputs.
Web services specifications are often under the umbrella of a quality management system.[33]
These types of documents define how a specific document should be written, which may include, but is not limited to, the systems of document naming, version, layout, referencing, structuring, appearance, language, copyright, hierarchy or format, etc.[34][35] Very often, this kind of specification is complemented by a designated template.[36][37][38]
|
https://en.wikipedia.org/wiki/Specification
|
Engineering tolerance is the permissible limit or limits of variation in:
Dimensions, properties, or conditions may have some variation without significantly affecting functioning of systems, machines, structures, etc. A variation beyond the tolerance (for example, a temperature that is too hot or too cold) is said to be noncompliant, rejected, or exceeding the tolerance.
A primary concern is to determine how wide the tolerances may be without affecting other factors or the outcome of a process. This can be determined by the use of scientific principles, engineering knowledge, and professional experience. Experimental investigation is very useful for determining the effects of tolerances: design of experiments, formal engineering evaluations, etc.
A good set of engineering tolerances in a specification, by itself, does not imply that compliance with those tolerances will be achieved. Actual production of any product (or operation of any system) involves some inherent variation of input and output. Measurement error and statistical uncertainty are also present in all measurements. With a normal distribution, the tails of measured values may extend well beyond plus and minus three standard deviations from the process average. Appreciable portions of one (or both) tails might extend beyond the specified tolerance.
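The expected fraction of production outside a tolerance band follows directly from the normal model. A minimal sketch (the mean, sigma, and limits below are arbitrary illustrative values):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def fraction_out_of_tolerance(mean, sigma, lower, upper):
    """Expected fraction of measured values falling outside [lower, upper]."""
    return phi((lower - mean) / sigma) + (1.0 - phi((upper - mean) / sigma))

# A process centered in a +/-3-sigma tolerance band: about 0.27% nonconforming.
print(round(fraction_out_of_tolerance(10.0, 0.1, 9.7, 10.3), 5))  # 0.0027
```

Shifting the process mean, or widening sigma, rapidly inflates this fraction, which is why tolerances must be judged against the actual process, not in isolation.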
The process capability of systems, materials, and products needs to be compatible with the specified engineering tolerances. Process controls must be in place and an effective quality management system, such as Total Quality Management, needs to keep actual production within the desired tolerances. A process capability index is used to indicate the relationship between tolerances and actual measured production.
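The two most common capability indices can be sketched as follows (an illustrative example with arbitrary specification limits; Cp compares tolerance width to process spread, Cpk additionally penalizes an off-center mean):

```python
def cp_cpk(mean, sigma, lsl, usl):
    """Process capability indices for lower/upper specification limits."""
    cp  = (usl - lsl) / (6 * sigma)                    # spread vs. tolerance
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)    # also penalizes centering
    return cp, cpk

# Centered process: Cp == Cpk
cp, cpk = cp_cpk(10.0, 0.1, 9.7, 10.3)
print(round(cp, 3), round(cpk, 3))   # 1.0 1.0
# Off-center process: Cpk drops even though the spread is unchanged
cp, cpk = cp_cpk(10.1, 0.1, 9.7, 10.3)
print(round(cp, 3), round(cpk, 3))   # 1.0 0.667
```

A Cpk of 1.0 corresponds to the mean sitting three standard deviations from the nearer limit; many industries require 1.33 or more.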
The choice of tolerances is also affected by the intended statisticalsampling planand its characteristics such as the Acceptable Quality Level. This relates to the question of whether tolerances must be extremely rigid (high confidence in 100% conformance) or whether some small percentage of being out-of-tolerance may sometimes be acceptable.
Genichi Taguchiand others have suggested that traditional two-sided tolerancing is analogous to "goal posts" in afootball game: It implies that all data within those tolerances are equally acceptable. The alternative is that the best product has a measurement which is precisely on target. There is an increasing loss which is a function of the deviation or variability from the target value of any design parameter. The greater the deviation from target, the greater is the loss. This is described as theTaguchi loss functionorquality loss function, and it is the key principle of an alternative system calledinertial tolerancing.
Research and development work conducted by M. Pillet and colleagues[1]at the Savoy University has resulted in industry-specific adoption.[2]Recently the publishing of the French standard NFX 04-008 has allowed further consideration by the manufacturing community.
Dimensional tolerance is related to, but different fromfitin mechanical engineering, which is adesigned-inclearance or interference between two parts. Tolerances are assigned to parts for manufacturing purposes, as boundaries for acceptable build. No machine can hold dimensions precisely to the nominal value, so there must be acceptable degrees of variation. If a part is manufactured, but has dimensions that are out of tolerance, it is not a usable part according to the design intent. Tolerances can be applied to any dimension. The commonly used terms are:
This is identical to the upper deviation for shafts and the lower deviation for holes.[3] If the fundamental deviation is greater than zero, the bolt will always be smaller than the basic size and the hole will always be wider. Fundamental deviation is a form of allowance, rather than tolerance.
For example, if a shaft with a nominal diameter of 10 mm is to have a sliding fit within a hole, the shaft might be specified with a tolerance range from 9.964 to 10 mm (i.e., a zero fundamental deviation, but a lower deviation of 0.036 mm) and the hole might be specified with a tolerance range from 10.04 mm to 10.076 mm (0.04 mm fundamental deviation and 0.076 mm upper deviation). This would provide a clearance fit of somewhere between 0.04 mm (largest shaft paired with the smallest hole, called the Maximum Material Condition, MMC) and 0.112 mm (smallest shaft paired with the largest hole, the Least Material Condition, LMC). In this case the size of the tolerance range for both the shaft and hole is chosen to be the same (0.036 mm), meaning that both components have the same International Tolerance grade, but this need not be the case in general.
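The clearance arithmetic in this example can be sketched in a few lines; the numeric limits below are the ones from the running shaft/hole example, and the variable names are illustrative only:

```python
# Sketch of the sliding-fit example above (all values in mm).
# Shaft limits: 9.964-10.000; hole limits: 10.040-10.076.
shaft_min, shaft_max = 9.964, 10.000
hole_min, hole_max = 10.040, 10.076

# Maximum Material Condition: largest shaft paired with smallest hole.
mmc_clearance = hole_min - shaft_max
# Least Material Condition: smallest shaft paired with largest hole.
lmc_clearance = hole_max - shaft_min

print(round(mmc_clearance, 3))  # 0.04
print(round(lmc_clearance, 3))  # 0.112
```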
When no other tolerances are provided, themachining industryuses the followingstandard tolerances:[4][5]
When designing mechanical components, a system of standardized tolerances called International Tolerance grades is often used. The standard (size) tolerances are divided into two categories: hole and shaft. They are labelled with a letter (capitals for holes and lowercase for shafts) and a number. For example: H7 (hole, tapped hole, or nut) and h7 (shaft or bolt). H7/h6 is a very common standard tolerance which gives a tight fit. The tolerances work in such a way that for a hole H7 means that the hole should be made slightly larger than the base dimension (in this case for an ISO fit 10 +0.015/−0, meaning that it may be up to 0.015 mm larger than the base dimension, and 0 mm smaller). The actual amount bigger/smaller depends on the base dimension. For a shaft of the same size, h6 would mean 10 +0/−0.009, which means the shaft may be as small as 0.009 mm smaller than the base dimension and 0 mm larger. This method of standard tolerances is also known as Limits and Fits and can be found in ISO 286-1:2010.
The table below summarises the International Tolerance (IT) grades and the general applications of these grades:
An analysis of fit bystatistical interferenceis also extremely useful: It indicates the frequency (or probability) of parts properly fitting together.
An electrical specification might call for a resistor with a nominal value of 100 Ω (ohms), but will also state a tolerance such as "±1%". This means that any resistor with a value in the range 99–101 Ω is acceptable. For critical components, one might specify that the actual resistance must remain within tolerance within a specified temperature range, over a specified lifetime, and so on.
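The acceptance check described here is a simple comparison against nominal ± tolerance; the following is a minimal sketch with an illustrative function name:

```python
def within_tolerance(measured, nominal, tol_fraction):
    """Check whether a measured value lies inside nominal +/- tolerance."""
    return abs(measured - nominal) <= nominal * tol_fraction

# A 100-ohm resistor with a +/-1% tolerance: acceptable range is 99-101 ohms.
print(within_tolerance(99.5, 100.0, 0.01))   # True
print(within_tolerance(101.2, 100.0, 0.01))  # False
```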
Many commercially availableresistorsandcapacitorsof standard types, and some smallinductors, are often marked withcoloured bandsto indicate their value and the tolerance. High-precision components of non-standard values may have numerical information printed on them.
Low tolerance means only a small deviation from the component's given value, when new, under normal operating conditions and at room temperature. Higher tolerance means the component may have a wider range of possible values.
The terms are often confused but sometimes a difference is maintained. SeeAllowance (engineering) § Confounding of the engineering concepts of allowance and tolerance.
Incivil engineering,clearancerefers to the difference between theloading gaugeand thestructure gaugein the case ofrailroad carsortrams, or the difference between the size of anyvehicleand the width/height of doors, the width/height of anoverpassor thediameterof atunnelas well as theair draftunder abridge, the width of alockor diameter of a tunnel in the case ofwatercraft. In addition there is the difference between thedeep draftand thestream bedorsea bedof awaterway.
|
https://en.wikipedia.org/wiki/Tolerance_(engineering)
|
In set theory in mathematics and formal logic, two sets are said to be disjoint sets if they have no element in common. Equivalently, two disjoint sets are sets whose intersection is the empty set.[1] For example, {1, 2, 3} and {4, 5, 6} are disjoint sets, while {1, 2, 3} and {3, 4, 5} are not disjoint. A collection of two or more sets is called disjoint if any two distinct sets of the collection are disjoint.
This definition of disjoint sets can be extended tofamilies of setsand toindexed familiesof sets.
By definition, a collection of sets is called afamily of sets(such as thepower set, for example). In some sources this is a set of sets, while other sources allow it to be amultisetof sets, with some sets repeated.
Anindexed familyof sets(Ai)i∈I,{\displaystyle \left(A_{i}\right)_{i\in I},}is by definition a set-valuedfunction(that is, it is a function that assigns a setAi{\displaystyle A_{i}}to every elementi∈I{\displaystyle i\in I}in its domain) whose domainI{\displaystyle I}is called itsindex set(and elements of its domain are calledindices).
There are two subtly different definitions for when a family of setsF{\displaystyle {\mathcal {F}}}is calledpairwise disjoint. According to one such definition, the family is disjoint if each two sets in the family are either identical or disjoint. This definition would allow pairwise disjoint families of sets to have repeated copies of the same set. According to an alternative definition, each two sets in the family must be disjoint; repeated copies are not allowed. The same two definitions can be applied to an indexed family of sets: according to the first definition, every two distinct indices in the family must name sets that are disjoint or identical, while according to the second, every two distinct indices must name disjoint sets.[2]For example, the family of sets{ {0, 1, 2}, {3, 4, 5}, {6, 7, 8}, ... }is disjoint according to both definitions, as is the family{ {..., −2, 0, 2, 4, ...}, {..., −3, −1, 1, 3, 5} }of the two parity classes of integers. However, the family({n+2k∣k∈Z})n∈{0,1,…,9}{\displaystyle (\{n+2k\mid k\in \mathbb {Z} \})_{n\in \{0,1,\ldots ,9\}}}with 10 members has five repetitions each of two disjoint sets, so it is pairwise disjoint under the first definition but not under the second.
Two sets are said to bealmost disjoint setsif their intersection is small in some sense. For instance, twoinfinite setswhose intersection is afinite setmay be said to be almost disjoint.[3]
Intopology, there are various notions ofseparated setswith more strict conditions than disjointness. For instance, two sets may be considered to be separated when they have disjointclosuresor disjointneighborhoods. Similarly, in ametric space,positively separated setsare sets separated by a nonzerodistance.[4]
Disjointness of two sets, or of a family of sets, may be expressed in terms ofintersectionsof pairs of them.
Two setsAandBare disjoint if and only if their intersectionA∩B{\displaystyle A\cap B}is theempty set.[1]It follows from this definition that every set is disjoint from the empty set,
and that the empty set is the only set that is disjoint from itself.[5]
If a collection contains at least two sets, the condition that the collection is disjoint implies that the intersection of the whole collection is empty. However, a collection of sets may have an empty intersection without being disjoint. Additionally, while a collection of less than two sets is trivially disjoint, as there are no pairs to compare, the intersection of a collection of one set is equal to that set, which may be non-empty.[2]For instance, the three sets{ {1, 2}, {2, 3}, {1, 3} }have an empty intersection but are not disjoint. In fact, there are no two disjoint sets in this collection.
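The distinction between pairwise disjointness and an empty overall intersection can be checked directly; this sketch uses the {1, 2}, {2, 3}, {1, 3} example from the text, with illustrative function names:

```python
from itertools import combinations

def pairwise_disjoint(sets):
    """True if every two sets in the collection are disjoint."""
    return all(a.isdisjoint(b) for a, b in combinations(sets, 2))

def empty_intersection(sets):
    """True if the intersection of the whole collection is empty."""
    sets = list(sets)
    inter = set(sets[0])
    for s in sets[1:]:
        inter &= s
    return not inter

family = [{1, 2}, {2, 3}, {1, 3}]
print(empty_intersection(family))  # True: no element lies in all three
print(pairwise_disjoint(family))   # False: every pair intersects
```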
The empty family of sets is pairwise disjoint.[6]
AHelly familyis a system of sets within which the only subfamilies with empty intersections are the ones that are pairwise disjoint. For instance, theclosed intervalsof thereal numbersform a Helly family: if a family of closed intervals has an empty intersection and is minimal (i.e. no subfamily of the family has an empty intersection), it must be pairwise disjoint.[7]
Apartition of a setXis any collection of mutually disjoint non-empty sets whoseunionisX.[8]Every partition can equivalently be described by anequivalence relation, abinary relationthat describes whether two elements belong to the same set in the partition.[8]Disjoint-set data structures[9]andpartition refinement[10]are two techniques in computer science for efficiently maintaining partitions of a set subject to, respectively, union operations that merge two sets or refinement operations that split one set into two.
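A disjoint-set (union–find) data structure, as mentioned above, maintains a partition under union operations; the following is a minimal sketch with path compression, not a tuned implementation:

```python
class DisjointSet:
    """Minimal union-find over a fixed element universe (a sketch)."""
    def __init__(self, elements):
        # Initially every element is its own singleton block.
        self.parent = {x: x for x in elements}

    def find(self, x):
        # Follow parent pointers to the representative, compressing the path.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        # Merge the blocks containing x and y.
        self.parent[self.find(x)] = self.find(y)

ds = DisjointSet(range(6))
ds.union(0, 1)
ds.union(1, 2)
print(ds.find(0) == ds.find(2))  # True: 0 and 2 are in the same block
print(ds.find(0) == ds.find(5))  # False: their blocks are still disjoint
```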
Adisjoint unionmay mean one of two things. Most simply, it may mean the union of sets that are disjoint.[11]But if two or more sets are not already disjoint, their disjoint union may be formed by modifying the sets to make them disjoint before forming the union of the modified sets.[12]For instance two sets may be made disjoint by replacing each element by an ordered pair of the element and a binary value indicating whether it belongs to the first or second set.[13]For families of more than two sets, one may similarly replace each element by an ordered pair of the element and the index of the set that contains it.[14]
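The tagging construction for disjoint unions, replacing each element by a pair of the element and the index of its set, can be sketched as:

```python
def disjoint_union(*sets):
    """Tag each element with the index of the set it came from."""
    return {(i, x) for i, s in enumerate(sets) for x in s}

u = disjoint_union({1, 2, 3}, {3, 4})
print(len(u))  # 5: the shared element 3 is kept once per originating set
```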
|
https://en.wikipedia.org/wiki/Disjoint_sets
|
This is aglossary of graph theory.Graph theoryis the study ofgraphs, systems of nodes orverticesconnected in pairs by lines oredges.
|
https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms
|
Incomputer science,graph transformation, orgraph rewriting, concerns the technique of creating a newgraphout of an original graph algorithmically. It has numerous applications, ranging fromsoftware engineering(software constructionand alsosoftware verification) tolayout algorithmsand picture generation.
Graph transformations can be used as a computation abstraction. The basic idea is that if the state of a computation can be represented as a graph, further steps in that computation can then be represented as transformation rules on that graph. Such rules consist of an original graph, which is to be matched to a subgraph in the complete state, and a replacing graph, which will replace the matched subgraph.
Formally, a graphrewritingsystem usually consists of a set of graph rewrite rules of the formL→R{\displaystyle L\rightarrow R}, withL{\displaystyle L}being called pattern graph (or left-hand side) andR{\displaystyle R}being called replacement graph (or right-hand side of the rule). A graph rewrite rule is applied to the host graph by searching for an occurrence of the pattern graph (pattern matching, thus solving thesubgraph isomorphism problem) and by replacing the found occurrence by an instance of the replacement graph. Rewrite rules can be further regulated in the case oflabeled graphs, such as in string-regulated graph grammars.
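As a toy illustration of matching and replacing (not the semantics of any particular graph-rewriting tool), the sketch below represents a labelled host graph as a node-to-label dictionary plus an edge set; the rule matches a one-edge pattern by its endpoint labels and contracts the matched pair into a fresh node. All names here are hypothetical:

```python
def apply_rule(labels, edges, pattern, new_label):
    """Find one edge whose endpoint labels match `pattern` (the pattern
    graph L) and contract it into a fresh node labelled `new_label` (the
    replacement R), reconnecting the neighbours of the matched pair."""
    for u, v in sorted(edges):
        if (labels[u], labels[v]) in (pattern, pattern[::-1]):
            w = max(labels) + 1                       # fresh node id
            labels[w] = new_label
            # Redirect every edge incident to u or v to the new node.
            edges |= {tuple(sorted((w, x)))
                      for a, b in edges if {a, b} & {u, v}
                      for x in {a, b} - {u, v}}
            edges -= {e for e in edges if {u, v} & set(e)}
            del labels[u], labels[v]
            return True                               # rule applied
    return False                                      # no match found

labels = {0: 'A', 1: 'B', 2: 'X'}
edges = {(0, 1), (1, 2)}
apply_rule(labels, edges, ('A', 'B'), 'C')
print(labels)  # {2: 'X', 3: 'C'}
print(edges)   # {(2, 3)}
```

Real systems differ in how they handle the interface graph K and dangling edges; this sketch simply reconnects all neighbours, which is closer in spirit to the SPO approach described below.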
Sometimesgraph grammaris used as a synonym forgraph rewriting system, especially in the context offormal languages; the different wording is used to emphasize the goal of constructions, like the enumeration of all graphs from some starting graph, i.e. the generation of a graph language – instead of simply transforming a given state (host graph) into a new state.
The algebraic approach to graph rewriting is based uponcategory theory. The algebraic approach is further divided into sub-approaches, the most common of which are thedouble-pushout (DPO) approachand thesingle-pushout (SPO) approach. Other sub-approaches include thesesqui-pushoutand thepullback approach.
From the perspective of the DPO approach a graph rewriting rule is a pair ofmorphismsin the category of graphs andgraph homomorphismsbetween them:r=(L←K→R){\displaystyle r=(L\leftarrow K\rightarrow R)}, also writtenL⊇K⊆R{\displaystyle L\supseteq K\subseteq R}, whereK→L{\displaystyle K\rightarrow L}isinjective. The graph K is calledinvariantor sometimes thegluing graph. Arewritingsteporapplicationof a rule r to ahost graphG is defined by twopushoutdiagrams both originating in the samemorphismk:K→D{\displaystyle k\colon K\rightarrow D}, where D is acontext graph(this is where the namedouble-pushout comes from). Another graph morphismm:L→G{\displaystyle m\colon L\rightarrow G}models an occurrence of L in G and is called amatch. Practical understanding of this is thatL{\displaystyle L}is a subgraph that is matched fromG{\displaystyle G}(seesubgraph isomorphism problem), and after a match is found,L{\displaystyle L}is replaced withR{\displaystyle R}in host graphG{\displaystyle G}whereK{\displaystyle K}serves as an interface, containing the nodes and edges which are preserved when applying the rule. The graphK{\displaystyle K}is needed to attach the pattern being matched to its context: if it is empty, the match can only designate a whole connected component of the graphG{\displaystyle G}.
In contrast a graph rewriting rule of the SPO approach is a single morphism in the category oflabeled multigraphsandpartial mappingsthat preserve the multigraph structure:r:L→R{\displaystyle r\colon L\rightarrow R}. Thus a rewriting step is defined by a singlepushoutdiagram. Practical understanding of this is similar to the DPO approach. The difference is, that there is no interface between the host graph G and the graph G' being the result of the rewriting step.
From the practical perspective, the key distinction between DPO and SPO is how they deal with the deletion of nodes with adjacent edges, in particular, how they avoid that such deletions may leave behind "dangling edges". The DPO approach only deletes a node when the rule specifies the deletion of all adjacent edges as well (thisdangling conditioncan be checked for a given match), whereas the SPO approach simply disposes the adjacent edges, without requiring an explicit specification.
There is also another algebraic-like approach to graph rewriting, based mainly on Boolean algebra and an algebra of matrices, calledmatrix graph grammars.[1]
Yet another approach to graph rewriting, known asdeterminategraph rewriting, came out oflogicanddatabase theory.[2]In this approach, graphs are treated as database instances, and rewriting operations as a mechanism for defining queries and views; therefore, all rewriting is required to yield unique results (up to isomorphism), and this is achieved by applying any rewriting rule concurrently throughout the graph, wherever it applies, in such a way that the result is indeed uniquely defined.
Another approach to graph rewriting isterm graphrewriting, which involves the processing or transformation of term graphs (also known asabstract semantic graphs) by a set of syntactic rewrite rules.
Term graphs are a prominent topic in programming language research since term graph rewriting rules are capable of formally expressing a compiler'soperational semantics. Term graphs are also used as abstract machines capable of modelling chemical and biological computations as well as graphical calculi such as concurrency models. Term graphs can performautomated verificationand logical programming since they are well-suited to representing quantified statements in first order logic. Symbolic programming software is another application for term graphs, which are capable of representing and performing computation with abstract algebraic structures such as groups, fields and rings.
The TERMGRAPH conference[3]focuses entirely on research into term graph rewriting and its applications.
Graph rewriting systems naturally group into classes according to the kind of representation of graphs that are used and how the rewrites are expressed. The term graph grammar, otherwise equivalent to graph rewriting system or graph replacement system, is most often used in classifications. Some common types are:
Graphs are an expressive, visual and mathematically precise formalism for modelling of objects (entities) linked by relations; objects are represented by nodes and relations between them by edges. Nodes and edges are commonly typed and attributed. Computations are described in this model by changes in the relations between the entities or by attribute changes of the graph elements. They are encoded in graph rewrite/graph transformation rules and executed by graph rewrite systems/graph transformation tools.
|
https://en.wikipedia.org/wiki/Graph_rewriting
|
Ingraph theory, a division ofmathematics, amedian graphis anundirected graphin which every threeverticesa,b, andchave a uniquemedian: a vertexm(a,b,c) that belongs toshortest pathsbetween each pair ofa,b, andc.
The concept of median graphs has long been studied, for instance byBirkhoff & Kiss (1947)or (more explicitly) byAvann (1961), but the first paper to call them "median graphs" appears to beNebeský (1971). AsChung,Graham, and Saks write, "median graphs arise naturally in the study of ordered sets and discretedistributive lattices, and have an extensive literature".[1]Inphylogenetics, the Buneman graph representing allmaximum parsimonyevolutionary treesis a median graph.[2]Median graphs also arise insocial choice theory: if a set of alternatives has the structure of a median graph, it is possible to derive in an unambiguous way a majority preference among them.[3]
Additional surveys of median graphs are given byKlavžar & Mulder (1999),Bandelt & Chepoi (2008), andKnuth (2008).
Everytreeis a median graph. To see this, observe that in a tree, the union of the three shortest paths between pairs of the three verticesa,b, andcis either itself a path, or a subtree formed by three paths meeting at a single central node withdegreethree. If the union of the three paths is itself a path, the medianm(a,b,c) is equal to one ofa,b, orc, whichever of these three vertices is between the other two in the path. If the subtree formed by the union of the three paths is not a path, the median of the three vertices is the central degree-three node of the subtree.[4]
Additional examples of median graphs are provided by thegrid graphs. In a grid graph, the coordinates of the medianm(a,b,c) can be found as the median of the coordinates ofa,b, andc. Conversely, it turns out that, in every median graph, one may label the vertices by points in aninteger latticein such a way that medians can be calculated coordinatewise in this way.[5]
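The coordinatewise median for grid graphs can be computed directly: in each coordinate, take the middle of the three values. A minimal sketch with an illustrative function name:

```python
def grid_median(a, b, c):
    """Median vertex of three grid points, computed coordinatewise."""
    return tuple(sorted(coords)[1] for coords in zip(a, b, c))

print(grid_median((0, 0), (2, 1), (1, 3)))  # (1, 1)
```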
Squaregraphs, planar graphs in which all interior faces are quadrilaterals and all interior vertices have four or more incident edges, are another subclass of the median graphs.[6]Apolyominois a special case of a squaregraph and therefore also forms a median graph.[7]
Thesimplex graphκ(G) of an arbitrary undirected graphGhas a vertex for everyclique(complete subgraph) ofG; two vertices of κ(G) are linked by an edge if the corresponding cliques differ by one vertex ofG. The simplex graph is always a median graph, in which the median of a given triple of cliques may be formed by using themajority ruleto determine which vertices of the cliques to include.[8]
No cycle graph of length other than four can be a median graph. Every such cycle has three vertices a, b, and c such that the three shortest paths wrap all the way around the cycle without having a common intersection. For such a triple of vertices, there can be no median.
In an arbitrary graph, for each two vertices a and b, the minimal number of edges between them is called their distance, denoted by d(a,b). The interval of vertices that lie on shortest paths between a and b is defined as I(a,b) = {v : d(a,v) + d(v,b) = d(a,b)}.
A median graph is defined by the property that, for every three vertices a, b, and c, these intervals intersect in a single point: I(a,b) ∩ I(a,c) ∩ I(b,c) = {m(a,b,c)}.
Equivalently, for every three vertices a, b, and c one can find a vertex m(a,b,c) such that the unweighted distances in the graph satisfy the equalities d(a,b) = d(a,m(a,b,c)) + d(m(a,b,c),b), d(a,c) = d(a,m(a,b,c)) + d(m(a,b,c),c), and d(b,c) = d(b,m(a,b,c)) + d(m(a,b,c),c), and m(a,b,c) is the only vertex for which this is true.
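The defining property can be checked by brute force on small graphs: compute all pairwise distances by breadth-first search and count, for each triple, the vertices lying on shortest paths between all three pairs. This sketch (function names are illustrative) confirms that a path is a median graph while the six-cycle is not:

```python
from collections import deque
from itertools import combinations

def distances(adj, source):
    """BFS distances from `source` in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_median_graph(adj):
    """Check the defining property directly: every triple of vertices
    has exactly one vertex on shortest paths between all three pairs."""
    d = {v: distances(adj, v) for v in adj}
    for a, b, c in combinations(adj, 3):
        medians = [m for m in adj
                   if d[a][m] + d[m][b] == d[a][b]
                   and d[a][m] + d[m][c] == d[a][c]
                   and d[b][m] + d[m][c] == d[b][c]]
        if len(medians) != 1:
            return False
    return True

# A path (a tree) is a median graph; the 6-cycle is not.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(is_median_graph(path))  # True
print(is_median_graph(c6))    # False
```

This exhaustive check takes cubic time in the number of vertices and is only meant to illustrate the definition, not the near-linear recognition algorithms discussed later.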
It is also possible to define median graphs as the solution sets of2-satisfiabilityproblems, as the retracts ofhypercubes, as the graphs of finitemedian algebras, as the Buneman graphs of Helly split systems, and as the graphs of windex 2; see the sections below.
Inlattice theory, the graph of afinitelatticehas a vertex for each lattice element and an edge for each pair of elements in thecovering relationof the lattice. Lattices are commonly presented visually viaHasse diagrams, which aredrawingsof graphs of lattices. These graphs, especially in the case ofdistributive lattices, turn out to be closely related to median graphs.
In a distributive lattice, Birkhoff's self-dual ternary median operation[9] m(a,b,c) = (a ∧ b) ∨ (a ∧ c) ∨ (b ∧ c)
satisfies certain key axioms, which it shares with the usual median of numbers in the range from 0 to 1 and with median algebras more generally: idempotence (m(a,a,b) = a), commutativity (invariance under permuting the three arguments), and a distributive law.
The distributive law may be replaced by an associative law.[10]
The median operation may also be used to define a notion of intervals for distributive lattices: I(a,b) = {x : m(a,x,b) = x}.
The graph of a finite distributive lattice has an edge between vertices a and b whenever I(a,b) = {a,b}. For every two vertices a and b of this graph, the interval I(a,b) defined in lattice-theoretic terms above consists of the vertices on shortest paths from a to b, and thus coincides with the graph-theoretic intervals defined earlier. For every three lattice elements a, b, and c, m(a,b,c) is the unique intersection of the three intervals I(a,b), I(a,c), and I(b,c).[12] Therefore, the graph of an arbitrary finite distributive lattice is a median graph. Conversely, if a median graph G contains two vertices 0 and 1 such that every other vertex lies on a shortest path between the two (equivalently, m(0,a,1) = a for all a), then we may define a distributive lattice in which a ∧ b = m(a,0,b) and a ∨ b = m(a,1,b), and G will be the graph of this lattice.[13]
Duffus & Rival (1983)characterize graphs of distributive lattices directly as diameter-preserving retracts of hypercubes. More generally, every median graph gives rise to a ternary operationmsatisfying idempotence, commutativity, and distributivity, but possibly without the identity elements of a distributive lattice. Every ternary operation on a finite set that satisfies these three properties (but that does not necessarily have 0 and 1 elements) gives rise in the same way to a median graph.[14]
In a median graph, a setSof vertices is said to beconvexif, for every two verticesaandbbelonging toS, the whole intervalI(a,b) is a subset ofS. Equivalently, given the two definitions of intervals above,Sis convex if it contains every shortest path between two of its vertices, or if it contains the median of every set of three points at least two of which are fromS. Observe that the intersection of every pair of convex sets is itself convex.[15]
The convex sets in a median graph have theHelly property: ifFis an arbitrary family of pairwise-intersecting convex sets, then all sets inFhave a common intersection.[16]For, ifFhas only three convex setsS,T, andUin it, withain the intersection of the pairSandT,bin the intersection of the pairTandU, andcin the intersection of the pairSandU, then every shortest path fromatobmust lie withinTby convexity, and similarly every shortest path between the other two pairs of vertices must lie within the other two sets; butm(a,b,c) belongs to paths between all three pairs of vertices, so it lies within all three sets, and forms part of their common intersection. IfFhas more than three convex sets in it, the result follows by induction on the number of sets, for one may replace an arbitrary pair of sets inFby their intersection, using the result for triples of sets to show that the replaced family is still pairwise intersecting.
A particularly important family of convex sets in a median graph, playing a role similar to that of halfspaces in Euclidean space, are the sets Wuv = {w : d(w,u) < d(w,v)}, defined for each edge uv of the graph. In words, Wuv consists of the vertices closer to u than to v, or equivalently the vertices w such that some shortest path from v to w goes through u.
To show that Wuv is convex, let w1w2...wk be an arbitrary shortest path that starts and ends within Wuv; then w2 must also lie within Wuv, for otherwise the two points m1 = m(u,w1,wk) and m2 = m(m1,w2,wk) could be shown (by considering the possible distances between the vertices) to be distinct medians of u, w1, and wk, contradicting the definition of a median graph which requires medians to be unique. Thus, each successive vertex on a shortest path between two vertices of Wuv also lies within Wuv, so Wuv contains all shortest paths between its nodes, one of the definitions of convexity.
The Helly property for the setsWuvplays a key role in the characterization of median graphs as the solution of 2-satisfiability instances, below.
Median graphs have a close connection to the solution sets of2-satisfiabilityproblems that can be used both to characterize these graphs and to relate them to adjacency-preserving maps of hypercubes.[17]
A 2-satisfiability instance consists of a collection of Boolean variables and a collection of clauses, constraints on certain pairs of variables requiring those two variables to avoid certain combinations of values. Usually such problems are expressed in conjunctive normal form, in which each clause is expressed as a disjunction and the whole set of constraints is expressed as a conjunction of clauses, such as (x ∨ y) ∧ (¬x ∨ z) ∧ (¬y ∨ ¬z).
A solution to such an instance is an assignment oftruth valuesto the variables that satisfies all the clauses, or equivalently that causes the conjunctive normal form expression for the instance to become true when the variable values are substituted into it. The family of all solutions has a natural structure as a median algebra, where the median of three solutions is formed by choosing each truth value to be themajority functionof the values in the three solutions; it is straightforward to verify that this median solution cannot violate any of the clauses. Thus, these solutions form a median graph, in which the neighbor of each solution is formed by negating a set of variables that are all constrained to be equal or unequal to each other.
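The majority-of-three construction on solutions can be demonstrated concretely; the 2-CNF instance below, (x ∨ y) ∧ (¬x ∨ z), is a made-up example, and the function name is illustrative:

```python
def majority(s1, s2, s3):
    """Coordinatewise majority of three truth assignments (dicts)."""
    return {x: (s1[x] + s2[x] + s3[x]) >= 2 for x in s1}

# Hypothetical instance (x or y) and (not x or z); three of its solutions:
clauses = [lambda a: a['x'] or a['y'], lambda a: (not a['x']) or a['z']]
s1 = {'x': True,  'y': False, 'z': True}
s2 = {'x': False, 'y': True,  'z': False}
s3 = {'x': True,  'y': True,  'z': True}

m = majority(s1, s2, s3)
print(m)                           # {'x': True, 'y': True, 'z': True}
print(all(c(m) for c in clauses))  # True: the majority is again a solution
```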
Conversely, every median graphGmay be represented in this way as the solution set to a 2-satisfiability instance. To find such a representation, create a 2-satisfiability instance in which each variable describes the orientation of one of the edges in the graph (an assignment of a direction to the edge causing the graph to becomedirectedrather than undirected) and each constraint allows two edges to share a pair of orientations only when there exists a vertexvsuch that both orientations lie along shortest paths from other vertices tov. Each vertexvofGcorresponds to a solution to this 2-satisfiability instance in which all edges are directed towardsv. Each
solution to the instance must come from some vertexvin this way, wherevis the common intersection of the setsWuwfor edges directed fromwtou; this common intersection exists due to the Helly property of the setsWuw. Therefore, the solutions to this 2-satisfiability instance correspond one-for-one with the vertices ofG.
A retraction of a graph G is an adjacency-preserving map from G to one of its subgraphs.[18] More precisely, it is a graph homomorphism φ from G to itself such that φ(v) = v for each vertex v in the subgraph φ(G). The image of the retraction is called a retract of G.
Retractions are examples ofmetric maps: the distance between φ(v) and φ(w), for everyvandw, is at most equal to the distance betweenvandw, and is equal whenevervandwboth belong to φ(G). Therefore, a retract must be anisometric subgraphofG: distances in the retract equal those inG.
IfGis a median graph, anda,b, andcare an arbitrary three vertices of a retract φ(G), then φ(m(a,b,c)) must be a median ofa,b, andc, and so must equalm(a,b,c). Therefore, φ(G) contains medians of all triples of its vertices, and must also be a median graph. In other words, the family of median graphs isclosedunder the retraction operation.[19]
Ahypercube graph, in which the vertices correspond to all possiblek-bitbitvectorsand in which two vertices are adjacent when the corresponding bitvectors differ in only a single bit, is a special case of ak-dimensional grid graph and is therefore a median graph. The median of three bitvectorsa,b, andcmay be calculated by computing, in each bit position, themajority functionof the bits ofa,b, andc. Since median graphs are closed under retraction, and include the hypercubes, every retract of a hypercube is a median graph.
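Representing bitvectors as integers, the hypercube median is the bitwise majority function, expressible with three ANDs and two ORs:

```python
def hypercube_median(a, b, c):
    """Median of three hypercube vertices: bitwise majority of the bits."""
    return (a & b) | (a & c) | (b & c)

m = hypercube_median(0b0011, 0b0101, 0b0110)
print(format(m, '04b'))  # 0111
```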
Conversely, every median graph must be the retract of a hypercube.[20]This may be seen from the connection, described above, between median graphs and 2-satisfiability: letGbe the graph of solutions to a 2-satisfiability instance; without loss of generality this instance can be formulated in such a way that no two variables are always equal or always unequal in every solution. Then the space of all truth assignments to the variables of this instance forms a hypercube. For each clause, formed as the disjunction of two variables or their complements, in the 2-satisfiability instance, one can form a retraction of the hypercube in which truth assignments violating this clause are mapped to truth assignments in which both variables satisfy the clause, without changing the other variables in the truth assignment. The composition of the retractions formed in this way for each of the clauses gives a retraction of the hypercube onto the solution space of the instance, and therefore gives a representation ofGas the retract of a hypercube. In particular, median graphs are isometric subgraphs of hypercubes, and are thereforepartial cubes. However, not all partial cubes are median graphs; for instance, a six-vertexcycle graphis a partial cube but is not a median graph.
AsImrich & Klavžar (2000)describe, an isometric embedding of a median graph into a hypercube may be constructed in time O(mlogn), wherenandmare the numbers of vertices and edges of the graph respectively.[21]
The problems of testing whether a graph is a median graph, and whether a graph is triangle-free, both had been well studied when Imrich, Klavžar & Mulder (1999) observed that, in some sense, they are computationally equivalent.[22] Therefore, the best known time bound for testing whether a graph is triangle-free, O(m^1.41),[23] applies as well to testing whether a graph is a median graph, and any improvement in median graph testing algorithms would also lead to an improvement in algorithms for detecting triangles in graphs.
In one direction, suppose one is given as input a graphG, and must test whetherGis triangle-free. FromG, construct a new graphHhaving as vertices each set of zero, one, or two adjacent vertices ofG. Two such sets are adjacent inHwhen they differ by exactly one vertex. An equivalent description ofHis that it is formed by splitting each edge ofGinto a path of two edges, and adding a new vertex connected to all the original vertices ofG. This graphHis by construction a partial cube, but it is a median graph only whenGis triangle-free: ifa,b, andcform a triangle inG, then {a,b}, {a,c}, and {b,c} have no median inH, for such a median would have to correspond to the set {a,b,c}, but sets of three or more vertices ofGdo not form vertices inH. Therefore,Gis triangle-free if and only ifHis a median graph. In the case thatGis triangle-free,His itssimplex graph. An algorithm to test efficiently whetherHis a median graph could by this construction also be used to test whetherGis triangle-free. This transformation preserves the computational complexity of the problem, for the size ofHis proportional to that ofG.
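The construction of H from G described above is short enough to sketch directly. The graph encoding and helper names here (split_graph, has_median) are illustrative, not from any standard library; the median test uses the characterization that a median of x, y, z lies on shortest paths between all three pairs. For a triangle, the three edge-sets indeed have no median:

```python
from collections import deque

def split_graph(G):
    """Build H from G: vertices are the empty set, the singletons, and the
    adjacent pairs of G; two set-vertices are adjacent in H exactly when
    they differ by one element."""
    verts = ([frozenset()] +
             [frozenset([v]) for v in G["vertices"]] +
             [frozenset(e) for e in G["edges"]])
    return {v: {w for w in verts if len(v ^ w) == 1} for v in verts}

def bfs_dist(adj, s):
    """Breadth-first search distances from s."""
    d, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def has_median(adj, x, y, z):
    """A median of x, y, z is a vertex lying on shortest paths between
    each of the three pairs simultaneously."""
    dx, dy, dz = (bfs_dist(adj, v) for v in (x, y, z))
    return any(dx[m] + dy[m] == dx[y] and
               dx[m] + dz[m] == dx[z] and
               dy[m] + dz[m] == dy[z] for m in adj)

triangle = {"vertices": "abc", "edges": [("a", "b"), ("a", "c"), ("b", "c")]}
H = split_graph(triangle)
print(has_median(H, frozenset("ab"), frozenset("ac"), frozenset("bc")))  # False
print(has_median(H, frozenset("a"), frozenset("b"), frozenset("c")))     # True
```

The three singletons do have a median (the empty set), but the three edge-sets do not, matching the argument that H fails to be a median graph exactly when G contains a triangle.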
The reduction in the other direction, from triangle detection to median graph testing, is more involved and depends on the previous median graph recognition algorithm of Hagauer, Imrich & Klavžar (1999), which tests several necessary conditions for median graphs in near-linear time. The key new step involves using a breadth first search to partition the graph's vertices into levels according to their distances from some arbitrarily chosen root vertex, forming a graph from each level in which two vertices are adjacent if they share a common neighbor in the previous level, and searching for triangles in these graphs. The median of any such triangle must be a common neighbor of the three triangle vertices; if this common neighbor does not exist, the graph is not a median graph. If all triangles found in this way have medians, and the previous algorithm finds that the graph satisfies all the other conditions for being a median graph, then it must actually be a median graph. This algorithm requires not just the ability to test whether a triangle exists, but a list of all triangles in the level graph. In arbitrary graphs, listing all triangles sometimes requires Ω(m^3/2) time, as some graphs have that many triangles; however, Hagauer et al. show that the number of triangles arising in the level graphs of their reduction is near-linear, allowing the fast matrix multiplication based triangle-finding technique of Alon et al. to be used.
Phylogenyis the inference ofevolutionary treesfrom observed characteristics ofspecies; such a tree must place the species at distinct vertices, and may have additionallatent vertices, but the latent vertices are required to have three or more incident edges and must also be labeled with characteristics. A characteristic isbinarywhen it has only two possible values, and a set of species and their characteristics exhibitperfect phylogenywhen there exists an evolutionary tree in which the vertices (species and latent vertices) labeled with any particular characteristic value form a contiguous subtree. If a tree with perfect phylogeny is not possible, it is often desired to find one exhibitingmaximum parsimony, or equivalently, minimizing the number of times the endpoints of a tree edge have different values for one of the characteristics, summed over all edges and all characteristics.
Buneman (1971)described a method for inferring perfect phylogenies for binary characteristics, when they exist. His method generalizes naturally to the construction of a median graph for any set of species and binary characteristics, which has been called themedian networkorBuneman graph[24]and is a type ofphylogenetic network. Every maximum parsimony evolutionary tree embeds into the Buneman graph, in the sense that tree edges follow paths in the graph and the number of characteristic value changes on the tree edge is the same as the number in the corresponding path. The Buneman graph will be a tree if and only if a perfect phylogeny exists; this happens when there are no two incompatible characteristics for which all four combinations of characteristic values are observed.
To form the Buneman graph for a set of species and characteristics, first, eliminate redundant species that are indistinguishable from some other species and redundant characteristics that are always the same as some other characteristic. Then, form a latent vertex for every combination of characteristic values such that every two of the values exist in some known species. In the example shown, there are small brown tailless mice, small silver tailless mice, small brown tailed mice, large brown tailed mice, and large silver tailed mice; the Buneman graph method would form a latent vertex corresponding to an unknown species of small silver tailed mice, because every pairwise combination (small and silver, small and tailed, and silver and tailed) is observed in some other known species. However, the method would not infer the existence of large brown tailless mice, because no mice are known to have both the large and tailless traits. Once the latent vertices are determined, form an edge between every pair of species or latent vertices that differ in a single characteristic.
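A direct implementation of this construction is short; encoding species as tuples of characteristic values is an assumption made here for illustration, and the function name buneman_graph is not from any standard library. On the mouse example, exactly one latent species (small silver tailed) is inferred:

```python
from itertools import product, combinations

def buneman_graph(species):
    """Vertices: every combination of characteristic values in which each
    pair of values co-occurs in some known species (this includes all the
    known species themselves).  Edges: combinations differing in exactly
    one characteristic."""
    k = len(species[0])
    observed = {((i, s[i]), (j, s[j]))
                for s in species for i, j in combinations(range(k), 2)}
    values = [sorted({s[i] for s in species}) for i in range(k)]
    verts = [v for v in product(*values)
             if all(((i, v[i]), (j, v[j])) in observed
                    for i, j in combinations(range(k), 2))]
    edges = [(a, b) for a, b in combinations(verts, 2)
             if sum(x != y for x, y in zip(a, b)) == 1]
    return verts, edges

mice = [("small", "brown", "tailless"),
        ("small", "silver", "tailless"),
        ("small", "brown", "tailed"),
        ("large", "brown", "tailed"),
        ("large", "silver", "tailed")]
verts, edges = buneman_graph(mice)
print([v for v in verts if v not in mice])  # [('small', 'silver', 'tailed')]
```

No vertex is created for large tailless mice, since the pair (large, tailless) is observed in no species, matching the description above.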
One can equivalently describe a collection of binary characteristics as asplit system, afamily of setshaving the property that thecomplement setof each set in the family is also in the family. This split system has a set for each characteristic value, consisting of the species that have that value. When the latent vertices are included, the resulting split system has theHelly property: every pairwise intersecting subfamily has a common intersection. In some sense median graphs are characterized as coming from Helly split systems: the pairs (Wuv,Wvu) defined for each edgeuvof a median graph form a Helly split system, so if one applies the Buneman graph construction to this system no latent vertices will be needed and the result will be the same as the starting graph.[25]
Bandelt et al. (1995)andBandelt, Macaulay & Richards (2000)describe techniques for simplified hand calculation of the Buneman graph, and use this construction to visualize human genetic relationships.
|
https://en.wikipedia.org/wiki/Median_graph
|
In graph theory, the hypercube graph Qn is the graph formed from the vertices and edges of an n-dimensional hypercube. For instance, the cube graph Q3 is the graph formed by the 8 vertices and 12 edges of a three-dimensional cube. Qn has 2^n vertices, 2^(n−1)·n edges, and is a regular graph with n edges touching each vertex.
The hypercube graphQnmay also be constructed by creating a vertex for eachsubsetof ann-element set, with two vertices adjacent when their subsets differ in a single element, or by creating a vertex for eachn-digitbinary number, with two vertices adjacent when their binary representations differ in a single digit. It is then-foldCartesian productof the two-vertexcomplete graph, and may be decomposed into two copies ofQn− 1connected to each other by aperfect matching.
Hypercube graphs should not be confused withcubic graphs, which are graphs that have exactly three edges touching each vertex. The only hypercube graphQnthat is a cubic graph is the cubical graphQ3.
The hypercube graphQnmay be constructed from the family ofsubsetsof asetwithnelements, by making a vertex for each possible subset and joining two vertices by an edge whenever the corresponding subsets differ in a single element. Equivalently, it may be constructed using2nvertices labeled withn-bitbinary numbersand connecting two vertices by an edge whenever theHamming distanceof their labels is one. These two constructions are closely related: a binary number may be interpreted as a set (the set of positions where it has a1digit), and two such sets differ in a single element whenever the corresponding two binary numbers have Hamming distance one.
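The binary-label construction translates directly into code. This sketch (the name hypercube is illustrative) builds Qn from Hamming-distance-one pairs and checks the vertex and edge counts stated above:

```python
from itertools import combinations

def hypercube(n):
    """Q_n: vertices are the n-bit labels 0 .. 2^n - 1; two labels are
    adjacent exactly when their Hamming distance is 1."""
    verts = list(range(2 ** n))
    edges = [(u, v) for u, v in combinations(verts, 2)
             if bin(u ^ v).count("1") == 1]
    return verts, edges

verts, edges = hypercube(4)
print(len(verts), len(edges))  # 16 vertices and 2^(4-1) * 4 = 32 edges
```

Interpreting a label as the set of positions holding a 1 bit recovers the subset construction: XOR-ing two labels and counting 1 bits is exactly the size of the symmetric difference of the corresponding sets.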
Alternatively,Qnmay be constructed from thedisjoint unionof two hypercubesQn− 1, by adding an edge from each vertex in one copy ofQn− 1to the corresponding vertex in the other copy, as shown in the figure. The joining edges form aperfect matching.
The above construction gives a recursive algorithm for constructing the adjacency matrix of a hypercube, An. Copying is done via the Kronecker product, so that the two copies of Qn−1 have an adjacency matrix 1_2 ⊗_K A_{n−1}, where 1_d is the identity matrix in d dimensions. Meanwhile the joining edges have an adjacency matrix A_1 ⊗_K 1_{2^(n−1)}. The sum of these two terms gives a recursive formula for the adjacency matrix of a hypercube:
An={12⊗KAn−1+A1⊗K12n−1ifn>1[0110]ifn=1{\displaystyle A_{n}={\begin{cases}1_{2}\otimes _{K}A_{n-1}+A_{1}\otimes _{K}1_{2^{n-1}}&{\text{if }}n>1\\{\begin{bmatrix}0&1\\1&0\end{bmatrix}}&{\text{if }}n=1\end{cases}}}
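The recursion is easy to check numerically with NumPy's Kronecker product (np.kron); this sketch assumes NumPy is available, and cross-checks the result against the Hamming-distance construction:

```python
import numpy as np

def hypercube_adjacency(n):
    """A_n = 1_2 (x) A_{n-1} + A_1 (x) 1_{2^(n-1)}, with A_1 the 2x2
    anti-diagonal matrix, exactly as in the recursion above."""
    A1 = np.array([[0, 1], [1, 0]])
    if n == 1:
        return A1
    prev = hypercube_adjacency(n - 1)
    return (np.kron(np.eye(2, dtype=int), prev) +
            np.kron(A1, np.eye(2 ** (n - 1), dtype=int)))

A3 = hypercube_adjacency(3)
# agrees with the Hamming-distance construction on binary labels
assert all(A3[u][v] == (bin(u ^ v).count("1") == 1)
           for u in range(8) for v in range(8))
print(A3.sum() // 2)  # 12 edges: the cube graph Q_3
```

The agreement holds because the first Kronecker factor corresponds to the most significant bit of a vertex label, so the two copies of Qn−1 occupy the labels with that bit 0 and 1 respectively.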
Another construction ofQnis theCartesian productofntwo-vertex complete graphsK2. More generally the Cartesian product of copies of a complete graph is called aHamming graph; the hypercube graphs are examples of Hamming graphs.
The graphQ0consists of a single vertex, whileQ1is thecomplete graphon two vertices.
Q2is acycleof length4.
The graphQ3is the1-skeletonof acubeand is a planar graph with eightverticesand twelveedges.
The graphQ4is theLevi graphof theMöbius configuration. It is also theknight's graphfor atoroidal4×4{\displaystyle 4\times 4}chessboard.[1]
Every hypercube graph isbipartite: it can becoloredwith only two colors. The two colors of this coloring may be found from the subset construction of hypercube graphs, by giving one color to the subsets that have an even number of elements and the other color to the subsets with an odd number of elements.
Every hypercubeQnwithn> 1has aHamiltonian cycle, a cycle that visits each vertex exactly once. Additionally, aHamiltonian pathexists between two verticesuandvif and only if they have different colors in a2-coloring of the graph. Both facts are easy to prove using the principle ofinductionon the dimension of the hypercube, and the construction of the hypercube graph by joining two smaller hypercubes with a matching.
Hamiltonicity of the hypercube is tightly related to the theory ofGray codes. More precisely there is abijectivecorrespondence between the set ofn-bit cyclic Gray codes and the set of Hamiltonian cycles in the hypercubeQn.[2]An analogous property holds for acyclicn-bit Gray codes and Hamiltonian paths.
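The correspondence can be checked directly for the standard reflected Gray code; this is a sketch with the verification written as assertions:

```python
def gray_code(n):
    """Standard reflected n-bit Gray code: lists all 2^n labels so that
    consecutive labels (cyclically) differ in exactly one bit."""
    if n == 0:
        return [0]
    prev = gray_code(n - 1)
    return prev + [x | 1 << (n - 1) for x in reversed(prev)]

n = 4
cycle = gray_code(n)
# Hamiltonian cycle in Q_n: every vertex exactly once, and each step
# (including the wrap-around) crosses a hypercube edge
assert sorted(cycle) == list(range(2 ** n))
assert all(bin(cycle[i] ^ cycle[(i + 1) % 2 ** n]).count("1") == 1
           for i in range(2 ** n))
print(cycle)
```

The wrap-around step works because the last label of the reflected code differs from the first (zero) only in the new highest bit.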
A lesser known fact is that every perfect matching in the hypercube extends to a Hamiltonian cycle.[3]The question whether every matching extends to a Hamiltonian cycle remains an open problem.[4]
The hypercube graph Qn (for n > 1):
The familyQnfor alln> 1is aLévy family of graphs.
The problem of finding thelongest pathor cycle that is aninduced subgraphof a given hypercube graph is known as thesnake-in-the-boxproblem.
Szymanski's conjectureconcerns the suitability of a hypercube as anetwork topologyfor communications. It states that, no matter how one chooses apermutationconnecting each hypercube vertex to another vertex with which it should be connected, there is always a way to connect these pairs of vertices bypathsthat do not share any directed edge.[9]
|
https://en.wikipedia.org/wiki/Hypercube_graph
|
Sidorenko's conjectureis a majorconjecturein the field ofextremal graph theory, posed byAlexander Sidorenkoin 1986. Roughly speaking, the conjecture states that for anybipartite graphH{\displaystyle H}andgraphG{\displaystyle G}onn{\displaystyle n}vertices with average degreepn{\displaystyle pn}, there are at leastp|E(H)|n|V(H)|{\displaystyle p^{|E(H)|}n^{|V(H)|}}labeled copies ofH{\displaystyle H}inG{\displaystyle G}, up to a small error term. Formally, it provides an intuitive inequality aboutgraph homomorphismdensities ingraphons. The conjectured inequality can be interpreted as a statement that the density of copies ofH{\displaystyle H}in a graph is asymptotically minimized by a random graph, as one would expect ap|E(H)|{\displaystyle p^{|E(H)|}}fraction of possible subgraphs to be a copy ofH{\displaystyle H}if each edge exists with probabilityp{\displaystyle p}.
LetH{\displaystyle H}be a graph. ThenH{\displaystyle H}is said to haveSidorenko's propertyif, for allgraphonsW{\displaystyle W}, the inequality
is true, wheret(H,W){\displaystyle t(H,W)}is thehomomorphism densityofH{\displaystyle H}inW{\displaystyle W}.
Sidorenko's conjecture (1986) states that every bipartite graph has Sidorenko's property.[1]
IfW{\displaystyle W}is a graphG{\displaystyle G}, this means that the probability of a uniform random mapping fromV(H){\displaystyle V(H)}toV(G){\displaystyle V(G)}being a homomorphism is at least the product over each edge inH{\displaystyle H}of the probability of that edge being mapped to an edge inG{\displaystyle G}. This roughly means that a randomly chosen graph with fixed number of vertices and average degree has the minimum number of labeled copies ofH{\displaystyle H}. This is not a surprising conjecture because the right hand side of the inequality is the probability of the mapping being a homomorphism if each edge map is independent. So one should expect the two sides to be at least of the same order. The natural extension to graphons would follow from the fact that every graphon is thelimit pointof some sequence of graphs.
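The homomorphism-density form of the inequality can be checked by brute force on small graphs. In this sketch, H is a path with three edges, for which Sidorenko's property is a classical fact (the Blakley–Roy inequality); the example graph G and the function name t_density are chosen for illustration:

```python
from itertools import product

def t_density(H_edges, H_n, G_adj):
    """Homomorphism density t(H, G): the fraction of all maps
    V(H) -> V(G) that carry every edge of H to an edge of G."""
    n = len(G_adj)
    hom = sum(all(G_adj[f[u]][f[v]] for u, v in H_edges)
              for f in product(range(n), repeat=H_n))
    return hom / n ** H_n

# example graph G: a 5-cycle with one chord (adjacency matrix)
G = [[0, 1, 0, 0, 1],
     [1, 0, 1, 0, 1],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [1, 1, 0, 1, 0]]
path3 = [(0, 1), (1, 2), (2, 3)]     # bipartite H: a path with 3 edges
p = t_density([(0, 1)], 2, G)        # edge density t(K_2, G) = 12/25
assert t_density(path3, 4, G) >= p ** 3   # Sidorenko's inequality holds
print(p)  # 0.48
```

Here t(K2, G) equals twice the number of edges divided by n², and the assertion is the conjectured inequality t(H, G) ≥ t(K2, G)^|E(H)| for this H.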
The requirement thatH{\displaystyle H}is bipartite to have Sidorenko's property is necessary — ifW{\displaystyle W}is a bipartite graph, thent(K3,W)=0{\displaystyle t(K_{3},W)=0}sinceW{\displaystyle W}is triangle-free. Butt(K2,W){\displaystyle t(K_{2},W)}is twice the number of edges inW{\displaystyle W}, so Sidorenko's property does not hold forK3{\displaystyle K_{3}}. A similar argument shows that no graph with an odd cycle has Sidorenko's property. Since a graph is bipartite if and only if it has no odd cycles, this implies that the only possible graphs that can have Sidorenko's property are bipartite graphs.
Sidorenko's property is equivalent to the following reformulation:
This is equivalent because the number of homomorphisms fromK2{\displaystyle K_{2}}toG{\displaystyle G}is twice the number of edges inG{\displaystyle G}, and the inequality only needs to be checked whenW{\displaystyle W}is a graph as previously mentioned.
In this formulation, since the number of non-injective homomorphisms fromH{\displaystyle H}toG{\displaystyle G}is at most a constant timesn|V(H)|−1{\displaystyle n^{|V(H)|-1}}, Sidorenko's property would imply that there are at least(p|E(H)|−o(1))n|V(H)|{\displaystyle (p^{|E(H)|}-o(1))n^{|V(H)|}}labeled copies ofH{\displaystyle H}inG{\displaystyle G}.
As previously noted, to prove Sidorenko's property it suffices to demonstrate the inequality for all graphsG{\displaystyle G}. Throughout this section,G{\displaystyle G}is a graph onn{\displaystyle n}vertices with average degreepn{\displaystyle pn}. The quantityhom(H,G){\displaystyle \operatorname {hom} (H,G)}refers to the number of homomorphisms fromH{\displaystyle H}toG{\displaystyle G}. This quantity is the same asn|V(H)|t(H,G){\displaystyle n^{|V(H)|}t(H,G)}.
Elementary proofs of Sidorenko's property for some graphs follow from theCauchy–Schwarz inequalityorHölder's inequality. Others can be done by usingspectral graph theory, especially noting the observation that the number of closed paths of lengthℓ{\displaystyle \ell }from vertexi{\displaystyle i}to vertexj{\displaystyle j}inG{\displaystyle G}is the component in thei{\displaystyle i}th row andj{\displaystyle j}th column of the matrixAℓ{\displaystyle A^{\ell }}, whereA{\displaystyle A}is theadjacency matrixofG{\displaystyle G}.
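The closed-walk observation is easy to verify numerically: the trace of A^ℓ counts the closed walks of length ℓ, which for ℓ = 4 is exactly hom(C4, G). A sketch assuming NumPy is available, on an arbitrary small example graph:

```python
import numpy as np

# example graph: a 5-cycle with one chord, chosen for illustration
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 1, 0, 1, 0]])

# spectral count: tr(A^4) = sum of 4th powers of the eigenvalues
trace = int(np.trace(np.linalg.matrix_power(A, 4)))

# brute-force count of homomorphisms C_4 -> G (closed walks of length 4)
n = len(A)
brute = sum(A[a][b] * A[b][c] * A[c][d] * A[d][a]
            for a in range(n) for b in range(n)
            for c in range(n) for d in range(n))
assert trace == brute
print(trace)  # 56
```

The same identity underlies the even-cycle proof below: hom(C_2k, G) = tr(A^2k) is the sum of the 2k-th powers of the eigenvalues of A.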
By fixing two vertices u and v of G, each copy of C4 that has u and v on opposite ends can be identified by choosing two (not necessarily distinct) common neighbors of u and v. Letting codeg(u, v) denote the codegree of u and v (i.e. the number of common neighbors), this implies:

\operatorname{hom}(C_{4},G)=\sum_{u,v\in V(G)}\operatorname{codeg}(u,v)^{2}\geq {\frac {1}{n^{2}}}\left(\sum_{u,v\in V(G)}\operatorname{codeg}(u,v)\right)^{2}

by the Cauchy–Schwarz inequality. The sum has now become a count of all pairs of vertices and their common neighbors, which is the same as the count of all vertices and pairs of their neighbors. So:

\sum_{u,v\in V(G)}\operatorname{codeg}(u,v)=\sum_{w\in V(G)}\deg(w)^{2}\geq {\frac {1}{n}}\left(\sum_{w\in V(G)}\deg(w)\right)^{2}={\frac {(pn^{2})^{2}}{n}}=p^{2}n^{3}

by Cauchy–Schwarz again. So:

\operatorname{hom}(C_{4},G)\geq {\frac {1}{n^{2}}}\left(p^{2}n^{3}\right)^{2}=p^{4}n^{4}=p^{|E(C_{4})|}n^{|V(C_{4})|},

as desired.
Although the Cauchy–Schwarz approach forC4{\displaystyle C_{4}}is elegant and elementary, it does not immediately generalize to all even cycles. However, one can apply spectral graph theory to prove that all even cycles have Sidorenko's property. Note that odd cycles are not accounted for in Sidorenko's conjecture because they are not bipartite.
Using the observation about closed paths, it follows that hom(C_2k, G) is the sum of the diagonal entries in A^2k. This is equal to the trace of A^2k, which in turn is equal to the sum of the 2k-th powers of the eigenvalues of A. If λ1 ≥ λ2 ≥ ⋯ ≥ λn are the eigenvalues of A, then the min-max theorem implies that:

\lambda _{1}\geq {\frac {\mathbf {1} ^{\mathsf {T}}A\mathbf {1} }{\mathbf {1} ^{\mathsf {T}}\mathbf {1} }}={\frac {2|E(G)|}{n}}=pn,

where 1 is the vector with n components, all of which are 1. But then:

\operatorname {hom} (C_{2k},G)=\operatorname {tr} (A^{2k})=\sum _{i=1}^{n}\lambda _{i}^{2k}\geq \lambda _{1}^{2k},

because the eigenvalues of a real symmetric matrix are real, so every term λ_i^2k is nonnegative. So:

\operatorname {hom} (C_{2k},G)\geq (pn)^{2k}=p^{2k}n^{2k}=p^{|E(C_{2k})|}n^{|V(C_{2k})|},

as desired.
J.L. Xiang Li andBalázs Szegedy(2011) introduced the idea of usingentropyto prove some cases of Sidorenko's conjecture. Szegedy (2015) later applied the ideas further to prove that an even wider class of bipartite graphs have Sidorenko's property.[2]While Szegedy's proof wound up being abstract and technical,Tim Gowersand Jason Long reduced the argument to a simpler one for specific cases such as paths of length3{\displaystyle 3}.[3]In essence, the proof chooses a niceprobability distributionof choosing the vertices in the path and appliesJensen's inequality(i.e. convexity) to deduce the inequality.
Here is a list of some bipartite graphsH{\displaystyle H}which have been shown to have Sidorenko's property. LetH{\displaystyle H}have bipartitionA⊔B{\displaystyle A\sqcup B}.
However, there are graphs for which Sidorenko's conjecture is still open. An example is the "Möbius strip" graphK5,5∖C10{\displaystyle K_{5,5}\setminus C_{10}}, formed by removing a10{\displaystyle 10}-cycle from the complete bipartite graph with parts of size5{\displaystyle 5}.
László Lovászproved a local version of Sidorenko's conjecture, i.e. for graphs that are "close" torandom graphsin a sense of cut norm.[11]
A sequence of graphs{Gn}n=1∞{\displaystyle \{G_{n}\}_{n=1}^{\infty }}is calledquasi-random with densityp{\displaystyle p}for some density0<p<1{\displaystyle 0<p<1}if for every graphH{\displaystyle H}:
The sequence of graphs would thus have properties of theErdős–Rényi random graphG(n,p){\displaystyle G(n,p)}.
If the edge densityt(K2,Gn){\displaystyle t(K_{2},G_{n})}is fixed at(1+o(1))p{\displaystyle (1+o(1))p}, then the condition implies that the sequence of graphs is near the equality case in Sidorenko's property for every graphH{\displaystyle H}.
From Chung, Graham, and Wilson's 1989 paper about quasi-random graphs, it suffices for theC4{\displaystyle C_{4}}count to match what would be expected of a random graph (i.e. the condition holds forH=C4{\displaystyle H=C_{4}}).[12]The paper also asks which graphsH{\displaystyle H}have this property besidesC4{\displaystyle C_{4}}. Such graphs are calledforcing graphsas their count controls the quasi-randomness of a sequence of graphs.
The forcing conjecture states the following:
It is straightforward to see that ifH{\displaystyle H}is forcing, then it is bipartite and not a tree. Some examples of forcing graphs are even cycles (shown by Chung, Graham, and Wilson). Skokan and Thoma showed that all complete bipartite graphs that are not trees are forcing.[13]
Sidorenko's conjecture for graphs of densityp{\displaystyle p}follows from the forcing conjecture. Furthermore, the forcing conjecture would show that graphs that are close to equality in Sidorenko's property must satisfy quasi-randomness conditions.[14]
|
https://en.wikipedia.org/wiki/Sidorenko%27s_conjecture
|
Algebraicgraph theoryis a branch ofmathematicsin whichalgebraicmethods are applied to problems aboutgraphs. This is in contrast togeometric,combinatoric, oralgorithmicapproaches. There are three main branches of algebraic graph theory, involving the use oflinear algebra, the use ofgroup theory, and the study ofgraph invariants.
The first branch of algebraic graph theory involves the study of graphs in connection withlinear algebra. Especially, it studies thespectrumof theadjacency matrix, or theLaplacian matrixof a graph (this part of algebraic graph theory is also calledspectral graph theory). For thePetersen graph, for example, the spectrum of the adjacency matrix is (−2, −2, −2, −2, 1, 1, 1, 1, 1, 3). Several theorems relate properties of the spectrum to othergraph properties. As a simple example, aconnectedgraph withdiameterDwill have at leastD+1 distinct values in its spectrum.[1]Aspectsof graph spectra have been used in analysing thesynchronizabilityofnetworks.
The second branch of algebraic graph theory involves the study of graphs in connection togroup theory, particularlyautomorphism groupsandgeometric group theory. The focus is placed on various families of graphs based onsymmetry(such assymmetric graphs,vertex-transitive graphs,edge-transitive graphs,distance-transitive graphs,distance-regular graphs, andstrongly regular graphs), and on the inclusion relationships between these families. Certain of such categories of graphs are sparse enough thatlistsof graphs can be drawn up. ByFrucht's theorem, allgroupscan be represented as the automorphism group of a connected graph (indeed, of acubic graph).[2]Another connection with group theory is that, given any group, symmetrical graphs known asCayley graphscan be generated, and these have properties related to the structure of the group.[1]
This second branch of algebraic graph theory is related to the first, since the symmetry properties of a graph are reflected in its spectrum. In particular, the spectrum of a highly symmetrical graph, such as the Petersen graph, has few distinct values[1](the Petersen graph has 3, which is the minimum possible, given its diameter). For Cayley graphs, the spectrum can be related directly to the structure of the group, in particular to itsirreducible characters.[1][3]
Finally, the third branch of algebraic graph theory concerns algebraic properties ofinvariantsof graphs, and especially thechromatic polynomial, theTutte polynomialandknot invariants. The chromatic polynomial of a graph, for example, counts the number of its propervertex colorings. For the Petersen graph, this polynomial ist(t−1)(t−2)(t7−12t6+67t5−230t4+529t3−814t2+775t−352){\displaystyle t(t-1)(t-2)(t^{7}-12t^{6}+67t^{5}-230t^{4}+529t^{3}-814t^{2}+775t-352)}.[1]In particular, this means that the Petersen graph cannot be properly colored with one or two colors, but can be colored in 120 different ways with 3 colors. Much work in this area of algebraic graph theory was motivated by attempts to prove thefour color theorem. However, there are still manyopen problems, such as characterizing graphs which have the same chromatic polynomial, and determining which polynomials are chromatic.
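The evaluations quoted above can be checked by plugging small values into the factored polynomial; the function name here is illustrative:

```python
def petersen_chromatic(t):
    """Chromatic polynomial of the Petersen graph in the factored form
    quoted above; P(t) counts proper colorings with t colors."""
    return (t * (t - 1) * (t - 2) *
            (t**7 - 12*t**6 + 67*t**5 - 230*t**4
             + 529*t**3 - 814*t**2 + 775*t - 352))

print([petersen_chromatic(t) for t in (1, 2, 3)])  # [0, 0, 120]
```

The factors t, t − 1 and t − 2 make the polynomial vanish at one and two colors, and evaluating at t = 3 yields the 120 proper 3-colorings mentioned in the text.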
|
https://en.wikipedia.org/wiki/Algebraic_graph_theory
|
Ingraph theory, adistinguishing coloringordistinguishing labelingof a graph is anassignment of colorsor labels to theverticesof the graph that destroys all of the nontrivialsymmetries of the graph. The coloring does not need to be aproper coloring: adjacent vertices are allowed to be given the same color. For the colored graph, there should not exist any one-to-one mapping of the vertices to themselves that preserves both adjacency and coloring. The minimum number of colors in a distinguishing coloring is called thedistinguishing numberof the graph.
Distinguishing colorings and distinguishing numbers were introduced byAlbertson & Collins (1996), who provided the following motivating example, based on a puzzle previously formulated by Frank Rubin: "Suppose you have a ring of keys to different doors; each key only opens one door, but they all look indistinguishable to you. How few colors do you need, in order to color the handles of the keys in such a way that you can uniquely identify each key?"[1]This example is solved by using a distinguishing coloring for acycle graph. With such a coloring, each key will be uniquely identified by its color and the sequence of colors surrounding it.[2]
A graph has distinguishing number oneif and only ifit isasymmetric.[3]For instance, theFrucht graphhas a distinguishing coloring with only one color.
In a complete graph, the only distinguishing colorings assign a different color to each vertex. For, if two vertices were assigned the same color, there would exist a symmetry that swapped those two vertices, leaving the rest in place. Therefore, the distinguishing number of the complete graph Kn is n. However, the graph obtained from Kn by attaching a degree-one vertex to each vertex of Kn has a significantly smaller distinguishing number, despite having the same symmetry group: it has a distinguishing coloring with ⌈√n⌉ colors, obtained by using a different ordered pair of colors for each pair of a vertex of Kn and its attached neighbor.[2]
For acycle graphof three, four, or five vertices, three colors are needed to construct a distinguishing coloring. For instance, every two-coloring of a five-cycle has areflection symmetry. In each of these cycles, assigning a unique color to each of two adjacent vertices and using the third color for all remaining vertices results in a three-color distinguishing coloring. However, cycles of six or more vertices have distinguishing colorings with only two colors. That is, Frank Rubin's keyring puzzle requires three colors for rings of three, four or five keys, but only two colors for six or more keys or for two keys.[2]For instance, in the ring of six keys shown, each key can be distinguished by its color and by the length or lengths of the adjacent blocks of oppositely-colored keys: there is only one key for each combination of key color and adjacent block lengths.
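The cycle values quoted above can be reproduced by brute force: a coloring of an n-cycle is distinguishing exactly when no nontrivial rotation or reflection preserves it. A small sketch (the function names are illustrative, and the search is feasible only for short cycles):

```python
from itertools import product

def cycle_symmetries(n):
    """Nontrivial automorphisms of the n-cycle: all rotations and
    reflections (the dihedral group D_n) except the identity."""
    rots = [[(i + r) % n for i in range(n)] for r in range(n)]
    refs = [[(r - i) % n for i in range(n)] for r in range(n)]
    return [p for p in rots + refs if p != list(range(n))]

def distinguishing_number(n):
    """Smallest k such that some k-coloring of the n-cycle is broken
    by every nontrivial symmetry."""
    syms = cycle_symmetries(n)
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(any(col[p[i]] != col[i] for i in range(n)) for p in syms):
                return k
    return n

print([distinguishing_number(n) for n in range(3, 8)])  # [3, 3, 3, 2, 2]
```

The output matches the text: cycles of three, four, or five vertices need three colors, while cycles of six or more need only two.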
Hypercube graphsexhibit a similar phenomenon to cycle graphs. The two- and three-dimensional hypercube graphs (the 4-cycle and the graph of a cube, respectively) have distinguishing number three. However, every hypercube graph of higher dimension has distinguishing number only two.[4]
ThePetersen graphhas distinguishing number 3. However other than this graph and the complete graphs, allKneser graphshave distinguishing number 2.[5]Similarly, among thegeneralized Petersen graphs, only the Petersen graph itself and the graph of the cube have distinguishing number 3; the rest have distinguishing number 2.[6]
The distinguishing numbers oftrees,planar graphs, andinterval graphscan be computed inpolynomial time.[7][8][9]
The exact complexity of computing distinguishing numbers is unclear, because it is closely related to the still-unknown complexity ofgraph isomorphism. However, it has been shown to belong to the complexity classAM.[10]Additionally, testing whether the distinguishing chromatic number is at most three isNP-hard,[9]and testing whether it is at most two is "at least as hard as graph automorphism, but no harder than graph isomorphism".[11]
A coloring of a given graph is distinguishing for that graph if and only if it is distinguishing for thecomplement graph. Therefore, every graph has the same distinguishing number as its complement.[2]
For every graphG, the distinguishing number ofGis at most proportional to thelogarithmof the number ofautomorphismsofG. If the automorphisms form a nontrivialabelian group, the distinguishing number is two, and if it forms adihedral groupthen the distinguishing number is at most three.[2]
For everyfinite group, there exists a graph with that group as its group of automorphisms, with distinguishing number two.[2]This result extendsFrucht's theoremthat every finite group can be realized as the group of symmetries of a graph.
Aproper distinguishing coloringis a distinguishing coloring that is also a proper coloring: each two adjacent vertices have different colors. The minimum number of colors in a proper distinguishing coloring of a graph is called thedistinguishing chromatic numberof the graph.[12]
|
https://en.wikipedia.org/wiki/Distinguishing_coloring
|
Ingraph theory, a mathematical discipline, afactor-critical graph(orhypomatchable graph[1][2]) is agraphwith anodd numberof vertices in which deleting one vertex in every possible way results in a graph with aperfect matching, a way of grouping the remaining vertices into adjacent pairs.
A matching of all but one vertex of a graph is called anear-perfect matching. So equivalently, a factor-critical graph is a graph in which there are near-perfect matchings that avoid every possible vertex.
Factor-critical graphs may be characterized in several different ways, other than their definition as graphs in which each vertex deletion allows for a perfect matching:
Any odd-lengthcycle graphis factor-critical,[1]as is anycomplete graphwith an odd number of vertices.[7]More generally, whenever a graph has an odd number of vertices and contains aHamiltonian cycle, it is factor-critical. In such a graph, the near-perfect matchings can be obtained by removing one vertex from the cycle and choosing matched edges in alternation along the remaining path. Thefriendship graphs(graphs formed by connecting a collection of triangles at a single common vertex) provide examples of graphs that are factor-critical but do not have Hamiltonian cycles.
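Factor-criticality is easy to test by brute force on small graphs: delete each vertex in turn and look for a perfect matching among the remaining vertices. The encoding and function names here are illustrative:

```python
def has_perfect_matching(verts, edges):
    """Brute-force perfect matching: match the smallest vertex with each
    neighbor in turn and recurse on the rest."""
    if not verts:
        return True
    v = min(verts)
    return any(frozenset((v, u)) in edges and
               has_perfect_matching(verts - {v, u}, edges)
               for u in verts if u != v)

def factor_critical(n, edge_list):
    """True when n is odd and deleting any single vertex leaves a graph
    with a perfect matching (vertices are assumed to be 0 .. n-1)."""
    edges = {frozenset(e) for e in edge_list}
    verts = frozenset(range(n))
    return n % 2 == 1 and all(
        has_perfect_matching(verts - {v}, edges) for v in verts)

cycle5 = [(i, (i + 1) % 5) for i in range(5)]
# friendship graph F_2: two triangles glued at vertex 0
friendship = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
print(factor_critical(5, cycle5), factor_critical(5, friendship))  # True True
```

Both the 5-cycle (odd cycle with a Hamiltonian cycle) and the friendship graph F2 (factor-critical without a Hamiltonian cycle) pass the test, as the text describes.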
If a graphGis factor-critical, then so is theMycielskianofG. For instance, theGrötzsch graph, the Mycielskian of a five-vertex cycle-graph, is factor-critical.[8]
Every2-vertex-connectedclaw-free graphwith an odd number of vertices is factor-critical, because removing any vertex will leave a connected claw-free graph with an even number of vertices, and these always have a perfect matching.[9]Examples include the 5-vertex graph of asquare pyramidand the 11-vertex graph of thegyroelongated pentagonal pyramid.
Factor-critical graphs must always have an odd number of vertices, and must be2-edge-connected(that is, they cannot have anybridges).[10]However, they are not necessarily2-vertex-connected; the friendship graphs provide a counterexample. It is not possible for a factor-critical graph to bebipartite, because in a bipartite graph with a near-perfect matching, the only vertices that can be deleted to produce a perfectly matchable graph are the ones on the larger side of the bipartition.
Every 2-vertex-connected factor-critical graph withmedges has at leastmdifferent near-perfect matchings, and more generally every factor-critical graph withmedges andcblocks (2-vertex-connected components) has at leastm−c+ 1different near-perfect matchings. The graphs for which these bounds are tight may be characterized by having odd ear decompositions of a specific form.[7]
Any connected graph may be transformed into a factor-critical graph bycontractingsufficiently many of its edges. Theminimalsets of edges that need to be contracted to make a given graphGfactor-critical form the bases of amatroid, a fact that implies that agreedy algorithmmay be used to find the minimum weight set of edges to contract to make a graph factor-critical, inpolynomial time.[11]
Ablossomis a factor-criticalsubgraphof a larger graph. Blossoms play a key role inJack Edmonds'algorithmsformaximum matchingand minimum weight perfect matching in non-bipartite graphs.[12]
Inpolyhedral combinatorics, factor-critical graphs play an important role in describing facets of thematching polytopeof a given graph.[1][2]
A graph is said to bek-factor-critical if every subset ofn−kvertices has a perfect matching. Under this definition, a hypomatchable graph is 1-factor-critical.[13]Even more generally, a graph is(r,k)-factor-critical if every subset ofn−kvertices has anr-factor, that is, it is the vertex set of anr-regular subgraphof the given graph.
Acritical graph(without qualification) is usually assumed to mean a graph for which removing each of its vertices reduces the number of colors it needs in agraph coloring. The concept of criticality has been used much more generally in graph theory to refer to graphs for which removing each possible vertex changes or does not change some relevant property of the graph. Amatching-critical graphis a graph for which the removal of any vertex does not change the size of amaximum matching; by Gallai's characterization, the matching-critical graphs are exactly the graphs in which every connected component is factor-critical.[14]Thecomplement graphof a critical graph is necessarily matching-critical, a fact that was used by Gallai to prove lower bounds on the number of vertices in a critical graph.[15]
Beyond graph theory, the concept of factor-criticality has been extended tomatroidsby defining a type of ear decomposition on matroids and defining a matroid to be factor-critical if it has an ear decomposition in which all ears are odd.[16]
|
https://en.wikipedia.org/wiki/Factor-critical_graph
|
Incomputer science,dancing links(DLX) is a technique for adding and deleting a node from a circulardoubly linked list. It is particularly useful for efficiently implementingbacktrackingalgorithms, such asKnuth's Algorithm Xfor theexact cover problem.[1]Algorithm X is arecursive,nondeterministic,depth-first,backtrackingalgorithmthat finds all solutions to theexact coverproblem. Some of the better-known exact cover problems includetiling, thenqueens problem, andSudoku.
The namedancing links, which was suggested byDonald Knuth, stems from the way the algorithm works, as iterations of the algorithm cause the links to "dance" with partner links so as to resemble an "exquisitely choreographed dance." Knuth credits Hiroshi Hitotsumatsu and Kōhei Noshita with having invented the idea in 1979,[2]but it is his paper which has popularized it.
As the remainder of this article discusses the details of an implementation technique for Algorithm X, the reader is strongly encouraged to read theAlgorithm Xarticle first.
The idea of DLX is based on the observation that in a circulardoubly linked listof nodes,
will remove nodexfrom the list, while
will restorex's position in the list, assuming that x.right and x.left have been left unmodified. This works regardless of the number of elements in the list, even if that number is 1.
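The pair of operations can be sketched as follows on a circular doubly linked list (node and function names are illustrative):

```python
class Node:
    """A node in a circular doubly linked list."""
    def __init__(self, value):
        self.value = value
        self.left = self.right = self  # a lone node points to itself

def insert_right(a, b):
    """Insert node b immediately to the right of node a."""
    b.left, b.right = a, a.right
    a.right.left = b
    a.right = b

def remove(x):
    """Unlink x from the list; x keeps its own left/right pointers."""
    x.left.right = x.right
    x.right.left = x.left

def restore(x):
    """Undo remove(x), relying on x.left and x.right being untouched."""
    x.left.right = x
    x.right.left = x

head = Node("head")
a, b = Node("a"), Node("b")
insert_right(head, a)
insert_right(a, b)
remove(a)    # list is now head <-> b, but a still remembers its neighbors
restore(a)   # list is head <-> a <-> b again
```

The point is that `remove` does not clear `x`'s own pointers, which is exactly what makes the O(1) `restore` possible.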
Knuth observed that a naive implementation of his Algorithm X would spend an inordinate amount of time searching for 1's. When selecting a column, the entire matrix had to be searched for 1's. When selecting a row, an entire column had to be searched for 1's. After selecting a row, that row and a number of columns had to be searched for 1's. To improve this search time fromcomplexityO(n) to O(1), Knuth implemented asparse matrixwhere only 1's are stored.
At all times, each node in the matrix will point to the adjacent nodes to the left and right (1's in the same row), above and below (1's in the same column), and the header for its column (described below). Each row and column in the matrix will consist of a circular doubly-linked list of nodes.
Each column will have a special node known as the "column header," which will be included in the column list, and will form a special row ("control row") consisting of all the columns which still exist in the matrix.
Finally, each column header may optionally track the number of nodes in its column, so that locating a column with the lowest number of nodes is ofcomplexityO(n) rather than O(n×m) wherenis the number of columns andmis the number of rows. Selecting a column with a low node count is a heuristic which improves performance in some cases, but is not essential to the algorithm.
In Algorithm X, rows and columns are regularly eliminated from and restored to the matrix. Eliminations are determined by selecting a column and a row in that column. If a selected column doesn't have any rows, the current matrix is unsolvable and must be backtracked. When an elimination occurs, all columns for which the selected row contains a 1 are removed, along with all rows (including the selected row) that contain a 1 in any of the removed columns. The columns are removed because they have been filled, and the rows are removed because they conflict with the selected row. To remove a single column, first remove the selected column's header. Next, for each row where the selected column contains a 1, traverse the row and remove it from other columns (this makes those rows inaccessible and is how conflicts are prevented). Repeat this column removal for each column where the selected row contains a 1. This order ensures that any removed node is removed exactly once and in a predictable order, so it can be backtracked appropriately. If the resulting matrix has no columns, then they have all been filled and the selected rows form the solution.
To backtrack, the above process must be reversed using the second algorithm stated above. One requirement of using that algorithm is that backtracking must be done as an exact reversal of eliminations. Knuth's paper gives a clear picture of these relationships and how the node removal and reinsertion works, and provides a slight relaxation of this limitation.
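The cover/uncover discipline can be sketched without pointer manipulation using dictionaries of sets. This is not Knuth's dancing-links implementation (no linked lists, so restoring is done by re-adding elements), but it performs the same eliminations and reverses them in exactly the opposite order:

```python
def solve(X, Y, solution=None):
    """Algorithm X over X: column -> set of rows, Y: row -> list of columns."""
    if solution is None:
        solution = []
    if not X:
        yield list(solution)                  # no columns left: all filled
        return
    col = min(X, key=lambda c: len(X[c]))     # fewest-rows heuristic
    for row in list(X[col]):
        solution.append(row)
        removed = cover(X, Y, row)
        yield from solve(X, Y, solution)
        uncover(X, Y, row, removed)           # exact reversal of elimination
        solution.pop()

def cover(X, Y, row):
    """Remove every column the row fills, and every row that conflicts."""
    removed = []
    for c in Y[row]:
        for r in X[c]:
            for c2 in Y[r]:
                if c2 != c:
                    X[c2].remove(r)
        removed.append((c, X.pop(c)))
    return removed

def uncover(X, Y, row, removed):
    """Restore columns and conflicting rows in reverse order of removal."""
    for c, rows in reversed(removed):
        X[c] = rows
        for r in rows:
            for c2 in Y[r]:
                if c2 != c:
                    X[c2].add(r)

# Tiny exact cover instance: columns 1..4, rows A..C
Y = {"A": [1, 2], "B": [3, 4], "C": [1, 2, 3, 4]}
X = {c: {r for r in Y if c in Y[r]} for c in (1, 2, 3, 4)}
print(sorted(sorted(s) for s in solve(X, Y)))  # prints [['A', 'B'], ['C']]
```

The `uncover` loop restoring in `reversed(removed)` order mirrors the requirement stated above that backtracking be an exact reversal of the eliminations.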
It is also possible to solve one-cover problems in which a particular constraint is optional, but can be satisfied no more than once. Dancing Links accommodates these with primary columns which must be filled and secondary columns which are optional. This alters the algorithm's solution test from a matrix having no columns to a matrix having no primary columns and if the heuristic of minimum one's in a column is being used then it needs to be checked only within primary columns. Knuth discusses optional constraints as applied to thenqueens problem. The chessboard diagonals represent optional constraints, as some diagonals may not be occupied. If a diagonal is occupied, it can be occupied only once.
|
https://en.wikipedia.org/wiki/Dancing_Links
|
Sudoku(/suːˈdoʊkuː,-ˈdɒk-,sə-/;Japanese:数独,romanized:sūdoku,lit.'digit-single'; originally calledNumber Place)[1]is alogic-based,[2][3]combinatorial[4]number-placementpuzzle. In classic Sudoku, the objective is to fill a 9 × 9 grid with digits so that each column, each row, and each of the nine 3 × 3 subgrids that compose the grid (also called "boxes", "blocks", or "regions") contains all of the digits from 1 to 9. The puzzle setter provides a partially completed grid, which for awell-posedpuzzle has a single solution.
French newspapers featured similar puzzles in the 19th century, and the modern form of the puzzle first appeared in 1979puzzle booksbyDell Magazinesunder the name Number Place.[5]However, the puzzle type only began to gain widespread popularity in 1986 when it was published by the Japanese puzzle companyNikoliunder the name Sudoku, meaning "single number".[6]In newspapers outside of Japan, it first appeared inThe Conway Daily Sun(New Hampshire) in September 2004, and thenThe Times(London) in November 2004, both of which were thanks to the efforts of the Hong Kong judgeWayne Gould, who devised acomputer programto rapidly produce unique puzzles.
Number puzzles appeared in newspapers in the late 19th century, when French puzzle setters began experimenting with removing numbers frommagic squares.Le Siècle, a Paris daily, published a partially completed 9×9 magic square with 3×3 subsquares on November 19, 1892.[7]It was not a Sudoku because it contained double-digit numbers and required arithmetic rather than logic to solve, but it shared key characteristics: each row, column, and subsquare added up to the same number.
On July 6, 1895,Le Siècle'srival,La France, refined the puzzle so that it was almost a modern Sudoku and named itcarré magique diabolique('diabolical magic square'). It simplified the 9×9 magic square puzzle so that each row, column, andbroken diagonalscontained only the numbers 1–9, but did not mark the subsquares. Although they were unmarked, each 3×3 subsquare did indeed comprise the numbers 1–9, and the additional constraint on the broken diagonals led to only one solution.[8]
These weekly puzzles were a feature of French newspapers such asL'Écho de Parisfor about a decade, but disappeared about the time ofWorld War I.[9]
The modern Sudoku was most likely designed anonymously byHoward Garns, a 74-year-old retired architect and freelance puzzle constructor fromConnersville, Indiana, and first published in 1979 byDell Magazinesas Number Place (the earliest known examples of modern Sudoku).[1]Garns' name was always present on the list of contributors in issues ofDell Pencil Puzzles and Word Gamesthat included Number Place and was always absent from issues that did not.[10]He died in 1989 before getting a chance to see his creation as a worldwide phenomenon.[10]Whether or not Garns was familiar with any of the French newspapers listed above is unclear.
The puzzle was introduced in Japan byMaki Kaji(鍜治 真起,Kaji Maki), president of theNikolipuzzle company, in the paperMonthly Nikolistin April 1984[10]asSūji wa dokushin ni kagiru(数字は独身に限る), which can be translated as "the digits must be single", or as "the digits are limited to one occurrence" (In Japanese,dokushinmeans an "unmarried person"). The name was later abbreviated toSudoku(数独), taking only the firstkanjiof compound words to form a shorter version.[10]"Sudoku" is a registered trademark in Japan[11]and the puzzle is generally referred to as Number Place(ナンバープレース,Nanbāpurēsu)or, more informally, a shortening of the two words, Num(ber) Pla(ce)(ナンプレ,Nanpure). In 1986, Nikoli introduced two innovations: the number of givens was restricted to no more than 32, and puzzles became "symmetrical" (meaning the givens were distributed inrotationally symmetric cells). It is now published in mainstream Japanese periodicals, such as theAsahi Shimbun.
In 1997, Hong Kong judgeWayne Gouldsaw a partly completed puzzle in a Japanese bookshop. Over six years, he developed a computer program to produce unique puzzles rapidly.[5]
The first newspaper outside of Japan to publish a Sudoku puzzle wasThe Conway Daily Sun(New Hampshire), which published a puzzle by Gould in September 2004.[12][13]
Gould pitched the idea of publishing Sudoku puzzles to newspapers, offering the puzzles for free in exchange for the newspapers' attributing them to him and linking to his website for solutions and other puzzles.
Knowing that British newspapers have a long history of publishingcrosswordsand other puzzles, he promoted Sudoku toThe Timesin Britain, which launched it on November 12, 2004 (calling it Su Doku). The first letter toThe Timesregarding Su Doku was published the following day on November 13 from Ian Payn ofBrentford, complaining that the puzzle had caused him to miss his stop on thetube.[14]Sudoku puzzles rapidly spread to other newspapers as a regular feature.[5][15]
The rapid rise of Sudoku in Britain from relative obscurity to a front-page feature in national newspapers attracted commentary in the media and parody (such as whenThe Guardian'sG2section advertised itself as the first newspaper supplement with a Sudoku grid on every page).[16]Recognizing the different psychological appeals of easy and difficult puzzles,The Timesintroduced both, side by side, on June 20, 2005. From July 2005,Channel 4included a daily Sudoku game in theirteletextservice. On August 2, the BBC's program guideRadio Timesfeatured a weekly Super Sudoku with a 16×16 grid.
The world's first live TV Sudoku show,Sudoku Live, was apuzzle contestfirst broadcast on July 1, 2005, on the British pay-television channelSky One. It was presented byCarol Vorderman. Nine teams of nine players (with one celebrity in each team) representing geographical regions competed to solve a puzzle. Each player had a hand-held device for entering numbers corresponding to answers for four cells. Phil Kollin ofWinchelsea, England, was the series grand prize winner, taking home over £23,000 over a series of games. The audience at home was in a separate interactive competition, which was won by Hannah Withey ofCheshire.
Later in 2005, theBBClaunchedSUDO-Q, agame showthat combined Sudoku with general knowledge. However, it used only 4×4 and 6×6 puzzles. Four seasons were produced before the show ended in 2007.
An annualWorld Sudoku Championshipseries has been organized by theWorld Puzzle Federationsince 2006, except in 2020 and 2021 during theCOVID-19 pandemic.
In 2006, a Sudoku website published a tribute song by Australian songwriter Peter Levy, but the song download was later removed due to heavy traffic. The Japanese Embassy nominated the song for an award, and Levy claimed he was in discussions withSonyin Japan to release the song as a single.[17]
Sudoku software is very popular on PCs, websites, and mobile phones. It comes with many distributions of Linux. The software has also been released on video game consoles, such as the Nintendo DS, PlayStation Portable, the Game Boy Advance, Xbox Live Arcade, the Nook e-book reader, the Kindle Fire tablet, several iPod models, and the iPhone. Many Nokia phones also had Sudoku. In fact, just two weeks after Apple Inc. debuted the online App Store within its iTunes Store on July 11, 2008, nearly 30 different Sudoku games were already in it, created by various software developers, specifically for the iPhone and iPod Touch. Sudoku games also rapidly became available for web browser users and for basically all gaming, cellphone, and computer platforms.
In June 2008, an Australian drugs-related jury trial costing overA$1 million was aborted when it was discovered that four or five of the twelve jurors had been playing Sudoku instead of listening to the evidence.[18]
Although the 9×9 grid with 3×3 regions is by far the most common, many other variations exist. Sample puzzles can be 4×4 grids with 2×2 regions; 5×5 grids withpentominoregions have been published under the name Logi-5; theWorld Puzzle Championshiphas featured a 6×6 grid with 2×3 regions and a 7×7 grid with sixheptominoregions and a disjoint region. Larger grids are also possible, or different irregular shapes (under various names such asSuguru,Tectonic,Jigsaw Sudokuetc.).The Timesoffers a 12×12-grid "Dodeka Sudoku" with 12 regions of 4×3 squares. Dell Magazines regularly publishes 16×16 "Number Place Challenger" puzzles (using the numbers 1–16 or the letters A-P). Nikoli offers 25×25 "Sudoku the Giant" behemoths. A 100×100-grid puzzle dubbed Sudoku-zilla was published in 2010.[19]
Under the name "Mini Sudoku", a 6×6 variant with 3×2 regions appears in the American newspaperUSA Todayand elsewhere. The object is the same as that of standard Sudoku, but the puzzle only uses the numbers 1 through 6. A similar form, for younger solvers of puzzles, called "The Junior Sudoku", has appeared in some newspapers, such as some editions ofThe Daily Mail.
Another common variant is to add limits on the placement of numbers beyond the usual row, column, and box requirements. Often, the limit takes the form of an extra "dimension"; the most common is to require the numbers in the main diagonals of the grid to also be unique. The aforementioned "Number Place Challenger" puzzles are all of this variant, as are the Sudoku X puzzles inThe Daily Mail, which use 6×6 grids.
The killer sudoku variant combines elements of sudoku andkakuro. A killer sudoku puzzle is made up of 'cages', typically depicted by boxes outlined with dashes or colours. The sum of the numbers in a cage is written in the top left corner of the cage, and numbers cannot be repeated in a cage.
Puzzles constructed from more than two grids are also common. Five 9×9 grids that overlap at the corner regions in the shape of aquincunxis known in Japan asGattai5 (five merged) Sudoku. InThe Times,The Age, andThe Sydney Morning Herald, this form of puzzle is known as Samurai Sudoku.The Baltimore Sunand theToronto Starpublish a puzzle of this variant (titled High Five) in their Sunday edition. Often, no givens are placed in the overlapping regions. Sequential grids, as opposed to overlapping, are also published, with values in specific locations in grids needing to be transferred to others.
A tabletop version of Sudoku can be played with a standard 81-card Set deck (seeSet game). A three-dimensional Sudoku puzzle was published inThe Daily Telegraphin May 2005.The Timesalso publishes a three-dimensional version under the name Tredoku. Also, a Sudoku version of theRubik's Cubeis namedSudoku Cube.
Many other variants have been developed.[20][21][22]Some are different shapes in the arrangement of overlapping 9×9 grids, such as butterfly, windmill, or flower.[23]Others vary the logic for solving the grid. One of these is "Greater Than Sudoku". In this, a 3×3 grid of the Sudoku is given with 12 symbols of Greater Than (>) or Less Than (<) on the common line of the two adjacent numbers.[10]Another variant on the logic of the solution is "Clueless Sudoku", in which nine 9×9 Sudoku grids are each placed in a 3×3 array. The center cell in each 3×3 grid of all nine puzzles is left blank and forms a tenth Sudoku puzzle without any cell completed; hence, "clueless".[23]Examples and other variants can be found in theGlossary of Sudoku.
This section refers to classic Sudoku, disregarding jigsaw, hyper, and other variants. A completed Sudoku grid is a special type ofLatin squarewith the additional property of no repeated values in any of the nine blocks (orboxesof 3×3 cells).[24]
The general problem of solving Sudoku puzzles onn2×n2grids ofn×nblocks is known to beNP-complete.[25]ManySudoku solving algorithms, such asbrute force-backtracking anddancing linkscan solve most 9×9 puzzles efficiently, butcombinatorial explosionoccurs asnincreases, creating practical limits to the properties of Sudokus that can be constructed, analyzed, and solved asnincreases. A Sudoku puzzle can be expressed as agraph coloringproblem.[26]The aim is to construct a 9-coloring of a particular graph, given a partial 9-coloring.
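A minimal brute-force backtracking solver of the kind mentioned above might look like the following sketch (no constraint propagation or dancing links, so it is slow on hard instances):

```python
def valid(grid, r, c, d):
    """Check the row, column, and 3x3 box constraints for placing d at (r, c)."""
    if any(grid[r][j] == d for j in range(9)):
        return False
    if any(grid[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

def solve_sudoku(grid):
    """Fill a 9x9 grid in place by backtracking; 0 marks an empty cell."""
    for i in range(81):
        r, c = divmod(i, 9)
        if grid[r][c] == 0:
            for d in range(1, 10):
                if valid(grid, r, c, d):
                    grid[r][c] = d
                    if solve_sudoku(grid):
                        return True
                    grid[r][c] = 0  # backtrack
            return False            # no digit fits: dead end
    return True                     # no empty cell left
```

Calling `solve_sudoku` on a partially filled grid either completes it in place and returns `True`, or returns `False` if the clues are contradictory.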
The fewest clues possible for a proper Sudoku is 17.[27]Tens of thousands of distinct Sudoku puzzles have only 17 clues.[28]
The number of classic 9×9 Sudoku solution grids is 6,670,903,752,021,072,936,960, or around6.67×1021.[29]The number of essentially different solutions, whensymmetriessuch as rotation, reflection, permutation, and relabelling are taken into account, is much smaller, 5,472,730,538.[30]
Unlike the number of complete Sudoku grids, the number of minimal 9×9 Sudoku puzzles is not precisely known. (A minimal puzzle is one in which no clue can be deleted without losing the uniqueness of the solution.) However, statistical techniques combined with a puzzle generator show that about (with 0.065% relative error) 3.10 × 1037minimal puzzles and 2.55 × 1025nonessentially equivalent minimal puzzles exist.[31]
|
https://en.wikipedia.org/wiki/Sudoku
|
Sudoku codes are non-linear forward error correcting codes that follow the rules of sudoku puzzles and are designed for an erasure channel. Based on this model, the transmitter sends a sequence of all symbols of a solved sudoku. The receiver either receives a symbol correctly or an erasure symbol to indicate that the symbol was not received. The decoder gets a matrix with missing entries and uses the constraints of sudoku puzzles to reconstruct a limited number of erased symbols.
Sudoku codes are not suitable for practical use but are a subject of research. Questions such as the rate and error performance are still open for general dimensions.[1]
In a sudoku one can find missing information by using different techniques to reproduce the full puzzle. This can be seen as decoding a sudoku-coded message sent over an erasure channel where some symbols were erased. Using the sudoku rules, the decoder can recover the missing information. Sudokus can be modeled as a probabilistic graphical model, and thus methods for decoding low-density parity-check codes, such as belief propagation, can be used.
In the erasure channel model a symbol is either transmitted correctly with probability 1−pe{\displaystyle 1-p_{e}} or erased with probability pe{\displaystyle p_{e}}. The channel introduces no errors, i.e. no channel input is changed to another symbol. As an example, consider the transmission of a 3×3{\displaystyle 3\times 3} Sudoku code in which 5 of the 9 symbols are erased by the channel. The decoder is still able to reconstruct the message, i.e. the whole puzzle.
Note that the symbols sent over the channel are not binary. For a binary channel the symbols (e.g. integers{1,…,9}{\displaystyle \{1,\ldots ,9\}}) have to be mapped onto base 2. Thebinary erasure channelmodel however is not applicable because it erases only individual bits with some probability and not Sudoku symbols. If the symbols of the Sudoku are sent in packets the channel can be described as apacket erasure channelmodel.
A sudoku is an N×N{\displaystyle N\times N} number-placement puzzle. It is filled so that each column, row, and sub-grid contains N distinct symbols exactly once. The typical alphabet is the set of integers {1,…,N}{\displaystyle \{1,\ldots ,N\}}. The sub-grid structure limits the size of sudokus to N=n2{\displaystyle N=n^{2}} with n∈N{\displaystyle n\in \mathrm {N} }. Every solved sudoku and every sub-grid of it is a Latin square, meaning every symbol occurs exactly once in each row and column. At the starting point (in this case after the erasure channel) the puzzle is only partially complete but has exactly one solution.
Other varieties of sudokus are also conceivable as channel codes. Diagonal regions instead of square sub-grids can be used for performance investigations.[2] A diagonal sudoku has the advantage that its size can be chosen more freely: due to the sub-grid structure, normal sudokus can only be of size n², while diagonal sudokus have valid solutions for all odd N{\displaystyle N}.[2]
Sudoku codes are non-linear. In a linear code, any linear combination of codewords gives a new valid codeword; this does not hold for sudoku codes. The symbols of a sudoku are from a finite alphabet (e.g. the integers {1,…,9}{\displaystyle \{1,\ldots ,9\}}). The constraints of sudoku codes are non-linear: all symbols within a constraint (row, column, sub-grid) must be different from every other symbol within that constraint. Hence there is no all-zero codeword in sudoku codes.
Sudoku codes can be represented by a probabilistic graphical model, in which they take a form analogous to a low-density parity-check code.[3]
There are several possible decoding methods for sudoku codes. Some algorithms are very specific developments for sudoku codes; several such methods are described in sudoku solving algorithms. Another efficient method uses dancing links.
Decoding methods such as belief propagation, which are also used for low-density parity-check codes, are of special interest. Performance analysis of these methods on sudoku codes can help to better understand decoding problems for low-density parity-check codes.[3]
By modeling sudoku codes as a probabilistic graphical model, belief propagation can be used to decode them. Belief propagation on the Tanner graph or factor graph of a sudoku code is discussed by Sayir[1] and Moon.[4] This method was originally designed for low-density parity-check codes; due to its generality, belief propagation works not only with the classical 9×9{\displaystyle 9\times 9} sudoku but with many of its variants. LDPC decoding is a common use case for belief propagation, and with slight modifications this approach can be used for solving sudoku codes.[4]
The constraint satisfaction can be represented by a Tanner graph. Sn{\displaystyle S_{n}} denotes the entries of the sudoku in row-scan order. Cm{\displaystyle C_{m}} denotes the constraint functions: m=1,...,9{\displaystyle m=1,...,9} are associated with the rows, m=10,...,18{\displaystyle m=10,...,18} with the columns, and m=19,...,27{\displaystyle m=19,...,27} with the 3×3{\displaystyle 3\times 3} sub-grids of the sudoku. Cm{\displaystyle C_{m}} is defined as
Cm(s1,s2,.....,s9)={1,ifs1,s2,...,s9are distinct0,otherwise.{\displaystyle C_{m}(s_{1},s_{2},.....,s_{9})={\begin{cases}1,&{\text{if }}s_{1},s_{2},...,s_{9}{\text{ are distinct}}\\0,&{\text{otherwise.}}\end{cases}}}[4]
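The constraint function is a simple distinctness check; a direct transcription:

```python
def constraint(symbols):
    """C_m: 1 if all symbols of a row, column or sub-grid are distinct, else 0."""
    return 1 if len(set(symbols)) == len(symbols) else 0

print(constraint([1, 2, 3, 4, 5, 6, 7, 8, 9]))  # prints 1
print(constraint([1, 1, 3, 4, 5, 6, 7, 8, 9]))  # prints 0
```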
Every cell Sn{\displaystyle S_{n}} is connected to 3 constraints: the row, column, and sub-grid constraints. A specialization of the general approach for belief propagation is suggested by Sayir:[1] the initial probability of a received symbol assigns 1 to the observed symbol and 0 to all others, or is uniformly distributed over the whole alphabet if the symbol was erased. For the belief propagation algorithm it is sufficient to transmit only a subset of possibilities instead of full distributions, since the distribution is always uniform over the subset. The candidates for the erased symbols narrow down to a subset of the alphabet as symbols get excluded by the constraints: all values that are used by another cell in the constraint are eliminated, then pairs that are shared among two other cells, and so on. Sudoku players use this method of logical exclusion to solve most sudoku puzzles.
The aim of error-correcting codes is to encode data in a way that makes it more resilient to errors in the transmission process. The encoder has to map data U{\displaystyle U} to a valid sudoku grid, from which the codeword X{\displaystyle X} can be taken, e.g. in row-scan order.
U=00101…⟶Encoder123231312⇒X=1,2,3,1,2,1,3,1,2{\displaystyle U=00101\ldots {\stackrel {\text{Encoder}}{\longrightarrow }}{\begin{array}{|c|c|c|}\hline 1&2&3\\\hline 2&3&1\\\hline 3&1&2\\\hline \end{array}}\Rightarrow X=1,2,3,1,2,1,3,1,2}
shows the necessary steps.
A standard 9×9{\displaystyle 9\times 9} sudoku carries about 72.5 bits of information, as calculated in the next section. Information in Shannon's sense is the degree of randomness in a set of data; an ideal coin toss, for example, has an information content of I=log22=1{\displaystyle I=\log _{2}2=1} bit, and representing the outcome of 72 coin tosses takes 72 bits. One sudoku therefore contains about the same information as 72 coin tosses or a sequence of 72 bits. A sequence of 81 random symbols from {1,…,9}{\displaystyle \{1,\ldots ,9\}} has I=81log29≈256.8{\displaystyle I=81\log _{2}9\approx 256.8} bits of information, so one sudoku codeword can be seen as 72.5 bits of information plus 184.3 bits of redundancy. Theoretically, a string of 72 bits could be mapped to one sudoku that is sent over the channel as a string of 81 symbols. However, there is no linear function that maps a string to a sudoku code.
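These figures can be reproduced in a few lines (a quick sanity check; the grid count is the one quoted later in the article):

```python
from math import log2

n_grids = 6_670_903_752_021_072_936_960  # number of 9x9 Sudoku solution grids
info_sudoku = log2(n_grids)              # ~72.5 bits in one solved grid
info_random = 81 * log2(9)               # ~256.8 bits in 81 random symbols 1..9
redundancy = info_random - info_sudoku   # ~184.3 bits of redundancy
print(round(info_sudoku, 1), round(info_random, 1), round(redundancy, 1))
# prints 72.5 256.8 184.3
```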
An encoding approach suggested by Sayir[5] is as follows:
For a 4×4{\displaystyle 4\times 4} sudoku the first entry can be filled from a source of cardinality 4; in this example it is a 1. For the rest of this row, column, and 2×2{\displaystyle 2\times 2} sub-grid, this number is excluded from the possibilities in the belief propagation decoder. For the second cell only the numbers 2, 3, 4 are valid, so the source has to be transformed into a uniform distribution over three possibilities and mapped to the valid numbers, and so on, until the grid is filled.
The calculation of the rate of sudoku codes is not trivial. An example rate calculation for a 4×4{\displaystyle 4\times 4} sudoku is shown below. Filling the grid line by line from the top left corner, only the first entry carries the maximum information of log24=2{\displaystyle \log _{2}4=2} bits. Each subsequent entry cannot be any of the numbers used before, so the information reduces to log23{\displaystyle \log _{2}3}, log22{\displaystyle \log _{2}2}, and 0{\displaystyle 0} for the following entries, as they must be among the remaining numbers. In the second line the information is additionally reduced by the sub-grid rule: cell 5{\displaystyle 5} in row-scan order can only be a 3{\displaystyle 3} or 4{\displaystyle 4}, as the numbers 1{\displaystyle 1} and 2{\displaystyle 2} are already used in its sub-grid. The last row contains no information at all. Adding all the information up, one gets log2(4⋅3⋅25)≈8.58{\displaystyle \log _{2}(4\cdot 3\cdot 2^{5})\approx 8.58} bits. The rate in this example is
R=log2(4⋅3⋅25)16⋅log24≈8.5832≈0.27{\displaystyle R={\frac {\log _{2}(4\cdot 3\cdot 2^{5})}{16\cdot \log _{2}4}}\approx {\frac {8.58}{32}}\approx 0.27}.
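The arithmetic of this example can be checked directly:

```python
from math import log2

info = log2(4 * 3 * 2**5)   # ~8.58 bits carried by the whole 4x4 grid
capacity = 16 * log2(4)     # 16 cells over an alphabet of size 4 -> 32 bits
rate = info / capacity
print(round(info, 2), round(rate, 2))  # prints 8.58 0.27
```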
The exact number of possible sudoku grids, according to the mathematics of Sudoku, is 6,670,903,752,021,072,936,960. With the total information of
Ilog9=log96.67∗1021≈22.87Ilog2=log26.67∗1021≈72.50bits{\displaystyle {\begin{aligned}I_{log_{9}}&=\log _{9}6.67*10^{21}\approx 22.87\\I_{log_{2}}&=\log _{2}6.67*10^{21}\approx 72.50\,{\text{bits}}\end{aligned}}}
the average rate of a standard Sudoku is
R=Ilog2/92⋅log29≈0.28{\displaystyle R=I_{log_{2}}/\left(9^{2}\cdot \log _{2}9\right)\approx 0.28}.
The average number of possible entries for a cell is6.67∗102181≈1.86{\displaystyle {\sqrt[{81}]{6.67*10^{21}}}\approx 1.86}orlog21.86≈0.90bits{\displaystyle \log _{2}{1.86}\approx 0.90\,{\text{bits}}}of information per Sudoku cell. Note that the rate may vary between codewords.[5]
The minimum number of given entries that renders a unique solution was proven to be 17.[6] In the worst case, as few as four missing entries can make the solution ambiguous. For an erasure channel it is very unlikely that 17 successful transmissions are enough to reproduce the puzzle: there are only about 50,000 known puzzles with 17 given entries.[7]
Density evolution is a capacity analysis algorithm originally developed for low-density parity-check codes under belief propagation decoding.[8] Density evolution can also be applied to sudoku-type constraints.[1] One important simplification used in density evolution on LDPC codes is that it is sufficient to analyze only the all-one codeword; with the sudoku constraints, however, this is not a valid codeword. Unlike for linear codes, the weight-distance equivalence property does not hold for non-linear codes. It would therefore be necessary to compute density evolution recursions for every possible sudoku puzzle to get a precise performance analysis.
A proposed simplification is to analyze the probability distribution of the cardinalities of messages instead of the probability distribution of the messages themselves.[1] Density evolution is calculated on the entry nodes and the constraint nodes (compare the Tanner graph above). On the entry nodes one analyzes the cardinalities of the incoming constraint messages. If, for example, the constraints have the cardinalities (1,1){\displaystyle (1,1)}, then the entry can only be one symbol. If the constraints have cardinalities (2,2){\displaystyle (2,2)}, then each constraint allows two different symbols. Both constraints certainly contain the correct symbol; assume the correct symbol is 1{\displaystyle 1}. The other symbol can be equal or different between the two constraints. If the symbols are different, the correct symbol is determined. If the second symbol is equal, say 2{\displaystyle 2}, the output has cardinality 2{\displaystyle 2}, i.e. the symbols {1,2}{\displaystyle \left\{1,2\right\}}. Depending on the alphabet size q{\displaystyle q}, the probability of a unique output for the input cardinalities (2,2){\displaystyle (2,2)} is
p1(2,2)=1−1q−1{\displaystyle {\begin{aligned}p_{1}^{(2,2)}=1-{\frac {1}{q-1}}\end{aligned}}}
and for output of cardinality 2
p2(2,2)=1q−1.{\displaystyle {\begin{aligned}p_{2}^{(2,2)}={\frac {1}{q-1}}.\end{aligned}}}
For a standard 9×9{\displaystyle 9\times 9} sudoku this results in a probability of 7/8{\displaystyle 7/8} for a unique solution. An analogous calculation is done for all cardinality combinations, and in the end the distribution of output cardinalities is summed up from the results. Note that the order of the input cardinalities is interchangeable, so it is sufficient to calculate only non-decreasing cardinality combinations.
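For a standard alphabet of size q = 9, the two probabilities above can be evaluated directly:

```python
q = 9                        # alphabet size of a standard Sudoku
p_unique = 1 - 1 / (q - 1)   # output of cardinality 1: 7/8
p_card2 = 1 / (q - 1)        # output stays at cardinality 2: 1/8
print(p_unique, p_card2)     # prints 0.875 0.125
```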
For constraint nodes the procedure is similar, as illustrated by the following example for a 4×4{\displaystyle 4\times 4} standard Sudoku. The inputs to a constraint node are the possible symbols of the connected entry nodes; cardinality 1 means that the symbol of the source node is already determined. Again, analyzing only non-decreasing combinations is sufficient. Assume the true output value is 4 and the inputs have cardinalities (1,1,2){\displaystyle (1,1,2)} with the true symbols 1, 2 and 3. The messages with cardinality 1 are {1}{\displaystyle \left\{1\right\}} and {2}{\displaystyle \left\{2\right\}}. The message of cardinality 2 may be {1,3}{\displaystyle \left\{1,3\right\}}, {2,3}{\displaystyle \left\{2,3\right\}} or {3,4}{\displaystyle \left\{3,4\right\}}, as the true symbol 3 must be contained. In two of the three cases the output is the correct symbol 4 with cardinality 1: {1}{\displaystyle \left\{1\right\}}, {2}{\displaystyle \left\{2\right\}}, {1,3}{\displaystyle \left\{1,3\right\}} and {1}{\displaystyle \left\{1\right\}}, {2}{\displaystyle \left\{2\right\}}, {2,3}{\displaystyle \left\{2,3\right\}}. In one of the three cases the output cardinality is 2: {1}{\displaystyle \left\{1\right\}}, {2}{\displaystyle \left\{2\right\}}, {3,4}{\displaystyle \left\{3,4\right\}}, with output symbols {3,4}{\displaystyle \left\{3,4\right\}}. The final output cardinality distribution is obtained by summing over all possible input combinations. For a 4×4{\displaystyle 4\times 4} standard Sudoku these are 64 combinations, which can be grouped into 20 non-decreasing ones.[1]
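The combination count at the end of the example can be checked directly; this sketch (variable names ours) enumerates the cardinality tuples:

```python
from itertools import product
from math import comb

q = 4        # alphabet size of a 4x4 Sudoku
m = 3        # number of incoming messages at the constraint node

# Every input is a tuple of message cardinalities between 1 and q.
all_combos = list(product(range(1, q + 1), repeat=m))
# Order is interchangeable, so only sorted (non-decreasing) tuples matter.
nondecreasing = {tuple(sorted(c)) for c in all_combos}

print(len(all_combos), len(nondecreasing))  # 64 20
# The reduced count is the number of multisets of size m drawn from q values.
assert len(nondecreasing) == comb(q + m - 1, m)
```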
If the cardinality converges to 1, the decoding is error-free. To find the threshold, the erasure probability is increased until the decoding error remains positive for any number of iterations. With the method of Sayir,[1] density evolution recursions can be used to calculate thresholds for Sudoku codes up to an alphabet size of q=8{\displaystyle q=8}.
https://en.wikipedia.org/wiki/Sudoku_code
Inprobability theoryandstatistics,varianceis theexpected valueof thesquared deviation from the meanof arandom variable. Thestandard deviation(SD) is obtained as the square root of the variance. Variance is a measure ofdispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. It is the secondcentral momentof adistribution, and thecovarianceof the random variable with itself, and it is often represented byσ2{\displaystyle \sigma ^{2}},s2{\displaystyle s^{2}},Var(X){\displaystyle \operatorname {Var} (X)},V(X){\displaystyle V(X)}, orV(X){\displaystyle \mathbb {V} (X)}.[1]
An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from those of the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished. Another disadvantage is that the variance is not finite for many distributions.
There are two distinct concepts that are both called "variance". One, as discussed above, is part of a theoreticalprobability distributionand is defined by an equation. The other variance is a characteristic of a set of observations. When variance is calculated from observations, those observations are typically measured from a real-world system. If all possible observations of the system are present, then the calculated variance is called the population variance. Normally, however, only a subset is available, and the variance calculated from this is called the sample variance. The variance calculated from a sample is considered an estimate of the full population variance. There are multiple ways to calculate an estimate of the population variance, as discussed in the section below.
The two kinds of variance are closely related. To see how, consider that a theoretical probability distribution can be used as a generator of hypothetical observations. If an infinite number of observations are generated using a distribution, then the sample variance calculated from that infinite set will match the value calculated using the distribution's equation for variance. Variance has a central role in statistics, where some ideas that use it includedescriptive statistics,statistical inference,hypothesis testing,goodness of fit, andMonte Carlo sampling.
The variance of a random variableX{\displaystyle X}is theexpected valueof thesquared deviation from the meanofX{\displaystyle X},μ=E[X]{\displaystyle \mu =\operatorname {E} [X]}:Var(X)=E[(X−μ)2].{\displaystyle \operatorname {Var} (X)=\operatorname {E} \left[(X-\mu )^{2}\right].}This definition encompasses random variables that are generated by processes that arediscrete,continuous,neither, or mixed. The variance can also be thought of as thecovarianceof a random variable with itself:
Var(X)=Cov(X,X).{\displaystyle \operatorname {Var} (X)=\operatorname {Cov} (X,X).}The variance is also equivalent to the second cumulant of a probability distribution that generates X{\displaystyle X}. The variance is typically designated as Var(X){\displaystyle \operatorname {Var} (X)}, or sometimes as V(X){\displaystyle V(X)} or V(X){\displaystyle \mathbb {V} (X)}, or symbolically as σX2{\displaystyle \sigma _{X}^{2}} or simply σ2{\displaystyle \sigma ^{2}} (pronounced "sigma squared"). The expression for the variance can be expanded as follows:Var(X)=E[(X−E[X])2]=E[X2−2XE[X]+E[X]2]=E[X2]−2E[X]E[X]+E[X]2=E[X2]−2E[X]2+E[X]2=E[X2]−E[X]2{\displaystyle {\begin{aligned}\operatorname {Var} (X)&=\operatorname {E} \left[{\left(X-\operatorname {E} [X]\right)}^{2}\right]\\[4pt]&=\operatorname {E} \left[X^{2}-2X\operatorname {E} [X]+\operatorname {E} [X]^{2}\right]\\[4pt]&=\operatorname {E} \left[X^{2}\right]-2\operatorname {E} [X]\operatorname {E} [X]+\operatorname {E} [X]^{2}\\[4pt]&=\operatorname {E} \left[X^{2}\right]-2\operatorname {E} [X]^{2}+\operatorname {E} [X]^{2}\\[4pt]&=\operatorname {E} \left[X^{2}\right]-\operatorname {E} [X]^{2}\end{aligned}}}
In other words, the variance ofXis equal to the mean of the square ofXminus the square of the mean ofX. This equation should not be used for computations usingfloating-point arithmetic, because it suffers fromcatastrophic cancellationif the two components of the equation are similar in magnitude. For other numerically stable alternatives, seealgorithms for calculating variance.
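A minimal Python sketch (with illustrative data) of why the expanded form fails in floating point:

```python
def var_naive(xs):
    # E[X^2] - E[X]^2: mathematically correct, numerically unstable.
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

def var_two_pass(xs):
    # Mean of squared deviations: the numerically stable form.
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

# Small spread around a huge mean: the true (population) variance is 22.5.
data = [1e9 + x for x in (4.0, 7.0, 13.0, 16.0)]
print(var_two_pass(data))  # 22.5
print(var_naive(data))     # far from 22.5: the subtraction cancels catastrophically
```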
If the generator of random variableX{\displaystyle X}isdiscretewithprobability mass functionx1↦p1,x2↦p2,…,xn↦pn{\displaystyle x_{1}\mapsto p_{1},x_{2}\mapsto p_{2},\ldots ,x_{n}\mapsto p_{n}}, then
Var(X)=∑i=1npi⋅(xi−μ)2,{\displaystyle \operatorname {Var} (X)=\sum _{i=1}^{n}p_{i}\cdot {\left(x_{i}-\mu \right)}^{2},}
whereμ{\displaystyle \mu }is the expected value. That is,
μ=∑i=1npixi.{\displaystyle \mu =\sum _{i=1}^{n}p_{i}x_{i}.}
(When such a discreteweighted varianceis specified by weights whose sum is not 1, then one divides by the sum of the weights.)
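The parenthetical remark about unnormalized weights can be sketched as follows (the function name is ours):

```python
def weighted_variance(values, weights):
    # When the weights do not sum to 1, divide by their sum.
    w = sum(weights)
    mu = sum(wi * xi for wi, xi in zip(weights, values)) / w
    return sum(wi * (xi - mu) ** 2 for wi, xi in zip(weights, values)) / w

# A weight of 2 on the first value acts like observing that value twice.
print(weighted_variance([1, 4, 7], [2, 1, 1]))        # 6.1875
print(weighted_variance([1, 1, 4, 7], [1, 1, 1, 1]))  # 6.1875
```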
The variance of a collection ofn{\displaystyle n}equally likely values can be written as
Var(X)=1n∑i=1n(xi−μ)2{\displaystyle \operatorname {Var} (X)={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}}
whereμ{\displaystyle \mu }is the average value. That is,
μ=1n∑i=1nxi.{\displaystyle \mu ={\frac {1}{n}}\sum _{i=1}^{n}x_{i}.}
The variance of a set of n{\displaystyle n} equally likely values can be equivalently expressed, without directly referring to the mean, in terms of the pairwise squared distances between points:[2]
Var(X)=1n2∑i=1n∑j=1n12(xi−xj)2=1n2∑i∑j>i(xi−xj)2.{\displaystyle \operatorname {Var} (X)={\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}{\frac {1}{2}}{\left(x_{i}-x_{j}\right)}^{2}={\frac {1}{n^{2}}}\sum _{i}\sum _{j>i}{\left(x_{i}-x_{j}\right)}^{2}.}
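Both forms agree on any data set; a quick sketch with an illustrative sample:

```python
def var_mean_based(xs):
    n = len(xs)
    mu = sum(xs) / n
    return sum((x - mu) ** 2 for x in xs) / n

def var_pairwise(xs):
    # Half the average squared distance over all ordered pairs; no mean needed.
    n = len(xs)
    return sum((xs[i] - xs[j]) ** 2
               for i in range(n) for j in range(n)) / (2 * n * n)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(var_mean_based(data), var_pairwise(data))  # 4.0 4.0
```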
If the random variableX{\displaystyle X}has aprobability density functionf(x){\displaystyle f(x)}, andF(x){\displaystyle F(x)}is the correspondingcumulative distribution function, then
Var(X)=σ2=∫R(x−μ)2f(x)dx=∫Rx2f(x)dx−2μ∫Rxf(x)dx+μ2∫Rf(x)dx=∫Rx2dF(x)−2μ∫RxdF(x)+μ2∫RdF(x)=∫Rx2dF(x)−2μ⋅μ+μ2⋅1=∫Rx2dF(x)−μ2,{\displaystyle {\begin{aligned}\operatorname {Var} (X)=\sigma ^{2}&=\int _{\mathbb {R} }{\left(x-\mu \right)}^{2}f(x)\,dx\\[4pt]&=\int _{\mathbb {R} }x^{2}f(x)\,dx-2\mu \int _{\mathbb {R} }xf(x)\,dx+\mu ^{2}\int _{\mathbb {R} }f(x)\,dx\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-2\mu \int _{\mathbb {R} }x\,dF(x)+\mu ^{2}\int _{\mathbb {R} }\,dF(x)\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-2\mu \cdot \mu +\mu ^{2}\cdot 1\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-\mu ^{2},\end{aligned}}}
or equivalently,
Var(X)=∫Rx2f(x)dx−μ2,{\displaystyle \operatorname {Var} (X)=\int _{\mathbb {R} }x^{2}f(x)\,dx-\mu ^{2},}
whereμ{\displaystyle \mu }is the expected value ofX{\displaystyle X}given by
μ=∫Rxf(x)dx=∫RxdF(x).{\displaystyle \mu =\int _{\mathbb {R} }xf(x)\,dx=\int _{\mathbb {R} }x\,dF(x).}
In these formulas, the integrals with respect todx{\displaystyle dx}anddF(x){\displaystyle dF(x)}areLebesgueandLebesgue–Stieltjesintegrals, respectively.
If the functionx2f(x){\displaystyle x^{2}f(x)}isRiemann-integrableon every finite interval[a,b]⊂R,{\displaystyle [a,b]\subset \mathbb {R} ,}then
Var(X)=∫−∞+∞x2f(x)dx−μ2,{\displaystyle \operatorname {Var} (X)=\int _{-\infty }^{+\infty }x^{2}f(x)\,dx-\mu ^{2},}
where the integral is animproper Riemann integral.
Theexponential distributionwith parameterλ> 0 is a continuous distribution whoseprobability density functionis given byf(x)=λe−λx{\displaystyle f(x)=\lambda e^{-\lambda x}}on the interval[0, ∞). Its mean can be shown to beE[X]=∫0∞xλe−λxdx=1λ.{\displaystyle \operatorname {E} [X]=\int _{0}^{\infty }x\lambda e^{-\lambda x}\,dx={\frac {1}{\lambda }}.}
Usingintegration by partsand making use of the expected value already calculated, we have:E[X2]=∫0∞x2λe−λxdx=[−x2e−λx]0∞+∫0∞2xe−λxdx=0+2λE[X]=2λ2.{\displaystyle {\begin{aligned}\operatorname {E} \left[X^{2}\right]&=\int _{0}^{\infty }x^{2}\lambda e^{-\lambda x}\,dx\\&={\left[-x^{2}e^{-\lambda x}\right]}_{0}^{\infty }+\int _{0}^{\infty }2xe^{-\lambda x}\,dx\\&=0+{\frac {2}{\lambda }}\operatorname {E} [X]\\&={\frac {2}{\lambda ^{2}}}.\end{aligned}}}
Thus, the variance ofXis given byVar(X)=E[X2]−E[X]2=2λ2−(1λ)2=1λ2.{\displaystyle \operatorname {Var} (X)=\operatorname {E} \left[X^{2}\right]-\operatorname {E} [X]^{2}={\frac {2}{\lambda ^{2}}}-\left({\frac {1}{\lambda }}\right)^{2}={\frac {1}{\lambda ^{2}}}.}
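These closed forms are easy to check by simulation (sample size and seed are arbitrary):

```python
import random

lam = 2.0
random.seed(0)
samples = [random.expovariate(lam) for _ in range(100_000)]

n = len(samples)
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n
print(mean, var)  # close to 1/lam = 0.5 and 1/lam**2 = 0.25
```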
A fairsix-sided diecan be modeled as a discrete random variable,X, with outcomes 1 through 6, each with equal probability 1/6. The expected value ofXis(1+2+3+4+5+6)/6=7/2.{\displaystyle (1+2+3+4+5+6)/6=7/2.}Therefore, the variance ofXisVar(X)=∑i=1616(i−72)2=16((−5/2)2+(−3/2)2+(−1/2)2+(1/2)2+(3/2)2+(5/2)2)=3512≈2.92.{\displaystyle {\begin{aligned}\operatorname {Var} (X)&=\sum _{i=1}^{6}{\frac {1}{6}}\left(i-{\frac {7}{2}}\right)^{2}\\[5pt]&={\frac {1}{6}}\left((-5/2)^{2}+(-3/2)^{2}+(-1/2)^{2}+(1/2)^{2}+(3/2)^{2}+(5/2)^{2}\right)\\[5pt]&={\frac {35}{12}}\approx 2.92.\end{aligned}}}
The general formula for the variance of the outcome,X, of ann-sideddie isVar(X)=E(X2)−(E(X))2=1n∑i=1ni2−(1n∑i=1ni)2=(n+1)(2n+1)6−(n+12)2=n2−112.{\displaystyle {\begin{aligned}\operatorname {Var} (X)&=\operatorname {E} \left(X^{2}\right)-(\operatorname {E} (X))^{2}\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}i^{2}-\left({\frac {1}{n}}\sum _{i=1}^{n}i\right)^{2}\\[5pt]&={\frac {(n+1)(2n+1)}{6}}-\left({\frac {n+1}{2}}\right)^{2}\\[4pt]&={\frac {n^{2}-1}{12}}.\end{aligned}}}
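The closed form can be checked against the direct definition (a short sketch):

```python
def die_variance_closed_form(n):
    # (n^2 - 1) / 12 for a fair n-sided die
    return (n * n - 1) / 12

def die_variance_direct(n):
    outcomes = range(1, n + 1)
    mu = sum(outcomes) / n
    return sum((i - mu) ** 2 for i in outcomes) / n

for n in (2, 6, 20):
    assert abs(die_variance_closed_form(n) - die_variance_direct(n)) < 1e-9
print(die_variance_closed_form(6))  # 2.9166... = 35/12
```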
The following table lists the variance for some commonly used probability distributions.
Variance is non-negative because the squares are positive or zero:Var(X)≥0.{\displaystyle \operatorname {Var} (X)\geq 0.}
The variance of a constant is zero.Var(a)=0.{\displaystyle \operatorname {Var} (a)=0.}
Conversely, if the variance of a random variable is 0, then it isalmost surelya constant. That is, it always has the same value:Var(X)=0⟺∃a:P(X=a)=1.{\displaystyle \operatorname {Var} (X)=0\iff \exists a:P(X=a)=1.}
If a distribution does not have a finite expected value, as is the case for theCauchy distribution, then the variance cannot be finite either. However, some distributions may not have a finite variance, despite their expected value being finite. An example is aPareto distributionwhoseindexk{\displaystyle k}satisfies1<k≤2.{\displaystyle 1<k\leq 2.}
The general formula for variance decomposition or thelaw of total varianceis: IfX{\displaystyle X}andY{\displaystyle Y}are two random variables, and the variance ofX{\displaystyle X}exists, then
Var[X]=E(Var[X∣Y])+Var(E[X∣Y]).{\displaystyle \operatorname {Var} [X]=\operatorname {E} (\operatorname {Var} [X\mid Y])+\operatorname {Var} (\operatorname {E} [X\mid Y]).}
Theconditional expectationE(X∣Y){\displaystyle \operatorname {E} (X\mid Y)}ofX{\displaystyle X}givenY{\displaystyle Y}, and theconditional varianceVar(X∣Y){\displaystyle \operatorname {Var} (X\mid Y)}may be understood as follows. Given any particular valueyof the random variableY, there is a conditional expectationE(X∣Y=y){\displaystyle \operatorname {E} (X\mid Y=y)}given the eventY=y. This quantity depends on the particular valuey; it is a functiong(y)=E(X∣Y=y){\displaystyle g(y)=\operatorname {E} (X\mid Y=y)}. That same function evaluated at the random variableYis the conditional expectationE(X∣Y)=g(Y).{\displaystyle \operatorname {E} (X\mid Y)=g(Y).}
In particular, ifY{\displaystyle Y}is a discrete random variable assuming possible valuesy1,y2,y3…{\displaystyle y_{1},y_{2},y_{3}\ldots }with corresponding probabilitiesp1,p2,p3…,{\displaystyle p_{1},p_{2},p_{3}\ldots ,}, then in the formula for total variance, the first term on the right-hand side becomes
E(Var[X∣Y])=∑ipiσi2,{\displaystyle \operatorname {E} (\operatorname {Var} [X\mid Y])=\sum _{i}p_{i}\sigma _{i}^{2},}
whereσi2=Var[X∣Y=yi]{\displaystyle \sigma _{i}^{2}=\operatorname {Var} [X\mid Y=y_{i}]}. Similarly, the second term on the right-hand side becomes
Var(E[X∣Y])=∑ipiμi2−(∑ipiμi)2=∑ipiμi2−μ2,{\displaystyle \operatorname {Var} (\operatorname {E} [X\mid Y])=\sum _{i}p_{i}\mu _{i}^{2}-\left(\sum _{i}p_{i}\mu _{i}\right)^{2}=\sum _{i}p_{i}\mu _{i}^{2}-\mu ^{2},}
whereμi=E[X∣Y=yi]{\displaystyle \mu _{i}=\operatorname {E} [X\mid Y=y_{i}]}andμ=∑ipiμi{\displaystyle \mu =\sum _{i}p_{i}\mu _{i}}. Thus the total variance is given by
Var[X]=∑ipiσi2+(∑ipiμi2−μ2).{\displaystyle \operatorname {Var} [X]=\sum _{i}p_{i}\sigma _{i}^{2}+\left(\sum _{i}p_{i}\mu _{i}^{2}-\mu ^{2}\right).}
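A worked discrete example of this decomposition (the two-group distribution is made up for illustration):

```python
# Y picks a group: Y = 0 with prob 0.4 (X uniform on {1,2,3}),
#                  Y = 1 with prob 0.6 (X uniform on {10,14}).
groups = [(0.4, [1.0, 2.0, 3.0]),
          (0.6, [10.0, 14.0])]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

ps = [p for p, _ in groups]
mus = [mean(xs) for _, xs in groups]    # mu_i = E[X | Y = y_i]
sig2 = [var(xs) for _, xs in groups]    # sigma_i^2 = Var[X | Y = y_i]

mu = sum(p * m for p, m in zip(ps, mus))
within = sum(p * s2 for p, s2 in zip(ps, sig2))               # E(Var[X|Y])
between = sum(p * m * m for p, m in zip(ps, mus)) - mu ** 2   # Var(E[X|Y])

# Direct variance over the full distribution of X.
total = sum(p / len(xs) * (x - mu) ** 2 for p, xs in groups for x in xs)
print(within, between, total)  # total equals within + between
assert abs(total - (within + between)) < 1e-12
```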
A similar formula is applied inanalysis of variance, where the corresponding formula is
MStotal=MSbetween+MSwithin;{\displaystyle {\mathit {MS}}_{\text{total}}={\mathit {MS}}_{\text{between}}+{\mathit {MS}}_{\text{within}};}
hereMS{\displaystyle {\mathit {MS}}}refers to the Mean of the Squares. Inlinear regressionanalysis the corresponding formula is
MStotal=MSregression+MSresidual.{\displaystyle {\mathit {MS}}_{\text{total}}={\mathit {MS}}_{\text{regression}}+{\mathit {MS}}_{\text{residual}}.}
This can also be derived from the additivity of variances, since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated.
Similar decompositions are possible for the sum of squared deviations (sum of squares,SS{\displaystyle {\mathit {SS}}}):SStotal=SSbetween+SSwithin,{\displaystyle {\mathit {SS}}_{\text{total}}={\mathit {SS}}_{\text{between}}+{\mathit {SS}}_{\text{within}},}SStotal=SSregression+SSresidual.{\displaystyle {\mathit {SS}}_{\text{total}}={\mathit {SS}}_{\text{regression}}+{\mathit {SS}}_{\text{residual}}.}
The population variance for a non-negative random variable can be expressed in terms of thecumulative distribution functionFusing
2∫0∞u(1−F(u))du−[∫0∞(1−F(u))du]2.{\displaystyle 2\int _{0}^{\infty }u(1-F(u))\,du-{\left[\int _{0}^{\infty }(1-F(u))\,du\right]}^{2}.}
This expression can be used to calculate the variance in situations where the CDF, but not thedensity, can be conveniently expressed.
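As a numerical sketch (quadrature step and truncation point arbitrary), applying the formula to the exponential tail 1 − F(u) = e^(−λu) recovers the variance 1/λ²:

```python
import math

lam = 2.0
tail = lambda u: math.exp(-lam * u)   # 1 - F(u) for the exponential distribution

def integrate(f, a, b, n=100_000):
    # Midpoint rule on a truncated interval (the tail beyond b is negligible).
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

upper = 20.0
first = integrate(lambda u: u * tail(u), 0.0, upper)   # equals E[X^2] / 2
second = integrate(tail, 0.0, upper)                   # equals E[X]
var = 2.0 * first - second ** 2
print(var)  # close to 1 / lam**2 = 0.25
```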
The secondmomentof a random variable attains the minimum value when taken around the first moment (i.e., mean) of the random variable, i.e.argminmE((X−m)2)=E(X){\displaystyle \mathrm {argmin} _{m}\,\mathrm {E} \left(\left(X-m\right)^{2}\right)=\mathrm {E} (X)}. Conversely, if a continuous functionφ{\displaystyle \varphi }satisfiesargminmE(φ(X−m))=E(X){\displaystyle \mathrm {argmin} _{m}\,\mathrm {E} (\varphi (X-m))=\mathrm {E} (X)}for all random variablesX, then it is necessarily of the formφ(x)=ax2+b{\displaystyle \varphi (x)=ax^{2}+b}, wherea> 0. This also holds in the multidimensional case.[3]
Unlike theexpected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in meters squared. For this reason, describing data sets via theirstandard deviationorroot mean square deviationis often preferred over using the variance. In the dice example the standard deviation is√2.9≈ 1.7, slightly larger than the expected absolute deviation of 1.5.
The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalizationcovariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be morerobustas it is less sensitive tooutliersarising frommeasurement anomaliesor an undulyheavy-tailed distribution.
Variance isinvariantwith respect to changes in alocation parameter. That is, if a constant is added to all values of the variable, the variance is unchanged:Var(X+a)=Var(X).{\displaystyle \operatorname {Var} (X+a)=\operatorname {Var} (X).}
If all values are scaled by a constant, the variance isscaledby the square of that constant:Var(aX)=a2Var(X).{\displaystyle \operatorname {Var} (aX)=a^{2}\operatorname {Var} (X).}
The variance of a sum of two random variables is given byVar(aX+bY)=a2Var(X)+b2Var(Y)+2abCov(X,Y)Var(aX−bY)=a2Var(X)+b2Var(Y)−2abCov(X,Y){\displaystyle {\begin{aligned}\operatorname {Var} (aX+bY)&=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)+2ab\,\operatorname {Cov} (X,Y)\\[1ex]\operatorname {Var} (aX-bY)&=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)-2ab\,\operatorname {Cov} (X,Y)\end{aligned}}}
whereCov(X,Y){\displaystyle \operatorname {Cov} (X,Y)}is thecovariance.
In general, for the sum ofN{\displaystyle N}random variables{X1,…,XN}{\displaystyle \{X_{1},\dots ,X_{N}\}}, the variance becomes:Var(∑i=1NXi)=∑i,j=1NCov(Xi,Xj)=∑i=1NVar(Xi)+∑i,j=1,i≠jNCov(Xi,Xj),{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\sum _{i,j=1}^{N}\operatorname {Cov} (X_{i},X_{j})=\sum _{i=1}^{N}\operatorname {Var} (X_{i})+\sum _{i,j=1,i\neq j}^{N}\operatorname {Cov} (X_{i},X_{j}),}see also generalBienaymé's identity.
These results lead to the variance of alinear combinationas:
Var(∑i=1NaiXi)=∑i,j=1NaiajCov(Xi,Xj)=∑i=1Nai2Var(Xi)+∑i≠jaiajCov(Xi,Xj)=∑i=1Nai2Var(Xi)+2∑1≤i<j≤NaiajCov(Xi,Xj).{\displaystyle {\begin{aligned}\operatorname {Var} \left(\sum _{i=1}^{N}a_{i}X_{i}\right)&=\sum _{i,j=1}^{N}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})\\&=\sum _{i=1}^{N}a_{i}^{2}\operatorname {Var} (X_{i})+\sum _{i\neq j}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})\\&=\sum _{i=1}^{N}a_{i}^{2}\operatorname {Var} (X_{i})+2\sum _{1\leq i<j\leq N}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j}).\end{aligned}}}
If the random variablesX1,…,XN{\displaystyle X_{1},\dots ,X_{N}}are such thatCov(Xi,Xj)=0,∀(i≠j),{\displaystyle \operatorname {Cov} (X_{i},X_{j})=0\ ,\ \forall \ (i\neq j),}then they are said to beuncorrelated. It follows immediately from the expression given earlier that if the random variablesX1,…,XN{\displaystyle X_{1},\dots ,X_{N}}are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically:
Var(∑i=1NXi)=∑i=1NVar(Xi).{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\sum _{i=1}^{N}\operatorname {Var} (X_{i}).}
Since independent random variables are always uncorrelated (seeCovariance § Uncorrelatedness and independence), the equation above holds in particular when the random variablesX1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}are independent. Thus, independence is sufficient but not necessary for the variance of the sum to equal the sum of the variances.
DefineX{\displaystyle X}as a column vector ofn{\displaystyle n}random variablesX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}, andc{\displaystyle c}as a column vector ofn{\displaystyle n}scalarsc1,…,cn{\displaystyle c_{1},\ldots ,c_{n}}. Therefore,cTX{\displaystyle c^{\mathsf {T}}X}is alinear combinationof these random variables, wherecT{\displaystyle c^{\mathsf {T}}}denotes thetransposeofc{\displaystyle c}. Also letΣ{\displaystyle \Sigma }be thecovariance matrixofX{\displaystyle X}. The variance ofcTX{\displaystyle c^{\mathsf {T}}X}is then given by:[4]
Var(cTX)=cTΣc.{\displaystyle \operatorname {Var} \left(c^{\mathsf {T}}X\right)=c^{\mathsf {T}}\Sigma c.}
This implies that the variance of the mean can be written as (with a column vector of ones)
Var(x¯)=Var(1n1′X)=1n21′Σ1.{\displaystyle \operatorname {Var} \left({\bar {x}}\right)=\operatorname {Var} \left({\frac {1}{n}}1'X\right)={\frac {1}{n^{2}}}1'\Sigma 1.}
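A small sketch of this quadratic form (the covariance matrix is made up for illustration):

```python
def quadratic_form(c, sigma):
    # Var(c^T X) = c^T Sigma c
    n = len(c)
    return sum(c[i] * sigma[i][j] * c[j]
               for i in range(n) for j in range(n))

# An illustrative 3x3 covariance matrix.
sigma = [[4.0, 1.0, 0.0],
         [1.0, 9.0, 2.0],
         [0.0, 2.0, 16.0]]

n = 3
# Variance of the mean of the three variables: (1/n^2) * 1' Sigma 1.
print(quadratic_form([1.0 / n] * n, sigma))       # 35/9 = 3.888...
print(quadratic_form([1.0] * n, sigma) / n ** 2)  # same value
```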
One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) ofuncorrelatedrandom variables is the sum of their variances:
Var(∑i=1nXi)=∑i=1nVar(Xi).{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)=\sum _{i=1}^{n}\operatorname {Var} (X_{i}).}
This statement is called theBienayméformula[5]and was discovered in 1853.[6][7]It is often made with the stronger condition that the variables areindependent, but being uncorrelated suffices. So if all the variables have the same variance σ2, then, since division bynis a linear transformation, this formula immediately implies that the variance of their mean is
Var(X¯)=Var(1n∑i=1nXi)=1n2∑i=1nVar(Xi)=1n2nσ2=σ2n.{\displaystyle \operatorname {Var} \left({\overline {X}}\right)=\operatorname {Var} \left({\frac {1}{n}}\sum _{i=1}^{n}X_{i}\right)={\frac {1}{n^{2}}}\sum _{i=1}^{n}\operatorname {Var} \left(X_{i}\right)={\frac {1}{n^{2}}}n\sigma ^{2}={\frac {\sigma ^{2}}{n}}.}
That is, the variance of the mean decreases whennincreases. This formula for the variance of the mean is used in the definition of thestandard errorof the sample mean, which is used in thecentral limit theorem.
To prove the initial statement, it suffices to show that
Var(X+Y)=Var(X)+Var(Y).{\displaystyle \operatorname {Var} (X+Y)=\operatorname {Var} (X)+\operatorname {Var} (Y).}
The general result then follows by induction. Starting with the definition,
Var(X+Y)=E[(X+Y)2]−(E[X+Y])2=E[X2+2XY+Y2]−(E[X]+E[Y])2.{\displaystyle {\begin{aligned}\operatorname {Var} (X+Y)&=\operatorname {E} \left[(X+Y)^{2}\right]-(\operatorname {E} [X+Y])^{2}\\[5pt]&=\operatorname {E} \left[X^{2}+2XY+Y^{2}\right]-(\operatorname {E} [X]+\operatorname {E} [Y])^{2}.\end{aligned}}}
Using the linearity of theexpectation operatorand the assumption of independence (or uncorrelatedness) ofXandY, this further simplifies as follows:
Var(X+Y)=E[X2]+2E[XY]+E[Y2]−(E[X]2+2E[X]E[Y]+E[Y]2)=E[X2]+E[Y2]−E[X]2−E[Y]2=Var(X)+Var(Y).{\displaystyle {\begin{aligned}\operatorname {Var} (X+Y)&=\operatorname {E} {\left[X^{2}\right]}+2\operatorname {E} [XY]+\operatorname {E} {\left[Y^{2}\right]}-\left(\operatorname {E} [X]^{2}+2\operatorname {E} [X]\operatorname {E} [Y]+\operatorname {E} [Y]^{2}\right)\\[5pt]&=\operatorname {E} \left[X^{2}\right]+\operatorname {E} \left[Y^{2}\right]-\operatorname {E} [X]^{2}-\operatorname {E} [Y]^{2}\\[5pt]&=\operatorname {Var} (X)+\operatorname {Var} (Y).\end{aligned}}}
In general, the variance of the sum ofnvariables is the sum of theircovariances:
Var(∑i=1nXi)=∑i=1n∑j=1nCov(Xi,Xj)=∑i=1nVar(Xi)+2∑1≤i<j≤nCov(Xi,Xj).{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)=\sum _{i=1}^{n}\sum _{j=1}^{n}\operatorname {Cov} \left(X_{i},X_{j}\right)=\sum _{i=1}^{n}\operatorname {Var} \left(X_{i}\right)+2\sum _{1\leq i<j\leq n}\operatorname {Cov} \left(X_{i},X_{j}\right).}
(Note: The second equality comes from the fact thatCov(Xi,Xi) = Var(Xi).)
Here,Cov(⋅,⋅){\displaystyle \operatorname {Cov} (\cdot ,\cdot )}is thecovariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. The next expression states equivalently that the variance of the sum is the sum of the diagonal of covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric. This formula is used in the theory ofCronbach's alphainclassical test theory.
So, if the variables have equal varianceσ2and the averagecorrelationof distinct variables isρ, then the variance of their mean is
Var(X¯)=σ2n+n−1nρσ2.{\displaystyle \operatorname {Var} \left({\overline {X}}\right)={\frac {\sigma ^{2}}{n}}+{\frac {n-1}{n}}\rho \sigma ^{2}.}
This implies that the variance of the mean increases with the average of the correlations. In other words, additional correlated observations are not as effective as additional independent observations at reducing theuncertainty of the mean. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to
Var(X¯)=1n+n−1nρ.{\displaystyle \operatorname {Var} \left({\overline {X}}\right)={\frac {1}{n}}+{\frac {n-1}{n}}\rho .}
This formula is used in theSpearman–Brown prediction formulaof classical test theory. This converges toρifngoes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have
limn→∞Var(X¯)=ρ.{\displaystyle \lim _{n\to \infty }\operatorname {Var} \left({\overline {X}}\right)=\rho .}
Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though thelaw of large numbersstates that the sample mean will converge for independent variables.
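A short numerical sketch of this limit for an illustrative ρ = 0.3:

```python
def var_of_mean(n, rho):
    # Variance of the mean of n standardized variables with common correlation rho.
    return 1.0 / n + (n - 1) / n * rho

rho = 0.3
for n in (1, 10, 100, 10_000):
    print(n, var_of_mean(n, rho))
# The variance of the mean approaches rho = 0.3, not 0, as n grows.
```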
There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample sizeNis a random variable whose variation adds to the variation ofX, such that,[8]Var(∑i=1NXi)=E[N]Var(X)+Var(N)(E[X])2{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\operatorname {E} \left[N\right]\operatorname {Var} (X)+\operatorname {Var} (N)(\operatorname {E} \left[X\right])^{2}}which follows from thelaw of total variance.
IfNhas aPoisson distribution, thenE[N]=Var(N){\displaystyle \operatorname {E} [N]=\operatorname {Var} (N)}with estimatorn=N. So, the estimator ofVar(∑i=1nXi){\displaystyle \operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)}becomesnSx2+nX¯2{\displaystyle n{S_{x}}^{2}+n{\bar {X}}^{2}}, givingSE(X¯)=Sx2+X¯2n{\displaystyle \operatorname {SE} ({\bar {X}})={\sqrt {\frac {{S_{x}}^{2}+{\bar {X}}^{2}}{n}}}}(seestandard error of the sample mean).
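The random-N formula can be verified exactly on a tiny discrete example (the distributions are illustrative):

```python
from itertools import product

pN = {0: 0.2, 1: 0.5, 2: 0.3}   # distribution of the random count N
pX = {1.0: 0.5, 3.0: 0.5}       # distribution of each i.i.d. term X

EN = sum(n * p for n, p in pN.items())
VN = sum(n * n * p for n, p in pN.items()) - EN ** 2
EX = sum(x * p for x, p in pX.items())
VX = sum(x * x * p for x, p in pX.items()) - EX ** 2

# Enumerate the full distribution of S = X_1 + ... + X_N.
dist = {}
for n, pn in pN.items():
    for combo in product(pX, repeat=n):
        p = pn
        for x in combo:
            p *= pX[x]
        s = sum(combo)
        dist[s] = dist.get(s, 0.0) + p

ES = sum(s * p for s, p in dist.items())
VS = sum(s * s * p for s, p in dist.items()) - ES ** 2

print(VS, EN * VX + VN * EX ** 2)  # both equal E[N]Var(X) + Var(N)E[X]^2
```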
The scaling property and the Bienaymé formula, along with the property of thecovarianceCov(aX,bY) =abCov(X,Y)jointly imply that
Var(aX±bY)=a2Var(X)+b2Var(Y)±2abCov(X,Y).{\displaystyle \operatorname {Var} (aX\pm bY)=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)\pm 2ab\,\operatorname {Cov} (X,Y).}
This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionately large weight in the variance of the total. For example, if X and Y are uncorrelated and the weight of X is two times the weight of Y, then the weight of the variance of X will be four times the weight of the variance of Y.
The expression above can be extended to a weighted sum of multiple variables:
Var(∑i=1naiXi)=∑i=1nai2Var(Xi)+2∑1≤i<j≤naiajCov(Xi,Xj){\displaystyle \operatorname {Var} \left(\sum _{i=1}^{n}a_{i}X_{i}\right)=\sum _{i=1}^{n}a_{i}^{2}\operatorname {Var} (X_{i})+2\sum _{1\leq i<j\leq n}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})}
If two variables X and Y areindependent, the variance of their product is given by[9]Var(XY)=[E(X)]2Var(Y)+[E(Y)]2Var(X)+Var(X)Var(Y).{\displaystyle \operatorname {Var} (XY)=[\operatorname {E} (X)]^{2}\operatorname {Var} (Y)+[\operatorname {E} (Y)]^{2}\operatorname {Var} (X)+\operatorname {Var} (X)\operatorname {Var} (Y).}
Equivalently, using the basic properties of expectation, it is given by
Var(XY)=E(X2)E(Y2)−[E(X)]2[E(Y)]2.{\displaystyle \operatorname {Var} (XY)=\operatorname {E} \left(X^{2}\right)\operatorname {E} \left(Y^{2}\right)-[\operatorname {E} (X)]^{2}[\operatorname {E} (Y)]^{2}.}
In general, if two variables are statistically dependent, then the variance of their product is given by:Var(XY)=E[X2Y2]−[E(XY)]2=Cov(X2,Y2)+E(X2)E(Y2)−[E(XY)]2=Cov(X2,Y2)+(Var(X)+[E(X)]2)(Var(Y)+[E(Y)]2)−[Cov(X,Y)+E(X)E(Y)]2{\displaystyle {\begin{aligned}\operatorname {Var} (XY)={}&\operatorname {E} \left[X^{2}Y^{2}\right]-[\operatorname {E} (XY)]^{2}\\[5pt]={}&\operatorname {Cov} \left(X^{2},Y^{2}\right)+\operatorname {E} (X^{2})\operatorname {E} \left(Y^{2}\right)-[\operatorname {E} (XY)]^{2}\\[5pt]={}&\operatorname {Cov} \left(X^{2},Y^{2}\right)+\left(\operatorname {Var} (X)+[\operatorname {E} (X)]^{2}\right)\left(\operatorname {Var} (Y)+[\operatorname {E} (Y)]^{2}\right)\\[5pt]&-[\operatorname {Cov} (X,Y)+\operatorname {E} (X)\operatorname {E} (Y)]^{2}\end{aligned}}}
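A tiny exact check of the independent-product formula (the two distributions are made up for illustration):

```python
from itertools import product

X = [1.0, 2.0]   # uniform on {1, 2}
Y = [0.0, 3.0]   # uniform on {0, 3}, independent of X

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Direct variance of XY over the joint distribution (independence = product measure).
direct = var([x * y for x, y in product(X, Y)])

# Formula for independent variables.
formula = mean(X) ** 2 * var(Y) + mean(Y) ** 2 * var(X) + var(X) * var(Y)
print(direct, formula)  # 6.1875 6.1875
```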
Thedelta methoduses second-orderTaylor expansionsto approximate the variance of a function of one or more random variables: seeTaylor expansions for the moments of functions of random variables. For example, the approximate variance of a function of one variable is given by
Var[f(X)]≈(f′(E[X]))2Var[X]{\displaystyle \operatorname {Var} \left[f(X)\right]\approx \left(f'(\operatorname {E} \left[X\right])\right)^{2}\operatorname {Var} \left[X\right]}
provided thatfis twice differentiable and that the mean and variance ofXare finite.
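A numerical sketch with f(x) = x² (the values of the mean and variance are chosen for illustration):

```python
mu, sigma = 10.0, 0.1

# Delta method with f(x) = x^2, so f'(mu) = 2*mu.
approx = (2 * mu) ** 2 * sigma ** 2

# Exact value for X ~ Normal(mu, sigma^2): Var(X^2) = 4*mu^2*sigma^2 + 2*sigma^4.
exact = 4 * mu ** 2 * sigma ** 2 + 2 * sigma ** 4
print(approx, exact)  # close, because sigma is small relative to mu
```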
Real-world observations such as the measurements of yesterday's rain throughout the day typically cannot be complete sets of all possible observations that could be made. As such, the variance calculated from the finite set will in general not match the variance that would have been calculated from the full population of possible observations. This means that oneestimatesthe mean and variance from a limited set of observations by using anestimatorequation. The estimator is a function of thesampleofnobservationsdrawn without observational bias from the wholepopulationof potential observations. In this example, the sample would be the set of actual measurements of yesterday's rainfall from available rain gauges within the geography of interest.
The simplest estimators for population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance – these are consistent estimators (they converge to the value of the whole population as the number of samples increases) but can be improved. Most simply, the sample variance is computed as the sum of squared deviations about the (sample) mean, divided by n as the number of samples. However, using values other than n improves the estimator in various ways. Four common values for the denominator are n, n − 1, n + 1, and n − 1.5: n is the simplest (the variance of the sample), n − 1 eliminates bias,[10] n + 1 minimizes mean squared error for the normal distribution,[11] and n − 1.5 mostly eliminates bias in unbiased estimation of standard deviation for the normal distribution.[12]
Firstly, if the true population mean is unknown, then the sample variance (which uses the sample mean in place of the true mean) is a biased estimator: it underestimates the variance by a factor of (n − 1) / n; correcting this factor, resulting in the sum of squared deviations about the sample mean divided by n − 1 instead of n, is called Bessel's correction.[10] The resulting estimator is unbiased and is called the (corrected) sample variance or unbiased sample variance. If the mean is determined in some other way than from the same samples used to estimate the variance, then this bias does not arise, and the variance can safely be estimated as that of the samples about the (independently known) mean.
Secondly, the sample variance does not generally minimize mean squared error between sample variance and population variance. Correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on the excess kurtosis of the population (see mean squared error: variance) and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger than n − 1) and is a simple example of a shrinkage estimator: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing by n + 1 (instead of n − 1 or n) minimizes mean squared error.[11] The resulting estimator is biased, however, and is known as the biased sample variance.
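The four denominators can be compared directly on a small sample; this sketch (helper name and data are illustrative) divides the same sum of squared deviations by each choice:

```python
def variance_with_denominator(data, d):
    # Sum of squared deviations about the sample mean, divided by d
    n = len(data)
    mean = sum(data) / n
    ss = sum((x - mean) ** 2 for x in data)
    return ss / d

data = [2, 4, 4, 4, 5, 5, 7, 9]   # n = 8, sum of squared deviations = 32
n = len(data)
results = {d: variance_with_denominator(data, d)
           for d in (n, n - 1, n + 1, n - 1.5)}
# n → 32/8 = 4.0 (biased), n-1 → 32/7 (unbiased), n+1 → 32/9, n-1.5 → 32/6.5
```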
In general, thepopulation varianceof afinitepopulationof sizeNwith valuesxiis given byσ2=1N∑i=1N(xi−μ)2=1N∑i=1N(xi2−2μxi+μ2)=(1N∑i=1Nxi2)−2μ(1N∑i=1Nxi)+μ2=E[xi2]−μ2{\displaystyle {\begin{aligned}\sigma ^{2}&={\frac {1}{N}}\sum _{i=1}^{N}{\left(x_{i}-\mu \right)}^{2}={\frac {1}{N}}\sum _{i=1}^{N}\left(x_{i}^{2}-2\mu x_{i}+\mu ^{2}\right)\\[5pt]&=\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-2\mu \left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)+\mu ^{2}\\[5pt]&=\operatorname {E} [x_{i}^{2}]-\mu ^{2}\end{aligned}}}
where the population mean is μ=E[xi]=1N∑i=1Nxi{\textstyle \mu =\operatorname {E} [x_{i}]={\frac {1}{N}}\sum _{i=1}^{N}x_{i}} and E[xi2]=(1N∑i=1Nxi2){\textstyle \operatorname {E} [x_{i}^{2}]=\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)}, where E{\textstyle \operatorname {E} } is the expectation value operator.
The population variance can also be computed using[13]
σ2=1N2∑i<j(xi−xj)2=12N2∑i,j=1N(xi−xj)2.{\displaystyle \sigma ^{2}={\frac {1}{N^{2}}}\sum _{i<j}\left(x_{i}-x_{j}\right)^{2}={\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}-x_{j}\right)^{2}.}
(The right side has duplicate terms in the sum while the middle side has only unique terms to sum.) This is true because12N2∑i,j=1N(xi−xj)2=12N2∑i,j=1N(xi2−2xixj+xj2)=12N∑j=1N(1N∑i=1Nxi2)−(1N∑i=1Nxi)(1N∑j=1Nxj)+12N∑i=1N(1N∑j=1Nxj2)=12(σ2+μ2)−μ2+12(σ2+μ2)=σ2.{\displaystyle {\begin{aligned}&{\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}{\left(x_{i}-x_{j}\right)}^{2}\\[5pt]={}&{\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}^{2}-2x_{i}x_{j}+x_{j}^{2}\right)\\[5pt]={}&{\frac {1}{2N}}\sum _{j=1}^{N}\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)\left({\frac {1}{N}}\sum _{j=1}^{N}x_{j}\right)+{\frac {1}{2N}}\sum _{i=1}^{N}\left({\frac {1}{N}}\sum _{j=1}^{N}x_{j}^{2}\right)\\[5pt]={}&{\frac {1}{2}}\left(\sigma ^{2}+\mu ^{2}\right)-\mu ^{2}+{\frac {1}{2}}\left(\sigma ^{2}+\mu ^{2}\right)\\[5pt]={}&\sigma ^{2}.\end{aligned}}}
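The identity can be verified numerically; the two functions below (illustrative names) compute the usual definition and the pairwise-difference form over unique pairs:

```python
def pop_var(xs):
    # Standard definition: mean of squared deviations about the mean
    n = len(xs)
    mu = sum(xs) / n
    return sum((x - mu) ** 2 for x in xs) / n

def pop_var_pairwise(xs):
    # (1/N^2) * sum over unordered pairs of squared differences
    n = len(xs)
    return sum((xs[i] - xs[j]) ** 2
               for i in range(n) for j in range(i + 1, n)) / n ** 2

xs = [3.0, 7.0, 7.0, 19.0]   # both forms give 36.0
```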
The population variance matches the variance of the generating probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations.
In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on a sample of the population.[14] This is generally referred to as sample variance or empirical variance. Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution.
We take a sample with replacement of n values Y1, ..., Yn from the population of size N, where n < N, and estimate the variance on the basis of this sample.[15] Directly taking the variance of the sample data gives the average of the squared deviations:[16]
S~Y2=1n∑i=1n(Yi−Y¯)2=(1n∑i=1nYi2)−Y¯2=1n2∑i,j:i<j(Yi−Yj)2.{\displaystyle {\tilde {S}}_{Y}^{2}={\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}=\left({\frac {1}{n}}\sum _{i=1}^{n}Y_{i}^{2}\right)-{\overline {Y}}^{2}={\frac {1}{n^{2}}}\sum _{i,j\,:\,i<j}\left(Y_{i}-Y_{j}\right)^{2}.}
(See the section Population variance for the derivation of this formula.) Here, Y¯{\displaystyle {\overline {Y}}} denotes the sample mean: Y¯=1n∑i=1nYi.{\displaystyle {\overline {Y}}={\frac {1}{n}}\sum _{i=1}^{n}Y_{i}.}
Since theYiare selected randomly, bothY¯{\displaystyle {\overline {Y}}}andS~Y2{\displaystyle {\tilde {S}}_{Y}^{2}}arerandom variables. Their expected values can be evaluated by averaging over the ensemble of all possible samples{Yi}of sizenfrom the population. ForS~Y2{\displaystyle {\tilde {S}}_{Y}^{2}}this gives:E[S~Y2]=E[1n∑i=1n(Yi−1n∑j=1nYj)2]=1n∑i=1nE[Yi2−2nYi∑j=1nYj+1n2∑j=1nYj∑k=1nYk]=1n∑i=1n(E[Yi2]−2n(∑j≠iE[YiYj]+E[Yi2])+1n2∑j=1n∑k≠jnE[YjYk]+1n2∑j=1nE[Yj2])=1n∑i=1n(n−2nE[Yi2]−2n∑j≠iE[YiYj]+1n2∑j=1n∑k≠jnE[YjYk]+1n2∑j=1nE[Yj2])=1n∑i=1n[n−2n(σ2+μ2)−2n(n−1)μ2+1n2n(n−1)μ2+1n(σ2+μ2)]=n−1nσ2.{\displaystyle {\begin{aligned}\operatorname {E} [{\tilde {S}}_{Y}^{2}]&=\operatorname {E} \left[{\frac {1}{n}}\sum _{i=1}^{n}{\left(Y_{i}-{\frac {1}{n}}\sum _{j=1}^{n}Y_{j}\right)}^{2}\right]\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\operatorname {E} \left[Y_{i}^{2}-{\frac {2}{n}}Y_{i}\sum _{j=1}^{n}Y_{j}+{\frac {1}{n^{2}}}\sum _{j=1}^{n}Y_{j}\sum _{k=1}^{n}Y_{k}\right]\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left(\operatorname {E} \left[Y_{i}^{2}\right]-{\frac {2}{n}}\left(\sum _{j\neq i}\operatorname {E} \left[Y_{i}Y_{j}\right]+\operatorname {E} \left[Y_{i}^{2}\right]\right)+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\sum _{k\neq j}^{n}\operatorname {E} \left[Y_{j}Y_{k}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\operatorname {E} \left[Y_{j}^{2}\right]\right)\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left({\frac {n-2}{n}}\operatorname {E} \left[Y_{i}^{2}\right]-{\frac {2}{n}}\sum _{j\neq i}\operatorname {E} \left[Y_{i}Y_{j}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\sum _{k\neq j}^{n}\operatorname {E} \left[Y_{j}Y_{k}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\operatorname {E} \left[Y_{j}^{2}\right]\right)\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left[{\frac {n-2}{n}}\left(\sigma ^{2}+\mu ^{2}\right)-{\frac {2}{n}}(n-1)\mu ^{2}+{\frac {1}{n^{2}}}n(n-1)\mu ^{2}+{\frac {1}{n}}\left(\sigma ^{2}+\mu ^{2}\right)\right]\\[5pt]&={\frac {n-1}{n}}\sigma ^{2}.\end{aligned}}}
Here σ2=E[Yi2]−μ2{\textstyle \sigma ^{2}=\operatorname {E} [Y_{i}^{2}]-\mu ^{2}} is the population variance derived above, and E[YiYj]=E[Yi]E[Yj]=μ2{\textstyle \operatorname {E} [Y_{i}Y_{j}]=\operatorname {E} [Y_{i}]\operatorname {E} [Y_{j}]=\mu ^{2}} holds by the independence of Yi{\textstyle Y_{i}} and Yj{\textstyle Y_{j}} for i ≠ j.
Hence S~Y2{\textstyle {\tilde {S}}_{Y}^{2}} gives an estimate of the population variance σ2{\textstyle \sigma ^{2}} that is biased by a factor of n−1n{\textstyle {\frac {n-1}{n}}}, because the expectation value of S~Y2{\textstyle {\tilde {S}}_{Y}^{2}} is smaller than the population variance (true variance) by that factor. For this reason, S~Y2{\textstyle {\tilde {S}}_{Y}^{2}} is referred to as the biased sample variance.
Correcting for this bias yields the unbiased sample variance, denoted S2{\displaystyle S^{2}}:
S2=nn−1S~Y2=nn−1[1n∑i=1n(Yi−Y¯)2]=1n−1∑i=1n(Yi−Y¯)2{\displaystyle S^{2}={\frac {n}{n-1}}{\tilde {S}}_{Y}^{2}={\frac {n}{n-1}}\left[{\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}\right]={\frac {1}{n-1}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}}
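A small Monte Carlo experiment (illustrative; the sample size and normal distribution are arbitrary choices) shows the (n − 1)/n bias of the uncorrected estimator and that Bessel's correction removes it:

```python
import random

random.seed(1)
n, trials = 5, 100_000          # samples of size 5 from N(0, 2^2), true variance 4
biased_avg = unbiased_avg = 0.0
for _ in range(trials):
    ys = [random.gauss(0, 2) for _ in range(n)]
    ybar = sum(ys) / n
    ss = sum((y - ybar) ** 2 for y in ys)
    biased_avg += ss / n          # uncorrected sample variance, divides by n
    unbiased_avg += ss / (n - 1)  # Bessel-corrected, divides by n - 1
biased_avg /= trials
unbiased_avg /= trials
# Expect biased_avg ≈ (n-1)/n * 4 = 3.2 and unbiased_avg ≈ 4.0
```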
Either estimator may be simply referred to as the sample variance when the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution.
The use of the term n − 1 is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance). The square root is a concave function and thus introduces negative bias (by Jensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is itself biased. The unbiased estimation of standard deviation is a technically involved problem, though for the normal distribution using the term n − 1.5 yields an almost unbiased estimator.
The unbiased sample variance is a U-statistic for the function f(y1, y2) = (y1 − y2)²/2, meaning that it is obtained by averaging a 2-sample statistic over 2-element subsets of the population.
For a set of numbers {10, 15, 30, 45, 57, 52, 63, 72, 81, 93, 102, 105}: if this set is the whole data population for some measurement, then its variance is the population variance, 932.743, the sum of the squared deviations about the mean of the set divided by 12, the number of members. If the set is a sample from the whole population, then the unbiased sample variance is 1017.538, the sum of the squared deviations about the sample mean divided by 11 instead of 12. The function VAR.S in Microsoft Excel gives the unbiased sample variance, while VAR.P gives the population variance.
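In Python, the standard-library `statistics` module plays the role of Excel's VAR.P and VAR.S for this data set:

```python
import statistics

data = [10, 15, 30, 45, 57, 52, 63, 72, 81, 93, 102, 105]
pvar = statistics.pvariance(data)  # divides by n = 12, like Excel's VAR.P
svar = statistics.variance(data)   # divides by n - 1 = 11, like VAR.S
# pvar ≈ 932.743, svar ≈ 1017.538
```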
Being a function of random variables, the sample variance is itself a random variable, and it is natural to study its distribution. In the case that Yi are independent observations from a normal distribution, Cochran's theorem shows that the unbiased sample variance S2 follows a scaled chi-squared distribution (see also: asymptotic properties and an elementary proof):[17] (n−1)S2σ2∼χn−12{\displaystyle (n-1){\frac {S^{2}}{\sigma ^{2}}}\sim \chi _{n-1}^{2}}
where σ2 is the population variance. As a direct consequence, it follows that E(S2)=E(σ2n−1χn−12)=σ2,{\displaystyle \operatorname {E} \left(S^{2}\right)=\operatorname {E} \left({\frac {\sigma ^{2}}{n-1}}\chi _{n-1}^{2}\right)=\sigma ^{2},}
and[18]
Var[S2]=Var(σ2n−1χn−12)=σ4(n−1)2Var(χn−12)=2σ4n−1.{\displaystyle \operatorname {Var} \left[S^{2}\right]=\operatorname {Var} \left({\frac {\sigma ^{2}}{n-1}}\chi _{n-1}^{2}\right)={\frac {\sigma ^{4}}{{\left(n-1\right)}^{2}}}\operatorname {Var} \left(\chi _{n-1}^{2}\right)={\frac {2\sigma ^{4}}{n-1}}.}
If Yi are independent and identically distributed, but not necessarily normally distributed, then[19]
E[S2]=σ2,Var[S2]=σ4n(κ−1+2n−1)=1n(μ4−n−3n−1σ4),{\displaystyle \operatorname {E} \left[S^{2}\right]=\sigma ^{2},\quad \operatorname {Var} \left[S^{2}\right]={\frac {\sigma ^{4}}{n}}\left(\kappa -1+{\frac {2}{n-1}}\right)={\frac {1}{n}}\left(\mu _{4}-{\frac {n-3}{n-1}}\sigma ^{4}\right),}
where κ is the kurtosis of the distribution and μ4 is the fourth central moment.
If the conditions of the law of large numbers hold for the squared observations, S2 is a consistent estimator of σ2. One can see indeed that the variance of the estimator tends asymptotically to zero. An asymptotically equivalent formula was given in Kenney and Keeping (1951:164), Rose and Smith (2002:264), and Weisstein (n.d.).[20][21][22]
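The two equivalent forms of Var[S²] given above can be checked against each other numerically; the moments used here (those of the uniform distribution on [0, 1]: σ² = 1/12, μ4 = 1/80) are an illustrative choice:

```python
def var_of_sample_variance(sigma2, mu4, n):
    # Form 1: (sigma^4 / n) * (kappa - 1 + 2/(n-1)), with kappa = mu4 / sigma^4
    kappa = mu4 / sigma2 ** 2
    return sigma2 ** 2 / n * (kappa - 1 + 2 / (n - 1))

def var_of_sample_variance_alt(sigma2, mu4, n):
    # Form 2: (1/n) * (mu4 - (n-3)/(n-1) * sigma^4)
    return (mu4 - (n - 3) / (n - 1) * sigma2 ** 2) / n

a = var_of_sample_variance(1 / 12, 1 / 80, 7)
b = var_of_sample_variance_alt(1 / 12, 1 / 80, 7)
# a and b agree: the two expressions are algebraically identical
```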
Samuelson's inequality is a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated.[23] Values must lie within the limits y¯±σY(n−1)1/2.{\displaystyle {\bar {y}}\pm \sigma _{Y}(n-1)^{1/2}.}
It has been shown[24] that for a sample {yi} of positive real numbers,
σy2≤2ymax(A−H),{\displaystyle \sigma _{y}^{2}\leq 2y_{\max }(A-H),}
where ymax is the maximum of the sample, A is the arithmetic mean, H is the harmonic mean of the sample, and σy2{\displaystyle \sigma _{y}^{2}} is the (biased) variance of the sample.
This bound has been improved, and it is known that the variance is bounded by
σy2≤ymax(A−H)(ymax−A)ymax−H,σy2≥ymin(A−H)(A−ymin)H−ymin,{\displaystyle {\begin{aligned}\sigma _{y}^{2}&\leq {\frac {y_{\max }(A-H)(y_{\max }-A)}{y_{\max }-H}},\\[1ex]\sigma _{y}^{2}&\geq {\frac {y_{\min }(A-H)(A-y_{\min })}{H-y_{\min }}},\end{aligned}}}
where ymin is the minimum of the sample.[25]
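A sketch checking Samuelson's limits and the sharpened max/min bounds on an arbitrary positive sample (the data are illustrative):

```python
import statistics

ys = [2.0, 3.0, 5.0, 7.0, 11.0]
n = len(ys)
A = statistics.mean(ys)              # arithmetic mean
H = statistics.harmonic_mean(ys)     # harmonic mean
var_b = statistics.pvariance(ys)     # biased (divide-by-n) variance
y_max, y_min = max(ys), min(ys)

# Sharpened upper and lower bounds on the biased variance
upper = y_max * (A - H) * (y_max - A) / (y_max - H)
lower = y_min * (A - H) * (A - y_min) / (H - y_min)

# Samuelson: every observation lies within ybar ± sigma * sqrt(n - 1)
sigma = var_b ** 0.5
radius = sigma * (n - 1) ** 0.5
```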
The F-test of equality of variances and the chi-squared tests are adequate when the sample is normally distributed. Non-normality makes testing for the equality of two or more variances more difficult.
Several non-parametric tests have been proposed: these include the Barton–David–Ansari–Freund–Siegel–Tukey test, the Capon test, the Mood test, the Klotz test and the Sukhatme test. The Sukhatme test applies to two variances and requires that both medians be known and equal to zero. The Mood, Klotz, Capon and Barton–David–Ansari–Freund–Siegel–Tukey tests also apply to two variances. They allow the median to be unknown but do require that the two medians are equal.
The Lehmann test is a parametric test of two variances. Several variants of this test are known. Other tests of the equality of variances include the Box test, the Box–Anderson test and the Moses test.
Resampling methods, which include the bootstrap and the jackknife, may be used to test the equality of variances.
The variance of a probability distribution is analogous to the moment of inertia in classical mechanics of a corresponding mass distribution along a line, with respect to rotation about its center of mass.[26] It is because of this analogy that such things as the variance are called moments of probability distributions.[26] The covariance matrix is related to the moment of inertia tensor for multivariate distributions. The moment of inertia of a cloud of n points with a covariance matrix of Σ{\displaystyle \Sigma } is given by[citation needed] I=n(13×3tr(Σ)−Σ).{\displaystyle I=n\left(\mathbf {1} _{3\times 3}\operatorname {tr} (\Sigma )-\Sigma \right).}
This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to the x axis and distributed along it. The covariance matrix might look like Σ=[100000.10000.1].{\displaystyle \Sigma ={\begin{bmatrix}10&0&0\\0&0.1&0\\0&0&0.1\end{bmatrix}}.}
That is, there is the most variance in the x direction. Physicists would consider this to have a low moment about the x axis, so the moment-of-inertia tensor is I=n[0.200010.100010.1].{\displaystyle I=n{\begin{bmatrix}0.2&0&0\\0&10.1&0\\0&0&10.1\end{bmatrix}}.}
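A sketch of the tensor formula I = n(1₃ₓ₃ tr(Σ) − Σ) applied to the covariance matrix above (plain lists, no libraries; n = 1 for simplicity):

```python
def inertia_from_cov(sigma, n):
    # I = n * (tr(Sigma) * Identity - Sigma) for a 3x3 covariance matrix
    tr = sum(sigma[i][i] for i in range(3))
    return [[n * ((tr if i == j else 0.0) - sigma[i][j]) for j in range(3)]
            for i in range(3)]

sigma = [[10.0, 0.0, 0.0],
         [0.0, 0.1, 0.0],
         [0.0, 0.0, 0.1]]
I = inertia_from_cov(sigma, 1)
# diagonal of I: 0.2, 10.1, 10.1 — a low moment about the long (x) axis
```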
The semivariance is calculated in the same manner as the variance, but only those observations that fall below the mean are included in the calculation: Semivariance=1n∑i:xi<μ(xi−μ)2{\displaystyle {\text{Semivariance}}={\frac {1}{n}}\sum _{i:x_{i}<\mu }{\left(x_{i}-\mu \right)}^{2}} It is also described as a specific measure in different fields of application. For skewed distributions, the semivariance can provide additional information that a variance does not.[27]
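A direct transcription of the semivariance formula (helper name and data are illustrative):

```python
def semivariance(xs):
    # Average squared deviation, summed only over observations below the mean
    n = len(xs)
    mu = sum(xs) / n
    return sum((x - mu) ** 2 for x in xs if x < mu) / n

data = [1, 2, 3, 4, 10]          # mean = 4; below-mean values: 1, 2, 3
# semivariance = (9 + 4 + 1) / 5 = 2.8
```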
For inequalities associated with the semivariance, see Chebyshev's inequality § Semivariances.
The term variance was first introduced by Ronald Fisher in his 1918 paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance:[28]
The great body of available statistics show us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and, therefore, that the variability may be uniformly measured by the standard deviation corresponding to the square root of the mean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations σ1{\displaystyle \sigma _{1}} and σ2{\displaystyle \sigma _{2}}, it is found that the distribution, when both causes act together, has a standard deviation σ12+σ22{\displaystyle {\sqrt {\sigma _{1}^{2}+\sigma _{2}^{2}}}}. It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance...
If x{\displaystyle x} is a scalar complex-valued random variable, with values in C,{\displaystyle \mathbb {C} ,} then its variance is E[(x−μ)(x−μ)∗],{\displaystyle \operatorname {E} \left[(x-\mu )(x-\mu )^{*}\right],} where x∗{\displaystyle x^{*}} is the complex conjugate of x.{\displaystyle x.} This variance is a real scalar.
If X{\displaystyle X} is a vector-valued random variable, with values in Rn,{\displaystyle \mathbb {R} ^{n},} and thought of as a column vector, then a natural generalization of variance is E[(X−μ)(X−μ)T],{\displaystyle \operatorname {E} \left[(X-\mu ){(X-\mu )}^{\mathsf {T}}\right],} where μ=E(X){\displaystyle \mu =\operatorname {E} (X)} and XT{\displaystyle X^{\mathsf {T}}} is the transpose of X, and so is a row vector. The result is a positive semi-definite square matrix, commonly referred to as the variance-covariance matrix (or simply as the covariance matrix).
If X{\displaystyle X} is a vector- and complex-valued random variable, with values in Cn,{\displaystyle \mathbb {C} ^{n},} then the covariance matrix is E[(X−μ)(X−μ)†],{\displaystyle \operatorname {E} \left[(X-\mu ){(X-\mu )}^{\dagger }\right],} where X†{\displaystyle X^{\dagger }} is the conjugate transpose of X.{\displaystyle X.}[citation needed] This matrix is also positive semi-definite and square.
Another generalization of variance for vector-valued random variables X{\displaystyle X}, which results in a scalar value rather than in a matrix, is the generalized variance det(C){\displaystyle \det(C)}, the determinant of the covariance matrix. The generalized variance can be shown to be related to the multidimensional scatter of points around their mean.[29]
A different generalization is obtained by considering the equation for the scalar variance, Var(X)=E[(X−μ)2]{\displaystyle \operatorname {Var} (X)=\operatorname {E} \left[(X-\mu )^{2}\right]}, and reinterpreting (X−μ)2{\displaystyle (X-\mu )^{2}} as the squared Euclidean distance between the random variable and its mean, or simply as the scalar product of the vector X−μ{\displaystyle X-\mu } with itself. This results in E[(X−μ)T(X−μ)]=tr(C),{\displaystyle \operatorname {E} \left[(X-\mu )^{\mathsf {T}}(X-\mu )\right]=\operatorname {tr} (C),} which is the trace of the covariance matrix.
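The matrix generalizations can be illustrated with a small pure-Python sketch (data and helper names are arbitrary): the covariance matrix of a 2-D sample, its determinant (the generalized variance), and the trace identity E[(X − μ)ᵀ(X − μ)] = tr(C):

```python
def cov_matrix(rows):
    # Population (divide-by-n) covariance matrix of a list of observation vectors
    n, d = len(rows), len(rows[0])
    mu = [sum(r[k] for r in rows) / n for k in range(d)]
    C = [[sum((r[i] - mu[i]) * (r[j] - mu[j]) for r in rows) / n
          for j in range(d)] for i in range(d)]
    return C, mu

rows = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0], [4.0, 5.0]]
C, mu = cov_matrix(rows)
gen_var = C[0][0] * C[1][1] - C[0][1] * C[1][0]   # det(C): generalized variance
total_var = C[0][0] + C[1][1]                      # tr(C)
# tr(C) equals the mean squared Euclidean distance from the mean vector:
msd = sum(sum((r[k] - mu[k]) ** 2 for k in range(2)) for r in rows) / len(rows)
```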
|
https://en.wikipedia.org/wiki/Variance#Properties
|
In computational group theory, a black box group (black-box group) is a group G whose elements are encoded by bit strings of length N, and group operations are performed by an oracle (the "black box"). These operations include taking the product of two elements, taking the inverse of an element, and deciding whether a given bit string encodes the identity element.
This class is defined to include both the permutation groups and the matrix groups. The upper bound on the order of G given by |G| ≤ 2^N shows that G is finite.
Black box groups were introduced by Babai and Szemerédi in 1984.[1] They were used as a formalism for (constructive) group recognition and property testing. Notable algorithms include Babai's algorithm for finding random group elements,[2] the Product Replacement Algorithm,[3] and testing group commutativity.[4]
Many early algorithms in CGT, such as the Schreier–Sims algorithm, require a permutation representation of a group and thus are not black box. Many other algorithms require finding element orders. Since there are efficient ways of finding the order of an element in a permutation group or in a matrix group (a method for the latter is described by Celler and Leedham-Green in 1997), a common recourse is to assume that the black box group is equipped with a further oracle for determining element orders.[5]
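A toy Python sketch of the black-box model (not from the sources above): elements of the cyclic group Z/n are hidden behind fixed-length byte strings, client code may only call the oracle methods, and element orders are found here by naive iteration rather than by the efficient methods just mentioned:

```python
class BlackBoxZn:
    """Toy black-box group: Z/n with elements encoded as 4-byte strings.
    Client code may only call the oracle methods below."""
    def __init__(self, n):
        self._n = n
    def identity(self):
        return (0).to_bytes(4, "big")
    def multiply(self, a, b):
        s = (int.from_bytes(a, "big") + int.from_bytes(b, "big")) % self._n
        return s.to_bytes(4, "big")
    def inverse(self, a):
        return ((-int.from_bytes(a, "big")) % self._n).to_bytes(4, "big")

def element_order(group, g):
    # Brute-force order computation using only oracle calls
    e, x, k = group.identity(), g, 1
    while x != e:
        x = group.multiply(x, g)
        k += 1
    return k

G = BlackBoxZn(12)
# In Z/12 the element 5 generates the whole group; 4 has order 3
```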
|
https://en.wikipedia.org/wiki/Black_box_group
|
In computability theory, a Turing reduction from a decision problem A{\displaystyle A} to a decision problem B{\displaystyle B} is an oracle machine that decides problem A{\displaystyle A} given an oracle for B{\displaystyle B} (Rogers 1967, Soare 1987) in finitely many steps. It can be understood as an algorithm that could be used to solve A{\displaystyle A} if it had access to a subroutine for solving B{\displaystyle B}. The concept can be analogously applied to function problems.
If a Turing reduction from A{\displaystyle A} to B{\displaystyle B} exists, then every algorithm for B{\displaystyle B}[a] can be used to produce an algorithm for A{\displaystyle A}, by inserting the algorithm for B{\displaystyle B} at each place where the oracle machine computing A{\displaystyle A} queries the oracle for B{\displaystyle B}. However, because the oracle machine may query the oracle a large number of times, the resulting algorithm may require more time asymptotically than either the algorithm for B{\displaystyle B} or the oracle machine computing A{\displaystyle A}. A Turing reduction in which the oracle machine runs in polynomial time is known as a Cook reduction.
The first formal definition of relative computability, then called relative reducibility, was given by Alan Turing in 1939 in terms of oracle machines. Later, in 1943 and 1952, Stephen Kleene defined an equivalent concept in terms of recursive functions. In 1944 Emil Post used the term "Turing reducibility" to refer to the concept.
Given two sets A,B⊆N{\displaystyle A,B\subseteq \mathbb {N} } of natural numbers, we say A{\displaystyle A} is Turing reducible to B{\displaystyle B} and write
if and only if there is an oracle machine that computes the characteristic function of A when run with oracle B. In this case, we also say A is B-recursive and B-computable.
If there is an oracle machine that, when run with oracle B, computes a partial function with domain A, then A is said to be B-recursively enumerable and B-computably enumerable.
We say A{\displaystyle A} is Turing equivalent to B{\displaystyle B} and write A≡TB{\displaystyle A\equiv _{T}B\,} if both A≤TB{\displaystyle A\leq _{T}B} and B≤TA.{\displaystyle B\leq _{T}A.} The equivalence classes of Turing equivalent sets are called Turing degrees. The Turing degree of a set X{\displaystyle X} is written deg(X){\displaystyle {\textbf {deg}}(X)}.
Given a set X⊆P(N){\displaystyle {\mathcal {X}}\subseteq {\mathcal {P}}(\mathbb {N} )}, a set A⊆N{\displaystyle A\subseteq \mathbb {N} } is called Turing hard for X{\displaystyle {\mathcal {X}}} if X≤TA{\displaystyle X\leq _{T}A} for all X∈X{\displaystyle X\in {\mathcal {X}}}. If additionally A∈X{\displaystyle A\in {\mathcal {X}}} then A{\displaystyle A} is called Turing complete for X{\displaystyle {\mathcal {X}}}.
Turing completeness, as just defined above, corresponds only partially to Turing completeness in the sense of computational universality. Specifically, a Turing machine is a universal Turing machine if its halting problem (i.e., the set of inputs for which it eventually halts) is many-one complete for the set X{\displaystyle {\mathcal {X}}} of recursively enumerable sets. Thus, a necessary but insufficient condition for a machine to be computationally universal is that the machine's halting problem be Turing-complete for X{\displaystyle {\mathcal {X}}}. The condition is insufficient because it may still be the case that the language accepted by the machine is not itself recursively enumerable.
Let We{\displaystyle W_{e}} denote the set of input values for which the Turing machine with index e halts. Then the sets A={e∣e∈We}{\displaystyle A=\{e\mid e\in W_{e}\}} and B={(e,n)∣n∈We}{\displaystyle B=\{(e,n)\mid n\in W_{e}\}} are Turing equivalent (here (−,−){\displaystyle (-,-)} denotes an effective pairing function). A reduction showing A≤TB{\displaystyle A\leq _{T}B} can be constructed using the fact that e∈A⇔(e,e)∈B{\displaystyle e\in A\Leftrightarrow (e,e)\in B}. Given a pair (e,n){\displaystyle (e,n)}, a new index i(e,n){\displaystyle i(e,n)} can be constructed using the smn theorem such that the program coded by i(e,n){\displaystyle i(e,n)} ignores its input and merely simulates the computation of the machine with index e on input n. In particular, the machine with index i(e,n){\displaystyle i(e,n)} either halts on every input or halts on no input. Thus i(e,n)∈A⇔(e,n)∈B{\displaystyle i(e,n)\in A\Leftrightarrow (e,n)\in B} holds for all e and n. Because the function i is computable, this shows B≤TA{\displaystyle B\leq _{T}A}. The reductions presented here are not only Turing reductions but many-one reductions, discussed below.
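The A ≤T B direction of this example can be sketched with a finite stand-in for the (uncomputable) halting sets; the table W below is made up purely for illustration:

```python
# Toy "halting table": W[e] is the set of inputs on which machine e halts
W = {0: {0, 1}, 1: {2}, 2: {2, 5}}

def oracle_B(pair):
    # Stands in for the oracle for B = {(e, n) : n in W_e}
    e, n = pair
    return n in W[e]

def decide_A(e):
    # One oracle query suffices: e in A  <=>  (e, e) in B
    return oracle_B((e, e))

# For the table above, A = {e : e in W_e} = {0, 2}
```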
Since every reduction from a set A{\displaystyle A} to a set B{\displaystyle B} has to determine whether a single element is in A{\displaystyle A} in only finitely many steps, it can only make finitely many queries of membership in the set B{\displaystyle B}. When the amount of information about the set B{\displaystyle B} used to compute a single bit of A{\displaystyle A} is discussed, this is made precise by the use function. Formally, the use of a reduction is the function that sends each natural number n{\displaystyle n} to the largest natural number m{\displaystyle m} whose membership in the set B{\displaystyle B} was queried by the reduction while determining the membership of n{\displaystyle n} in A{\displaystyle A}.
There are two common ways of producing reductions stronger than Turing reducibility. The first way is to limit the number and manner of oracle queries.
The second way to produce a stronger reducibility notion is to limit the computational resources that the program implementing the Turing reduction may use. These limits on the computational complexity of the reduction are important when studying subrecursive classes such as P. A set A is polynomial-time reducible to a set B{\displaystyle B} if there is a Turing reduction of A{\displaystyle A} to B{\displaystyle B} that runs in polynomial time. The concept of log-space reduction is similar.
These reductions are stronger in the sense that they provide a finer distinction into equivalence classes, and satisfy more restrictive requirements than Turing reductions. Consequently, such reductions are harder to find. There may be no way to build a many-one reduction from one set to another even when a Turing reduction for the same sets exists.
According to the Church–Turing thesis, a Turing reduction is the most general form of an effectively calculable reduction. Nevertheless, weaker reductions are also considered. Set A{\displaystyle A} is said to be arithmetical in B{\displaystyle B} if A{\displaystyle A} is definable by a formula of Peano arithmetic with B{\displaystyle B} as a parameter. The set A{\displaystyle A} is hyperarithmetical in B{\displaystyle B} if there is a recursive ordinal α{\displaystyle \alpha } such that A{\displaystyle A} is computable from B(α){\displaystyle B^{(\alpha )}}, the α-iterated Turing jump of B{\displaystyle B}. The notion of relative constructibility is an important reducibility notion in set theory.
|
https://en.wikipedia.org/wiki/Turing_reduction
|
In mathematics and computer science, a matroid oracle is a subroutine through which an algorithm may access a matroid, an abstract combinatorial structure that can be used to describe the linear dependencies between vectors in a vector space or the spanning trees of a graph, among other applications.
The most commonly used oracle of this type is an independence oracle, a subroutine for testing whether a set of matroid elements is independent. Several other types of oracle have also been used; some of them have been shown to be weaker than independence oracles, some stronger, and some equivalent in computational power.[1]
Many algorithms that perform computations on matroids have been designed to take an oracle as input, allowing them to run efficiently without change on many different kinds of matroids, and without additional assumptions about what kind of matroid they are using. For instance, given an independence oracle for any matroid, it is possible to find the minimum weight basis of the matroid by applying a greedy algorithm that adds elements to the basis in sorted order by weight, using the independence oracle to test whether each element can be added.[2]
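A sketch of this greedy algorithm against an independence oracle, here for the uniform matroid U(2, 4) (names and weights are illustrative):

```python
def min_weight_basis(elements, weight, independent):
    # Greedy: scan elements in increasing weight order; add an element
    # whenever the independence oracle approves the enlarged set
    basis = []
    for e in sorted(elements, key=weight):
        if independent(basis + [e]):
            basis.append(e)
    return basis

# Uniform matroid U(2, 4): a set is independent iff it has at most 2 elements
independent = lambda s: len(s) <= 2
weights = {0: 5.0, 1: 1.0, 2: 3.0, 3: 2.0}
weight = lambda e: weights[e]
# The two cheapest elements form the minimum-weight basis: [1, 3]
```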
In computational complexity theory, the oracle model has led to unconditional lower bounds proving that certain matroid problems cannot be solved in polynomial time, without invoking unproved assumptions such as the assumption that P ≠ NP. Problems that have been shown to be hard in this way include testing whether a matroid is binary or uniform, or testing whether it contains certain fixed minors.[3]
Although some authors have experimented with computer representations of matroids that explicitly list all independent sets or all basis sets of the matroid,[4] these representations are not succinct: a matroid with n{\displaystyle n} elements may expand into a representation that takes space exponential in n{\displaystyle n}. Indeed, the number of distinct matroids on n{\displaystyle n} elements grows doubly exponentially in n{\displaystyle n},
from which it follows that any explicit representation capable of handling all possible matroids would necessarily use exponential space.[6]
Instead, different types of matroids may be represented more efficiently from the other structures from which they are defined: uniform matroids from their two numeric parameters; graphic matroids, bicircular matroids, and gammoids from graphs; linear matroids from matrices; etc. However, an algorithm for performing computations on arbitrary matroids needs a uniform method of accessing its argument, rather than having to be redesigned for each of these matroid classes. The oracle model provides a convenient way of codifying and classifying the kinds of access that an algorithm might need.
Starting with Rado (1942), "independence functions" or "I{\displaystyle I}-functions" have been studied as one of many equivalent ways of axiomatizing matroids. An independence function maps a set of matroid elements to the number 1{\displaystyle 1} if the set is independent or 0{\displaystyle 0} if it is dependent; that is, it is the indicator function of the family of independent sets, essentially the same thing as an independence oracle.[7]
Matroid oracles have also been part of the earliest algorithmic work on matroids. Thus, Edmonds (1965), in studying matroid partition problems, assumed that the access to the given matroid was through a subroutine that takes as input an independent set I{\displaystyle I} and an element x{\displaystyle x}, and either returns a circuit in I∪{x}{\displaystyle I\cup \{x\}} (necessarily unique and containing x{\displaystyle x}, if it exists) or determines that no such circuit exists. Edmonds (1971) used a subroutine that tests whether a given set is independent (that is, in more modern terminology, an independence oracle), and observed that the information it provides is sufficient to find the minimum weight basis in polynomial time.
Beginning with the work of Korte & Hausmann (1978) and Hausmann & Korte (1978), researchers began studying oracles from the point of view of proving lower bounds on algorithms for matroids and related structures. These two papers by Hausmann and Korte both concerned the problem of finding a maximum cardinality independent set, which is easy for matroids but (as they showed) harder to approximate or compute exactly for more general independence systems represented by an independence oracle. This work kicked off a flurry of papers in the late 1970s and early 1980s showing similar hardness results for problems on matroids[8] and comparing the power of different kinds of matroid oracles.[9]
Since that time, the independence oracle has become standard for most research on matroid algorithms.[10] There has also been continued research on lower bounds,[11] and comparisons of different types of oracle.[12]
The following types of matroid oracles have been considered.
Although there are many known types of oracles, the choice of which to use can be simplified, because many of them are equivalent in computational power. An oracle X{\displaystyle X} is said to be polynomially reducible to another oracle Y{\displaystyle Y} if any call to X{\displaystyle X} may be simulated by an algorithm that accesses the matroid using only oracle Y{\displaystyle Y} and takes polynomial time as measured in terms of the number of elements of the matroid; in complexity-theoretic terms, this is a Turing reduction. Two oracles are said to be polynomially equivalent if they are polynomially reducible to each other. If X{\displaystyle X} and Y{\displaystyle Y} are polynomially equivalent, then every result that proves the existence or nonexistence of a polynomial time algorithm for a matroid problem using oracle X{\displaystyle X} also proves the same thing for oracle Y{\displaystyle Y}.
For instance, the independence oracle is polynomially equivalent to the circuit-finding oracle of Edmonds (1965). If a circuit-finding oracle is available, a set may be tested for independence using at most n{\displaystyle n} calls to the oracle by starting from an empty set, adding elements of the given set one element at a time, and using the circuit-finding oracle to test whether each addition preserves the independence of the set that has been constructed so far. In the other direction, if an independence oracle is available, the circuit in a set I∪{x}{\displaystyle I\cup \{x\}} may be found using at most n{\displaystyle n} calls to the oracle by testing, for each element y∈I{\displaystyle y\in I}, whether I∖{y}∪{x}{\displaystyle I\setminus \{y\}\cup \{x\}} is independent: the circuit consists of x{\displaystyle x} together with the elements y{\displaystyle y} for which the answer is yes, because the circuit is unique and removing one of its elements is exactly what restores independence. The independence oracle is also polynomially equivalent to the rank oracle, the spanning oracle, the first two types of closure oracle, and the port oracle.[1]
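The two reductions just described can be sketched directly. The oracle interfaces and the small uniform-matroid example below are illustrative assumptions, not part of the original algorithms.

```python
def is_independent_via_circuits(S, find_circuit):
    """Simulate an independence oracle with a circuit-finding oracle:
    add elements one at a time and fail as soon as a circuit appears."""
    built = set()
    for x in S:
        if find_circuit(set(built), x) is not None:
            return False          # adding x created a circuit, so S is dependent
        built.add(x)
    return True

def circuit_via_independence(I, x, is_independent):
    """Simulate the circuit-finding oracle with an independence oracle:
    y lies on the circuit of I | {x} exactly when deleting y restores
    independence (the circuit is unique, so removing any of its
    elements destroys it)."""
    if is_independent(I | {x}):
        return None               # no circuit exists
    return {x} | {y for y in I if is_independent((I - {y}) | {x})}

# Toy example: the uniform matroid U(2, 4) on {0, 1, 2, 3} -- a set is
# independent iff it has at most two elements, so every 3-set is a circuit.
u = lambda S: len(S) <= 2
find_circuit = lambda I, x: (set(I) | {x}) if len(set(I) | {x}) > 2 else None

assert is_independent_via_circuits({0, 1}, find_circuit)
assert not is_independent_via_circuits({0, 1, 2}, find_circuit)
assert circuit_via_independence({0, 1}, 2, u) == {0, 1, 2}
```

Each direction makes at most n oracle calls, which is what makes the two oracles polynomially equivalent.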
The basis oracle, the circuit oracle, and the oracle that tests whether a given set is closed are all weaker than the independence oracle: they can be simulated in polynomial time by an algorithm that accesses the matroid using an independence oracle, but not vice versa. Additionally, none of these three oracles can simulate each other within polynomial time. The girth oracle is stronger than the independence oracle, in the same sense.[9]
As well as polynomial time Turing reductions, other types of reducibility have been considered as well. In particular, Karp, Upfal & Wigderson (1988) showed that, in parallel algorithms, the rank and independence oracles are significantly different in computational power. The rank oracle allows the construction of a minimum weight basis by n{\displaystyle n} simultaneous queries of the prefixes of the sorted order of the matroid elements: an element belongs to the optimal basis if and only if the rank of its prefix differs from the rank of the previous prefix. In contrast, finding a minimum basis with an independence oracle is much slower: it can be solved deterministically in O(√n){\displaystyle O({\sqrt {n}})} time steps, and there is a lower bound of Ω((n/log n)1/3){\displaystyle \Omega ((n/\log n)^{1/3})} even for randomized parallel algorithms.
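The prefix-rank characterization can be sketched as follows. Here the parallel algorithm's simultaneous queries are issued sequentially, which suffices to show the rule; the rank oracle for a small uniform matroid is an assumed example.

```python
def min_weight_basis(elements, weight, rank):
    """An element belongs to the minimum-weight basis iff the rank of the
    sorted prefix ending at it exceeds the rank of the preceding prefix.
    A parallel algorithm would issue all len(elements) rank queries at once."""
    order = sorted(elements, key=weight)
    basis, prev = [], 0
    for i in range(1, len(order) + 1):
        r = rank(order[:i])
        if r > prev:              # rank went up: this element is in the basis
            basis.append(order[i - 1])
        prev = r
    return basis

# Example: uniform matroid U(2, 4) -- the rank of a set is min(|set|, 2),
# so the minimum-weight basis is simply the two lightest elements.
weights = {"a": 3, "b": 1, "c": 2, "d": 5}
rank = lambda S: min(len(S), 2)
assert min_weight_basis(weights, weights.get, rank) == ["b", "c"]
```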
Many problems on matroids are known to be solvable inpolynomial time, by algorithms that access the matroid only through an independence oracle or another oracle of equivalent power, without need of any additional assumptions about what kind of matroid has been given to them. These polynomially-solvable problems include:
For many matroid problems, it is possible to show that an independence oracle does not provide enough power to allow the problem to be solved in polynomial time. The main idea of these proofs is to find two matroidsM{\displaystyle M}andM′{\displaystyle M'}on which the answer to the problem differs and which are difficult for an algorithm to tell apart. In particular, ifM{\displaystyle M}has a high degree of symmetry, and differs fromM′{\displaystyle M'}only in the answers to a small number of queries, then it may take a very large number of queries for an algorithm to be sure of distinguishing an input of typeM{\displaystyle M}from an input formed by using one of the symmetries ofM{\displaystyle M}to permuteM′{\displaystyle M'}.[3]
A simple example of this approach can be used to show that it is difficult to test whether a matroid is uniform. For simplicity of exposition, let n{\displaystyle n} be even, let M{\displaystyle M} be the uniform matroid Unn/2{\displaystyle U{}_{n}^{n/2}}, and let M′{\displaystyle M'} be a matroid formed from M{\displaystyle M} by making a single one of the n/2{\displaystyle n/2}-element basis sets of M{\displaystyle M} dependent instead of independent. In order for an algorithm to correctly test whether its input is uniform, it must be able to distinguish M{\displaystyle M} from every possible permutation of M′{\displaystyle M'}. But in order for a deterministic algorithm to do so, it must test every one of the n/2{\displaystyle n/2}-element subsets of the elements: if it missed one set, it could be fooled by an oracle that chose that same set as the one to make dependent. Therefore, testing for whether a matroid is uniform may require (nn/2){\displaystyle {\tbinom {n}{n/2}}} independence queries, much higher than polynomial. Even a randomized algorithm must make nearly as many queries in order to be confident of distinguishing these two matroids.[23]
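To see how quickly this count of n/2-element subsets outgrows any polynomial, one can compute it directly; this is purely a numerical illustration of the bound.

```python
import math

def uniformity_queries(n):
    """Number of (n/2)-element subsets a deterministic algorithm must test."""
    return math.comb(n, n // 2)

# The central binomial coefficient overtakes any fixed polynomial as n grows:
for n in (10, 20, 40):
    print(n, uniformity_queries(n), n ** 3)
```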
Jensen & Korte (1982)formalize this approach by proving that, whenever there exist two matroidsM{\displaystyle M}andM′{\displaystyle M'}on the same set of elements but with differing problem answers, an algorithm that correctly solves the given problem on those elements must use at least
queries, where aut(M){\displaystyle \operatorname {aut} (M)} denotes the automorphism group of M{\displaystyle M}, Qi{\displaystyle Q_{i}} denotes the family of sets whose independence differs from M{\displaystyle M} to M′{\displaystyle M'}, and fix(M,Qi){\displaystyle \operatorname {fix} (M,Q_{i})} denotes the subgroup of automorphisms that maps Qi{\displaystyle Q_{i}} to itself. For instance, the automorphism group of the uniform matroid is just the symmetric group, with size n!{\displaystyle n!}, and in the problem of testing uniform matroids there was only one set Qi{\displaystyle Q_{i}} with |fix(M,Qi)|=(n/2)!2{\displaystyle |\operatorname {fix} (M,Q_{i})|=(n/2)!^{2}}, smaller by an exponential factor than n!{\displaystyle n!}.[24]
Problems that have been proven to be impossible for a matroid oracle algorithm to compute in polynomial time include:
Among the set of all properties ofn{\displaystyle n}-element matroids, the fraction of the properties that do not require exponential time to test goes to zero, in the limit, asn{\displaystyle n}goes to infinity.[6]
|
https://en.wikipedia.org/wiki/Matroid_oracle
|
In algorithmic game theory, a branch of both computer science and economics, a demand oracle is a function that, given a price-vector, returns the demand of an agent. It is used by many algorithms related to pricing and optimization in online markets. It is usually contrasted with a value oracle, which is a function that, given a set of items, returns the value assigned to them by an agent.
The demand of an agent is the bundle of items that the agent most prefers, given some fixed prices of the items. As an example, consider a market with three objects and one agent, where one object has value 2 and price 5, a Banana has value 4 and price 3, and a Cherry has value 6 and price 1.
Suppose the agent's utility function isadditive(= the value of a bundle is the sum of values of the items in the bundle), andquasilinear(= the utility of a bundle is the value of the bundle minus its price). Then, the demand of the agent, given the prices, is the set {Banana, Cherry}, which gives a utility of (4+6)-(3+1) = 6. Every other set gives the agent a smaller utility. For example, the empty set gives utility 0, while the set of all items gives utility (2+4+6)-(5+3+1)=3.
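For additive quasilinear utilities this demand computation is a one-liner: take exactly the items whose value exceeds their price. The code below reproduces the worked numbers; "Apple" is merely a placeholder name for the third object, which the source does not name.

```python
def additive_demand(values, prices):
    """Demand under additive quasilinear utility: the agent takes every
    item contributing positive surplus (value - price > 0), since each
    item's contribution to utility is independent of the others."""
    return {item for item in values if values[item] - prices[item] > 0}

# Numbers from the example above; "Apple" is a placeholder item name.
values = {"Apple": 2, "Banana": 4, "Cherry": 6}
prices = {"Apple": 5, "Banana": 3, "Cherry": 1}

bundle = additive_demand(values, prices)
utility = sum(values[i] - prices[i] for i in bundle)
assert bundle == {"Banana", "Cherry"} and utility == 6
```

For combinatorial (non-additive) valuations no such shortcut exists, which is exactly why the demand oracle is treated as a black box.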
With additive valuations, the demand function is easy to compute - there is no need for an "oracle". However, in general, agents may have combinatorial valuations. This means that, for each combination of items, they may have a different value, which is not necessarily a sum of their values for the individual items. Describing such a function on m{\displaystyle m} items might require up to 2m{\displaystyle 2^{m}} numbers - a number for each subset. This may be infeasible when m{\displaystyle m} is large. Therefore, many algorithms for markets use two kinds of oracles:
Some examples of algorithms using demand oracles are:
|
https://en.wikipedia.org/wiki/Demand_oracle
|
In cryptography, a padding oracle attack is an attack which uses the padding validation of a cryptographic message to decrypt the ciphertext. In cryptography, variable-length plaintext messages often have to be padded (expanded) to be compatible with the underlying cryptographic primitive. The attack relies on having a "padding oracle" who freely responds to queries about whether a message is correctly padded or not. The information could be directly given, or leaked through a side-channel.
The earliest well-known attack that uses a padding oracle is Bleichenbacher's attack of 1998, which attacks RSA with PKCS #1 v1.5 padding.[1] The term "padding oracle" appeared in literature in 2002,[2] after Serge Vaudenay's attack on the CBC mode decryption used within symmetric block ciphers.[3] Variants of both attacks continue to find success more than one decade after their original publication.[1][4][5]
In 1998,Daniel Bleichenbacherpublished a seminal paper on what became known asBleichenbacher's attack(also known as "million message attack"). The attack uses a padding oracle againstRSAwithPKCS #1 v1.5padding, but it does not include the term. Later authors have classified his attack as a padding oracle attack.[1]
Manger (2001) reports an attack on the replacement for PKCS #1 v1.5 padding, PKCS #1 v2.0 "OAEP".[6]
In symmetric cryptography, the padding oracle attack can be applied to the CBC mode of operation. Leaked data on padding validity can allow attackers to decrypt (and sometimes encrypt) messages through the oracle using the oracle's key, without knowing the encryption key.
Compared to Bleichenbacher's attack on RSA with PKCS #1 v1.5, Vaudenay's attack on CBC is much more efficient.[1]Both attacks target crypto systems commonly used for the time: CBC is the original mode used inSecure Sockets Layer(SSL) and had continued to be supported in TLS.[4]
A number of mitigations have been performed to prevent the decryption software from acting as an oracle, but newerattacks based on timinghave repeatedly revived this oracle. TLS 1.2 introduces a number ofauthenticated encryption with additional datamodes that do not rely on CBC.[4]
The standard implementation of CBC decryption in block ciphers is to decrypt all ciphertext blocks, validate the padding, remove thePKCS7 padding, and return the message's plaintext. If the server returns an "invalid padding" error instead of a generic "decryption failed" error, the attacker can use the server as a padding oracle to decrypt (and sometimes encrypt) messages.
The mathematical formula for CBC decryption is Pi=DK(Ci)⊕Ci−1{\displaystyle P_{i}=D_{K}(C_{i})\oplus C_{i-1}}, with C0=IV{\displaystyle C_{0}=IV}.
As this formula shows, CBC decryption XORs each decrypted ciphertext block with the previous ciphertext block to recover the plaintext.
As a result, a single-byte modification in blockC1{\displaystyle C_{1}}will make a corresponding change to a single byte inP2{\displaystyle P_{2}}.
Suppose the attacker has two ciphertext blocksC1,C2{\displaystyle C_{1},C_{2}}and wants to decrypt the second block to get plaintextP2{\displaystyle P_{2}}.
The attacker changes the last byte ofC1{\displaystyle C_{1}}(creatingC1′{\displaystyle C_{1}'}) and sends(IV,C1′,C2){\displaystyle (IV,C_{1}',C_{2})}to the server.
The server then returns whether or not the padding of the last decrypted block (P2′{\displaystyle P_{2}'}) is correct (a valid PKCS#7 padding).
If the padding is correct, the attacker now knows that the last byte ofDK(C2)⊕C1′{\displaystyle D_{K}(C_{2})\oplus C_{1}'}is0x01{\displaystyle \mathrm {0x01} }, the last two bytes are 0x02, the last three bytes are 0x03, …, or the last eight bytes are 0x08. The attacker can modify the second-last byte (flip any bit) to ensure that the last byte is 0x01. (Alternatively, the attacker can flip earlier bytes andbinary searchfor the position to identify the padding. For example, if modifying the third-last byte is correct, but modifying the second-last byte is incorrect, then the last two bytes are known to be 0x02, allowing both of them to be decrypted.) Therefore, the last byte ofDK(C2){\displaystyle D_{K}(C_{2})}equalsC1′⊕0x01{\displaystyle C_{1}'\oplus \mathrm {0x01} }.
If the padding is incorrect, the attacker can change the last byte ofC1′{\displaystyle C_{1}'}to the next possible value.
At most, the attacker will need to make 256 attempts to find the last byte of P2{\displaystyle P_{2}}: 255 attempts for every possible byte (256 possible, minus one by the pigeonhole principle), plus one additional attempt to eliminate an ambiguous padding.[7]
After determining the last byte ofP2{\displaystyle P_{2}}, the attacker can use the same technique to obtain the second-to-last byte ofP2{\displaystyle P_{2}}.
The attacker sets the last byte ofP2{\displaystyle P_{2}}to0x02{\displaystyle \mathrm {0x02} }by setting the last byte ofC1{\displaystyle C_{1}}toDK(C2)⊕0x02{\displaystyle D_{K}(C_{2})\oplus \mathrm {0x02} }.
The attacker then uses the same approach described above, this time modifying the second-to-last byte until the padding is correct (0x02, 0x02).
If a block consists of 128 bits (AES, for example), which is 16 bytes, the attacker will obtain plaintextP2{\displaystyle P_{2}}in no more than 256⋅16 = 4096 attempts. This is significantly faster than the2128{\displaystyle 2^{128}}attempts required to bruteforce a 128-bit key.
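The byte-by-byte recovery described above can be demonstrated end to end. The sketch below uses a deliberately toy "block cipher" (byte-wise XOR with a secret key) purely so the example runs self-contained; the attack code itself consults only the padding oracle, exactly as it would against a real cipher.

```python
import os

BLOCK = 8
KEY = os.urandom(BLOCK)

def decrypt_block(c):                      # toy D_K -- stand-in for a real cipher
    return bytes(a ^ b for a, b in zip(c, KEY))

def encrypt_block(p):                      # toy E_K (XOR is its own inverse)
    return decrypt_block(p)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def padding_oracle(c1, c2):
    """Server-side check: is the PKCS#7 padding of D_K(c2) xor c1 valid?"""
    p2 = xor(decrypt_block(c2), c1)
    n = p2[-1]
    return 1 <= n <= BLOCK and p2.endswith(bytes([n]) * n)

def recover_block(c1, c2):
    """Recover P2 = D_K(C2) xor C1 using only the padding oracle."""
    inter = bytearray(BLOCK)               # intermediate D_K(C2), learned byte by byte
    for pad in range(1, BLOCK + 1):        # force padding 0x01, then 0x02, ...
        pos = BLOCK - pad
        # craft the tail of C1' so all already-known bytes decrypt to `pad`
        tail = bytes(inter[i] ^ pad for i in range(pos + 1, BLOCK))
        for guess in range(256):
            forged = bytes(pos) + bytes([guess]) + tail
            if padding_oracle(forged, c2):
                if pad == 1:
                    # guard against an ambiguous 0x02,0x02 (etc.) false positive:
                    # flip the second-to-last byte and re-check the oracle
                    altered = bytearray(forged)
                    altered[-2] ^= 1
                    if not padding_oracle(bytes(altered), c2):
                        continue
                inter[pos] = guess ^ pad   # last byte of D_K(C2) prefix revealed
                break
    return xor(inter, c1)

plaintext = b"ATTACK\x02\x02"              # an already PKCS#7-padded block
c1 = os.urandom(BLOCK)                     # previous ciphertext block (or IV)
c2 = encrypt_block(xor(plaintext, c1))     # CBC encryption of the block
assert recover_block(c1, c2) == plaintext
```

Note the per-block cost: at most 256 oracle queries per byte, matching the 256⋅16 = 4096 figure for a 16-byte block.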
CBC-R[8]turns a decryption oracle into an encryption oracle, and is primarily demonstrated against padding oracles.
Using padding oracle attack CBC-R can craft an initialization vector and ciphertext block for any plaintext:
To generate a ciphertext that is N blocks long, the attacker must perform N padding oracle attacks. These attacks are chained together so that the proper plaintext is constructed in reverse order, from the end of the message (CN) to the beginning of the message (C0, IV). In each step, the padding oracle attack is used to construct the IV for the previously chosen ciphertext block.
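The backward chaining can be sketched as follows. Here `dec` stands in for the block decryption D_K that the padding oracle attack recovers; it is implemented with a toy XOR cipher only so the sketch is checkable, and the interface is an assumption.

```python
import os

BLOCK = 8
KEY = os.urandom(BLOCK)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def dec(c):                       # stand-in for D_K as recovered via the oracle
    return xor(c, KEY)

def cbc_r_encrypt(plain_blocks):
    """CBC-R: pick the last block C_N at random, then work backwards with
    C_{i-1} = D_K(C_i) xor P_i, ending at the forged IV."""
    c = os.urandom(BLOCK)         # arbitrary final ciphertext block C_N
    blocks = [c]
    for p in reversed(plain_blocks):
        c = xor(dec(c), p)        # previous ciphertext block (eventually the IV)
        blocks.insert(0, c)
    return blocks[0], blocks[1:]  # (IV, ciphertext blocks)

def cbc_decrypt(iv, cipher):      # ordinary CBC decryption, for verification
    out, prev = [], iv
    for c in cipher:
        out.append(xor(dec(c), prev))
        prev = c
    return out

msg = [b"FORGED!!", b"BLOCK-2!"]
iv, cipher = cbc_r_encrypt(msg)
assert cbc_decrypt(iv, cipher) == msg
```

This is why authenticating the ciphertext before decryption defeats CBC-R: the forged (IV, ciphertext) pair carries no valid authentication tag.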
The CBC-R attack will not work against an encryption scheme that authenticates ciphertext (using amessage authentication codeor similar) before decrypting.
The original attack against CBC was published in 2002 bySerge Vaudenay.[3]Concrete instantiations of the attack were later realised against SSL[9]and IPSec.[10][11]It was also applied to severalweb frameworks, includingJavaServer Faces,Ruby on Rails[12]andASP.NET[13][14][15]as well as other software, such as theSteamgaming client.[16]In 2012 it was shown to be effective againstPKCS 11cryptographic tokens.[1]
While these earlier attacks were fixed by most TLS implementors following its public announcement, a new variant, the Lucky Thirteen attack, published in 2013, used a timing side-channel to re-open the vulnerability even in implementations that had previously been fixed. As of early 2014, the attack is no longer considered a threat in real-life operation, though it is still workable in theory (see signal-to-noise ratio) against a certain class of machines. As of 2015[update], the most active area of development for attacks upon cryptographic protocols used to secure Internet traffic are downgrade attacks, such as Logjam[17] and Export RSA/FREAK[18] attacks, which trick clients into using less-secure cryptographic operations provided for compatibility with legacy clients when more secure ones are available. An attack called POODLE[19] (late 2014) combines both a downgrade attack (to SSL 3.0) with a padding oracle attack on the older, insecure protocol to enable compromise of the transmitted data. In May 2016 it was revealed in CVE-2016-2107 that the fix against Lucky Thirteen in OpenSSL introduced another timing-based padding oracle.[20][21]
|
https://en.wikipedia.org/wiki/Padding_oracle_attack
|
The origins of global surveillance can be traced back to the late 1940s, when the UKUSA Agreement was jointly enacted by the United Kingdom and the United States, whose close cooperation eventually culminated in the creation of the global surveillance network, code-named "ECHELON", in 1971.[1][2]
In the aftermath of the1970s Watergate affairand a subsequentcongressional inquiryled bySenatorFrank Church,[3]it was revealed that theNSA, in collaboration with Britain'sGCHQ, had routinely intercepted the international communications of prominent anti-Vietnam Warleaders such asJane FondaandBenjamin Spock.[4]Decades later, a multi-year investigation by theEuropean Parliamenthighlighted the NSA's role ineconomic espionagein a report entitled "Development of Surveillance Technology and Risk of Abuse of Economic Information", in 1999.[5]
However, for the general public, it was a series of detailed disclosures of internal NSA documents in June 2013 that first revealed the massive extent of the NSA's spying, both foreign and domestic. Most of these were leaked by an ex-contractor,Edward Snowden. Even so, a number of these older global surveillance programs such asPRISM,XKeyscore, andTemporawere referenced in the 2013 release of thousands of documents.[6]As confirmed by the NSA's directorKeith B. Alexanderin 2013, the NSA collects and stores all phone records of all American citizens.[7]Much of the data is kept in large storage facilities such as theUtah Data Center, a US$1.5 billionmegaprojectreferred to byThe Wall Street Journalas a "symbol of the spy agency's surveillance prowess."[8]
Wartime censorship of communications during the World Wars was paralleled by peacetime decipherment of communications by theBlack Chamber(Cipher Bureau, MI-8), operating with the approval of theU.S. State Departmentfrom 1919 to 1929.[9]In 1945 the now-defunctProject SHAMROCKwas created to gather alltelegraphicdata entering into or exiting from the United States.[9][10]Major communication companies such asWestern Union,RCA GlobalandITT World Communicationsactively aided the U.S. government in the latter's attempt to gain access to international message traffic.[11]
In 1952, the NSA was officially established.[9]According toThe New York Times, the NSA was created in "absolute secrecy" byPresident Truman.[12]Six weeks after President Truman took office, he orderedwiretapson the telephones ofThomas Gardiner Corcoran, a close advisor ofFranklin D. Roosevelt.[13]The recorded conversations are currently kept at theHarry S. Truman Presidential Library and Museum, along with other sensitive documents (~233,600 pages).
UnderJ. Edgar Hoover, theFederal Bureau of Investigation(FBI) carried out wide-ranging surveillance of communications and political expression, targeting many well-known speakers such asAlbert Einstein,[14][15][16]Frank Sinatra,[17][18]First LadyEleanor Roosevelt,[19][20]Marilyn Monroe,[21]John Lennon,[22]andDaniel Ellsberg,[23][24]Through the illegalCOINTELPROproject, Hoover placed emphasis oncivil rights movementleaderMartin Luther King Jr.(amongst others),[25][26]with one FBI memo calling King the "most dangerous and effectiveNegroleader in the country."[27]
Some of these activities were uncovered when documents were released in 1971 by theCitizens' Commission to Investigate the FBI, followed by the information revealed in the investigations of the 1972Watergate scandal.[28]Following the 1974resignation of Richard Nixon, and in light of the cumulative revelations, the U.S. SenateChurch Committeewas appointed in 1975 to investigate intelligence abuses by federal agencies. In a May 1976Timearticle,Nobody Asked: Is It Moral?, the magazine stated:
It did not matter that much of the information had already been released—or leaked—to the public. The effect was still overwhelming: a stunning, dismayingindictmentof U.S.intelligence agenciesand sixPresidents, fromFranklin RoosevelttoRichard Nixon, for having blithely violateddemocratic idealsandindividual rightswhile gathering information at home or conductingclandestine operationsabroad...[29]
During World War II the U.K. and U.S. governments entered into a series of agreements for sharing ofsignals intelligenceof enemy communications traffic.[31]In March 1946, a secret agreement, the "British-US Communication Intelligence Agreement", known as BRUSA, was established, based on the wartime agreements. The agreement "tied the two countries into a worldwide network of listening posts run byGovernment Communications Headquarters(GCHQ), the U.K.'s biggest spying organisation, and its U.S. equivalent, the National Security Agency."[32]
In 1988, an article titled "Somebody's listening" by Duncan Campbell in the New Statesman described the signals intelligence gathering activities of a program code-named "ECHELON".[33] The program was engaged by English-speaking World War II Allied powers Australia, Canada, New Zealand, the United Kingdom and the United States (collectively known as AUSCANNZUKUS). Based on the UKUSA Agreement, it was created to monitor the military and diplomatic communications of the Soviet Union and its Eastern Bloc allies during the Cold War in the early 1960s.[34] Though its existence had long been known, the UKUSA agreement only became public in 2010. It enabled the U.S. and the U.K. to exchange "knowledge from operations involving intercepting, decoding and translating foreign communications." The agreement forbade the parties to reveal its existence to any third party.[32]
By the late 1990s theECHELONsystem was capable of intercepting satellite transmissions,public switched telephone network(PSTN) communications (including most Internet traffic), and transmissions carried by microwave. A detailed description of ECHELON was provided by New Zealand journalistNicky Hagerin his 1996 book "Secret Power". While the existence of ECHELON was denied by some member governments, a report by a committee of theEuropean Parliamentin 2001 confirmed the program's use and warned Europeans about its reach and effects.[35]The European Parliament stated in its report that the term "ECHELON" was used in a number of contexts, but that the evidence presented indicated it was a signals intelligence collection system capable of interception and content inspection of telephone calls, fax, e-mail and other data traffic globally. The report to the European Parliament confirmed that this was a "global system for the interception of private and commercial communications."[34]
Echelon spy network revealed
Imagine a global spying network that can eavesdrop on every single phone call, fax or e-mail, anywhere on the planet. It sounds like science fiction, but it's true. Two of the chief protagonists - Britain and America - officially deny its existence. But the BBC has confirmation from the Australian Government that such a network really does exist...
In the aftermath of theSeptember 11 attacksin 2001 on theWorld Trade Centerandthe Pentagon, the scope of domestic spying in the United States increased significantly. The bid to prevent future attacks of this scale led to the passage of thePatriot Act. Later acts include theProtect America Act(which removes the warrant requirement for government surveillance of foreign targets[41]) and theFISA Amendments Act(which relaxed some of the original FISA court requirements).
In 2005, the existence of STELLARWIND was revealed by Thomas Tamm. On January 1, 2006, days after The New York Times wrote that "Bush Lets U.S. Spy on Callers Without Courts",[42] the President emphasized that "This is a limited program designed to prevent attacks on the United States of America. And I repeat, limited."[43]
In 2006,Mark Kleinrevealed the existence ofRoom 641Athat he had wired back in 2003.[44]In 2008, Babak Pasdar, a computer security expert, and CEO ofBat Bluepublicly revealed the existence of the "Quantico circuit", that he and his team found in 2003. He described it as a back door to the federal government in the systems of an unnamed wireless provider; the company was later independently identified asVerizon.[45]Additional disclosures regarding a mass surveillance program involving U.S. citizens had been made in the U.S. media in 2006.[46]
You Are a Suspect
Every purchase you make with a credit card, every magazine subscription you buy and medical prescription you fill, every Web site you visit and e-mail you send or receive, every academic grade you receive, every bank deposit you make, every trip you book and every event you attend—all these transactions and communications will go into what the Defense Department describes as a virtual, centralized grand database. To this computerized dossier on your private life from commercial sources, add every piece of information that government has about you—passport application, driver's license and toll records, judicial and divorce records, complaints from nosy neighbors to the F.B.I., your lifetime paper trail plus the latest hidden camera surveillance—and you have the supersnoop's dream: a Total Information Awareness about every U.S. citizen.
On November 28, 2010,WikiLeaksand five major news outlets in Spain (El País), France (Le Monde), Germany (Der Spiegel), the United Kingdom (The Guardian), and the United States (The New York Times) began publishing the first 220 of 251,287 leakedU.S. State department diplomatic "cables"simultaneously.[51]
On March 15, 2012, the American magazineWiredpublished an article with the headline "The NSA Is Building the Country's Biggest Spy Center (Watch What You Say)",[52]which was later mentioned by U.S. Rep.Hank Johnsonduring a congressional hearing. In response to Johnson's inquiry, NSA directorKeith B. Alexandertestified that these allegations made byWiredmagazine were untrue.[53]
In early 2013,Edward Snowdenhanded over 200,000top secretdocuments to various media outlets, triggering one of the biggestnews leaksin the modernhistory of the United States.[54]
|
https://en.wikipedia.org/wiki/Origins_of_global_surveillance
|
Global surveillance refers to the practice of globalized mass surveillance on entire populations across national borders.[1] Although its existence was first revealed in the 1970s and led legislators to attempt to curb domestic spying by the National Security Agency (NSA), it did not receive sustained public attention until the existence of ECHELON was revealed in the 1980s and confirmed in the 1990s.[2] In 2013 it gained substantial worldwide media attention due to the global surveillance disclosure by Edward Snowden.[3]
In 1972 NSA analystPerry Fellwock(under the pseudonym "Winslow Peck") introduced the readers ofRampartsmagazine to the NSA and theUKUSA Agreement.[4]In 1976, a separate article inTime Outmagazine revealed the existence of theGCHQ.[5]
In 1982James Bamford's book about the NSA,The Puzzle Palace, was first published. Bamford's second book,Body of Secrets: Anatomy of the Ultra-Secret National Security Agency, was published two decades later.
In 1988 the ECHELON network was revealed by Margaret Newsham, a Lockheed employee. Newsham told a member of the U.S. Congress that telephone calls of Strom Thurmond, a Republican U.S. senator, were being collected by the NSA. Congressional investigators determined that "targeting of U.S. political figures would not occur by accident, but was designed into the system from the start."[6]
By the late 1990sECHELONwas reportedly capable of monitoring up to 90% of all internet traffic.[7]According to theBBCin May 2001, however, "The US Government still refused to admit that Echelon even exists."[7]
In the aftermath of theSeptember 11 attacks,William Binney, along with colleaguesJ. Kirke WiebeandEdward Loomisand in cooperation with House stafferDiane Roark, asked the U.S. Defense Department to investigate the NSA for allegedly wasting "millions and millions of dollars" onTrailblazer, a system intended to analyze data carried on communications networks such as the Internet. Binney was also publicly critical of the NSA for spying on U.S. citizens after theSeptember 11, 2001 attacks.[8]Binney claimed that the NSA had failed to uncover the 9/11 plot despite its massive interception of data.[9]
In 2001, after the September 11 attacks,MI5started collecting bulk telephone communications data in the United Kingdom (i.e. what telephone numbers called each other and when) and authorized theHome Secretaryunder theTelecommunications Act 1984instead of theRegulation of Investigatory Powers Act 2000, which would have brought independent oversight and regulation. This was kept secret until announced by the then Home Secretary in 2015.[10][11][12]
On December 16, 2005, The New York Times published a report under the headline "Bush Lets U.S. Spy on Callers Without Courts", which was co-written by Eric Lichtblau and the Pulitzer Prize-winning journalist James Risen. According to The Times, the article's date of publication was delayed for a year (past the next presidential election cycle) because of alleged national security concerns.[13] Russ Tice was later revealed as a major source.
In 2006, further details of the NSA's domestic surveillance of U.S. citizens were provided by USA Today. The newspaper released a report on May 11, 2006 detailing the NSA's "massive database" of phone records collected from "tens of millions" of U.S. citizens. According to USA Today, these phone records were provided by several telecom companies such as AT&T, Verizon, and BellSouth.[15] AT&T technician Mark Klein was later revealed as a major source, specifically of rooms at network control centers on the internet backbone intercepting and recording all traffic passing through. In 2008 the security analyst Babak Pasdar revealed the existence of the so-called "Quantico circuit" that he and his team had set up in 2003. The circuit provided the U.S. federal government with a backdoor into the network of an unnamed wireless provider, which was later independently identified as Verizon.[16]
In 2007, former Qwest CEO Joseph Nacchio alleged in court, with supporting documentation, that in February 2001 (nearly 7 months prior to the September 11 attacks) the NSA proposed in a meeting to conduct blanket phone spying. He considered the spying to be illegal and refused to cooperate, and claims that the company was punished by being denied lucrative contracts.[17]
In 2011 details of themass surveillance industrywere released byWikiLeaks. According toJulian Assange, "We are in a world now where not only is it theoretically possible to record nearly all telecommunications traffic out of a country, all telephone calls, but where there is aninternational industryselling the devices now to do it."[18]
|
https://en.wikipedia.org/wiki/Global_surveillance_disclosures_(1970%E2%80%932013)
|