In probability theory and statistics, a copula is a multivariate cumulative distribution function for which the marginal probability distribution of each variable is uniform on the interval [0, 1]. Copulas are used to describe or model the dependence (inter-correlation) between random variables.[1] Their name, introduced by applied mathematician Abe Sklar in 1959, comes from the Latin for "link" or "tie", similar to but unrelated to grammatical copulas in linguistics. Copulas have been used widely in quantitative finance to model and minimize tail risk[2] and in portfolio-optimization applications.[3]
Sklar's theorem states that any multivariate joint distribution can be written in terms of univariate marginal distribution functions and a copula that describes the dependence structure between the variables.
Copulas are popular in high-dimensional statistical applications as they allow one to easily model and estimate the distribution of random vectors by estimating marginals and copulas separately. There are many parametric copula families available, which usually have parameters that control the strength of dependence. Some popular parametric copula models are outlined below.
Two-dimensional copulas are known in some other areas of mathematics under the names permutons and doubly stochastic measures.
Consider a random vector (X1, X2, …, Xd). Suppose its marginals are continuous, i.e. the marginal CDFs Fi(x) = Pr[Xi ≤ x] are continuous functions. By applying the probability integral transform to each component, the random vector

(U1, U2, …, Ud) = (F1(X1), F2(X2), …, Fd(Xd))

has marginals that are uniformly distributed on the interval [0, 1].
The copula of (X1, X2, …, Xd) is defined as the joint cumulative distribution function of (U1, U2, …, Ud):

C(u1, u2, …, ud) = Pr[U1 ≤ u1, U2 ≤ u2, …, Ud ≤ ud].
The copula C contains all information on the dependence structure between the components of (X1, X2, …, Xd), whereas the marginal cumulative distribution functions Fi contain all information on the marginal distributions of the Xi.
The reverse of these steps can be used to generate pseudo-random samples from general classes of multivariate probability distributions. That is, given a procedure to generate a sample (U1, U2, …, Ud) from the copula function, the required sample can be constructed as

(X1, X2, …, Xd) = (F1⁻¹(U1), F2⁻¹(U2), …, Fd⁻¹(Ud)).
The generalized inverses Fi⁻¹ are unproblematic almost surely, since the Fi were assumed to be continuous. Furthermore, the above formula for the copula function can be rewritten as

C(u1, u2, …, ud) = Pr[X1 ≤ F1⁻¹(u1), X2 ≤ F2⁻¹(u2), …, Xd ≤ Fd⁻¹(ud)].
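As a sketch, the two steps (sampling from a copula, then applying the generalized inverse marginal CDFs) can be illustrated in pure Python for a bivariate Gaussian copula with exponential marginals; the correlation value 0.7 and the Exp(1) marginals are illustrative choices, not taken from the text.

```python
import math
import random

random.seed(0)

def gaussian_copula_sample(rho):
    # Draw a correlated standard bivariate normal pair, then push each
    # component through the standard normal CDF Phi, yielding uniform
    # marginals on [0, 1] with a Gaussian dependence structure.
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho**2) * random.gauss(0.0, 1.0)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi(z1), phi(z2)

def exp_inverse_cdf(u, lam=1.0):
    # Generalized inverse of the Exp(lam) CDF F(x) = 1 - exp(-lam * x).
    return -math.log(1.0 - u) / lam

# Sample (U1, U2) from the copula, then set Xi = Fi^{-1}(Ui).
u1, u2 = gaussian_copula_sample(rho=0.7)
x1, x2 = exp_inverse_cdf(u1), exp_inverse_cdf(u2)
```

Any other copula sampler and any other quantile functions can be substituted without changing the structure of the two steps.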
In probabilistic terms, C : [0,1]^d → [0,1] is a d-dimensional copula if C is a joint cumulative distribution function of a d-dimensional random vector on the unit cube [0,1]^d with uniform marginals.[4]
In analytic terms, C : [0,1]^d → [0,1] is a d-dimensional copula if

- C(u1, …, ud) = 0 whenever at least one coordinate ui equals 0,
- C(u1, …, ud) = ui whenever all coordinates equal 1 except ui, and
- C is d-non-decreasing, i.e. the C-volume of every hyperrectangle contained in [0,1]^d is non-negative.
For instance, in the bivariate case, C : [0,1] × [0,1] → [0,1] is a bivariate copula if C(0, u) = C(u, 0) = 0, C(1, u) = C(u, 1) = u, and C(u2, v2) − C(u2, v1) − C(u1, v2) + C(u1, v1) ≥ 0 for all 0 ≤ u1 ≤ u2 ≤ 1 and 0 ≤ v1 ≤ v2 ≤ 1.
Sklar's theorem, named after Abe Sklar, provides the theoretical foundation for the application of copulas.[5][6] Sklar's theorem states that every multivariate cumulative distribution function

H(x1, …, xd) = Pr[X1 ≤ x1, …, Xd ≤ xd]

of a random vector (X1, X2, …, Xd) can be expressed in terms of its marginals Fi(xi) = Pr[Xi ≤ xi] and a copula C. Indeed:

H(x1, …, xd) = C(F1(x1), …, Fd(xd)).
If the multivariate distribution has a density h, it also holds that

h(x1, …, xd) = c(F1(x1), …, Fd(xd)) · f1(x1) ⋯ fd(xd),

where c is the density of the copula and fi is the density of the i-th marginal.
The theorem also states that, given H, the copula is unique on Ran(F1) × ⋯ × Ran(Fd), the Cartesian product of the ranges of the marginal CDFs. This implies that the copula is unique if the marginals Fi are continuous.
The converse is also true: given a copula C : [0,1]^d → [0,1] and marginals Fi(x), the function C(F1(x1), …, Fd(xd)) defines a d-dimensional cumulative distribution function with marginal distributions Fi(x).
Copulas mainly work when time series are stationary[7] and continuous.[8] Thus, a very important pre-processing step is to check for auto-correlation, trend and seasonality within the time series.

When time series are auto-correlated, they may generate a spurious dependence between sets of variables and result in an incorrect copula dependence structure.[9]
The Fréchet–Hoeffding theorem (after Maurice René Fréchet and Wassily Hoeffding[10]) states that for any copula C : [0,1]^d → [0,1] and any (u1, …, ud) ∈ [0,1]^d the following bounds hold:

W(u1, …, ud) ≤ C(u1, …, ud) ≤ M(u1, …, ud).
The function W is called the lower Fréchet–Hoeffding bound and is defined as

W(u1, …, ud) = max{1 − d + (u1 + ⋯ + ud), 0}.
The function M is called the upper Fréchet–Hoeffding bound and is defined as

M(u1, …, ud) = min{u1, …, ud}.
The upper bound is sharp: M is always a copula, and it corresponds to comonotone random variables.
The lower bound is point-wise sharp, in the sense that for fixed u there is a copula C̃ such that C̃(u) = W(u). However, W is a copula only in two dimensions, in which case it corresponds to countermonotonic random variables.
In two dimensions, i.e. the bivariate case, the Fréchet–Hoeffding theorem states

max{u + v − 1, 0} ≤ C(u, v) ≤ min{u, v}.
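The Fréchet–Hoeffding bounds are simple enough to sketch directly; the evaluation points below are illustrative:

```python
def W(*u):
    # Lower Fréchet–Hoeffding bound: max(1 - d + sum(u_i), 0).
    return max(sum(u) - len(u) + 1.0, 0.0)

def M(*u):
    # Upper Fréchet–Hoeffding bound: min(u_1, ..., u_d), itself a copula
    # (the comonotonicity copula).
    return min(u)

# Any copula is squeezed between the bounds; e.g. the independence
# copula Pi(u, v) = u * v at an arbitrary point:
u, v = 0.3, 0.8
assert W(u, v) <= u * v <= M(u, v)
```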
Several families of copulas have been described.
The Gaussian copula is a distribution over the unit hypercube [0,1]^d. It is constructed from a multivariate normal distribution over R^d by using the probability integral transform.
For a given correlation matrix R ∈ [−1,1]^{d×d}, the Gaussian copula with parameter matrix R can be written as

C_R^Gauss(u) = Φ_R(Φ⁻¹(u1), …, Φ⁻¹(ud)),
where Φ⁻¹ is the inverse cumulative distribution function of a standard normal and Φ_R is the joint cumulative distribution function of a multivariate normal distribution with mean vector zero and covariance matrix equal to the correlation matrix R. While there is no simple analytical formula for the copula function C_R^Gauss(u), it can be upper or lower bounded, and approximated using numerical integration.[11][12] The density can be written as[13]

c_R^Gauss(u) = (det R)^(−1/2) · exp(−(1/2) xᵀ (R⁻¹ − I) x), with x = (Φ⁻¹(u1), …, Φ⁻¹(ud)),

where I is the identity matrix.
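The density formula can be checked numerically in the bivariate case; the bisection-based Φ⁻¹ below is an illustrative stand-in for a proper quantile routine:

```python
import math

def phi_inv(p):
    # Inverse standard normal CDF via bisection; illustrative, not fast.
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def gaussian_copula_density(u, v, rho):
    # Bivariate case of c_R(u) = det(R)^(-1/2) exp(-x^T (R^{-1} - I) x / 2)
    # with R = [[1, rho], [rho, 1]] and x = (phi_inv(u), phi_inv(v)).
    x, y = phi_inv(u), phi_inv(v)
    det = 1.0 - rho**2
    quad = (rho**2 * (x**2 + y**2) - 2.0 * rho * x * y) / det
    return math.exp(-0.5 * quad) / math.sqrt(det)

# With rho = 0 the copula reduces to the independence copula, whose
# density is identically 1 on the unit square.
```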
Archimedean copulas are an associative class of copulas. Most common Archimedean copulas admit an explicit formula, which is not possible, for instance, for the Gaussian copula.

In practice, Archimedean copulas are popular because they allow modeling dependence in arbitrarily high dimensions with only one parameter governing the strength of dependence.
A copula C is called Archimedean if it admits the representation[14]

C(u1, …, ud; θ) = ψ⁻¹(ψ(u1; θ) + ⋯ + ψ(ud; θ); θ),

where ψ : [0,1] × Θ → [0,∞) is a continuous, strictly decreasing and convex function such that ψ(1; θ) = 0, and θ is a parameter within some parameter space Θ. Here ψ is the so-called generator function and ψ⁻¹ is its pseudo-inverse, which equals the ordinary inverse of ψ for 0 ≤ t ≤ ψ(0; θ) and is 0 for t ≥ ψ(0; θ).
Moreover, the above formula for C yields a copula for ψ⁻¹ if and only if ψ⁻¹ is d-monotone on [0, ∞).[15] That is, ψ⁻¹ is d − 2 times differentiable, the derivatives satisfy

(−1)^k (ψ⁻¹)^(k)(t; θ) ≥ 0

for all t ≥ 0 and k = 0, 1, …, d − 2, and (−1)^(d−2) (ψ⁻¹)^(d−2)(t; θ) is nonincreasing and convex.
The following tables highlight the most prominent bivariate Archimedean copulas, with their corresponding generators. Not all of them are completely monotone, i.e. d-monotone for all d ∈ N, or d-monotone for certain θ ∈ Θ only.
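As one concrete Archimedean family, the Clayton copula, whose generator is ψ(t; θ) = (t^(−θ) − 1)/θ for θ > 0, can be sketched as follows:

```python
def clayton_psi(t, theta):
    # Clayton generator psi(t; theta) = (t^(-theta) - 1) / theta, theta > 0.
    return (t ** -theta - 1.0) / theta

def clayton_psi_inv(s, theta):
    # Inverse of the generator on [0, infinity).
    return (1.0 + theta * s) ** (-1.0 / theta)

def clayton_copula(u, v, theta):
    # Archimedean construction C(u, v) = psi^{-1}(psi(u) + psi(v)).
    return clayton_psi_inv(clayton_psi(u, theta) + clayton_psi(v, theta), theta)
```

The boundary condition C(u, 1) = u and the strengthening of lower-tail dependence as θ grows serve as quick sanity checks on the construction.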
In statistical applications, many problems can be formulated in the following way. One is interested in the expectation of a response function g : R^d → R applied to some random vector (X1, …, Xd).[18] If we denote the CDF of this random vector by H, the quantity of interest can thus be written as

E[g(X1, …, Xd)] = ∫_{R^d} g(x1, …, xd) dH(x1, …, xd).
If H is given by a copula model, i.e.,

H(x1, …, xd) = C(F1(x1), …, Fd(xd)),

this expectation can be rewritten as

E[g(X1, …, Xd)] = ∫_{[0,1]^d} g(F1⁻¹(u1), …, Fd⁻¹(ud)) dC(u1, …, ud).

In case the copula C is absolutely continuous, i.e. C has a density c, this equation can be written as

E[g(X1, …, Xd)] = ∫_{[0,1]^d} g(F1⁻¹(u1), …, Fd⁻¹(ud)) c(u1, …, ud) du1 ⋯ dud,

and if each marginal distribution has density fi, it holds further that

E[g(X1, …, Xd)] = ∫_{R^d} g(x1, …, xd) c(F1(x1), …, Fd(xd)) f1(x1) ⋯ fd(xd) dx1 ⋯ dxd.
If the copula and the marginals are known (or have been estimated), this expectation can be approximated through the following Monte Carlo algorithm:

1. Draw a sample (U1^k, …, Ud^k) ~ C of size n from the copula C.
2. By applying the inverse marginal CDFs, produce a sample of (X1, …, Xd) by setting Xi^k = Fi⁻¹(Ui^k).
3. Approximate E[g(X1, …, Xd)] by its empirical value (1/n) Σ g(X1^k, …, Xd^k).
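A minimal sketch of this Monte Carlo approximation, using the independence copula and Exp(1) marginals purely as an illustration (so that the true value E[X1 + X2] = 2 is known exactly):

```python
import math
import random

random.seed(1)

def monte_carlo_expectation(g, copula_sample, inv_cdfs, n=100_000):
    # 1. draw (U1, ..., Ud) from the copula,
    # 2. map Ui -> Xi = Fi^{-1}(Ui) through the inverse marginal CDFs,
    # 3. average g over the n samples.
    total = 0.0
    for _ in range(n):
        u = copula_sample()
        x = [f(ui) for f, ui in zip(inv_cdfs, u)]
        total += g(*x)
    return total / n

# Illustration: independence copula, Exp(1) marginals, g(x, y) = x + y.
indep = lambda: (random.random(), random.random())
exp_inv = lambda u: -math.log(1.0 - u)
est = monte_carlo_expectation(lambda x, y: x + y, indep, [exp_inv, exp_inv])
```

Replacing `indep` with a sampler for any other copula changes only step 1.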
When studying multivariate data, one might want to investigate the underlying copula. Suppose we have observations

(X1^k, X2^k, …, Xd^k), k = 1, …, n,

from a random vector (X1, X2, …, Xd) with continuous marginals. The corresponding "true" copula observations would be

(U1^k, U2^k, …, Ud^k) = (F1(X1^k), F2(X2^k), …, Fd(Xd^k)), k = 1, …, n.
However, the marginal distribution functions Fi are usually not known. Therefore, one can construct pseudo copula observations by using the empirical distribution functions

Fi^n(x) = (1/n) Σ 1(Xi^k ≤ x)

instead. The pseudo copula observations are then defined as

(Ũ1^k, …, Ũd^k) = (F1^n(X1^k), …, Fd^n(Xd^k)), k = 1, …, n.
The corresponding empirical copula is then defined as

C^n(u1, …, ud) = (1/n) Σ 1(Ũ1^k ≤ u1, …, Ũd^k ≤ ud).
The components of the pseudo copula samples can also be written as Ũk^i = Rk^i / n, where Rk^i is the rank of the observation Xk^i among X1^i, …, Xn^i. Therefore, the empirical copula can be seen as the empirical distribution of the rank-transformed data.
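The rank construction of the pseudo copula observations Ũk^i = Rk^i / n can be sketched as follows; the three-point sample is illustrative:

```python
def pseudo_observations(data):
    # data: list of n observations, each a tuple of d components.
    # Returns the pseudo copula observations U~_k^i = R_k^i / n, where
    # R_k^i is the rank of X_k^i within component i (no ties assumed).
    n, d = len(data), len(data[0])
    pseudo = [[0.0] * d for _ in range(n)]
    for i in range(d):
        order = sorted(range(n), key=lambda k: data[k][i])
        for rank, k in enumerate(order, start=1):
            pseudo[k][i] = rank / n
    return pseudo

# Ranks in the first component are 2, 1, 3 and in the second 3, 1, 2,
# so the pseudo observations are those ranks divided by n = 3.
sample = [(2.0, 30.0), (1.0, 10.0), (3.0, 20.0)]
po = pseudo_observations(sample)
```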
The sample version of Spearman's rho:[19]
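One common sample version of Spearman's rho is the rank-difference formula, valid in the absence of ties; this particular form is an assumption here, as the cited estimator may be stated differently:

```python
def spearman_rho(xs, ys):
    # Sample Spearman's rho via the rank-difference formula
    # rho = 1 - 6 * sum(d_k^2) / (n * (n^2 - 1)), assuming no ties,
    # where d_k is the difference between the ranks of x_k and y_k.
    n = len(xs)
    def ranks(vals):
        order = sorted(range(n), key=lambda k: vals[k])
        r = [0] * n
        for rank, k in enumerate(order, start=1):
            r[k] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n**2 - 1))
```

Perfectly concordant data give rho = 1 and perfectly discordant data give rho = −1, matching the rank-transformation view of the empirical copula.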
In quantitative finance, copulas are applied to risk management, to portfolio management and optimization, and to derivatives pricing.
For the former, copulas are used to perform stress tests and robustness checks that are especially important during "downside/crisis/panic regimes", where extreme downside events may occur (e.g., the 2008 financial crisis). The formula was also adapted for financial markets and was used to estimate the probability distribution of losses on pools of loans or bonds.
During a downside regime, a large number of investors who have held positions in riskier assets such as equities or real estate may seek refuge in 'safer' investments such as cash or bonds. This is known as a flight-to-quality effect, and investors tend to exit their positions in riskier assets in large numbers in a short period of time. As a result, during downside regimes, correlations across equities are greater on the downside than on the upside, and this may have disastrous effects on the economy.[22][23] For example, anecdotally, we often read financial news headlines reporting the loss of hundreds of millions of dollars on the stock exchange in a single day; however, we rarely read reports of positive stock market gains of the same magnitude in the same short time frame.
Copulas aid in analyzing the effects of downside regimes by allowing the marginals and the dependence structure of a multivariate probability model to be modelled separately. For example, consider the stock exchange as a market consisting of a large number of traders, each operating with his or her own strategies to maximize profits. The individualistic behaviour of each trader can be described by modelling the marginals. However, as all traders operate on the same exchange, each trader's actions have an interaction effect with other traders'. This interaction effect can be described by modelling the dependence structure. Therefore, copulas allow us to analyse the interaction effects, which are of particular interest during downside regimes as investors tend to herd their trading behaviour and decisions. (See also agent-based computational economics, where price is treated as an emergent phenomenon, resulting from the interaction of the various market participants, or agents.)
The users of the formula have been criticized for creating "evaluation cultures" that continued to use simple copulæ despite the simple versions being acknowledged as inadequate for that purpose.[24][25] Previously, scalable copula models for large dimensions only allowed the modelling of elliptical dependence structures (i.e., Gaussian and Student-t copulas) that do not allow for correlation asymmetries, where correlations differ between the upside and downside regimes. However, the development of vine copulas[26] (also known as pair copulas) enables the flexible modelling of the dependence structure for portfolios of large dimensions.[27] The Clayton canonical vine copula allows for the occurrence of extreme downside events and has been successfully applied in portfolio optimization and risk management applications. The model is able to reduce the effects of extreme downside correlations and produces improved statistical and economic performance compared to scalable elliptical dependence copulas such as the Gaussian and Student-t copulas.[28]
Other models developed for risk management applications are panic copulas, which are glued with market estimates of the marginal distributions to analyze the effects of panic regimes on the portfolio profit and loss distribution. Panic copulas are created by Monte Carlo simulation, mixed with a re-weighting of the probability of each scenario.[29]
As regards derivatives pricing, dependence modelling with copula functions is widely used in applications of financial risk assessment and actuarial analysis, for example in the pricing of collateralized debt obligations (CDOs).[30] Some believe the methodology of applying the Gaussian copula to credit derivatives to be one of the causes of the 2008 financial crisis;[31][32][33] see David X. Li § CDOs and Gaussian copula.
Despite this perception, there were documented attempts within the financial industry, before the crisis, to address the limitations of the Gaussian copula and of copula functions more generally, specifically their lack of dependence dynamics. The Gaussian copula is lacking as it only allows for an elliptical dependence structure, since dependence is modeled only by the variance-covariance matrix.[28] This methodology is limited in that it does not allow dependence to evolve, whereas financial markets exhibit asymmetric dependence: correlations across assets increase significantly during downturns compared to upturns. Therefore, modeling approaches using the Gaussian copula represent extreme events poorly.[28][34] There have been attempts to propose models rectifying some of these copula limitations.[34][35][36]
In addition to CDOs, copulas have been applied to other asset classes as a flexible tool for analyzing multi-asset derivative products. The first such application outside credit was to use a copula to construct a basket implied volatility surface,[37] taking into account the volatility smile of the basket components. Copulas have since gained popularity in the pricing and risk management[38] of options on multi-assets in the presence of a volatility smile, in equity, foreign exchange and fixed income derivatives.
Recently, copula functions have been successfully applied to the database formulation for the reliability analysis of highway bridges, and to various multivariate simulation studies in civil engineering,[39] reliability of wind and earthquake engineering,[40] and mechanical and offshore engineering.[41] Researchers are also trying these functions in the field of transportation to understand the interaction between the behaviors of individual drivers which, in totality, shapes traffic flow.
Copulas are being used for reliability analysis of complex systems of machine components with competing failure modes.[42]
Copulas are being used for warranty data analysis, in which the tail dependence is analysed.[43]
Copulas are used in modelling turbulent partially premixed combustion, which is common in practical combustors.[44][45]
Copulæ have many applications in the area ofmedicine, for example,
The combination of SSA and copula-based methods has been applied for the first time as a novel stochastic tool for Earth orientation parameters prediction.[60][61]
Copulas have been used in both theoretical and applied analyses of hydroclimatic data. Theoretical studies adopted the copula-based methodology, for instance, to gain a better understanding of the dependence structures of temperature and precipitation in different parts of the world.[9][62][63] Applied studies adopted the copula-based methodology to examine, e.g., agricultural droughts[64] or the joint effects of temperature and precipitation extremes on vegetation growth.[65]
Copulas have been extensively used in climate- and weather-related research.[66][67]
Copulas have been used to estimate the solar irradiance variability in spatial networks and temporally for single locations.[68][69]
Large synthetic traces of vectors and stationary time series can be generated using the empirical copula while preserving the entire dependence structure of small datasets.[70] Such empirical traces are useful in various simulation-based performance studies.[71]
Copulas have been used for quality ranking in the manufacturing of electronically commutated motors.[72]
Copulas are important because they represent a dependence structure without using marginal distributions. Copulas have been widely used in the field of finance, but their use in signal processing is relatively new. Copulas have been employed in the field of wireless communication for classifying radar signals, change detection in remote sensing applications, and EEG signal processing in medicine. In this section, a short mathematical derivation to obtain the copula density function is presented, followed by a table of copula density functions with the relevant signal processing applications.
Copulas have been used for determining the core radio luminosity function of active galactic nuclei (AGNs),[73] which cannot be realized using traditional methods due to difficulties in sample completeness.
For any two random variables X and Y, the continuous joint probability distribution function can be written as

F_XY(x, y) = Pr{X ≤ x, Y ≤ y},

where F_X(x) = Pr{X ≤ x} and F_Y(y) = Pr{Y ≤ y} are the marginal cumulative distribution functions of the random variables X and Y, respectively.
The copula distribution function C(u, v) can then be defined using Sklar's theorem[74][6] as

F_XY(x, y) = C(F_X(x), F_Y(y)) = C(u, v),

where u = F_X(x) and v = F_Y(y) are the marginal distribution functions, F_XY(x, y) is the joint distribution function, and u, v ∈ (0, 1).
Assuming F_XY(·, ·) is a.e. twice differentiable, we start from the relationship between the joint probability density function (PDF) and the joint cumulative distribution function (CDF) and its partial derivatives:

f_XY(x, y) = ∂²F_XY(x, y) / (∂x ∂y) = c(u, v) · f_X(x) · f_Y(y),

where c(u, v) is the copula density function, and f_X(x) and f_Y(y) are the marginal probability density functions of X and Y, respectively. There are four terms in this equation; if any three are known, the fourth can be calculated. For example, the copula density can be recovered from the joint density and the two marginal densities.
Various bivariate copula density functions are important in the area of signal processing. Here u = F_X(x) and v = F_Y(y) are marginal distribution functions and f_X(x) and f_Y(y) are marginal density functions. Extensions and generalizations of copulas for statistical signal processing have been shown to construct new bivariate copulas for exponential, Weibull, and Rician distributions. Zeng et al.[75] presented algorithms, simulation, optimal selection, and practical applications of these copulas in signal processing.
Reported signal processing applications include validating biometric authentication,[77] modeling stochastic dependence in large-scale integration of wind power,[78] unsupervised classification of radar signals,[79] and fusion of correlated sensor decisions.[92]
https://en.wikipedia.org/wiki/Copula_(probability_theory)
In mathematics, the disintegration theorem is a result in measure theory and probability theory. It rigorously defines the idea of a non-trivial "restriction" of a measure to a measure-zero subset of the measure space in question. It is related to the existence of conditional probability measures. In a sense, "disintegration" is the opposite process to the construction of a product measure.
Consider the unit square S = [0, 1] × [0, 1] in the Euclidean plane R². Consider the probability measure μ defined on S by the restriction of two-dimensional Lebesgue measure λ² to S. That is, the probability of an event E ⊆ S is simply the area of E. We assume E is a measurable subset of S.
Consider a one-dimensional subset of S such as the line segment L_x = {x} × [0, 1]. L_x has μ-measure zero; every subset of L_x is a μ-null set, since the Lebesgue measure space is a complete measure space:

E ⊆ L_x ⟹ μ(E) = 0.
While true, this is somewhat unsatisfying. It would be nice to say that μ "restricted to" L_x is the one-dimensional Lebesgue measure λ¹, rather than the zero measure. The probability of a "two-dimensional" event E could then be obtained as an integral of the one-dimensional probabilities of the vertical "slices" E ∩ L_x: more formally, if μ_x denotes one-dimensional Lebesgue measure on L_x, then

μ(E) = ∫_{[0,1]} μ_x(E ∩ L_x) dx

for any "nice" E ⊆ S. The disintegration theorem makes this argument rigorous in the context of measures on metric spaces.
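The slice identity μ(E) = ∫ μ_x(E ∩ L_x) dx can be checked numerically for a simple event; here E is the triangle below the diagonal, an illustrative choice whose area is known to be 1/2:

```python
# Numerical check of mu(E) = integral over [0, 1] of mu_x(E ∩ L_x) dx
# for E = {(x, y) in S : y <= x}, the triangle below the diagonal.
n = 10_000
dx = 1.0 / n
total = 0.0
for k in range(n):
    x = (k + 0.5) * dx        # midpoint of the k-th x-slice
    slice_measure = x         # lambda^1({y in [0, 1] : y <= x}) = x
    total += slice_measure * dx
# total approximates the area of the triangle, 1/2.
```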
(Hereafter, P(X) will denote the collection of Borel probability measures on a topological space (X, T).)
The assumptions of the theorem are as follows:
The conclusion of the theorem: there exists a ν-almost everywhere uniquely determined family of probability measures {μ_x}_{x ∈ X} ⊆ P(Y), which provides a "disintegration" of μ into {μ_x}_{x ∈ X}, such that:
The original example was a special case of the problem of product spaces, to which the disintegration theorem applies.
When Y is written as a Cartesian product Y = X1 × X2 and πi : Y → Xi is the natural projection, then each fibre π1⁻¹(x1) can be canonically identified with X2, and there exists a Borel family of probability measures {μ_{x1}} in P(X2) (which is (π1)∗(μ)-almost everywhere uniquely determined) such that

μ = ∫_{X1} μ_{x1} μ(π1⁻¹(dx1)) = ∫_{X1} μ_{x1} d(π1)∗(μ)(x1),

which gives in particular

∫_{X1 × X2} f(x1, x2) μ(dx1, dx2) = ∫_{X1} ( ∫_{X2} f(x1, x2) μ(dx2 | x1) ) μ(π1⁻¹(dx1))

and

μ(A × B) = ∫_A μ(B | x1) μ(π1⁻¹(dx1)).
The relation to conditional expectation is given by the identities

E(f | π1)(x1) = ∫_{X2} f(x1, x2) μ(dx2 | x1),

μ(A × B | π1)(x1) = 1_A(x1) · μ(B | x1).
The disintegration theorem can also be seen as justifying the use of a "restricted" measure in vector calculus. For instance, in Stokes' theorem as applied to a vector field flowing through a compact surface Σ ⊂ R³, it is implicit that the "correct" measure on Σ is the disintegration of three-dimensional Lebesgue measure λ³ on Σ, and that the disintegration of this measure on ∂Σ is the same as the disintegration of λ³ on ∂Σ.[2]
The disintegration theorem can be applied to give a rigorous treatment of conditional probability distributions in statistics, while avoiding purely abstract formulations of conditional probability.[3] The theorem is related to the Borel–Kolmogorov paradox, for example.
https://en.wikipedia.org/wiki/Disintegration_theorem
Multivariate statistics is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable, i.e., multivariate random variables.
Multivariate statistics concerns understanding the different aims and background of each of the different forms of multivariate analysis, and how they relate to each other. The practical application of multivariate statistics to a particular problem may involve several types of univariate and multivariate analyses in order to understand the relationships between variables and their relevance to the problem being studied.
In addition, multivariate statistics is concerned with multivariateprobability distributions, in terms of both
Certain types of problems involving multivariate data, for example simple linear regression and multiple regression, are not usually considered to be special cases of multivariate statistics because the analysis is dealt with by considering the (univariate) conditional distribution of a single outcome variable given the other variables.
Multivariate analysis (MVA) is based on the principles of multivariate statistics. Typically, MVA is used to address situations where multiple measurements are made on each experimental unit and the relations among these measurements and their structures are important.[1] A modern, overlapping categorization of MVA includes:[1]
Multivariate analysis can be complicated by the desire to include physics-based analysis to calculate the effects of variables for a hierarchical "system-of-systems". Often, studies that wish to use multivariate analysis are stalled by the dimensionality of the problem. These concerns are often eased through the use of surrogate models, highly accurate approximations of the physics-based code. Since surrogate models take the form of an equation, they can be evaluated very quickly. This becomes an enabler for large-scale MVA studies: while a Monte Carlo simulation across the design space is difficult with physics-based codes, it becomes trivial when evaluating surrogate models, which often take the form of response-surface equations.
Many different models are used in MVA, each with its own type of analysis:
It is very common that in an experimentally acquired set of data the values of some components of a given data point are missing. Rather than discarding the whole data point, it is common to "fill in" values for the missing components, a process called "imputation".[6]
There is a set of probability distributions used in multivariate analyses that play a similar role to the corresponding set of distributions used in univariate analysis when the normal distribution is appropriate to a dataset. These multivariate distributions are:
The Inverse-Wishart distribution is important in Bayesian inference, for example in Bayesian multivariate linear regression. Additionally, Hotelling's T-squared distribution is a multivariate distribution, generalising Student's t-distribution, that is used in multivariate hypothesis testing.
C. R. Rao made significant contributions to multivariate statistical theory throughout his career, particularly in the mid-20th century. One of his key works is the book Advanced Statistical Methods in Biometric Research, published in 1952, which laid the foundation for many concepts in multivariate statistics.[7] Anderson's 1958 textbook, An Introduction to Multivariate Statistical Analysis,[8] educated a generation of theorists and applied statisticians; Anderson's book emphasizes hypothesis testing via likelihood ratio tests and the properties of power functions: admissibility, unbiasedness and monotonicity.[9][10]
MVA was formerly discussed solely in the context of statistical theories, due to the size and complexity of underlying datasets and its high computational cost. With the dramatic growth of computational power, MVA now plays an increasingly important role in data analysis and has wide application in Omics fields.
There are an enormous number of software packages and other tools for multivariate analysis, including:
https://en.wikipedia.org/wiki/Multivariate_statistics
When two probability distributions overlap, statistical interference exists. Knowledge of the distributions can be used to determine the likelihood that one parameter exceeds another, and by how much.
This technique can be used for geometric dimensioning of mechanical parts, for determining when an applied load exceeds the strength of a structure, and in many other situations. This type of analysis can also be used to estimate the probability of failure or the failure rate.
Mechanical parts are usually designed to fit precisely together. For example, if a shaft is designed to have a "sliding fit" in a hole, the shaft must be a little smaller than the hole. (Traditional tolerances may suggest that all dimensions fall within those intended tolerances. A process capability study of actual production, however, may reveal normal distributions with long tails.) Both the shaft and hole sizes will usually form normal distributions with some average (arithmetic mean) and standard deviation.
With two such normal distributions, a distribution of interference can be calculated. The derived distribution will also be normal, and its average will be equal to the difference between the means of the two base distributions. The variance of the derived distribution will be the sum of the variances of the two base distributions.
This derived distribution can be used to determine how often the difference in dimensions will be less than zero (i.e., the shaft cannot fit in the hole), how often the difference will be less than the required sliding gap (the shaft fits, but too tightly), and how often the difference will be greater than the maximum acceptable gap (the shaft fits, but not tightly enough).
Physical properties and the conditions of use are also inherently variable. For example, the applied load (stress) on a mechanical part may vary. The measured strength of that part (tensile strength, etc.) may also be variable. The part will break when the stress exceeds the strength.[1][2]
With two normal distributions, the statistical interference may be calculated as above. (This problem is also workable for transformed units such as the log-normal distribution.) With other distributions, or combinations of different distributions, a Monte Carlo method or simulation is often the most practical way to quantify the effects of statistical interference.
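A minimal Monte Carlo sketch of the stress-versus-strength case, assuming made-up log-normal parameters for both quantities:

```python
import random

random.seed(0)  # reproducible illustration
N = 100_000

# Hypothetical log-normal stress and strength (parameters are illustrative).
# A failure is one trial in which the sampled stress exceeds the strength.
failures = sum(
    random.lognormvariate(3.0, 0.3) > random.lognormvariate(3.5, 0.2)
    for _ in range(N)
)
failure_rate = failures / N  # estimated probability of failure
```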
|
https://en.wikipedia.org/wiki/Statistical_interference
|
In probability theory, a pairwise independent collection of random variables is a set of random variables any two of which are independent.[1] Any collection of mutually independent random variables is pairwise independent, but some pairwise independent collections are not mutually independent. Pairwise independent random variables with finite variance are uncorrelated.
A pair of random variables X and Y are independent if and only if the random vector (X, Y) with joint cumulative distribution function (CDF) F_{X,Y}(x,y) satisfies F_{X,Y}(x,y) = F_X(x) F_Y(y),
or equivalently, their joint density f_{X,Y}(x,y) satisfies f_{X,Y}(x,y) = f_X(x) f_Y(y).
That is, the joint distribution is equal to the product of the marginal distributions.[2]
Unless the meaning is unclear in context, in practice the modifier "mutual" is usually dropped, so that independence means mutual independence. A statement such as "X, Y, Z are independent random variables" means that X, Y, Z are mutually independent.
Pairwise independence does not imply mutual independence, as shown by the following example attributed to S. Bernstein.[3]
Suppose X and Y are two independent tosses of a fair coin, where we designate 1 for heads and 0 for tails. Let the third random variable Z be equal to 1 if exactly one of those coin tosses resulted in "heads", and 0 otherwise (i.e., Z = X ⊕ Y). Then jointly the triple (X, Y, Z) has the following probability distribution: each of the four outcomes (0, 0, 0), (0, 1, 1), (1, 0, 1), and (1, 1, 0) occurs with probability 1/4.
Here the marginal probability distributions are identical: f_X(0) = f_Y(0) = f_Z(0) = 1/2 and f_X(1) = f_Y(1) = f_Z(1) = 1/2. The bivariate distributions also agree: f_{X,Y} = f_{X,Z} = f_{Y,Z}, where f_{X,Y}(0,0) = f_{X,Y}(0,1) = f_{X,Y}(1,0) = f_{X,Y}(1,1) = 1/4.
Since each of the pairwise joint distributions equals the product of their respective marginal distributions, the variables are pairwise independent: f_{X,Y} = f_X f_Y, f_{X,Z} = f_X f_Z, and f_{Y,Z} = f_Y f_Z.
However, X, Y, and Z are not mutually independent, since f_{X,Y,Z}(x,y,z) ≠ f_X(x) f_Y(y) f_Z(z); for example, the left side equals 1/4 for (x, y, z) = (0, 0, 0) while the right side equals 1/8. In fact, any of {X, Y, Z} is completely determined by the other two (any of X, Y, Z is the sum (modulo 2) of the others). That is as far from independence as random variables can get.
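Bernstein's construction can be checked exhaustively. The sketch below enumerates the four equally likely outcomes and verifies pairwise but not mutual independence:

```python
from itertools import product
from fractions import Fraction

# The four equally likely outcomes of (X, Y), with Z = X XOR Y
outcomes = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]
quarter = Fraction(1, 4)

def prob(event):
    # Exact probability of an event over the four outcomes
    return sum(quarter for o in outcomes if event(o))

# Pairwise independence: every bivariate distribution factorizes
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for a, c in product([0, 1], repeat=2):
        assert prob(lambda o: o[i] == a and o[j] == c) == \
               prob(lambda o: o[i] == a) * prob(lambda o: o[j] == c)

# No mutual independence: the trivariate distribution does not factorize
joint = prob(lambda o: o == (0, 0, 0))  # equals 1/4
factored = Fraction(1, 2) ** 3          # product of marginals, 1/8
```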
Bounds on the probability that the sum of Bernoulli random variables is at least one, commonly known as the union bound, are provided by the Boole–Fréchet[4][5] inequalities. While these bounds assume only univariate information, several bounds that use knowledge of general bivariate probabilities have been proposed too. Denote by {A_i, i ∈ {1, 2, ..., n}} a set of n Bernoulli events with probability of occurrence P(A_i) = p_i for each i. Suppose the bivariate probabilities are given by P(A_i ∩ A_j) = p_{ij} for every pair of indices (i, j). Kounias[6] derived the following upper bound: P(∪_i A_i) ≤ Σ_i p_i − max_j Σ_{i≠j} p_{ij},
which subtracts the maximum weight of a star spanning tree on a complete graph with n nodes (where the edge weights are given by p_{ij}) from the sum of the marginal probabilities Σ_i p_i. Hunter and Worsley[7][8] tightened this upper bound by optimizing over spanning trees τ ∈ T as follows: P(∪_i A_i) ≤ Σ_i p_i − max_{τ∈T} Σ_{(i,j)∈τ} p_{ij},
where T is the set of all spanning trees on the graph. These bounds are not the tightest possible with general bivariates p_{ij} even when feasibility is guaranteed, as shown in Boros et al.[9] However, when the variables are pairwise independent (p_{ij} = p_i p_j), Ramachandra and Natarajan[10] showed that the Kounias–Hunter–Worsley[6][7][8] bound is tight by proving that the maximum probability of the union of events admits a closed-form expression given as: max P(∪_i A_i) = min{ p_n + (1 − p_n) Σ_{i=1}^{n−1} p_i, 1 },  (Eq. 1)
where the probabilities are sorted in increasing order as 0 ≤ p_1 ≤ p_2 ≤ … ≤ p_n ≤ 1. The tight bound in Eq. 1 depends only on the sum of the smallest n − 1 probabilities, Σ_{i=1}^{n−1} p_i, and the largest probability p_n. Thus, while the ordering of the probabilities plays a role in the derivation of the bound, the ordering among the smallest n − 1 probabilities {p_1, p_2, ..., p_{n−1}} is inconsequential since only their sum is used.
It is useful to compare the smallest bounds on the probability of the union with arbitrary dependence and pairwise independence respectively. The tightest Boole–Fréchet upper union bound (assuming only univariate information) is given as: max P(∪_i A_i) = min{ Σ_{i=1}^{n} p_i, 1 }.  (Eq. 2)
As shown by Ramachandra and Natarajan,[10] it can be easily verified that the ratio of the two tight bounds in Eq. 2 and Eq. 1 is upper bounded by 4/3, where the maximum value of 4/3 is attained when Σ_{i=1}^{n−1} p_i = 1/2 and p_n = 1/2,
where the probabilities are sorted in increasing order as 0 ≤ p_1 ≤ p_2 ≤ … ≤ p_n ≤ 1. In other words, in the best-case scenario, the pairwise independence bound in Eq. 1 provides an improvement of 25% over the univariate bound in Eq. 2.
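The two closed-form bounds can be compared numerically. The helper names below are illustrative, and the formulas follow Eqs. 1 and 2 as described above:

```python
def boole_bound(p):
    """Tightest univariate upper bound on P(union): min(sum p_i, 1) (Eq. 2)."""
    return min(sum(p), 1.0)

def pairwise_indep_bound(p):
    """Tight bound under pairwise independence (Eq. 1):
    min(p_n + (1 - p_n) * (sum of the smallest n-1 probabilities), 1)."""
    p = sorted(p)
    smallest_sum, p_max = sum(p[:-1]), p[-1]
    return min(p_max + (1 - p_max) * smallest_sum, 1.0)

# The worst-case ratio 4/3 is attained when the largest probability is 1/2
# and the remaining probabilities sum to 1/2.
probs = [0.25, 0.25, 0.5]
ratio = boole_bound(probs) / pairwise_indep_bound(probs)
```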
More generally, we can talk about k-wise independence, for any k ≥ 2. The idea is similar: a set of random variables is k-wise independent if every subset of size k of those variables is independent. k-wise independence has been used in theoretical computer science, where it was used to prove a theorem about the problem MAXEkSAT.
k-wise independence is used in the proof that k-independent hashing functions are secure, unforgeable message authentication codes.
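A classic concrete example of a pairwise independent family (not tied to any particular reference above) is the hash family h_{a,b}(x) = (ax + b) mod p over a prime p. The sketch below verifies pairwise independence by exhaustive counting:

```python
from itertools import product

# Pairwise independent hash family: h_{a,b}(x) = (a*x + b) mod p,
# with (a, b) uniform over {0,...,p-1}^2 and p prime (illustrative p = 5).
p = 5
x1, x2 = 1, 3  # any two distinct inputs

counts = {}
for a, b in product(range(p), repeat=2):
    pair = ((a * x1 + b) % p, (a * x2 + b) % p)
    counts[pair] = counts.get(pair, 0) + 1

# Each of the p*p value pairs arises from exactly one (a, b), so the pair
# (h(x1), h(x2)) is uniform: h(x1) and h(x2) are independent.
```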
|
https://en.wikipedia.org/wiki/Pairwise_independence
|
In the mathematical field of graph theory, a graph homomorphism is a mapping between two graphs that respects their structure. More concretely, it is a function between the vertex sets of two graphs that maps adjacent vertices to adjacent vertices.
Homomorphisms generalize various notions of graph colorings and allow the expression of an important class of constraint satisfaction problems, such as certain scheduling or frequency assignment problems.[1] The fact that homomorphisms can be composed leads to rich algebraic structures: a preorder on graphs, a distributive lattice, and a category (one for undirected graphs and one for directed graphs).[2] The computational complexity of finding a homomorphism between given graphs is prohibitive in general, but a lot is known about special cases that are solvable in polynomial time. Boundaries between tractable and intractable cases have been an active area of research.[3]
In this article, unless stated otherwise, graphs are finite, undirected graphs with loops allowed, but multiple edges (parallel edges) disallowed.
A graph homomorphism[4] f from a graph G = (V(G), E(G)) to a graph H = (V(H), E(H)), written f : G → H,
is a function from V(G) to V(H) that preserves edges. Formally, (u, v) ∈ E(G) implies (f(u), f(v)) ∈ E(H), for all pairs of vertices u, v in V(G).
If there exists any homomorphism from G to H, then G is said to be homomorphic to H or H-colorable. This is often denoted as just G → H.
The above definition is extended to directed graphs. Then, for a homomorphism f : G → H, (f(u), f(v)) is an arc (directed edge) of H whenever (u, v) is an arc of G.
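A brute-force search makes the definition concrete. The function names below are illustrative, and undirected graphs are handled by giving H a symmetric edge set:

```python
from itertools import product

def is_homomorphism(f, edges_G, edges_H):
    """Check that f maps every edge of G to an edge of H."""
    return all((f[u], f[v]) in edges_H for u, v in edges_G)

def find_homomorphism(V_G, edges_G, V_H, edges_H):
    """Brute-force search over all vertex maps V(G) -> V(H)."""
    V_G, V_H = list(V_G), list(V_H)
    for images in product(V_H, repeat=len(V_G)):
        f = dict(zip(V_G, images))
        if is_homomorphism(f, edges_G, edges_H):
            return f
    return None

# The 5-cycle maps to K3 (it is 3-colorable) but not to K2 (not bipartite).
C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
K3 = {(a, b) for a in range(3) for b in range(3) if a != b}  # symmetric
K2 = {(0, 1), (1, 0)}
```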
There is an injective homomorphism from G to H (i.e., one that maps distinct vertices in G to distinct vertices in H) if and only if G is isomorphic to a subgraph of H.
If a homomorphism f : G → H is a bijection, and its inverse function f⁻¹ is also a graph homomorphism, then f is a graph isomorphism.[5]
Covering maps are a special kind of homomorphism that mirror the definition and many properties of covering maps in topology.[6] They are defined as surjective homomorphisms (i.e., something maps to each vertex) that are also locally bijective, that is, a bijection on the neighbourhood of each vertex.
An example is the bipartite double cover, formed from a graph by splitting each vertex v into v0 and v1 and replacing each edge u,v with edges u0,v1 and v0,u1. The function mapping v0 and v1 in the cover to v in the original graph is a homomorphism and a covering map.
Graph homeomorphism is a different notion, not related directly to homomorphisms. Roughly speaking, it requires injectivity, but allows mapping edges to paths (not just to edges). Graph minors are a still more relaxed notion.
Two graphs G and H are homomorphically equivalent if G → H and H → G.[4] The maps are not necessarily surjective or injective. For instance, the complete bipartite graphs K2,2 and K3,3 are homomorphically equivalent: each map can be defined as taking the left (resp. right) half of the domain graph and mapping it to just one vertex in the left (resp. right) half of the image graph.
A retraction is a homomorphism r from a graph G to a subgraph H of G such that r(v) = v for each vertex v of H.
In this case the subgraph H is called a retract of G.[7]
A core is a graph with no homomorphism to any proper subgraph. Equivalently, a core can be defined as a graph that does not retract to any proper subgraph.[8] Every graph G is homomorphically equivalent to a unique core (up to isomorphism), called the core of G.[9] Notably, this is not true in general for infinite graphs.[10] However, the same definitions apply to directed graphs, and a directed graph is also equivalent to a unique core.
Every graph and every directed graph contains its core as a retract and as an induced subgraph.[7]
For example, all complete graphs Kn and all odd cycles (cycle graphs of odd length) are cores.
Every 3-colorable graph G that contains a triangle (that is, has the complete graph K3 as a subgraph) is homomorphically equivalent to K3. This is because, on one hand, a 3-coloring of G is the same as a homomorphism G → K3, as explained below. On the other hand, every subgraph of G trivially admits a homomorphism into G, implying K3 → G. This also means that K3 is the core of any such graph G. Similarly, every bipartite graph that has at least one edge is equivalent to K2.[11]
A k-coloring, for some integer k, is an assignment of one of k colors to each vertex of a graph G such that the endpoints of each edge get different colors. The k-colorings of G correspond exactly to homomorphisms from G to the complete graph Kk.[12] Indeed, the vertices of Kk correspond to the k colors, and two colors are adjacent as vertices of Kk if and only if they are different. Hence a function defines a homomorphism to Kk if and only if it maps adjacent vertices of G to different colors (i.e., it is a k-coloring). In particular, G is k-colorable if and only if it is Kk-colorable.[12]
If there are two homomorphisms G → H and H → Kk, then their composition G → Kk is also a homomorphism.[13] In other words, if a graph H can be colored with k colors, and there is a homomorphism from G to H, then G can also be k-colored. Therefore, G → H implies χ(G) ≤ χ(H), where χ denotes the chromatic number of a graph (the least k for which it is k-colorable).[14]
General homomorphisms can also be thought of as a kind of coloring: if the vertices of a fixed graph H are the available colors and edges of H describe which colors are compatible, then an H-coloring of G is an assignment of colors to vertices of G such that adjacent vertices get compatible colors.
Many notions of graph coloring fit into this pattern and can be expressed as graph homomorphisms into different families of graphs. Circular colorings can be defined using homomorphisms into circular complete graphs, refining the usual notion of colorings.[15] Fractional and b-fold coloring can be defined using homomorphisms into Kneser graphs.[16] T-colorings correspond to homomorphisms into certain infinite graphs.[17] An oriented coloring of a directed graph is a homomorphism into any oriented graph.[18] An L(2,1)-coloring is a homomorphism into the complement of the path graph that is locally injective, meaning it is required to be injective on the neighbourhood of every vertex.[19]
Another interesting connection concerns orientations of graphs.
An orientation of an undirected graph G is any directed graph obtained by choosing one of the two possible orientations for each edge.
An example of an orientation of the complete graph Kk is the transitive tournament T→k with vertices 1, 2, …, k and arcs from i to j whenever i < j.
A homomorphism between orientations of graphs G and H yields a homomorphism between the undirected graphs G and H, simply by disregarding the orientations.
On the other hand, given a homomorphism G → H between undirected graphs, any orientation H→ of H can be pulled back to an orientation G→ of G so that G→ has a homomorphism to H→.
Therefore, a graph G is k-colorable (has a homomorphism to Kk) if and only if some orientation of G has a homomorphism to T→k.[20]
A folklore theorem states that for all k, a directed graph G has a homomorphism to T→k if and only if it admits no homomorphism from the directed path P→k+1.[21] Here P→n is the directed graph with vertices 1, 2, …, n and edges from i to i + 1, for i = 1, 2, …, n − 1.
Therefore, a graph is k-colorable if and only if it has an orientation that admits no homomorphism from P→k+1.
This statement can be strengthened slightly to say that a graph is k-colorable if and only if some orientation contains no directed path of length k (no P→k+1 as a subgraph).
This is the Gallai–Hasse–Roy–Vitaver theorem.
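One direction of this theorem is easy to check in code: orienting each edge from the lower-colored endpoint to the higher-colored one yields an acyclic orientation whose longest directed path has at most k vertices. A sketch with an illustrative 3-coloring of the 5-cycle:

```python
from functools import lru_cache

def longest_path_vertices(arcs, vertices):
    """Number of vertices on the longest directed path in an acyclic digraph."""
    adj = {v: [] for v in vertices}
    for u, v in arcs:
        adj[u].append(v)

    @lru_cache(maxsize=None)
    def depth(v):  # vertices on the longest directed path starting at v
        return 1 + max((depth(w) for w in adj[v]), default=0)

    return max(depth(v) for v in adj)

# A proper 3-coloring of the 5-cycle; edges are oriented low -> high color
coloring = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
arcs = [(u, v) if coloring[u] < coloring[v] else (v, u) for u, v in edges]
```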
Some scheduling problems can be modeled as a question about finding graph homomorphisms.[22][23] As an example, one might want to assign workshop courses to time slots in a calendar so that two courses attended by the same student are not too close to each other in time. The courses form a graph G, with an edge between any two courses that are attended by some common student. The time slots form a graph H, with an edge between any two slots that are distant enough in time. For instance, if one wants a cyclical, weekly schedule, such that each student gets their workshop courses on non-consecutive days, then H would be the complement graph of C7. A graph homomorphism from G to H is then a schedule assigning courses to time slots, as specified.[22] To add a requirement saying that, e.g., no single student has courses on both Friday and Monday, it suffices to remove the corresponding edge from H.
A simple frequency allocation problem can be specified as follows: a number of transmitters in a wireless network must choose a frequency channel on which they will transmit data. To avoid interference, transmitters that are geographically close should use channels with frequencies that are far apart. If this condition is approximated with a single threshold to define 'geographically close' and 'far apart', then a valid channel choice again corresponds to a graph homomorphism. It should go from the graph of transmitters G, with edges between pairs that are geographically close, to the graph of channels H, with edges between channels that are far apart. While this model is rather simplified, it does admit some flexibility: transmitter pairs that are not close but could interfere because of geographical features can be added to the edges of G. Those that do not communicate at the same time can be removed from it. Similarly, channel pairs that are far apart but exhibit harmonic interference can be removed from the edge set of H.[24]
In each case, these simplified models display many of the issues that have to be handled in practice.[25] Constraint satisfaction problems, which generalize graph homomorphism problems, can express various additional types of conditions (such as individual preferences, or bounds on the number of coinciding assignments). This allows the models to be made more realistic and practical.
Graphs and directed graphs can be viewed as a special case of the far more general notion called relational structures (defined as a set with a tuple of relations on it). Directed graphs are structures with a single binary relation (adjacency) on the domain (the vertex set).[26][3] Under this view, homomorphisms of such structures are exactly graph homomorphisms.
In general, the question of finding a homomorphism from one relational structure to another is a constraint satisfaction problem (CSP).
The case of graphs gives a concrete first step that helps to understand more complicated CSPs.
Many algorithmic methods for finding graph homomorphisms, like backtracking, constraint propagation and local search, apply to all CSPs.[3]
For graphs G and H, the question of whether G has a homomorphism to H corresponds to a CSP instance with only one kind of constraint,[3] as follows. The variables are the vertices of G and the domain for each variable is the vertex set of H. An evaluation is a function that assigns to each variable an element of the domain, so a function f from V(G) to V(H). Each edge or arc (u, v) of G then corresponds to the constraint ((u, v), E(H)). This is a constraint expressing that the evaluation should map the arc (u, v) to a pair (f(u), f(v)) that is in the relation E(H), that is, to an arc of H. A solution to the CSP is an evaluation that respects all constraints, so it is exactly a homomorphism from G to H.
Compositions of homomorphisms are homomorphisms.[13] In particular, the relation → on graphs is transitive (and reflexive, trivially), so it is a preorder on graphs.[27] Let the equivalence class of a graph G under homomorphic equivalence be [G].
The equivalence class can also be represented by the unique core in [G].
The relation → is a partial order on those equivalence classes; it defines a poset.[28]
Let G < H denote that there is a homomorphism from G to H, but no homomorphism from H to G.
The relation → is a dense order, meaning that for all (undirected) graphs G, H such that G < H, there is a graph K such that G < K < H (this holds except for the trivial cases G = K0 or K1).[29][30] For example, between any two complete graphs (except K0, K1, K2) there are infinitely many circular complete graphs, corresponding to rational numbers between natural numbers.[31]
The poset of equivalence classes of graphs under homomorphisms is a distributive lattice, with the join of [G] and [H] defined as (the equivalence class of) the disjoint union [G ∪ H], and the meet of [G] and [H] defined as the tensor product [G × H] (the choice of graphs G and H representing the equivalence classes [G] and [H] does not matter).[32] The join-irreducible elements of this lattice are exactly connected graphs. This can be shown using the fact that a homomorphism maps a connected graph into one connected component of the target graph.[33][34] The meet-irreducible elements of this lattice are exactly the multiplicative graphs. These are the graphs K such that a product G × H has a homomorphism to K only when one of G or H also does. Identifying multiplicative graphs lies at the heart of Hedetniemi's conjecture.[35][36]
Graph homomorphisms also form a category, with graphs as objects and homomorphisms as arrows.[37] The initial object is the empty graph, while the terminal object is the graph with one vertex and one loop at that vertex.
The tensor product of graphs is the category-theoretic product and the exponential graph is the exponential object for this category.[36][38] Since these two operations are always defined, the category of graphs is a cartesian closed category.
For the same reason, the lattice of equivalence classes of graphs under homomorphisms is in fact a Heyting algebra.[36][38]
For directed graphs the same definitions apply. In particular → is a partial order on equivalence classes of directed graphs. It is distinct from the order → on equivalence classes of undirected graphs, but contains it as a suborder. This is because every undirected graph can be thought of as a directed graph where every arc (u, v) appears together with its inverse arc (v, u), and this does not change the definition of homomorphism. The order → for directed graphs is again a distributive lattice and a Heyting algebra, with join and meet operations defined as before. However, it is not dense. There is also a category with directed graphs as objects and homomorphisms as arrows, which is again a cartesian closed category.[39][38]
There are many incomparable graphs with respect to the homomorphism preorder, that is, pairs of graphs such that neither admits a homomorphism into the other.[40] One way to construct them is to consider the odd girth of a graph G, the length of its shortest odd-length cycle.
The odd girth is, equivalently, the smallest odd number g for which there exists a homomorphism from the cycle graph on g vertices to G. For this reason, if G → H, then the odd girth of G is greater than or equal to the odd girth of H.[41]
On the other hand, if G → H, then the chromatic number of G is less than or equal to the chromatic number of H.
Therefore, if G has strictly larger odd girth than H and strictly larger chromatic number than H, then G and H are incomparable.[40] For example, the Grötzsch graph is 4-chromatic and triangle-free (it has girth 4 and odd girth 5),[42] so it is incomparable to the triangle graph K3.
Examples of graphs with arbitrarily large values of odd girth and chromatic number are Kneser graphs[43] and generalized Mycielskians.[44] A sequence of such graphs, with simultaneously increasing values of both parameters, gives infinitely many incomparable graphs (an antichain in the homomorphism preorder).[45] Other properties, such as density of the homomorphism preorder, can be proved using such families.[46] Constructions of graphs with large values of chromatic number and girth, not just odd girth, are also possible, but more complicated (see Girth and graph coloring).
Among directed graphs, it is much easier to find incomparable pairs. For example, consider the directed cycle graphs C→n, with vertices 1, 2, …, n and edges from i to i + 1 (for i = 1, 2, …, n − 1) and from n to 1.
There is a homomorphism from C→n to C→k (n, k ≥ 3) if and only if n is a multiple of k.
In particular, directed cycle graphs C→n with n prime are all incomparable.[47]
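The divisibility criterion can be verified by brute force for small cycles (the function name is illustrative):

```python
from itertools import product

def has_hom_cycle(n, k):
    """Brute-force: does a homomorphism from directed cycle C_n to C_k exist?"""
    arcs_G = [(i, (i + 1) % n) for i in range(n)]
    arcs_H = {(i, (i + 1) % k) for i in range(k)}
    return any(
        all((f[u], f[v]) in arcs_H for u, v in arcs_G)
        for f in (dict(enumerate(img)) for img in product(range(k), repeat=n))
    )

# Matches the divisibility criterion on small cases
for n in range(3, 8):
    for k in range(3, n + 1):
        assert has_hom_cycle(n, k) == (n % k == 0)
```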
In the graph homomorphism problem, an instance is a pair of graphs (G, H) and a solution is a homomorphism from G to H. The general decision problem, asking whether there is any solution, is NP-complete.[48] However, limiting allowed instances gives rise to a variety of different problems, some of which are much easier to solve. Methods that apply when restraining the left side G are very different than for the right side H, but in each case a dichotomy (a sharp boundary between easy and hard cases) is known or conjectured.
The homomorphism problem with a fixed graph H on the right side of each instance is also called the H-coloring problem. When H is the complete graph Kk, this is the graph k-coloring problem, which is solvable in polynomial time for k = 0, 1, 2, and NP-complete otherwise.[49] In particular, K2-colorability of a graph G is equivalent to G being bipartite, which can be tested in linear time.
More generally, whenever H is a bipartite graph, H-colorability is equivalent to K2-colorability (or K0/K1-colorability when H is empty/edgeless), hence equally easy to decide.[50] Pavol Hell and Jaroslav Nešetřil proved that, for undirected graphs, no other case is tractable:
This is also known as the dichotomy theorem for (undirected) graph homomorphisms, since it divides H-coloring problems into NP-complete or P problems, with no intermediate cases.
For directed graphs, the situation is more complicated and in fact equivalent to the much more general question of characterizing the complexity of constraint satisfaction problems.[53] It turns out that H-coloring problems for directed graphs are just as general and as diverse as CSPs with any other kinds of constraints.[54][55] Formally, a (finite) constraint language (or template) Γ is a finite domain and a finite set of relations over this domain. CSP(Γ) is the constraint satisfaction problem where instances are only allowed to use constraints in Γ.
Intuitively, this means that every algorithmic technique or complexity result that applies to H-coloring problems for directed graphs H applies just as well to general CSPs. In particular, one can ask whether the Hell–Nešetřil theorem can be extended to directed graphs. By the above theorem, this is equivalent to the Feder–Vardi conjecture (also known as the CSP conjecture or dichotomy conjecture), which states that for every constraint language Γ, CSP(Γ) is NP-complete or in P.[48] This conjecture was proved in 2017 independently by Dmitry Zhuk and Andrei Bulatov, leading to the following corollary:
The homomorphism problem with a single fixed graph G on the left side of input instances can be solved by brute force in time |V(H)|^O(|V(G)|), so polynomial in the size of the input graph H.[56] In other words, the problem is trivially in P for graphs G of bounded size. The interesting question is then what other properties of G, besides size, make polynomial algorithms possible.
The crucial property turns out to be treewidth, a measure of how tree-like the graph is. For a graph G of treewidth at most k and a graph H, the homomorphism problem can be solved in time |V(H)|^O(k) with a standard dynamic programming approach. In fact, it is enough to assume that the core of G has treewidth at most k. This holds even if the core is not known.[57][58]
The exponent in the |V(H)|^O(k)-time algorithm cannot be lowered significantly: no algorithm with running time |V(H)|^o(tw(G)/log tw(G)) exists, assuming the exponential time hypothesis (ETH), even if the inputs are restricted to any class of graphs of unbounded treewidth.[59] The ETH is an unproven assumption similar to P ≠ NP, but stronger.
Under the same assumption, there are also essentially no other properties that can be used to get polynomial time algorithms. This is formalized as follows:
One can ask whether the problem is at least solvable in a time arbitrarily highly dependent on G, but with a fixed polynomial dependency on the size of H.
The answer is again positive if we limit G to a class of graphs with cores of bounded treewidth, and negative for every other class.[58] In the language of parameterized complexity, this formally states that the homomorphism problem for inputs G drawn from a fixed class 𝒢, parameterized by the size (number of edges) of G, exhibits a dichotomy. It is fixed-parameter tractable if graphs in 𝒢 have cores of bounded treewidth, and W[1]-complete otherwise.
The same statements hold more generally for constraint satisfaction problems (in other words, for relational structures). The only assumption needed is that constraints can involve only a bounded number of variables (all relations are of some bounded arity, 2 in the case of graphs). The relevant parameter is then the treewidth of the primal constraint graph.[59]
|
https://en.wikipedia.org/wiki/Graph_homomorphism
|
In the mathematical field of graph theory, an automorphism of a graph is a form of symmetry in which the graph is mapped onto itself while preserving the edge–vertex connectivity.
Formally, an automorphism of a graph G = (V, E) is a permutation σ of the vertex set V, such that the pair of vertices (u, v) forms an edge if and only if the pair (σ(u), σ(v)) also forms an edge. That is, it is a graph isomorphism from G to itself. Automorphisms may be defined in this way both for directed graphs and for undirected graphs.
The composition of two automorphisms is another automorphism, and the set of automorphisms of a given graph, under the composition operation, forms a group, the automorphism group of the graph. In the opposite direction, by Frucht's theorem, all groups can be represented as the automorphism group of a connected graph – indeed, of a cubic graph.[1][2]
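For small graphs the automorphism group can be enumerated by brute force over all vertex permutations (a sketch for illustration only; this is not how the specialized tools below work):

```python
from itertools import permutations

def automorphisms(vertices, edges):
    """Brute force: yield every permutation of V that preserves the edge set."""
    E = {frozenset(e) for e in edges}
    vertices = list(vertices)
    for perm in permutations(vertices):
        sigma = dict(zip(vertices, perm))
        if {frozenset((sigma[u], sigma[v])) for u, v in edges} == E:
            yield sigma

# The path 0-1-2 has exactly two automorphisms: the identity and the
# reflection exchanging the two endpoint vertices.
autos = list(automorphisms(range(3), [(0, 1), (1, 2)]))
```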
Constructing the automorphism group of a graph, in the form of a list of generators, is polynomial-time equivalent to the graph isomorphism problem, and is therefore solvable in quasi-polynomial time, that is, with running time 2^{O((log n)^c)} for some fixed c > 0.[3][4] Consequently, like the graph isomorphism problem, the problem of finding a graph's automorphism group is known to belong to the complexity class NP, but is not known to be in P nor to be NP-complete, and therefore may be NP-intermediate.
The easier problem of testing whether a graph has any symmetries (nontrivial automorphisms), known as the graph automorphism problem, also has no known polynomial time solution.[5] There is a polynomial time algorithm for solving the graph automorphism problem for graphs where vertex degrees are bounded by a constant.[6] The graph automorphism problem is polynomial-time many-one reducible to the graph isomorphism problem, but the converse reduction is unknown.[3][7][8] By contrast, hardness is known when the automorphisms are constrained in a certain fashion; for instance, determining the existence of a fixed-point-free automorphism (an automorphism that fixes no vertex) is NP-complete, and the problem of counting such automorphisms is ♯P-complete.[5][8]
While no worst-case polynomial-time algorithms are known for the general graph automorphism problem, finding the automorphism group (and printing out an irredundant set of generators) for many large graphs arising in applications is rather easy. Several open-source software tools are available for this task, including NAUTY,[9] BLISS[10] and SAUCY.[11][12] SAUCY and BLISS are particularly efficient for sparse graphs; e.g., SAUCY processes some graphs with millions of vertices in mere seconds. However, BLISS and NAUTY can also produce a canonical labeling, whereas SAUCY is currently optimized for solving graph automorphism. An important observation is that for a graph on n vertices, the automorphism group can be specified by no more than n − 1 generators, and the above software packages are guaranteed to satisfy this bound as a side-effect of their algorithms (minimal sets of generators are harder to find and are not particularly useful in practice). It also appears that the total support (i.e., the number of vertices moved) of all generators is limited by a linear function of n, which is important in the runtime analysis of these algorithms. However, this has not been established as a fact, as of March 2012.
Practical applications of graph automorphism include graph drawing and other visualization tasks, and solving structured instances of Boolean satisfiability arising in the context of formal verification and logistics. Molecular symmetry can predict or explain chemical properties.
Several graph drawing researchers have investigated algorithms for drawing graphs in such a way that the automorphisms of the graph become visible as symmetries of the drawing. This may be done either by using a method that is not designed around symmetries, but that automatically generates symmetric drawings when possible,[13] or by explicitly identifying symmetries and using them to guide vertex placement in the drawing.[14] It is not always possible to display all symmetries of the graph simultaneously, so it may be necessary to choose which symmetries to display and which to leave unvisualized.
Several families of graphs are defined by having certain types of automorphisms:
Inclusion relationships between these families are indicated by the following table:
|
https://en.wikipedia.org/wiki/Graph_automorphism_problem
|
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic.[1]
The problem is not known to be solvable in polynomial time nor to be NP-complete, and therefore may be in the computational complexity class NP-intermediate. It is known that the graph isomorphism problem is in the low hierarchy of class NP, which implies that it is not NP-complete unless the polynomial time hierarchy collapses to its second level.[2] At the same time, isomorphism for many special classes of graphs can be solved in polynomial time, and in practice graph isomorphism can often be solved efficiently.[3][4]
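As a baseline that also illustrates why the problem is in NP (a certificate is just a bijection between the vertex sets), an exponential brute-force test can be sketched as follows; the function name and example graphs are illustrative:

```python
from itertools import permutations

def are_isomorphic(V1, E1, V2, E2):
    """Brute-force isomorphism test: try every bijection V1 -> V2."""
    V1, V2 = list(V1), list(V2)
    if len(V1) != len(V2) or len(E1) != len(E2):
        return False  # quick invariant check: vertex and edge counts
    target = {frozenset(e) for e in E2}
    return any(
        {frozenset((f[u], f[v])) for u, v in E1} == target
        for f in (dict(zip(V1, perm)) for perm in permutations(V2))
    )

# Two relabellings of the 4-cycle are isomorphic; the path on 4 vertices
# and the star K_{1,3} have the same edge count but differ in degrees.
square_a = [(0, 1), (1, 2), (2, 3), (3, 0)]
square_b = [(0, 2), (2, 1), (1, 3), (3, 0)]
path = [(0, 1), (1, 2), (2, 3)]
star = [(0, 1), (0, 2), (0, 3)]
```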
This problem is a special case of thesubgraph isomorphism problem,[5]which asks whether a given graphGcontains a subgraph that is isomorphic to another given graphH; this problem is known to be NP-complete. It is also known to be a special case of thenon-abelianhidden subgroup problemover thesymmetric group.[6]
In the area ofimage recognitionit is known as the exact graph matching.[7]
In November 2015, László Babai announced a quasi-polynomial time algorithm for all graphs, that is, one with running time $2^{O((\log n)^{c})}$ for some fixed $c>0$.[8][9][10][11] On January 4, 2017, Babai retracted the quasi-polynomial claim and stated a sub-exponential time bound instead after Harald Helfgott discovered a flaw in the proof. On January 9, 2017, Babai announced a correction (published in full on January 19) and restored the quasi-polynomial claim, with Helfgott confirming the fix.[12][13] Helfgott further claims that one can take c = 3, so the running time is $2^{O((\log n)^{3})}$.[14][15] Babai published a "preliminary report" on related work at the 2019 Symposium on Theory of Computing, describing a quasipolynomial algorithm for graph canonization,[16] but as of 2025 the full version of these algorithms remains unpublished.
Prior to this, the best accepted theoretical algorithm was due to Babai & Luks (1983), and was based on the earlier work by Luks (1982) combined with a subfactorial algorithm of V. N. Zemlyachenko (Zemlyachenko, Korneenko & Tyshkevich 1985). The algorithm has run time $2^{O({\sqrt {n\log n}})}$ for graphs with n vertices and relies on the classification of finite simple groups. Without this classification theorem, a slightly weaker bound $2^{O({\sqrt {n}}\log ^{2}n)}$ was obtained first for strongly regular graphs by László Babai (1980), and then extended to general graphs by Babai & Luks (1983). Improvement of the exponent $\sqrt{n}$ for strongly regular graphs was done by Spielman (1996). For hypergraphs of bounded rank, a subexponential upper bound matching the case of graphs was obtained by Babai & Codenotti (2008).
There are several competing practical algorithms for graph isomorphism, such as those due to McKay (1981), Schmidt & Druffel (1976), Ullman (1976), and Stoichev (2019). While they seem to perform well on random graphs, a major drawback of these algorithms is their exponential time performance in the worst case.[17]
The graph isomorphism problem is computationally equivalent to the problem of computing the automorphism group of a graph,[18][19][20] and is weaker than the permutation group isomorphism problem and the permutation group intersection problem. For the latter two problems, Babai, Kantor & Luks (1983) obtained complexity bounds similar to that for graph isomorphism.
A number of important special cases of the graph isomorphism problem have efficient, polynomial-time solutions:
Since the graph isomorphism problem is neither known to be NP-complete nor known to be tractable, researchers have sought to gain insight into the problem by defining a new class GI, the set of problems with a polynomial-time Turing reduction to the graph isomorphism problem.[34] If in fact the graph isomorphism problem is solvable in polynomial time, GI would equal P. On the other hand, if the problem is NP-complete, GI would equal NP and all problems in NP would be solvable in quasi-polynomial time.
As is common for complexity classes within the polynomial time hierarchy, a problem is called GI-hard if there is a polynomial-time Turing reduction from any problem in GI to that problem, i.e., a polynomial-time solution to a GI-hard problem would yield a polynomial-time solution to the graph isomorphism problem (and so to all problems in GI). A problem X is called complete for GI, or GI-complete, if it is both GI-hard and a polynomial-time solution to the GI problem would yield a polynomial-time solution to X.
The graph isomorphism problem is contained in both NP and co-AM. GI is contained in and low for Parity P, as well as contained in the potentially much smaller class SPP.[35] That it lies in Parity P means that the graph isomorphism problem is no harder than determining whether a polynomial-time nondeterministic Turing machine has an even or odd number of accepting paths. GI is also contained in and low for ZPP^NP.[36] This essentially means that an efficient Las Vegas algorithm with access to an NP oracle can solve graph isomorphism so easily that it gains no power from being given the ability to do so in constant time.
There are a number of classes of mathematical objects for which the problem of isomorphism is a GI-complete problem. A number of them are graphs endowed with additional properties or restrictions:[37]
A class of graphs is called GI-complete if recognition of isomorphism for graphs from this subclass is a GI-complete problem. The following classes are GI-complete:[37]
Many classes of digraphs are also GI-complete.
There are other nontrivial GI-complete problems in addition to isomorphism problems.
Manuel Blum and Sampath Kannan (1995) have shown a probabilistic checker for programs for graph isomorphism. Suppose P is a claimed polynomial-time procedure that checks if two graphs are isomorphic, but it is not trusted. To check if graphs G and H are isomorphic:
This procedure is polynomial-time and gives the correct answer if P is a correct program for graph isomorphism. If P is not a correct program, but answers correctly on G and H, the checker will either give the correct answer, or detect invalid behaviour of P.
If P is not a correct program, and answers incorrectly on G and H, the checker will detect invalid behaviour of P with high probability, or answer wrong with probability $2^{-100}$.
Notably, P is used only as a black box.
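A minimal Python sketch of the half of this checker that audits a "non-isomorphic" answer follows. The other half, which uses self-reducibility to extract and verify an explicit isomorphism when P answers "isomorphic", is omitted, and `brute_P` is a hypothetical stand-in for the untrusted program:

```python
import random
from itertools import permutations

def relabel(edges, p):
    # Apply the vertex permutation p to an edge set.
    return {frozenset((p[u], p[v])) for u, v in map(tuple, edges)}

def check_noniso_claim(P, n, G, H, rounds=100):
    # P has claimed that G and H are NOT isomorphic.  Hand it random
    # relabelings of G or H: to stay consistent with its claim, P must
    # identify which graph was permuted.  If G and H are in fact
    # isomorphic, P can only do so by guessing the coin flips, and is
    # caught with probability 1 - 2**(-rounds).
    for _ in range(rounds):
        pick_g = random.random() < 0.5
        p = list(range(n))
        random.shuffle(p)
        K = relabel(G if pick_g else H, p)
        if P(K, G) != pick_g or P(K, H) == pick_g:
            return "inconsistent"
    return "claim accepted"

def brute_P(A, B, n=3):
    # A correct exponential-time tester standing in for the untrusted P.
    B = {frozenset(e) for e in B}
    return any(relabel(A, p) == B for p in permutations(range(n)))

path = [(0, 1), (1, 2)]              # 3 vertices, 2 edges
triangle = [(0, 1), (1, 2), (0, 2)]  # 3 edges: truly non-isomorphic
print(check_noniso_claim(brute_P, 3, path, triangle))  # claim accepted
```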
Graphs are commonly used to encode structural information in many fields, including computer vision and pattern recognition, and graph matching, i.e., identification of similarities between graphs, is an important tool in these areas. In these areas the graph isomorphism problem is known as exact graph matching.[48]
In cheminformatics and in mathematical chemistry, graph isomorphism testing is used to identify a chemical compound within a chemical database.[49] Also, in organic mathematical chemistry, graph isomorphism testing is useful for the generation of molecular graphs and for computer synthesis.
Chemical database search is an example of graphical data mining, where the graph canonization approach is often used.[50] In particular, a number of identifiers for chemical substances, such as SMILES and InChI, designed to provide a standard and human-readable way to encode molecular information and to facilitate the search for such information in databases and on the web, use a canonization step in their computation, which is essentially the canonization of the graph representing the molecule.
In electronic design automation, graph isomorphism is the basis of the Layout Versus Schematic (LVS) circuit design step, which verifies whether the electric circuits represented by a circuit schematic and an integrated circuit layout are the same.[51]
|
https://en.wikipedia.org/wiki/Graph_isomorphism_problem
|
In graph theory, a branch of mathematics, graph canonization is the problem of finding a canonical form of a given graph G. A canonical form is a labeled graph Canon(G) that is isomorphic to G, such that every graph that is isomorphic to G has the same canonical form as G. Thus, from a solution to the graph canonization problem, one could also solve the problem of graph isomorphism: to test whether two graphs G and H are isomorphic, compute their canonical forms Canon(G) and Canon(H), and test whether these two canonical forms are identical.
The canonical form of a graph is an example of a complete graph invariant: every two isomorphic graphs have the same canonical form, and every two non-isomorphic graphs have different canonical forms.[1][2] Conversely, every complete invariant of graphs may be used to construct a canonical form.[3] The vertex set of an n-vertex graph may be identified with the integers from 1 to n, and using such an identification a canonical form of a graph may also be described as a permutation of its vertices. Canonical forms of a graph are also called canonical labelings,[4] and graph canonization is also sometimes known as graph canonicalization.
Clearly, the graph canonization problem is at least as computationally hard as the graph isomorphism problem. In fact, graph isomorphism is even AC^0-reducible to graph canonization. However, it is still an open question whether the two problems are polynomial time equivalent.[2]
In 2019, László Babai announced a quasi-polynomial time algorithm for graph canonization, that is, one with running time $2^{O((\log n)^{c})}$ for some fixed $c>0$.[5] While the existence of (deterministic) polynomial algorithms for graph isomorphism is still an open problem in computational complexity theory, in 1977 László Babai reported that with probability at least 1 − exp(−O(n)), a simple vertex classification algorithm produces a canonical labeling of a graph chosen uniformly at random from the set of all n-vertex graphs after only two refinement steps. Small modifications and an added depth-first search step produce a canonical labeling of such uniformly-chosen random graphs in linear expected time. This result sheds some light on why many reported graph isomorphism algorithms behave well in practice.[6][7] This was an important breakthrough in probabilistic complexity theory which became widely known in its manuscript form and which was still cited as an "unpublished manuscript" long after it was reported at a symposium.
A commonly known canonical form is the lexicographically smallest graph within the isomorphism class, which is the graph of the class with the lexicographically smallest adjacency matrix considered as a linear string. However, the computation of the lexicographically smallest graph is NP-hard.[8]
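For tiny graphs this canonical form can be computed by brute force over all vertex orderings (a sketch only; as noted, the problem is NP-hard, so this does not scale):

```python
from itertools import permutations

def lex_smallest_form(n, edges):
    # Canonical form: the lexicographically smallest adjacency matrix,
    # read row by row as a 0/1 string, over all n! vertex orderings.
    es = {frozenset(e) for e in edges}
    best = None
    for p in permutations(range(n)):
        s = "".join("1" if frozenset((p[i], p[j])) in es else "0"
                    for i in range(n) for j in range(n))
        if best is None or s < best:
            best = s
    return best

# Two labelings of the same 3-vertex path agree; the triangle differs.
print(lex_smallest_form(3, [(0, 1), (1, 2)]) ==
      lex_smallest_form(3, [(0, 2), (2, 1)]))  # True
```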
For trees, a concise polynomial canonization algorithm requiring O(n) space is presented by Read (1972).[9] Begin by labeling each vertex with the string 01. Iteratively, for each non-leaf vertex x: remove the leading 0 and trailing 1 from x's label; sort x's label together with the labels of all adjacent leaves in lexicographic order; concatenate these sorted labels, add back a leading 0 and trailing 1, make this the new label of x, and delete the adjacent leaves. If two vertices remain, concatenate their labels in lexicographic order.
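The rooted-tree case can be sketched with the closely related AHU-style labeling (an assumption of this sketch: the two trees are rooted at corresponding vertices; for unrooted trees one would first root at a center vertex):

```python
def canonical(tree, root, parent=None):
    # AHU-style canonical string of a rooted tree: a leaf is "01"; an
    # internal vertex wraps the sorted canonical strings of its
    # children in "0"..."1".  Equal strings <=> isomorphic rooted trees.
    kids = [c for c in tree[root] if c != parent]
    if not kids:
        return "01"
    return "0" + "".join(sorted(canonical(tree, c, root)
                                for c in kids)) + "1"

# Two labelings of the star K_{1,3}, each rooted at a leaf, get the
# same canonical string.
t1 = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
t2 = {0: [2], 2: [0, 1, 3], 1: [2], 3: [2]}
print(canonical(t1, 0) == canonical(t2, 0))  # True
```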
Graph canonization is the essence of many graph isomorphism algorithms. One of the leading tools is Nauty.[10]
A common application of graph canonization is in graphical data mining, in particular in chemical database applications.[11]
A number of identifiers for chemical substances, such as SMILES and InChI, use canonicalization steps in their computation, which is essentially the canonicalization of the graph representing the molecule.[12][13][14] These identifiers are designed to provide a standard (and sometimes human-readable) way to encode molecular information and to facilitate the search for such information in databases and on the web.
|
https://en.wikipedia.org/wiki/Graph_canonization
|
In graph theory, a fractional isomorphism of graphs whose adjacency matrices are denoted A and B is a doubly stochastic matrix D such that DA = BD. If the doubly stochastic matrix is a permutation matrix, then it constitutes a graph isomorphism.[1][2] Fractional isomorphism is the coarsest of several different relaxations of graph isomorphism.[3]
Whereas the graph isomorphism problem is not known to be solvable in polynomial time and not known to be NP-complete, the fractional graph isomorphism problem is decidable in polynomial time because it is a special case of the linear programming problem, for which there is an efficient solution. More precisely, the conditions on the matrix D that it be doubly stochastic and that DA = BD can be expressed as linear inequalities and equalities, respectively, so any such matrix D is a feasible solution of a linear program.[2]
Two graphs are also fractionally isomorphic if they have a common coarsest equitable partition. A partition of a graph is a collection of pairwise disjoint sets of vertices whose union is the vertex set of the graph. A partition is equitable if for any pair of vertices u and v in the same block of the partition and any block B of the partition, both u and v have the same number of neighbors in B. An equitable partition P is coarsest if each block in any other equitable partition is a subset of a block in P. Two coarsest equitable partitions P and Q are common if there is a bijection f from the blocks of P to the blocks of Q such that for any blocks B and C in P, the number of neighbors in C of any vertex in B equals the number of neighbors in f(C) of any vertex in f(B).[1][2]
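The coarsest equitable partition can be computed by color refinement (1-dimensional Weisfeiler-Leman). A Python sketch, run on the disjoint union of the two graphs so that their classes are directly comparable:

```python
from collections import Counter

def refine(adj):
    # 1-dimensional Weisfeiler-Leman (color refinement): start with a
    # single class and repeatedly split classes by the multiset of
    # neighbouring classes until the partition is stable.
    color = {v: 0 for v in adj}
    while True:
        sig = {v: (color[v],
                   tuple(sorted(Counter(color[u] for u in adj[v]).items())))
               for v in adj}
        relab = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        nxt = {v: relab[sig[v]] for v in adj}
        if nxt == color:
            return color
        color = nxt

def fractionally_isomorphic(a1, a2):
    # Refine the disjoint union, then compare the colour histograms of
    # the two sides; equal histograms correspond to a common coarsest
    # equitable partition, i.e. fractional isomorphism.
    union = {('a', v): [('a', u) for u in a1[v]] for v in a1}
    union.update({('b', v): [('b', u) for u in a2[v]] for v in a2})
    c = refine(union)
    return (Counter(c[('a', v)] for v in a1)
            == Counter(c[('b', v)] for v in a2))

# A 6-cycle and two disjoint triangles are both 2-regular, hence
# fractionally isomorphic, though plainly not isomorphic.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_c3 = {0: [1, 2], 1: [0, 2], 2: [0, 1],
          3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(fractionally_isomorphic(c6, two_c3))  # True
```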
|
https://en.wikipedia.org/wiki/Fractional_graph_isomorphism
|
In graph theory, a critical graph is an undirected graph all of whose proper subgraphs have smaller chromatic number. In such a graph, every vertex or edge is a critical element, in the sense that its deletion would decrease the number of colors needed in a graph coloring of the given graph. Removing a single edge or vertex (along with its incident edges) can decrease the chromatic number of a graph by at most one, so in a critical graph each such removal decreases it by exactly one.
A $k$-critical graph is a critical graph with chromatic number $k$. A graph $G$ with chromatic number $k$ is $k$-vertex-critical if each of its vertices is a critical element. Critical graphs are the minimal members in terms of chromatic number, which is a very important measure in graph theory.
Some properties of a $k$-critical graph $G$ with $n$ vertices and $m$ edges:
A graph $G$ is vertex-critical if and only if for every vertex $v$, there is an optimal proper coloring in which $v$ is a singleton color class.
As Hajós (1961) showed, every $k$-critical graph may be formed from a complete graph $K_{k}$ by combining the Hajós construction with an operation that identifies two non-adjacent vertices. The graphs formed in this way always require $k$ colors in any proper coloring.[8]
A double-critical graph is a connected graph in which the deletion of any pair of adjacent vertices decreases the chromatic number by two. It is an open problem to determine whether $K_{k}$ is the only double-critical $k$-chromatic graph.[9]
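For tiny graphs, vertex-criticality can be checked by brute force (a sketch; note that a critical graph also requires every edge to be critical, which this helper does not test):

```python
from itertools import product

def chromatic_number(n, edges):
    # Brute-force chromatic number: try k = 1, 2, ... colorings.
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[u] != col[v] for u, v in edges):
                return k

def is_vertex_critical(n, edges):
    # Deleting any single vertex must lower the chromatic number.
    k = chromatic_number(n, edges)
    for v in range(n):
        keep = [u for u in range(n) if u != v]
        idx = {u: i for i, u in enumerate(keep)}
        sub = [(idx[a], idx[b]) for a, b in edges if v not in (a, b)]
        if chromatic_number(n - 1, sub) >= k:
            return False
    return True

# The 5-cycle is 3-chromatic and 3-critical.
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(chromatic_number(5, c5), is_vertex_critical(5, c5))  # 3 True
```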
|
https://en.wikipedia.org/wiki/Critical_graph
|
The graph coloring game is a mathematical game related to graph theory. Coloring game problems arose as game-theoretic versions of well-known graph coloring problems. In a coloring game, two players use a given set of colors to construct a coloring of a graph, following specific rules depending on the game considered. One player tries to successfully complete the coloring of the graph, while the other tries to prevent this.
The vertex coloring game was introduced in 1981 by Steven Brams as a map-coloring game[2][3] and rediscovered ten years later by Bodlaender.[4] Its rules are as follows:
The game chromatic number of a graph $G$, denoted by $\chi_{g}(G)$, is the minimum number of colors needed for Alice to win the vertex coloring game on $G$. Trivially, for every graph $G$, we have $\chi(G)\leq \chi_{g}(G)\leq \Delta(G)+1$, where $\chi(G)$ is the chromatic number of $G$ and $\Delta(G)$ its maximum degree.[5]
In Bodlaender's 1991 paper,[6] the computational complexity was left as "an interesting open problem". Only in 2020 was the game proved to be PSPACE-complete.[7]
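For very small graphs the game chromatic number can be computed by exhaustive minimax over one common formulation of the game (players alternate with Alice first, each legally coloring any uncolored vertex; Alice wins iff the whole graph ends up colored). A Python sketch:

```python
def alice_wins(adj, k, col=None, alices_turn=True):
    # Exhaustive minimax: the mover picks any uncolored vertex and any
    # of k colors not used on its neighbours.  Alice wins iff the whole
    # graph gets colored; if legal moves run out first, Bob wins.
    col = col or {}
    if len(col) == len(adj):
        return True
    moves = [(v, c) for v in adj if v not in col for c in range(k)
             if all(col.get(u) != c for u in adj[v])]
    if not moves:
        return False
    results = (alice_wins(adj, k, {**col, v: c}, not alices_turn)
               for v, c in moves)
    return any(results) if alices_turn else all(results)

# On the path with 4 vertices, Bob wins with 2 colors and Alice wins
# with 3, so its game chromatic number is 3.
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(alice_wins(p4, 2), alice_wins(p4, 3))  # False True
```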
Acyclic coloring. Every graph $G$ with acyclic chromatic number $k$ has $\chi_{g}(G)\leq k(k+1)$.[8]
Marking game. For every graph $G$, $\chi_{g}(G)\leq \mathrm{col}_{g}(G)$, where $\mathrm{col}_{g}(G)$ is the game coloring number of $G$. Almost every known upper bound for the game chromatic number of graphs is obtained from bounds on the game coloring number.
Cycle-restrictions on edges. If every edge of a graph $G$ belongs to at most $c$ cycles, then $\chi_{g}(G)\leq 4+c$.[9]
For a class $\mathcal{C}$ of graphs, we denote by $\chi_{g}(\mathcal{C})$ the smallest integer $k$ such that every graph $G$ of $\mathcal{C}$ has $\chi_{g}(G)\leq k$. In other words, $\chi_{g}(\mathcal{C})$ is the exact upper bound for the game chromatic number of graphs in this class. This value is known for several standard graph classes, and bounded for some others:
Cartesian products. The game chromatic number of the cartesian product $G\square H$ is not bounded by a function of $\chi_{g}(G)$ and $\chi_{g}(H)$. In particular, the game chromatic number of any complete bipartite graph $K_{n,n}$ is equal to 3, but there is no upper bound for $\chi_{g}(K_{n,n}\square K_{m,m})$ for arbitrary $n,m$.[20] On the other hand, the game chromatic number of $G\square H$ is bounded above by a function of $\mathrm{col}_{g}(G)$ and $\mathrm{col}_{g}(H)$. In particular, if $\mathrm{col}_{g}(G)$ and $\mathrm{col}_{g}(H)$ are both at most $t$, then $\chi_{g}(G\square H)\leq t^{5}-t^{3}+t^{2}$.[21]
These questions remain open.
The edge coloring game, introduced by Lam, Shiu and Zu,[23] is similar to the vertex coloring game, except that Alice and Bob construct a proper edge coloring instead of a proper vertex coloring. Its rules are as follows:
Although this game can be considered as a particular case of the vertex coloring game on line graphs, it is mainly considered in the scientific literature as a distinct game. The game chromatic index of a graph $G$, denoted by $\chi'_{g}(G)$, is the minimum number of colors needed for Alice to win this game on $G$.
For every graph $G$, $\chi'(G)\leq \chi'_{g}(G)\leq 2\Delta(G)-1$. There are graphs reaching these bounds, but all the known graphs reaching this upper bound have small maximum degree.[23] There exist graphs with $\chi'_{g}(G)>1.008\Delta(G)$ for arbitrarily large values of $\Delta(G)$.[24]
Conjecture. There is an $\epsilon >0$ such that, for any arbitrary graph $G$, we have $\chi'_{g}(G)\leq (2-\epsilon)\Delta(G)$. This conjecture is true when $\Delta(G)$ is large enough compared to the number of vertices in $G$.[24]
For a class $\mathcal{C}$ of graphs, we denote by $\chi'_{g}(\mathcal{C})$ the smallest integer $k$ such that every graph $G$ of $\mathcal{C}$ has $\chi'_{g}(G)\leq k$. In other words, $\chi'_{g}(\mathcal{C})$ is the exact upper bound for the game chromatic index of graphs in this class. This value is known for several standard graph classes, and bounded for some others:
Upper bound. Is there a constant $c\geq 2$ such that $\chi'_{g}(G)\leq \Delta(G)+c$ for each graph $G$? If it is true, is $c=2$ enough?[23]
Conjecture on large minimum degrees. There are an $\epsilon >0$ and an integer $d_{0}$ such that any graph $G$ with $\delta(G)\geq d_{0}$ satisfies $\chi'_{g}(G)\geq (1+\epsilon)\delta(G)$.[24]
The incidence coloring game is a graph coloring game, introduced by Andres,[28] and similar to the vertex coloring game, except that Alice and Bob construct a proper incidence coloring instead of a proper vertex coloring. Its rules are as follows:
The incidence game chromatic number of a graph $G$, denoted by $i_{g}(G)$, is the minimum number of colors needed for Alice to win this game on $G$.
For every graph $G$ with maximum degree $\Delta$, we have ${\frac {3\Delta -1}{2}}<i_{g}(G)<3\Delta -1$.[28]
For a class $\mathcal{C}$ of graphs, we denote by $i_{g}(\mathcal{C})$ the smallest integer $k$ such that every graph $G$ of $\mathcal{C}$ has $i_{g}(G)\leq k$.
|
https://en.wikipedia.org/wiki/Graph_coloring_game
|
In graph theory, a branch of mathematics, the Hajós construction is an operation on graphs named after György Hajós (1961) that may be used to construct any critical graph or any graph whose chromatic number is at least some given threshold.
Let G and H be two undirected graphs, vw be an edge of G, and xy be an edge of H. Then the Hajós construction forms a new graph that combines the two graphs by identifying vertices v and x into a single vertex, removing the two edges vw and xy, and adding a new edge wy.
For example, let G and H each be a complete graph $K_{4}$ on four vertices; because of the symmetry of these graphs, the choice of which edge to select from each of them is unimportant. In this case, the result of applying the Hajós construction is the Moser spindle, a seven-vertex unit distance graph that requires four colors.
As another example, if G and H are cycle graphs of length p and q respectively, then the result of applying the Hajós construction is itself a cycle graph, of length p + q − 1.
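The construction itself is mechanical and can be sketched in a few lines (tagging H's vertices as ('h', z) is just a device to keep the two vertex sets disjoint):

```python
from itertools import combinations

def hajos_join(g_edges, h_edges, vw, xy):
    # Hajos construction: identify v (from G) with x (from H), delete
    # the edges vw and xy, and add the new edge wy.
    v, w = vw
    x, y = xy
    g = {frozenset(e) for e in g_edges} - {frozenset(vw)}
    h = {frozenset(e) for e in h_edges} - {frozenset(xy)}
    def m(z):
        # Map H's vertices into the combined graph; x merges into v.
        return v if z == x else ('h', z)
    h = {frozenset((m(a), m(b))) for a, b in map(tuple, h)}
    return g | h | {frozenset((w, m(y)))}

# Joining two copies of K4 along an edge of each yields the Moser
# spindle: 7 vertices and 11 edges.
k4 = list(combinations(range(4), 2))
spindle = hajos_join(k4, k4, (0, 1), (0, 1))
print(len({u for e in spindle for u in e}), len(spindle))  # 7 11
```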
A graph G is said to be k-constructible (or Hajós-k-constructible) when it is formed in one of the following three ways:[1]
It is straightforward to verify that every k-constructible graph requires at least k colors in any proper graph coloring. Indeed, this is clear for the complete graph $K_{k}$, and the effect of identifying two nonadjacent vertices is to force them to have the same color as each other in any coloring, something that does not reduce the number of colors. In the Hajós construction itself, the new edge wy forces at least one of the two vertices w and y to have a different color than the combined vertex for v and x, so any proper coloring of the combined graph leads to a proper coloring of one of the two smaller graphs from which it was formed, which again causes it to require k colors.[1]
Hajós proved more strongly that a graph requires at least k colors, in any proper coloring, if and only if it contains a k-constructible graph as a subgraph. Equivalently, every k-critical graph (a graph that requires k colors but for which every proper subgraph requires fewer colors) is k-constructible.[2] Alternatively, every graph that requires k colors may be formed by combining the Hajós construction, the operation of identifying any two nonadjacent vertices, and the operations of adding a vertex or edge to the given graph, starting from the complete graph $K_{k}$.[3]
A similar construction may be used for list coloring in place of coloring.[4]
For k = 3, every k-critical graph (that is, every odd cycle) can be generated as a k-constructible graph such that all of the graphs formed in its construction are also k-critical. For k = 8, this is not true: a graph found by Catlin (1979) as a counterexample to Hajós's conjecture that k-chromatic graphs contain a subdivision of $K_{k}$ also serves as a counterexample to this problem. Subsequently, graphs that are k-critical but cannot be constructed solely through k-critical graphs were found for all k ≥ 4. For k = 4, one such example is the graph obtained from the dodecahedron graph by adding a new edge between each pair of antipodal vertices.[5]
Because merging two non-adjacent vertices reduces the number of vertices in the resulting graph, the number of operations needed to represent a given graph G using the operations defined by Hajós may exceed the number of vertices in G.[6]
More specifically, Mansfield & Welsh (1982) define the Hajós number h(G) of a k-chromatic graph G to be the minimum number of steps needed to construct G from $K_{k}$, where each step forms a new graph by combining two previously formed graphs, merging two nonadjacent vertices of a previously formed graph, or adding a vertex or edge to a previously formed graph. They showed that, for an n-vertex graph G with m edges, $h(G)\leq 2^{n^{2}/3-m+1}-1$. If every graph has a polynomial Hajós number, this would imply that it is possible to prove non-colorability in nondeterministic polynomial time, and therefore imply that NP = co-NP, a conclusion considered unlikely by complexity theorists.[7] However, it is not known how to prove non-polynomial lower bounds on the Hajós number without making some complexity-theoretic assumption, and if such a bound could be proven it would also imply the existence of non-polynomial bounds on certain types of Frege system in mathematical logic.[7]
The minimum size of an expression tree describing a Hajós construction for a given graph G may be significantly larger than the Hajós number of G, because a shortest expression for G may re-use the same graphs multiple times, an economy not permitted in an expression tree. There exist 3-chromatic graphs for which the smallest such expression tree has exponential size.[8]
Koester (1991) used the Hajós construction to generate an infinite set of 4-critical polyhedral graphs, each having more than twice as many edges as vertices. Similarly, Liu & Zhang (2006) used the construction, starting with the Grötzsch graph, to generate many 4-critical triangle-free graphs, which they showed to be difficult to color using traditional backtracking algorithms.
In polyhedral combinatorics, Euler (2003) used the Hajós construction to generate facets of the stable set polytope.
|
https://en.wikipedia.org/wiki/Haj%C3%B3s_construction
|
Mathematics can be used to study Sudoku puzzles to answer questions such as "How many filled Sudoku grids are there?", "What is the minimal number of clues in a valid puzzle?" and "In what ways can Sudoku grids be symmetric?" through the use of combinatorics and group theory.
The analysis of Sudoku is generally divided between analyzing the properties of unsolved puzzles (such as the minimum possible number of given clues) and analyzing the properties of solved puzzles. Initial analysis was largely focused on enumerating solutions, with results first appearing in 2004.[1]
For classical Sudoku, the number of filled grids is 6,670,903,752,021,072,936,960 (6.671×10^21), which reduces to 5,472,730,538 essentially different solutions under the validity-preserving transformations. There are 26 possible types of symmetry, but they can only be found in about 0.005% of all filled grids. An ordinary puzzle with a unique solution must have at least 17 clues. There is a solvable puzzle with at most 21 clues for every solved grid. The largest minimal puzzle found so far has 40 clues in the 81 cells.
Ordinary Sudokus (proper puzzles) have a unique solution. A minimal Sudoku is a Sudoku from which no clue can be removed without it ceasing to be a proper Sudoku. Different minimal Sudokus can have a different number of clues. This section discusses the minimum number of givens for proper puzzles.
Many Sudokus have been found with 17 clues, although finding them is not a trivial task.[2][3] A 2014 paper by Gary McGuire, Bastian Tugemann, and Gilles Civario proved that the minimum number of clues in any proper Sudoku is 17 through an exhaustive computer search based on hitting set enumeration.[4]
The fewest clues in a Sudoku with two-way diagonal symmetry (a 180° rotational symmetry) is believed to be 18, and in at least one case such a Sudoku also exhibits automorphism. A Sudoku with 24 clues and dihedral symmetry (a 90° rotational symmetry, which also includes symmetry on both orthogonal axes, 180° rotational symmetry, and diagonal symmetry) is known to exist, but it is not known whether this number of clues is minimal for this class of Sudoku.[5]
The number of minimal Sudokus (Sudokus in which no clue can be deleted without losing uniqueness of the solution) is not precisely known. However, statistical techniques combined with a generator ('Unbiased Statistics of a CSP – A Controlled-Bias Generator'),[6]show that there are approximately (with 0.065% relative error):
There are many Sudoku variants, partially characterized by size (N) and the shape of their regions. Unless noted, discussion in this article assumes classic Sudoku, i.e. N = 9 (a 9×9 grid and 3×3 regions). A rectangular Sudoku uses rectangular regions of row-column dimension R×C. Other variants include those with irregularly-shaped regions or with additional constraints (hypercube).
Regions are also called blocks or boxes. A band is a part of the grid that encapsulates three rows and three boxes, and a stack is a part of the grid that encapsulates three columns and three boxes. A puzzle is a partially completed grid, and the initial values are givens or clues. A proper puzzle has a unique solution. A minimal puzzle is a proper puzzle from which no clue can be removed without introducing additional solutions.
Solving Sudokus from the viewpoint of a player has been explored in Denis Berthier's book "The Hidden Logic of Sudoku" (2007),[7] which considers strategies such as "hidden xy-chains".
The general problem of solving Sudoku puzzles on $n^{2}\times n^{2}$ grids of $n\times n$ blocks is known to be NP-complete.[8]
A puzzle can be expressed as a graph coloring problem.[9] The aim is to construct a 9-coloring of a particular graph, given a partial 9-coloring. The Sudoku graph has 81 vertices, one vertex for each cell. The vertices are labeled with ordered pairs (x, y), where x and y are integers between 1 and 9. In this case, two distinct vertices labeled by (x, y) and (x′, y′) are joined by an edge if and only if:
The puzzle is then completed by assigning an integer between 1 and 9 to each vertex, in such a way that vertices that are joined by an edge do not have the same integer assigned to them.
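The adjacency rule can be made concrete by building the Sudoku graph explicitly (0-based cell coordinates here, rather than the 1-to-9 labels above):

```python
from itertools import combinations

def sudoku_graph():
    # Vertices are the 81 cells (r, c); two cells are adjacent iff
    # they share a row, a column, or a 3x3 box.
    cells = [(r, c) for r in range(9) for c in range(9)]
    edges = {frozenset((a, b)) for a, b in combinations(cells, 2)
             if a[0] == b[0] or a[1] == b[1]
             or (a[0] // 3, a[1] // 3) == (b[0] // 3, b[1] // 3)}
    return cells, edges

cells, edges = sudoku_graph()
# Each cell has 8 row + 8 column + 4 further box neighbours, so the
# graph is 20-regular with 81 * 20 / 2 = 810 edges.
print(len(cells), len(edges))  # 81 810
```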
A Sudoku solution grid is also a Latin square.[9] There are significantly fewer Sudoku grids than Latin squares because Sudoku imposes additional regional constraints.
As in the case of Latin squares, the (addition or) multiplication tables (Cayley tables) of finite groups can be used to construct Sudokus and related tables of numbers. Namely, one has to take subgroups and quotient groups into account:
Take for exampleZn⊕Zn{\displaystyle \mathbb {Z} _{n}\oplus \mathbb {Z} _{n}}the group of pairs, adding each component separately modulo somen{\displaystyle n}.
By omitting one of the components, we suddenly find ourselves inZn{\displaystyle \mathbb {Z} _{n}}(and this mapping is obviously compatible with the respective additions, i.e. it is agroup homomorphism).
One also says that the latter is aquotient groupof the former, because some once different elements become equal in the new group.
However, it is also a subgroup, because we can simply fill the missing component with0{\displaystyle 0}to get back toZn⊕Zn{\displaystyle \mathbb {Z} _{n}\oplus \mathbb {Z} _{n}}.
Under this view, we write down the example,Grid 1, forn=3{\displaystyle n=3}.
Each Sudoku region looks the same on the second component (namely like the subgroupZ3{\displaystyle \mathbb {Z} _{3}}), because these are added regardless of the first one.
On the other hand, the first components are equal in each block, and if we imagine each block as one cell, these first components show the same pattern (namely the quotient groupZ3{\displaystyle \mathbb {Z} _{3}}). As outlined in the article of Latin squares, this is a Latin square of order9{\displaystyle 9}.
Now, to yield a Sudoku, let us permute the rows (or equivalently the columns) in such a way that each block is redistributed exactly once into each block – for example, order them 1, 4, 7, 2, 5, 8, 3, 6, 9.
This of course preserves the Latin square property. Furthermore, in each block the lines have distinct first components by construction,
and each line in a block has distinct entries via the second component, because the blocks' second components originally formed a Latin square of order 3 (from the subgroup Z_3). Thus we arrive at a Sudoku (rename the pairs to numbers 1...9 if you wish). With the example and the row permutation above, we arrive at Grid 2.
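The construction above is easy to carry out mechanically. The following Python sketch (helper names are illustrative, not from the source) builds the Cayley table of Z_3 ⊕ Z_3 as Grid 1, applies the row permutation 1, 4, 7, 2, 5, 8, 3, 6, 9 to obtain Grid 2, and checks the Sudoku property:

```python
# Build a 9x9 Sudoku from the group Z3 ⊕ Z3, as described above.
# Row r and column c are labelled by the pair (r // 3, r % 3) resp.
# (c // 3, c % 3); cell (r, c) holds their componentwise sum mod 3.

def pair(i):            # map index 0..8 to an element of Z3 ⊕ Z3
    return (i // 3, i % 3)

def add(a, b):          # componentwise addition mod 3
    return ((a[0] + b[0]) % 3, (a[1] + b[1]) % 3)

def to_digit(p):        # rename pairs to digits 1..9
    return 3 * p[0] + p[1] + 1

# Cayley table of Z3 ⊕ Z3 (Grid 1): a Latin square of order 9.
grid1 = [[to_digit(add(pair(r), pair(c))) for c in range(9)] for r in range(9)]

# Row permutation 1,4,7,2,5,8,3,6,9 (0-based: 0,3,6,1,4,7,2,5,8) gives Grid 2.
order = [0, 3, 6, 1, 4, 7, 2, 5, 8]
grid2 = [grid1[r] for r in order]

def is_sudoku(g):
    """Check that all rows, columns and 3x3 boxes contain 1..9 exactly once."""
    units = [set(row) for row in g]
    units += [set(col) for col in zip(*g)]
    units += [{g[3 * br + i][3 * bc + j] for i in range(3) for j in range(3)}
              for br in range(3) for bc in range(3)]
    return all(u == set(range(1, 10)) for u in units)

print(is_sudoku(grid1), is_sudoku(grid2))   # the Cayley table alone fails; Grid 2 is a Sudoku
```

Grid 1 fails only the box constraint (each box of the raw Cayley table repeats first components), which is exactly what the row permutation repairs.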
For this method to work, one generally does not need a product of two equally-sized groups. A so-called short exact sequence of finite groups
of appropriate size already does the job. Try for example the group Z_4 with quotient group and subgroup Z_2.
It seems clear (already from enumeration arguments) that not all Sudokus can be generated this way.
A Sudoku whose regions are not (necessarily) square or rectangular is known as a Jigsaw Sudoku. In particular, an N×N square where N is prime can only be tiled with irregular N-ominoes. For small values of N the number of ways to tile the square (excluding symmetries) has been computed (sequence A172477 in the OEIS).[10] For N ≥ 4 some of these tilings are not compatible with any Latin square; i.e. all Sudoku puzzles on such a tiling have no solution.[10]
The answer to the question 'How many Sudoku grids are there?' depends on the definition of when similar solutions are considered different.
For the enumeration of all possible solutions, two solutions are considered distinct if any of their corresponding (81) cell values differ. Symmetry relations between similar solutions are ignored; e.g., the rotations of a solution are considered distinct. Symmetries play a significant role in the enumeration strategy, but not in the count of all possible solutions.
The first known solution to complete enumeration was posted by QSCGZ (Guenter Stertenbrink) to the rec.puzzles newsgroup in 2003,[11][12] obtaining 6,670,903,752,021,072,936,960 (6.67×10²¹) distinct solutions.
In a 2005 study, Felgenhauer and Jarvis[13][12] analyzed the permutations of the top band used in valid solutions. Once the Band1 symmetries and equivalence classes for the partial grid solutions were identified, the completions of the lower two bands were constructed and counted for each equivalence class. Summing completions over the equivalence classes, weighted by class size, gives the total number of solutions as 6,670,903,752,021,072,936,960, confirming the value obtained by QSCGZ. The value was subsequently confirmed numerous times independently. A second enumeration technique based on band generation was later developed that is significantly less computationally intensive.
This subsequent technique required roughly 1/97 as many computation cycles as the original technique, but was significantly more complicated to set up.
The precise structure of the sudoku symmetry group can be expressed succinctly using the wreath product (≀). The possible row (or column) permutations form a group isomorphic to S3 ≀ S3 of order 3!⁴ = 1,296.[4] The whole rearrangement group is formed by letting the transposition operation (isomorphic to C2) act on two copies of that group, one for the row permutations and one for the column permutations. This is S3 ≀ S3 ≀ C2, a group of order 1,296² × 2 = 3,359,232. Finally, the relabelling operations commute with the rearrangement operations, so the full sudoku (VPT) group is (S3 ≀ S3 ≀ C2) × S9 of order 1,218,998,108,160.
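The group orders quoted above can be recomputed step by step; a quick sanity check in Python:

```python
from math import factorial

# Orders of the sudoku symmetry groups, recomputed from the text.
row_perms = factorial(3) ** 4        # S3 ≀ S3: permute bands, and rows inside bands
rearrangement = row_perms ** 2 * 2   # rows × columns × transposition (C2)
vpt = rearrangement * factorial(9)   # × relabelling group S9

print(row_perms, rearrangement, vpt)  # → 1296 3359232 1218998108160
```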
The set of equivalent grids which can be reached using these operations (excluding relabeling) forms an orbit of grids under the action of the rearrangement group. The number of essentially different solutions is then the number of orbits, which can be computed using Burnside's lemma. The Burnside fixed points are grids that either do not change under the rearrangement operation or only differ by relabeling. To simplify the calculation, the elements of the rearrangement group are sorted into conjugacy classes, whose elements all have the same number of fixed points. It turns out only 27 of the 275 conjugacy classes of the rearrangement group have fixed points;[14] these conjugacy classes represent the different types of symmetry (self-similarity or automorphism) that can be found in completed sudoku grids. Using this technique, Ed Russell and Frazer Jarvis were the first to compute the number of essentially different sudoku solutions as 5,472,730,538.[14][15]
Excluding relabeling, the operations of the sudoku symmetry group all consist of cell rearrangements which are solution-preserving, raising the question of whether all such solution-preserving cell rearrangements are in the symmetry group. In 2008, Aviv Adler and Ilan Adler showed that all solution-preserving cell rearrangements are contained in the group, even for general n² × n² grids.[16]
|
https://en.wikipedia.org/wiki/Mathematics_of_Sudoku
|
In graph theory, a part of mathematics, a k-partite graph is a graph whose vertices are (or can be) partitioned into k different independent sets. Equivalently, it is a graph that can be colored with k colors, so that no two endpoints of an edge have the same color. When k = 2 these are the bipartite graphs, and when k = 3 they are called the tripartite graphs.
Bipartite graphs may be recognized in polynomial time but, for any k > 2, it is NP-complete, given an uncolored graph, to test whether it is k-partite.[1] However, in some applications of graph theory, a k-partite graph may be given as input to a computation with its coloring already determined; this can happen when the sets of vertices in the graph represent different types of objects. For instance, folksonomies have been modeled mathematically by tripartite graphs in which the three sets of vertices in the graph represent users of a system, resources that the users are tagging, and tags that the users have applied to the resources.[2]
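The tractable case k = 2 amounts to attempting a 2-coloring by breadth-first search, which fails exactly when the graph contains an odd cycle. A minimal sketch (the adjacency-list encoding and example graphs are illustrative):

```python
from collections import deque

def two_color(adj):
    """Attempt to 2-color an undirected graph given as an adjacency list
    {vertex: [neighbours]}.  Returns a colour map if the graph is bipartite,
    or None if some edge joins two vertices of the same colour."""
    color = {}
    for start in adj:                    # handle every connected component
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None          # odd cycle found: not bipartite
    return color

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # 4-cycle: bipartite
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}            # odd cycle: not
print(two_color(square) is not None, two_color(triangle) is None)   # → True True
```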
A complete k-partite graph is a k-partite graph in which there is an edge between every pair of vertices from different independent sets. These graphs are described by a notation with a capital letter K subscripted by a sequence of the sizes of each set in the partition. For instance, K2,2,2 is the complete tripartite graph of a regular octahedron, which can be partitioned into three independent sets, each consisting of two opposite vertices. A complete multipartite graph is a graph that is complete k-partite for some k.[3] The Turán graphs are the special case of complete multipartite graphs in which each two independent sets differ in size by at most one vertex.
Complete k-partite graphs, complete multipartite graphs, and their complement graphs, the cluster graphs, are special cases of cographs, and can be recognized in polynomial time even when the partition is not supplied as part of the input.
|
https://en.wikipedia.org/wiki/Multipartite_graph
|
In graph theory, a uniquely colorable graph is a k-chromatic graph that has only one possible (proper) k-coloring up to permutation of the colors. Equivalently, there is only one way to partition its vertices into k independent sets, and there is no way to partition them into k − 1 independent sets.
A complete graph is uniquely colorable, because the only proper coloring is one that assigns each vertex a different color.
Every k-tree is uniquely (k + 1)-colorable. The uniquely 4-colorable planar graphs are known to be exactly the Apollonian networks, that is, the planar 3-trees.[1]
Every connected bipartite graph is uniquely 2-colorable. Its 2-coloring can be obtained by choosing a starting vertex arbitrarily, coloring the vertices at even distance from the starting vertex with one color, and coloring the vertices at odd distance from the starting vertex with the other color.[2]
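The distance-parity rule just described translates directly into code; a small sketch, assuming the graph is given as an adjacency list and is connected and bipartite (the path graph used is an illustrative example):

```python
from collections import deque

def unique_two_coloring(adj, start):
    """Colour a connected bipartite graph by distance parity from `start`:
    even distance -> colour 0, odd distance -> colour 1."""
    dist = {start: 0}
    queue = deque([start])
    while queue:                 # breadth-first search computes distances
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return {v: d % 2 for v, d in dist.items()}

# Path a-b-c-d: its only proper 2-colouring (up to swapping the two colours)
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(unique_two_coloring(path, 'a'))   # → {'a': 0, 'b': 1, 'c': 0, 'd': 1}
```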
A uniquely k-colorable graph G with n vertices has m ≥ (k − 1)n − k(k − 1)/2 edges. Equality holds when G is a (k − 1)-tree.[3]
A minimal imperfect graph is a graph that is not perfect, but in which every proper induced subgraph is perfect. The deletion of any vertex from a minimal imperfect graph leaves a uniquely colorable subgraph.
A uniquely edge-colorable graph is a k-edge-chromatic graph that has only one possible (proper) k-edge-coloring up to permutation of the colors. The only uniquely 2-edge-colorable graphs are the paths and the cycles. For any k, the stars K1,k are uniquely k-edge-colorable. Moreover, Wilson (1976) conjectured and Thomason (1978) proved that, when k ≥ 4, they are also the only members in this family. However, there exist uniquely 3-edge-colorable graphs that do not fit into this classification, such as the graph of the triangular pyramid.
If a cubic graph is uniquely 3-edge-colorable, it must have exactly three Hamiltonian cycles, formed by the edges with two of its three colors, but some cubic graphs with only three Hamiltonian cycles are not uniquely 3-edge-colorable.[4] Every simple planar cubic graph that is uniquely 3-edge-colorable contains a triangle,[1] but W. T. Tutte (1976) observed that the generalized Petersen graph G(9,2) is non-planar, triangle-free, and uniquely 3-edge-colorable. For many years it was the only known such graph, and it had been conjectured to be the only such graph,[5] but now infinitely many triangle-free non-planar cubic uniquely 3-edge-colorable graphs are known.[6]
A uniquely total colorable graph is a k-total-chromatic graph that has only one possible (proper) k-total-coloring up to permutation of the colors.
Empty graphs, paths, and cycles of length divisible by 3 are uniquely total colorable graphs. Mahmoodian & Shokrollahi (1995) conjectured that they are also the only members in this family.
Some properties of a uniquely k-total-colorable graph G with n vertices:
Here χ″(G) is the total chromatic number; Δ(G) is the maximum degree; and δ(G) is the minimum degree.
|
https://en.wikipedia.org/wiki/Uniquely_colorable_graph
|
In cryptanalysis, the piling-up lemma is a principle used in linear cryptanalysis to construct linear approximations to the action of block ciphers. It was introduced by Mitsuru Matsui (1993) as an analytical tool for linear cryptanalysis.[1] The lemma states that the bias (deviation of the expected value from 1/2) of a linear Boolean function (XOR-clause) of independent binary random variables is related to the product of the input biases:[2]

ϵ(X1 ⊕ X2 ⊕ ⋯ ⊕ Xn) = 2^(n−1) ϵ1 ϵ2 ⋯ ϵn

or

I(X1 ⊕ X2 ⊕ ⋯ ⊕ Xn) = I(X1) I(X2) ⋯ I(Xn),

where ϵ ∈ [−1/2, 1/2] is the bias (towards zero[3]) and I ∈ [−1, 1] the imbalance:[4][5]

ϵ(X) = P(X = 0) − 1/2,  I(X) = P(X = 0) − P(X = 1) = 2 ϵ(X).
Conversely, if the lemma does not hold, then the input variables are not independent.[6]
The lemma implies that XOR-ing independent binary variables always reduces the bias (or at least does not increase it); moreover, the output is unbiased if and only if there is at least one unbiased input variable.
Note that for two variables the quantity I(X ⊕ Y) is a correlation measure of X and Y, equal to P(X = Y) − P(X ≠ Y); I(X) can be interpreted as the correlation of X with 0.
The piling-up lemma can be expressed more naturally when the random variables take values in {−1, 1}. If we introduce variables χi = 1 − 2Xi = (−1)^Xi (mapping 0 to 1 and 1 to −1) then, by inspection, the XOR operation transforms to a product:

χ1 χ2 ⋯ χn = (−1)^(X1 ⊕ X2 ⊕ ⋯ ⊕ Xn),

and since the expected values are the imbalances, E(χi) = I(Xi), the lemma now states:

I(X1 ⊕ X2 ⊕ ⋯ ⊕ Xn) = E(χ1 χ2 ⋯ χn) = E(χ1) E(χ2) ⋯ E(χn) = I(X1) I(X2) ⋯ I(Xn),

which is a known property of the expected value for independent variables.
For dependent variables the above formulation gains a (positive or negative) covariance term, thus the lemma does not hold. In fact, since two Bernoulli variables are independent if and only if they are uncorrelated (i.e. have zero covariance; see uncorrelatedness), we have the converse of the piling-up lemma: if it does not hold, the variables are not independent (uncorrelated).
The piling-up lemma allows the cryptanalyst to determine the probability that the equality

X1 ⊕ X2 ⊕ ⋯ ⊕ Xn = 0

holds, where the X's are binary variables (that is, bits: either 0 or 1).
Let P(A) denote "the probability that A is true". If it equals one, A is certain to happen, and if it equals zero, A cannot happen. First of all, we consider the piling-up lemma for two binary variables, where P(X1 = 0) = p1 and P(X2 = 0) = p2.
Now, we consider:

P(X1 ⊕ X2 = 0).

Due to the properties of the xor operation, this is equivalent to

P(X1 = X2).

X1 = X2 = 0 and X1 = X2 = 1 are mutually exclusive events, so we can say

P(X1 = X2) = P(X1 = X2 = 0) + P(X1 = X2 = 1) = P(X1 = 0, X2 = 0) + P(X1 = 1, X2 = 1).

Now, we must make the central assumption of the piling-up lemma: the binary variables we are dealing with are independent; that is, the state of one has no effect on the state of any of the others. Thus we can expand the probability function as follows:

P(X1 ⊕ X2 = 0) = P(X1 = 0) P(X2 = 0) + P(X1 = 1) P(X2 = 1) = p1 p2 + (1 − p1)(1 − p2).

Now we express the probabilities p1 and p2 as 1/2 + ε1 and 1/2 + ε2, where the ε's are the probability biases, the amount the probability deviates from 1/2. This gives

P(X1 ⊕ X2 = 0) = (1/2 + ε1)(1/2 + ε2) + (1/2 − ε1)(1/2 − ε2) = 1/2 + 2 ε1 ε2.

Thus the probability bias ε1,2 for the XOR sum above is 2 ε1 ε2.
This formula can be extended to more X's as follows:

P(X1 ⊕ X2 ⊕ ⋯ ⊕ Xn = 0) = 1/2 + 2^(n−1) ε1 ε2 ⋯ εn.

Note that if any of the ε's is zero, that is, if one of the binary variables is unbiased, the entire probability function will be unbiased, equal to 1/2.
A related, slightly different definition of the bias is ϵi = P(Xi = 1) − P(Xi = 0), in fact minus two times the previous value. The advantage is that now, with

ϵtotal = P(X1 ⊕ X2 ⊕ ⋯ ⊕ Xn = 1) − P(X1 ⊕ X2 ⊕ ⋯ ⊕ Xn = 0),

we have

ϵtotal = (−1)^(n+1) ϵ1 ϵ2 ⋯ ϵn:

adding random variables amounts to multiplying their (2nd definition) biases.
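The lemma is easy to check numerically by exact enumeration over all outcomes of the independent bits; a small Python sketch (the probabilities chosen are arbitrary illustrations, and the bias here is the first definition, towards zero):

```python
from itertools import product

def xor_zero_prob(probs):
    """Exact P(X1 ⊕ ... ⊕ Xn = 0) for independent bits, where
    probs[i] = P(X_i = 0), computed by enumerating all 2^n outcomes."""
    total = 0.0
    for bits in product((0, 1), repeat=len(probs)):
        p = 1.0
        for b, p0 in zip(bits, probs):
            p *= p0 if b == 0 else 1.0 - p0
        if sum(bits) % 2 == 0:      # XOR of the bits is 0
            total += p
    return total

probs = [0.75, 0.6, 0.7]            # biases ε = 0.25, 0.1, 0.2
bias = xor_zero_prob(probs) - 0.5
lemma = 2 ** (len(probs) - 1) * 0.25 * 0.1 * 0.2   # 2^(n-1) ∏ εi
print(abs(bias - lemma) < 1e-12)    # → True: both give a bias of 0.02
```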
In practice, the Xs are approximations to the S-boxes (substitution components) of block ciphers. Typically, X values are inputs to the S-box and Y values are the corresponding outputs. By simply looking at the S-boxes, the cryptanalyst can tell what the probability biases are. The trick is to find combinations of input and output values that have probabilities of zero or one. The closer the approximation is to zero or one, the more helpful the approximation is in linear cryptanalysis.
However, in practice, the binary variables are not independent, whereas independence is assumed in the derivation of the piling-up lemma. This consideration has to be kept in mind when applying the lemma; it is not an automatic cryptanalysis formula.
|
https://en.wikipedia.org/wiki/Piling-up_lemma
|
In complexity theory and computability theory, an oracle machine is an abstract machine used to study decision problems. It can be visualized as a black box, called an oracle, which is able to solve certain problems in a single operation. The problem can be of any complexity class. Even undecidable problems, such as the halting problem, can be used.
An oracle machine can be conceived as a Turing machine connected to an oracle. The oracle, in this context, is an entity capable of solving some problem, which for example may be a decision problem or a function problem. The problem does not have to be computable; the oracle is not assumed to be a Turing machine or computer program. The oracle is simply a "black box" that is able to produce a solution for any instance of a given computational problem.
An oracle machine can perform all of the usual operations of a Turing machine, and can also query the oracle to obtain a solution to any instance of the computational problem for that oracle. For example, if the problem is a decision problem for a set A of natural numbers, the oracle machine supplies the oracle with a natural number, and the oracle responds with "yes" or "no" stating whether that number is an element of A.
There are many equivalent definitions of oracle Turing machines, as discussed below. The one presented here is from van Melkebeek (2003, p. 43).
An oracle machine, like a Turing machine, includes:
In addition to these components, an oracle machine also includes:
From time to time, the oracle machine may enter the ASK state. When this happens, the following actions are performed in a single computational step:
The effect of changing to the ASK state is thus to receive, in a single step, a solution to the problem instance that is written on the oracle tape.
There are many alternative definitions to the one presented above. Many of these are specialized for the case where the oracle solves a decision problem. In this case:
These definitions are equivalent from the point of view of Turing computability: a function is oracle-computable from a given oracle under all of these definitions if it is oracle-computable under any of them. The definitions are not equivalent, however, from the point of view of computational complexity. A definition such as the one by van Melkebeek, using an oracle tape which may have its own alphabet, is required in general.
The complexity class of decision problems solvable by an algorithm in class A with an oracle for a language L is called A^L. For example, P^SAT is the class of problems solvable in polynomial time by a deterministic Turing machine with an oracle for the Boolean satisfiability problem. The notation A^B can be extended to a set of languages B (or a complexity class B), by using the following definition:
When a language L is complete for some class B, then A^L = A^B provided that machines in A can execute reductions used in the completeness definition of class B. In particular, since SAT is NP-complete with respect to polynomial time reductions, P^SAT = P^NP. However, if A = DLOGTIME, then A^SAT may not equal A^NP. (The definition of A^B given above is not completely standard. In some contexts, such as the proof of the time and space hierarchy theorems, it is more useful to assume that the abstract machine defining class A only has access to a single oracle for one language. In this context, A^B is not defined if the complexity class B does not have any complete problems with respect to the reductions available to A.)
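As an illustration of oracle access, the classic search-to-decision self-reduction finds a satisfying assignment using only n adaptive yes/no queries to a SAT decision oracle. The sketch below uses a brute-force routine as a stand-in for the oracle black box; all names and the example formula are illustrative:

```python
from itertools import product

def sat_oracle(clauses, n, fixed):
    """Decision oracle (stand-in black box): does some total assignment of
    variables 1..n extending the partial assignment `fixed` satisfy every
    clause?  Literals are +i / -i for variable i."""
    for bits in product((False, True), repeat=n):
        a = {i + 1: b for i, b in enumerate(bits)}
        if all(a[i] == v for i, v in fixed.items()) and \
           all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def find_assignment(clauses, n):
    """Oracle machine: n adaptive queries to the decision oracle pin down
    one satisfying assignment, or report unsatisfiability."""
    if not sat_oracle(clauses, n, {}):
        return None
    fixed = {}
    for i in range(1, n + 1):
        # try x_i = True; keep True only if the formula stays satisfiable
        fixed[i] = sat_oracle(clauses, n, {**fixed, i: True})
    return fixed

cnf = [[1, 2], [-1, 3], [-2, -3]]   # (x1∨x2) ∧ (¬x1∨x3) ∧ (¬x2∨¬x3)
print(find_assignment(cnf, 3))      # → {1: True, 2: False, 3: True}
```

Since the reduction runs in polynomial time apart from the oracle calls, it shows the search version of SAT lies in FP^SAT.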
It is understood that NP ⊆ P^NP, but the question of whether NP^NP, P^NP, NP, and P are equal remains open. It is believed they are different, and this leads to the definition of the polynomial hierarchy.
Oracle machines are useful for investigating the relationship between complexity classes P and NP, by considering the relationship between P^A and NP^A for an oracle A. In particular, it has been shown there exist languages A and B such that P^A = NP^A and P^B ≠ NP^B.[4] The fact that the P = NP question relativizes both ways is taken as evidence that answering this question is difficult, because a proof technique that relativizes (i.e., is unaffected by the addition of an oracle) will not answer the P = NP question.[5] Most proof techniques relativize.[6]
One may consider the case where an oracle is chosen randomly from among all possible oracles (an infinite set). It has been shown in this case that, with probability 1, P^A ≠ NP^A.[7] When a question is true for almost all oracles, it is said to be true for a random oracle. This choice of terminology is justified by the fact that random oracles support a statement with probability 0 or 1 only. (This follows from Kolmogorov's zero–one law.) This is only weak evidence that P ≠ NP, since a statement may be true for a random oracle but false for ordinary Turing machines; for example, IP^A ≠ PSPACE^A for a random oracle A but IP = PSPACE.[8]
A machine with an oracle for the halting problem can determine whether particular Turing machines will halt on particular inputs, but it cannot determine, in general, whether machines equivalent to itself will halt. This creates a hierarchy of machines, each with a more powerful halting oracle and an even harder halting problem.
This hierarchy of machines can be used to define the arithmetical hierarchy.[9]
In cryptography, oracles are used to make arguments for the security of cryptographic protocols where a hash function is used. A security reduction (proof of security) for the protocol is given in the case where, instead of a hash function, a random oracle answers each query randomly but consistently; the oracle is assumed to be available to all parties including the attacker, as the hash function is. Such a proof shows that unless the attacker solves the hard problem at the heart of the security reduction, they must make use of some interesting property of the hash function to break the protocol; they cannot treat the hash function as a black box (i.e., as a random oracle).
|
https://en.wikipedia.org/wiki/Oracle_machine
|
Computer and network surveillance is the monitoring of computer activity and data stored locally on a computer, or of data being transferred over computer networks such as the Internet. This monitoring is often carried out covertly and may be completed by governments, corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent government agencies. Computer and network surveillance programs are widespread today and almost all Internet traffic can be monitored.[1]
Surveillance allows governments and other agencies to maintain social control, recognize and monitor threats or any suspicious or abnormal activity,[2] and prevent and investigate criminal activities. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.[3]
Many civil rights and privacy groups, such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union, have expressed concern that increasing surveillance of citizens will result in a mass surveillance society, with limited political and/or personal freedoms. Such fear has led to numerous lawsuits such as Hepting v. AT&T.[3][4] The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".[5][6]
The vast majority of computer surveillance involves the monitoring of personal data and traffic on the Internet.[7] For example, in the United States, the Communications Assistance For Law Enforcement Act mandates that all phone calls and broadband internet traffic (emails, web traffic, instant messaging, etc.) be available for unimpeded, real-time monitoring by Federal law enforcement agencies.[8][9][10]
Packet capture (also known as "packet sniffing") is the monitoring of data traffic on a network.[11] Data sent between computers over the Internet or between any networks takes the form of small chunks called packets, which are routed to their destination and assembled back into a complete message. A packet capture appliance intercepts these packets, so that they may be examined and analyzed. Computer technology is needed to perform traffic analysis and sift through intercepted data to look for important or useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install such packet capture technology so that Federal law enforcement and intelligence agencies are able to intercept all of their customers' broadband Internet and voice over Internet protocol (VoIP) traffic. These technologies can be used both by intelligence agencies and for illegal activities.[12]
There is far too much data gathered by these packet sniffers for human investigators to manually search through. Thus, automated Internet surveillance computers sift through the vast amount of intercepted Internet traffic, filtering out and reporting to investigators those bits of information which are "interesting", for example, the use of certain words or phrases, visiting certain types of web sites, or communicating via email or chat with a certain individual or group.[13] Billions of dollars per year are spent by agencies such as the Information Awareness Office, the NSA, and the FBI for the development, purchase, implementation, and operation of systems which intercept and analyze this data, extracting only the information that is useful to law enforcement and intelligence agencies.[14]
Similar systems are now used by the Iranian security services to distinguish between peaceful citizens and terrorists. All of the technology has allegedly been installed by Germany's Siemens AG and Finland's Nokia.[15]
The Internet's rapid development has made it a primary form of communication, and more people are potentially subject to Internet surveillance. There are advantages and disadvantages to network monitoring. For instance, systems described as "Web 2.0"[16] have greatly impacted modern society. Tim O'Reilly, who first explained the concept of "Web 2.0",[16] stated that Web 2.0 provides communication platforms that are "user generated", with self-produced content, motivating more people to communicate with friends online.[17] However, Internet surveillance also has a disadvantage. One researcher from Uppsala University said, "Web 2.0 surveillance is directed at large user groups who help to hegemonically produce and reproduce surveillance by providing user-generated (self-produced) content. We can characterize Web 2.0 surveillance as mass self-surveillance".[18] Surveillance companies monitor people while they are focused on work or entertainment. Yet employers themselves also monitor their employees. They do so in order to protect the company's assets and to control public communications but, most importantly, to make sure that their employees are actively working and being productive.[19] Such monitoring can affect people emotionally, because it can provoke emotions like jealousy. A research group states "...we set out to test the prediction that feelings of jealousy lead to 'creeping' on a partner through Facebook, and that women are particularly likely to engage in partner monitoring in response to jealousy".[20] The study shows that women can become jealous of other people when they are in an online group.
Virtual assistants have become socially integrated into many people's lives. Currently, virtual assistants such as Amazon's Alexa or Apple's Siri cannot call 911 or local services.[21] They are constantly listening for commands and recording parts of conversations that will help improve their algorithms. If law enforcement could be called using a virtual assistant, law enforcement would then be able to access all the information saved on the device.[21] The device is connected to the home's internet; because of this, law enforcement would know the exact location of the individual calling for law enforcement.[21] While virtual assistant devices are popular, many debate their lack of privacy. The devices listen to every conversation the owner is having. Even if the owner is not talking to a virtual assistant, the device is still listening to the conversation in hopes that the owner will need assistance, as well as to gather data.[22]
Corporate surveillance of computer activity is very common. The data collected is most often used for marketing purposes or sold to other corporations, but is also regularly shared with government agencies. It can be used as a form of business intelligence, which enables the corporation to better tailor its products and/or services to be desirable to its customers. The data can also be sold to other corporations so that they can use it for the aforementioned purpose, or it can be used for direct marketing purposes, such as targeted advertisements, where ads are targeted to the user of the search engine by analyzing their search history and emails[23] (if they use free webmail services), which are kept in a database.[24]
This type of surveillance is also used to establish business purposes of monitoring, which may include the following:
The second component of prevention is determining the ownership of technology resources. The ownership of the firm's networks, servers, computers, files, and e-mail should be explicitly stated. There should be a distinction between an employee's personal electronic devices, which should be limited and proscribed, and those owned by the firm.
For instance, Google Search stores identifying information for each web search. An IP address and the search phrase used are stored in a database for up to 18 months.[25] Google also scans the content of emails of users of its Gmail webmail service in order to create targeted advertising based on what people are talking about in their personal email correspondence.[26] Google is, by far, the largest Internet advertising agency—millions of sites place Google's advertising banners and links on their websites in order to earn money from visitors who click on the ads. Each page containing Google advertisements adds, reads, and modifies "cookies" on each visitor's computer.[27] These cookies track the user across all of these sites and gather information about their web surfing habits, keeping track of which sites they visit, and what they do when they are on these sites. This information, along with the information from their email accounts and search engine histories, is stored by Google to use to build a profile of the user to deliver better-targeted advertising.[26]
The United States government often gains access to these databases, either by producing a warrant for it, or by simply asking. The Department of Homeland Security has openly stated that it uses data collected from consumer credit and direct marketing agencies for augmenting the profiles of individuals that it is monitoring.[24]
In addition to monitoring information sent over a computer network, there is also a way to examine data stored on a computer's hard drive, and to monitor the activities of a person using the computer. A surveillance program installed on a computer can search the contents of the hard drive for suspicious data, can monitor computer use, collect passwords, and/or report back activities in real-time to its operator through the Internet connection.[28] A keylogger is an example of this type of program. Normal keylogging programs store their data on the local hard drive, but some are programmed to automatically transmit data over the network to a remote computer or Web server.
There are multiple ways of installing such software. The most common is remote installation, using a backdoor created by a computer virus or trojan. This tactic has the advantage of potentially subjecting multiple computers to surveillance. Viruses often spread to thousands or millions of computers, and leave "backdoors" which are accessible over a network connection, and enable an intruder to remotely install software and execute commands. These viruses and trojans are sometimes developed by government agencies, such as CIPAV and Magic Lantern. More often, however, viruses created by other people or spyware installed by marketing agencies can be used to gain access through the security breaches that they create.[29]
Another method is "cracking" into the computer to gain access over a network. An attacker can then install surveillance software remotely. Servers and computers with permanent broadband connections are most vulnerable to this type of attack.[30] Another source of security cracking is employees giving out information, or attackers using brute-force tactics to guess a user's password.[31]
One can also physically place surveillance software on a computer by gaining entry to the place where the computer is stored and installing it from a compact disc, floppy disk, or thumbdrive. This method shares a disadvantage with hardware devices in that it requires physical access to the computer.[32] One well-known worm that uses this method of spreading itself is Stuxnet.[33]
One common form of surveillance is tocreate maps of social networksbased on data fromsocial networking sitesas well as fromtraffic analysisinformation from phone call records such as those in theNSA call database,[34]and internet traffic data gathered underCALEA. Thesesocial network"maps" are thendata minedto extract useful information such as personal interests, friendships and affiliations, wants, beliefs, thoughts, and activities.[35][36][37]
Many U.S. government agencies such as theDefense Advanced Research Projects Agency (DARPA), theNational Security Agency (NSA), and theDepartment of Homeland Security (DHS)are currently investing heavily in research involving social network analysis.[38][39]The intelligence community believes that the biggest threat to the U.S. comes from decentralized, leaderless, geographically dispersed groups. These types of threats are most easily countered by finding important nodes in the network, and removing them. To do this requires a detailed map of the network.[37][40]
Jason Ethier of Northeastern University, in his study of modern social network analysis, said the following of the Scalable Social Network Analysis Program developed by the Information Awareness Office:
The purpose of the SSNA algorithms program is to extend techniques of social network analysis to assist with distinguishing potential terrorist cells from legitimate groups of people ... In order to be successful SSNA will require information on the social interactions of the majority of people around the globe. Since the Defense Department cannot easily distinguish between peaceful citizens and terrorists, it will be necessary for them to gather data on innocent civilians as well as on potential terrorists.
With only commercially available equipment, it has been shown that it is possible to monitor computers from a distance by detecting the radiation emitted by the CRT monitor. This form of computer surveillance, known as TEMPEST, involves reading electromagnetic emanations from computing devices in order to extract data from them at distances of hundreds of meters.[41][42][43]
IBM researchers have also found that, for most computer keyboards, each key emits a slightly different noise when pressed. The differences are individually identifiable under some conditions, making it possible to log keystrokes without actually requiring logging software to run on the associated computer.[44][45]
In 2015, lawmakers in California passed the California Electronic Communications Privacy Act, prohibiting investigative personnel in the state from forcing businesses to hand over digital communication without a warrant.[46] At the same time, California state senator Jerry Hill introduced a bill requiring law enforcement agencies to disclose more information on their usage of the Stingray phone tracker device and the information obtained from it.[46] When the law took effect in January 2016, it required cities to operate under new guidelines on how and when law enforcement use this device.[46] Some legislators and public officials have objected to the warrantless tracking the technology enables; under the new law, a city that wants to use the device must first hold a public hearing.[46] Some jurisdictions, such as Santa Clara County, have pulled out of using the StingRay.
It has also been shown, by Adi Shamir et al., that even the high-frequency noise emitted by a CPU includes information about the instructions being executed.[47]
In German-speaking countries, spyware used or made by the government is sometimes called govware.[48] Some countries, like Switzerland and Germany, have a legal framework governing the use of such software.[49][50] Known examples include the Swiss MiniPanzer and MegaPanzer and the German R2D2 (trojan).
Policeware is software designed to police citizens by monitoring their discussions and interactions.[51] Within the U.S., Carnivore was the first incarnation of secretly installed e-mail monitoring software, installed in Internet service providers' networks to log computer communication, including transmitted e-mails.[52] Magic Lantern is another such application, this time running on a targeted computer in the style of a trojan and performing keystroke logging. CIPAV, deployed by the FBI, is a multi-purpose spyware/trojan.
The Clipper Chip, formerly known as MYK-78, is a small hardware chip designed in the 1990s that the government could install into phones. It was intended to secure private communication and data while still giving government agencies the ability to decode encrypted voice transmissions. The Clipper Chip was designed during the Clinton administration to, "…protect personal safety and national security against a developing information anarchy that fosters criminals, terrorists and foreign foes."[53] The government portrayed it as the solution to the secret codes or cryptographic keys that the age of technology created. This raised controversy in the public, because the Clipper Chip was seen as the next "Big Brother" tool. This led to the failure of the Clipper proposal, even though there were many attempts to push the agenda.[54]
The "Consumer Broadband and Digital Television Promotion Act" (CBDTPA) was a bill proposed in the United States Congress. CBDTPA was known as the "Security Systems and Standards Certification Act" (SSSCA) while in draft form and was killed in committee in 2002. Had CBDTPA become law, it would have prohibited technology that could be used to read digital content under copyright (such as music, video, and e-books) without digital rights management (DRM) that prevented access to this material without the permission of the copyright holder.[55]
Surveillance and censorship are different. Surveillance can be performed without censorship, but it is harder to engage in censorship without some form of surveillance.[56] Even when surveillance does not lead directly to censorship, the widespread knowledge or belief that a person, their computer, or their use of the Internet is under surveillance can lead to self-censorship.[57]
In March 2013, Reporters Without Borders issued a Special report on Internet surveillance that examines the use of technology that monitors online activity and intercepts electronic communication in order to arrest journalists, citizen-journalists, and dissidents. The report includes a list of "State Enemies of the Internet": Bahrain, China, Iran, Syria, and Vietnam, countries whose governments are involved in active, intrusive surveillance of news providers, resulting in grave violations of freedom of information and human rights. Computer and network surveillance is on the increase in these countries. The report also includes a second list of "Corporate Enemies of the Internet", including Amesys (France), Blue Coat Systems (U.S.), Gamma (UK and Germany), Hacking Team (Italy), and Trovicor (Germany), companies that sell products that are liable to be used by governments to violate human rights and freedom of information. Neither list is exhaustive and they are likely to be expanded in the future.[58]
Protection of sources is no longer just a matter of journalistic ethics. Journalists should equip themselves with a "digital survival kit" if they are exchanging sensitive information online or storing it on a computer hard drive or mobile phone.[58][59] Individuals associated with high-profile rights organizations, dissident groups, protest groups, or reform groups are urged to take extra precautions to protect their online identities.[60]
Countermeasures against surveillance vary based on the type of eavesdropping targeted. Electromagnetic eavesdropping, such as TEMPEST and its derivatives, often requires hardware shielding, such as Faraday cages, to block unintended emissions. To prevent interception of data in transit, encryption is a key defense. When properly implemented with end-to-end encryption, or while using tools such as Tor, and provided the device remains uncompromised and free from direct monitoring via electromagnetic analysis, audio recording, or similar methods, the content of communication is generally considered secure.
For a number of years, numerous government initiatives have sought to weaken encryption or introduce backdoors for law enforcement access.[61] Privacy advocates and the broader technology industry strongly oppose these measures,[62] arguing that any backdoor would inevitably be discovered and exploited by malicious actors. Such vulnerabilities would endanger everyone's private data[63] while failing to hinder criminals, who could switch to alternative platforms or create their own encrypted systems.
Surveillance remains effective even when encryption is correctly employed, by exploiting metadata that is often accessible to packet sniffers unless countermeasures are applied.[64] This includes DNS queries, IP addresses, phone numbers, URLs, timestamps, and communication durations, which can reveal significant information about user activity and interactions or associations with a person of interest.
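As a toy illustration of how much metadata alone can reveal, the following Python sketch profiles a client from a list of flow records; all addresses, hostnames, timestamps, and durations are invented for the example.

```python
from collections import Counter

# Hypothetical flow records captured by a sniffer: even with encrypted
# payloads, source/destination addresses, DNS names, timestamps, and
# durations remain visible.
flows = [
    ("2024-01-01T09:00", "10.0.0.2", "203.0.113.5", "dns:example.org", 2),
    ("2024-01-01T09:01", "10.0.0.2", "203.0.113.9", "tls", 300),
    ("2024-01-01T21:30", "10.0.0.2", "203.0.113.9", "tls", 1200),
]

def profile(flows, client):
    """Return the client's most-contacted peer and total connection time."""
    peers = Counter(dst for _, src, dst, _, _ in flows if src == client)
    total = sum(dur for _, src, _, _, dur in flows if src == client)
    return peers.most_common(1)[0], total

top_peer, seconds = profile(flows, "10.0.0.2")
print(top_peer, seconds)  # ('203.0.113.9', 2) 1502
```

Even this trivial aggregation identifies the client's main contact and the times of day it is active, without decrypting a single byte of payload.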
Yan, W. (2019) Introduction to Intelligent Surveillance: Surveillance Data Capture, Transmission, and Analytics, Springer.
|
https://en.wikipedia.org/wiki/Computer_and_network_surveillance
|
In computer security, a covert channel is a type of attack that creates a capability to transfer information objects between processes that are not supposed to be allowed to communicate by the computer security policy. The term, coined in 1973 by Butler Lampson, is defined as channels "not intended for information transfer at all, such as the service program's effect on system load," to distinguish it from legitimate channels that are subjected to access controls by COMPUSEC.[1]
A covert channel is so called because it is hidden from the access control mechanisms of secure operating systems, since it does not use the legitimate data transfer mechanisms of the computer system (typically, read and write) and therefore cannot be detected or controlled by the security mechanisms that underlie secure operating systems. Covert channels are exceedingly hard to install in real systems, and can often be detected by monitoring system performance. In addition, they suffer from a low signal-to-noise ratio and low data rates (typically, on the order of a few bits per second). They can also be removed manually with a high degree of assurance from secure systems by well-established covert channel analysis strategies.
Covert channels are distinct from, and often confused with, legitimate channel exploitations that attack low-assurance pseudo-secure systems using schemes such as steganography, or even less sophisticated schemes, to disguise prohibited objects inside legitimate information objects. The legitimate-channel misuse by steganography is specifically not a form of covert channel.[citation needed]
Covert channels can tunnel through secure operating systems and require special measures to control. Covert channel analysis is the only proven way to control covert channels.[citation needed] By contrast, secure operating systems can easily prevent misuse of legitimate channels, so distinguishing both is important. Analysis of legitimate channels for hidden objects is often misrepresented as the only successful countermeasure for legitimate channel misuse. Because this amounts to analysis of large amounts of software, it was shown as early as 1972 to be impractical.[2] Without being informed of this, some are misled to believe an analysis will "manage the risk" of these legitimate channels.
The Trusted Computer System Evaluation Criteria (TCSEC) was a set of criteria, now deprecated, that had been established by the National Computer Security Center, an agency managed by the United States' National Security Agency.
Lampson's definition of a covert channel was paraphrased in the TCSEC[3] specifically to refer to ways of transferring information from a higher classification compartment to a lower classification. In a shared processing environment, it is difficult to completely insulate one process from the effects another process can have on the operating environment. A covert channel is created by a sender process that modulates some condition (such as free space, availability of some service, or wait time to execute) that can be detected by a receiving process.
The TCSEC defines two kinds of covert channels: storage channels, which communicate by modifying a stored location, and timing channels, which signal information through operations that affect the response time observed by the receiver.
The TCSEC, also known as the Orange Book,[4] requires analysis of covert storage channels for a system to be classified B2, and analysis of covert timing channels for class B3.
The use of delays between packets transmitted over computer networks was first explored by Girling[5] for covert communication. This work motivated many other works to establish or detect covert communication and to analyze the fundamental limitations of such scenarios.
Ordinary things, such as the existence of a file or the time used for a computation, have been the medium through which a covert channel communicates. Covert channels are not easy to find because these media are so numerous and frequently used.
Two relatively old techniques remain the standards for locating potential covert channels. One works by analyzing the resources of a system, and the other works at the source-code level.
The possibility of covert channels cannot be eliminated,[2] although it can be significantly reduced by careful design and analysis.
The detection of a covert channel can be made more difficult by using characteristics of the communications medium for the legitimate channel that are never controlled or examined by legitimate users.
For example, a file can be opened and closed by a program in a specific, timed pattern that can be detected by another program, and the pattern can be interpreted as a string of bits, forming a covert channel.
Since it is unlikely that legitimate users will check for patterns of file opening and closing operations, this type of covert channel can remain undetected for long periods.
A similar case is port knocking.
In usual communications the timing of requests is irrelevant and unwatched.
Port knocking makes it significant.
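The file open/close pattern described above can be sketched in a few lines of Python. This is a simulated covert storage channel: one bit per agreed-upon time slot, signalled by the presence or absence of a flag file. A real channel would synchronize sender and receiver against wall-clock time slots, which is omitted here, and the flag-file path is arbitrary.

```python
import os
import tempfile

def to_bits(s):
    """Text -> list of bits, 8 bits per character, MSB first."""
    return [(ord(c) >> i) & 1 for c in s for i in range(7, -1, -1)]

def from_bits(bs):
    """Inverse of to_bits."""
    out = []
    for i in range(0, len(bs), 8):
        byte = 0
        for b in bs[i:i + 8]:
            byte = (byte << 1) | b
        out.append(chr(byte))
    return "".join(out)

class FileExistenceChannel:
    """Covert storage channel: one bit per time slot, signalled by the
    presence (1) or absence (0) of an agreed flag file."""
    def __init__(self, path):
        self.path = path
    def send_slot(self, bit):          # sender modulates the resource
        if bit:
            open(self.path, "w").close()
        elif os.path.exists(self.path):
            os.remove(self.path)
    def recv_slot(self):               # receiver samples the resource
        return 1 if os.path.exists(self.path) else 0

def transmit(msg, chan):
    received = []
    for bit in to_bits(msg):
        chan.send_slot(bit)                # one time slot per bit;
        received.append(chan.recv_slot())  # real use needs clock sync
    return from_bits(received)

chan = FileExistenceChannel(os.path.join(tempfile.gettempdir(), "covert_flag"))
message = transmit("Hi", chan)
chan.send_slot(0)                          # remove the flag file
print(message)  # Hi
```

Note that the channel never writes the message into any file; the content of the flag file is irrelevant, and only its existence over time carries information, which is why ordinary access controls do not see the transfer.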
Handel and Sandford presented research in which they study covert channels within the general design of network communication protocols.[6] They employ the OSI model as a basis for their development, in which they characterize system elements that have the potential to be used for data hiding. The adopted approach has an advantage over protocol-specific work because it considers standards rather than particular network environments or architectures.
Their study does not aim to present foolproof steganographic schemes. Rather, they establish basic principles for data hiding in each of the seven OSI layers. Besides suggesting the use of the reserved fields of protocol headers (which are easily detectable) at higher network layers, they also propose the possibility of timing channels involving CSMA/CD manipulation at the physical layer.
Their work identifies merits of covert channels such as detectability, indistinguishability, and bandwidth.
Their covert channel analysis does not consider issues such as the interoperability of these data-hiding techniques with other network nodes, covert channel capacity estimation, or the effect of data hiding on the network in terms of complexity and compatibility. Moreover, the generality of the techniques cannot be fully justified in practice, since the OSI model does not exist per se in functional systems.
Girling first analyzed covert channels in a network environment. His work focuses on local area networks (LANs), in which three obvious covert channels (two storage channels and one timing channel) are identified. This demonstrates real examples of bandwidth possibilities for simple covert channels in LANs. For a specific LAN environment, the author introduced the notion of a wiretapper who monitors the activities of a specific transmitter on the LAN. The covertly communicating parties are the transmitter and the wiretapper. Covert information, according to Girling, can be communicated through obvious observable properties of the transmissions, such as the addresses accessed, the sizes of the data blocks sent, and the times at which they are sent.
The timing scenario transmits covert information through a "when-is-sent" strategy, and is therefore termed a timing covert channel. The time to transmit a block of data is calculated as a function of software processing time, network speed, network block sizes, and protocol overhead. Assuming blocks of various sizes are transmitted on the LAN, the software overhead is computed on average, and a novel time evaluation is used to estimate the bandwidth (capacity) of the covert channels. The work paves the way for future research.
Focusing on the IP and TCP headers of the TCP/IP protocol suite, an article published by Craig Rowland devises encoding and decoding techniques utilizing the IP identification field and the TCP initial sequence number and acknowledgment sequence number fields.[7] These techniques are implemented in a simple utility written for Linux systems running version 2.0 kernels.
Rowland provides a proof of concept as well as practical encoding and decoding techniques for exploiting covert channels using the TCP/IP protocol suite. These techniques are analyzed in light of security mechanisms such as firewalls and network address translation.
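The sequence-number technique can be illustrated with a short Python sketch. Rowland's utility spread each ASCII byte across the 32-bit initial sequence number (ISN) by multiplication; the exact multiplier below is illustrative, and the raw-socket packet crafting a real implementation needs is omitted.

```python
# Hide one ASCII byte per TCP connection in the 32-bit initial
# sequence number by placing it in the high-order octet.
FACTOR = 1 << 24   # illustrative multiplier

def encode_isn(ch):
    """Payload character -> plausible-looking 32-bit sequence number."""
    return (ord(ch) * FACTOR) & 0xFFFFFFFF

def decode_isn(isn):
    """Recover the hidden character from a captured sequence number."""
    return chr(isn // FACTOR)

hidden = "cat"
isns = [encode_isn(c) for c in hidden]
recovered = "".join(decode_isn(n) for n in isns)
print(recovered)  # cat
```

Note that this mapping is deterministic: the same character always produces the same sequence number, which is exactly the detectability weakness raised in the critique of these techniques.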
However, the non-detectability of these covert communication techniques is questionable. For instance, where the sequence number field of the TCP header is manipulated, the encoding scheme is such that every time the same alphabet character is covertly communicated, it is encoded with the same sequence number.
Moreover, the usage of the sequence number field as well as the acknowledgment field cannot be made specific to the ASCII coding of the English alphabet as proposed, since both fields take into account the receipt of data bytes pertaining to specific network packets.
After Rowland, several authors in academia published more work on covert channels in the TCP/IP protocol suite, including a plethora of countermeasures ranging from statistical approaches to machine learning.[8][9][10][11] The research on network covert channels overlaps with the domain of network steganography, which emerged later.
|
https://en.wikipedia.org/wiki/Covert_channel
|
In computer science, an operation, function or expression is said to have a side effect if it has any observable effect other than its primary effect of reading the value of its arguments and returning a value to the invoker of the operation. Example side effects include modifying a non-local variable, a static local variable or a mutable argument passed by reference; raising errors or exceptions; performing I/O; or calling other functions with side effects.[1] In the presence of side effects, a program's behaviour may depend on history; that is, the order of evaluation matters. Understanding and debugging a function with side effects requires knowledge about the context and its possible histories.[2][3] Side effects play an important role in the design and analysis of programming languages. The degree to which side effects are used depends on the programming paradigm. For example, imperative programming is commonly used to produce side effects, to update a system's state. By contrast, declarative programming is commonly used to report on the state of the system, without side effects.
Functional programming aims to minimize or eliminate side effects. The lack of side effects makes it easier to do formal verification of a program. The functional language Haskell eliminates side effects such as I/O and other stateful computations by replacing them with monadic actions.[4][5] Functional languages such as Standard ML, Scheme and Scala do not restrict side effects, but it is customary for programmers to avoid them.[6]
Effect systems extend types to keep track of effects, permitting concise notation for functions with effects, while maintaining information about the extent and nature of side effects. In particular, functions without effects correspond to pure functions.
Assembly language programmers must be aware of hidden side effects: instructions that modify parts of the processor state which are not mentioned in the instruction's mnemonic. A classic example of a hidden side effect is an arithmetic instruction that implicitly modifies condition codes (a hidden side effect) while it explicitly modifies a register (the intended effect). One potential drawback of an instruction set with hidden side effects is that, if many instructions have side effects on a single piece of state, like condition codes, then the logic required to update that state sequentially may become a performance bottleneck. The problem is particularly acute on some processors designed with pipelining (since 1990) or with out-of-order execution. Such a processor may require additional control circuitry to detect hidden side effects and stall the pipeline if the next instruction depends on the results of those effects.
Absence of side effects is a necessary, but not sufficient, condition for referential transparency. Referential transparency means that an expression (such as a function call) can be replaced with its value. This requires that the expression is pure: that is to say, the expression must be deterministic (always give the same value for the same input) and side-effect free.
Side effects caused by the time taken for an operation to execute are usually ignored when discussing side effects and referential transparency. There are some cases, such as with hardware timing or testing, where operations are inserted specifically for their temporal side effects, e.g. sleep(5000) or for (int i = 0; i < 10000; ++i) {}. These instructions do not change state other than taking an amount of time to complete.
A subroutine with side effects is idempotent if multiple applications of the subroutine have the same effect on the system state as a single application; in other words, if the function from the system state space to itself associated with the subroutine is idempotent in the mathematical sense. For instance, consider the following Python program:
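A minimal sketch consistent with the description, using a hypothetical setx that assigns a module-level variable:

```python
x = 0

def setx(n):
    """Has a side effect: mutates the global variable x."""
    global x
    x = n

setx(3)
assert x == 3
setx(3)          # second application: the state is unchanged
assert x == 3
```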
setx is idempotent because the second application of setx to 3 has the same effect on the system state as the first application: x was already set to 3 after the first application, and it is still set to 3 after the second application.
A pure function is idempotent if it is idempotent in the mathematical sense. For instance, consider the following Python program:
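A sketch matching the description, using Python's built-in abs:

```python
# abs is idempotent in the mathematical sense: abs(abs(v)) == abs(v).
v = -3
once = abs(v)
twice = abs(abs(v))
print(once, twice)  # 3 3
```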
abs is idempotent because the second application of abs to the return value of the first application to -3 returns the same value as the first application to -3.
One common demonstration of side effect behavior is that of the assignment operator in C. The assignment a = b is an expression that evaluates to the same value as the expression b, with the side effect of storing the R-value of b into the L-value of a. This allows multiple assignment, as in a = b = c. Because the operator right-associates, this is equivalent to a = (b = c). This presents a potential hangup for novice programmers, who may confuse the assignment a = b with the equality comparison a == b.
|
https://en.wikipedia.org/wiki/Side_effect_(computer_science)
|
Wire data or wire image is the information that passes over computer and telecommunication networks defining communications between client and server devices. It is the result of decoding wire and transport protocols containing the bi-directional data payload. More precisely, wire data is the information that is communicated in each layer of the OSI model (Layer 1 not being included, because those protocols are used to establish connections and do not communicate information).
Wire data is the observed behavior of, and communication between, networked elements. It is an important source of information used by IT operations staff to troubleshoot performance issues, create activity baselines, detect anomalous activity, investigate security incidents, and discover IT assets and their dependencies.
According to a March 2016 research note from American IT research and advisory firm Gartner, wire data will play a more important role than machine data for analytics in the future: "While log data will certainly have a role in future monitoring and analytics, it is wire data—radically rethought and used in new ways—that will prove to be the most critical source of data for availability and performance management over the next five years."[1]
Real-time wire data streams are also important sources of data for business and operational intelligence teams. In these types of scenarios, wire data is used to measure order transactions for real-time reporting on transaction volume, success, and failure rates; to track patient admission rates at hospitals; and to report on the weights and measures of airplanes prior to take-off.
Wire data is distinct from machine-generated data, which is system self-reported information, typically in the form of logs sourced from elements like network routers, servers, and other equipment. Unlike those forms of machine-generated data, which are dependent on the logging configurations of those devices, wire data is defined by wire and transport protocols. There is a small amount of overlap between wire data and machine-generated data, but also significant differences. For example, web server logs typically record HTTP status code 200 responses, indicating that a web page was served to a client. However, web servers do not log the transaction payload, and so would not be able to show which HTTP status code 200 responses were for pages with a "service unavailable" message. That information is contained in the wire data or transaction payload and is not necessarily logged by the server.
Traditional methods of capturing and analyzing wire data include offline network packet analyzers. Newer approaches receive a copy of network traffic from a port mirror (SPAN) or network tap and reassemble those packets into full per-client sessions and transaction streams, analyzing the entire transaction payload in real time and generating metadata on those transactions without storing the actual packets.[2]
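A toy Python sketch of the reassembly idea, and of why the payload matters (per the web-server example above); the fragments, addresses, and page contents are all invented for the illustration.

```python
def reassemble(packets):
    """Group captured fragments per client and order them by sequence."""
    streams = {}
    for client, seq, data in packets:
        streams.setdefault(client, []).append((seq, data))
    return {c: b"".join(d for _, d in sorted(frags))
            for c, frags in streams.items()}

packets = [                      # out-of-order fragments off the wire
    ("10.0.0.5", 1, b"Service unavailable</h1>"),
    ("10.0.0.5", 0, b"HTTP/1.1 200 OK\r\n\r\n<h1>"),
    ("10.0.0.7", 0, b"HTTP/1.1 200 OK\r\n\r\n<h1>Welcome</h1>"),
]

sessions = reassemble(packets)
# A server log would record status 200 for both clients; only the wire
# data reveals that one 200 response actually carried an error page.
soft_failures = [c for c, body in sessions.items()
                 if b"200 OK" in body and b"Service unavailable" in body]
print(soft_failures)  # ['10.0.0.5']
```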
|
https://en.wikipedia.org/wiki/Wire_data
|
The leftover hash lemma is a lemma in cryptography first stated by Russell Impagliazzo, Leonid Levin, and Michael Luby.[1]
Given a secret key X that has n uniform random bits, of which an adversary was able to learn the values of some t < n bits of that key, the leftover hash lemma states that it is possible to produce a key of about n−t bits, over which the adversary has almost no knowledge, without knowing which t bits are known to the adversary. Since the adversary knows all but n−t bits, this is almost optimal.
More precisely, the leftover hash lemma states that it is possible to extract a length asymptotic to H∞(X) (the min-entropy of X) bits from a random variable X that are almost uniformly distributed. In other words, an adversary who has some partial knowledge about X will have almost no knowledge about the extracted value. This is also known as privacy amplification (see the privacy amplification section in the article Quantum key distribution).
Randomness extractors achieve the same result, but (normally) use less randomness.
Let X be a random variable over X{\displaystyle {\mathcal {X}}} and let m>0{\displaystyle m>0}. Let h:S×X→{0,1}m{\textstyle h\colon {\mathcal {S}}\times {\mathcal {X}}\rightarrow \{0,\,1\}^{m}} be a 2-universal hash function. If
m≤H∞(X)−2log(1/ε){\displaystyle m\leq H_{\infty }(X)-2\log(1/\varepsilon )}
then for S uniform over S{\displaystyle {\mathcal {S}}} and independent of X, we have:
δ((h(S,X),S),(U,S))≤ε{\displaystyle \delta \left((h(S,X),S),(U,S)\right)\leq \varepsilon }
where U is uniform over {0,1}m{\displaystyle \{0,1\}^{m}} and independent of S.[2]
H∞(X)=−logmaxxPr[X=x]{\textstyle H_{\infty }(X)=-\log \max _{x}\Pr[X=x]} is the min-entropy of X, which measures the amount of randomness X has. The min-entropy is always less than or equal to the Shannon entropy. Note that maxxPr[X=x]{\textstyle \max _{x}\Pr[X=x]} is the probability of correctly guessing X. (The best guess is to guess the most probable value.) Therefore, the min-entropy measures how difficult it is to guess X.
0≤δ(X,Y)=12∑v|Pr[X=v]−Pr[Y=v]|≤1{\textstyle 0\leq \delta (X,Y)={\frac {1}{2}}\sum _{v}\left|\Pr[X=v]-\Pr[Y=v]\right|\leq 1} is a statistical distance between X and Y.
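A minimal Python sketch of extraction with a 2-universal family, using the standard construction h_{a,b}(x) = ((a·x + b) mod p) mod 2^m. The prime, the bit lengths, and the security margin below are illustrative assumptions, and no statistical-distance computation is attempted.

```python
import secrets

P = (1 << 61) - 1          # a Mersenne prime, used as the field modulus

def sample_seed():
    """Public random seed S selecting a member of the hash family."""
    return (1 + secrets.randbelow(P - 1), secrets.randbelow(P))

def extract(seed, x, m):
    """2-universal hash h_{a,b}(x) = ((a*x + b) mod P) mod 2^m."""
    a, b = seed
    return ((a * x + b) % P) % (1 << m)

# Suppose x has 60 bits, of which an adversary learned t = 20, leaving
# min-entropy of roughly 40 bits; by the lemma, extracting m = 32 bits
# keeps the statistical distance at most 2^-((40 - 32) / 2) = 2^-4.
x = secrets.randbits(60)
seed = sample_seed()
key = extract(seed, x, 32)
print(key < 2 ** 32)  # True
```

The seed is public: the guarantee is over the adversary's remaining uncertainty about x, not over secrecy of the hash function itself.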
|
https://en.wikipedia.org/wiki/Leftover_hash_lemma
|
In cryptography, a semantically secure cryptosystem is one where only negligible information about the plaintext can be feasibly extracted from the ciphertext. Specifically, any probabilistic, polynomial-time algorithm (PPTA) that is given the ciphertext of a certain message m (taken from any distribution of messages) and the message's length cannot determine any partial information on the message with probability non-negligibly higher than all other PPTAs that only have access to the message length (and not the ciphertext).[1] This concept is the computational-complexity analogue of Shannon's concept of perfect secrecy. Perfect secrecy means that the ciphertext reveals no information at all about the plaintext, whereas semantic security implies that any information revealed cannot be feasibly extracted.[2][3]: 378–381
The notion of semantic security was first put forward by Goldwasser and Micali in 1982.[1][4] However, the definition they initially proposed offered no straightforward means to prove the security of practical cryptosystems. Goldwasser and Micali subsequently demonstrated that semantic security is equivalent to another definition of security called ciphertext indistinguishability under chosen-plaintext attack.[5] This latter definition is more common than the original definition of semantic security because it better facilitates proving the security of practical cryptosystems.
In the case of symmetric-key algorithm cryptosystems, an adversary must not be able to compute any information about a plaintext from its ciphertext. This may be posited as: an adversary, given two plaintexts of equal length and their two respective ciphertexts, cannot determine which ciphertext belongs to which plaintext.
For an asymmetric-key encryption algorithm cryptosystem to be semantically secure, it must be infeasible for a computationally bounded adversary to derive significant information about a message (plaintext) when given only its ciphertext and the corresponding public encryption key. Semantic security considers only the case of a "passive" attacker, i.e., one who generates and observes ciphertexts using the public key and plaintexts of their choice. Unlike other security definitions, semantic security does not consider the case of chosen-ciphertext attack (CCA), where an attacker is able to request the decryption of chosen ciphertexts, and many semantically secure encryption schemes are demonstrably insecure against chosen-ciphertext attack. Consequently, semantic security is now considered an insufficient condition for securing a general-purpose encryption scheme.
Indistinguishability under Chosen Plaintext Attack (IND-CPA) is commonly defined by the following experiment:[6] the challenger generates a key pair and gives the public encryption key to the adversary; the adversary chooses two messages m0 and m1 of equal length; the challenger, acting as an encryption oracle, selects a bit b uniformly at random and returns the challenge ciphertext c, the encryption of mb; finally, the adversary outputs a guess for b.
The underlying cryptosystem is IND-CPA (and thus semantically secure under chosen-plaintext attack) if the adversary cannot determine which of the two messages was chosen by the oracle with probability significantly greater than 1/2 (the success rate of random guessing). Variants of this definition define indistinguishability under chosen-ciphertext attack and adaptive chosen-ciphertext attack (IND-CCA, IND-CCA2).
Because the adversary possesses the public encryption key in the above game, a semantically secure encryption scheme must by definition be probabilistic, possessing a component of randomness; if this were not the case, the adversary could simply compute the deterministic encryption of m0 and m1 and compare these encryptions with the returned ciphertext c to successfully guess the oracle's choice.
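That failure of deterministic encryption can be demonstrated concretely. The toy scheme below (XOR with a fixed keystream, hypothetical and insecure by construction) stands in for any deterministic cipher; calling enc models the adversary's encryption access via the public key.

```python
import secrets

KEY = secrets.token_bytes(16)          # fixed keystream (toy cipher)

def enc(m: bytes) -> bytes:
    """Deterministic toy 'encryption': XOR with a fixed keystream."""
    return bytes(a ^ b for a, b in zip(m, KEY))

def ind_cpa_round(adversary) -> bool:
    m0, m1 = b"attack at dawn!!", b"retreat at dusk!"
    b = secrets.randbelow(2)           # challenger's secret bit
    c = enc([m0, m1][b])               # challenge ciphertext
    return adversary(m0, m1, c) == b

def adversary(m0, m1, c) -> int:
    # With encryption access and a deterministic scheme, the adversary
    # simply re-encrypts m0 and compares against the challenge.
    return 0 if enc(m0) == c else 1

wins = sum(ind_cpa_round(adversary) for _ in range(100))
print(wins)  # 100: the adversary always identifies the chosen message
```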
Randomness plays a key role in cryptography by preventing attackers from detecting patterns in ciphertexts. In a semantically secure cryptosystem, encrypting the same plaintext multiple times should produce different ciphertexts.[7]
If encryption relies on predictable or weak randomness, it becomes easier to break.[8] Poor randomness can lead to patterns that attackers can analyze, potentially allowing them to recover secret keys or decrypt messages. Because of this, cryptographic systems must use strong and unpredictable random values to maintain security.[9]
Strong randomness is critical in areas such as key generation and the choice of nonces and padding values.
Several cryptographic failures have resulted from weak randomness, allowing attackers to break encryption.
An error in Debian's OpenSSL removed entropy collection, producing a small set of predictable keys. Attackers could guess SSH and TLS keys, allowing unauthorized access.[12]
Sony's PlayStation 3 misused the Elliptic Curve Digital Signature Algorithm (ECDSA) by reusing the same nonce (a random number used once in cryptographic signing) in multiple signatures. Since ECDSA relies on unique nonces for security, attackers recovered Sony's private signing key, allowing them to sign unauthorized software.[13]
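The arithmetic behind that recovery can be sketched without any elliptic-curve code, using only ECDSA's scalar signing equation s = k⁻¹(z + r·d) mod n. The curve itself is not modelled: r is faked as an opaque function of k, and the key, nonce, and message hashes are illustrative stand-ins.

```python
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 group order
d = 0x1234567890ABCDEF          # "secret" signing key (stand-in)
k = 0xDEADBEEFCAFEBABE          # the nonce, fatally reused below
r = pow(7, k, n)                # stand-in for the x-coordinate of k*G

def sign(z):
    """ECDSA scalar equation: s = k^-1 * (z + r*d) mod n."""
    s = pow(k, -1, n) * (z + r * d) % n
    return (r, s)

z1, z2 = 0x1111, 0x2222          # hashes of two different messages
(_, s1), (_, s2) = sign(z1), sign(z2)

# Nonce recovery: k = (z1 - z2) / (s1 - s2) mod n
k_rec = (z1 - z2) * pow((s1 - s2) % n, -1, n) % n
# Key recovery:   d = (s1*k - z1) / r mod n
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
print(d_rec == d, k_rec == k)  # True True
```

Two signatures sharing a nonce (same r) over different messages are all an attacker needs; with unique nonces, the subtraction that cancels d is impossible.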
A flaw in Infineon's RSA key generation created weak keys that attackers could efficiently factor. This vulnerability affected smart cards and Trusted Platform Modules (TPMs), requiring widespread key replacements.[14]
To prevent such failures, cryptographic systems must generate unpredictable and high-quality random values.[15]
CSPRNGs provide secure random numbers resistant to attacks. Common examples include:
Secure randomness requires high entropy sources, such as:
Some encryption schemes require added randomness to maintain security:
To verify randomness quality, cryptographic implementations should undergo:
Semantically secure encryption algorithms include Goldwasser–Micali, ElGamal and Paillier. These schemes are considered provably secure, as their semantic security can be reduced to solving some hard mathematical problem (e.g., Decisional Diffie–Hellman or the Quadratic Residuosity Problem). Other, semantically insecure algorithms such as RSA can be made semantically secure (under stronger assumptions) through the use of random encryption padding schemes such as Optimal Asymmetric Encryption Padding (OAEP).
|
https://en.wikipedia.org/wiki/Semantic_security
|
The Clipper chip was a chipset that was developed and promoted by the United States National Security Agency (NSA) as an encryption device that secured "voice and data messages" with a built-in backdoor that was intended to "allow Federal, State, and local law enforcement officials the ability to decode intercepted voice and data transmissions." It was intended to be adopted by telecommunications companies for voice transmission. Introduced in 1993, it was entirely defunct by 1996.
The Clipper chip used a data encryption algorithm called Skipjack[1] to transmit information and the Diffie–Hellman key exchange algorithm to distribute the public keys between peers. Skipjack was invented by the National Security Agency of the U.S. Government; this algorithm was initially classified SECRET, which prevented it from being subjected to peer review from the encryption research community. The government did state that it used an 80-bit key, that the algorithm was symmetric, and that it was similar to the DES algorithm. The Skipjack algorithm was declassified and published by the NSA on June 24, 1998. The initial cost of the chips was said to be $16 (unprogrammed) or $26 (programmed), with its logic designed by Mykotronx, and fabricated by VLSI Technology, Inc.
At the heart of the concept was key escrow. In the factory, any new telephone or other device with a Clipper chip would be given a cryptographic key that would then be provided to the government in escrow. If government agencies "established their authority" to listen to a communication, then the key would be given to those government agencies, who could then decrypt all data transmitted by that particular telephone. The newly formed Electronic Frontier Foundation preferred the term "key surrender" to emphasize what they alleged was really occurring.[2]
The Clinton Administration argued that the Clipper chip was essential for law enforcement to keep up with the constantly progressing technology in the United States.[3]While many believed that the device would act as an additional way for terrorists to receive information, the Clinton Administration said it would actually increase national security.[4]They argued that because "terrorists would have to use it to communicate with outsiders — banks, suppliers, and contacts — the Government could listen in on those calls."[4]
There were several advocates of the Clipper chip who argued that the technology was safe to implement and effective for its intended purpose of providing law enforcement with the ability to intercept communications when necessary and with a warrant to do so. Howard S. Dakoff, writing in the John Marshall Law Review, stated that the technology was secure and the legal rationale for its implementation was sound.[5] Stewart Baker wrote an opinion piece in Wired magazine debunking a series of what he purported to be myths surrounding the technology.[6]
Organizations such as the Electronic Privacy Information Center and the Electronic Frontier Foundation challenged the Clipper chip proposal, saying that it would have the effect not only of subjecting citizens to increased and possibly illegal government surveillance, but that the strength of the Clipper chip's encryption could not be evaluated by the public, as its design was classified secret, and that therefore individuals and businesses might be hobbled with an insecure communications system. Further, it was pointed out that while American companies could be forced to use the Clipper chip in their encryption products, foreign companies could not, and presumably phones with strong data encryption would be manufactured abroad and spread throughout the world and into the United States, negating the point of the whole exercise and materially damaging U.S. manufacturers en route. Senators John Ashcroft and John Kerry were opponents of the Clipper chip proposal, arguing in favor of the individual's right to encrypt messages and export encryption software.[7]
The release and development of several strong cryptographic software packages such as Nautilus, PGP[8] and PGPfone was in response to the government push for the Clipper chip. The thinking was that if strong cryptography was freely available on the Internet as an alternative, the government would be unable to stop its use.
In 1994, Matt Blaze published the paper Protocol Failure in the Escrowed Encryption Standard.[9] It pointed out that the Clipper's escrow system had a serious vulnerability: the chip transmitted a 128-bit "Law Enforcement Access Field" (LEAF) that contained the information necessary to recover the encryption key. To prevent the software that transmitted the message from tampering with the LEAF, a 16-bit hash was included. The Clipper chip would not decode messages with an invalid hash; however, the 16-bit hash was too short to provide meaningful security. A brute-force attack would quickly produce another LEAF value that would give the same hash but not yield the correct keys after the escrow attempt. This would allow the Clipper chip to be used as an encryption device, while disabling the key escrow capability.[9]: 63 In 1995, Yair Frankel and Moti Yung published another attack, inherent to the design, which shows that the key escrow device tracking and authenticating capability (namely, the LEAF) of one device can be attached to messages coming from another device and will nevertheless be received, thus bypassing the escrow in real time.[10] In 1997, a group of leading cryptographers published a paper, "The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption", analyzing the architectural vulnerabilities of implementing key escrow systems in general, including but not limited to the Clipper chip's Skipjack protocol.[11]
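The weakness of a 16-bit check can be sketched numerically: with only 2^16 = 65,536 possible hash values, trying candidate LEAF values finds a forgery with a matching hash after about 65,536 attempts on average. The hash below is a stand-in (truncated SHA-256), not the actual LEAF checksum, and the LEAF contents are arbitrary.

```python
import hashlib

def hash16(leaf: bytes) -> int:
    # Stand-in 16-bit checksum: first two bytes of SHA-256.
    return int.from_bytes(hashlib.sha256(leaf).digest()[:2], "big")

genuine_leaf = b"\x00" * 16          # a 128-bit LEAF (contents irrelevant here)
target = hash16(genuine_leaf)

# Brute force: walk through candidate LEAF values until one collides.
counter = 0
forged = None
while forged is None:
    counter += 1
    candidate = counter.to_bytes(16, "big")
    if candidate != genuine_leaf and hash16(candidate) == target:
        forged = candidate
print(counter)  # on the order of 2^16 trials
```

The forged LEAF passes the chip's hash check but contains no usable escrow information, which is exactly the bypass Blaze described.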
The Clipper chip was not embraced by consumers or manufacturers, and the chip itself was no longer relevant by 1996; the only significant purchaser of phones with the chip was the United States Department of Justice.[12] The U.S. government continued to press for key escrow by offering incentives to manufacturers, allowing more relaxed export controls if key escrow were part of cryptographic software that was exported. These attempts were largely made moot by the widespread use of strong cryptographic technologies, such as PGP, which were not under the control of the U.S. government.
As of 2013[update], strongly encrypted voice channels are still not the predominant mode for current cell phone communications.[13][needs update] Secure cell phone devices and smartphone apps exist, but may require specialized hardware, and typically require that both ends of the connection employ the same encryption mechanism. Such apps usually communicate over secure Internet pathways (e.g. ZRTP) instead of through phone voice data networks.
Following the Snowden disclosures from 2013, Apple and Google stated that they would lock down all data stored on their smartphones with encryption, in such a way that Apple and Google themselves could not break the encryption even if ordered to do so with a warrant.[14] This prompted a strong reaction from the authorities, including the chief of detectives for the Chicago Police Department stating that "Apple['s iPhone] will become the phone of choice for the pedophile".[15] An editorial in the Washington Post argued that "smartphone users must accept that they cannot be above the law if there is a valid search warrant", and, after claiming to agree that backdoors would be undesirable, suggested implementing a "golden key" backdoor which would unlock the data with a warrant.[16][17] The authors of the 1997 paper "The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption", as well as other researchers at MIT, wrote a follow-up article in response to the revival of this debate, arguing that mandated government access to private conversations would be an even worse problem than it would have been twenty years before.[18]
|
https://en.wikipedia.org/wiki/Clipper_chip
|
Data Securities International (DSI) was a technology escrow administration company based in San Francisco, California. Founded in 1982, the company escrowed source code and other maintenance materials for licensees and stakeholders. In 1997, Iron Mountain Incorporated acquired the company. In 2021, Iron Mountain sold DSI (by then IPM within IRM) for $220 million (see NASDAQ).
Dwight C. Olson was the founder of Data Securities International.[1]
Data Securities International was founded in 1982.[2] The company grew steadily over the years before being sold to Iron Mountain in 1997.[3]
In the mid-1980s, Data Securities International introduced the concept of Total Software Value (TSV), which combines Ownership Value (OV), i.e. the software inventory, Market Value (MV), and Internal Cost Savings (ICS) as values and influencing variables of software as a financial asset. A TSV software inventory valuation (OV) analysis looks at the sum total (or bundle) of the various software components or intellectual assets that make software usable as a product.[4]
Total Software Value is explained in the book "The Long Journey to Software Valuation" (ISBN 978-1-7344129-0-1, Copyright Registration TXu 2-181-571).[5]
|
https://en.wikipedia.org/wiki/Data_Securities_International
|
In cryptography, a related-key attack is any form of cryptanalysis where the attacker can observe the operation of a cipher under several different keys whose values are initially unknown, but where some mathematical relationship connecting the keys is known to the attacker. For example, the attacker might know that the last 80 bits of the keys are always the same, even though they don't know, at first, what the bits are.
KASUMI is an eight-round, 64-bit block cipher with a 128-bit key. It is based upon MISTY1 and was designed to form the basis of the 3G confidentiality and integrity algorithms.
Mark Blunden and Adrian Escott described differential related-key attacks on five and six rounds of KASUMI.[1] Differential attacks were introduced by Biham and Shamir. Related-key attacks were first introduced by Biham.[2] Differential related-key attacks are discussed in Kelsey et al.[3]
An important example of a cryptographic protocol that failed because of a related-key attack is Wired Equivalent Privacy (WEP) used in Wi-Fi wireless networks. Each client Wi-Fi network adapter and wireless access point in a WEP-protected network shares the same WEP key. Encryption uses the RC4 algorithm, a stream cipher. It is essential that the same key never be used twice with a stream cipher. To prevent this from happening, WEP includes a 24-bit initialization vector (IV) in each message packet. The RC4 key for that packet is the IV concatenated with the WEP key. WEP keys have to be changed manually, and this typically happens infrequently. An attacker therefore can assume that all the keys used to encrypt packets share a single WEP key. This fact opened up WEP to a series of attacks which proved devastating. The simplest to understand uses the fact that the 24-bit IV only allows a little under 17 million possibilities. Because of the birthday paradox, it is likely that for every 4096 packets, two will share the same IV and hence the same RC4 key, allowing the packets to be attacked. More devastating attacks take advantage of certain weak keys in RC4 and eventually allow the WEP key itself to be recovered. In 2005, agents from the U.S. Federal Bureau of Investigation publicly demonstrated the ability to do this with widely available software tools in about three minutes.
One approach to preventing related-key attacks is to design protocols and applications so that encryption keys will never have a simple relationship with each other. For example, each encryption key can be generated from the underlying key material using a key derivation function.
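As a sketch of this idea, per-purpose keys can be derived from one master secret with HKDF (RFC 5869), so that no simple mathematical relationship connects them; this is a minimal illustrative implementation, not a vetted library, and the labels are arbitrary.

```python
import hashlib
import hmac

def hkdf(master: bytes, info: bytes, length: int = 32, salt: bytes = b"") -> bytes:
    """Minimal HKDF-SHA256 (RFC 5869): extract a PRK, then expand per 'info' label."""
    prk = hmac.new(salt or b"\x00" * 32, master, hashlib.sha256).digest()  # extract
    okm, block = b"", b""
    for i in range((length + 31) // 32):                                   # expand
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

master = b"master key material"
k_enc = hkdf(master, b"encryption")       # distinct labels yield keys with no
k_mac = hkdf(master, b"authentication")   # exploitable relationship between them
print(k_enc != k_mac)  # True
```

Even an attacker who learns one derived key gains no usable relation to the others, because relating them would require inverting HMAC-SHA256.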
For example, a replacement for WEP, Wi-Fi Protected Access (WPA), uses three levels of keys: master key, working key and RC4 key. The master WPA key is shared with each client and access point and is used in a protocol called Temporal Key Integrity Protocol (TKIP) to create new working keys frequently enough to thwart known attack methods. The working keys are then combined with a longer, 48-bit IV to form the RC4 key for each packet. This design mimics the WEP approach enough to allow WPA to be used with first-generation Wi-Fi network cards, some of which implemented portions of WEP in hardware. However, not all first-generation access points can run WPA.
Another, more conservative approach is to employ a cipher designed to prevent related-key attacks altogether, usually by incorporating a strong key schedule. A newer version of Wi-Fi Protected Access, WPA2, uses the AES block cipher instead of RC4, in part for this reason. There are related-key attacks against AES, but unlike those against RC4, they're far from practical to implement, and WPA2's key generation functions may provide some security against them. Many older network cards cannot run WPA2.
|
https://en.wikipedia.org/wiki/Related-key_attack
|
A backdoor is a typically covert method of bypassing normal authentication or encryption in a computer, product, embedded device (e.g. a home router), or its embodiment (e.g. part of a cryptosystem, algorithm, chipset, or even a "homunculus computer", a tiny computer-within-a-computer such as that found in Intel's AMT technology).[1][2] Backdoors are most often used for securing remote access to a computer, or obtaining access to plaintext in cryptosystems. From there it may be used to gain access to privileged information like passwords, corrupt or delete data on hard drives, or transfer information within autoschediastic networks.
In the United States, the 1994 Communications Assistance for Law Enforcement Act forces internet providers to provide backdoors for government authorities.[3][4] In 2024, the U.S. government realized that China had been tapping communications in the U.S. using that infrastructure for months, or perhaps longer;[5] China recorded presidential candidate campaign office phone calls, including those of employees of the then vice president, and of the candidates themselves.[6]
A backdoor may take the form of a hidden part of a program,[7] a separate program (e.g. Back Orifice may subvert the system through a rootkit), code in the firmware of the hardware,[8] or parts of an operating system such as Windows.[9][10][11] Trojan horses can be used to create vulnerabilities in a device. A Trojan horse may appear to be an entirely legitimate program, but when executed, it triggers an activity that may install a backdoor.[12] Although some are secretly installed, other backdoors are deliberate and widely known. These kinds of backdoors have "legitimate" uses such as providing the manufacturer with a way to restore user passwords.
Many systems that store information within the cloud fail to implement adequate security measures. If many systems are connected within the cloud, hackers can gain access to all other platforms through the most vulnerable system.[13] Default passwords (or other default credentials) can function as backdoors if they are not changed by the user. Some debugging features can also act as backdoors if they are not removed in the release version.[14] In 1993, the United States government attempted to deploy an encryption system, the Clipper chip, with an explicit backdoor for law enforcement and national security access. The chip was unsuccessful.[15]
Recent proposals to counter backdoors include creating a database of backdoors' triggers and then using neural networks to detect them.[16]
The threat of backdoors surfaced when multiuser and networked operating systems became widely adopted. Petersen and Turn discussed computer subversion in a paper published in the proceedings of the 1967 AFIPS Conference.[17] They noted a class of active infiltration attacks that use "trapdoor" entry points into the system to bypass security facilities and permit direct access to data. The use of the word trapdoor here clearly coincides with more recent definitions of a backdoor. However, since the advent of public key cryptography the term trapdoor has acquired a different meaning (see trapdoor function), and the term "backdoor" is now preferred for the older sense. More generally, such security breaches were discussed at length in a RAND Corporation task force report published under DARPA sponsorship by J.P. Anderson and D.J. Edwards in 1970.[18]
While initially targeting the computer vision domain, backdoor attacks have expanded to encompass various other domains, including text, audio, ML-based computer-aided design, and ML-based wireless signal classification. Additionally, backdoor vulnerabilities have been demonstrated in deep generative models, reinforcement learning (e.g., Go-playing agents), and deep graph models. These broad-ranging potential risks have prompted concerns from national security agencies regarding their potentially disastrous consequences.[19]
A backdoor in a login system might take the form of a hard-coded user and password combination which gives access to the system. An example of this sort of backdoor was used as a plot device in the 1983 film WarGames, in which the architect of the "WOPR" computer system had inserted a hardcoded password-less account which gave the user access to the system, and to undocumented parts of the system (in particular, a video game-like simulation mode and direct interaction with the artificial intelligence).
Although the number of backdoors in systems using proprietary software (software whose source code is not publicly available) is not widely credited, they are nevertheless frequently exposed. Programmers have even succeeded in secretly installing large amounts of benign code as Easter eggs in programs, although such cases may involve official forbearance, if not actual permission.
There are a number of cloak-and-dagger considerations that come into play when apportioning responsibility.
Covert backdoors sometimes masquerade as inadvertent defects (bugs) for reasons of plausible deniability. In some cases, these might begin life as an actual bug (inadvertent error), which, once discovered, are then deliberately left unfixed and undisclosed, whether by a rogue employee for personal advantage, or with executive awareness and oversight.
It is also possible for an entirely above-board corporation's technology base to be covertly and untraceably tainted by external agents (hackers), though this level of sophistication is thought to exist mainly at the level of nation-state actors. For example, if a photomask obtained from a photomask supplier differs in a few gates from its photomask specification, a chip manufacturer would be hard-pressed to detect this if the change is otherwise functionally silent; a covert rootkit running in the photomask etching equipment could enact this discrepancy unbeknown even to the photomask manufacturer, and by such means, one backdoor potentially leads to another.[note 1]
In general terms, the long dependency chains in the modern, highly specialized technological economy and innumerable human-element process control points make it difficult to conclusively pinpoint responsibility at such time as a covert backdoor becomes unveiled.
Even direct admissions of responsibility must be scrutinized carefully if the confessing party is beholden to other powerful interests.
Many computer worms, such as Sobig and Mydoom, install a backdoor on the affected computer (generally a PC on broadband running Microsoft Windows and Microsoft Outlook). Such backdoors appear to be installed so that spammers can send junk e-mail from the infected machines. Others, such as the Sony/BMG rootkit, placed secretly on millions of music CDs through late 2005, are intended as DRM measures and, in that case, as data-gathering agents, since both surreptitious programs they installed routinely contacted central servers.
A sophisticated attempt to plant a backdoor in the Linux kernel, exposed in November 2003, added a small and subtle code change by subverting the revision control system.[20] In this case, a two-line change appeared to check root access permissions of a caller to the sys_wait4 function, but because it used assignment (=) instead of equality checking (==), it actually granted permissions to the system. This difference is easily overlooked, and could even be interpreted as an accidental typographical error, rather than an intentional attack.[21][22]
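The one-character trick can be mimicked in Python with the assignment expression operator (the original was C): the condition assigns rather than compares, so the "error" branch never fires while the privilege variable is silently changed. This is an analogue of the kernel incident, not the kernel code itself.

```python
uid = 1000                  # unprivileged caller

# Looks like a permission check; actually an assignment.  The expression
# (uid := 0) sets uid to 0 and evaluates to 0, which is falsy, so no error
# is ever raised -- yet the caller has just been granted uid 0 ("root").
if (uid := 0):
    raise PermissionError("not allowed")

print(uid)  # 0
```

In the C original, `current->uid = 0` behaved the same way: the assignment evaluated to 0 (false), so the guarded error path never ran, while the caller's uid became root.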
In January 2014, a backdoor was discovered in certain Samsung Android products, like the Galaxy devices. The Samsung proprietary Android versions are fitted with a backdoor that provides remote access to the data stored on the device. In particular, the Samsung Android software that is in charge of handling the communications with the modem, using the Samsung IPC protocol, implements a class of requests known as remote file server (RFS) commands, that allows the backdoor operator to perform via modem remote I/O operations on the device hard disk or other storage. As the modem is running Samsung proprietary Android software, it is likely that it offers over-the-air remote control that could then be used to issue the RFS commands and thus to access the file system on the device.[23]
Harder-to-detect backdoors involve modifying object code rather than source code; object code is much harder to inspect, as it is designed to be machine-readable, not human-readable. These backdoors can be inserted either directly in the on-disk object code, or inserted at some point during compilation, assembly, linking, or loading; in the latter case the backdoor never appears on disk, only in memory. Object code backdoors are difficult to detect by inspection of the object code, but are easily detected by simply checking for changes (differences), notably in length or in checksum, and in some cases can be detected or analyzed by disassembling the object code. Further, object code backdoors can be removed (assuming source code is available) by simply recompiling from source on a trusted system.
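The checksum defence is simple to sketch: record a digest of the trusted binary and compare it later. The byte strings here stand in for real object code; note that a same-length patch defeats a length check but not a cryptographic checksum (assuming, as the text goes on to discuss, that the checksum tool itself is not subverted).

```python
import hashlib

trusted_binary = b"\x7fELF...original machine code..."
baseline = hashlib.sha256(trusted_binary).hexdigest()

# A same-length patch: "original" -> "backdoor" (both 8 bytes), so a naive
# length comparison notices nothing...
patched_binary = trusted_binary.replace(b"original", b"backdoor")
print(len(patched_binary) == len(trusted_binary))                    # True
# ...but the checksum changes.
print(hashlib.sha256(patched_binary).hexdigest() != baseline)        # True
```
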
Thus for such backdoors to avoid detection, all extant copies of a binary must be subverted, and any validation checksums must also be compromised, and source must be unavailable, to prevent recompilation. Alternatively, these other tools (length checks, diff, checksumming, disassemblers) can themselves be compromised to conceal the backdoor, for example detecting that the subverted binary is being checksummed and returning the expected value, not the actual value. To conceal these further subversions, the tools must also conceal the changes in themselves—for example, a subverted checksummer must also detect if it is checksumming itself (or other subverted tools) and return false values. This leads to extensive changes in the system and tools being needed to conceal a single change.
As object code can be regenerated by recompiling (reassembling, relinking) the original source code, making a persistent object code backdoor (without modifying source code) requires subverting the compiler itself, so that when it detects that it is compiling the program under attack it inserts the backdoor (or alternatively the assembler, linker, or loader). As this requires subverting the compiler, this in turn can be fixed by recompiling the compiler, removing the backdoor insertion code. This defense can in turn be subverted by putting a source meta-backdoor in the compiler, so that when it detects that it is compiling itself it then inserts this meta-backdoor generator, together with the original backdoor generator for the original program under attack. After this is done, the source meta-backdoor can be removed, and the compiler recompiled from original source with the compromised compiler executable: the backdoor has been bootstrapped. This attack dates to a 1974 paper by Karger and Schell,[24] and was popularized in Thompson's 1984 article, entitled "Reflections on Trusting Trust";[25] it is hence colloquially known as the "Trusting Trust" attack. See compiler backdoors, below, for details. Analogous attacks can target lower levels of the system, such as the operating system, and can be inserted during the system booting process; these are also mentioned by Karger and Schell in 1974, and now exist in the form of boot sector viruses.[24][26]
A traditional backdoor is a symmetric backdoor: anyone that finds the backdoor can in turn use it. The notion of an asymmetric backdoor was introduced by Adam Young and Moti Yung in the Proceedings of Advances in Cryptology – Crypto '96. An asymmetric backdoor can only be used by the attacker who plants it, even if the full implementation of the backdoor becomes public (e.g. via publishing, being discovered and disclosed by reverse engineering, etc.). Also, it is computationally intractable to detect the presence of an asymmetric backdoor under black-box queries. This class of attacks has been termed kleptography; they can be carried out in software, hardware (for example, smartcards), or a combination of the two. The theory of asymmetric backdoors is part of a larger field now called cryptovirology. Notably, the NSA inserted a kleptographic backdoor into the Dual EC DRBG standard.[8][27][28]
There exists an experimental asymmetric backdoor in RSA key generation. This OpenSSL RSA backdoor, designed by Young and Yung, utilizes a twisted pair of elliptic curves, and has been made available.[29]
A sophisticated form of black box backdoor is a compiler backdoor, where not only is a compiler subverted (to insert a backdoor in some other program, such as a login program), but it is further modified to detect when it is compiling itself and then inserts both the backdoor insertion code (targeting the other program) and the code-modifying self-compilation, like the mechanism through which retroviruses infect their host. This can be done by modifying the source code, and the resulting compromised compiler (object code) can compile the original (unmodified) source code and insert itself: the exploit has been boot-strapped.
This attack was originally presented in Karger & Schell (1974),[note 2] which was a United States Air Force security analysis of Multics, where they described such an attack on a PL/I compiler and called it a "compiler trap door". They also mention a variant where the system initialization code is modified to insert a backdoor during booting; as this is complex and poorly understood, they call it an "initialization trapdoor". This is now known as a boot sector virus.[26]
This attack was then actually implemented by Ken Thompson, and popularized in his Turing Award acceptance speech in 1983, "Reflections on Trusting Trust",[25] which points out that trust is relative, and the only software one can truly trust is code where every step of the bootstrapping has been inspected. This backdoor mechanism is based on the fact that people only review source (human-written) code, and not compiled machine code (object code). A program called a compiler is used to create the second from the first, and the compiler is usually trusted to do an honest job.
Thompson's paper[25] describes a modified version of the Unix C compiler that would put an invisible backdoor in the Unix login command when it noticed that the login program was being compiled, and would also add this feature undetectably to future compiler versions upon their compilation as well. As the compiler itself was a compiled program, users would be extremely unlikely to notice the machine code instructions that performed these tasks. (Because of the second task, the compiler's source code would appear "clean".) What's worse, in Thompson's proof-of-concept implementation, the subverted compiler also subverted the analysis program (the disassembler), so that anyone who examined the binaries in the usual way would not actually see the real code that was running, but something else instead.
Karger and Schell gave an updated analysis of the original exploit in 2002, and, in 2009, Wheeler wrote a historical overview and survey of the literature.[note 3] In 2023, Cox published an annotated version of Thompson's backdoor source code.[31]
Thompson's version was, officially, never released into the wild. However, it is believed that a version was distributed to BBN and at least one use of the backdoor was recorded.[note 4] There are scattered anecdotal reports of such backdoors in subsequent years.
In August 2009, an attack of this kind was discovered by Sophos labs. The W32/Induc-A virus infected the program compiler for Delphi, a Windows programming language. The virus introduced its own code to the compilation of new Delphi programs, allowing it to infect and propagate to many systems, without the knowledge of the software programmer. The virus looks for a Delphi installation, modifies the SysConst.pas file, which is the source code of a part of the standard library, and compiles it. After that, every program compiled by that Delphi installation will contain the virus. An attack that propagates by building its own Trojan horse can be especially hard to discover. It resulted in many software vendors releasing infected executables without realizing it, sometimes claiming false positives. After all, the executable was not tampered with; the compiler was. It is believed that the Induc-A virus had been propagating for at least a year before it was discovered.[note 5]
In 2015, a malicious copy of Xcode, XcodeGhost, also performed a similar attack and infected iOS apps from a dozen software companies in China. Globally, 4,000 apps were found to be affected. It was not a true Thompson Trojan, as it did not infect development tools themselves, but it did prove that toolchain poisoning can cause substantial damage.[34]
Once a system has been compromised with a backdoor or Trojan horse, such as the Trusting Trust compiler, it is very hard for the "rightful" user to regain control of the system; typically one should rebuild a clean system and transfer data (but not executables) over. However, several practical weaknesses in the Trusting Trust scheme have been suggested. For example, a sufficiently motivated user could painstakingly review the machine code of the untrusted compiler before using it. As mentioned above, there are ways to hide the Trojan horse, such as subverting the disassembler; but there are ways to counter that defense, too, such as writing a disassembler from scratch.[citation needed]
A generic method to counter trusting trust attacks is called diverse double-compiling. The method requires a different compiler and the source code of the compiler-under-test. That source, compiled with both compilers, results in two different stage-1 compilers, which however should have the same behavior. Thus the same source compiled with both stage-1 compilers must then result in two identical stage-2 compilers. A formal proof is given that the latter comparison guarantees that the purported source code and executable of the compiler-under-test correspond, under some assumptions. This method was applied by its author to verify that the C compiler of the GCC suite (v. 3.0.4) contained no trojan, using icc (v. 11.0) as the different compiler.[30]
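The two-stage comparison can be modeled with a toy in which a "compiler" is just a function from source text to a string "binary". The two trusted compilers emit different bytes for the same source, but because their outputs behave identically, running either stage-1 binary on the compiler-under-test's source yields bit-identical stage-2 binaries. All names and "binary" formats here are illustrative only.

```python
CT_SOURCE = "emit reversed source"       # source of the compiler under test

def run(binary: str, source: str) -> str:
    """Execute a compiler binary on some source, producing a new binary."""
    if "reversed" in binary:             # every faithful build of CT_SOURCE
        return "BIN:" + source[::-1]     # implements the same deterministic codegen
    raise ValueError("unknown binary")

def compiler_a(source: str) -> str:      # trusted compiler A (one code generator)
    return "A-codegen|" + source

def compiler_b(source: str) -> str:      # trusted compiler B (different codegen)
    return "B-codegen|" + source

stage1_a = compiler_a(CT_SOURCE)         # the stage-1 binaries differ byte-for-byte...
stage1_b = compiler_b(CT_SOURCE)
assert stage1_a != stage1_b

stage2_a = run(stage1_a, CT_SOURCE)      # ...but behave identically, so the
stage2_b = run(stage1_b, CT_SOURCE)      # stage-2 binaries must match exactly
assert stage2_a == stage2_b
```

A mismatch at the stage-2 comparison would signal that one of the toolchains does not faithfully implement the published source, which is the check diverse double-compiling provides.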
In practice such verifications are not done by end users, except in extreme circumstances of intrusion detection and analysis, due to the rarity of such sophisticated attacks, and because programs are typically distributed in binary form. Removing backdoors (including compiler backdoors) is typically done by simply rebuilding a clean system. However, the sophisticated verifications are of interest to operating system vendors, to ensure that they are not distributing a compromised system, and in high-security settings, where such attacks are a realistic concern.
|
https://en.wikipedia.org/wiki/Backdoor_(computing)
|
In mathematics, a homogeneous distribution is a distribution S on Euclidean space R^n or R^n \ {0} that is homogeneous in the sense that, roughly speaking,
S(tx) = t^m S(x)
for all t > 0.
More precisely, let μ_t : x ↦ x/t be the scalar division operator on R^n. A distribution S on R^n or R^n \ {0} is homogeneous of degree m provided that
t^{−n} S[φ ∘ μ_t] = t^m S[φ]
for all positive real t and all test functions φ. The additional factor of t^{−n} is needed to reproduce the usual notion of homogeneity for locally integrable functions, and comes about from the Jacobian change of variables. The number m can be real or complex.
It can be a non-trivial problem to extend a given homogeneous distribution from R^n \ {0} to a distribution on R^n, although this is necessary for many of the techniques of Fourier analysis, in particular the Fourier transform, to be brought to bear. Such an extension exists in most cases, however, although it may not be unique.
If S is a homogeneous distribution on R^n \ {0} of degree α, then the weak first partial derivative of S,
∂S/∂x_i,
has degree α − 1. Furthermore, a version of Euler's homogeneous function theorem holds: a distribution S is homogeneous of degree α if and only if
∑_{i=1}^{n} x_i ∂S/∂x_i = α S.
A complete classification of homogeneous distributions in one dimension is possible. The homogeneous distributions on R \ {0} are given by various power functions. In addition to the power functions, homogeneous distributions on R include the Dirac delta function and its derivatives.
The Dirac delta function is homogeneous of degree −1. Intuitively,
δ(tx) = t^{−1} δ(x),
by making a change of variables y = tx in the "integral". Moreover, the kth weak derivative of the delta function δ^{(k)} is homogeneous of degree −k − 1. These distributions all have support consisting only of the origin: when localized over R \ {0}, these distributions are all identically zero.
In one dimension, the function
x_+^α = x^α for x > 0, and 0 for x ≤ 0,
is locally integrable on R \ {0}, and thus defines a distribution. The distribution is homogeneous of degree α. Similarly x_−^α = (−x)_+^α and |x|^α = x_+^α + x_−^α are homogeneous distributions of degree α.
However, each of these distributions is only locally integrable on all of R provided Re(α) > −1. Although the function x_+^α naively defined by the above formula fails to be locally integrable for Re α ≤ −1, the mapping
α ↦ x_+^α
is a holomorphic function from the right half-plane to the topological vector space of tempered distributions. It admits a unique meromorphic extension with simple poles at each negative integer α = −1, −2, .... The resulting extension is homogeneous of degree α, provided α is not a negative integer, since on the one hand the defining relation holds and is holomorphic in α for Re α > 0, while on the other hand both sides extend meromorphically in α, and so remain equal throughout the domain of definition.
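The displayed relation used in the continuation argument did not survive extraction; the standard identity underlying such arguments can be sketched as follows (a reconstruction under the usual conventions, not verbatim from the source):

```latex
% Sketch (standard identity). For \operatorname{Re}\alpha > 0,
\frac{d}{dx}\,x_+^{\alpha+1} = (\alpha+1)\,x_+^{\alpha},
\qquad\text{so}\qquad
x_+^{\alpha} = \frac{1}{\alpha+1}\,\frac{d}{dx}\,x_+^{\alpha+1}.
% The right-hand side makes sense as a distribution whenever
% \operatorname{Re}\alpha > -2 and \alpha \neq -1; iterating the identity
% extends x_+^{\alpha} meromorphically, with simple poles appearing at the
% negative integers.
```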
Throughout the domain of definition, x_+^α also satisfies the following properties:
There are several distinct ways to extend the definition of power functions to homogeneous distributions on R at the negative integers.
The poles in x_+^α at the negative integers can be removed by renormalizing. Put
χ_+^α = x_+^α / Γ(α + 1).
This is an entire function of α. At the negative integers,
χ_+^{−k} = δ^{(k−1)}.
The distributions χ_+^α have the properties
A second approach is to define the distribution x̲^{−k}, for k = 1, 2, ...,
x̲^{−k} = ((−1)^{k−1} / (k−1)!) (d^k/dx^k) log|x|.
These clearly retain the original properties of power functions:
These distributions are also characterized by their action on test functions
and so generalize the Cauchy principal value distribution of 1/x that arises in the Hilbert transform.
Another homogeneous distribution is given by the distributional limit
(x + i0)^α = lim_{ε → 0⁺} (x + iε)^α.
That is, acting on test functions,
(x + i0)^α[φ] = lim_{ε → 0⁺} ∫ (x + iε)^α φ(x) dx.
The branch of the logarithm is chosen to be single-valued in the upper half-plane and to agree with the natural log along the positive real axis. As the limit of entire functions, (x + i0)^α[φ] is an entire function of α. Similarly,
(x − i0)^α = lim_{ε → 0⁺} (x − iε)^α
is also a well-defined distribution for all α.
When Re α > 0,
(x ± i0)^α = x_+^α + e^{±iπα} x_−^α,
which then holds by analytic continuation whenever α is not a negative integer. By the permanence of functional relations, (x ± i0)^α is homogeneous of degree α.
At the negative integers, both distributions agree with the power function x^{−k} at the level of distributions on R \ {0}, and the singularities cancel to give a well-defined distribution on R. The average of the two distributions agrees with x̲^{−k}:
½[(x + i0)^{−k} + (x − i0)^{−k}] = x̲^{−k}.
The difference of the two distributions is a multiple of the delta function δ^{(k−1)}, which is known as the Plemelj jump relation.
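The simplest case of the jump relation, k = 1, is the classical Sokhotski–Plemelj formula; as a sketch (standard identities, stated here because the displayed equations were lost in extraction):

```latex
\frac{1}{x + i0} = \operatorname{p.v.}\frac{1}{x} - i\pi\,\delta(x),
\qquad
\frac{1}{x - i0} = \operatorname{p.v.}\frac{1}{x} + i\pi\,\delta(x),
% so that the difference is
\frac{1}{x + i0} - \frac{1}{x - i0} = -2\pi i\,\delta(x),
% while the average is the principal value distribution, consistent with
% the statements in the text.
```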
The following classification theorem holds (Gel'fand & Shilov 1966, §3.11). Let S be a distribution homogeneous of degree α on R \ {0}. Then S = a x_+^α + b x_−^α for some constants a, b. Any distribution S on R homogeneous of degree α ≠ −1, −2, ... is of this form as well. As a result, every homogeneous distribution of degree α ≠ −1, −2, ... on R \ {0} extends to R.
Finally, homogeneous distributions of degree −k, a negative integer, on R are all of the form
a x̲^{−k} + b δ^{(k−1)}(x)
for constants a, b.
Homogeneous distributions on the Euclidean space R^n \ {0} with the origin deleted are always of the form
S = |x|^λ f(x/|x|)     (1)
where f is a distribution on the unit sphere S^{n−1}. The number λ, which is the degree of the homogeneous distribution S, may be real or complex.
Any homogeneous distribution of the form (1) on R^n \ {0} extends uniquely to a homogeneous distribution on R^n provided Re λ > −n. In fact, an analytic continuation argument similar to the one-dimensional case extends this for all λ ≠ −n, −n−1, ....
|
https://en.wikipedia.org/wiki/Homogeneous_distribution
|
In cryptography, integral cryptanalysis is a cryptanalytic attack that is particularly applicable to block ciphers based on substitution–permutation networks. It was originally designed by Lars Knudsen as a dedicated attack against Square, so it is commonly known as the Square attack. It was also extended to a few other ciphers related to Square: CRYPTON, Rijndael, and SHARK. Stefan Lucks generalized the attack to what he called a saturation attack and used it to attack Twofish, which is not at all similar to Square, having a radically different Feistel network structure. Forms of integral cryptanalysis have since been applied to a variety of ciphers, including Hierocrypt, IDEA, Camellia, Skipjack, MISTY1, MISTY2, SAFER++, KHAZAD, and FOX (now called IDEA NXT).
Unlike differential cryptanalysis, which uses pairs of chosen plaintexts with a fixed XOR difference, integral cryptanalysis uses sets or even multisets of chosen plaintexts of which part is held constant and another part varies through all possibilities. For example, an attack might use 256 chosen plaintexts that have all but 8 of their bits the same, but all differ in those 8 bits. Such a set necessarily has an XOR sum of 0, and the XOR sums of the corresponding sets of ciphertexts provide information about the cipher's operation. This contrast between the differences of pairs of texts and the sums of larger sets of texts inspired the name "integral cryptanalysis", borrowing the terminology of calculus.
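The balanced-set property described above can be illustrated with a toy sketch (plain Python; the 4-byte block and the affine S-box are invented for illustration and are not part of any real cipher):

```python
# Toy illustration of the "integral" property behind the Square attack.
# The S-box below is a made-up bijection on bytes, not a real cipher component.

def sbox(b):
    # Invertible on 0..255 because multiplication by 5 is coprime to 256.
    return (b * 5 + 113) % 256

# 256 chosen plaintexts: 4 bytes each, identical except byte 0, which
# varies through all 256 possible values.
plaintexts = [bytes([v, 0xAA, 0xBB, 0xCC]) for v in range(256)]

# The XOR-sum over the whole set is zero in every byte position.
xor_sum = [0, 0, 0, 0]
for p in plaintexts:
    for i in range(4):
        xor_sum[i] ^= p[i]
print(xor_sum)  # [0, 0, 0, 0]

# After one bijective substitution layer, the "active" byte still takes
# all 256 values, so its XOR-sum remains 0 -- the balanced property that
# an attacker traces through the cipher's rounds.
after_sub = [bytes(sbox(b) for b in p) for p in plaintexts]
assert len({c[0] for c in after_sub}) == 256
assert all(len({c[i] for c in after_sub}) == 1 for i in (1, 2, 3))
```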
|
https://en.wikipedia.org/wiki/Integral_cryptanalysis
|
In cryptography, differential equations of addition (DEA) are one of the most basic equations related to differential cryptanalysis that mix additions over two different groups (e.g. addition modulo 2^32 and addition over GF(2)) and where input and output differences are expressed as XORs.
Differential equations of addition (DEA) are of the following form:
(x + y) ⊕ ((x ⊕ a) + (y ⊕ b)) = c
where x and y are n-bit unknown variables and a, b and c are known variables. The symbols + and ⊕ denote addition modulo 2^n and bitwise exclusive-or respectively. The above equation is denoted by (a, b, c).
Let a set
S = {(a_i, b_i, c_i) | i < k}
for integer i denote a system of k(n) DEA, where k(n) is a polynomial in n. It has been proved that the satisfiability of an arbitrary set of DEA is in the complexity class P, even though a brute-force search requires exponential time.
In 2013, some properties of a special form of DEA were reported by Chengqing Li et al., where a = 0 and y is assumed known. Essentially, the special DEA can be represented as (x ∔ α) ⊕ (x ∔ β) = c. Based on these properties, an algorithm for deriving x was proposed and analyzed.[1]
The solution to an arbitrary set of DEA (in either the batch or the adaptive query model) is due to Souradyuti Paul and Bart Preneel. The solution techniques have been used to attack the stream cipher Helix.
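A brute-force check of a single DEA for small n illustrates both the equation itself and why naive search is exponential (an illustrative sketch; the values of a, b, x and y are arbitrary, and real attacks use the polynomial-time methods described above):

```python
# Brute-force check of a differential equation of addition (DEA) for small n.
# Parameters are hypothetical, chosen only to illustrate the equation.

def dea_holds(x, y, a, b, c, n):
    """(x + y) xor ((x xor a) + (y xor b)) == c, with + taken modulo 2**n."""
    mask = (1 << n) - 1
    lhs = ((x + y) & mask) ^ (((x ^ a) + (y ^ b)) & mask)
    return lhs == c

n = 8
a, b = 0x0F, 0x33          # known input differences (arbitrary choices)
x, y = 0x5A, 0xC3          # a "secret" pair we pretend not to know
c = ((x + y) & 0xFF) ^ (((x ^ a) + (y ^ b)) & 0xFF)  # observed output difference

# Exhaustive search over all 2^(2n) candidate pairs -- exponential in n,
# which is exactly what the polynomial-time algorithms avoid.
solutions = [(u, v) for u in range(256) for v in range(256)
             if dea_holds(u, v, a, b, c, n)]
assert (x, y) in solutions
# A single DEA admits many solutions; a system of equations is needed
# to pin down x and y.
print(len(solutions))
```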
|
https://en.wikipedia.org/wiki/Differential_equations_of_addition
|
The AND gate is a basic digital logic gate that implements logical conjunction (∧) from mathematical logic – AND gates behave according to their truth table. A HIGH output (1) results only if all the inputs to the AND gate are HIGH (1). If any of the inputs are not HIGH, a LOW output (0) results. The function can be extended to any number of inputs by chaining multiple gates.
There are three symbols for AND gates: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol, as well as the deprecated DIN symbol. Additional inputs can be added as needed. For more information see the Logic gate symbols article. The gate can also be denoted by the symbol "^" or "&".
The AND gate with inputs A and B and output C implements the logical expression C = A·B. This expression may also be denoted as C = A∧B or C = A&B.
As of Unicode 16.0.0, the AND gate is also encoded in the Symbols for Legacy Computing Supplement block as U+1CC16 LOGIC GATE AND.
In logic families like TTL, NMOS, PMOS and CMOS, an AND gate is built from a NAND gate followed by an inverter. In the CMOS implementation above, transistors T1–T4 realize the NAND gate and transistors T5 and T6 the inverter. The need for an inverter makes AND gates less efficient than NAND gates.
AND gates can also be made from discrete components and are readily available as integrated circuits in several different logic families.
The analytical representation of the AND gate is f(a, b) = a·b.
If no specific AND gates are available, one can be made from NAND or NOR gates, because NAND and NOR gates are "universal gates",[1] meaning that they can be used to make all the others.
AND gates with multiple inputs are designated with the same symbol, with more lines leading in.[2] While direct implementations with more than four inputs are possible in logic families like CMOS, these are inefficient. More efficient implementations use a cascade of NAND and NOR gates, as shown in the picture on the right below. This is more efficient than the cascade of AND gates shown on the left.[3]
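The truth-table behavior, the analytical form f(a, b) = a·b, and the NAND-plus-inverter construction described above can be checked with a short sketch (plain Python, purely illustrative):

```python
# The AND gate over bits, three equivalent views: the truth table, the
# analytical form f(a, b) = a * b, and a NAND gate followed by an inverter.
from functools import reduce

def nand(a, b):
    return 1 - (a & b)

def not_gate(a):
    return 1 - a

def and_from_nand(a, b):
    # As in TTL/CMOS practice: a NAND gate followed by an inverter.
    return not_gate(nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        assert (a & b) == a * b == and_from_nand(a, b)

# A multi-input AND is a chain of two-input gates:
def and_n(*bits):
    return reduce(lambda x, y: x & y, bits)

assert and_n(1, 1, 1, 1) == 1 and and_n(1, 1, 0, 1) == 0
print("AND checks pass")
```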
|
https://en.wikipedia.org/wiki/AND_gate
|
The OR gate is a digital logic gate that implements logical disjunction. The OR gate outputs "true" if any of its inputs is "true"; otherwise it outputs "false". The input and output states are normally represented by different voltage levels.
An OR gate can be constructed with two or more inputs. It outputs a 1 if any of its inputs are 1, and outputs a 0 only if all inputs are 0. The inputs and outputs are binary digits ("bits"), which have two possible logical states. In addition to 1 and 0, these states may be called true and false, high and low, active and inactive, or other such pairs of symbols.
Thus it performs a logical disjunction (∨) from mathematical logic. The gate can be represented with the plus sign (+) because it can be used for logical addition.[1] Equivalently, an OR gate finds the maximum of two binary digits, just as the AND gate finds the minimum.[2]
Together with the AND gate and the NOT gate, the OR gate is one of three basic logic gates from which any Boolean circuit may be constructed. All other logic gates may be made from these three gates; any function in binary mathematics may be implemented with them.[3]
It is sometimes called the inclusive OR gate to distinguish it from XOR, the exclusive OR gate.[4] The behavior of OR is the same as XOR except in the case of a 1 for both inputs. In situations where this never arises (for example, in a full adder) the two types of gates are interchangeable. This substitution is convenient when a circuit is being implemented using simple integrated circuit chips which contain only one gate type per chip.
There are two logic gate symbols currently representing the OR gate: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol. The DIN symbol is deprecated.[5][6]
The "≥1" on the IEC symbol indicates that the output is activated by at least one active input.[7]
As of Unicode 16.0.0, the OR gate is also encoded in the Symbols for Legacy Computing Supplement block as U+1CC15 LOGIC GATE OR.
OR gates are basic logic gates, and are available in TTL and CMOS IC logic families. The standard 4000 series CMOS IC is the 4071, which includes four independent two-input OR gates. The TTL device is the 7432. There are many offshoots of the original 7432 OR gate, all having the same pinout but different internal architecture, allowing them to operate in different voltage ranges and/or at higher speeds. In addition to the standard 2-input OR gate, 3- and 4-input OR gates are also available. In the CMOS series, these are:
Variations include:
The analytical representation of the OR gate is f(a, b) = a + b − a·b.
OR gates with multiple inputs are designated with the same symbol, with more lines leading in.[8] While direct implementations with more than three inputs are possible in logic families like CMOS, these are inefficient. More efficient implementations use a cascade of NOR and NAND gates, as shown in the picture below.
If no specific OR gates are available, one can be made from NAND or NOR gates in the configuration shown in the image below. Any logic gate can be made from a combination of NAND or NOR gates.
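The analytical form f(a, b) = a + b − a·b, the OR-as-maximum view, and the NAND-only and NOR-only constructions mentioned above can be verified over the full truth table (an illustrative sketch in plain Python):

```python
# The OR gate over bits: analytical form f(a, b) = a + b - a*b, OR as the
# maximum of its inputs, and constructions from NAND-only and NOR-only logic.

def nand(a, b): return 1 - (a & b)
def nor(a, b): return 1 - (a | b)

def or_from_nand(a, b):
    # De Morgan: a OR b == NAND(NOT a, NOT b), with NOT x == NAND(x, x).
    return nand(nand(a, a), nand(b, b))

def or_from_nor(a, b):
    # A NOR gate followed by an inverter (NOT x == NOR(x, x)).
    return nor(nor(a, b), nor(a, b))

for a in (0, 1):
    for b in (0, 1):
        expected = a | b
        assert expected == a + b - a * b      # logical-addition form
        assert expected == max(a, b)          # OR finds the maximum
        assert expected == or_from_nand(a, b) == or_from_nor(a, b)
print("OR checks pass")
```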
With active-low open-collector logic outputs, as used for control signals in many circuits, an OR function can be produced by wiring together several outputs. This arrangement is called a wired OR. This implementation of an OR function is also typically found in integrated circuits built in N-type-only or P-type-only transistor processes.
|
https://en.wikipedia.org/wiki/OR_gate
|
In digital logic, an inverter or NOT gate is a logic gate which implements logical negation. It outputs a bit opposite of the bit that is put into it. The bits are typically implemented as two differing voltage levels.
The NOT gate outputs a zero when given a one, and a one when given a zero. Hence, it inverts its inputs. Colloquially, this inversion of bits is called "flipping" bits.[1] As with all binary logic gates, other pairs of symbols – such as true and false, or high and low – may be used in lieu of one and zero.
It is equivalent to the logical negation operator (¬) in mathematical logic. Because it has only one input, it is a unary operation and has the simplest type of truth table. It is also called the complement gate[2] because it produces the ones' complement of a binary number, swapping 0s and 1s.
The NOT gate is one of three basic logic gates from which any Boolean circuit may be built up. Together with the AND gate and the OR gate, any function in binary mathematics may be implemented. All other logic gates may be made from these three.[3]
The terms "programmable inverter" or "controlled inverter" do not refer to this gate; instead, these terms refer to the XOR gate, because it can conditionally function like a NOT gate.[1][3]
The traditional symbol for an inverter circuit is a triangle touching a small circle or "bubble". Input and output lines are attached to the symbol; the bubble is typically attached to the output line. To symbolize active-low input, sometimes the bubble is instead placed on the input line.[4] Sometimes only the circle portion of the symbol is used, and it is attached to the input or output of another gate; the symbols for NAND and NOR are formed in this way.[3]
A bar or overline ( ‾ ) above a variable can denote negation (or inversion or complement) performed by a NOT gate.[4] A slash (/) before the variable is also used.[3]
An inverter circuit outputs a voltage representing the opposite logic level to its input. Its main function is to invert the input signal applied. If the applied input is low then the output becomes high and vice versa. Inverters can be constructed using a single NMOS transistor or a single PMOS transistor coupled with a resistor. Since this "resistive-drain" approach uses only a single type of transistor, it can be fabricated at a low cost. However, because current flows through the resistor in one of the two states, the resistive-drain configuration is disadvantaged for power consumption and processing speed. Alternatively, inverters can be constructed using two complementary transistors in a CMOS configuration. This configuration greatly reduces power consumption since one of the transistors is always off in both logic states.[5] Processing speed can also be improved due to the relatively low resistance compared to the NMOS-only or PMOS-only type devices. Inverters can also be constructed with bipolar junction transistors (BJTs) in either a resistor–transistor logic (RTL) or a transistor–transistor logic (TTL) configuration.
Digital electronics circuits operate at fixed voltage levels corresponding to a logical 0 or 1 (see binary). An inverter circuit serves as the basic logic gate to swap between those two voltage levels. Implementation determines the actual voltage, but common levels include (0, +5V) for TTL circuits.
The inverter is a basic building block in digital electronics. Multiplexers, decoders, state machines, and other sophisticated digital devices may use inverters.
The hex inverter is an integrated circuit that contains six (hexa-) inverters. For example, the 7404 TTL chip which has 14 pins, and the 4049 CMOS chip which has 16 pins, 2 of which are used for power/referencing, and 12 of which are used by the inputs and outputs of the six inverters (the 4049 has 2 pins with no connection).
The analytical representation of the NOT gate is f(a) = 1 − a.
If no specific NOT gates are available, one can be made from the universal NAND or NOR gates,[6] or an XOR gate by setting one input to high.
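The substitutions just described can be checked exhaustively (an illustrative sketch in plain Python):

```python
# NOT gate: analytical form f(a) = 1 - a, plus constructions from the
# universal NAND and NOR gates and from XOR with one input tied high.

def nand(a, b): return 1 - (a & b)
def nor(a, b): return 1 - (a | b)
def xor(a, b): return a ^ b

def not_from_nand(a): return nand(a, a)   # both inputs tied together
def not_from_nor(a):  return nor(a, a)
def not_from_xor(a):  return xor(a, 1)    # one input held at logic high

for a in (0, 1):
    assert (1 - a) == not_from_nand(a) == not_from_nor(a) == not_from_xor(a)
print("NOT checks pass")
```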
Digital inverter quality is often measured using the voltage transfer curve (VTC), a plot of output versus input voltage. From such a graph, device parameters including noise tolerance, gain, and operating logic levels can be obtained.
Ideally, the VTC appears as an inverted step function – this would indicate precise switching between on and off – but in real devices, a gradual transition region exists. The VTC indicates that for low input voltage, the circuit outputs high voltage; for high input, the output tapers off towards the low level. The slope of this transition region is a measure of quality – steep (close to vertical) slopes yield precise switching.
The tolerance to noise can be measured by comparing the minimum input to the maximum output for each region of operation (on / off).
Since the transition region is steep and approximately linear, a properly biased CMOS inverter digital logic gate may be used as a high-gain analog linear amplifier[7][8][9][10][11] or even combined to form an op amp.[12] Maximum gain is achieved when the input and output operating points are at the same voltage, which can be arranged by connecting a resistor between the output and input.[13]
|
https://en.wikipedia.org/wiki/Inverter_(logic_gate)
|
In digital electronics, a NAND (NOT AND) gate is a logic gate which produces an output which is false only if all its inputs are true; thus its output is the complement of that of an AND gate. A LOW (0) output results only if all the inputs to the gate are HIGH (1); if any input is LOW (0), a HIGH (1) output results. A NAND gate is made using transistors and junction diodes. By De Morgan's laws, a two-input NAND gate's logic may be expressed as ¬A ∨ ¬B = ¬(A·B), making a NAND gate equivalent to inverters followed by an OR gate.
The NAND gate is significant because any Boolean function can be implemented by using a combination of NAND gates. This property is called "functional completeness". It shares this property with the NOR gate. Digital systems employing certain logic circuits take advantage of NAND's functional completeness.
NAND gates with two or more inputs are available as integrated circuits in transistor–transistor logic, CMOS, and other logic families.
There are three symbols for NAND gates: the MIL/ANSI symbol, the IEC symbol and the deprecated DIN symbol sometimes found on old schematics. The ANSI symbol for the NAND gate is a standard AND gate with an inversion bubble connected.
The function NAND(a1, a2, ..., an) is logically equivalent to NOT(a1 AND a2 AND ... AND an).
One way of expressing A NAND B is ¬(A ∧ B), where the symbol ∧ signifies AND and ¬ signifies the negation of the expression it applies to.
The basic implementations can be understood from the image on the left below: if either of the switches S1 or S2 is open, the pull-up resistor R will set the output signal Q to 1 (high). If S1 and S2 are both closed, the pull-up resistor will be overridden by the switches, and the output will be 0 (low).
In the depletion-load NMOS logic realization in the middle below, the switches are the transistors T2 and T3, and the transistor T1 fulfills the function of the pull-up resistor.
In the CMOS realization on the right below, the switches are the n-type transistors T3 and T4, and the pull-up resistor is made up of the p-type transistors T1 and T2, which form the complement of transistors T3 and T4.
In CMOS, NAND gates are more efficient than NOR gates. This is due to the higher charge mobility in n-MOSFETs compared to p-MOSFETs, so that the parallel connection of the two p-MOSFETs (T1 and T2) in the NAND gate is more favourable than their series connection in the NOR gate. For this reason, NAND gates are generally preferred over NOR gates in CMOS circuits.[1]
NAND gates are basic logic gates, and as such they are recognised in TTL and CMOS ICs.
The standard 4000 series CMOS IC is the 4011, which includes four independent two-input NAND gates. These devices are available from many semiconductor manufacturers, usually in both through-hole DIL and SOIC formats. Datasheets are readily available in most datasheet databases.
The standard two-, three-, four- and eight-input NAND gates are available:
The NAND gate has the property of functional completeness, which it shares with the NOR gate. That is, any other logic function (AND, OR, etc.) can be implemented using only NAND gates.[2] An entire processor can be created using NAND gates alone. In TTL ICs using multiple-emitter transistors, it also requires fewer transistors than a NOR gate.
As NOR gates are also functionally complete, if no specific NAND gates are available, one can be made from NOR gates using NOR logic.[2]
|
https://en.wikipedia.org/wiki/NAND_gate
|
The NOR (NOT OR) gate is a digital logic gate that implements logical NOR – it behaves according to the truth table to the right. A HIGH output (1) results if both inputs to the gate are LOW (0); if one or both inputs are HIGH (1), a LOW output (0) results. NOR is the result of the negation of the OR operator. It can also in some senses be seen as the inverse of an AND gate. NOR is a functionally complete operation – NOR gates can be combined to generate any other logical function. It shares this property with the NAND gate. By contrast, the OR operator is monotonic, as it can only change LOW to HIGH but not vice versa.
In most, but not all, circuit implementations – including CMOS and TTL – the negation comes for free. In such logic families, OR is the more complicated operation; it may use a NOR followed by a NOT. A significant exception is some forms of the domino logic family.
There are three symbols for NOR gates: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol, as well as the deprecated DIN symbol. For more information see Logic Gate Symbols. The ANSI symbol for the NOR gate is a standard OR gate with an inversion bubble connected.
The bubble indicates that the function of the OR gate has been inverted.
NOR gates are basic logic gates, and as such they are recognised in TTL and CMOS ICs. The standard 4000 series CMOS IC is the 4001, which includes four independent two-input NOR gates. The pinout diagram is as follows:
These devices are available from most semiconductor manufacturers such as Fairchild Semiconductor, Philips or Texas Instruments, usually in both through-hole DIP and SOIC formats. Datasheets are readily available in most datasheet databases.
In the popular CMOS and TTL logic families, NOR gates with up to 8 inputs are available:
In the older RTL and ECL families, NOR gates were efficient and most commonly used.
The left diagram above shows the construction of a 2-input NOR gate using NMOS logic circuitry. If either of the inputs is high, the corresponding N-channel MOSFET is turned on and the output is pulled low; otherwise the output is pulled high through the pull-up resistor. In the CMOS implementation on the right, the function of the pull-up resistor is implemented by the two p-type transistors in series on the top.
In CMOS, NOR gates are less efficient than NAND gates. This is due to the higher charge mobility in n-MOSFETs compared to p-MOSFETs, so that the parallel connection of two p-MOSFETs in the NAND gate is more favourable than their series connection in the NOR gate. For this reason, NAND gates are generally preferred over NOR gates in CMOS circuits.[1]
The NOR gate has the property of functional completeness, which it shares with the NAND gate. That is, any other logic function (AND, OR, etc.) can be implemented using only NOR gates.[2] An entire processor can be created using NOR gates alone. The original Apollo Guidance Computer used 4,100 integrated circuits (ICs), each one containing only two 3-input NOR gates.[3]
As NAND gates are also functionally complete, if no specific NOR gates are available, one can be made from NAND gates using NAND logic.[2]
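The same completeness argument works with NOR as the sole primitive; the constructions mirror the NAND case and can be verified exhaustively (an illustrative sketch in plain Python):

```python
# Functional completeness of NOR: NOT, OR, and AND built from two-input
# NOR gates alone, the style of construction used in NOR-only designs
# such as the Apollo Guidance Computer.

def nor(a, b): return 1 - (a | b)

def not_(a):    return nor(a, a)
def or_(a, b):  return not_(nor(a, b))           # NOR followed by an inverter
def and_(a, b): return nor(not_(a), not_(b))     # De Morgan's law

for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert or_(a, b) == (a | b)
        assert and_(a, b) == (a & b)
print("NOR builds NOT, OR, and AND")
```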
|
https://en.wikipedia.org/wiki/NOR_gate
|
The XNOR gate (sometimes ENOR, EXNOR, NXOR or XAND, pronounced "exclusive NOR") is a digital logic gate whose function is the logical complement of the exclusive OR (XOR) gate.[1] It is equivalent to the logical connective (↔) from mathematical logic, also known as the material biconditional. The two-input version implements logical equality, behaving according to the truth table to the right, and hence the gate is sometimes called an "equivalence gate". A high output (1) results if both of the inputs to the gate are the same. If one but not both inputs are high (1), a low output (0) results.
The algebraic notation used to represent the XNOR operation is S = A ⊙ B. The algebraic expressions (A + ¬B)·(¬A + B) and A·B + ¬A·¬B both represent the XNOR gate with inputs A and B.
There are two symbols for XNOR gates: one with a distinctive shape and one with a rectangular shape and label. Both symbols for the XNOR gate are that of the XOR gate with an added inversion bubble.
XNOR gates are represented in most TTL and CMOS IC families. The standard 4000 series CMOS IC is the 4077, and the TTL IC is the 74266 (although an open-collector implementation). Both include four independent two-input XNOR gates. The (now obsolete) 74S135 implemented four two-input XOR/XNOR gates or two three-input XNOR gates.
Both the TTL 74LS implementation, the 74LS266, and the CMOS gates (CD4077, 74HC4077, 74HC266 and so on) are available from most semiconductor manufacturers such as Texas Instruments or NXP.[2] They are usually available in both through-hole DIP and SOIC formats (SOIC-14, SOC-14 or TSSOP-14).
Datasheets are readily available in most datasheet databases and suppliers.
An XNOR gate can be implemented using a NAND gate and an OR-AND-invert gate, as shown in the following picture.[3] This is based on the identity
a⊻b¯⟺(a∧¯b)∧¯(a∨b){\displaystyle {\overline {a\veebar b}}\iff \left(a{\overline {\land }}b\right){\overline {\land }}\left(a\lor b\right)}
An alternative, which is useful when inverted inputs are also available (for example from a flip-flop), uses a 2-2 AND-OR-invert gate, shown below on the right.
CMOS implementations based on the OAI logic above can be realized with 10 transistors, as shown below. The implementation which uses both normal and inverted inputs uses 8 transistors, or 12 if inverters have to be used.
Both the 4077 and 74x266 devices (SN74LS266, 74HC266, 74266, etc.) have the same pinout diagram, as follows:
If a specific type of gate is not available, a circuit that implements the same function can be constructed from other available gates. A circuit implementing an XNOR function can be trivially constructed from an XOR gate followed by a NOT gate. If we consider the expression (A + ¬B)·(¬A + B), we can construct an XNOR gate circuit directly using AND, OR and NOT gates. However, this approach requires five gates of three different kinds.
As an alternative, if different gates are available, we can apply Boolean algebra to transform (A + ¬B)·(¬A + B) ≡ (A·B) + (¬A·¬B), as stated above, and apply De Morgan's law to the last term to get (A·B) + ¬(A + B), which can be implemented using only three gates as shown on the right.
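The algebraic identities above – the product-of-sums form, the sum-of-products form, and the three-gate form obtained via De Morgan's law – can all be checked over the full truth table (an illustrative sketch in plain Python):

```python
# Checking the XNOR identities from the text over the full truth table:
# NOT(A XOR B) == (A + ~B)(~A + B) == A*B + ~A*~B == A*B + ~(A + B).

def bnot(a): return 1 - a

for a in (0, 1):
    for b in (0, 1):
        xnor = bnot(a ^ b)
        pos_form = (a | bnot(b)) & (bnot(a) | b)      # product-of-sums form
        sop_form = (a & b) | (bnot(a) & bnot(b))      # sum-of-products form
        demorgan = (a & b) | bnot(a | b)              # three-gate form
        assert xnor == pos_form == sop_form == demorgan
print("XNOR identities hold")
```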
An XNOR gate circuit can be made from four NOR gates. In fact, both NAND and NOR gates are so-called "universal gates", and any logical function can be constructed from either NAND logic or NOR logic alone. If the four NOR gates are replaced by NAND gates, this results in an XOR gate, which can be converted to an XNOR gate by inverting the output or one of the inputs (e.g. with a fifth NAND gate).
An alternative arrangement is of five NAND gates in a topology that emphasizes the construction of the function from (A·B) + (¬A·¬B), noting from De Morgan's law that a NAND gate is an inverted-input OR gate. Another alternative arrangement is of five NOR gates in a topology that emphasizes the construction of the function from (A + ¬B)·(¬A + B), noting from De Morgan's law that a NOR gate is an inverted-input AND gate.
For the NAND constructions, the lower arrangement offers the advantage of a shorter propagation delay (the time delay between an input changing and the output changing). For the NOR constructions, the upper arrangement requires fewer gates.
From the opposite perspective, constructing other gates using only XNOR gates is possible, though XNOR is not a fully universal logic gate. NOT and XOR gates can be constructed this way.
Although other gates (OR, NOR, AND, NAND) are available from manufacturers with three or more inputs per gate, this is not strictly true of XOR and XNOR gates. However, extending the concept of the binary logical operation to three inputs, the SN74S135, with two shared "C" inputs and four independent "A" and "B" inputs for its four outputs, was a device that followed the truth table:
This is effectively Q = NOT((A XOR B) XOR C). Another way to interpret this is that the output is true if an even number of inputs are true. Unlike two-input XNOR gates, it does not implement a logical "equivalence" function.
https://en.wikipedia.org/wiki/XNOR_gate
The IMPLY gate is a digital logic gate that implements a logical conditional.
IMPLY can be denoted in algebraic expressions with the logic symbol right-facing arrow (→). Logically, it is equivalent to material implication, i.e. the logical expression ¬A ∨ B.
There are two symbols for IMPLY gates: the traditional symbol and the IEEE symbol. For more information see Logic gate symbols.
While the IMPLY gate is not functionally complete by itself, it is functionally complete in conjunction with a constant 0 source. This can be shown via the following:
A→0:=¬A(A→0)→B=¬(¬A)∨B=A∨B.{\displaystyle {\begin{aligned}A\rightarrow 0&:=\neg A\\(A\rightarrow 0)\rightarrow B&=\neg (\neg A)\lor B\\&=A\lor B.\end{aligned}}}
Thus, since the IMPLY gate with the addition of a constant 0 source can create both the NOT gate and the OR gate, it can create the NOR gate, which is a universal gate.
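The derivation above can be sketched in Python, building NOT, OR and finally NOR out of a single `imply` helper plus a constant 0 (`False`); the helper names are illustrative:

```python
# Building NOT, OR and NOR from IMPLY plus a constant-0 source,
# following the derivation above.
def imply(a, b):
    return (not a) or b          # material implication: not A or B

def not_(a):
    return imply(a, False)       # A -> 0  ==  NOT A

def or_(a, b):
    return imply(not_(a), b)     # (A -> 0) -> B  ==  A OR B

def nor(a, b):
    return not_(or_(a, b))       # NOT (A OR B), a universal gate

for a in (False, True):
    for b in (False, True):
        assert or_(a, b) == (a or b)
        assert nor(a, b) == (not (a or b))
```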
https://en.wikipedia.org/wiki/IMPLY_gate
A logic gate is a device that performs a Boolean function, a logical operation performed on one or more binary inputs that produces a single binary output. Depending on the context, the term may refer to an ideal logic gate, one that has, for instance, zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device[1] (see ideal and real op-amps for comparison).
The primary way of building logic gates uses diodes or transistors acting as electronic switches. Today, most logic gates are made from MOSFETs (metal–oxide–semiconductor field-effect transistors).[2] They can also be constructed using vacuum tubes, electromagnetic relays with relay logic, fluidic logic, pneumatic logic, optics, molecules, acoustics,[3] or even mechanical or thermal[4] elements.
Logic gates can be cascaded in the same way that Boolean functions can be composed, allowing the construction of a physical model of all of Boolean logic, and therefore, all of the algorithms and mathematics that can be described with Boolean logic. Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory, all the way up through complete microprocessors,[5] which may contain more than 100 million logic gates.
Compound logic gates AND-OR-Invert (AOI) and OR-AND-Invert (OAI) are often employed in circuit design because their construction using MOSFETs is simpler and more efficient than the sum of the individual gates.[6]
The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), influenced by the ancient I Ching's binary system.[7][8] Leibniz established that using the binary system combined the principles of arithmetic and logic.
The analytical engine devised by Charles Babbage in 1837 used mechanical logic gates based on gears.[9]
In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits.[10] Early electromechanical computers were constructed from switches and relay logic rather than the later innovations of vacuum tubes (thermionic valves) or transistors (from which later electronic computers were constructed). Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit,[11] received part of the 1954 Nobel Prize in Physics for the first modern electronic AND gate in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935 to 1938).
From 1934 to 1936, NEC engineer Akira Nakashima, Claude Shannon and Victor Shestakov introduced switching circuit theory in a series of papers showing that two-valued Boolean algebra, which they discovered independently, can describe the operation of switching circuits.[12][13][14][15] Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Switching circuit theory became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II, with theoretical rigor superseding the ad hoc methods that had prevailed previously.[15]
In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer; their concept forms the basis of CMOS technology today.[16] In 1957, Frosch and Derick were able to manufacture PMOS and NMOS planar gates.[17] Later, a team at Bell Labs demonstrated a working MOSFET with PMOS and NMOS gates.[18] Both types were later combined and adapted into complementary MOS (CMOS) logic by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963.[19]
There are two sets of symbols for elementary logic gates in common use, both defined in ANSI/IEEE Std 91-1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on traditional schematics, is used for simple drawings and derives from United States Military Standard MIL-STD-806 of the 1950s and 1960s.[20] It is sometimes unofficially described as "military", reflecting its origin. The "rectangular shape" set, based on ANSI Y32.14 and other early industry standards as later refined by IEEE and IEC, has rectangular outlines for all types of gate and allows representation of a much wider range of devices than is possible with the traditional symbols.[21] The IEC standard, IEC 60617-12, has been adopted by other standards, such as EN 60617-12:1999 in Europe, BS EN 60617-12:1999 in the United Kingdom, and DIN EN 60617-12:1998 in Germany.
The mutual goal of IEEE Std 91-1984 and IEC 617-12 was to provide a uniform method of describing the complex logic functions of digital circuits with schematic symbols. These functions were more complex than simple AND and OR gates; they could range from medium-scale circuits such as a 4-bit counter to large-scale circuits such as a microprocessor.
IEC 617-12 and its renumbered successor IEC 60617-12 do not explicitly show the "distinctive shape" symbols, but do not prohibit them.[21]These are, however, shown in ANSI/IEEE Std 91 (and 91a) with this note: "The distinctive-shape symbol is, according to IEC Publication 617, Part 12, not preferred, but is not considered to be in contradiction to that standard." IEC 60617-12 correspondingly contains the note (Section 2.1) "Although non-preferred, the use of other symbols recognized by official national standards, that is distinctive shapes in place of symbols [list of basic gates], shall not be considered to be in contradiction with this standard. Usage of these other symbols in combination to form complex symbols (for example, use as embedded symbols) is discouraged." This compromise was reached between the respective IEEE and IEC working groups to permit the IEEE and IEC standards to be in mutual compliance with one another.
In the 1980s, schematics were the predominant method to design both circuit boards and custom ICs known as gate arrays. Today, custom ICs and the field-programmable gate array are typically designed with hardware description languages (HDL) such as Verilog or VHDL.
By use of De Morgan's laws, an AND function is identical to an OR function with negated inputs and outputs. Likewise, an OR function is identical to an AND function with negated inputs and outputs. A NAND gate is equivalent to an OR gate with negated inputs, and a NOR gate is equivalent to an AND gate with negated inputs.
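These De Morgan equivalences can be verified over the full truth table; a brief Python sketch:

```python
# De Morgan equivalences stated above, checked exhaustively:
# NAND(A, B) == OR(NOT A, NOT B)  and  NOR(A, B) == AND(NOT A, NOT B).
def nand(a, b):
    return not (a and b)

def nor(a, b):
    return not (a or b)

for a in (False, True):
    for b in (False, True):
        assert nand(a, b) == ((not a) or (not b))    # NAND = negated-input OR
        assert nor(a, b) == ((not a) and (not b))    # NOR = negated-input AND
```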
This leads to an alternative set of symbols for basic gates that use the opposite core symbol (AND or OR) but with the inputs and outputs negated. Use of these alternative symbols can make logic circuit diagrams much clearer and help to show accidental connection of an active-high output to an active-low input or vice versa. Any connection that has logic negations at both ends can be replaced by a negationless connection and a suitable change of gate, or vice versa. Any connection that has a negation at one end and no negation at the other can be made easier to interpret by instead using the De Morgan equivalent symbol at either of the two ends. When negation or polarity indicators on both ends of a connection match, there is no logic negation in that path (effectively, bubbles "cancel"), making it easier to follow logic states from one symbol to the next. This is commonly seen in real logic diagrams – thus the reader must not get into the habit of associating the shapes exclusively as OR or AND shapes, but also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated.
A De Morgan symbol can show more clearly a gate's primary logical purpose and the polarity of its nodes that are considered in the "signaled" (active, on) state. Consider the simplified case where a two-input NAND gate is used to drive a motor when either of its inputs are brought low by a switch. The "signaled" state (motor on) occurs when either one OR the other switch is on. Unlike a regular NAND symbol, which suggests AND logic, the De Morgan version, a two negative-input OR gate, correctly shows that OR is of interest. The regular NAND symbol has a bubble at the output and none at the inputs (the opposite of the states that will turn the motor on), but the De Morgan symbol shows both inputs and output in the polarity that will drive the motor.
De Morgan's theorem is most commonly used to implement logic gates as combinations of only NAND gates, or as combinations of only NOR gates, for economic reasons.
Output comparison of various logic gates:
Charles Sanders Peirce (during 1880–1881) showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, but his work on it was unpublished until 1933.[24] The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called the Sheffer stroke; the logical NOR is sometimes called Peirce's arrow.[25] Consequently, these gates are sometimes called universal logic gates.[26]
Logic gates can also be used to hold a state, allowing data storage. A storage element can be constructed by connecting several gates in a "latch" circuit. Latching circuitry is used in static random-access memory. More complicated designs that use clock signals and that change only on a rising or falling edge of the clock are called edge-triggered "flip-flops". Formally, a flip-flop is called a bistable circuit, because it has two stable states which it can maintain indefinitely. The combination of multiple flip-flops in parallel, to store a multiple-bit value, is known as a register. When using any of these gate setups the overall system has memory; it is then called a sequential logic system, since its output can be influenced by its previous state(s), i.e. by the sequence of input states. In contrast, the output from combinational logic is purely a combination of its present inputs, unaffected by the previous input and output states.
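The latch behavior described above can be illustrated with a deliberately simplified model of two cross-coupled NOR gates (a sketch, not a timing-accurate simulation; the settling loop and function names are assumptions for illustration):

```python
# Cross-coupled NOR "latch" sketch: Q = NOR(R, Qbar), Qbar = NOR(S, Q).
# We iterate the two gate equations a few times until the state settles.
def nor(a, b):
    return not (a or b)

def sr_latch(s, r, q=False, qbar=True):
    for _ in range(4):                          # settle to a stable state
        q, qbar = nor(r, qbar), nor(s, q)
    return q, qbar

assert sr_latch(s=True, r=False) == (True, False)     # set
assert sr_latch(s=False, r=True) == (False, True)     # reset
# With S = R = 0 the latch holds its previous state -- this is the memory:
assert sr_latch(False, False, q=True, qbar=False) == (True, False)
```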
These logic circuits are used in computer memory. They vary in performance, based on factors of speed, complexity, and reliability of storage, and many different types of designs are used based on the application.
A functionally complete logic system may be composed of relays, valves (vacuum tubes), or transistors.
Electronic logic gates differ significantly from their relay-and-switch equivalents. They are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate.
For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400 series by Texas Instruments, the CMOS 4000 series by RCA, and their more recent descendants. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack many mixed logic gates into a single integrated circuit. The field-programmable nature of programmable logic devices such as FPGAs has reduced the 'hard' property of hardware; it is now possible to change the logic design of a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware implementation of a logic system to be changed.
An important advantage of standardized integrated circuit logic families, such as the 7400 and 4000 families, is that they can be cascaded. This means that the output of one gate can be wired to the inputs of one or several other gates, and so on. Systems with varying degrees of complexity can be built without great concern of the designer for the internal workings of the gates, provided the limitations of each integrated circuit are considered.
The output of one gate can only drive a finite number of inputs to other gates, a number called the 'fan-out limit'. Also, there is always a delay, called the 'propagation delay', from a change in input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed synchronous circuits. Additional delay can be caused when many inputs are connected to an output, due to the distributed capacitance of all the inputs and wiring and the finite amount of current that each output can provide.
There are several logic families with different characteristics (power consumption, speed, cost, size) such as: RDL (resistor–diode logic), RTL (resistor–transistor logic), DTL (diode–transistor logic), TTL (transistor–transistor logic) and CMOS. There are also sub-variants, e.g. standard CMOS logic vs. advanced types that still use CMOS technology, but with some optimizations for avoiding loss of speed due to slower PMOS transistors.
The simplest family of logic gates uses bipolar transistors, and is called resistor–transistor logic (RTL). Unlike simple diode logic gates (which do not have a gain element), RTL gates can be cascaded indefinitely to produce more complex logic functions. RTL gates were used in early integrated circuits. For higher speed and better density, the resistors used in RTL were replaced by diodes, resulting in diode–transistor logic (DTL). Transistor–transistor logic (TTL) then supplanted DTL.
As integrated circuits became more complex, bipolar transistors were replaced with smaller field-effect transistors (MOSFETs); see PMOS and NMOS. To reduce power consumption still further, most contemporary chip implementations of digital systems now use CMOS logic. CMOS uses complementary (both n-channel and p-channel) MOSFET devices to achieve a high speed with low power dissipation.
Other types of logic gates include, but are not limited to:[27]
A three-state logic gate is a type of logic gate that can have three different outputs: high (H), low (L) and high-impedance (Z). The high-impedance state plays no role in the logic, which is strictly binary. These devices are used on buses of the CPU to allow multiple chips to send data. A group of three-state outputs driving a line with a suitable control circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in cards.
In electronics, a high output would mean the output is sourcing current from the positive power terminal (positive voltage). A low output would mean the output is sinking current to the negative power terminal (zero voltage). High impedance would mean that the output is effectively disconnected from the circuit.
Non-electronic implementations are varied, though few of them are used in practical applications. Many early electromechanical digital computers, such as the Harvard Mark I, were built from relay logic gates, using electro-mechanical relays. Logic gates can be made using pneumatic devices, such as the Sorteberg relay, or mechanical logic gates, including on a molecular scale.[29] Various types of fundamental logic gates have been constructed using molecules (molecular logic gates), which are based on chemical inputs and spectroscopic outputs.[30] Logic gates have been made out of DNA (see DNA nanotechnology)[31] and used to create a computer called MAYA (see MAYA-II). Logic gates can be made from quantum-mechanical effects; see quantum logic gate. Photonic logic gates use nonlinear optical effects.
In principle, any method that leads to a gate that is functionally complete (for example, either a NOR or a NAND gate) can be used to make any kind of digital logic circuit. Note that the use of 3-state logic for bus systems is not needed, and can be replaced by digital multiplexers, which can be built using only simple logic gates (such as NAND gates, NOR gates, or AND and OR gates).
https://en.wikipedia.org/wiki/Logic_gate
Gauss's lemma in number theory gives a condition for an integer to be a quadratic residue. Although it is not useful computationally, it has theoretical significance, being involved in some proofs of quadratic reciprocity.
It made its first appearance in Carl Friedrich Gauss's third proof (1808)[1]: 458–462 of quadratic reciprocity, and he proved it again in his fifth proof (1818).[1]: 496–501
For any odd prime p, let a be an integer that is coprime to p.
Consider the integers a, 2a, 3a, …, ((p − 1)/2)·a and their least positive residues modulo p. These residues are all distinct, so there are (p − 1)/2 of them.
Let n be the number of these residues that are greater than p/2. Then (a/p) = (−1)^n, where (ap){\displaystyle \left({\frac {a}{p}}\right)} is the Legendre symbol.
Taking p = 11 and a = 7, the relevant sequence of integers is 7, 14, 21, 28, 35.
After reduction modulo 11, this sequence becomes 7, 3, 10, 6, 2.
Three of these integers are larger than 11/2 (namely 6, 7 and 10), so n = 3. Correspondingly, Gauss's lemma predicts that (7/11) = (−1)^3 = −1.
This is indeed correct, because 7 is not a quadratic residue modulo 11.
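The lemma and the worked example can be checked directly; a short Python sketch (illustrative function names) computes the Legendre symbol both by Gauss's lemma and by Euler's criterion and compares them:

```python
# Gauss's lemma computed directly: count how many of the least positive
# residues of a, 2a, ..., ((p-1)/2)a exceed p/2, then compare with
# Euler's criterion a^((p-1)/2) mod p.
def legendre_gauss(a, p):
    n = sum(1 for k in range(1, (p - 1) // 2 + 1) if (k * a) % p > p / 2)
    return (-1) ** n

def legendre_euler(a, p):
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

assert legendre_gauss(7, 11) == -1          # the worked example: n = 3
for p in (3, 5, 7, 11, 13):
    for a in range(1, p):
        assert legendre_gauss(a, p) == legendre_euler(a, p)
```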
The above sequence of residues 7, 3, 10, 6, 2 may also be written −4, 3, −1, −5, 2.
In this form, the integers larger than 11/2 appear as negative numbers. It is also apparent that the absolute values of the residues are a permutation of the residues 1, 2, 3, 4, 5.
A fairly simple proof,[1]: 458–462 reminiscent of one of the simplest proofs of Fermat's little theorem, can be obtained by evaluating the product a · 2a · 3a ⋯ ((p − 1)/2)·a modulo p in two different ways. On one hand, it is equal to a^((p−1)/2) · (1 · 2 · 3 ⋯ (p − 1)/2).
The second evaluation takes more work. If x is a nonzero residue modulo p, let us define the "absolute value" of x to be
Since n counts those multiples ka which are in the latter range, and since for those multiples, −ka is in the first range, we have
Now observe that the values |ra| are distinct for r = 1, 2, …, (p − 1)/2. Indeed, we have
because a is coprime to p.
This gives r = s, since r and s are positive least residues. But there are exactly (p − 1)/2 of them, so their values are a rearrangement of the integers 1, 2, …, (p − 1)/2. Therefore,
Comparing with our first evaluation, we may cancel out the nonzero factor
and we are left with
This is the desired result, because by Euler's criterion the left-hand side is just an alternative expression for the Legendre symbol (ap){\displaystyle \left({\frac {a}{p}}\right)}.
For any odd prime p, let a be an integer that is coprime to p.
Let I⊂(Z/pZ)×{\displaystyle I\subset (\mathbb {Z} /p\mathbb {Z} )^{\times }} be a set such that (Z/pZ)×{\displaystyle (\mathbb {Z} /p\mathbb {Z} )^{\times }} is the disjoint union of I{\displaystyle I} and −I={−i:i∈I}{\displaystyle -I=\{-i:i\in I\}}.
Then (ap)=(−1)t{\displaystyle \left({\frac {a}{p}}\right)=(-1)^{t}}, where t=#{j∈I:aj∈−I}{\displaystyle t=\#\{j\in I:aj\in -I\}}.[2]
In the original statement, I={1,2,…,p−12}{\displaystyle I=\{1,2,\dots ,{\frac {p-1}{2}}\}}.
The proof is almost the same.
Gauss's lemma is used in many,[3]: Ch. 1[3]: 9 but by no means all, of the known proofs of quadratic reciprocity.
For example, Gotthold Eisenstein[3]: 236 used Gauss's lemma to prove that if p is an odd prime then
and used this formula to prove quadratic reciprocity. By using elliptic rather than circular functions, he proved the cubic and quartic reciprocity laws.[3]: Ch. 8
Leopold Kronecker[3]: Ex. 1.34 used the lemma to show that
Switching p and q immediately gives quadratic reciprocity.
It is also used in what are probably the simplest proofs of the "second supplementary law" (2/p) = (−1)^((p² − 1)/8).
Generalizations of Gauss's lemma can be used to compute higher power residue symbols. In his second monograph on biquadratic reciprocity,[4]: §§69–71 Gauss used a fourth-power lemma to derive the formula for the biquadratic character of 1 + i in Z[i], the ring of Gaussian integers. Subsequently, Eisenstein used third- and fourth-power versions to prove cubic and quartic reciprocity.[3]: Ch. 8
Let k be an algebraic number field with ring of integers Ok,{\displaystyle {\mathcal {O}}_{k},} and let p⊂Ok{\displaystyle {\mathfrak {p}}\subset {\mathcal {O}}_{k}} be a prime ideal. The ideal norm Np{\displaystyle \mathrm {N} {\mathfrak {p}}} of p{\displaystyle {\mathfrak {p}}} is defined as the cardinality of the residue class ring. Since p{\displaystyle {\mathfrak {p}}} is prime this is a finite field Ok/p{\displaystyle {\mathcal {O}}_{k}/{\mathfrak {p}}}, so the ideal norm is Np=|Ok/p|{\displaystyle \mathrm {N} {\mathfrak {p}}=|{\mathcal {O}}_{k}/{\mathfrak {p}}|}.
Assume that there is a primitive nth root of unity ζn∈Ok,{\displaystyle \zeta _{n}\in {\mathcal {O}}_{k},} and that n and p{\displaystyle {\mathfrak {p}}} are coprime (i.e. n∉p{\displaystyle n\not \in {\mathfrak {p}}}). Then no two distinct nth roots of unity can be congruent modulo p{\displaystyle {\mathfrak {p}}}.
This can be proved by contradiction, beginning by assuming that ζnr≡ζns{\displaystyle \zeta _{n}^{r}\equiv \zeta _{n}^{s}} mod p{\displaystyle {\mathfrak {p}}}, 0 < r < s ≤ n. Let t = s − r such that ζnt≡1{\displaystyle \zeta _{n}^{t}\equiv 1} mod p{\displaystyle {\mathfrak {p}}}, and 0 < t < n. From the definition of roots of unity,
and dividing by x − 1 gives
Letting x = 1 and taking residues mod p{\displaystyle {\mathfrak {p}}},
Since n and p{\displaystyle {\mathfrak {p}}} are coprime, n≢0{\displaystyle n\not \equiv 0} mod p,{\displaystyle {\mathfrak {p}},} but under the assumption, one of the factors on the right must be zero. Therefore, the assumption that two distinct roots are congruent is false.
Thus the residue classes of Ok/p{\displaystyle {\mathcal {O}}_{k}/{\mathfrak {p}}} containing the powers of ζn are a subgroup of order n of its (multiplicative) group of units, (Ok/p)×=Ok/p−{0}.{\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }={\mathcal {O}}_{k}/{\mathfrak {p}}-\{0\}.} Therefore, the order of (Ok/p)×{\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }} is a multiple of n, and
There is an analogue of Fermat's theorem in Ok{\displaystyle {\mathcal {O}}_{k}}: if α∈Ok{\displaystyle \alpha \in {\mathcal {O}}_{k}} with α∉p{\displaystyle \alpha \not \in {\mathfrak {p}}}, then[3]: Ch. 4.1
and since Np≡1{\displaystyle \mathrm {N} {\mathfrak {p}}\equiv 1} mod n,
is well-defined and congruent to a unique nth root of unity ζns.
This root of unity is called the nth-power residue symbol for Ok,{\displaystyle {\mathcal {O}}_{k},} and is denoted by
It can be proven that[3]: Prop. 4.1
if and only if there is an η∈Ok{\displaystyle \eta \in {\mathcal {O}}_{k}} such that α ≡ η^n mod p{\displaystyle {\mathfrak {p}}}.
Let μn={1,ζn,ζn2,…,ζnn−1}{\displaystyle \mu _{n}=\{1,\zeta _{n},\zeta _{n}^{2},\dots ,\zeta _{n}^{n-1}\}} be the multiplicative group of the nth roots of unity, and let A={a1,a2,…,am}{\displaystyle A=\{a_{1},a_{2},\dots ,a_{m}\}} be representatives of the cosets of (Ok/p)×/μn.{\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }/\mu _{n}.} Then A is called a 1/n system mod p.{\displaystyle {\mathfrak {p}}.}[3]: Ch. 4.2
In other words, there are mn=Np−1{\displaystyle mn=\mathrm {N} {\mathfrak {p}}-1} numbers in the set Aμ={aiζnj:1≤i≤m,0≤j≤n−1},{\displaystyle A\mu =\{a_{i}\zeta _{n}^{j}\;:\;1\leq i\leq m,\;\;\;0\leq j\leq n-1\},} and this set constitutes a representative set for (Ok/p)×.{\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }.}
The numbers 1, 2, … (p − 1)/2, used in the original version of the lemma, are a 1/2 system (mod p).
Constructing a 1/n system is straightforward: let M be a representative set for (Ok/p)×.{\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }.} Pick any a1∈M{\displaystyle a_{1}\in M} and remove the numbers congruent to a1,a1ζn,a1ζn2,…,a1ζnn−1{\displaystyle a_{1},a_{1}\zeta _{n},a_{1}\zeta _{n}^{2},\dots ,a_{1}\zeta _{n}^{n-1}} from M. Pick a2 from M and remove the numbers congruent to a2,a2ζn,a2ζn2,…,a2ζnn−1{\displaystyle a_{2},a_{2}\zeta _{n},a_{2}\zeta _{n}^{2},\dots ,a_{2}\zeta _{n}^{n-1}}. Repeat until M is exhausted. Then {a1, a2, …, am} is a 1/n system mod p.{\displaystyle {\mathfrak {p}}.}
Gauss's lemma may be extended to the nth power residue symbol as follows.[3]: Prop. 4.3 Let ζn∈Ok{\displaystyle \zeta _{n}\in {\mathcal {O}}_{k}} be a primitive nth root of unity, p⊂Ok{\displaystyle {\mathfrak {p}}\subset {\mathcal {O}}_{k}} a prime ideal, γ∈Ok,nγ∉p,{\displaystyle \gamma \in {\mathcal {O}}_{k},\;\;n\gamma \not \in {\mathfrak {p}},} (i.e. p{\displaystyle {\mathfrak {p}}} is coprime to both γ and n) and let A = {a1, a2, …, am} be a 1/n system mod p.{\displaystyle {\mathfrak {p}}.}
Then for each i, 1 ≤ i ≤ m, there are integers π(i), unique (mod m), and b(i), unique (mod n), such that
and the nth-power residue symbol is given by the formula
The classical lemma for the quadratic Legendre symbol is the special case n = 2, ζ2 = −1, A = {1, 2, …, (p − 1)/2}, b(k) = 1 if ak > p/2, b(k) = 0 if ak < p/2.
The proof of the nth-power lemma uses the same ideas that were used in the proof of the quadratic lemma.
The existence of the integers π(i) and b(i), and their uniqueness (mod m) and (mod n), respectively, come from the fact that Aμ is a representative set.
Assume that π(i) = π(j) = p, i.e.
and
Then
Because γ and p{\displaystyle {\mathfrak {p}}} are coprime, both sides can be divided by γ, giving
which, since A is a 1/n system, implies s = r and i = j, showing that π is a permutation of the set {1, 2, …, m}.
Then on the one hand, by the definition of the power residue symbol,
and on the other hand, since π is a permutation,
so
and since for all 1 ≤ i ≤ m, ai and p{\displaystyle {\mathfrak {p}}} are coprime, a1a2…am can be cancelled from both sides of the congruence,
and the theorem follows from the fact that no two distinct nth roots of unity can be congruent (mod p{\displaystyle {\mathfrak {p}}}).
Let G be the multiplicative group of nonzero residue classes in Z/pZ, and let H be the subgroup {+1, −1}. Consider the following coset representatives of H in G,
Applying the machinery of the transfer to this collection of coset representatives, we obtain the transfer homomorphism
which turns out to be the map that sends a to (−1)^n, where a and n are as in the statement of the lemma. Gauss's lemma may then be viewed as a computation that explicitly identifies this homomorphism as being the quadratic residue character.
https://en.wikipedia.org/wiki/Gauss%27s_lemma_(number_theory)
In number theory, Zolotarev's lemma states that the Legendre symbol
for an integer a modulo an odd prime number p, where p does not divide a, can be computed as the sign of a permutation:
where ε denotes the signature of a permutation and πa is the permutation of the nonzero residue classes mod p induced by multiplication by a.
For example, take a = 2 and p = 7. The nonzero squares mod 7 are 1, 2, and 4, so (2|7) = 1 and (6|7) = −1. Multiplication by 2 on the nonzero numbers mod 7 has the cycle decomposition (1,2,4)(3,6,5), so the sign of this permutation is 1, which is (2|7). Multiplication by 6 on the nonzero numbers mod 7 has cycle decomposition (1,6)(2,5)(3,4), whose sign is −1, which is (6|7).
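The lemma is easy to test computationally. A Python sketch (the `perm_sign` helper is illustrative) computes the sign of the permutation x ↦ ax on the nonzero residues mod p from its cycle decomposition, reproducing both examples:

```python
# Sign of the permutation x -> a*x (mod p) on {1, ..., p-1}:
# a cycle of even length contributes a factor of -1.
def perm_sign(a, p):
    seen, sign = set(), 1
    for x in range(1, p):
        if x in seen:
            continue
        length, y = 0, x
        while y not in seen:               # trace one cycle of x -> a*x mod p
            seen.add(y)
            y = (a * y) % p
            length += 1
        if length % 2 == 0:                # even-length cycle is an odd permutation
            sign = -sign
    return sign

assert perm_sign(2, 7) == 1                # (2|7) = 1: cycles (1 2 4)(3 6 5)
assert perm_sign(6, 7) == -1               # (6|7) = -1: (1 6)(2 5)(3 4)
```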
In general, for any finite group G of order n, it is straightforward to determine the signature of the permutation πg made by left-multiplication by the element g of G. The permutation πg will be even, unless there is an odd number of orbits of even size. Assuming n even, therefore, the condition for πg to be an odd permutation, when g has order k, is that n/k should be odd, or that the subgroup ⟨g⟩ generated by g should have odd index.
We will apply this to the group of nonzero numbers mod p, which is a cyclic group of order p − 1. The jth power of a primitive root modulo p will have index the greatest common divisor i = gcd(j, p − 1).
The condition for a nonzero number mod p to be a quadratic non-residue is to be an odd power of a primitive root.
The lemma therefore comes down to saying that i is odd when j is odd, which is true a fortiori, and j is odd when i is odd, which is true because p − 1 is even (p is odd).
Zolotarev's lemma can be deduced easily from Gauss's lemma and vice versa. The example
i.e. the Legendre symbol (a/p) with a = 3 and p = 11, will illustrate how the proof goes. Start with the set {1, 2, …, p − 1} arranged as a matrix of two rows such that the sum of the two elements in any column is zero mod p, say:
Apply the permutation U:x↦ax(modp){\displaystyle U:x\mapsto ax{\pmod {p}}}:
The columns still have the property that the sum of two elements in one column is zero mod p. Now apply a permutation V which swaps any pairs in which the upper member was originally a lower member:
Finally, apply a permutation W which gets back the original matrix:
We have W⁻¹ = VU. Zolotarev's lemma says (a/p) = 1 if and only if the permutation U is even. Gauss's lemma says (a/p) = 1 iff V is even. But W is even, so the two lemmas are equivalent for the given (but arbitrary) a and p.
This interpretation of the Legendre symbol as the sign of a permutation can be extended to the Jacobi symbol
where a and n are relatively prime integers with odd n > 0: a is invertible mod n, so multiplication by a on Z/nZ is a permutation, and a generalization of Zolotarev's lemma is that the Jacobi symbol above is the sign of this permutation.
For example, multiplication by 2 on Z/21Z has cycle decomposition (0)(1,2,4,8,16,11)(3,6,12)(5,10,20,19,17,13)(7,14)(9,18,15), so the sign of this permutation is (1)(−1)(1)(−1)(−1)(1) = −1 and the Jacobi symbol (2|21) is −1. (Note that multiplication by 2 on the units mod 21 is a product of two 6-cycles, so its sign is 1. Thus it is important to use all integers mod n, and not just the units mod n, to define the right permutation.)
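The same cycle-sign computation, applied to all of Z/nZ as the text requires (not just the units), reproduces the Jacobi symbol example above; a Python sketch (illustrative function name):

```python
# Jacobi symbol (a|n) as the sign of the permutation x -> a*x on ALL of
# Z/nZ, computed from the cycle decomposition; 0 is always a fixed point.
def jacobi_via_sign(a, n):
    seen, sign = set(), 1
    for x in range(n):
        if x in seen:
            continue
        length, y = 0, x
        while y not in seen:
            seen.add(y)
            y = (a * y) % n
            length += 1
        if length % 2 == 0:                # even-length cycle flips the sign
            sign = -sign
    return sign

assert jacobi_via_sign(2, 21) == -1        # the example above: sign -1
assert jacobi_via_sign(2, 7) == 1          # agrees with the Legendre symbol
```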
When n = p is an odd prime and a is not divisible by p, multiplication by a fixes 0 mod p, so the signs of multiplication by a on all numbers mod p and on the units mod p are the same. But for composite n that is not the case, as we see in the example above.
This lemma was introduced by Yegor Ivanovich Zolotarev in an 1872 proof of quadratic reciprocity.
https://en.wikipedia.org/wiki/Zolotarev%27s_lemma
In mathematics, a character sum is a sum ∑χ(n){\textstyle \sum \chi (n)} of values of a Dirichlet character χ modulo N, taken over a given range of values of n. Such sums are basic in a number of questions, for example in the distribution of quadratic residues, and in particular in the classical question of finding an upper bound for the least quadratic non-residue modulo N. Character sums are often closely linked to exponential sums by the Gauss sums (this is like a finite Mellin transform).
Assume χ is a non-principal Dirichlet character to the modulus N.
The sum taken over all residue classes mod N is then zero. This means that the cases of interest will be sums Σ{\displaystyle \Sigma } over relatively short ranges, of length R < N say,
A fundamental improvement on the trivial estimate Σ=O(N){\displaystyle \Sigma =O(N)} is the Pólya–Vinogradov inequality, established independently by George Pólya and I. M. Vinogradov in 1918,[1][2] stating in big O notation that
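For a concrete feel for the inequality, a Python sketch (assuming the explicit-constant form |Σ| ≤ √N log N, and taking the Legendre symbol modulo the arbitrarily chosen prime 1009 as the non-principal character) checks every initial segment:

```python
# Numerical check of the Pólya–Vinogradov bound for the quadratic
# character mod p (the Legendre symbol), over every initial segment.
import math

def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)            # Euler's criterion
    return -1 if r == p - 1 else r

p = 1009                                    # an arbitrary odd prime modulus
s, worst = 0, 0
for n in range(1, p):
    s += legendre(n, p)                     # running character sum
    worst = max(worst, abs(s))

# Explicit-constant form of the inequality (an assumption for this check):
assert worst <= math.sqrt(p) * math.log(p)
```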
Assuming the generalized Riemann hypothesis, Hugh Montgomery and R. C. Vaughan have shown[3] that there is the further improvement
Another significant type of character sum is that formed by
for some function F, generally a polynomial. A classical result is the case of a quadratic, for example,
and χ a Legendre symbol. Here the sum can be evaluated (as −1), a result that is connected to the local zeta-function of a conic section.
More generally, such sums for the Jacobi symbol relate to local zeta-functions of elliptic curves and hyperelliptic curves; this means that by means of André Weil's results, for N = p a prime number, there are non-trivial bounds
The constant implicit in the notation is linear in the genus of the curve in question, and so (in the Legendre symbol or hyperelliptic case) can be taken as the degree of F. (More general results, for other values of N, can be obtained starting from there.)
Weil's results also led to the Burgess bound,[4] applying to give non-trivial results beyond Pólya–Vinogradov, for R a power of N greater than 1/4.
Assume the modulus N is a prime.
for any integer r ≥ 3.[5]
|
https://en.wikipedia.org/wiki/Character_sum
|
In number theory, the law of quadratic reciprocity is a theorem about modular arithmetic that gives conditions for the solvability of quadratic equations modulo prime numbers. Due to its subtlety, it has many formulations, but the most standard statement is:
Law of quadratic reciprocity — Let p and q be distinct odd prime numbers, and define the Legendre symbol as

(q/p) = 1 if x² ≡ q (mod p) has a solution, and −1 otherwise.

Then

(p/q)(q/p) = (−1)^{((p−1)/2)·((q−1)/2)}.
This law, together with its supplements, allows the easy calculation of any Legendre symbol, making it possible to determine whether there is an integer solution for any quadratic equation of the form x² ≡ a (mod p) for an odd prime p; that is, to determine the "perfect squares" modulo p. However, this is a non-constructive result: it gives no help at all for finding a specific solution; for this, other methods are required. For example, in the case p ≡ 3 (mod 4), Euler's criterion gives an explicit formula for the "square roots" modulo p of a quadratic residue a, namely

x ≡ ±a^{(p+1)/4} (mod p);

indeed,

(±a^{(p+1)/4})² = a^{(p+1)/2} = a · a^{(p−1)/2} ≡ a (mod p),

where the last congruence holds by Euler's criterion since a is a quadratic residue. This formula only works if it is known in advance that a is a quadratic residue, which can be checked using the law of quadratic reciprocity.
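As a small sketch (the function name is illustrative, not from the article), the explicit square-root formula for p ≡ 3 (mod 4) can be implemented in a few lines, with Euler's criterion used as the residue check:

```python
def sqrt_mod_p3(a, p):
    """Square root of a quadratic residue a modulo a prime p ≡ 3 (mod 4),
    using the explicit formula x = a^((p+1)/4) mod p."""
    assert p % 4 == 3
    # Euler's criterion: a is a residue iff a^((p-1)/2) ≡ 1 (mod p)
    assert pow(a, (p - 1) // 2, p) == 1, "a is not a quadratic residue mod p"
    return pow(a, (p + 1) // 4, p)

x = sqrt_mod_p3(2, 7)
print(x, x * x % 7)  # 4 2, i.e. 4² ≡ 2 (mod 7)
```

Python's three-argument `pow` performs modular exponentiation efficiently, so this works for cryptographic-size primes as well.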
The quadratic reciprocity theorem was conjectured by Leonhard Euler and Adrien-Marie Legendre and first proved by Carl Friedrich Gauss,[1] who referred to it as the "fundamental theorem" in his Disquisitiones Arithmeticae and his papers, writing
Privately, Gauss referred to it as the "golden theorem".[2] He published six proofs for it, and two more were found in his posthumous papers. There are now over 240 published proofs.[3] The shortest known proof is included below, together with short proofs of the law's supplements (the Legendre symbols of −1 and 2).
Generalizing thereciprocity lawto higher powers has been a leading problem in mathematics, and has been crucial to the development of much of the machinery ofmodern algebra, number theory, andalgebraic geometry, culminating inArtin reciprocity,class field theory, and theLanglands program.
Quadratic reciprocity arises from certain subtle factorization patterns involving perfect square numbers. In this section, we give examples which lead to the general case.
Consider the polynomial f(n) = n² − 5 and its values for n ∈ ℕ. The prime factorizations of these values are given as follows:
The prime factors p dividing f(n) are p = 2, 5, and every prime whose final digit is 1 or 9; no primes ending in 3 or 7 ever appear. Now, p is a prime factor of some n² − 5 whenever n² − 5 ≡ 0 (mod p), i.e. whenever n² ≡ 5 (mod p), i.e. whenever 5 is a quadratic residue modulo p. This happens for p = 2, 5 and those primes with p ≡ 1, 4 (mod 5), and the latter numbers 1 = (±1)² and 4 = (±2)² are precisely the quadratic residues modulo 5. Therefore, except for p = 2, 5, we have that 5 is a quadratic residue modulo p iff p is a quadratic residue modulo 5.
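This pattern can be observed empirically. The sketch below (helper name `prime_factors` is illustrative) collects the prime factors of n² − 5 for small n and checks that, apart from 2 and 5, they all end in 1 or 9:

```python
def prime_factors(m):
    """Set of prime factors of |m| by trial division."""
    m = abs(m)
    out = set()
    d = 2
    while d * d <= m:
        while m % d == 0:
            out.add(d)
            m //= d
        d += 1
    if m > 1:
        out.add(m)
    return out

primes_seen = set()
for n in range(1, 200):
    primes_seen |= prime_factors(n * n - 5)

# Every prime factor other than 2 and 5 is ≡ ±1 (mod 5),
# i.e. its last decimal digit is 1 or 9.
for p in sorted(primes_seen - {2, 5}):
    assert p % 5 in (1, 4)
    assert p % 10 in (1, 9)
print(sorted(primes_seen)[:8])  # [2, 5, 11, 19, 29, 31, 41, 59]
```

No prime ending in 3 or 7 ever appears in the output, exactly as the text predicts.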
The law of quadratic reciprocity gives a similar characterization of prime divisors off(n)=n2−q{\displaystyle f(n)=n^{2}-q}for any primeq, which leads to a characterization for any integerq{\displaystyle q}.
Let p be an odd prime. A number modulo p is a quadratic residue whenever it is congruent to a square (mod p); otherwise it is a quadratic non-residue. ("Quadratic" can be dropped if it is clear from the context.) Here we exclude zero as a special case. Then, as a consequence of the fact that the multiplicative group of a finite field of order p is cyclic of order p − 1, the following statements hold:
For the avoidance of doubt, these statements donothold if the modulus is not prime.
For example, modulo 15 only 1 and 4 are quadratic residues among the eight units, rather than half of them.
Moreover, although 7 and 8 are quadratic non-residues, their product 7 × 8 = 56 ≡ 11 (mod 15) is also a quadratic non-residue, in contrast to the prime case.
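Both failures of the prime-modulus behaviour can be verified directly; `unit_residues` below is an illustrative helper, not a standard function:

```python
from math import gcd

def unit_residues(n):
    """Quadratic residues among the units of Z/nZ, plus the unit list."""
    units = [x for x in range(1, n) if gcd(x, n) == 1]
    return sorted({x * x % n for x in units}), units

res15, units15 = unit_residues(15)
print(res15)       # [1, 4]: only 2 residues among the 8 units, not half
print(7 * 8 % 15)  # 11, again a non-residue

res13, units13 = unit_residues(13)
print(len(res13), len(units13) // 2)  # 6 6: for a prime, exactly half
```

For the prime modulus 13 the residue count is exactly (p − 1)/2, as the cyclic-group argument guarantees.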
Quadratic residues appear as entries in the following table, indexed by the row number as modulus and column number as root:
This table is complete for odd primes less than 50. To check whether a number m is a quadratic residue mod one of these primes p, find a ≡ m (mod p) with 0 ≤ a < p. If a is in row p, then m is a residue (mod p); if a is not in row p of the table, then m is a nonresidue (mod p).
The quadratic reciprocity law is the statement that certain patterns found in the table are true in general.
Another way to organize the data is to see which primes are quadratic residues mod which other primes, as illustrated in the following table. The entry in row p, column q is R if q is a quadratic residue (mod p); if it is a nonresidue the entry is N.
If the row, or the column, or both, are ≡ 1 (mod 4) the entry is blue or green; if both row and column are ≡ 3 (mod 4), it is yellow or orange.
The blue and green entries are symmetric around the diagonal: The entry for row p, column q is R (resp. N) if and only if the entry at row q, column p, is R (resp. N).
The yellow and orange ones, on the other hand, are antisymmetric: The entry for row p, column q is R (resp. N) if and only if the entry at row q, column p, is N (resp. R).
The reciprocity law states that these patterns hold for allpandq.
Ordering the rows and columns mod 4 makes the pattern clearer.
The supplements provide solutions to specific cases of quadratic reciprocity. They are often quoted as partial results, without having to resort to the complete theorem.
Trivially 1 is a quadratic residue for all primes. The question becomes more interesting for −1. Examining the table, we find −1 in rows 5, 13, 17, 29, 37, and 41 but not in rows 3, 7, 11, 19, 23, 31, 43 or 47. The former set of primes are all congruent to 1 modulo 4, and the latter are congruent to 3 modulo 4.
Examining the table, we find 2 in rows 7, 17, 23, 31, 41, and 47, but not in rows 3, 5, 11, 13, 19, 29, 37, or 43. The former primes are all ≡ ±1 (mod 8), and the latter are all ≡ ±3 (mod 8). This leads to
−2 is in rows 3, 11, 17, 19, 41, 43, but not in rows 5, 7, 13, 23, 29, 31, 37, or 47. The former are ≡ 1 or ≡ 3 (mod 8), and the latter are ≡ 5, 7 (mod 8).
3 is in rows 11, 13, 23, 37, and 47, but not in rows 5, 7, 17, 19, 29, 31, 41, or 43. The former are ≡ ±1 (mod 12) and the latter are all ≡ ±5 (mod 12).
−3 is in rows 7, 13, 19, 31, 37, and 43 but not in rows 5, 11, 17, 23, 29, 41, or 47. The former are ≡ 1 (mod 3) and the latter ≡ 2 (mod 3).
Since the only residue (mod 3) is 1, we see that −3 is a quadratic residue modulo every prime which is a residue modulo 3.
5 is in rows 11, 19, 29, 31, and 41 but not in rows 3, 7, 13, 17, 23, 37, 43, or 47. The former are ≡ ±1 (mod 5) and the latter are ≡ ±2 (mod 5).
Since the only residues (mod 5) are ±1, we see that 5 is a quadratic residue modulo every prime which is a residue modulo 5.
−5 is in rows 3, 7, 23, 29, 41, 43, and 47 but not in rows 11, 13, 17, 19, 31, or 37. The former are ≡ 1, 3, 7, 9 (mod 20) and the latter are ≡ 11, 13, 17, 19 (mod 20).
The observations about −3 and 5 continue to hold: −7 is a residue modulopif and only ifpis a residue modulo 7, −11 is a residue modulopif and only ifpis a residue modulo 11, 13 is a residue (modp) if and only ifpis a residue modulo 13, etc. The more complicated-looking rules for the quadratic characters of 3 and −5, which depend upon congruences modulo 12 and 20 respectively, are simply the ones for −3 and 5 working with the first supplement.
The generalization of the rules for −3 and 5 is Gauss's statement of quadratic reciprocity.
Quadratic Reciprocity (Gauss's statement). If q ≡ 1 (mod 4), then the congruence x² ≡ p (mod q) is solvable if and only if x² ≡ q (mod p) is solvable. If q ≡ 3 (mod 4) and p ≡ 3 (mod 4), then the congruence x² ≡ p (mod q) is solvable if and only if x² ≡ −q (mod p) is solvable.
Quadratic Reciprocity (combined statement). Define q* = (−1)^{(q−1)/2} q. Then the congruence x² ≡ p (mod q) is solvable if and only if x² ≡ q* (mod p) is solvable.
Quadratic Reciprocity (Legendre's statement). If p or q are congruent to 1 modulo 4, then x² ≡ q (mod p) is solvable if and only if x² ≡ p (mod q) is solvable. If p and q are congruent to 3 modulo 4, then x² ≡ q (mod p) is solvable if and only if x² ≡ p (mod q) is not solvable.
The last is immediately equivalent to the modern form stated in the introduction above. It is a simple exercise to prove that Legendre's and Gauss's statements are equivalent – it requires no more than the first supplement and the facts about multiplying residues and nonresidues.
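The modern form of the law is easy to verify numerically. The sketch below (helper names are illustrative) computes Legendre symbols by Euler's criterion and checks the product formula (p/q)(q/p) = (−1)^{((p−1)/2)((q−1)/2)} for all odd primes below 100:

```python
def legendre(a, p):
    """Legendre symbol (a|p) by Euler's criterion: a^((p-1)/2) mod p."""
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def odd_primes(limit):
    return [p for p in range(3, limit)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

ps = odd_primes(100)
for p in ps:
    for q in ps:
        if p != q:
            lhs = legendre(p, q) * legendre(q, p)
            rhs = (-1) ** ((p - 1) // 2 * ((q - 1) // 2))
            assert lhs == rhs
print("quadratic reciprocity verified for all odd primes below 100")
```

The sign on the right is −1 exactly when both p and q are ≡ 3 (mod 4), matching Legendre's case split.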
What is apparently the shortest known proof was published by B. Veklych in the American Mathematical Monthly.[4]
The value of the Legendre symbol of −1 (used in the proof above) follows directly from Euler's criterion:

(−1/p) ≡ (−1)^{(p−1)/2} (mod p)

by Euler's criterion, but both sides of this congruence are numbers of the form ±1, so they must be equal.
Whether 2 is a quadratic residue can be concluded if we know the number of solutions of the equation x² + y² = 2 with x, y ∈ Z_p, which can be solved by standard methods. Namely, all its solutions where xy ≠ 0, x ≠ ±y can be grouped into octuplets of the form (±x, ±y), (±y, ±x), and what is left are four solutions of the form (±1, ±1) and possibly four additional solutions where x² = 2, y = 0 and x = 0, y² = 2, which exist precisely if 2 is a quadratic residue. That is, 2 is a quadratic residue precisely if the number of solutions of this equation is divisible by 8. And this equation can be solved in just the same way here as over the rational numbers: substitute x = a + 1, y = at + 1, where we demand that a ≠ 0 (leaving out the two solutions (1, ±1)), then the original equation transforms into

a = −2(1 + t)/(1 + t²).
Here t can have any value that does not make the denominator zero – for which there are 1 + (−1/p) possibilities (i.e. 2 if −1 is a residue, 0 if not) – and also does not make a zero, which excludes one more option, t = −1. Thus there are

p − 2 − (−1/p)

possibilities for t, and so together with the two excluded solutions there are overall p − (−1/p) solutions of the original equation. Therefore, 2 is a residue modulo p if and only if 8 divides p − (−1)^{(p−1)/2}. This is a reformulation of the condition stated above.
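The solution count and the divisibility-by-8 criterion in this argument can be checked by brute force for small primes (the helper name `count_solutions` is illustrative):

```python
def count_solutions(p):
    """Number of pairs (x, y) in Z_p x Z_p with x² + y² ≡ 2 (mod p)."""
    return sum((x * x + y * y) % p == 2
               for x in range(p) for y in range(p))

for p in [3, 5, 7, 11, 13, 17]:
    n = count_solutions(p)
    is_residue = pow(2, (p - 1) // 2, p) == 1   # Euler's criterion for (2|p)
    # 2 is a residue mod p exactly when 8 divides the solution count,
    assert (n % 8 == 0) == is_residue
    # and the count itself is p - (-1)^((p-1)/2), as derived in the text.
    assert n == p - (-1) ** ((p - 1) // 2)
print("solution-count criterion for (2|p) verified")
```

For p = 7, for instance, the count is 8 and 2 is indeed a residue (3² ≡ 2 mod 7); for p = 5 the count is 4 and 2 is a non-residue.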
The theorem was formulated in many ways before its modern form: Euler and Legendre did not have Gauss's congruence notation, nor did Gauss have the Legendre symbol.
In this article p and q always refer to distinct positive odd primes, and x and y to unspecified integers.
Fermat proved[5](or claimed to have proved)[6]a number of theorems about expressing a prime by a quadratic form:
He did not state the law of quadratic reciprocity, although the cases −1, ±2, and ±3 are easy deductions from these and others of his theorems.
He also claimed to have a proof that if the prime number p ends with 7 (in base 10) and the prime number q ends in 3, and p ≡ q ≡ 3 (mod 4), then

pq = x² + 5y².
Euler conjectured, and Lagrange proved, that[7]
Proving these and other statements of Fermat was one of the things that led mathematicians to the reciprocity theorem.
Translated into modern notation, Euler stated[8]that for distinct odd primespandq:
This is equivalent to quadratic reciprocity.
He could not prove it, but he did prove the second supplement.[9]
Fermat proved that if p is a prime number and a is an integer,

a^p ≡ a (mod p).

Thus if p does not divide a, then, using the non-obvious fact (see for example Ireland and Rosen below) that the residues modulo p form a field and therefore in particular the multiplicative group is cyclic, there can be at most two solutions to a quadratic equation:

x² ≡ a (mod p).
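Both facts are easy to verify for small primes; the helper names below are illustrative. The second check also confirms Euler's criterion: a^{(p−1)/2} is +1 on the (p − 1)/2 residues and −1 on the (p − 1)/2 non-residues:

```python
def fermat_little(p):
    """Check a^p ≡ a (mod p) for every a in 0..p-1."""
    return all(pow(a, p, p) == a for a in range(p))

def euler_criterion(p):
    """Check a^((p-1)/2) mod p is +1 on residues and -1 on non-residues,
    and that exactly half the nonzero classes are residues."""
    residues = {a * a % p for a in range(1, p)}
    for a in range(1, p):
        e = pow(a, (p - 1) // 2, p)
        if a in residues:
            assert e == 1
        else:
            assert e == p - 1   # i.e. -1 mod p
    return len(residues) == (p - 1) // 2

for p in [3, 5, 7, 11, 13]:
    assert fermat_little(p) and euler_criterion(p)
print("Fermat's little theorem and Euler's criterion hold for p <= 13")
```

The "at most two solutions" fact appears here as the residue set having exactly (p − 1)/2 elements: each nonzero square is hit by exactly the two roots ±x.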
Legendre[10]letsaandArepresent positive primes ≡ 1 (mod 4) andbandBpositive primes ≡ 3 (mod 4), and sets out a table of eight theorems that together are equivalent to quadratic reciprocity:
He says that since expressions of the form

a^{(p−1)/2} (mod p)

will come up so often he will abbreviate them as:

(a/p) ≡ a^{(p−1)/2} (mod p), taking only the values ±1.

This is now known as the Legendre symbol, and an equivalent[11][12] definition is used today: for all integers a and all odd primes p,

(a/p) = 0 if p divides a; +1 if p does not divide a and a is a quadratic residue mod p; −1 if p does not divide a and a is a quadratic non-residue mod p.
He notes that these can be combined:
A number of proofs, especially those based on Gauss's Lemma,[13] explicitly calculate this formula.
From these two supplements, we can obtain a third reciprocity law for the quadratic character −2 as follows:
For −2 to be a quadratic residue mod p, either −1 and 2 are both quadratic residues mod p, or both are non-residues.
So (p − 1)/2 and (p² − 1)/8 are both even, or both odd. The sum of these two expressions is

(p − 1)/2 + (p² − 1)/8 = (p − 1)(p + 5)/8,

which is even exactly when p ≡ 1 or 3 (mod 8), so (−2/p) = (−1)^{(p−1)(p+5)/8}.
Legendre's attempt to prove reciprocity is based on a theorem of his:
Example. Theorem I is handled by letting a ≡ 1 and b ≡ 3 (mod 4) be primes and assuming that (b/a) = 1 and, contrary to the theorem, that (a/b) = −1. Then x² + ay² − bz² = 0 has a solution, and taking congruences (mod 4) leads to a contradiction.
This technique doesn't work for Theorem VIII. Let b ≡ B ≡ 3 (mod 4), and assume
Then if there is another prime p ≡ 1 (mod 4) such that
the solvability of Bx² + by² − pz² = 0 leads to a contradiction (mod 4). But Legendre was unable to prove there has to be such a prime p; he was later able to show that all that is required is:
but he couldn't prove that either. Hilbert symbol (below) discusses how techniques based on the existence of solutions to ax² + by² + cz² = 0 can be made to work.
Gauss first proves[14]the supplementary laws. He sets[15]the basis for induction by proving the theorem for ±3 and ±5. Noting[16]that it is easier to state for −3 and +5 than it is for +3 or −5, he states[17]the general theorem in the form:
Introducing the notationaRb(resp.aNb) to meanais a quadratic residue (resp. nonresidue) (modb), and lettinga,a′, etc. represent positive primes ≡ 1 (mod 4) andb,b′, etc. positive primes ≡ 3 (mod 4), he breaks it out into the same 8 cases as Legendre:
In the next Article he generalizes this to what are basically the rules for theJacobi symbol (below). LettingA,A′, etc. represent any (prime or composite) positive numbers ≡ 1 (mod 4) andB,B′, etc. positive numbers ≡ 3 (mod 4):
All of these cases take the form "if a prime is a residue (mod a composite), then the composite is a residue or nonresidue (mod the prime), depending on the congruences (mod 4)". He proves that these follow from cases 1) - 8).
Gauss needed, and was able to prove,[18]a lemma similar to the one Legendre needed:
The proof of quadratic reciprocity usescomplete induction.
These can be combined:
A number of proofs of the theorem, especially those based on Gauss sums[19] or the splitting of primes in algebraic number fields,[20][21] derive this formula.
The statements in this section are equivalent to quadratic reciprocity: if, for example, Euler's version is assumed, the Legendre-Gauss version can be deduced from it, and vice versa.
This can be proven usingGauss's lemma.
Gauss's fourth proof consists of proving this theorem (by comparing two formulas for the value of Gauss sums) and then restricting it to two primes. He then gives an example: Let a = 3, b = 5, c = 7, and d = 11. Three of these, 3, 7, and 11, are ≡ 3 (mod 4), so m ≡ 3 (mod 4). 5×7×11 R 3; 3×7×11 R 5; 3×5×11 R 7; and 3×5×7 N 11, so there are an odd number of nonresidues.
The Jacobi symbol is a generalization of the Legendre symbol; the main difference is that the bottom number has to be positive and odd, but does not have to be prime. If it is prime, the two symbols agree. It obeys the same rules of manipulation as the Legendre symbol. In particular

(−1/n) = (−1)^{(n−1)/2},  (2/n) = (−1)^{(n²−1)/8},

and if both numbers are positive and odd (this is sometimes called "Jacobi's reciprocity law"):

(m/n)(n/m) = (−1)^{((m−1)/2)·((n−1)/2)}.
However, if the Jacobi symbol is 1 but the denominator is not a prime, it does not necessarily follow that the numerator is a quadratic residue of the denominator. Gauss's cases 9) - 14) above can be expressed in terms of Jacobi symbols:
and since p is prime the left hand side is a Legendre symbol, and we know whether M is a residue modulo p or not.
The formulas listed in the preceding section are true for Jacobi symbols as long as the symbols are defined. Euler's formula may be written
Example.
2 is a residue modulo the primes 7, 23 and 31. But 2 is not a quadratic residue modulo 5, so it can't be one modulo 15. This is related to the problem Legendre had: if (a/m) = −1, then a is a non-residue modulo every prime in the arithmetic progression m + 4a, m + 8a, ..., if there are any primes in this series, but that wasn't proved until decades after Legendre.[26]
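This example can be reproduced with a short Jacobi-symbol routine (the standard binary reciprocity algorithm; the function name is illustrative). It shows concretely that a Jacobi symbol of 1 over a composite modulus does not certify a residue:

```python
def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):       # second supplement
                result = -result
        a, n = n, a
        if a % 4 == n % 4 == 3:       # reciprocity sign flip
            result = -result
        a %= n
    return result if n == 1 else 0

print(jacobi(2, 7), jacobi(2, 23), jacobi(2, 31))  # 1 1 1
print(jacobi(2, 5), jacobi(2, 15))                 # -1 1
# (2|15) = (2|3)(2|5) = (-1)(-1) = 1, but 2 is NOT a square mod 15:
print(any(x * x % 15 == 2 for x in range(15)))     # False
```

So (2|15) = 1 even though 2 is a non-residue modulo both prime factors of 15, which is exactly the caveat in the text.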
Eisenstein's formula requires relative primality conditions (which are true if the numbers are prime)
The quadratic reciprocity law can be formulated in terms of the Hilbert symbol (a, b)_v where a and b are any two nonzero rational numbers and v runs over all the non-trivial absolute values of the rationals (the Archimedean one and the p-adic absolute values for primes p). The Hilbert symbol (a, b)_v is 1 or −1. It is defined to be 1 if and only if the equation ax² + by² = z² has a solution in the completion of the rationals at v other than x = y = z = 0. The Hilbert reciprocity law states that (a, b)_v, for fixed a and b and varying v, is 1 for all but finitely many v and the product of (a, b)_v over all v is 1. (This formally resembles the residue theorem from complex analysis.)
The proof of Hilbert reciprocity reduces to checking a few special cases, and the non-trivial cases turn out to be equivalent to the main law and the two supplementary laws of quadratic reciprocity for the Legendre symbol. There is no kind of reciprocity in the Hilbert reciprocity law; its name simply indicates the historical source of the result in quadratic reciprocity. Unlike quadratic reciprocity, which requires sign conditions (namely positivity of the primes involved) and a special treatment of the prime 2, the Hilbert reciprocity law treats all absolute values of the rationals on an equal footing. Therefore, it is a more natural way of expressing quadratic reciprocity with a view towards generalization: the Hilbert reciprocity law extends with very few changes to all global fields and this extension can rightly be considered a generalization of quadratic reciprocity to all global fields.
The early proofs of quadratic reciprocity are relatively unilluminating. The situation changed when Gauss used Gauss sums to show that quadratic fields are subfields of cyclotomic fields, and implicitly deduced quadratic reciprocity from a reciprocity theorem for cyclotomic fields. His proof was cast in modern form by later algebraic number theorists. This proof served as a template for class field theory, which can be viewed as a vast generalization of quadratic reciprocity.
Robert Langlandsformulated theLanglands program, which gives a conjectural vast generalization of class field theory. He wrote:[27]
There are also quadratic reciprocity laws inringsother than the integers.
In his second monograph onquartic reciprocity[29]Gauss stated quadratic reciprocity for the ringZ[i]{\displaystyle \mathbb {Z} [i]}ofGaussian integers, saying that it is a corollary of thebiquadratic lawinZ[i],{\displaystyle \mathbb {Z} [i],}but did not provide a proof of either theorem.Dirichlet[30]showed that the law inZ[i]{\displaystyle \mathbb {Z} [i]}can be deduced from the law forZ{\displaystyle \mathbb {Z} }without using quartic reciprocity.
For an odd Gaussian primeπ{\displaystyle \pi }and a Gaussian integerα{\displaystyle \alpha }relatively prime toπ,{\displaystyle \pi ,}define the quadratic character forZ[i]{\displaystyle \mathbb {Z} [i]}by:
Letλ=a+bi,μ=c+di{\displaystyle \lambda =a+bi,\mu =c+di}be distinct Gaussian primes whereaandcare odd andbanddare even. Then[31]
Consider the following third root of unity:
The ring of Eisenstein integers isZ[ω].{\displaystyle \mathbb {Z} [\omega ].}[32]For an Eisenstein primeπ,Nπ≠3,{\displaystyle \pi ,\mathrm {N} \pi \neq 3,}and an Eisenstein integerα{\displaystyle \alpha }withgcd(α,π)=1,{\displaystyle \gcd(\alpha ,\pi )=1,}define the quadratic character forZ[ω]{\displaystyle \mathbb {Z} [\omega ]}by the formula
Let λ =a+bωand μ =c+dωbe distinct Eisenstein primes whereaandcare not divisible by 3 andbanddare divisible by 3. Eisenstein proved[33]
The above laws are special cases of more general laws that hold for thering of integersin anyimaginary quadratic number field. Letkbe an imaginary quadratic number field with ring of integersOk.{\displaystyle {\mathcal {O}}_{k}.}For aprime idealp⊂Ok{\displaystyle {\mathfrak {p}}\subset {\mathcal {O}}_{k}}with odd normNp{\displaystyle \mathrm {N} {\mathfrak {p}}}andα∈Ok,{\displaystyle \alpha \in {\mathcal {O}}_{k},}define the quadratic character forOk{\displaystyle {\mathcal {O}}_{k}}as
for an arbitrary ideala⊂Ok{\displaystyle {\mathfrak {a}}\subset {\mathcal {O}}_{k}}factored into prime idealsa=p1⋯pn{\displaystyle {\mathfrak {a}}={\mathfrak {p}}_{1}\cdots {\mathfrak {p}}_{n}}define
and forβ∈Ok{\displaystyle \beta \in {\mathcal {O}}_{k}}define
LetOk=Zω1⊕Zω2,{\displaystyle {\mathcal {O}}_{k}=\mathbb {Z} \omega _{1}\oplus \mathbb {Z} \omega _{2},}i.e.{ω1,ω2}{\displaystyle \left\{\omega _{1},\omega _{2}\right\}}is anintegral basisforOk.{\displaystyle {\mathcal {O}}_{k}.}Forν∈Ok{\displaystyle \nu \in {\mathcal {O}}_{k}}with odd normNν,{\displaystyle \mathrm {N} \nu ,}define (ordinary) integersa,b,c,dby the equations,
and a function
Ifm=Nμandn=Nνare both odd, Herglotz proved[34]
Also, if
Then[35]
Let F be a finite field with q = pⁿ elements, where p is an odd prime number and n is positive, and let F[x] be the ring of polynomials in one variable with coefficients in F. If f, g ∈ F[x] and f is irreducible, monic, and has positive degree, define the quadratic character for F[x] in the usual manner:
Iff=f1⋯fn{\displaystyle f=f_{1}\cdots f_{n}}is a product of monic irreducibles let
Dedekind proved that iff,g∈F[x]{\displaystyle f,g\in F[x]}are monic and have positive degrees,[36]
The attempt to generalize quadratic reciprocity for powers higher than the second was one of the main goals that led 19th-century mathematicians, including Carl Friedrich Gauss, Peter Gustav Lejeune Dirichlet, Carl Gustav Jakob Jacobi, Gotthold Eisenstein, Richard Dedekind, Ernst Kummer, and David Hilbert, to the study of general algebraic number fields and their rings of integers;[37] specifically Kummer invented ideals in order to state and prove higher reciprocity laws.
The ninth in the list of 23 unsolved problems which David Hilbert proposed to the Congress of Mathematicians in 1900 asked for the
"Proof of the most general reciprocity law [f]or an arbitrary number field".[38] Building upon work by Philipp Furtwängler, Teiji Takagi, Helmut Hasse and others, Emil Artin discovered Artin reciprocity in 1923, a general theorem for which all known reciprocity laws are special cases, and proved it in 1927.[39]
TheDisquisitiones Arithmeticaehas been translated (from Latin) into English and German. The German edition includes all of Gauss's papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes. Footnotes referencing theDisquisitiones Arithmeticaeare of the form "Gauss, DA, Art.n".
The two monographs Gauss published on biquadratic reciprocity have consecutively numbered sections: the first contains §§ 1–23 and the second §§ 24–76. Footnotes referencing these are of the form "Gauss, BQ, §n".
These are in Gauss'sWerke, Vol II, pp. 65–92 and 93–148. German translations are in pp. 511–533 and 534–586 ofUntersuchungen über höhere Arithmetik.
Every textbook onelementary number theory(and quite a few onalgebraic number theory) has a proof of quadratic reciprocity. Two are especially noteworthy:
Franz Lemmermeyer'sReciprocity Laws: From Euler to Eisensteinhasmanyproofs (some in exercises) of both quadratic and higher-power reciprocity laws and a discussion of their history. Its immense bibliography includes literature citations for 196 different publishedproofs for the quadratic reciprocity law.
Kenneth Ireland andMichael Rosen'sA Classical Introduction to Modern Number Theoryalso has many proofs of quadratic reciprocity (and many exercises), and covers the cubic and biquadratic cases as well. Exercise 13.26 (p. 202) says it all
Count the number of proofs to the law of quadratic reciprocity given thus far in this book and devise another one.
|
https://en.wikipedia.org/wiki/Law_of_quadratic_reciprocity
|
A quadratic residue code is a type of cyclic code.
Examples of quadratic residue codes include the (7,4) Hamming code over GF(2), the (23,12) binary Golay code over GF(2) and the (11,6) ternary Golay code over GF(3).
There is a quadratic residue code of length p over the finite field GF(l) whenever p and l are primes, p is odd, and l is a quadratic residue modulo p.
Its generator polynomial as a cyclic code is given by

f(x) = ∏_{j∈Q} (x − ζ^j)

where Q is the set of quadratic residues of p in the set {1, 2, …, p − 1} and ζ is a primitive p-th root of unity in some finite extension field of GF(l). The condition that l is a quadratic residue of p ensures that the coefficients of f lie in GF(l). The dimension of the code is (p + 1)/2.
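As a concrete sketch of this construction, one can take p = 7 and l = 2, so Q = {1, 2, 4} and ζ lives in GF(8). The representation of GF(8) below (3-bit integers modulo t³ + t + 1) is one standard choice among several:

```python
# GF(8) = GF(2)[t]/(t^3 + t + 1); elements are 3-bit integers.
MOD = 0b1011  # t^3 + t + 1

def gf_mul(a, b):
    """Multiply in GF(8): carry-less product reduced mod t^3 + t + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= MOD
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

zeta = 0b010   # the class of t: a primitive 7th root of unity in GF(8)
Q = [1, 2, 4]  # quadratic residues modulo 7

# Expand f(x) = prod_{j in Q} (x - zeta^j); note -1 = +1 in characteristic 2.
f = [1]        # coefficients, constant term first
for j in Q:
    root = gf_pow(zeta, j)
    new = [0] * (len(f) + 1)
    for i, c in enumerate(f):
        new[i + 1] ^= c              # contribution of x * f(x)
        new[i] ^= gf_mul(root, c)    # contribution of root * f(x)
    f = new

print(f)                 # [1, 1, 0, 1] -> f(x) = 1 + x + x^3
print(7 - (len(f) - 1))  # dimension p - deg f = 4 = (p + 1)/2
```

The coefficients all land in GF(2), as promised, and f(x) = x³ + x + 1 is exactly the generator of the (7,4) Hamming code mentioned above.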
Replacing ζ by another primitive p-th root of unity ζ^r either results in the same code or an equivalent code, according to whether or not r is a quadratic residue of p.
An alternative construction avoids roots of unity. Define

g(x) = c + ∑_{j∈Q} x^j

for a suitable c ∈ GF(l). When l = 2 choose c to ensure that g(1) = 1. If l is odd, choose c = (1 + √(p*))/2, where p* = p or −p according to whether p is congruent to 1 or 3 modulo 4. Then g(x) also generates a quadratic residue code; more precisely the ideal of F_l[X]/⟨X^p − 1⟩ generated by g(x) corresponds to the quadratic residue code.
The minimum weight of a quadratic residue code of length p is greater than √p; this is the square root bound.
Adding an overall parity-check digit to a quadratic residue code gives an extended quadratic residue code. When p ≡ 3 (mod 4) an extended quadratic residue code is self-dual; otherwise it is equivalent but not equal to its dual. By the Gleason–Prange theorem (named for Andrew Gleason and Eugene Prange), the automorphism group of an extended quadratic residue code has a subgroup which is isomorphic to either PSL₂(p) or SL₂(p).
Since the late 1980s, many algebraic decoding algorithms have been developed for correcting errors in quadratic residue codes. These algorithms can achieve the (true) error-correcting capacity ⌊(d − 1)/2⌋ of quadratic residue codes with code length up to 113. However, decoding of long binary quadratic residue codes and of non-binary quadratic residue codes remains a challenge. Decoding quadratic residue codes is still an active research area in the theory of error-correcting codes.
|
https://en.wikipedia.org/wiki/Quadratic_residue_code
|
Many mathematical problems have been stated but not yet solved. These problems come from many areas of mathematics, such as theoretical physics, computer science, algebra, analysis, combinatorics, algebraic, differential, discrete and Euclidean geometries, graph theory, group theory, model theory, number theory, set theory, Ramsey theory, dynamical systems, and partial differential equations. Some problems belong to more than one discipline and are studied using techniques from different areas. Prizes are often awarded for the solution to a long-standing problem, and some lists of unsolved problems, such as the Millennium Prize Problems, receive considerable attention.
This list is a composite of notable unsolved problems mentioned in previously published lists, including but not limited to lists considered authoritative, and the problems listed here vary widely in both difficulty and importance.
Various mathematicians and organizations have published and promoted lists of unsolved mathematical problems. In some cases, the lists have been associated with prizes for the discoverers of solutions.
Of the original sevenMillennium Prize Problemslisted by theClay Mathematics Institutein 2000, six remain unsolved to date:[6]
The seventh problem, thePoincaré conjecture, was solved byGrigori Perelmanin 2003.[14]However, a generalization called thesmooth four-dimensional Poincaré conjecture—that is, whether afour-dimensionaltopological spherecan have two or more inequivalentsmooth structures—is unsolved.[15]
Note: These conjectures are about models of Zermelo–Fraenkel set theory with choice, and may not be able to be expressed in models of other set theories such as the various constructive set theories or non-wellfounded set theory.
|
https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_mathematics
|
In computational complexity theory, the unique games conjecture (often referred to as UGC) is a conjecture made by Subhash Khot in 2002.[1][2][3] The conjecture postulates that the problem of determining the approximate value of a certain type of game, known as a unique game, has NP-hard computational complexity. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP,[4] then for many important problems it is not only impossible to get an exact solution in polynomial time (as postulated by the P versus NP problem), but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines.
The conjecture is unusual in that the academic world seems about evenly divided on whether it is true or not.[1]
The unique games conjecture can be stated in a number of equivalent ways.
The following formulation of the unique games conjecture is often used in hardness of approximation. The conjecture postulates the NP-hardness of the following promise problem known as label cover with unique constraints. For each edge, the colors on the two vertices are restricted to some particular ordered pairs. Unique constraints means that, for each edge, no color appears in two of the allowed ordered pairs on the same vertex, so each color of one endpoint determines at most one color of the other.
This means that an instance of label cover with unique constraints over an alphabet of size k can be represented as a directed graph G together with a collection of permutations πe: [k] → [k], one for each edge e of the graph. An assignment to a label cover instance gives each vertex of G a value in the set [k] = {1, 2, ..., k}, often called "colours".
Such instances are strongly constrained in the sense that the colour of a vertex uniquely determines the colours of its neighbours, and hence of its entire connected component. Thus, if the input instance admits a valid assignment, then such an assignment can be found efficiently by iterating over all colours of a single node. In particular, the problem of deciding whether a given instance admits a satisfying assignment can be solved in polynomial time.
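The propagation argument above can be sketched in code. The function name and instance encoding below are illustrative choices, not from the literature: constraints map each directed edge (u, v) to a permutation π requiring colour[v] = π[colour[u]], and the sketch assumes the constraint graph is connected.

```python
from collections import deque

def solve_unique_game(n, k, constraints):
    """Find a colouring satisfying *all* constraints of a unique label
    cover instance, or return None.  `constraints` maps a directed edge
    (u, v) to a permutation pi (a list of length k), requiring
    colour[v] == pi[colour[u]].  Assumes the graph is connected."""
    # Traversing an edge forward applies pi; backward applies its inverse.
    adj = [[] for _ in range(n)]
    for (u, v), pi in constraints.items():
        inv = [0] * k
        for a, b in enumerate(pi):
            inv[b] = a
        adj[u].append((v, pi))
        adj[v].append((u, inv))

    # One vertex's colour fixes its whole component, so trying each of
    # the k colours for vertex 0 and propagating by BFS suffices.
    for c in range(k):
        colour = [None] * n
        colour[0] = c
        queue, ok = deque([0]), True
        while queue and ok:
            u = queue.popleft()
            for v, pi in adj[u]:
                forced = pi[colour[u]]
                if colour[v] is None:
                    colour[v] = forced
                    queue.append(v)
                elif colour[v] != forced:
                    ok = False  # contradiction: this start colour fails
                    break
        if ok and all(x is not None for x in colour):
            return colour
    return None
```

The loop over the k colours of vertex 0 is exactly the "iterate over all colours of a single node" step described above.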
The value of a unique label cover instance is the fraction of constraints that can be satisfied by any assignment. For satisfiable instances, this value is 1 and is easy to find. On the other hand, it seems to be very difficult to determine the value of an unsatisfiable game, even approximately. The unique games conjecture formalises this difficulty.
More formally, the (c, s)-gap label-cover problem with unique constraints is the following promise problem (Lyes, Lno): Lyes consists of the instances in which some assignment satisfies at least a fraction c of the constraints, and Lno consists of the instances in which every assignment satisfies at most a fraction s of the constraints, where G is an instance of the label cover problem with unique constraints.
The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the (1 − δ, ε)-gap label-cover problem with unique constraints over an alphabet of size k is NP-hard.
Consider a system of linear equations over the integers modulo k.
When each equation involves exactly two variables, say xi − xj ≡ c (mod k), this is an instance of the label cover problem with unique constraints; such instances are known as instances of the Max2Lin(k) problem. It is not immediately obvious that the inapproximability of Max2Lin(k) is equivalent to the UGC, but this is in fact the case, by a reduction.[5] Namely, the UGC is equivalent to: for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the (1 − δ, ε)-gap Max2Lin(k) problem is NP-hard.
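For a toy instance, the value of a Max2Lin(k) system can be computed by exhaustive search; the point of the conjecture is precisely that no efficient method for approximating this value is known in general. The encoding of equations as triples (u, v, c), meaning x_u − x_v ≡ c (mod k), is an assumption of this sketch.

```python
from itertools import product

def max2lin_value(n, k, equations):
    """Exact value of a Max2Lin(k) instance by brute force.

    `equations` is a list of triples (u, v, c) encoding the constraint
    x_u - x_v = c (mod k).  Runs in time k**n, so toy instances only.
    """
    best = 0
    for assignment in product(range(k), repeat=n):
        satisfied = sum(
            (assignment[u] - assignment[v]) % k == c
            for u, v, c in equations
        )
        best = max(best, satisfied)
    return best / len(equations)
```

For example, the three equations x0 − x1 ≡ 1, x1 − x2 ≡ 1, x0 − x2 ≡ 1 (mod 2) cannot all hold (summing them gives 0 ≡ 1 mod 2), so the instance has value 2/3.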
It has been argued that the UGC is essentially a question of computational topology,[6] involving local-global principles (the latter are also evident in the proof of the 2-2 games conjecture, see below).
Linial[7] observed that unique label cover is an instance of the Maximum Section of a Covering Graph problem (covering graphs is the terminology from topology; in the context of unique games these are often referred to as graph lifts). To date, all known problems whose inapproximability is equivalent to the UGC are instances of this problem, including Unique Label Cover and Max2Lin(k). When the latter two problems are viewed as instances of Max Section of a Covering Graph, the reduction between them[5] preserves the structure of the graph covering spaces,[6] so not only the problems but also the reduction between them has a natural topological interpretation. Grochow and Tucker-Foltz exhibited a third computational topology problem whose inapproximability is equivalent to the UGC: 1-Cohomology Localization on Triangulations of 2-Manifolds.[6]
A unique game is a special case of a two-prover one-round (2P1R) game. A two-prover one-round game has two players (also known as provers) and a referee. The referee sends each player a question drawn from a known probability distribution, and the players each have to send an answer. The answers come from a set of fixed size. The game is specified by a predicate that depends on the questions sent to the players and the answers provided by them.
The players may decide on a strategy beforehand, although they cannot communicate with each other during the game. The players win if the predicate is satisfied by their questions and their answers.
A two-prover one-round game is called a unique game if for every question and every answer by the first player, there is exactly one answer by the second player that results in a win for the players, and vice versa. The value of a game is the maximum winning probability for the players over all strategies.
The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the following promise problem (Lyes, Lno) is NP-hard: Lyes consists of the unique games with value at least 1 − δ, and Lno consists of the unique games with value at most ε, where G is a unique game whose answers come from a set of size k.
Alternatively, the unique games conjecture postulates the existence of a certain type of probabilistically checkable proof for problems in NP.
A unique game can be viewed as a special kind of nonadaptive probabilistically checkable proof with query complexity 2, where for each pair of possible queries of the verifier and each possible answer to the first query, there is exactly one possible answer to the second query that makes the verifier accept, and vice versa.
The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0 there is a constant K such that every problem in NP has a probabilistically checkable proof over an alphabet of size K with completeness 1 − δ, soundness ε, and randomness complexity O(log n) which is a unique game.
Some very natural, intrinsically interesting statements about things like voting and foams just popped out of studying the UGC.... Even if the UGC turns out to be false, it has inspired a lot of interesting math research.
The unique games conjecture was introduced by Subhash Khot in 2002 in order to make progress on certain questions in the theory of hardness of approximation.
The truth of the unique games conjecture would imply the optimality of many known approximation algorithms (assuming P ≠ NP). For example, the approximation ratio achieved by the algorithm of Goemans and Williamson for approximating the maximum cut in a graph is optimal to within any additive constant assuming the unique games conjecture and P ≠ NP.
A list of results that the unique games conjecture is known to imply is shown in the adjacent table together with the corresponding best results for the weaker assumption P ≠ NP. A constant of c + ε or c − ε means that the result holds for every constant (with respect to the problem size) strictly greater than or less than c, respectively.
Currently, there is no consensus regarding the truth of the unique games conjecture. Certain stronger forms of the conjecture have been disproved.
A different form of the conjecture postulates that distinguishing the case when the value of a unique game is at least 1 − δ from the case when the value is at most ε is impossible for polynomial-time algorithms (but perhaps not NP-hard). This form of the conjecture would still be useful for applications in hardness of approximation.
The constant δ > 0 in the above formulations of the conjecture is necessary unless P = NP. If the uniqueness requirement is removed, the corresponding statement is known to be true by the parallel repetition theorem, even when δ = 0.
Marek Karpinski and Warren Schudy have constructed linear time approximation schemes for dense instances of the unique games problem.[21]
In 2008, Prasad Raghavendra showed that if the unique games conjecture is true, then for every constraint satisfaction problem the best approximation ratio is given by a certain simple semidefinite programming instance, which is in particular polynomial.[22]
In 2010, Prasad Raghavendra and David Steurer defined the gap-small-set expansion problem, and conjectured that it is NP-hard. The resulting small set expansion hypothesis implies the unique games conjecture.[23] It has also been used to prove strong hardness of approximation results for finding complete bipartite subgraphs.[24]
In 2010, Sanjeev Arora, Boaz Barak and David Steurer found a subexponential time approximation algorithm for the unique games problem.[25] A key ingredient in their result was the spectral algorithm of Alexandra Kolla[26] (see also the earlier manuscript of A. Kolla and Madhur Tulsiani[27]). The latter also re-proved[28] that unique games on expander graphs could be solved in polynomial time, and was one of (if not the) first graph algorithms to take advantage of the full spectrum of a graph rather than just its first two eigenvalues.
In 2012, it was shown that distinguishing instances with value at most 3/8 + δ from instances with value at least 1/2 is NP-hard.[29]
In 2018, after a series of papers, a weaker version of the conjecture, called the 2-2 games conjecture, was proven. In a certain sense, this proves "a half" of the original conjecture.[30][31] This also improves the best known gap for unique label cover: it is NP-hard to distinguish instances with value at most δ from instances with value at least 1/2.[32]
|
https://en.wikipedia.org/wiki/Unique_games_conjecture
|
This article is a list of notable unsolved problems in computer science. A problem in computer science is considered unsolved when no solution is known or when experts in the field disagree about proposed solutions.
The graph isomorphism problem involves determining whether two finite graphs are isomorphic, meaning there is a one-to-one correspondence between their vertices and edges that preserves adjacency. While the problem is known to be in NP, it is not known whether it is NP-complete or solvable in polynomial time. This uncertainty places it in a rare intermediate position among problems in NP, making it a significant open problem in computer science.[2]
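The problem is easy to state but expensive to solve naively: checking every bijection between the vertex sets works, but takes n! time. This sketch (function name and encoding are illustrative) handles simple undirected graphs on vertices 0..n−1.

```python
from itertools import permutations

def are_isomorphic(n, edges_g, edges_h):
    """Brute-force isomorphism test for tiny simple undirected graphs.

    Tries every bijection of {0..n-1}; no polynomial-time method is
    known for this problem in general, which is the open question.
    """
    g = {frozenset(e) for e in edges_g}
    h = {frozenset(e) for e in edges_h}
    if len(g) != len(h):
        return False
    for perm in permutations(range(n)):
        # Relabel g's edges under this bijection and compare edge sets.
        if {frozenset((perm[u], perm[v])) for u, v in g} == h:
            return True
    return False
```

For instance, any two 3-vertex paths are isomorphic regardless of labelling, while a 4-vertex path and a 4-vertex star are not (their degree sequences differ).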
|
https://en.wikipedia.org/wiki/Unsolved_problems_in_computer_science
|
In computer science, 2-satisfiability, 2-SAT or just 2SAT is a computational problem of assigning values to variables, each of which has two possible values, in order to satisfy a system of constraints on pairs of variables. It is a special case of the general Boolean satisfiability problem, which can involve constraints on more than two variables, and of constraint satisfaction problems, which can allow more than two choices for the value of each variable. But in contrast to those more general problems, which are NP-complete, 2-satisfiability can be solved in polynomial time.
Instances of the 2-satisfiability problem are typically expressed as Boolean formulas of a special type, called conjunctive normal form (2-CNF) or Krom formulas. Alternatively, they may be expressed as a special type of directed graph, the implication graph, which expresses the variables of an instance and their negations as vertices in a graph, and constraints on pairs of variables as directed edges. Both of these kinds of inputs may be solved in linear time, either by a method based on backtracking or by using the strongly connected components of the implication graph. Resolution, a method for combining pairs of constraints to make additional valid constraints, also leads to a polynomial time solution. The 2-satisfiability problems provide one of two major subclasses of the conjunctive normal form formulas that can be solved in polynomial time; the other of the two subclasses is Horn-satisfiability.
2-satisfiability may be applied to geometry and visualization problems in which a collection of objects each have two potential locations and the goal is to find a placement for each object that avoids overlaps with other objects. Other applications include clustering data to minimize the sum of the diameters of the clusters, classroom and sports scheduling, and recovering shapes from information about their cross-sections.
In computational complexity theory, 2-satisfiability provides an example of an NL-complete problem, one that can be solved non-deterministically using a logarithmic amount of storage and that is among the hardest of the problems solvable in this resource bound. The set of all solutions to a 2-satisfiability instance can be given the structure of a median graph, but counting these solutions is #P-complete and therefore not expected to have a polynomial-time solution. Random instances undergo a sharp phase transition from solvable to unsolvable instances as the ratio of constraints to variables increases past 1, a phenomenon conjectured but unproven for more complicated forms of the satisfiability problem. A computationally difficult variation of 2-satisfiability, finding a truth assignment that maximizes the number of satisfied constraints, has an approximation algorithm whose optimality depends on the unique games conjecture, and another difficult variation, finding a satisfying assignment minimizing the number of true variables, is an important test case for parameterized complexity.
A 2-satisfiability problem may be described using a Boolean expression with a special restricted form. It is a conjunction (a Boolean and operation) of clauses, where each clause is a disjunction (a Boolean or operation) of two variables or negated variables. The variables or their negations appearing in this formula are known as literals.[1] For example, the following formula is in conjunctive normal form, with seven variables, eleven clauses, and 22 literals:

(x0 ∨ x2) ∧ (x0 ∨ ¬x3) ∧ (x1 ∨ ¬x3) ∧ (x1 ∨ ¬x4) ∧ (x2 ∨ ¬x4) ∧ (x0 ∨ ¬x5) ∧ (x1 ∨ ¬x5) ∧ (x2 ∨ ¬x5) ∧ (x3 ∨ x6) ∧ (x4 ∨ x6) ∧ (x5 ∨ x6).
The 2-satisfiability problem is to find a truth assignment to these variables that makes the whole formula true. Such an assignment chooses whether to make each of the variables true or false, so that at least one literal in every clause becomes true. For the expression shown above, one possible satisfying assignment is the one that sets all seven of the variables to true. Every clause has at least one non-negated variable, so this assignment satisfies every clause. There are also 15 other ways of setting all the variables so that the formula becomes true. Therefore, the 2-satisfiability instance represented by this expression is satisfiable.
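The example formula and the satisfaction check can be written out directly; exhaustively testing all 2⁷ assignments confirms the count of 16 satisfying assignments (the all-true one plus 15 others). The clause encoding as (variable index, negated?) pairs is an illustrative choice.

```python
from itertools import product

# The eleven clauses of the example formula; each literal is a pair
# (variable index, negated?), and each clause is a pair of literals.
clauses = [
    ((0, False), (2, False)), ((0, False), (3, True)),
    ((1, False), (3, True)),  ((1, False), (4, True)),
    ((2, False), (4, True)),  ((0, False), (5, True)),
    ((1, False), (5, True)),  ((2, False), (5, True)),
    ((3, False), (6, False)), ((4, False), (6, False)),
    ((5, False), (6, False)),
]

def satisfies(assignment, clauses):
    """True if every clause contains at least one true literal."""
    return all(
        any(assignment[i] != neg for i, neg in clause)
        for clause in clauses
    )

# Count satisfying assignments over all 2**7 possibilities.
count = sum(satisfies(a, clauses) for a in product([False, True], repeat=7))
```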
Formulas in this form are known as 2-CNF formulas. The "2" in this name stands for the number of literals per clause, and "CNF" stands for conjunctive normal form, a type of Boolean expression in the form of a conjunction of disjunctions.[1] They are also called Krom formulas, after the work of UC Davis mathematician Melven R. Krom, whose 1967 paper was one of the earliest works on the 2-satisfiability problem.[2]
Each clause in a 2-CNF formula is logically equivalent to an implication from one variable or negated variable to the other. For example, the second clause in the example may be written in any of three equivalent ways:

(x0 ∨ ¬x3) ≡ (¬x0 ⇒ ¬x3) ≡ (x3 ⇒ x0).

Because of this equivalence between these different types of operation, a 2-satisfiability instance may also be written in implicative normal form, in which we replace each or clause in the conjunctive normal form by the two implications to which it is equivalent.[3]
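The claimed equivalence can be verified mechanically by enumerating all four truth assignments of the two variables involved; a minimal sketch:

```python
from itertools import product

def implies(p, q):
    """Material implication p => q."""
    return (not p) or q

# Verify (x0 or not x3) == (not x0 => not x3) == (x3 => x0)
# for every possible pair of truth values.
for x0, x3 in product([False, True], repeat=2):
    clause = x0 or not x3
    assert clause == implies(not x0, not x3) == implies(x3, x0)
```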
A third, more graphical way of describing a 2-satisfiability instance is as an implication graph. An implication graph is a directed graph in which there is one vertex per variable or negated variable, and an edge connecting one vertex to another whenever the corresponding variables are related by an implication in the implicative normal form of the instance. An implication graph must be a skew-symmetric graph, meaning that it has a symmetry that takes each variable to its negation and reverses the orientations of all of the edges.[4]
Several algorithms are known for solving the 2-satisfiability problem. The most efficient of them take linear time.[2][4][5]
Krom (1967) described the following polynomial time decision procedure for solving 2-satisfiability instances.[2]
Suppose that a 2-satisfiability instance contains two clauses that both use the same variable x, but that x is negated in one clause and not in the other. Then the two clauses may be combined to produce a third clause, having the two other literals in the two clauses; this third clause must also be satisfied whenever the first two clauses are both satisfied. This is called resolution. For instance, we may combine the clauses (a ∨ b) and (¬b ∨ ¬c) in this way to produce the clause (a ∨ ¬c). In terms of the implicative form of a 2-CNF formula, this rule amounts to finding two implications ¬a ⇒ b and b ⇒ ¬c, and inferring by transitivity a third implication ¬a ⇒ ¬c.[2]
Krom writes that a formula is consistent if repeated application of this inference rule cannot generate both the clauses (x ∨ x) and (¬x ∨ ¬x), for any variable x. As he proves, a 2-CNF formula is satisfiable if and only if it is consistent. For, if a formula is not consistent, it is not possible to satisfy both of the two clauses (x ∨ x) and (¬x ∨ ¬x) simultaneously. And, if it is consistent, then the formula can be extended by repeatedly adding one clause of the form (x ∨ x) or (¬x ∨ ¬x) at a time, preserving consistency at each step, until it includes such a clause for every variable. At each of these extension steps, one of these two clauses may always be added while preserving consistency, for if not then the other clause could be generated using the inference rule. Once all variables have a clause of this form in the formula, a satisfying assignment of all of the variables may be generated by setting a variable x to true if the formula contains the clause (x ∨ x) and setting it to false if the formula contains the clause (¬x ∨ ¬x).[2]
Krom was concerned primarily with completeness of systems of inference rules, rather than with the efficiency of algorithms. However, his method leads to a polynomial time bound for solving 2-satisfiability problems. By grouping together all of the clauses that use the same variable, and applying the inference rule to each pair of clauses, it is possible to find all inferences that are possible from a given 2-CNF instance, and to test whether it is consistent, in total time O(n³), where n is the number of variables in the instance. This formula comes from multiplying the number of variables by the O(n²) number of pairs of clauses involving a given variable, to which the inference rule may be applied. Thus, it is possible to determine whether a given 2-CNF instance is satisfiable in time O(n³). Because finding a satisfying assignment using Krom's method involves a sequence of O(n) consistency checks, it would take time O(n⁴). Even, Itai & Shamir (1976) quote a faster time bound of O(n²) for this algorithm, based on more careful ordering of its operations. Nevertheless, even this smaller time bound was greatly improved by the later linear time algorithms of Even, Itai & Shamir (1976) and Aspvall, Plass & Tarjan (1979).
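Krom's consistency test can be sketched as a resolution-closure computation. This naive version makes no attempt at the time bounds discussed above; the integer literal encoding (+i for a variable, −i for its negation) is an illustrative choice.

```python
from itertools import combinations

def krom_satisfiable(clauses):
    """Satisfiability of a 2-CNF formula via Krom's resolution closure.

    Literals are nonzero ints: +i for variable i, -i for its negation.
    Each clause is an iterable of one or two literals.  A clause like
    (x or x) is simply the frozenset {x}.
    """
    closure = {frozenset(c) for c in clauses}
    changed = True
    while changed:
        changed = False
        for c1, c2 in list(combinations(closure, 2)):
            for lit in c1:
                if -lit in c2:
                    resolvent = (c1 - {lit}) | (c2 - {-lit})
                    if not resolvent:
                        return False  # derived the empty clause
                    # Skip tautologies such as (x or not x).
                    if any(-l in resolvent for l in resolvent):
                        continue
                    if resolvent not in closure:
                        closure.add(resolvent)
                        changed = True
    # Krom's criterion: no variable may have both unit clauses (x)
    # and (not x) in the closure.
    units = {next(iter(c)) for c in closure if len(c) == 1}
    return not any(-u in units for u in units)
```

Repeating the closure until fixpoint mirrors the "repeated application of this inference rule" in Krom's definition of consistency; deriving the empty clause is the resolution of (x ∨ x) against (¬x ∨ ¬x).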
In terms of the implication graph of the 2-satisfiability instance, Krom's inference rule can be interpreted as constructing the transitive closure of the graph. As Cook (1971) observes, it can also be seen as an instance of the Davis–Putnam algorithm for solving satisfiability problems using the principle of resolution. Its correctness follows from the more general correctness of the Davis–Putnam algorithm. Its polynomial time bound follows from the fact that each resolution step increases the number of clauses in the instance, which is upper bounded by a quadratic function of the number of variables.[6]
Even, Itai & Shamir (1976) describe a technique involving limited backtracking for solving constraint satisfaction problems with binary variables and pairwise constraints. They apply this technique to a problem of classroom scheduling, but they also observe that it applies to other problems including 2-SAT.[5]
The basic idea of their approach is to build a partial truth assignment, one variable at a time. Certain steps of the algorithm are "choice points", points at which a variable can be given either of two different truth values, and later steps in the algorithm may cause it to backtrack to one of these choice points. However, only the most recent choice can be backtracked over. All choices made earlier than the most recent one are permanent.[5]
Initially, there is no choice point, and all variables are unassigned. At each step, the algorithm chooses the variable whose value to set: if the assignments already made force the value of some variable through a chain of implications, the algorithm sets that forced value; otherwise it picks an arbitrary unassigned variable and gives it an arbitrary value, creating a new choice point.
Intuitively, the algorithm follows all chains of inference after making each of its choices. This either leads to a contradiction and a backtracking step, or, if no contradiction is derived, it follows that the choice was a correct one that leads to a satisfying assignment. Therefore, the algorithm either correctly finds a satisfying assignment or it correctly determines that the input is unsatisfiable.[5]
Even et al. did not describe in detail how to implement this algorithm efficiently. They state only that by "using appropriate data structures in order to find the implications of any decision", each step of the algorithm (other than the backtracking) can be performed quickly. However, some inputs may cause the algorithm to backtrack many times, each time performing many steps before backtracking, so its overall complexity may be nonlinear. To avoid this problem, they modify the algorithm so that, after reaching each choice point, it begins simultaneously testing both of the two assignments for the variable set at the choice point, spending equal numbers of steps on each of the two assignments. As soon as the test for one of these two assignments would create another choice point, the other test is stopped, so that at any stage of the algorithm there are only two branches of the backtracking tree that are still being tested. In this way, the total time spent performing the two tests for any variable is proportional to the number of variables and clauses of the input formula whose values are permanently assigned. As a result, the algorithm takes linear time in total.[5]
Aspvall, Plass & Tarjan (1979) found a simpler linear time procedure for solving 2-satisfiability instances, based on the notion of strongly connected components from graph theory.[4]
Two vertices in a directed graph are said to be strongly connected to each other if there is a directed path from one to the other and vice versa. This is an equivalence relation, and the vertices of the graph may be partitioned into strongly connected components, subsets within which every two vertices are strongly connected. There are several efficient linear time algorithms for finding the strongly connected components of a graph, based on depth-first search: Tarjan's strongly connected components algorithm[7] and the path-based strong component algorithm[8] each perform a single depth-first search. Kosaraju's algorithm performs two depth-first searches, but is very simple.
In terms of the implication graph, two literals belong to the same strongly connected component whenever there exist chains of implications from one literal to the other and vice versa. Therefore, the two literals must have the same value in any satisfying assignment to the given 2-satisfiability instance. In particular, if a variable and its negation both belong to the same strongly connected component, the instance cannot be satisfied, because it is impossible to assign both of these literals the same value. As Aspvall et al. showed, this is a necessary and sufficient condition: a 2-CNF formula is satisfiable if and only if there is no variable that belongs to the same strongly connected component as its negation.[4]
This immediately leads to a linear time algorithm for testing satisfiability of 2-CNF formulae: simply perform a strong connectivity analysis on the implication graph and check that each variable and its negation belong to different components. However, as Aspvall et al. also showed, it also leads to a linear time algorithm for finding a satisfying assignment, when one exists. Their algorithm finds the strongly connected components of the implication graph, orders the components topologically, and then, considering the components in reverse topological order, sets each still-unassigned literal in the current component to true and its negation to false.
Due to the reverse topological ordering and the skew-symmetry, when a literal is set to true, all literals that can be reached from it via a chain of implications will already have been set to true. Symmetrically, when a literal x is set to false, all literals that lead to it via a chain of implications will themselves already have been set to false. Therefore, the truth assignment constructed by this procedure satisfies the given formula, which also completes the proof of correctness of the necessary and sufficient condition identified by Aspvall et al.[4]
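The strongly-connected-components approach can be sketched as follows, here using Kosaraju's two-pass depth-first search for the components. The literal encoding (literal i for variable i true, literal i + n for variable i false) is an illustrative choice, and the recursive DFS limits this sketch to small instances.

```python
def solve_2sat(n, clauses):
    """2-SAT via strongly connected components (Aspvall-Plass-Tarjan).

    `clauses` is a list of literal pairs; literal i (0 <= i < n) means
    variable i is true, literal i + n means variable i is false.
    Returns a satisfying assignment as a list of booleans, or None.
    """
    N = 2 * n
    graph = [[] for _ in range(N)]
    rgraph = [[] for _ in range(N)]

    def neg(lit):
        return (lit + n) % N

    for a, b in clauses:
        # (a or b) is equivalent to (not a -> b) and (not b -> a).
        graph[neg(a)].append(b)
        graph[neg(b)].append(a)
        rgraph[b].append(neg(a))
        rgraph[a].append(neg(b))

    # Kosaraju pass 1: record vertices in order of DFS finish time.
    order, seen = [], [False] * N

    def dfs1(u):
        seen[u] = True
        for v in graph[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)

    for u in range(N):
        if not seen[u]:
            dfs1(u)

    # Pass 2: label components on the reversed graph, in decreasing
    # finish order; labels then follow the topological order of the
    # condensation of the implication graph.
    comp = [-1] * N

    def dfs2(u, label):
        comp[u] = label
        for v in rgraph[u]:
            if comp[v] == -1:
                dfs2(v, label)

    label = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs2(u, label)
            label += 1

    # A variable in the same component as its negation: unsatisfiable.
    if any(comp[i] == comp[i + n] for i in range(n)):
        return None
    # Otherwise set a variable true iff its positive literal's
    # component comes later in topological order than its negation's.
    return [comp[i] > comp[i + n] for i in range(n)]
```

Setting the literal whose component comes later (closer to the sinks of the condensation) to true is exactly the reverse-topological-order rule whose correctness is argued above.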
As Aspvall et al. show, a similar procedure involving topologically ordering the strongly connected components of the implication graph may also be used to evaluate fully quantified Boolean formulae in which the formula being quantified is a 2-CNF formula.[4]
A number of exact and approximate algorithms for the automatic label placement problem are based on 2-satisfiability. This problem concerns placing textual labels on the features of a diagram or map. Typically, the set of possible locations for each label is highly constrained, not only by the map itself (each label must be near the feature it labels, and must not obscure other features), but by each other: every two labels should avoid overlapping each other, for otherwise they would become illegible. In general, finding a label placement that obeys these constraints is an NP-hard problem. However, if each feature has only two possible locations for its label (say, extending to the left and to the right of the feature) then label placement may be solved in polynomial time. For, in this case, one may create a 2-satisfiability instance that has a variable for each label and that has a clause for each pair of labels that could overlap, preventing them from being assigned overlapping positions. If the labels are all congruent rectangles, the corresponding 2-satisfiability instance can be shown to have only linearly many constraints, leading to near-linear time algorithms for finding a labeling.[10] Poon, Zhu & Chin (1998) describe a map labeling problem in which each label is a rectangle that may be placed in one of three positions with respect to a line segment that it labels: it may have the segment as one of its sides, or it may be centered on the segment. They represent these three positions using two binary variables in such a way that, again, testing the existence of a valid labeling becomes a 2-satisfiability problem.[11]
Formann & Wagner (1991) use 2-satisfiability as part of an approximation algorithm for the problem of finding square labels of the largest possible size for a given set of points, with the constraint that each label has one of its corners on the point that it labels. To find a labeling with a given size, they eliminate squares that, if doubled, would overlap another point, and they eliminate points that can be labeled in a way that cannot possibly overlap with another point's label. They show that these elimination rules cause the remaining points to have only two possible label placements per point, allowing a valid label placement (if one exists) to be found as the solution to a 2-satisfiability instance. By searching for the largest label size that leads to a solvable 2-satisfiability instance, they find a valid label placement whose labels are at least half as large as the optimal solution. That is, the approximation ratio of their algorithm is at most two.[10][12] Similarly, if each label is rectangular and must be placed in such a way that the point it labels is somewhere along its bottom edge, then using 2-satisfiability to find the largest label size for which there is a solution in which each label has the point on a bottom corner leads to an approximation ratio of at most two.[13]
Similar applications of 2-satisfiability have been made for other geometric placement problems. In graph drawing, if the vertex locations are fixed and each edge must be drawn as a circular arc with one of two possible locations (for instance as an arc diagram), then the problem of choosing which arc to use for each edge in order to avoid crossings is a 2-satisfiability problem with a variable for each edge and a constraint for each pair of placements that would lead to a crossing. However, in this case it is possible to speed up the solution, compared to an algorithm that builds and then searches an explicit representation of the implication graph, by searching the graph implicitly.[14] In VLSI integrated circuit design, if a collection of modules must be connected by wires that can each bend at most once, then again there are two possible routes for the wires, and the problem of choosing which of these two routes to use, in such a way that all wires can be routed in a single layer of the circuit, can be solved as a 2-satisfiability instance.[15]
Boros et al. (1999) consider another VLSI design problem: the question of whether or not to mirror-reverse each module in a circuit design. This mirror reversal leaves the module's operations unchanged, but it changes the order of the points at which the input and output signals of the module connect to it, possibly changing how well the module fits into the rest of the design. Boros et al. consider a simplified version of the problem in which the modules have already been placed along a single linear channel, in which the wires between modules must be routed, and there is a fixed bound on the density of the channel (the maximum number of signals that must pass through any cross-section of the channel). They observe that this version of the problem may be solved as a 2-satisfiability instance, in which the constraints relate the orientations of pairs of modules that are directly across the channel from each other. As a consequence, the optimal density may also be calculated efficiently, by performing a binary search in which each step involves the solution of a 2-satisfiability instance.[16]
One way of clustering a set of data points in a metric space into two clusters is to choose the clusters in such a way as to minimize the sum of the diameters of the clusters, where the diameter of any single cluster is the largest distance between any two of its points. This is preferable to minimizing the maximum cluster size, which may lead to very similar points being assigned to different clusters. If the target diameters of the two clusters are known, a clustering that achieves those targets may be found by solving a 2-satisfiability instance. The instance has one variable per point, indicating whether that point belongs to the first cluster or the second cluster. Whenever any two points are too far apart from each other for both to belong to the same cluster, a clause is added to the instance that prevents this assignment.
The same method also can be used as a subroutine when the individual cluster diameters are unknown. To test whether a given sum of diameters can be achieved without knowing the individual cluster diameters, one may try all maximal pairs of target diameters that add up to at most the given sum, representing each pair of diameters as a 2-satisfiability instance and using a 2-satisfiability algorithm to determine whether that pair can be realized by a clustering. To find the optimal sum of diameters one may perform a binary search in which each step is a feasibility test of this type. The same approach also works to find clusterings that optimize other combinations than sums of the cluster diameters, and that use arbitrary dissimilarity numbers (rather than distances in a metric space) to measure the size of a cluster.[17] The time bound for this algorithm is dominated by the time to solve a sequence of 2-satisfiability instances that are closely related to each other, and Ramnath (2004) shows how to solve these related instances more quickly than if they were solved independently from each other, leading to a total time bound of O(n³) for the sum-of-diameters clustering problem.[18]
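The feasibility test above can be sketched in code. The following is a minimal illustration, not the algorithm from the cited papers: `solve_2sat` is a standard implication-graph solver (Kosaraju's strongly connected components), and `two_cluster` builds the clauses described in the text, one variable per point and one clause per pair of points too far apart to share a cluster. All function names are illustrative.

```python
from itertools import combinations

def solve_2sat(n, clauses):
    """Solve 2-SAT over variables 1..n; a clause (a, b) means (a OR b),
    with literal +i for variable i and -i for its negation.
    Returns a satisfying assignment as a list of n booleans, or None."""
    def node(lit):
        v = abs(lit) - 1
        return 2 * v if lit > 0 else 2 * v + 1
    N = 2 * n
    adj = [[] for _ in range(N)]
    radj = [[] for _ in range(N)]
    for a, b in clauses:           # (a or b) == (not a -> b) and (not b -> a)
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            adj[u].append(v)
            radj[v].append(u)
    # Kosaraju pass 1: postorder of an iterative DFS on the implication graph.
    seen = [False] * N
    order = []
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            advanced = False
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(adj[v])))
                    advanced = True
                    break
            if not advanced:
                order.append(u)
                stack.pop()
    # Kosaraju pass 2: components numbered in topological order (sources first).
    comp = [-1] * N
    c = 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1
    result = []
    for v in range(n):
        if comp[2 * v] == comp[2 * v + 1]:
            return None            # x and not-x in the same component
        result.append(comp[2 * v] > comp[2 * v + 1])  # prefer the sink-ward literal
    return result

def two_cluster(points, dist, d1, d2):
    """Test whether points split into clusters of diameter <= d1 and <= d2.
    Variable i+1 true means point i goes in cluster 1."""
    n = len(points)
    clauses = []
    for i, j in combinations(range(n), 2):
        d = dist(points[i], points[j])
        if d > d1:                 # not both in cluster 1
            clauses.append((-(i + 1), -(j + 1)))
        if d > d2:                 # not both in cluster 2
            clauses.append((i + 1, j + 1))
    return solve_2sat(n, clauses)
```

Wrapping this test in a search over target-diameter pairs, as the text describes, yields the sum-of-diameters algorithm.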
Even, Itai & Shamir (1976) consider a model of classroom scheduling in which a set of n teachers must be scheduled to teach each of m cohorts of students. The number of hours per week that teacher i spends with cohort j is described by entry Rij of a matrix R given as input to the problem, and each teacher also has a set of hours during which he or she is available to be scheduled. As they show, the problem is NP-complete, even when each teacher has at most three available hours, but it can be solved as an instance of 2-satisfiability when each teacher only has two available hours. (Teachers with only a single available hour may easily be eliminated from the problem.) In this problem, each variable vij corresponds to an hour that teacher i must spend with cohort j, the assignment to the variable specifies whether that hour is the first or the second of the teacher's available hours, and there is a 2-satisfiability clause preventing any conflict of either of two types: two cohorts assigned to a teacher at the same time as each other, or one cohort assigned to two teachers at the same time.[5]
Miyashiro & Matsui (2005) apply 2-satisfiability to a problem of sports scheduling, in which the pairings of a round-robin tournament have already been chosen and the games must be assigned to the teams' stadiums. In this problem, it is desirable to alternate home and away games to the extent possible, avoiding "breaks" in which a team plays two home games in a row or two away games in a row. At most two teams can avoid breaks entirely, alternating between home and away games; no other team can have the same home-away schedule as these two, because then it would be unable to play the team with which it had the same schedule. Therefore, an optimal schedule has two breakless teams and a single break for every other team. Once one of the breakless teams is chosen, one can set up a 2-satisfiability problem in which each variable represents the home-away assignment for a single team in a single game, and the constraints enforce the properties that any two teams have a consistent assignment for their games, that each team have at most one break before and at most one break after the game with the breakless team, and that no team has two breaks. Therefore, testing whether a schedule admits a solution with the optimal number of breaks can be done by solving a linear number of 2-satisfiability problems, one for each choice of the breakless team. A similar technique also allows finding schedules in which every team has a single break, and maximizing rather than minimizing the number of breaks (to reduce the total mileage traveled by the teams).[19]
Tomography is the process of recovering shapes from their cross-sections. In discrete tomography, a simplified version of the problem that has been frequently studied, the shape to be recovered is a polyomino (a subset of the squares in the two-dimensional square lattice), and the cross-sections provide aggregate information about the sets of squares in individual rows and columns of the lattice. For instance, in the popular nonogram puzzles, also known as paint by numbers or griddlers, the set of squares to be determined represents the dark pixels in a binary image, and the input given to the puzzle solver tells him or her how many consecutive blocks of dark pixels to include in each row or column of the image, and how long each of those blocks should be. In other forms of digital tomography, even less information about each row or column is given: only the total number of squares, rather than the number and length of the blocks of squares. An equivalent version of the problem is that we must recover a given 0–1 matrix given only the sums of the values in each row and in each column of the matrix.
Although there exist polynomial-time algorithms to find a matrix having given row and column sums,[20] the solution may be far from unique: any submatrix in the form of a 2 × 2 identity matrix can be complemented without affecting the correctness of the solution. Therefore, researchers have searched for constraints on the shape to be reconstructed that can be used to restrict the space of solutions. For instance, one might assume that the shape is connected; however, testing whether there exists a connected solution is NP-complete.[21] An even more constrained version that is easier to solve is that the shape is orthogonally convex: having a single contiguous block of squares in each row and column.
Improving several previous solutions, Chrobak & Dürr (1999) showed how to reconstruct connected orthogonally convex shapes efficiently, using 2-SAT.[22] The idea of their solution is to guess the indexes of rows containing the leftmost and rightmost cells of the shape to be reconstructed, and then to set up a 2-satisfiability problem that tests whether there exists a shape consistent with these guesses and with the given row and column sums. They use four 2-satisfiability variables for each square that might be part of the given shape, one to indicate whether it belongs to each of four possible "corner regions" of the shape, and they use constraints that force these regions to be disjoint, to have the desired shapes, to form an overall shape with contiguous rows and columns, and to have the desired row and column sums. Their algorithm takes time O(m³n) where m is the smaller of the two dimensions of the input shape and n is the larger of the two dimensions. The same method was later extended to orthogonally convex shapes that might be connected only diagonally instead of requiring orthogonal connectivity.[23]
As part of a solver for full nonogram puzzles, Batenburg and Kosters (2008, 2009) used 2-satisfiability to combine information obtained from several other heuristics. Given a partial solution to the puzzle, they use dynamic programming within each row or column to determine whether the constraints of that row or column force any of its squares to be white or black, and whether any two squares in the same row or column can be connected by an implication relation. They also transform the nonogram into a digital tomography problem by replacing the sequence of block lengths in each row and column by its sum, and use a maximum flow formulation to determine whether this digital tomography problem combining all of the rows and columns has any squares whose state can be determined or pairs of squares that can be connected by an implication relation. If either of these two heuristics determines the value of one of the squares, it is included in the partial solution and the same calculations are repeated. However, if both heuristics fail to set any squares, the implications found by both of them are combined into a 2-satisfiability problem and a 2-satisfiability solver is used to find squares whose value is fixed by the problem, after which the procedure is again repeated. This procedure may or may not succeed in finding a solution, but it is guaranteed to run in polynomial time. Batenburg and Kosters report that, although most newspaper puzzles do not need its full power, both this procedure and a more powerful but slower procedure which combines this 2-satisfiability approach with the limited backtracking of Even, Itai & Shamir (1976)[5] are significantly more effective than the dynamic programming and flow heuristics without 2-satisfiability when applied to more difficult randomly generated nonograms.[24]
Next to 2-satisfiability, the other major subclass of satisfiability problems that can be solved in polynomial time is Horn-satisfiability. In this class of satisfiability problems, the input is again a formula in conjunctive normal form. It can have arbitrarily many literals per clause but at most one positive literal. Lewis (1978) found a generalization of this class, renamable Horn satisfiability, that can still be solved in polynomial time by means of an auxiliary 2-satisfiability instance. A formula is renamable Horn when it is possible to put it into Horn form by replacing some variables by their negations. To do so, Lewis sets up a 2-satisfiability instance with one variable for each variable of the renamable Horn instance, where the 2-satisfiability variables indicate whether or not to negate the corresponding renamable Horn variables.
In order to produce a Horn instance, no two variables that appear in the same clause of the renamable Horn instance should appear positively in that clause; this constraint on a pair of variables is a 2-satisfiability constraint. By finding a satisfying assignment to the resulting 2-satisfiability instance, Lewis shows how to turn any renamable Horn instance into a Horn instance in polynomial time.[25]By breaking up long clauses into multiple smaller clauses, and applying a linear-time 2-satisfiability algorithm, it is possible to reduce this to linear time.[26]
2-satisfiability has also been applied to problems of recognizing undirected graphs that can be partitioned into an independent set and a small number of complete bipartite subgraphs,[27] inferring business relationships among autonomous subsystems of the internet,[28] and reconstruction of evolutionary trees.[29]
A nondeterministic algorithm for determining whether a 2-satisfiability instance is not satisfiable, using only a logarithmic amount of writable memory, is easy to describe: simply choose (nondeterministically) a variable v and search (nondeterministically) for a chain of implications leading from v to its negation and then back to v. If such a chain is found, the instance cannot be satisfiable. By the Immerman–Szelepcsényi theorem, it is also possible in nondeterministic logspace to verify that a satisfiable 2-satisfiability instance is satisfiable.
2-satisfiability is NL-complete,[30] meaning that it is one of the "hardest" or "most expressive" problems in the complexity class NL of problems solvable nondeterministically in logarithmic space. Completeness here means that a deterministic Turing machine using only logarithmic space can transform any other problem in NL into an equivalent 2-satisfiability problem. Analogously to similar results for the more well-known complexity class NP, this transformation together with the Immerman–Szelepcsényi theorem allow any problem in NL to be represented as a second-order logic formula with a single existentially quantified predicate with clauses limited to length 2. Such formulae are known as SO-Krom.[31] Similarly, the implicative normal form can be expressed in first-order logic with the addition of an operator for transitive closure.[31]
The set of all solutions to a 2-satisfiability instance has the structure of a median graph, in which an edge corresponds to the operation of flipping the values of a set of variables that are all constrained to be equal or unequal to each other. In particular, by following edges in this way one can get from any solution to any other solution. Conversely, any median graph can be represented as the set of solutions to a 2-satisfiability instance in this way. The median of any three solutions is formed by setting each variable to the value it holds in the majority of the three solutions. This median always forms another solution to the instance.[32]
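The median property above is easy to demonstrate concretely. The following small sketch (illustrative names, not from the cited source) takes the coordinatewise majority of three satisfying assignments and checks that it still satisfies the instance.

```python
def majority_median(a, b, c):
    """Set each variable to the value it takes in at least two of a, b, c."""
    return [(x + y + z) >= 2 for x, y, z in zip(a, b, c)]

def satisfies(assignment, clauses):
    """Clauses are pairs of literals: +i for variable i (1-based), -i for
    its negation; each clause is the disjunction of its two literals."""
    val = lambda lit: assignment[abs(lit) - 1] == (lit > 0)
    return all(val(a) or val(b) for a, b in clauses)
```

For the instance (x1 ∨ x2) ∧ (¬x1 ∨ x3), the majority of any three solutions is again a solution, in line with the median-graph structure described above.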
Feder (1994) describes an algorithm for efficiently listing all solutions to a given 2-satisfiability instance, and for solving several related problems.[33] There also exist algorithms for finding two satisfying assignments that have the maximal Hamming distance from each other.[34]
#2SAT is the problem of counting the number of satisfying assignments to a given 2-CNF formula. This counting problem is #P-complete,[35] which implies that it is not solvable in polynomial time unless P = NP. Moreover, there is no fully polynomial randomized approximation scheme for #2SAT unless NP = RP, and this even holds when the input is restricted to monotone 2-CNF formulas, i.e., 2-CNF formulas in which each literal is a positive occurrence of a variable.[36]
The fastest known algorithm for computing the exact number of satisfying assignments to a 2SAT formula runs in time O(1.2377^n).[37][38][39]
One can form a 2-satisfiability instance at random, for a given number n of variables and m of clauses, by choosing each clause uniformly at random from the set of all possible two-variable clauses. When m is small relative to n, such an instance will likely be satisfiable, but larger values of m have smaller probabilities of being satisfiable. More precisely, if m/n is fixed as a constant α ≠ 1, the probability of satisfiability tends to a limit as n goes to infinity: if α < 1, the limit is one, while if α > 1, the limit is zero. Thus, the problem exhibits a phase transition at α = 1.[40]
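This random model is simple to experiment with. The sketch below (illustrative function names, brute-force satisfiability check suitable only for small n; a serious experiment would use a linear-time 2-SAT solver) samples random instances at a given clause-to-variable ratio α and estimates the fraction that are satisfiable.

```python
import random
from itertools import product

def random_instance(n, m, rng):
    """m random clauses, each over two distinct variables with random signs;
    literals are +i / -i for 1-based variable i."""
    clauses = []
    for _ in range(m):
        a, b = rng.sample(range(1, n + 1), 2)
        clauses.append((a * rng.choice([-1, 1]), b * rng.choice([-1, 1])))
    return clauses

def satisfiable(n, clauses):
    """Brute-force check over all 2^n assignments (small n only)."""
    holds = lambda asg, l: asg[abs(l) - 1] == (l > 0)
    return any(all(holds(asg, a) or holds(asg, b) for a, b in clauses)
               for asg in product([False, True], repeat=n))

def fraction_sat(n, alpha, trials, seed=0):
    """Estimate the probability of satisfiability at ratio alpha = m/n."""
    rng = random.Random(seed)
    m = int(alpha * n)
    hits = sum(satisfiable(n, random_instance(n, m, rng)) for _ in range(trials))
    return hits / trials
```

Plotting `fraction_sat` against α for growing n would show the curve sharpening around the threshold α = 1 described above.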
In the maximum-2-satisfiability problem (MAX-2-SAT), the input is a formula in conjunctive normal form with two literals per clause, and the task is to determine the maximum number of clauses that can be simultaneously satisfied by an assignment. Like the more general maximum satisfiability problem, MAX-2-SAT is NP-hard. The proof is by reduction from 3SAT.[41]
By formulating MAX-2-SAT as a problem of finding a cut (that is, a partition of the vertices into two subsets) maximizing the number of edges that have one endpoint in the first subset and one endpoint in the second, in a graph related to the implication graph, and applying semidefinite programming methods to this cut problem, it is possible to find in polynomial time an approximate solution that satisfies at least 0.940... times the optimal number of clauses.[42] A balanced MAX 2-SAT instance is an instance of MAX 2-SAT where every variable appears positively and negatively with equal weight. For this problem, Austrin has improved the approximation ratio to min{ (3 − cos θ)^(−1) (2 + (2/π)θ) : π/2 ≤ θ ≤ π } = 0.943... .[43]
If the unique games conjecture is true, then it is impossible to approximate MAX 2-SAT, balanced or not, with an approximation constant better than 0.943... in polynomial time.[44] Under the weaker assumption that P ≠ NP, the problem is only known to be inapproximable within a constant better than 21/22 = 0.95454...[45]
Various authors have also explored exponential worst-case time bounds for exact solution of MAX-2-SAT instances.[46]
In the weighted 2-satisfiability problem (W2SAT), the input is an n-variable 2SAT instance and an integer k, and the problem is to decide whether there exists a satisfying assignment in which exactly k of the variables are true.[47]
The W2SAT problem includes as a special case the vertex cover problem, of finding a set of k vertices that together touch all the edges of a given undirected graph. For any given instance of the vertex cover problem, one can construct an equivalent W2SAT problem with a variable for each vertex of a graph. Each edge uv of the graph may be represented by a 2SAT clause u ∨ v that can be satisfied only by including either u or v among the true variables of the solution. Then the satisfying instances of the resulting 2SAT formula encode solutions to the vertex cover problem, and there is a satisfying assignment with k true variables if and only if there is a vertex cover with k vertices. Therefore, like vertex cover, W2SAT is NP-complete.
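The reduction in this paragraph can be sketched directly. In the illustrative code below (names are hypothetical), each edge becomes one clause, and a brute-force check of the W2SAT question — is there a satisfying assignment with exactly k true variables? — answers the vertex cover question; a real W2SAT instance would of course not be solved by brute force.

```python
from itertools import combinations

def vertex_cover_to_2sat(edges):
    """Vertices are 0..n-1; each edge (u, v) becomes the clause (u OR v),
    written with 1-based positive literals."""
    return [(u + 1, v + 1) for u, v in edges]

def has_cover(n, edges, k):
    """Brute-force W2SAT check: does some assignment with exactly k true
    variables satisfy all the edge clauses? Equivalently: is there a
    vertex cover of size k?"""
    clauses = vertex_cover_to_2sat(edges)
    for true_set in combinations(range(n), k):
        s = set(true_set)
        if all((a - 1) in s or (b - 1) in s for a, b in clauses):
            return True
    return False
```

For the path graph on four vertices, a cover of size 2 exists (the two middle vertices) but no cover of size 1 does, matching the correspondence described above.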
Moreover, in parameterized complexity W2SAT provides a natural W[1]-complete problem,[47] which implies that W2SAT is not fixed-parameter tractable unless this holds for all problems in W[1]. That is, it is unlikely that there exists an algorithm for W2SAT whose running time takes the form f(k)·n^O(1). Even more strongly, W2SAT cannot be solved in time n^o(k) unless the exponential time hypothesis fails.[48]
As well as finding the first polynomial-time algorithm for 2-satisfiability, Krom (1967) also formulated the problem of evaluating fully quantified Boolean formulae in which the formula being quantified is a 2-CNF formula. The 2-satisfiability problem is the special case of this quantified 2-CNF problem, in which all quantifiers are existential. Krom also developed an effective decision procedure for these formulae. Aspvall, Plass & Tarjan (1979) showed that it can be solved in linear time, by an extension of their technique of strongly connected components and topological ordering.[2][4]
The 2-satisfiability problem can also be asked for propositional many-valued logics. The algorithms are not usually linear, and for some logics the problem is even NP-complete. See Hähnle (2001, 2003) for surveys.[49]
https://en.wikipedia.org/wiki/2-satisfiability
In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) asks whether there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the formula's variables can be consistently replaced by the values TRUE or FALSE to make the formula evaluate to TRUE. If this is the case, the formula is called satisfiable, else unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
SAT is the first problem that was proven to be NP-complete — this is the Cook–Levin theorem. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem (where "efficiently" informally means "deterministically in polynomial time"), and it is generally believed that no such algorithm exists, but this belief has not been proven mathematically, and resolving the question of whether SAT has a polynomial-time algorithm is equivalent to the P versus NP problem, which is a famous open problem in the theory of computing.
Nevertheless, as of 2007, heuristic SAT-algorithms are able to solve problem instances involving tens of thousands of variables and formulas consisting of millions of symbols,[1] which is sufficient for many practical SAT problems from, e.g., artificial intelligence, circuit design,[2] and automatic theorem proving.
A propositional logic formula, also called Boolean expression, is built from variables, operators AND (conjunction, also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values (i.e. TRUE, FALSE) to its variables. The Boolean satisfiability problem (SAT) is, given a formula, to check whether it is satisfiable. This decision problem is of central importance in many areas of computer science, including theoretical computer science, complexity theory,[3][4] algorithmics, cryptography[5][6] and artificial intelligence.[7]
A literal is either a variable (in which case it is called a positive literal) or the negation of a variable (called a negative literal). A clause is a disjunction of literals (or a single literal). A clause is called a Horn clause if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of clauses (or a single clause).
For example, x1 is a positive literal, ¬x2 is a negative literal, and x1 ∨ ¬x2 is a clause. The formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is in conjunctive normal form; its first and third clauses are Horn clauses, but its second clause is not. The formula is satisfiable, by choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, since (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x3) ∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE (i.e. to TRUE). In contrast, the CNF formula a ∧ ¬a, consisting of two clauses of one literal, is unsatisfiable, since for a = TRUE or a = FALSE it evaluates to TRUE ∧ ¬TRUE (i.e., FALSE) or FALSE ∧ ¬FALSE (i.e., again FALSE), respectively.
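The worked evaluation above can be checked mechanically. The following minimal sketch (illustrative encoding: a clause is a list of signed integers, +i for xi and -i for ¬xi) evaluates a CNF formula under a given assignment.

```python
def eval_cnf(clauses, assignment):
    """assignment maps variable index to a bool; the formula is the
    conjunction of its clauses, each the disjunction of its literals."""
    value = lambda lit: assignment[abs(lit)] == (lit > 0)
    return all(any(value(lit) for lit in clause) for clause in clauses)

# (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1, the example from the text
formula = [[1, -2], [-1, 2, 3], [-1]]
```

With x1 = FALSE and x2 = FALSE the formula evaluates to TRUE for either value of x3, exactly as computed by hand above, and a ∧ ¬a (encoded as `[[1], [-1]]`) is false under both assignments.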
For some versions of the SAT problem, it is useful to define the notion of a generalized conjunctive normal form formula, viz. as a conjunction of arbitrarily many generalized clauses, the latter being of the form R(l1,...,ln) for some Boolean function R and (ordinary) literals li. Different sets of allowed Boolean functions lead to different problem versions. As an example, R(¬x, a, b) is a generalized clause, and R(¬x, a, b) ∧ R(b, y, c) ∧ R(c, d, ¬z) is a generalized conjunctive normal form. This formula is used below, with R being the ternary operator that is TRUE just when exactly one of its arguments is.
Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which may, however, be exponentially longer. For example, transforming the formula (x1 ∧ y1) ∨ (x2 ∧ y2) ∨ ... ∨ (xn ∧ yn) into conjunctive normal form yields
while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2^n clauses of n variables.
However, with use of the Tseytin transformation, we may find an equisatisfiable conjunctive normal form formula with length linear in the size of the original propositional logic formula.
SAT was the first problem known to be NP-complete, as proved by Stephen Cook at the University of Toronto in 1971[8] and independently by Leonid Levin at the Russian Academy of Sciences in 1973.[9] Until that time, the concept of an NP-complete problem did not even exist. The proof shows how every decision problem in the complexity class NP can be reduced to the SAT problem for CNF[a] formulas, sometimes called CNFSAT. A useful property of Cook's reduction is that it preserves the number of accepting answers. For example, deciding whether a given graph has a 3-coloring is another problem in NP; if a graph has 17 valid 3-colorings, then the SAT formula produced by the Cook–Levin reduction will have 17 satisfying assignments.
NP-completeness only refers to the run-time of the worst case instances. Many of the instances that occur in practical applications can be solved much more quickly. See § Algorithms for solving SAT below.
Like the satisfiability problem for arbitrary formulas, determining the satisfiability of a formula in conjunctive normal form where each clause is limited to at most three literals is NP-complete also; this problem is called 3-SAT, 3CNFSAT, or 3-satisfiability. To reduce the unrestricted SAT problem to 3-SAT, transform each clause l1 ∨ ⋯ ∨ ln to a conjunction of n − 2 clauses
where x2, ⋯, xn−2 are fresh variables not occurring elsewhere. Although the two formulas are not logically equivalent, they are equisatisfiable. The formula resulting from transforming all clauses is at most 3 times as long as its original; that is, the length growth is polynomial.[10]
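The clause-splitting step can be sketched as follows. This is an illustrative implementation (hypothetical function name; literals are signed integers, and fresh variables are numbered from `next_fresh` upward) of the transformation just described: a clause of n > 3 literals becomes a chain of n − 2 three-literal clauses.

```python
def split_clause(clause, next_fresh):
    """Split l1 ∨ ... ∨ ln (as a list of signed-integer literals) into an
    equisatisfiable list of clauses with at most 3 literals each, using
    fresh variables next_fresh, next_fresh + 1, ....
    Returns (clauses, next unused fresh variable)."""
    n = len(clause)
    if n <= 3:
        return [list(clause)], next_fresh
    out = [[clause[0], clause[1], next_fresh]]   # l1 ∨ l2 ∨ x2
    x = next_fresh
    for lit in clause[2:-2]:                     # ¬xi ∨ l(i+1) ∨ x(i+1)
        out.append([-x, lit, x + 1])
        x += 1
    out.append([-x, clause[-2], clause[-1]])     # ¬x(n-2) ∨ l(n-1) ∨ ln
    return out, x + 1
```

For a 5-literal clause this produces exactly 5 − 2 = 3 clauses and consumes two fresh variables, matching the count stated above.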
3-SAT is one of Karp's 21 NP-complete problems, and it is used as a starting point for proving that other problems are also NP-hard.[b] This is done by polynomial-time reduction from 3-SAT to the other problem. An example of a problem where this method has been used is the clique problem: given a CNF formula consisting of c clauses, the corresponding graph consists of a vertex for each literal, and an edge between each two non-contradicting[c] literals from different clauses; see the picture. The graph has a c-clique if and only if the formula is satisfiable.[11]
There is a simple randomized algorithm due to Schöning (1999) that runs in time (4/3)^n where n is the number of variables in the 3-SAT proposition, and succeeds with high probability to correctly decide 3-SAT.[12]
The exponential time hypothesis asserts that no algorithm can solve 3-SAT (or indeed k-SAT for any k > 2) in exp(o(n)) time (that is, fundamentally faster than exponential in n).
Selman, Mitchell, and Levesque (1996) give empirical data on the difficulty of randomly generated 3-SAT formulas, depending on their size parameters. Difficulty is measured in the number of recursive calls made by a DPLL algorithm. They identified a phase transition region from almost-certainly-satisfiable to almost-certainly-unsatisfiable formulas at a clauses-to-variables ratio of about 4.26.[13]
3-satisfiability can be generalized to k-satisfiability (k-SAT, also k-CNF-SAT), when formulas in CNF are considered with each clause containing up to k literals. However, for any k ≥ 3, this problem can neither be easier than 3-SAT nor harder than SAT, and the latter two are NP-complete, so k-SAT must be NP-complete as well.
Some authors restrict k-SAT to CNF formulas with exactly k literals. This does not lead to a different complexity class either, as each clause l1 ∨ ⋯ ∨ lj with j < k literals can be padded with fixed dummy variables to l1 ∨ ⋯ ∨ lj ∨ dj+1 ∨ ⋯ ∨ dk. After padding all clauses, 2^k − 1 extra clauses[d] must be appended to ensure that only d1 = ⋯ = dk = FALSE can lead to a satisfying assignment. Since k does not depend on the formula length, the extra clauses lead to a constant increase in length. For the same reason, it does not matter whether duplicate literals are allowed in clauses, as in ¬x ∨ ¬y ∨ ¬y.
Conjunctive normal form (in particular with 3 literals per clause) is often considered the canonical representation for SAT formulas. As shown above, the general SAT problem reduces to 3-SAT, the problem of determining satisfiability for formulas in this form.
SAT is trivial if the formulas are restricted to those in disjunctive normal form, that is, they are a disjunction of conjunctions of literals. Such a formula is indeed satisfiable if and only if at least one of its conjunctions is satisfiable, and a conjunction is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be checked in linear time. Furthermore, if they are restricted to being in full disjunctive normal form, in which every variable appears exactly once in every conjunction, they can be checked in constant time (each conjunction represents one satisfying assignment). But it can take exponential time and space to convert a general SAT problem to disjunctive normal form; to obtain an example, exchange "∧" and "∨" in the above exponential blow-up example for conjunctive normal forms.
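The linear-time DNF check described here is a one-liner per term. A minimal sketch (illustrative name; literals are signed integers, +i for xi and -i for ¬xi):

```python
def dnf_satisfiable(terms):
    """terms: list of conjunctions, each a list of literals.
    Satisfiable iff some conjunction contains no complementary pair."""
    for term in terms:
        seen = set(term)
        if not any(-lit in seen for lit in term):
            return True   # this conjunction is internally consistent
    return False
```

Each term is scanned once, so the total work is linear in the formula size, as stated above.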
Another NP-complete variant of the 3-satisfiability problem is the one-in-three 3-SAT (also known variously as 1-in-3-SAT and exactly-1 3-SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine whether there exists a truth assignment to the variables so that each clause has exactly one TRUE literal (and thus exactly two FALSE literals).
Another variant is the not-all-equal 3-satisfiability problem (also called NAE3SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine if an assignment to the variables exists such that in no clause all three literals have the same truth value. This problem is NP-complete, too, even if no negation symbols are admitted, by Schaefer's dichotomy theorem.[14]
A 3-SAT formula is Linear SAT (LSAT) if each clause (viewed as a set of literals) intersects at most one other clause, and, moreover, if two clauses intersect, then they have exactly one literal in common. An LSAT formula can be depicted as a set of disjoint semi-closed intervals on a line. Deciding whether an LSAT formula is satisfiable is NP-complete.[15]
SAT is easier if the number of literals in a clause is limited to at most 2, in which case the problem is called 2-SAT. This problem can be solved in polynomial time, and in fact is complete for the complexity class NL. If additionally all OR operations in literals are changed to XOR operations, then the result is called exclusive-or 2-satisfiability, which is a problem complete for the complexity class SL = L.
The problem of deciding the satisfiability of a given conjunction of Horn clauses is called Horn-satisfiability, or HORN-SAT. It can be solved in polynomial time by a single step of the unit propagation algorithm, which produces the single minimal model of the set of Horn clauses (w.r.t. the set of literals assigned to TRUE). Horn-satisfiability is P-complete. It can be seen as P's version of the Boolean satisfiability problem. Also, deciding the truth of quantified Horn formulas can be done in polynomial time.[16]
Horn clauses are of interest because they are able to express implication of one variable from a set of other variables. Indeed, one such clause ¬x1 ∨ ... ∨ ¬xn ∨ y can be rewritten as x1 ∧ ... ∧ xn → y; that is, if x1,...,xn are all TRUE, then y must be TRUE as well.
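The implication reading makes unit propagation easy to sketch. The following is an illustrative (quadratic, not the linear-time algorithm) implementation of the idea from the preceding paragraphs: each Horn clause is encoded as a rule (body, head), where the body is the set of variables x1,...,xn and the head is y, or None for a purely negative clause x1 ∧ ... ∧ xn → false; repeatedly firing rules whose bodies are satisfied yields the minimal model.

```python
def horn_sat(clauses):
    """clauses: list of (body, head) with body a set of variable indices
    and head a variable index or None (a purely negative clause).
    Returns the minimal set of true variables, or None if unsatisfiable."""
    true = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= true:               # all body variables derived TRUE
                if head is None:
                    return None            # body forced, but clause forbids it
                if head not in true:
                    true.add(head)         # fire the implication
                    changed = True
    return true
```

A fact is a rule with empty body; in the example below, variable 1 is a fact, 1 forces 2, and the rule with body {2, 3} never fires because 3 is never derived.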
A generalization of the class of Horn formulas is that of renameable-Horn formulae, which is the set of formulas that can be placed in Horn form by replacing some variables with their respective negation. For example, (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is not a Horn formula, but can be renamed to the Horn formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ ¬y3) ∧ ¬x1 by introducing y3 as the negation of x3. In contrast, no renaming of (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 leads to a Horn formula. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula.
Another special case is the class of problems where each clause contains XOR (i.e. exclusive or) rather than (plain) OR operators.[e] This is in P, since an XOR-SAT formula can also be viewed as a system of linear equations mod 2, and can be solved in cubic time by Gaussian elimination;[17] see the box for an example. This recast is based on the kinship between Boolean algebras and Boolean rings, and the fact that arithmetic modulo two forms a finite field. Since a XOR b XOR c evaluates to TRUE if and only if exactly 1 or 3 members of {a, b, c} are TRUE, each solution of the 1-in-3-SAT problem for a given CNF formula is also a solution of the XOR-3-SAT problem, and in turn each solution of XOR-3-SAT is a solution of 3-SAT; see the picture. As a consequence, for each CNF formula, it is possible to solve the XOR-3-SAT problem defined by the formula, and based on the result infer either that the 3-SAT problem is solvable or that the 1-in-3-SAT problem is unsolvable.
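The linear-algebra view can be made concrete. The sketch below (illustrative name and encoding) treats each XOR clause x_{i1} XOR ... XOR x_{ik} = b as a row of a 0/1 system and row-reduces over GF(2), packing each row into an integer bitmask; it reports inconsistency or returns one solution, with free variables defaulting to FALSE.

```python
def xor_sat(n, equations):
    """equations: list of (variables, parity) with variables a list of
    1-based indices; the clause asserts their XOR equals parity (0 or 1).
    Returns a satisfying assignment (list of n bools) or None."""
    # Encode each row as an (n+1)-bit mask: columns 0..n-1 plus the parity bit.
    pivots = {}                        # pivot column -> row mask
    for vars_, b in equations:
        mask = sum(1 << (v - 1) for v in set(vars_)) | (int(b) << n)
        for col in range(n):
            if not (mask >> col) & 1:
                continue
            if col in pivots:
                mask ^= pivots[col]    # eliminate this column from the row
            else:
                pivots[col] = mask     # new pivot; row's lowest set column
                break
        else:
            if (mask >> n) & 1:        # reduced to 0 = 1: inconsistent
                return None
    # Back-substitute from the highest pivot column; free variables = False.
    assignment = [False] * n
    for col in sorted(pivots, reverse=True):
        mask = pivots[col]
        val = (mask >> n) & 1
        for c in range(col + 1, n):
            if (mask >> c) & 1:
                val ^= assignment[c]
        assignment[col] = bool(val)
    return assignment
```

Gaussian elimination makes the cubic time bound quoted above concrete: O(rows × columns) bit-row operations, each O(n) with this encoding.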
Provided that the complexity classes P and NP are not equal, neither 2-, nor Horn-, nor XOR-satisfiability is NP-complete, unlike SAT.
The restrictions above (CNF, 2CNF, 3CNF, Horn, XOR-SAT) bound the considered formulae to be conjunctions of subformulas; each restriction states a specific form for all subformulas: for example, only binary clauses can be subformulas in 2CNF.
Schaefer's dichotomy theorem states that, for any restriction to Boolean functions that can be used to form these subformulas, the corresponding satisfiability problem is in P or NP-complete. The memberships in P of 2CNF, Horn, and XOR-SAT satisfiability are special cases of this theorem.[14]
The following table summarizes some common variants of SAT.
An extension that has gained significant popularity since 2003 is satisfiability modulo theories (SMT), which can enrich CNF formulas with linear constraints, arrays, all-different constraints, uninterpreted functions,[18] etc. Such extensions typically remain NP-complete, but very efficient solvers are now available that can handle many such kinds of constraints.
The satisfiability problem becomes more difficult if both "for all" (∀) and "there exists" (∃) quantifiers are allowed to bind the Boolean variables. An example of such an expression would be ∀x ∀y ∃z (x ∨ y ∨ z) ∧ (¬x ∨ ¬y ∨ ¬z); it is valid, since for all values of x and y, an appropriate value of z can be found, viz. z = TRUE if both x and y are FALSE, and z = FALSE else. SAT itself (tacitly) uses only ∃ quantifiers. If only ∀ quantifiers are allowed instead, the so-called tautology problem is obtained, which is co-NP-complete. If any number of both quantifiers are allowed, the problem is called the quantified Boolean formula problem (QBF), which can be shown to be PSPACE-complete. It is widely believed that PSPACE-complete problems are strictly harder than any problem in NP, although this has not yet been proved. Using highly parallel P systems, QBF-SAT problems can be solved in linear time.[19]
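The quantified example above is small enough to be checked by brute force over the Boolean cube; a minimal sketch:

```python
# Brute-force check of the formula ∀x ∀y ∃z (x ∨ y ∨ z) ∧ (¬x ∨ ¬y ∨ ¬z):
# enumerate every (x, y) and look for a witness z for each.
from itertools import product

def phi(x, y, z):
    return (x or y or z) and (not x or not y or not z)

is_valid = all(any(phi(x, y, z) for z in (False, True))
               for x, y in product((False, True), repeat=2))
print(is_valid)
```

The nesting of `all` and `any` mirrors the quantifier prefix ∀x ∀y ∃z directly.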
Ordinary SAT asks if there is at least one variable assignment that makes the formula true. A variety of variants deal with the number of such assignments:
Other generalizations include satisfiability for first- and second-order logic, constraint satisfaction problems, and 0-1 integer programming.
While SAT is a decision problem, the search problem of finding a satisfying assignment reduces to SAT. That is, each algorithm which correctly answers whether an instance of SAT is solvable can be used to find a satisfying assignment. First, the question is asked on the given formula Φ. If the answer is "no", the formula is unsatisfiable. Otherwise, the question is asked on the partly instantiated formula Φ{x1=TRUE}, that is, Φ with the first variable x1 replaced by TRUE, and simplified accordingly. If the answer is "yes", then x1=TRUE, otherwise x1=FALSE. Values of other variables can be found subsequently in the same way. In total, n+1 runs of the algorithm are required, where n is the number of distinct variables in Φ.
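The self-reduction just described can be sketched directly. `brute_force_oracle` below is only a stand-in for the assumed SAT decision procedure, and the CNF encoding (clauses as lists of signed integer literals) is this sketch's convention, not taken from the text.

```python
# Search-to-decision: use a SAT decision oracle n+1 times to build a
# satisfying assignment, fixing one variable per round as described above.
from itertools import product

def brute_force_oracle(clauses, n):
    """Stand-in decision procedure for a CNF over variables 1..n
    (literal k means x_k, literal -k means NOT x_k)."""
    return any(all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
               for bits in product((False, True), repeat=n))

def simplify(clauses, var, value):
    """Fix var := value: drop satisfied clauses, shorten the rest."""
    lit = var if value else -var
    return [[l for l in c if abs(l) != var] for c in clauses if lit not in c]

def find_assignment(clauses, n, oracle):
    if not oracle(clauses, n):
        return None                       # run 1: formula unsatisfiable
    assignment = {}
    for v in range(1, n + 1):             # runs 2 .. n+1
        trial = simplify(clauses, v, True)
        assignment[v] = oracle(trial, n)
        clauses = trial if assignment[v] else simplify(clauses, v, False)
    return assignment
```

`find_assignment` makes exactly n+1 oracle calls, matching the count given above.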
This property is used in several theorems in complexity theory:
Since the SAT problem is NP-complete, only algorithms with exponential worst-case complexity are known for it. In spite of this, efficient and scalable algorithms for SAT were developed during the 2000s and have contributed to dramatic advances in the ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints (i.e. clauses).[1] Examples of such problems in electronic design automation (EDA) include formal equivalence checking, model checking, formal verification of pipelined microprocessors,[18] automatic test pattern generation, routing of FPGAs,[26] planning, and scheduling problems, and so on. A SAT-solving engine is also considered to be an essential component in the electronic design automation toolbox.
Major techniques used by modern SAT solvers include the Davis–Putnam–Logemann–Loveland algorithm (DPLL), conflict-driven clause learning (CDCL), and stochastic local search algorithms such as WalkSAT. Almost all SAT solvers include time-outs, so they will terminate in reasonable time even if they cannot find a solution. Different SAT solvers find different instances easy or hard; some excel at proving unsatisfiability, and others at finding solutions. Recent attempts have been made to learn an instance's satisfiability using deep learning techniques.[27]
SAT solvers are developed and compared in SAT-solving contests.[28] Modern SAT solvers also have a significant impact on the fields of software verification, constraint solving in artificial intelligence, and operations research, among others.
|
https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
|
In theoretical computer science, the circuit satisfiability problem (also known as CIRCUIT-SAT, CircuitSAT, CSAT, etc.) is the decision problem of determining whether a given Boolean circuit has an assignment of its inputs that makes the output true.[1] In other words, it asks whether the inputs to a given Boolean circuit can be consistently set to 1 or 0 such that the circuit outputs 1. If that is the case, the circuit is called satisfiable; otherwise, the circuit is called unsatisfiable. In the figure to the right, the left circuit can be satisfied by setting both inputs to 1, but the right circuit is unsatisfiable.
CircuitSAT is closely related to the Boolean satisfiability problem (SAT), and likewise has been proven to be NP-complete.[2] It is a prototypical NP-complete problem; the Cook–Levin theorem is sometimes proved on CircuitSAT instead of on SAT, and then CircuitSAT can be reduced to the other satisfiability problems to prove their NP-completeness.[1][3] The satisfiability of a circuit containing m arbitrary binary gates can be decided in time O(2^{0.4058m}).[4]
Given a circuit and a satisfying set of inputs, one can compute the output of each gate in constant time. Hence, the output of the circuit is verifiable in polynomial time, and Circuit SAT belongs to the complexity class NP. To show NP-hardness, it is possible to construct a reduction from 3SAT to Circuit SAT.
Suppose the original 3SAT formula has variables x1, x2, …, xn, and operators (AND, OR, NOT) y1, y2, …, yk. Design a circuit such that it has an input corresponding to every variable and a gate corresponding to every operator, and connect the gates according to the 3SAT formula. For instance, if the 3SAT formula is (¬x1 ∧ x2) ∨ x3, the circuit will have 3 inputs, one AND, one OR, and one NOT gate. The input corresponding to x1 will be inverted before being sent to an AND gate with x2, and the output of the AND gate will be sent to an OR gate with x3.
Notice that the 3SAT formula is equivalent to the circuit designed above, so their outputs are the same for the same inputs. Hence, if the 3SAT formula has a satisfying assignment, then the corresponding circuit will output 1, and vice versa. So this is a valid reduction, and Circuit SAT is NP-hard.
This completes the proof that Circuit SAT is NP-complete.
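The equivalence used in the reduction can be illustrated on the example formula; the gate names below are this sketch's own labels, not part of any standard notation.

```python
# Check that the circuit built for (¬x1 ∧ x2) ∨ x3 computes exactly the
# 3SAT formula, so one is satisfiable iff the other is.
from itertools import product

def formula(x1, x2, x3):
    return (not x1 and x2) or x3

def circuit(x1, x2, x3):
    not_gate = not x1               # NOT gate inverting x1
    and_gate = not_gate and x2      # AND gate
    or_gate = and_gate or x3        # OR gate: circuit output
    return or_gate

same = all(formula(*v) == circuit(*v)
           for v in product((False, True), repeat=3))
```

Since `same` holds on all eight inputs, a satisfying assignment for the formula is exactly a satisfying input for the circuit.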
Assume that we are given a planar Boolean circuit (i.e. a Boolean circuit whose underlying graph is planar) containing only NAND gates with exactly two inputs. Planar Circuit SAT is the decision problem of determining whether this circuit has an assignment of its inputs that makes the output true. This problem is NP-complete. Moreover, if the restrictions are changed so that every gate in the circuit is a NOR gate, the resulting problem remains NP-complete.[5]
Circuit UNSAT is the decision problem of determining whether a given Boolean circuit outputs false for all possible assignments of its inputs. This is the complement of the Circuit SAT problem, and is therefore co-NP-complete.
Reduction from CircuitSAT or its variants can be used to show NP-hardness of certain problems, and provides us with an alternative to dual-rail and binary logic reductions. The gadgets that such a reduction needs to construct are:
This problem asks whether it is possible to locate all the bombs given a Minesweeper board. It has been proven to be co-NP-complete via a reduction from the Circuit UNSAT problem.[6] The gadgets constructed for this reduction are: wire, split, AND and NOT gates, and terminator.[7] There are three crucial observations regarding these gadgets. First, the split gadget can also be used as the NOT gadget and the turn gadget. Second, constructing AND and NOT gadgets is sufficient, because together they can simulate the universal NAND gate. Finally, since three NANDs can be composed intersection-free to implement an XOR, and since XOR is enough to build a crossover,[8] this gives us the needed crossover gadget.
The Tseytin transformation is a straightforward reduction from Circuit-SAT to SAT. The transformation is easy to describe if the circuit is wholly constructed out of 2-input NAND gates (a functionally complete set of Boolean operators): assign every net in the circuit a variable, then for each NAND gate, construct the conjunctive normal form clauses (v1 ∨ v3) ∧ (v2 ∨ v3) ∧ (¬v1 ∨ ¬v2 ∨ ¬v3), where v1 and v2 are the inputs to the NAND gate and v3 is the output. These clauses completely describe the relationship between the three variables. Conjoining the clauses from all the gates with an additional clause constraining the circuit's output variable to be true completes the reduction; an assignment of the variables satisfying all of the constraints exists if and only if the original circuit is satisfiable, and any solution is a solution to the original problem of finding inputs that make the circuit output 1.[1][9] The converse—that SAT is reducible to Circuit-SAT—follows trivially by rewriting the Boolean formula as a circuit and solving it.
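The per-gate clauses can be sketched in a few lines; the integer-variable and signed-literal encoding, and the small OR-from-NANDs example circuit, are this sketch's own choices.

```python
# Tseytin clauses for one 2-input NAND gate with inputs v1, v2 and output v3:
# exactly the three clauses given above, in signed-literal form.
def nand_clauses(v1, v2, v3):
    return [[v1, v3], [v2, v3], [-v1, -v2, -v3]]

# Example circuit computing (a OR b) out of NANDs:
#   n1 = NAND(a, a) = NOT a,  n2 = NAND(b, b) = NOT b,  out = NAND(n1, n2).
a, b, n1, n2, out = 1, 2, 3, 4, 5
cnf = (nand_clauses(a, a, n1) + nand_clauses(b, b, n2)
       + nand_clauses(n1, n2, out) + [[out]])   # force the output true
```

A satisfying assignment of `cnf` exists exactly for inputs with a OR b true, and restricting such an assignment to a and b yields inputs that make the circuit output 1.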
|
https://en.wikipedia.org/wiki/Circuit_satisfiability
|
In computational complexity theory, Karp's 21 NP-complete problems are a set of computational problems which are NP-complete. In his 1972 paper, "Reducibility Among Combinatorial Problems",[1] Richard Karp used Stephen Cook's 1971 theorem that the Boolean satisfiability problem is NP-complete[2] (also called the Cook–Levin theorem) to show that there is a polynomial-time many-one reduction from the Boolean satisfiability problem to each of 21 combinatorial and graph-theoretical computational problems, thereby showing that they are all NP-complete. This was one of the first demonstrations that many natural computational problems occurring throughout computer science are computationally intractable, and it drove interest in the study of NP-completeness and the P versus NP problem.
Karp's 21 problems are shown below, many with their original names. The nesting indicates the direction of the reductions used. For example, Knapsack was shown to be NP-complete by reducing Exact cover to Knapsack.
As time went on it was discovered that many of the problems can be solved efficiently if restricted to special cases, or can be solved within any fixed percentage of the optimal result. However, David Zuckerman showed in 1996 that every one of these 21 problems has a constrained optimization version that is impossible to approximate within any constant factor unless P = NP, by showing that Karp's approach to reduction generalizes to a specific type of approximability reduction.[3] However, these may be different from the standard optimization versions of the problems, which may have approximation algorithms (as in the case of maximum cut).
|
https://en.wikipedia.org/wiki/Karp%27s_21_NP-complete_problems
|
In logic, specifically in deductive reasoning, an argument is valid if and only if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false.[1] It is not required for a valid argument to have premises that are actually true,[2] but to have premises that, if they were true, would guarantee the truth of the argument's conclusion. Valid arguments must be clearly expressed by means of sentences called well-formed formulas (also called wffs or simply formulas).
The validity of an argument can be tested, proved or disproved, and depends on its logical form.[3]
In logic, an argument is a set of related statements expressing the premises (which may consist of non-empirical evidence, empirical evidence, or some axiomatic truths) and a necessary conclusion based on the relationship of the premises.
An argument is valid if and only if it would be contradictory for the conclusion to be false if all of the premises are true.[3] Validity does not require the truth of the premises; instead it merely necessitates that the conclusion follows from the premises without violating the correctness of the logical form. If the premises of a valid argument are also proven true, the argument is said to be sound.[3]
The corresponding conditional of a valid argument is a logical truth, and the negation of its corresponding conditional is a contradiction. The conclusion is a necessary consequence of its premises.
An argument that is not valid is said to be "invalid".
An example of a valid (and sound) argument is given by the following well-known syllogism:
What makes this a valid argument is not that it has true premises and a true conclusion. Validity concerns the relationship between the two premises and the necessity of the conclusion. There needs to be a relationship established between the premises, i.e., a middle term between them; with two unrelated premises there is no argument. Notice that some of the terms repeat: 'men' is a variation of 'man' in premises one and two, and 'Socrates' and the term 'mortal' repeat in the conclusion. The argument would be just as valid if both premises and conclusion were false. The following argument is of the same logical form but with false premises and a false conclusion, and it is equally valid:
No matter how the universe might be constructed, it could never be the case that these arguments should turn out to have simultaneously true premises but a false conclusion. The above arguments may be contrasted with the following invalid one:
In this case, the conclusion contradicts the deductive logic of the preceding premises, rather than deriving from it. Therefore, the argument is logically 'invalid', even though the conclusion could be considered 'true' in general terms. The premise 'All men are immortal' would likewise be deemed false outside of the framework of classical logic. However, within that system 'true' and 'false' essentially function more like mathematical states such as binary 1s and 0s than the philosophical concepts normally associated with those terms. Formal arguments that are invalid are often associated with at least one fallacy which should be verifiable.
A standard view is that whether an argument is valid is a matter of the argument's logical form. Many techniques are employed by logicians to represent an argument's logical form. A simple example, applied to two of the above illustrations, is the following: Let the letters 'P', 'Q', and 'S' stand, respectively, for the set of men, the set of mortals, and Socrates. Using these symbols, the first argument may be abbreviated as:
Similarly, the third argument becomes:
An argument is termed formally valid if it has structural self-consistency, i.e. if, whenever the operands between premises are all true, the derived conclusion is always also true. In the third example, the initial premises cannot logically result in the conclusion, and the argument is therefore categorized as invalid.
A formula of a formal language is a valid formula if and only if it is true under every possible interpretation of the language. In propositional logic, such formulas are tautologies.
A statement can be called valid, i.e. a logical truth, in some systems of logic, such as modal logic, if the statement is true in all interpretations. In Aristotelian logic statements are not valid per se; validity refers to entire arguments. The same is true in propositional logic (statements can be true or false but not called valid or invalid).
Validity of deduction is not affected by the truth of the premise or the truth of the conclusion. The following deduction is perfectly valid:
The problem with the argument is that it is not sound. In order for a deductive argument to be sound, the argument must be valid and all the premises must be true.[3]
Model theory analyzes formulae with respect to particular classes of interpretation in suitable mathematical structures. On this reading, a formula is valid if all such interpretations make it true. An inference is valid if all interpretations that validate the premises validate the conclusion. This is known as semantic validity.[4]
In truth-preserving validity, the interpretation under which all variables are assigned a truth value of 'true' produces a truth value of 'true'.
In false-preserving validity, the interpretation under which all variables are assigned a truth value of 'false' produces a truth value of 'false'.[5]
|
https://en.wikipedia.org/wiki/Validity_(logic)
|
In artificial intelligence and operations research, constraint satisfaction is the process of finding a solution through a set of constraints that impose conditions that the variables must satisfy.[1] A solution is therefore an assignment of values to the variables that satisfies all constraints—that is, a point in the feasible region.
The techniques used in constraint satisfaction depend on the kind of constraints being considered. Often used are constraints on a finite domain, to the point that constraint satisfaction problems are typically identified with problems based on constraints on a finite domain. Such problems are usually solved via search, in particular a form of backtracking or local search. Constraint propagation is another family of methods used on such problems; most of them are incomplete in general, that is, they may solve the problem or prove it unsatisfiable, but not always. Constraint propagation methods are also used in conjunction with search to make a given problem simpler to solve. Other considered kinds of constraints are on real or rational numbers; solving problems on these constraints is done via variable elimination or the simplex algorithm.
Constraint satisfaction as a general problem originated in the field of artificial intelligence in the 1970s (see for example (Laurière 1978)). However, when the constraints are expressed as multivariate linear equations defining (in)equalities, the field goes back to Joseph Fourier in the 19th century: George Dantzig's invention of the simplex algorithm for linear programming (a special case of mathematical optimization) in 1946 allowed determining feasible solutions to problems containing hundreds of variables.
During the 1980s and 1990s, embedding of constraints into a programming language was developed. The first language devised expressly with intrinsic support for constraint programming was Prolog. Since then, constraint-programming libraries have become available in other languages, such as C++ or Java (e.g., Choco for Java[2]).
As originally defined in artificial intelligence, constraints enumerate the possible values a set of variables may take in a given world. A possible world is a total assignment of values to variables representing a way the world (real or imaginary) could be.[3] Informally, a finite domain is a finite set of arbitrary elements. A constraint satisfaction problem on such a domain contains a set of variables whose values can only be taken from the domain, and a set of constraints, each constraint specifying the allowed values for a group of variables. A solution to this problem is an evaluation of the variables that satisfies all constraints. In other words, a solution is a way of assigning a value to each variable such that all constraints are satisfied by these values.
In some circumstances, there may exist additional requirements: one may be interested not only in the solution (and in the fastest or most computationally efficient way to reach it) but in how it was reached; e.g. one may want the "simplest" solution ("simplest" in a logical, non-computational sense that has to be precisely defined). This is often the case in logic games such as Sudoku.
In practice, constraints are often expressed in compact form, rather than enumerating all the values of the variables that would satisfy the constraint. One of the most-used constraints is the (obvious) one establishing that the values of the affected variables must be all different.
Problems that can be expressed as constraint satisfaction problems are the eight queens puzzle, the Sudoku solving problem and many other logic puzzles, the Boolean satisfiability problem, scheduling problems, bounded-error estimation problems and various problems on graphs such as the graph coloring problem.
While usually not included in the above definition of a constraint satisfaction problem, arithmetic equations and inequalities bound the values of the variables they contain and can therefore be considered a form of constraints. Their domain is the set of numbers (either integer, rational, or real), which is infinite; therefore, the relations of these constraints may be infinite as well. For example, X = Y + 1 has an infinite number of pairs of satisfying values. Arithmetic equations and inequalities are often not considered within the definition of a "constraint satisfaction problem", which is limited to finite domains. They are, however, often used in constraint programming.
It can be shown that the arithmetic inequalities or equations present in some types of finite logic puzzles such as Futoshiki or Kakuro (also known as Cross Sums) can be dealt with as non-arithmetic constraints (see Pattern-Based Constraint Satisfaction and Logic Puzzles[4]).
Constraint satisfaction problems on finite domains are typically solved using a form of search. The most-used techniques are variants of backtracking, constraint propagation, and local search. These techniques are used on problems with nonlinear constraints.
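A minimal backtracking sketch over a finite domain follows; the problem encoding (a dict of domains plus predicate constraints) is illustrative and not any particular library's API.

```python
# Chronological backtracking for a finite-domain CSP: extend a partial
# assignment one variable at a time, pruning when a constraint is violated.
def backtrack(domains, constraints, assignment=None):
    """domains: {var: list of values}; constraints: predicates on the partial
    assignment that return False only when it is definitely inconsistent."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return dict(assignment)
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = backtrack(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# The widely used all-different constraint, checked on whatever is assigned:
all_different = lambda asg: len(set(asg.values())) == len(asg)

solution = backtrack({"a": [1, 2], "b": [1, 2], "c": [2, 3]}, [all_different])
```

Checking constraints on partial assignments is what lets the search prune a branch as soon as it becomes inconsistent, rather than only at the leaves.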
Variable elimination and the simplex algorithm are used for solving linear and polynomial equations and inequalities, and problems containing variables with infinite domain. These are typically solved as optimization problems in which the optimized function is the number of violated constraints.
Solving a constraint satisfaction problem on a finite domain is an NP-complete problem with respect to the domain size. Research has shown a number of tractable subcases, some limiting the allowed constraint relations, some requiring the scopes of constraints to form a tree, possibly in a reformulated version of the problem. Research has also established relationships of the constraint satisfaction problem with problems in other areas such as finite model theory.
Constraint programming is the use of constraints as a programming language to encode and solve problems. This is often done by embedding constraints into a programming language, which is called the host language. Constraint programming originated from a formalization of equalities of terms in Prolog II, leading to a general framework for embedding constraints into a logic programming language. The most common host languages are Prolog, C++, and Java, but other languages have been used as well.
A constraint logic program is a logic program that contains constraints in the bodies of clauses. As an example, the clause A(X) :- X>0, B(X) is a clause containing the constraint X>0 in the body. Constraints can also be present in the goal. The constraints in the goal and in the clauses used to prove the goal are accumulated into a set called the constraint store. This set contains the constraints the interpreter has assumed satisfiable in order to proceed in the evaluation. As a result, if this set is detected to be unsatisfiable, the interpreter backtracks. Equations of terms, as used in logic programming, are considered a particular form of constraints, which can be simplified using unification. As a result, the constraint store can be considered an extension of the concept of substitution that is used in regular logic programming. The most common kinds of constraints used in constraint logic programming are constraints over integers/rational/real numbers and constraints over finite domains.
Concurrent constraint logic programming languages have also been developed. They significantly differ from non-concurrent constraint logic programming in that they are aimed at programming concurrent processes that may not terminate. Constraint handling rules can be seen as a form of concurrent constraint logic programming, but are also sometimes used within a non-concurrent constraint logic programming language. They allow for rewriting constraints or inferring new ones based on the truth of conditions.
Constraint satisfaction toolkits are software libraries for imperative programming languages that are used to encode and solve a constraint satisfaction problem.
Constraint toolkits are a way of embedding constraints into an imperative programming language. However, they are only used as external libraries for encoding and solving problems. An approach in which constraints are integrated into an imperative programming language is taken in the Kaleidoscope programming language.
Constraints have also been embedded into functional programming languages.
|
https://en.wikipedia.org/wiki/Constraint_satisfaction
|
Lutz's resource-bounded measure is a generalisation of Lebesgue measure to complexity classes. It was originally developed by Jack Lutz. Just as Lebesgue measure gives a method to quantify the size of subsets of the Euclidean space R^n, resource-bounded measure gives a method to classify the size of subsets of complexity classes.
For instance, computer scientists generally believe that the complexity class P (the set of all decision problems solvable in polynomial time) is not equal to the complexity class NP (the set of all decision problems checkable, but not necessarily solvable, in polynomial time). Since P is a subset of NP, this would mean that NP contains more problems than P. A stronger hypothesis than "P is not NP" is the statement "NP does not have p-measure 0". Here, p-measure is a generalization of Lebesgue measure to subsets of the complexity class E, in which P is contained. P is known to have p-measure 0, and so the hypothesis "NP does not have p-measure 0" would imply not only that NP and P are unequal, but that NP is, in a measure-theoretic sense, "much bigger than P".
{0,1}^∞ is the set of all infinite binary sequences. We can view a real number in the unit interval as an infinite binary sequence, by considering its binary expansion. We may also view a language (a set of binary strings) as an infinite binary sequence, by setting the nth bit of the sequence to 1 if and only if the nth binary string (in lexicographical order) is contained in the language. Thus, sets of real numbers in the unit interval and complexity classes (which are sets of languages) may both be viewed as sets of infinite binary sequences, and thus the techniques of measure theory used to measure the size of sets of real numbers may be applied to measure complexity classes. However, since each computable complexity class contains only a countable number of elements (because the number of computable languages is countable), each complexity class has Lebesgue measure 0. Thus, to do measure theory inside of complexity classes, we must define an alternative measure that works meaningfully on countable sets of infinite sequences. For this measure to be meaningful, it should reflect something about the underlying definition of each complexity class; namely, that they are defined by computational problems that can be solved within a given resource bound.
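The string-indexing convention just described can be made concrete. `nth_string` below uses the shortlex enumeration ("", "0", "1", "00", …), which is an assumption about exactly which lexicographical order is meant, and the languages are modeled as membership predicates for the sake of the sketch.

```python
# Characteristic sequence of a language: bit n is 1 iff the n-th binary
# string (in shortlex order) belongs to the language.
def nth_string(n):
    """0 -> "", 1 -> "0", 2 -> "1", 3 -> "00", ...: write n+1 in binary
    and strip the leading 1."""
    return bin(n + 1)[3:]

def characteristic_prefix(language, length):
    """First `length` bits of the language's characteristic sequence."""
    return [1 if language(nth_string(i)) else 0 for i in range(length)]

# Example language: strings containing an even number of 1s.
even_ones = lambda s: s.count("1") % 2 == 0
prefix = characteristic_prefix(even_ones, 7)
```

Under this encoding every language corresponds to one point of {0,1}^∞, which is what lets measure-theoretic notions apply to classes of languages.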
The foundation of resource-bounded measure is Ville's formulation of martingales. A martingale is a function d : {0,1}* → [0,∞) such that, for all finite strings w, d(w) = (d(w0) + d(w1))/2.
(This is Ville's original definition of a martingale, later extended by Joseph Leo Doob.) A martingale d is said to succeed on a sequence S ∈ {0,1}^∞ if lim sup_{n→∞} d(S↾n) = ∞, where S↾n is the first n bits of S. A martingale succeeds on a set of sequences X ⊆ {0,1}^∞ if it succeeds on every sequence in X.
Intuitively, a martingale is a gambler that starts with some finite amount of money (say, one dollar). It reads a sequence of bits indefinitely. After reading the finite prefix w ∈ {0,1}*, it bets some of its current money that the next bit will be a 0, and the remainder of its money that the next bit will be a 1. It doubles whatever money was placed on the bit that appears next, and it loses the money placed on the bit that did not appear. It must bet all of its money, but it may "bet nothing" by placing half of its money on each bit. For a martingale d, d(w) represents the amount of money d has after reading the string w. Although the definition of a martingale has the martingale calculating how much money it will have, rather than calculating what bets to place, because of the constrained nature of the game, knowledge of the values d(w), d(w0), and d(w1) suffices to calculate the bets that d placed on 0 and 1 after seeing the string w. The fact that the martingale is a function that takes as input the string seen so far means that the bets placed are solely a function of the bits already read; no other information may affect the bets (other information being the so-called filtration in the generalized theory of martingales).
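Because a winning stake is doubled and all money must be bet, the standard fairness condition d(w) = (d(w0) + d(w1))/2 lets the stakes be read off from the values, as the paragraph above notes. A small sketch (the example martingale is this sketch's own):

```python
# Recover the stakes a martingale places from its values: the amount bet on
# bit b after seeing w is d(wb)/2, since a winning stake is doubled.
def bets(d, w):
    on0, on1 = d(w + "0") / 2, d(w + "1") / 2
    assert abs((on0 + on1) - d(w)) < 1e-9, "fairness condition violated"
    return on0, on1

# Example martingale that stakes everything on the next bit being 1:
# it succeeds on (only) the all-ones sequence 111...
def all_on_ones(w):
    return float(2 ** len(w)) if w == "1" * len(w) else 0.0
```

Starting from one dollar, `all_on_ones` doubles its capital on every 1 and is ruined by the first 0, illustrating how success on a sequence corresponds to unbounded winnings.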
The key result relating measure to martingales is Ville's observation that a set X ⊆ {0,1}^∞ has Lebesgue measure 0 if and only if there is a martingale that succeeds on X. Thus, we can define a measure 0 set to be one for which there exists a martingale that succeeds on all elements of the set.
To extend this type of measure to complexity classes, Lutz considered restricting the computational power of the martingale. For instance, if instead of allowing any martingale, we require the martingale to be polynomial-time computable, then we obtain a definition of p-measure: a set of sequences has p-measure 0 if there is a polynomial-time computable martingale that succeeds on the set. We define a set to have p-measure 1 if its complement has p-measure 0. For example, proving the above-mentioned conjecture, that NP does not have p-measure 0, amounts to proving that no polynomial-time martingale succeeds on all of NP.
A problem is almost complete for a complexity class C if it is in C and "many" other problems in C reduce to it. More specifically, the subset of problems of C which reduce to the problem is a measure one set, in terms of the resource-bounded measure. This is a weaker requirement than the problem being complete for the class.
|
https://en.wikipedia.org/wiki/Almost_complete
|
In computational complexity theory, a gadget is a subunit of a problem instance that simulates the behavior of one of the fundamental units of a different computational problem. Gadgets are typically used to construct reductions from one computational problem to another, as part of proofs of NP-completeness or other types of computational hardness. The component design technique is a method for constructing reductions by using gadgets.[1]
Szabó (2009) traces the use of gadgets to a 1954 paper in graph theory by W. T. Tutte, in which Tutte provided gadgets for reducing the problem of finding a subgraph with given degree constraints to a perfect matching problem. However, the "gadget" terminology has a later origin, and does not appear in Tutte's paper.[2][3]
Many NP-completeness proofs are based on many-one reductions from 3-satisfiability, the problem of finding a satisfying assignment to a Boolean formula that is a conjunction (Boolean AND) of clauses, each clause being the disjunction (Boolean OR) of three terms, and each term being a Boolean variable or its negation. A reduction from this problem to a hard problem on undirected graphs, such as the Hamiltonian cycle problem or graph coloring, would typically be based on gadgets in the form of subgraphs that simulate the behavior of the variables and clauses of a given 3-satisfiability instance. These gadgets would then be glued together to form a single graph, a hard instance for the graph problem in consideration.[4]
For instance, the problem of testing 3-colorability of graphs may be proven NP-complete by a reduction from 3-satisfiability of this type. The reduction uses two special graph vertices, labeled as "Ground" and "False", that are not part of any gadget. As shown in the figure, the gadget for a variable x consists of two vertices connected in a triangle with the ground vertex; one of the two vertices of the gadget is labeled with x and the other is labeled with the negation of x. The gadget for a clause (t0 ∨ t1 ∨ t2) consists of six vertices, connected to each other, to the vertices representing the terms t0, t1, and t2, and to the ground and false vertices by the edges shown. Any 3-CNF formula may be converted into a graph by constructing a separate gadget for each of its variables and clauses and connecting them as shown.[5]
In any 3-coloring of the resulting graph, one may designate the three colors as being true, false, or ground, where false and ground are the colors given to the false and ground vertices (necessarily different, as these vertices are made adjacent by the construction) and true is the remaining color not used by either of these vertices. Within a variable gadget, only two colorings are possible: the vertex labeled with the variable must be colored either true or false, and the vertex labeled with the variable's negation must correspondingly be colored either false or true. In this way, valid assignments of colors to the variable gadgets correspond one-for-one with truth assignments to the variables: the behavior of the gadget with respect to coloring simulates the behavior of a variable with respect to truth assignment.
Each clause assignment has a valid 3-coloring if at least one of its adjacent term vertices is colored true, and cannot be 3-colored if all of its adjacent term vertices are colored false. In this way, the clause gadget can be colored if and only if the corresponding truth assignment satisfies the clause, so again the behavior of the gadget simulates the behavior of a clause.
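The variable-gadget portion of this construction can be sketched in a few lines of code. The snippet below is a hypothetical illustration, not code from the cited reduction: the vertex names are our own, and the clause gadget described by the figure is omitted. It brute-forces all proper 3-colorings of a single variable gadget and checks that the two labeled vertices always take the two non-ground colors, in one of exactly two ways.

```python
from itertools import product

def variable_gadget_colorings():
    """Brute-force all proper 3-colorings of one variable gadget:
    a triangle {ground, x, not_x}, plus a ground-false edge so the
    two special vertices receive distinct colors."""
    vertices = ["ground", "false", "x", "not_x"]
    edges = [("ground", "false"),                              # special vertices differ
             ("ground", "x"), ("ground", "not_x"), ("x", "not_x")]  # the triangle
    colorings = []
    for colors in product(range(3), repeat=len(vertices)):
        assign = dict(zip(vertices, colors))
        if all(assign[u] != assign[v] for u, v in edges):
            colorings.append(assign)
    return colorings

# In every proper coloring, x and not_x take the two colors other than
# ground's color -- the "true" and "false" colors -- in one of two ways,
# mirroring the two truth values of the variable.
for a in variable_gadget_colorings():
    assert a["x"] != a["ground"] and a["not_x"] != a["ground"]
    assert {a["x"], a["not_x"]} == {0, 1, 2} - {a["ground"]}
```

With 3 choices for the ground color, 2 for the false color, and 2 ways to split the remaining colors between x and its negation, there are exactly 12 proper colorings, all of which respect the true/false correspondence described above.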
Agrawal et al. (1997) considered what they called "a radically simple form of gadget reduction", in which each bit describing part of a gadget may depend only on a bounded number of bits of the input, and used these reductions to prove an analogue of the Berman–Hartmanis conjecture stating that all NP-complete sets are polynomial-time isomorphic.[6]
The standard definition of NP-completeness involves polynomial time many-one reductions: a problem in NP is by definition NP-complete if every other problem in NP has a reduction of this type to it, and the standard way of proving that a problem in NP is NP-complete is to find a polynomial time many-one reduction from a known NP-complete problem to it. But (in what Agrawal et al. called "a curious, often observed fact") all sets known to be NP-complete at that time could be proved complete using the stronger notion of AC0 many-one reductions: that is, reductions that can be computed by circuits of polynomial size, constant depth, and unbounded fan-in. Agrawal et al. proved that every set that is NP-complete under AC0 reductions is complete under an even more restricted type of reduction, NC0 many-one reductions, using circuits of polynomial size, constant depth, and bounded fan-in. In an NC0 reduction, each output bit of the reduction can depend only on a constant number of input bits.[6]
The Berman–Hartmanis conjecture is an unsolved problem in computational complexity theory stating that all NP-complete problem classes are polynomial-time isomorphic. That is, if A and B are two NP-complete problem classes, there is a polynomial-time one-to-one reduction from A to B whose inverse is also computable in polynomial time. Agrawal et al. used their equivalence between AC0 reductions and NC0 reductions to show that all sets complete for NP under AC0 reductions are AC0-isomorphic.[6]
One application of gadgets is in proving hardness of approximation results, by reducing a problem that is known to be hard to approximate to another problem whose hardness is to be proven. In this application, one typically has a family of instances of the first problem in which there is a gap in the objective function values, and in which it is hard to determine whether a given instance has an objective function value on the low side or on the high side of the gap. The reductions used in these proofs, and the gadgets used in the reductions, must preserve the existence of this gap, and the strength of the inapproximability result derived from the reduction depends on how well the gap is preserved.
Trevisan et al. (2000) formalize the problem of finding gap-preserving gadgets, for families of constraint satisfaction problems in which the goal is to maximize the number of satisfied constraints.[7] They give as an example a reduction from 3-satisfiability to 2-satisfiability by Garey, Johnson & Stockmeyer (1976), in which the gadget representing a 3-SAT clause consists of ten 2-SAT clauses: a truth assignment that satisfies the 3-SAT clause can be extended to satisfy exactly seven of the gadget's clauses, while a truth assignment that fails to satisfy the 3-SAT clause satisfies at most six of them.[8] Using this gadget, and the fact that (unless P = NP) there is no polynomial-time approximation scheme for maximizing the number of 3-SAT clauses that a truth assignment satisfies, it can be shown that there is similarly no approximation scheme for MAX 2-SAT.
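The gap property of this gadget can be checked exhaustively. The clause list below is one commonly cited form of the Garey–Johnson–Stockmeyer gadget for the clause (x ∨ y ∨ z), with an auxiliary variable w; the exact list is reconstructed from the literature and should be treated as an assumption rather than a quotation of the original paper.

```python
from itertools import product

# Ten 2-SAT clauses (unit clauses written with one literal); a literal
# is a (variable, polarity) pair.
GADGET = [
    [("x", True)], [("y", True)], [("z", True)], [("w", True)],
    [("x", False), ("y", False)],
    [("y", False), ("z", False)],
    [("x", False), ("z", False)],
    [("x", True), ("w", False)],
    [("y", True), ("w", False)],
    [("z", True), ("w", False)],
]

def satisfied(clause, assign):
    return any(assign[v] == pol for v, pol in clause)

def best_gadget_score(x, y, z):
    """Max number of gadget clauses satisfiable over both settings of w."""
    return max(sum(satisfied(c, {"x": x, "y": y, "z": z, "w": w})
                   for c in GADGET)
               for w in (False, True))

# Gap property: exactly 7 clauses satisfiable when the 3-SAT clause
# (x or y or z) holds, at most 6 when it does not.
for x, y, z in product((False, True), repeat=3):
    assert best_gadget_score(x, y, z) == (7 if (x or y or z) else 6)
```

Brute-forcing all eight truth assignments confirms the 7-versus-6 gap on which the MAX 2-SAT inapproximability argument rests.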
Trevisan et al. show that, in many cases of the constraint satisfaction problems they study, the gadgets leading to the strongest possible inapproximability results may be constructed automatically, as the solution to a linear programming problem. The same gadget-based reductions may also be used in the other direction, to transfer approximation algorithms from easier problems to harder problems. For instance, Trevisan et al. provide an optimal gadget for reducing 3-SAT to a weighted variant of 2-SAT (consisting of seven weighted 2-SAT clauses) that is stronger than the one by Garey, Johnson & Stockmeyer (1976); using it, together with known semidefinite programming approximation algorithms for MAX 2-SAT, they provide an approximation algorithm for MAX 3-SAT with approximation ratio 0.801, better than previously known algorithms.
https://en.wikipedia.org/wiki/Gadget_(computer_science)
In computational complexity, problems that are in the complexity class NP but are neither in the class P nor NP-complete are called NP-intermediate, and the class of such problems is called NPI. Ladner's theorem, shown in 1975 by Richard E. Ladner,[1] is a result asserting that, if P ≠ NP, then NPI is not empty; that is, NP contains problems that are neither in P nor NP-complete. Since it is also true that if NPI problems exist, then P ≠ NP, it follows that P = NP if and only if NPI is empty.
Under the assumption that P ≠ NP, Ladner explicitly constructs a problem in NPI, although this problem is artificial and otherwise uninteresting. It is an open question whether any "natural" problem has the same property: Schaefer's dichotomy theorem provides conditions under which classes of constrained Boolean satisfiability problems cannot be in NPI.[2][3] Some problems that are considered good candidates for being NP-intermediate are the graph isomorphism problem, and decision versions of factoring and the discrete logarithm.
Under the exponential time hypothesis, there exist natural problems that require quasi-polynomial time, and can be solved in that time, including finding a large disjoint set of unit disks from a given set of disks in the hyperbolic plane,[4] and finding a graph with few vertices that is not an induced subgraph of a given graph.[5] The exponential time hypothesis also implies that no quasi-polynomial-time problem can be NP-complete, so under this assumption these problems must be NP-intermediate.
https://en.wikipedia.org/wiki/Ladner%27s_theorem
This is a list of some of the more commonly known problems that are NP-complete when expressed as decision problems. As there are thousands of such problems known, this list is in no way comprehensive. Many problems of this type can be found in Garey & Johnson (1979).
Graphs occur frequently in everyday applications. Examples include biological or social networks, which contain hundreds, thousands, or even billions of nodes in some cases (e.g. Facebook or LinkedIn).
General
Specific problems
https://en.wikipedia.org/wiki/List_of_NP-complete_problems
In computational complexity theory, a computational problem H is called NP-hard if, for every problem L which can be solved in non-deterministic polynomial time, there is a polynomial-time reduction from L to H. That is, assuming a solution for H takes 1 unit time, H's solution can be used to solve L in polynomial time.[1][2] As a consequence, finding a polynomial-time algorithm to solve a single NP-hard problem would give polynomial-time algorithms for all the problems in the complexity class NP. As it is suspected, but unproven, that P ≠ NP, it is unlikely that any polynomial-time algorithms for NP-hard problems exist.[3][4]
A simple example of an NP-hard problem is the subset sum problem.
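As a sketch of why subset sum is easy to verify but apparently hard to solve, the brute-force decider below (in the zero-target form used later in this article) tries all 2^n − 1 non-empty subsets, while checking any single candidate subset takes only linear time, which is the NP verification step.

```python
from itertools import combinations

def zero_subset(nums):
    """Decide: is there a non-empty subset of nums summing to zero?
    Brute force over all 2^n - 1 non-empty subsets (exponential time);
    verifying any one proposed subset is a linear-time sum check."""
    for r in range(1, len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == 0:
                return True
    return False

assert zero_subset([3, -2, -1]) is True   # {3, -2, -1} sums to 0
assert zero_subset([1, 2, 4]) is False    # no non-empty subset sums to 0
```

No algorithm fundamentally better than such exponential search is known for the general problem, which is consistent with (though of course no proof of) P ≠ NP.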
Informally, if H is NP-hard, then it is at least as difficult to solve as the problems in NP. However, the opposite direction is not true: some problems are undecidable, and therefore even more difficult to solve than all problems in NP, but they are probably not NP-hard (unless P = NP).[5]
A decision problem H is NP-hard when for every problem L in NP, there is a polynomial-time many-one reduction from L to H.[1]: 80
Another definition is to require that there be a polynomial-time reduction from an NP-complete problem G to H.[1]: 91 As any problem L in NP reduces in polynomial time to G, L reduces in turn to H in polynomial time, so this new definition implies the previous one. It does not restrict the class NP-hard to decision problems, and it also includes search problems and optimization problems.
If P ≠ NP, then NP-hard problems cannot be solved in polynomial time.
Some NP-hard optimization problems can be polynomial-time approximated up to some constant approximation ratio (in particular, those in APX) or even up to any approximation ratio (those in PTAS or FPTAS). There are many classes of approximability, each one enabling approximation up to a different level.[6]
All NP-complete problems are also NP-hard (see List of NP-complete problems). For example, the optimization problem of finding the least-cost cyclic route through all nodes of a weighted graph, commonly known as the travelling salesman problem, is NP-hard.[7] The subset sum problem is another example: given a set of integers, does any non-empty subset of them add up to zero? That is a decision problem and happens to be NP-complete.
There are decision problems that are NP-hard but not NP-complete, such as the halting problem. That is the problem which asks "given a program and its input, will it run forever?" This is a yes/no question and so is a decision problem. It is easy to prove that the halting problem is NP-hard but not NP-complete. For example, the Boolean satisfiability problem can be reduced to the halting problem by transforming it into the description of a Turing machine that tries all truth value assignments; when it finds one that satisfies the formula it halts, and otherwise it goes into an infinite loop. It is also easy to see that the halting problem is not in NP, since all problems in NP are decidable in a finite number of operations, but the halting problem, in general, is undecidable. There are also NP-hard problems that are neither NP-complete nor undecidable. For instance, the language of true quantified Boolean formulas is decidable in polynomial space, but not in non-deterministic polynomial time (unless NP = PSPACE).[8]
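The reduction from satisfiability to halting described above can be sketched directly. This is a simplified illustration, with the Turing machine replaced by a Python closure: the constructed procedure halts exactly when the formula is satisfiable, and spins forever otherwise.

```python
from itertools import product

def machine_for(cnf, variables):
    """Build a procedure that halts iff the CNF formula is satisfiable.
    cnf is a list of clauses; each clause is a list of (variable,
    polarity) pairs.  The procedure tries every truth assignment and
    enters an infinite loop if none satisfies the formula."""
    def run():
        for values in product((False, True), repeat=len(variables)):
            assign = dict(zip(variables, values))
            if all(any(assign[v] == p for v, p in clause) for clause in cnf):
                return "halts"       # satisfying assignment found
        while True:                  # unsatisfiable: never terminate
            pass
    return run

# (x or y) and (not x or y) is satisfiable (take y = True), so the
# constructed machine halts; for an unsatisfiable formula it would not.
sat = [[("x", True), ("y", True)], [("x", False), ("y", True)]]
assert machine_for(sat, ["x", "y"])() == "halts"
```

Deciding whether the constructed procedure halts thus decides satisfiability, which is exactly why the halting problem is NP-hard.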
NP-hard problems do not have to be elements of the complexity class NP.
As NP plays a central role in computational complexity, it is used as the basis of several classes:
NP-hard problems are often tackled with rules-based languages in areas including:
Problems that are decidable but not NP-complete are often optimization problems:
https://en.wikipedia.org/wiki/NP-hard
The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved.
Here, "quickly" means an algorithm exists that solves the task and runs in polynomial time (as opposed to, say, exponential time), meaning the task completion time is bounded above by a polynomial function of the size of the input to the algorithm. The general class of questions that some algorithm can answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if provided with an answer, it can be verified quickly. The class of questions where an answer can be verified in polynomial time is "NP", standing for "nondeterministic polynomial time".[Note 1]
An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time.
The problem has been called the most important open problem in computer science.[1] Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics, and many other fields.[2]
It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution.
Consider the following yes/no problem: given an incomplete Sudoku grid of size n2×n2{\displaystyle n^{2}\times n^{2}}, is there at least one legal solution where every row, column, and n×n{\displaystyle n\times n} square contains the integers 1 through n2{\displaystyle n^{2}}? It is straightforward to verify "yes" instances of this generalized Sudoku problem given a candidate solution. However, it is not known whether there is a polynomial-time algorithm that can correctly answer "yes" or "no" to all instances of this problem. Therefore, generalized Sudoku is in NP (quickly verifiable), but may or may not be in P (quickly solvable). (It is necessary to consider a generalized version of Sudoku, as any fixed-size Sudoku has only a finite number of possible grids. In this case the problem is in P, as the answer can be found by table lookup.)
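The "quickly verifiable" half of this claim is easy to make concrete: a polynomial-time verifier for a candidate generalized-Sudoku solution only needs to scan each row, column, and box once. The sketch below assumes the grid is given as a list of lists of integers.

```python
def verify_sudoku(grid, n):
    """Polynomial-time verifier for a candidate n^2 x n^2 Sudoku
    solution: every row, column, and n x n box must contain each of
    1..n^2 exactly once."""
    size = n * n
    want = set(range(1, size + 1))
    rows = [set(row) for row in grid]
    cols = [set(grid[r][c] for r in range(size)) for c in range(size)]
    boxes = [set(grid[br + r][bc + c] for r in range(n) for c in range(n))
             for br in range(0, size, n) for bc in range(0, size, n)]
    return all(group == want for group in rows + cols + boxes)

solution = [[1, 2, 3, 4],
            [3, 4, 1, 2],
            [2, 1, 4, 3],
            [4, 3, 2, 1]]
assert verify_sudoku(solution, 2) is True      # a valid 4x4 grid
assert verify_sudoku([[1] * 4] * 4, 2) is False  # repeated digits fail
```

The verifier runs in time quadratic in the number of cells, which is what places generalized Sudoku in NP; no comparably fast solver is known.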
The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures"[3] (and independently by Leonid Levin in 1973[4]).
Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, speculating that cracking a sufficiently complex code would require time exponential in the length of the key.[5] If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time,[6] and pointed out one of the most important consequences: that if so, then the discovery of mathematical proofs could be automated.
The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem).
In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other).
In this theory, the class P consists of all decision problems (defined below) solvable on a deterministic sequential machine in time polynomial in the size of the input; the class NP consists of all decision problems whose positive solutions are verifiable in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine.[7] Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes:
Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions.[8][9][10] Confidence that P ≠ NP has been increasing: in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, 99% of the 2019 respondents believed P ≠ NP.[10] These polls say nothing about whether P = NP is actually true; as Gasarch himself stated: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era."
To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are problems that any other NP problem is reducible to in polynomial time and whose solution is still verifiable in polynomial time. That is, any NP problem can be transformed into any NP-complete problem. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP.
NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time.
For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so any instance of any problem in NP can be transformed mechanically into a Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems are NP-complete, and no fast algorithm for any of them is known.
From the definition alone it is unintuitive that NP-complete problems exist; however, a trivial NP-complete problem can be formulated as follows: given a Turing machine M guaranteed to halt in polynomial time, does a polynomial-size input that M will accept exist?[11] It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists.
The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time.[12] This in turn gives a solution to the problem of partitioning tripartite graphs into triangles,[13] which could then be used to find solutions for the special case of SAT known as 3-SAT,[14] which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial-time solution of satisfiability, which in turn can be used to solve any other NP problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem".
Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2^p(n)) time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N×N board[15] and similar problems for other board games.[16]
The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974[17] that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least 22cn{\displaystyle 2^{2^{cn}}} for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all.
It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells whether at least one solution exists: a solution exists exactly when the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example, linear-time) P problems.[18] For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial-time solution to any of them would allow a polynomial-time solution to all other #P problems.
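A classic concrete instance of this phenomenon is perfect matchings in bipartite graphs: deciding whether one exists is in P, but counting them (equivalently, computing the permanent of the 0/1 adjacency matrix) is #P-complete. The brute-force sketch below illustrates the two questions side by side; it is exponential-time by design and only meant to show the relationship between counting and deciding.

```python
from itertools import permutations

def count_perfect_matchings(adj):
    """#P-style counting: the number of perfect matchings of a bipartite
    graph equals the permanent of its 0/1 adjacency matrix, computed
    here by brute force over all n! permutations."""
    n = len(adj)
    return sum(1 for perm in permutations(range(n))
               if all(adj[i][perm[i]] for i in range(n)))

def has_perfect_matching(adj):
    """NP-style decision: does at least one perfect matching exist?
    (Derived here from the count; in practice this decision problem is
    in P via matching algorithms, while exact counting is #P-complete.)"""
    return count_perfect_matchings(adj) > 0

complete = [[1, 1], [1, 1]]            # complete bipartite graph K_{2,2}
assert count_perfect_matchings(complete) == 2
assert has_perfect_matching(complete) is True
assert has_perfect_matching([[1, 0], [0, 0]]) is False
```

The count answers the decision question for free, but not vice versa, which is the sense in which #P problems are at least as hard as their NP counterparts.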
In 1975, Richard E. Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete.[19] Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem, and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete.[20] If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level.[21] Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time.[22]
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP[23]). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time O(exp((64n/9)^{1/3} (log n)^{2/3})) to factor an n-bit integer. The best known quantum algorithm for this problem, Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes.
All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as Cobham's thesis. It is a common assumption in complexity theory, but there are caveats.
First, it can be false in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, rendering it impractical. For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of O(n²),[25] where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on H. The constant is greater than 2↑↑(2↑↑(2↑↑(h/2))){\displaystyle 2\uparrow \uparrow (2\uparrow \uparrow (2\uparrow \uparrow (h/2)))} (using Knuth's up-arrow notation), where h is the number of vertices in H.[26]
On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms.[27]
Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms.
Cook provides a restatement of the problem in The P Versus NP Problem as "Does P = NP?"[28] According to polls,[8][29] most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3,000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH.
It is also intuitively argued that the existence of problems that are hard to solve but whose solutions are easy to verify matches real-world experience.[30]
If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in "creative leaps", no fundamental gap between solving a problem and recognizing the solution once it's found.
On the other hand, some researchers believe that it is overconfident to believe P ≠ NP and that researchers should also explore proofs of P = NP. For example, in 2002 these statements were made:[8]
The main argument in favor of P ≠ NP is the total lack of fundamental progress in the area of exhaustive search. This is, in my opinion, a very weak argument. The space of algorithms is very large and we are only at the beginning of its exploration. [...] The resolution of Fermat's Last Theorem also shows that very simple questions may be settled only by very deep theories.
Being attached to a speculation is not a good guide to research planning. One should always try both directions of every problem. Prejudice has caused famous mathematicians to fail to solve famous problems whose solution was opposite to their expectations, even though they had developed all the methods required.
When one substitutes "linear time on a multitape Turing machine" for "polynomial time" in the definitions of P and NP, one obtains the classes DLIN and NLIN.
It is known[31] that DLIN ≠ NLIN.
One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well.
A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields.
It is also very possible that a proof would not lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial-time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them.
A solution showing P = NP could upend the field of cryptography, which relies on certain problems being difficult. A constructive and efficient solution[Note 2] to an NP-complete problem such as 3-SAT would break most existing cryptosystems, including:
These would need modification or replacement with information-theoretically secure solutions that do not assume P ≠ NP.
There are also enormous benefits that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete;[35] making these problems efficiently solvable could considerably advance life sciences and biotechnology.
These changes could be insignificant compared to the revolution that efficiently solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics:[36][37]
If there really were a machine with φ(n) ∼ k⋅n (or even ∼ k⋅n²), this would have consequences of the greatest importance. Namely, it would obviously mean that in spite of the undecidability of the Entscheidungsproblem, the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine. After all, one would simply have to choose the natural number n so large that when the machine does not deliver a result, it makes no sense to think more about the problem.
Similarly, Stephen Cook (assuming not only a proof, but a practically efficient algorithm) says:[28]
... it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a proof of a reasonable length, since formal proofs can easily be recognized in polynomial time. Example problems may well include all of the CMI prize problems.
Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated; for instance, Fermat's Last Theorem took over three centuries to prove. A method guaranteed to find a proof if a "reasonable" size proof exists would essentially end this struggle.
Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof:[38]
[...] if you imagine a number M that's finite but incredibly large (like say the number 10↑↑↑↑3 discussed in my paper on "coping with finiteness"), then there's a humongous number of possible algorithms that do nM bitwise or addition or shift operations on n given bits, and it's really hard to believe that all of those algorithms fail.
My main point, however, is that I don't believe that the equality P = NP will turn out to be helpful even if it is proved, because such a proof will almost surely be nonconstructive.
A proof of P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would represent a great advance in computational complexity theory and guide future research. It would demonstrate that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place.[39]
P ≠ NP still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question.[40] These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds.[41]
Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are insufficient for answering the question, suggesting novel technical approaches are required.
As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, all insufficient to prove P ≠ NP:
These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results.
These barriers lead some computer scientists to suggest the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). An independence result could imply that either P ≠ NP and this is unprovable in (e.g.) ZFC, or that P = NP but it is unprovable in ZFC that any polynomial-time algorithms are correct.[45] However, if the problem is undecidable even with much weaker assumptions extending the Peano axioms for integer arithmetic, then nearly polynomial-time algorithms exist for all NP problems.[46] Therefore, assuming (as most complexity theorists do) that some NP problems don't have efficient algorithms, proofs of independence with those techniques are impossible. This also implies that proving independence from PA or ZFC with current techniques is no easier than proving all NP problems have efficient algorithms.
The P = NP problem can be restated as certain classes of logical statements, as a result of work in descriptive complexity.
Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P are expressible in first-order logic with the addition of a suitable least fixed-point combinator. Recursive functions can be defined with this and the order relation. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P.
Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?".[47] The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH).
No known algorithm for an NP-complete problem runs in polynomial time. However, algorithms are known for NP-complete problems that, if P = NP, run in polynomial time on accepting instances (although with enormous constants, making them impractical). These algorithms still do not qualify as polynomial time, because their running time on rejecting instances is not polynomial. The following algorithm, due to Levin (without any citation), is such an example. It correctly accepts the NP-complete language SUBSET-SUM, and runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:
This is a polynomial-time algorithm accepting an NP-complete language only if P = NP. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a semi-algorithm).
This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, the above algorithm will try at least 2^b − 1 other programs first.
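The "semi-algorithm" behavior can be illustrated with a much simpler (and non-universal) sketch: instead of enumerating programs as Levin's construction does, the toy below dovetails over candidate certificates for SUBSET-SUM, halting with "yes" on members and running forever on non-members. All names are illustrative, not from any source.

```python
def verify(nums, target, mask):
    # Polynomial-time certificate check: does the nonempty subset
    # selected by the bits of `mask` sum to `target`?
    return mask != 0 and sum(
        x for i, x in enumerate(nums) if (mask >> i) & 1
    ) == target

def accept_subset_sum(nums, target):
    # Semi-algorithm: halts with True iff some nonempty subset of
    # `nums` sums to `target`; otherwise it loops forever.
    mask = 1
    while True:
        if verify(nums, target, mask):
            return True
        mask += 1
```

Unlike Levin's universal search, this brute force is exponential even on yes-instances; it only shows the accept/run-forever asymmetry.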
A decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that produces the correct answer for any input string of length n in at most cn^k steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is the set of languages that can be decided by a deterministic polynomial-time Turing machine. That is,
where
and a deterministic polynomial-time Turing machine is a deterministic Turing machineMthat satisfies two conditions:
NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach uses the concepts of certificate and verifier. Formally, NP is the set of languages with a finite alphabet and a verifier that runs in polynomial time. The following defines a "verifier":
Let L be a language over a finite alphabet, Σ.
L ∈ NP if, and only if, there exists a binary relation R⊂Σ∗×Σ∗{\displaystyle R\subset \Sigma ^{*}\times \Sigma ^{*}} and a positive integer k such that the following two conditions are satisfied:
A Turing machine that decides LR is called a verifier for L, and a y such that (x, y) ∈ R is called a certificate of membership of x in L.
Not all verifiers must be polynomial-time. However, forLto be in NP, there must be a verifier that runs in polynomial time.
Let
Whether a value of x is composite is equivalent to whether x is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations).
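As a sketch (function name hypothetical), the polynomial-time verifier behind COMPOSITE ∈ NP simply checks a nontrivial divisor, which serves as the certificate of membership:

```python
def verify_composite(x, d):
    # A nontrivial divisor d of x is a certificate that x is composite.
    # Trial division by one known d takes time polynomial in the bit
    # length of x, even though *finding* d may be hard.
    return 1 < d < x and x % d == 0
```

Note the asymmetry: verification given a certificate is easy, while searching for the certificate is the hard part.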
COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test.[48]
There are many equivalent ways of describing NP-completeness.
Let L be a language over a finite alphabet Σ.
Lis NP-complete if, and only if, the following two conditions are satisfied:
Alternatively, ifL∈ NP, and there is another NP-complete problem that can be polynomial-time reduced toL, thenLis NP-complete. This is a common way of proving some new problem is NP-complete.
While the P versus NP problem is generally considered unsolved,[49] many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger compiled a list of 116 purported proofs from 1986 to 2016, of which 61 were proofs of P = NP, 49 were proofs of P ≠ NP, and 6 proved other results, e.g. that the problem is undecidable.[50] Some attempts at resolving P versus NP have received brief media attention,[51] though these attempts have been refuted.
The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem.[52]
In the sixth episode of The Simpsons' seventh season, "Treehouse of Horror VI", the equation P = NP is seen shortly after Homer accidentally stumbles into the "third dimension".[53][54]
In "Solve for X", the second episode of season 2 of Elementary, Sherlock and Watson investigate the murders of mathematicians who were attempting to solve P versus NP.[55][56]
|
https://en.wikipedia.org/wiki/P_%3D_NP_problem
|
In computational complexity, strong NP-completeness is a property of computational problems that is a special case of NP-completeness. A general computational problem may have numerical parameters. For example, the input to the bin packing problem is a list of objects of specific sizes and a size for the bins that must contain the objects—these object sizes and bin size are numerical parameters.
A problem is said to be strongly NP-complete (NP-complete in the strong sense) if it remains NP-complete even when all of its numerical parameters are bounded by a polynomial in the length of the input.[1] A problem is said to be strongly NP-hard if a strongly NP-complete problem has a polynomial reduction to it; in combinatorial optimization, particularly, the phrase "strongly NP-hard" is reserved for problems that are not known to have a polynomial reduction to another strongly NP-complete problem.
Normally, numerical parameters to a problem are given in positional notation, so a problem of input size n might contain parameters whose size is exponential in n. If we redefine the problem to have the parameters given in unary notation, then the parameters must be bounded by the input size. Thus strong NP-completeness or NP-hardness may also be defined as the NP-completeness or NP-hardness of this unary version of the problem.
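The gap between the two encodings can be seen directly: a parameter of value n occupies about log₂ n bits positionally but n symbols in unary (the helper below is purely illustrative):

```python
def encoding_lengths(n):
    # Returns (positional length in bits, unary length) for a
    # numerical parameter of value n.
    return n.bit_length(), n

# A parameter of value 1024 needs only 11 bits positionally but 1024
# unary symbols, so bounding parameters by the (unary) input size is a
# genuine restriction on how large the numbers can be.
```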
For example, bin packing is strongly NP-complete while the 0-1 Knapsack problem is only weakly NP-complete. Thus the version of bin packing where the object and bin sizes are integers bounded by a polynomial remains NP-complete, while the corresponding version of the Knapsack problem can be solved in pseudo-polynomial time by dynamic programming.
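A minimal sketch of the pseudo-polynomial dynamic program for 0-1 Knapsack (the standard textbook recurrence, not tied to any particular source): its O(n·W) running time is polynomial in the numeric value of the capacity W, but exponential in W's bit length, which is exactly why the problem is only weakly NP-complete.

```python
def knapsack(values, weights, capacity):
    # best[c] = maximum value achievable with total weight <= c
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

If the weights were bounded by a polynomial in the input length, this same loop would run in polynomial time, matching the claim above.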
From a theoretical perspective, any strongly NP-hard optimization problem with a polynomially bounded objective function cannot have a fully polynomial-time approximation scheme (FPTAS) unless P = NP.[2][3] However, the converse fails: e.g. if P does not equal NP, knapsack with two constraints is not strongly NP-hard, but has no FPTAS even when the optimal objective is polynomially bounded.[4]
Some strongly NP-complete problems may still be easy to solve on average, but it's more likely that difficult instances will be encountered in practice.
Assuming P ≠ NP, the following are true for computational problems on integers:[5]
|
https://en.wikipedia.org/wiki/Strongly_NP-complete
|
Travelling Salesman is a 2012 intellectual thriller film about four mathematicians who solve the P versus NP problem, one of the most challenging mathematical problems in history. The title refers to the travelling salesman problem, an optimization problem that acts like a key to solving other difficult mathematical problems. It has been proven that a quick travelling salesman algorithm, if one exists, could be converted into quick algorithms for many other difficult tasks, such as factoring large numbers. Since many cryptographic schemes rely on the difficulty of factoring integers to protect their data, a quick solution would enable access to encrypted private data like personal correspondence, bank accounts and, possibly, government secrets.
The story was written and directed by Timothy Lanzone and premiered at the International House in Philadelphia on June 16, 2012.[2] After screenings in eight countries spanning four continents, including screenings at the University of Pennsylvania and the University of Cambridge,[3] the film was released globally on September 10, 2013.
The four mathematicians are gathered and meet with a top official of the United States Department of Defense. After some discussion, the group agrees that they must be wary about whom they trust with control of their solution. The official offers them a reward of $10 million in exchange for their portion of the algorithm, swaying them by attempting to address their concerns. Only one of the four speaks out against the sale, and in doing so is forced to reveal a dark truth about his portion of the solution. Before they sign a license to the government, however, they wrestle with the ethical consequences of their discovery.
The film premiered in Philadelphia, Pennsylvania on June 16, 2012, and early reviews were favorable:
"It is a great premise that writers Andy and Timothy Lanzone use to explore the theme of scientifichubris."[4]
"Travelling Salesman’s mathematicians are all too aware of what their work will do to the world, and watching them argue how to handle the consequences offers a thriller far more cerebral than most."[4]
Mathematicians who have discussed the film praised the writer's attempt to bring a serious math problem to the big screen, although they questioned whether the world would be as dramatically affected by its solution:
"Despite our caveat that a solution to [the travelling salesman problem] might not be to die for, let alone to kill for, it would certainly be a huge change in our knowledge of the world. The implications could be unlimited. We certainly hope the movie raises awareness of computer science theory and the life importance of its subject matter."[1][5]
The film also garnered favorable reviews after the University of Cambridge screening:
"And at the heart of this story was that mathematics now underpins so much of our lives, meaning that mathematical discoveries could have a dramatic impact on the world, leading to new advances or to potential catastrophe and all the moral dilemmas that entails. Perhaps an ethics class, or at least a trip to see this movie, might become an obligatory part of all maths degrees."[6]
2012 Silicon Valley Film Festival - Best Feature Film, Best Lead Actor (Danny Barclay), Best Editing (Christopher McGlynn).[7]
2012 New York International Film Festival - Official Selection.[8]
|
https://en.wikipedia.org/wiki/Travelling_Salesman_(2012_film)
|
In mathematics, probabilistic metric spaces are a generalization of metric spaces where the distance no longer takes values in the non-negative real numbers R≥0, but in distribution functions.[1]
Let D+ be the set of all probability distribution functions F such that F(0) = 0 (F is a nondecreasing, left-continuous mapping from R into [0, 1] such that max(F) = 1).
Then given a non-empty set S and a function F: S × S → D+, where we denote F(p, q) by Fp,q for every (p, q) ∈ S × S, the ordered pair (S, F) is said to be a probabilistic metric space if:
Probabilistic metric spaces were initially introduced by Menger, who termed them statistical metrics.[3] Shortly after, Wald criticized the generalized triangle inequality and proposed an alternative one.[4] However, both authors came to the conclusion that in some respects the Wald inequality was too stringent a requirement to impose on all probabilistic metric spaces, a point partly addressed in the work of Schweizer and Sklar.[5] Later, probabilistic metric spaces were found to be well suited for use with fuzzy sets[6] and were further called fuzzy metric spaces.[7]
A probability metric D between two random variables X and Y may be defined, for example, as D(X,Y)=∫−∞∞∫−∞∞|x−y|F(x,y)dxdy{\displaystyle D(X,Y)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|x-y|F(x,y)\,dx\,dy} where F(x, y) denotes the joint probability density function of the random variables X and Y. If X and Y are independent from each other, then the equation above transforms into D(X,Y)=∫−∞∞∫−∞∞|x−y|f(x)g(y)dxdy{\displaystyle D(X,Y)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|x-y|f(x)g(y)\,dx\,dy} where f(x) and g(y) are the probability density functions of X and Y respectively.
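For independent X and Y the double integral is just E|X − Y|, which can be checked numerically; for two independent uniform variables on [0, 1] the exact value is 1/3. The sketch below uses Monte Carlo sampling rather than direct integration, and all names are illustrative.

```python
import random

def probability_metric_mc(sample_x, sample_y, n=200_000, seed=0):
    # Monte Carlo estimate of D(X, Y) = E|X - Y| for independent X, Y;
    # sample_x / sample_y each draw one sample given an RNG instance.
    rng = random.Random(seed)
    return sum(abs(sample_x(rng) - sample_y(rng)) for _ in range(n)) / n

# For independent U(0,1) variables the exact value of D(X, Y) is 1/3.
d = probability_metric_mc(lambda r: r.random(), lambda r: r.random())
```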
One may easily show that such probability metrics do not satisfy the first metric axiom, or satisfy it if, and only if, both arguments X and Y are certain events described by Dirac delta density probability distribution functions. In this case: D(X,Y)=∫−∞∞∫−∞∞|x−y|δ(x−μx)δ(y−μy)dxdy=|μx−μy|{\displaystyle D(X,Y)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|x-y|\delta (x-\mu _{x})\delta (y-\mu _{y})\,dx\,dy=|\mu _{x}-\mu _{y}|} the probability metric simply transforms into the metric between the expected values μx{\displaystyle \mu _{x}}, μy{\displaystyle \mu _{y}} of the variables X and Y.
For all other random variables X, Y the probability metric does not satisfy the identity of indiscernibles condition required to be satisfied by the metric of a metric space, that is: D(X,X)>0.{\displaystyle D\left(X,X\right)>0.}
For example, if both probability distribution functions of the random variables X and Y are normal distributions (N) having the same standard deviation σ{\displaystyle \sigma }, integrating D(X,Y){\displaystyle D\left(X,Y\right)} yields: DNN(X,Y)=μxy+2σπexp(−μxy24σ2)−μxyerfc(μxy2σ){\displaystyle D_{NN}(X,Y)=\mu _{xy}+{\frac {2\sigma }{\sqrt {\pi }}}\exp \left(-{\frac {\mu _{xy}^{2}}{4\sigma ^{2}}}\right)-\mu _{xy}\operatorname {erfc} \left({\frac {\mu _{xy}}{2\sigma }}\right)} where μxy=|μx−μy|,{\displaystyle \mu _{xy}=\left|\mu _{x}-\mu _{y}\right|,} and erfc(x){\displaystyle \operatorname {erfc} (x)} is the complementary error function.
In this case:limμxy→0DNN(X,Y)=DNN(X,X)=2σπ.{\displaystyle \lim _{\mu _{xy}\to 0}D_{NN}(X,Y)=D_{NN}(X,X)={\frac {2\sigma }{\sqrt {\pi }}}.}
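The limiting value 2σ/√π can be checked numerically: for independent copies X, X′ of N(μ, σ), the difference X − X′ is N(0, 2σ²) and E|X − X′| = 2σ/√π > 0, confirming that identity of indiscernibles fails. A small Monte Carlo sketch (helper name is illustrative):

```python
import math
import random

def d_nn_same_law(sigma, n=200_000, seed=1):
    # Monte Carlo estimate of D(X, X) for two independent copies of
    # N(0, sigma); the theoretical value is 2*sigma/sqrt(pi) > 0.
    rng = random.Random(seed)
    return sum(
        abs(rng.gauss(0, sigma) - rng.gauss(0, sigma)) for _ in range(n)
    ) / n

# Theory predicts 2/sqrt(pi), roughly 1.128, for sigma = 1.
```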
The probability metric of random variables may be extended into a metric D(X, Y) of random vectors X, Y by substituting |x−y|{\displaystyle |x-y|} with any metric operator d(x, y): D(X,Y)=∫Ω∫Ωd(x,y)F(x,y)dΩxdΩy{\displaystyle D(\mathbf {X} ,\mathbf {Y} )=\int _{\Omega }\int _{\Omega }d(\mathbf {x} ,\mathbf {y} )F(\mathbf {x} ,\mathbf {y} )\,d\Omega _{x}d\Omega _{y}} where F(X, Y) is the joint probability density function of the random vectors X and Y. For example, substituting d(x, y) with the Euclidean metric and providing that the vectors X and Y are mutually independent yields: D(X,Y)=∫Ω∫Ω∑i|xi−yi|2F(x)G(y)dΩxdΩy.{\displaystyle D(\mathbf {X} ,\mathbf {Y} )=\int _{\Omega }\int _{\Omega }{\sqrt {\sum _{i}|x_{i}-y_{i}|^{2}}}F(\mathbf {x} )G(\mathbf {y} )\,d\Omega _{x}d\Omega _{y}.}
|
https://en.wikipedia.org/wiki/Probabilistic_metric_space
|
A randomness extractor, often simply called an "extractor", is a function which, when applied to output from a weak entropy source together with a short, uniformly random seed, generates a highly random output that appears independent from the source and uniformly distributed.[1] Examples of weakly random sources include radioactive decay or thermal noise; the only restriction on possible sources is that there is no way they can be fully controlled, calculated or predicted, and that a lower bound on their entropy rate can be established. For a given source, a randomness extractor can even be considered to be a true random number generator (TRNG); but there is no single extractor that has been proven to produce truly random output from any type of weakly random source.
Sometimes the term "bias" is used to denote a weakly random source's departure from uniformity, and in older literature, some extractors are calledunbiasing algorithms,[2]as they take the randomness from a so-called "biased" source and output a distribution that appears unbiased. The weakly random source will always be longer than the extractor's output, but an efficient extractor is one that lowers this ratio of lengths as much as possible, while simultaneously keeping the seed length low. Intuitively, this means that as much randomness as possible has been "extracted" from the source.
An extractor has some conceptual similarities with a pseudorandom generator (PRG), but the two concepts are not identical. Both are functions that take as input a small, uniformly random seed and produce a longer output that "looks" uniformly random. Some pseudorandom generators are, in fact, also extractors. (When a PRG is based on the existence of hard-core predicates, one can think of the weakly random source as a set of truth tables of such predicates and prove that the output is statistically close to uniform.[3]) However, the general PRG definition does not specify that a weakly random source must be used, and while in the case of an extractor the output should be statistically close to uniform, in a PRG it is only required to be computationally indistinguishable from uniform, a somewhat weaker concept.
The min-entropy of a distribution X{\displaystyle X} (denoted H∞(X){\displaystyle H_{\infty }(X)}) is the largest real number k{\displaystyle k} such that Pr[X=x]≤2−k{\displaystyle \Pr[X=x]\leq 2^{-k}} for every x{\displaystyle x} in the range of X{\displaystyle X}. In essence, this measures how likely X{\displaystyle X} is to take its most likely value, giving a worst-case bound on how random X{\displaystyle X} appears. Letting Uℓ{\displaystyle U_{\ell }} denote the uniform distribution over {0,1}ℓ{\displaystyle \{0,1\}^{\ell }}, clearly H∞(Uℓ)=ℓ{\displaystyle H_{\infty }(U_{\ell })=\ell }.
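Min-entropy is a direct computation from a finite probability vector; a small illustrative helper:

```python
import math

def min_entropy(probs):
    # H_inf(X) = -log2( max_x Pr[X = x] ): determined entirely by the
    # probability of the single most likely outcome.
    return -math.log2(max(probs))

# The uniform distribution over 8 outcomes has min-entropy 3 bits,
# while any distribution with a mode of probability 1/2 has only 1 bit,
# no matter how many other outcomes it spreads the rest over.
```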
For an n-bit distribution X{\displaystyle X} with min-entropy k, we say that X{\displaystyle X} is an (n,k){\displaystyle (n,k)} distribution.
Definition (Extractor):(k,ε)-extractor
LetExt:{0,1}n×{0,1}d→{0,1}m{\displaystyle {\text{Ext}}:\{0,1\}^{n}\times \{0,1\}^{d}\to \{0,1\}^{m}}be a function that takes as input a sample from an(n,k){\displaystyle (n,k)}distributionX{\displaystyle X}and ad-bit seed fromUd{\displaystyle U_{d}}, and outputs anm-bit string.Ext{\displaystyle {\text{Ext}}}is a(k,ε)-extractor, if for all(n,k){\displaystyle (n,k)}distributionsX{\displaystyle X}, the output distribution ofExt{\displaystyle {\text{Ext}}}isε-close toUm{\displaystyle U_{m}}.
In the above definition, ε-close refers to statistical distance.
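For distributions on a common finite support, the statistical (total variation) distance used here is half the ℓ₁ distance between the probability vectors; a minimal sketch:

```python
def statistical_distance(p, q):
    # Total variation distance: the maximum over events E of
    # |P(E) - Q(E)|, which equals half the L1 distance between the
    # probability vectors p and q on a shared support.
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# A fair coin and a constant bit are at distance 0.5; identical
# distributions are at distance 0.
```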
Intuitively, an extractor takes a weakly randomn-bit input and a short, uniformly random seed and produces anm-bit output that looks uniformly random. The aim is to have a lowd{\displaystyle d}(i.e. to use as little uniform randomness as possible) and as high anm{\displaystyle m}as possible (i.e. to get out as many close-to-random bits of output as we can).
An extractor is strong ifconcatenatingthe seed with the extractor's output yields a distribution that is still close to uniform.
Definition (Strong Extractor):A(k,ϵ){\displaystyle (k,\epsilon )}-strong extractor is a function
such that for every(n,k){\displaystyle (n,k)}distributionX{\displaystyle X}the distributionUd∘Ext(X,Ud){\displaystyle U_{d}\circ {\text{Ext}}(X,U_{d})}(the two copies ofUd{\displaystyle U_{d}}denote the same random variable) isϵ{\displaystyle \epsilon }-close to the uniform distribution onUm+d{\displaystyle U_{m+d}}.
Using theprobabilistic method, it can be shown that there exists a (k,ε)-extractor, i.e. that the construction is possible. However, it is usually not enough merely to show that an extractor exists. An explicit construction is needed, which is given as follows:
Definition (Explicit Extractor):For functionsk(n),ε(n),d(n),m(n) a family Ext = {Extn} of functions
is an explicit (k,ε)-extractor, if Ext(x,y) can be computed inpolynomial time(in its input length) and for everyn, Extnis a (k(n),ε(n))-extractor.
By the probabilistic method, it can be shown that there exists a (k,ε)-extractor with seed length
and output length
A variant of the randomness extractor with weaker properties is thedisperser.
One of the most important aspects of cryptography is random key generation.[5] It is often necessary to generate secret and random keys from sources that are semi-secret or which may be compromised to some degree. By taking a single, short (and secret) random key as a source, an extractor can be used to generate a longer pseudo-random key, which can then be used for public-key encryption. More specifically, when a strong extractor is used, its output will appear uniformly random even to someone who sees part (but not all) of the source, for example when the source is known but the seed is not (or vice versa). This property of extractors is particularly useful in what is commonly called exposure-resilient cryptography, in which the desired extractor is used as an exposure-resilient function (ERF). Exposure-resilient cryptography takes into account the fact that it is difficult to keep secret the initial exchange of data which often takes place during the initialization of an encryption application, e.g., the sender of encrypted information has to provide the receivers with the information required for decryption.
The following paragraphs define and establish an important relationship between two kinds of ERF, the k-ERF and the k-APRF, which are useful in exposure-resilient cryptography.
Definition (k-ERF):An adaptive k-ERF is a functionf{\displaystyle f}where, for a random inputr{\displaystyle r}, when a computationally unbounded adversaryA{\displaystyle A}can adaptively read all ofr{\displaystyle r}except fork{\displaystyle k}bits,|Pr{Ar(f(r))=1}−Pr{Ar(R)=1}|≤ϵ(n){\displaystyle |\Pr\{A^{r}(f(r))=1\}-\Pr\{A^{r}(R)=1\}|\leq \epsilon (n)}for some negligible functionϵ(n){\displaystyle \epsilon (n)}(defined below).
The goal is to construct an adaptive ERF whose output is highly random and uniformly distributed. But a stronger condition is often needed in which every output occurs with almost uniform probability. For this purposeAlmost-Perfect Resilient Functions(APRF) are used. The definition of an APRF is as follows:
Definition (k-APRF):Ak=k(n){\displaystyle k=k(n)}APRF is a functionf{\displaystyle f}where, for any setting ofn−k{\displaystyle n-k}bits of the inputr{\displaystyle r}to any fixed values, the probability vectorp{\displaystyle p}of the outputf(r){\displaystyle f(r)}over the random choices for thek{\displaystyle k}remaining bits satisfies|pi−2−m|<2−mϵ(n){\displaystyle |p_{i}-2^{-m}|<2^{-m}\epsilon (n)}for alli{\displaystyle i}and for some negligible functionϵ(n){\displaystyle \epsilon (n)}.
Kamp and Zuckerman[6] have proved a theorem stating that if a function f{\displaystyle f} is a k-APRF, then f{\displaystyle f} is also a k-ERF. More specifically, any extractor having sufficiently small error and taking as input an oblivious, bit-fixing source is also an APRF and therefore also a k-ERF. A more specific extractor is expressed in this lemma:
Lemma:Any2−mϵ(n){\displaystyle 2^{-m}\epsilon (n)}-extractorf:{0,1}n→{0,1}m{\displaystyle f:\{0,1\}^{n}\rightarrow \{0,1\}^{m}}for the set of(n,k){\displaystyle (n,k)}oblivious bit-fixing sources, whereϵ(n){\displaystyle \epsilon (n)}is negligible, is also a k-APRF.
This lemma is proved by Kamp and Zuckerman.[6]The lemma is proved by examining the distance from uniform of the output, which in a2−mϵ(n){\displaystyle 2^{-m}\epsilon (n)}-extractor obviously is at most2−mϵ(n){\displaystyle 2^{-m}\epsilon (n)}, which satisfies the condition of the APRF.
The lemma leads to the following theorem, stating that there in fact exists ak-APRF function as described:
Theorem (existence):For any positive constantγ≤12{\displaystyle \gamma \leq {\frac {1}{2}}}, there exists an explicit k-APRFf:{0,1}n→{0,1}m{\displaystyle f:\{0,1\}^{n}\rightarrow \{0,1\}^{m}}, computable in a linear number of arithmetic operations onm{\displaystyle m}-bit strings, withm=Ω(n2γ){\displaystyle m=\Omega (n^{2\gamma })}andk=n12+γ{\displaystyle k=n^{{\frac {1}{2}}+\gamma }}.
Definition (negligible function):In the proof of this theorem, we need a definition of anegligible function. A functionϵ(n){\displaystyle \epsilon (n)}is defined as being negligible ifϵ(n)=O(1nc){\displaystyle \epsilon (n)=O\left({\frac {1}{n^{c}}}\right)}for all constantsc{\displaystyle c}.
Proof:Consider the followingϵ{\displaystyle \epsilon }-extractor: The functionf{\displaystyle f}is an extractor for the set of(n,δn){\displaystyle (n,\delta n)}oblivious bit-fixing source:f:{0,1}n→{0,1}m{\displaystyle f:\{0,1\}^{n}\rightarrow \{0,1\}^{m}}.f{\displaystyle f}hasm=Ω(δ2n){\displaystyle m=\Omega (\delta ^{2}n)},ϵ=2−cm{\displaystyle \epsilon =2^{-cm}}andc>1{\displaystyle c>1}.
The proof of this extractor's existence withδ≤1{\displaystyle \delta \leq 1}, as well as the fact that it is computable in linear computing time on the length ofm{\displaystyle m}can be found in the paper by Jesse Kamp and David Zuckerman (p. 1240).
That this extractor fulfills the criteria of the lemma is trivially true asϵ=2−cm{\displaystyle \epsilon =2^{-cm}}is a negligible function.
The size ofm{\displaystyle m}is:
Since we knowδ≤1{\displaystyle \delta \leq 1}then the lower bound onm{\displaystyle m}is dominated byn{\displaystyle n}. In the last step we use the fact thatγ≤12{\displaystyle \gamma \leq {\frac {1}{2}}}which means that the power ofn{\displaystyle n}is at most1{\displaystyle 1}. And sincen{\displaystyle n}is a positive integer we know thatn2γ{\displaystyle n^{2\gamma }}is at mostn{\displaystyle n}.
The value ofk{\displaystyle k}is calculated by using the definition of the extractor, where we know:
and by using the value ofm{\displaystyle m}we have:
Using this value of m{\displaystyle m}, we account for the worst case, where k{\displaystyle k} is at its lower bound. Now by algebraic calculations we get:
which, inserted into the value of k, gives
which proves that there exists an explicit k-APRF extractor with the given properties.◻{\displaystyle \Box }
Perhaps the earliest example is due to John von Neumann. From the input stream, his extractor took bits, two at a time (first and second, then third and fourth, and so on). If the two bits matched, no output was generated. If the bits differed, the value of the first bit was output. The von Neumann extractor can be shown to produce a uniform output even if the distribution of input bits is not uniform, so long as each bit has the same probability of being one and there is no correlation between successive bits.[7]
Thus, it takes as input a Bernoulli sequence with p not necessarily equal to 1/2, and outputs a Bernoulli sequence with p=1/2.{\displaystyle p=1/2.} More generally, it applies to any exchangeable sequence—it only relies on the fact that for any pair, 01 and 10 are equally likely: for independent trials, these have probabilities p⋅(1−p)=(1−p)⋅p{\displaystyle p\cdot (1-p)=(1-p)\cdot p}, while for an exchangeable sequence the probability may be more complicated, but both are equally likely. To put it simply, because the bits are statistically independent and due to the commutative property of multiplication, it would follow that P(A∩B)=P(A)P(B)=P(B)P(A)=P(B∩A){\displaystyle P(A\cap B)=P(A)P(B)=P(B)P(A)=P(B\cap A)}. Hence, if pairs of 01 and 10 are mapped onto bits 0 and 1 and pairs 00 and 11 are discarded, then the output will be a uniform distribution.
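The pairing rule described above fits in a few lines (a sketch; variable names are ours):

```python
def von_neumann_extractor(bits):
    # Process the input in non-overlapping pairs:
    # 01 -> output 0, 10 -> output 1, 00 and 11 -> discard.
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# e.g. input pairs (0,1) (1,0) (0,0) (1,1) (1,0) yield [0, 1, 1]
```

Note the cost: for a source with bias p, each pair produces an output bit only with probability 2p(1 − p), so heavily biased sources yield few output bits.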
Iterations upon the Von Neumann extractor include the Elias and Peres extractor, the latter of which reuses bits in order to produce larger output streams than the Von Neumann extractor given the same size input stream.[8]
Another approach is to use the output of a chaos machine applied to the input stream. This approach generally relies on properties of chaotic systems. Input bits are pushed to the machine, evolving orbits and trajectories in multiple dynamical systems. Thus, small differences in the input produce very different outputs. Such a machine has a uniform output even if the distribution of input bits is not uniform or has serious flaws, and can therefore use weak entropy sources. Additionally, this scheme allows for increased complexity, quality, and security of the output stream, controlled by specifying three parameters: time cost, memory required, and secret key.
Note that while true chaotic systems are mathematically sound for 'amplifying' entropy, this is predicated on the availability of real numbers with infinite precision. When implemented in digital computers with finite-precision number representation, as in chaos machines using IEEE 754 floating point, the periodicity has been shown to fall far short of the full space for a given bit length.[9]
It is also possible to use a cryptographic hash function as a randomness extractor. However, not every hashing algorithm is suitable for this purpose.[citation needed]
Randomness extractors are used widely in cryptographic applications, whereby a cryptographic hash function is applied to a high-entropy but non-uniform source, such as disk drive timing information or keyboard delays, to yield a uniformly random result.
Randomness extractors have played a major role in recent quantum cryptography developments, for example, distilling the raw output of a quantum random number generator into a shorter, secure, and uniformly random output.[10]
Randomness extraction is also used in some branches of computational complexity theory and in the construction of list-decodable error-correcting codes.
https://en.wikipedia.org/wiki/Randomness_extractor
In statistics and related fields, a similarity measure or similarity function or similarity metric is a real-valued function that quantifies the similarity between two objects. Although no single definition of similarity exists, such measures are usually in some sense the inverse of distance metrics: they take on large values for similar objects and either zero or a negative value for very dissimilar objects. In broader terms, however, a similarity function may also satisfy the metric axioms.
Cosine similarity is a commonly used similarity measure for real-valued vectors, used in (among other fields) information retrieval to score the similarity of documents in the vector space model. In machine learning, common kernel functions such as the RBF kernel can be viewed as similarity functions.[1]
Different similarity measures exist for different types of objects, depending on what is being compared; for each type of object there are various similarity formulas to choose from.[2]
Similarity between two data points
There are many options available for measuring similarity between two data points, some of which combine other similarity methods. Common choices include the Euclidean distance, Manhattan distance, Minkowski distance, and Chebyshev distance. The Euclidean distance formula gives the straight-line distance between two points on a plane. Manhattan distance is commonly used in GPS applications, as it can be used to find the shortest route between two addresses.[citation needed] Generalizing the Euclidean and Manhattan distance formulas yields the Minkowski distance, which can be used in a wide variety of applications.
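The relationship between these distances can be seen in a short sketch: the Minkowski distance of order r reduces to the Manhattan distance for r = 1 and the Euclidean distance for r = 2, while the Chebyshev distance is its limit as r grows. A minimal illustration:

```python
def minkowski(p1, p2, r):
    """Minkowski distance of order r between two points given as
    coordinate tuples; r=1 gives Manhattan, r=2 gives Euclidean."""
    return sum(abs(a - b) ** r for a, b in zip(p1, p2)) ** (1 / r)

def chebyshev(p1, p2):
    """Limit of the Minkowski distance as r -> infinity:
    the largest single-coordinate gap."""
    return max(abs(a - b) for a, b in zip(p1, p2))

a, b = (1, 2), (4, 6)
print(minkowski(a, b, 1))  # 7.0  (Manhattan)
print(minkowski(a, b, 2))  # 5.0  (Euclidean)
print(chebyshev(a, b))     # 4
```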
Similarity between strings
For comparing strings, various measures of string similarity can be used, including edit distance, Levenshtein distance, Hamming distance, and Jaro distance. The best-fit formula depends on the requirements of the application. For example, edit distance is frequently used in natural language processing applications and features, such as spell-checking. Jaro distance is commonly used in record linkage to compare first and last names to other sources.
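Two of the measures named above are simple enough to sketch: Hamming distance counts differing positions in equal-length strings, while Levenshtein distance counts the minimum number of single-character edits. The following is a minimal dynamic-programming implementation:

```python
def hamming(s, t):
    """Number of positions at which two equal-length strings differ."""
    if len(s) != len(t):
        raise ValueError("Hamming distance requires equal-length strings")
    return sum(a != b for a, b in zip(s, t))

def levenshtein(s, t):
    """Minimum number of insertions, deletions, and substitutions
    turning s into t, computed row by row over a prefix table."""
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (a != b)))  # substitution
        prev = cur
    return prev[-1]

print(hamming("karolin", "kathrin"))     # 3
print(levenshtein("kitten", "sitting"))  # 3
```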
Similarity between two probability distributions
Typical measures of similarity for probability distributions are the Bhattacharyya distance and the Hellinger distance. Both provide a quantification of similarity for two probability distributions on the same domain, and they are mathematically closely linked. The Bhattacharyya distance does not fulfill the triangle inequality, meaning it does not form a metric. The Hellinger distance does form a metric on the space of probability distributions.
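The link between the two is the Bhattacharyya coefficient BC(p, q) = Σ √(p_i q_i): the Bhattacharyya distance is −ln BC, while the Hellinger distance is √(1 − BC). A minimal sketch for discrete distributions on a shared support:

```python
from math import sqrt, log

def bhattacharyya_coefficient(p, q):
    """Overlap (in [0, 1]) of two discrete distributions on the same support."""
    return sum(sqrt(pi * qi) for pi, qi in zip(p, q))

def bhattacharyya_distance(p, q):
    """D_B = -ln BC(p, q); zero for identical distributions,
    but does not satisfy the triangle inequality."""
    return -log(bhattacharyya_coefficient(p, q))

def hellinger_distance(p, q):
    """H = sqrt(1 - BC); a true metric. The max() guards against
    tiny negative values caused by floating-point rounding."""
    return sqrt(max(0.0, 1.0 - bhattacharyya_coefficient(p, q)))

p = [0.36, 0.48, 0.16]
q = [0.30, 0.50, 0.20]
print(bhattacharyya_distance(p, q), hellinger_distance(p, q))
```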
Similarity between two sets
The Jaccard index measures the similarity between two sets based on the number of items that are present in both sets relative to the total number of items. It is commonly used in recommendation systems and social media analysis[citation needed]. The Sørensen–Dice coefficient also compares the number of items in both sets to the total number of items present, but gives greater weight to the shared items. The Sørensen–Dice coefficient is commonly used in biology applications, measuring the similarity between two sets of genes or species[citation needed].
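Both coefficients can be written in a few lines over Python sets; the gene names below are purely illustrative placeholders:

```python
def jaccard(a, b):
    """|A ∩ B| / |A ∪ B|: shared items over all distinct items."""
    return len(a & b) / len(a | b)

def sorensen_dice(a, b):
    """2|A ∩ B| / (|A| + |B|): shared items counted twice,
    so overlap is weighted more heavily than in the Jaccard index."""
    return 2 * len(a & b) / (len(a) + len(b))

genes_x = {"BRCA1", "TP53", "EGFR"}
genes_y = {"TP53", "EGFR", "KRAS", "MYC"}
print(jaccard(genes_x, genes_y))        # 2/5 = 0.4
print(sorensen_dice(genes_x, genes_y))  # 4/7 ≈ 0.571
```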
Similarity between two sequences
When comparing temporal sequences (time series), some similarity measures must additionally account for similarity of two sequences that are not fully aligned.
Clustering or cluster analysis is a data mining technique that is used to discover patterns in data by grouping similar objects together. It involves partitioning a set of data points into groups or clusters based on their similarities. One of the fundamental aspects of clustering is how to measure similarity between data points.
Similarity measures play a crucial role in many clustering techniques, as they are used to determine how closely related two data points are and whether they should be grouped together in the same cluster. A similarity measure can take many different forms depending on the type of data being clustered and the specific problem being solved.
One of the most commonly used similarity measures is the Euclidean distance, which is used in many clustering techniques including k-means clustering and hierarchical clustering. The Euclidean distance is a measure of the straight-line distance between two points in a high-dimensional space. It is calculated as the square root of the sum of the squared differences between the corresponding coordinates of the two points. For example, for two data points (x1, y1) and (x2, y2), the Euclidean distance between them is d = √((x2 − x1)² + (y2 − y1)²).
Another commonly used similarity measure is the Jaccard index or Jaccard similarity, which is used in clustering techniques that work with binary data such as presence/absence data[3] or Boolean data. The Jaccard similarity is particularly useful for clustering techniques that work with text data, where it can be used to identify clusters of similar documents based on their shared features or keywords.[4] It is calculated as the size of the intersection of two sets divided by the size of their union: J(A, B) = |A ∩ B| / |A ∪ B|.
As an example, similarities among 162 relevant nuclear profiles have been tested using the Jaccard similarity measure. The Jaccard similarity of the nuclear profiles ranges from 0, indicating no similarity between two sets, to 1, indicating perfect similarity, with the aim of clustering the most similar profiles.
Manhattan distance, also known as taxicab geometry, is a commonly used similarity measure in clustering techniques that work with continuous data. It is a measure of the distance between two data points in a high-dimensional space, calculated as the sum of the absolute differences between the corresponding coordinates of the two points: |x1 − x2| + |y1 − y2|.
When dealing with mixed-type data, including nominal, ordinal, and numerical attributes per object, Gower's distance (or similarity) is a common choice as it can handle different types of variables implicitly. It first computes similarities between the pair of variables in each object, and then combines those similarities into a single weighted average per object pair. As such, for two objects i and j having p descriptors, the similarity S is defined as S_ij = (Σ_{k=1}^{p} w_ijk s_ijk) / (Σ_{k=1}^{p} w_ijk), where the w_ijk are non-negative weights and s_ijk is the similarity between the two objects regarding their k-th variable.
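A common per-variable convention (assumed here, with all weights w_ijk set to 1) is to score numeric variables as 1 − |difference|/range and categorical variables as 1 for a match and 0 otherwise; the weighted average above then becomes a plain mean:

```python
def gower_similarity(x, y, ranges):
    """Gower similarity for two objects with mixed attributes.
    ranges[k] is the observed range of numeric variable k, or None
    for a categorical variable; all weights are taken as 1 here."""
    sims = []
    for xi, yi, rk in zip(x, y, ranges):
        if rk is None:                        # categorical: match / no match
            sims.append(1.0 if xi == yi else 0.0)
        else:                                 # numeric: 1 - |diff| / range
            sims.append(1.0 - abs(xi - yi) / rk)
    return sum(sims) / len(sims)

# Two objects with one numeric descriptor (range 10) and two categorical ones:
s = gower_similarity((4.0, "red", "small"), (9.0, "red", "large"),
                     ranges=(10.0, None, None))
print(s)  # (0.5 + 1.0 + 0.0) / 3 = 0.5
```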
In spectral clustering, a similarity, or affinity, measure is used to transform data to overcome difficulties related to lack of convexity in the shape of the data distribution.[5] The measure gives rise to an (n, n)-sized similarity matrix for a set of n points, where the entry (i, j) in the matrix can be simply the (reciprocal of the) Euclidean distance between i and j, or it can be a more complex measure of distance such as the Gaussian exp(−‖s1 − s2‖² / (2σ²)).[5] Further modifying this result with network analysis techniques is also common.[6]
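The Gaussian affinity matrix can be sketched directly from that formula; this is a plain-Python illustration, not an optimized implementation:

```python
from math import exp

def gaussian_affinity(points, sigma):
    """(n, n) similarity matrix with entries exp(-||si - sj||^2 / (2 sigma^2)).
    Entries are 1 on the diagonal and decay toward 0 with distance."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [[exp(-sq_dist(a, b) / (2 * sigma ** 2)) for b in points]
            for a in points]

pts = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
A = gaussian_affinity(pts, sigma=1.0)
# Nearby points (first two) score close to 1; the distant third scores near 0.
```

The bandwidth σ controls how quickly affinity decays with distance, and is typically tuned to the scale of the data.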
The choice of similarity measure depends on the type of data being clustered and the specific problem being solved. For example, when working with continuous data such as gene expression data, the Euclidean distance or cosine similarity may be appropriate. When working with binary data, such as the presence of a genomic locus in a nuclear profile, the Jaccard index may be more appropriate. Lastly, for data arranged in a grid or lattice structure, such as image or signal processing data, the Manhattan distance is particularly useful for clustering.
Similarity measures are used to develop recommender systems, which observe a user's perception of and liking for multiple items. In recommender systems, a distance calculation such as Euclidean distance or cosine similarity is used to generate a similarity matrix whose values represent the similarity of any pair of targets. Then, by analyzing and comparing the values in the matrix, it is possible to match two targets to a user's preference or to link users based on their marks. In this system, it is relevant to observe the value itself as well as the absolute distance between two values.[7] Gathering this data can indicate how likely a user is to favor a given mark, and how closely two marks are jointly rejected or accepted. It is then possible to recommend to a user targets with high similarity to the user's likes.
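The cosine-similarity variant of this matching step can be sketched in a few lines; the user names and rating rows below are hypothetical, purely for illustration:

```python
from math import sqrt

def cosine_similarity(u, v):
    """Angle-based similarity of two rating vectors, in [-1, 1];
    1 means identical direction (proportional ratings)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical rows of a user-item rating matrix (items rated 1-5):
alice = [5, 3, 4]
bob   = [4, 2, 4]
carol = [1, 5, 1]
print(cosine_similarity(alice, bob))    # high: similar tastes
print(cosine_similarity(alice, carol))  # lower: dissimilar tastes
```

Computing this for every pair of users (or items) fills the similarity matrix described above, from which the most similar neighbours are selected for recommendation.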
Recommender systems are found on multiple online entertainment platforms, social media, and streaming websites. The logic for the construction of these systems is based on similarity measures.[citation needed]
Similarity matrices are used in sequence alignment. Higher scores are given to more-similar characters, and lower or negative scores to dissimilar characters.
Nucleotide similarity matrices are used to align nucleic acid sequences. Because there are only four nucleotides commonly found in DNA (adenine (A), cytosine (C), guanine (G) and thymine (T)), nucleotide similarity matrices are much simpler than protein similarity matrices. For example, a simple matrix will assign identical bases a score of +1 and non-identical bases a score of −1. A more complicated matrix would give a higher score to transitions (changes from a pyrimidine such as C or T to another pyrimidine, or from a purine such as A or G to another purine) than to transversions (from a pyrimidine to a purine or vice versa).
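A transition/transversion-aware scoring scheme can be sketched as follows; the particular score values (+1/−1/−3) are illustrative choices, not a standard matrix:

```python
PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def nucleotide_score(x, y, match=1, transition=-1, transversion=-3):
    """Score one aligned base pair: identical bases score highest,
    transitions (purine<->purine or pyrimidine<->pyrimidine) score
    above transversions (purine<->pyrimidine)."""
    if x == y:
        return match
    same_class = ({x, y} <= PURINES) or ({x, y} <= PYRIMIDINES)
    return transition if same_class else transversion

def alignment_score(s, t):
    """Sum of pairwise scores over an ungapped alignment of equal length."""
    return sum(nucleotide_score(x, y) for x, y in zip(s, t))

print(alignment_score("ACGT", "ACGT"))  # 4: all matches
print(alignment_score("ACGT", "GCGT"))  # 2: one A->G transition (-1)
```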
The match/mismatch ratio of the matrix sets the target evolutionary distance.[8][9]The +1/−3 DNA matrix used by BLASTN is best suited for finding matches between sequences that are 99% identical; a +1/−1 (or +4/−4) matrix is much more suited to sequences with about 70% similarity. Matrices for lower similarity sequences require longer sequence alignments.
Amino acid similarity matrices are more complicated, because there are 20 amino acids coded for by the genetic code, and so a larger number of possible substitutions. Therefore, the similarity matrix for amino acids contains 400 entries (although it is usually symmetric). The first approach scored all amino acid changes equally. A later refinement was to determine amino acid similarities based on how many base changes were required to change a codon to code for that amino acid. This model is better, but it doesn't take into account the selective pressure of amino acid changes. Better models took into account the chemical properties of amino acids.
One approach has been to empirically generate the similarity matrices. The Dayhoff method used phylogenetic trees and sequences taken from species on the tree. This approach has given rise to the PAM series of matrices. PAM matrices are labelled based on how many nucleotide changes have occurred per 100 amino acids.
While the PAM matrices benefit from having a well-understood evolutionary model, they are most useful at short evolutionary distances (PAM10–PAM120). At long evolutionary distances, for example PAM250 or 20% identity, it has been shown that the BLOSUM matrices are much more effective.
The BLOSUM series were generated by comparing a number of divergent sequences, and are labeled based on how much entropy remains unmutated between all sequences, so a lower BLOSUM number corresponds to a higher PAM number.
https://en.wikipedia.org/wiki/Similarity_measure
A Turing machine is a mathematical model of computation describing an abstract machine[1] that manipulates symbols on a strip of tape according to a table of rules.[2] Despite the model's simplicity, it is capable of implementing any computer algorithm.[3]
The machine operates on an infinite[4] memory tape divided into discrete cells,[5] each of which can hold a single symbol drawn from a finite set of symbols called the alphabet of the machine. It has a "head" that, at any point in the machine's operation, is positioned over one of these cells, and a "state" selected from a finite set of states. At each step of its operation, the head reads the symbol in its cell. Then, based on the symbol and the machine's own present state, the machine writes a symbol into the same cell, and moves the head one step to the left or the right,[6] or halts the computation. The choice of which replacement symbol to write, which direction to move the head, and whether to halt is based on a finite table that specifies what to do for each combination of the current state and the symbol that is read.
As with a real computer program, it is possible for a Turing machine to go into an infinite loop that will never halt.
The Turing machine was invented in 1936 by Alan Turing,[7][8] who called it an "a-machine" (automatic machine).[9] It was Turing's doctoral advisor, Alonzo Church, who later coined the term "Turing machine" in a review.[10] With this model, Turing was able to answer two questions in the negative.
Thus by providing a mathematical description of a very simple device capable of arbitrary computations, he was able to prove properties of computation in general, and in particular, the uncomputability of the Entscheidungsproblem, or 'decision problem' (whether every mathematical statement is provable or disprovable).[13]
Turing machines proved the existence of fundamental limitations on the power of mechanical computation.[14]
While they can express arbitrary computations, their minimalist design makes them too slow for computation in practice: real-world computers are based on different designs that, unlike Turing machines, use random-access memory.
Turing completeness is the ability of a computational model or a system of instructions to simulate a Turing machine. A programming language that is Turing complete is theoretically capable of expressing all tasks accomplishable by computers; nearly all programming languages are Turing complete if the limitations of finite memory are ignored.
A Turing machine is an idealised model of a central processing unit (CPU) that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. Typically, the sequential memory is represented as a tape of infinite length on which the machine can perform read and write operations.
In the context of formal language theory, a Turing machine (automaton) is capable of enumerating some arbitrary subset of valid strings of an alphabet. A set of strings which can be enumerated in this manner is called a recursively enumerable language. The Turing machine can equivalently be defined as a model that recognises valid input strings, rather than enumerating output strings.
Given a Turing machine M and an arbitrary string s, it is generally not possible to decide whether M will eventually produce s. This is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing.
The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways. This is famously demonstrated through lambda calculus.
A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine (UTM, or simply a universal machine). Another mathematical formalism, lambda calculus, with a similar "universal" nature was introduced by Alonzo Church. Church's work intertwined with Turing's to form the basis for the Church–Turing thesis. This thesis states that Turing machines, lambda calculus, and other similar formalisms of computation do indeed capture the informal notion of effective methods in logic and mathematics, and thus provide a model through which one can reason about an algorithm or "mechanical procedure" in a mathematically precise way without being tied to any particular formalism. Studying the abstract properties of Turing machines has yielded many insights into computer science, computability theory, and complexity theory.
In his 1948 essay, "Intelligent Machinery", Turing wrote that his machine consists of:
...an unlimited memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings.[15]
The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6;" etc. In the original article ("On Computable Numbers, with an Application to the Entscheidungsproblem", see alsoreferences below), Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly (or as Turing puts it, "in a desultory manner").
More explicitly, a Turing machine consists of:
In the 4-tuple models, erasing or writing a symbol (a_j1) and moving the head left or right (d_k) are specified as separate instructions. The table tells the machine to (ia) erase or write a symbol or (ib) move the head left or right, and then (ii) assume the same or a new state as prescribed, but not both actions (ia) and (ib) in the same instruction. In some models, if there is no entry in the table for the current combination of symbol and state, then the machine will halt; other models require all entries to be filled.
Every part of the machine (i.e. its state, symbol-collections, and used tape at any given time) and its actions (such as printing, erasing and tape motion) is finite, discrete and distinguishable; it is the unlimited amount of tape and runtime that gives it an unbounded amount of storage space.
Following Hopcroft & Ullman (1979, p. 148), a (one-tape) Turing machine can be formally defined as a 7-tuple M = ⟨Q, Γ, b, Σ, δ, q0, F⟩ where
A variant allows "no shift", say N, as a third element of the set of directions {L, R}.
The 7-tuple for the 3-state busy beaver looks like this (see more about this busy beaver at Turing machine examples):
Initially all tape cells are marked with 0.
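A one-tape machine of this kind can be simulated in a few lines. The sketch below assumes one common transition table for the 3-state, 2-symbol busy beaver (start state A, halt state H), storing only non-blank cells in a dictionary so the "infinite" tape needs no preallocation:

```python
def run_turing_machine(delta, start, halt, blank=0, max_steps=10_000):
    """Simulate a one-tape Turing machine. delta maps (state, symbol) to
    (write, move, next_state) with move in {-1, +1}; the tape is a dict
    holding only the cells that have been written."""
    tape, pos, state, steps = {}, 0, start, 0
    while state != halt and steps < max_steps:
        write, move, state = delta[(state, tape.get(pos, blank))]
        tape[pos] = write
        pos += move
        steps += 1
    return tape, steps

# One common formulation of the 3-state, 2-symbol busy beaver:
L, R = -1, +1
busy_beaver = {
    ("A", 0): (1, R, "B"), ("A", 1): (1, L, "C"),
    ("B", 0): (1, L, "A"), ("B", 1): (1, R, "B"),
    ("C", 0): (1, L, "B"), ("C", 1): (1, R, "H"),
}
tape, steps = run_turing_machine(busy_beaver, "A", "H")
print(sum(tape.values()), steps)  # 6 ones written, halts after 13 steps
```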
In the words of van Emde Boas (1990), p. 6: "The set-theoretical object [his formal seven-tuple description similar to the above] provides only partial information on how the machine will behave and what its computations will look like."
For instance,
Definitions in literature sometimes differ slightly, to make arguments or proofs easier or clearer, but this is always done in such a way that the resulting machine has the same computational power. For example, the set could be changed from {L, R} to {L, R, N}, where N ("None" or "No-operation") would allow the machine to stay on the same tape cell instead of moving left or right. This would not increase the machine's computational power.
The most common convention represents each "Turing instruction" in a "Turing table" by one of nine 5-tuples, per the convention of Turing/Davis (Turing (1936) in The Undecidable, p. 126–127 and Davis (2000) p. 152):
Other authors (Minsky (1967) p. 119, Hopcroft and Ullman (1979) p. 158, Stone (1972) p. 9) adopt a different convention, with the new state qm listed immediately after the scanned symbol Sj:
For the remainder of this article "definition 1" (the Turing/Davis convention) will be used.
In the following table, Turing's original model allowed only the first three lines that he called N1, N2, N3 (cf. Turing in The Undecidable, p. 126). He allowed for erasure of the "scanned square" by naming a 0th symbol S0 = "erase" or "blank", etc. However, he did not allow for non-printing, so every instruction-line includes "print symbol Sk" or "erase" (cf. footnote 12 in Post (1947), The Undecidable, p. 300). The abbreviations are Turing's (The Undecidable, p. 119). Subsequent to Turing's original paper in 1936–1937, machine-models have allowed all nine possible types of five-tuples:
Any Turing table (list of instructions) can be constructed from the above nine 5-tuples. For technical reasons, the three non-printing or "N" instructions (4, 5, 6) can usually be dispensed with. For examples see Turing machine examples.
Less frequently, 4-tuples are encountered: these represent a further atomization of the Turing instructions (cf. Post (1947), Boolos & Jeffrey (1974, 1999), Davis-Sigal-Weyuker (1994)); also see more at Post–Turing machine.
The word "state" used in the context of Turing machines can be a source of confusion, as it can mean two things. Most commentators after Turing have used "state" to mean the name/designator of the current instruction to be performed, i.e. the contents of the state register. But Turing (1936) made a strong distinction between a record of what he called the machine's "m-configuration" and the machine's (or person's) "state of progress" through the computation, the current state of the total system. What Turing called "the state formula" includes both the current instruction and all the symbols on the tape:
Thus the state of progress of the computation at any stage is completely determined by the note of instructions and the symbols on the tape. That is, the state of the system may be described by a single expression (sequence of symbols) consisting of the symbols on the tape followed by Δ (which is supposed not to appear elsewhere) and then by the note of instructions. This expression is called the "state formula".
Earlier in his paper Turing carried this even further: he gives an example where he placed a symbol of the current "m-configuration", the instruction's label, beneath the scanned square, together with all the symbols on the tape (The Undecidable, p. 121); this he calls "the complete configuration" (The Undecidable, p. 118). To print the "complete configuration" on one line, he places the state-label/m-configuration to the left of the scanned symbol.
A variant of this is seen in Kleene (1952), where Kleene shows how to write the Gödel number of a machine's "situation": he places the "m-configuration" symbol q4 over the scanned square in roughly the center of the 6 non-blank squares on the tape (see the Turing-tape figure in this article) and puts it to the right of the scanned square. But Kleene refers to "q4" itself as "the machine state" (Kleene, p. 374–375). Hopcroft and Ullman call this composite the "instantaneous description" and follow the Turing convention of putting the "current state" (instruction-label, m-configuration) to the left of the scanned symbol (p. 149); that is, the instantaneous description is the composite of non-blank symbols to the left, state of the machine, the current symbol scanned by the head, and the non-blank symbols to the right.
Example: total state of 3-state 2-symbol busy beaver after 3 "moves" (taken from example "run" in the figure below):
This means: after three moves the tape has ... 000110000 ... on it, the head is scanning the right-most 1, and the state is A. Blanks (in this case represented by "0"s) can be part of the total state as shown here: B01; the tape has a single 1 on it, but the head is scanning the 0 ("blank") to its left and the state is B.
"State" in the context of Turing machines should be clarified as to which is being described: the current instruction alone, or the list of symbols on the tape together with the current instruction, placed either to the left or to the right of the scanned symbol.
Turing's biographer Andrew Hodges (1983: 107) has noted and discussed this confusion.
The above table can also be expressed as a "state transition" diagram.
Usually large tables are better left as tables (Booth, p. 74). They are more readily simulated by computer in tabular form (Booth, p. 74). However, certain concepts—e.g. machines with "reset" states and machines with repeating patterns (cf. Hill and Peterson p. 244ff)—can be more readily seen when viewed as a drawing.
Whether a drawing represents an improvement on its table must be decided by the reader for the particular context.
The reader should again be cautioned that such diagrams represent a snapshot of their table frozen in time, not the course ("trajectory") of a computation through time and space. While every time the busy beaver machine "runs" it will always follow the same state-trajectory, this is not true for the "copy" machine that can be provided with variable input "parameters".
The diagram "progress of the computation" shows the three-state busy beaver's "state" (instruction) progress through its computation from start to finish. On the far right is the Turing "complete configuration" (Kleene "situation", Hopcroft–Ullman "instantaneous description") at each step. If the machine were to be stopped and cleared to blank both the "state register" and entire tape, these "configurations" could be used to rekindle a computation anywhere in its progress (cf. Turing (1936), The Undecidable, pp. 139–140).
Many machines that might be thought to have more computational capability than a simple universal Turing machine can be shown to have no more power (Hopcroft and Ullman p. 159, cf. Minsky (1967)). They might compute faster, perhaps, or use less memory, or their instruction set might be smaller, but they cannot compute more powerfully (i.e. more mathematical functions). (The Church–Turing thesis hypothesises this to be true for any kind of machine: that anything that can be "computed" can be computed by some Turing machine.)
A Turing machine is equivalent to a single-stack pushdown automaton (PDA) that has been made more flexible and concise by relaxing the last-in-first-out (LIFO) requirement of its stack. In addition, a Turing machine is also equivalent to a two-stack PDA with standard LIFO semantics, by using one stack to model the tape left of the head and the other stack for the tape to the right.
At the other extreme, some very simple models turn out to be Turing-equivalent, i.e. to have the same computational power as the Turing machine model.
Common equivalent models are the multi-tape Turing machine, multi-track Turing machine, machines with input and output, and the non-deterministic Turing machine (NDTM), as opposed to the deterministic Turing machine (DTM), for which the action table has at most one entry for each combination of symbol and state.
Read-only, right-moving Turing machines are equivalent to DFAs (as well as NFAs, by conversion using the NFA to DFA conversion algorithm).
For practical and didactic purposes, the equivalent register machine can be used as a conventional assembly programming language.
A relevant question is whether or not the computation model represented by concrete programming languages is Turing equivalent. While the computation of a real computer is based on finite states and thus not capable of simulating a Turing machine, programming languages themselves do not necessarily have this limitation. Kirner et al. (2009) have shown that among general-purpose programming languages some are Turing complete while others are not. For example, ANSI C is not Turing complete, as all instantiations of ANSI C (different instantiations are possible as the standard deliberately leaves certain behaviour undefined for legacy reasons) imply a finite-space memory. This is because the size of memory reference data types, called pointers, is accessible inside the language. However, other programming languages like Pascal do not have this feature, which allows them to be Turing complete in principle.
They are Turing complete only in principle, as memory allocation in a programming language is allowed to fail, which means the programming language can be Turing complete when ignoring failed memory allocations, but the compiled programs executable on a real computer cannot be.
Early in his paper (1936) Turing makes a distinction between an "automatic machine"—its "motion ... completely determined by the configuration" and a "choice machine":
...whose motion is only partially determined by the configuration ... When such a machine reaches one of these ambiguous configurations, it cannot go on until some arbitrary choice has been made by an external operator. This would be the case if we were using machines to deal with axiomatic systems.
Turing (1936) does not elaborate further except in a footnote in which he describes how to use an a-machine to "find all the provable formulae of the [Hilbert] calculus" rather than use a choice machine. He "suppose[s] that the choices are always between two possibilities 0 and 1. Each proof will then be determined by a sequence of choices i1, i2, ..., in (i1 = 0 or 1, i2 = 0 or 1, ..., in = 0 or 1), and hence the number 2^n + i1·2^(n−1) + i2·2^(n−2) + ... + in completely determines the proof. The automatic machine carries out successively proof 1, proof 2, proof 3, ..." (Footnote ‡, The Undecidable, p. 138)
This is indeed the technique by which a deterministic (i.e., a-) Turing machine can be used to mimic the action of a nondeterministic Turing machine; Turing solved the matter in a footnote and appears to dismiss it from further consideration.
An oracle machine or o-machine is a Turing a-machine that pauses its computation at state "o" while, to complete its calculation, it "awaits the decision" of "the oracle", an entity unspecified by Turing "apart from saying that it cannot be a machine" (Turing (1939), The Undecidable, p. 166–168).
As Turing wrote in The Undecidable, p. 128 (italics added):
It is possible to invent a single machine which can be used to compute any computable sequence. If this machine U is supplied with the tape on the beginning of which is written the string of quintuples separated by semicolons of some computing machine M, then U will compute the same sequence as M.
This finding is now taken for granted, but at the time (1936) it was considered astonishing.[citation needed]The model of computation that Turing called his "universal machine"—"U" for short—is considered by some (cf. Davis (2000)) to have been the fundamental theoretical breakthrough that led to the notion of thestored-program computer.
Turing's paper ... contains, in essence, the invention of the modern computer and some of the programming techniques that accompanied it.
In terms of computational complexity, a multi-tape universal Turing machine need only be slower by a logarithmic factor compared to the machines it simulates. This result was obtained in 1966 by F. C. Hennie and R. E. Stearns. (Arora and Barak, 2009, theorem 1.9)
Turing machines are more powerful than some other kinds of automata, such as finite-state machines and pushdown automata. According to the Church–Turing thesis, they are as powerful as real machines, and are able to execute any operation that a real program can. What is neglected in this statement is that, because a real machine can only have a finite number of configurations, it is nothing but a finite-state machine, whereas a Turing machine has an unlimited amount of storage space available for its computations.
There are a number of ways to explain why Turing machines are useful models of real computers:
A limitation of Turing machines is that they do not model the strengths of a particular arrangement well. For instance, modern stored-program computers are actually instances of a more specific form of abstract machine known as the random-access stored-program machine or RASP machine model. Like the universal Turing machine, the RASP stores its "program" in "memory" external to its finite-state machine's "instructions". Unlike the universal Turing machine, the RASP has an infinite number of distinguishable, numbered but unbounded "registers"—memory "cells" that can contain any integer (cf. Elgot and Robinson (1964), Hartmanis (1971), and in particular Cook–Reckhow (1973); references at random-access machine). The RASP's finite-state machine is equipped with the capability for indirect addressing (e.g., the contents of one register can be used as an address to specify another register); thus the RASP's "program" can address any register in the register-sequence. The upshot of this distinction is that there are computational optimizations that can be performed based on the memory indices, which are not possible in a general Turing machine; thus when Turing machines are used as the basis for bounding running times, a "false lower bound" can be proven on certain algorithms' running times (due to the false simplifying assumption of a Turing machine). An example of this is binary search, an algorithm that can be shown to perform more quickly when using the RASP model of computation rather than the Turing machine model.
In the early days of computing, computer use was typically limited to batch processing, i.e., non-interactive tasks, each producing output data from given input data. Computability theory, which studies computability of functions from inputs to outputs, and for which Turing machines were invented, reflects this practice.
Since the 1970s, interactive use of computers became much more common. In principle, it is possible to model this by having an external agent read from the tape and write to it at the same time as a Turing machine, but this rarely matches how interaction actually happens; therefore, when describing interactivity, alternatives such as I/O automata are usually preferred.
The arithmetic model of computation differs from the Turing model in two aspects:[20]: 32
Some algorithms run in polynomial time in one model but not in the other one. For example:
However, if an algorithm runs in polynomial time in the arithmetic model, and in addition, the binary length of all involved numbers is polynomial in the length of the input, then it is always polynomial-time in the Turing model. Such an algorithm is said to run in strongly polynomial time.
Robin Gandy (1919–1995)—a student of Alan Turing (1912–1954), and his lifelong friend—traces the lineage of the notion of "calculating machine" back to Charles Babbage (circa 1834) and actually proposes "Babbage's Thesis":
That the whole of development and operations of analysis are now capable of being executed by machinery.
Gandy's analysis of Babbage's analytical engine describes the following five operations (cf. p. 52–53):
Gandy states that "the functions which can be calculated by (1), (2), and (4) are precisely those which are Turing computable." (p. 53). He cites other proposals for "universal calculating machines" including those of Percy Ludgate (1909), Leonardo Torres Quevedo (1914),[21][22] Maurice d'Ocagne (1922), Louis Couffignal (1933), Vannevar Bush (1936), and Howard Aiken (1937). However:
… the emphasis is on programming a fixed iterable sequence of arithmetical operations. The fundamental importance of conditional iteration and conditional transfer for a general theory of calculating machines is not recognized…
With regard to Hilbert's problems posed by the famous mathematician David Hilbert in 1900, an aspect of problem #10 had been floating about for almost 30 years before it was framed precisely. Hilbert's original expression for No. 10 is as follows:
10. Determination of the solvability of a Diophantine equation. Given a Diophantine equation with any number of unknown quantities and with rational integral coefficients: To devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers.
The Entscheidungsproblem [decision problem for first-order logic] is solved when we know a procedure that allows for any given logical expression to decide by finitely many operations its validity or satisfiability ... The Entscheidungsproblem must be considered the main problem of mathematical logic.
By 1922, this notion of "Entscheidungsproblem" had developed a bit, and H. Behmann stated that
... most general form of the Entscheidungsproblem [is] as follows:
A quite definite generally applicable prescription is required which will allow one to decide in a finite number of steps the truth or falsity of a given purely logical assertion ...
Behmann remarks that ... the general problem is equivalent to the problem of deciding which mathematical propositions are true.
If one were able to solve the Entscheidungsproblem then one would have a "procedure for solving many (or even all) mathematical problems".
By the 1928 international congress of mathematicians, Hilbert "made his questions quite precise. First, was mathematics complete ... Second, was mathematics consistent ... And thirdly, was mathematics decidable?" (Hodges p. 91, Hawking p. 1121). The first two questions were answered in 1930 by Kurt Gödel at the very same meeting where Hilbert delivered his retirement speech (much to the chagrin of Hilbert); the third—the Entscheidungsproblem—had to wait until the mid-1930s.
The problem was that an answer first required a precise definition of "definite general applicable prescription", which Princeton professor Alonzo Church would come to call "effective calculability", and in 1928 no such definition existed. But over the next 6–7 years Emil Post developed his definition of a worker moving from room to room writing and erasing marks per a list of instructions (Post 1936), as did Church and his two students Stephen Kleene and J. B. Rosser by use of Church's lambda-calculus and Gödel's recursion theory (1934). Church's paper (published 15 April 1936) showed that the Entscheidungsproblem was indeed "undecidable"[23] and beat Turing to the punch by almost a year (Turing's paper submitted 28 May 1936, published January 1937). In the meantime, Emil Post submitted a brief paper in the fall of 1936, so Turing at least had priority over Post. While Church refereed Turing's paper, Turing had time to study Church's paper and add an Appendix where he sketched a proof that Church's lambda-calculus and his machines would compute the same functions.
But what Church had done was something rather different, and in a certain sense weaker. ... the Turing construction was more direct, and provided an argument from first principles, closing the gap in Church's demonstration.
And Post had only proposed a definition of calculability and criticised Church's "definition", but had proved nothing.
In the spring of 1935, Turing as a young Master's student at King's College, Cambridge, took on the challenge; he had been stimulated by the lectures of the logician M. H. A. Newman "and learned from them of Gödel's work and the Entscheidungsproblem ... Newman used the word 'mechanical'". In his obituary of Turing 1955 Newman writes:
To the question 'what is a "mechanical" process?' Turing returned the characteristic answer 'Something that can be done by a machine' and he embarked on the highly congenial task of analysing the general notion of a computing machine.
Gandy states that:
I suppose, but do not know, that Turing, right from the start of his work, had as his goal a proof of the undecidability of the Entscheidungsproblem. He told me that the 'main idea' of the paper came to him when he was lying in Grantchester meadows in the summer of 1935. The 'main idea' might have either been his analysis of computation or his realization that there was a universal machine, and so a diagonal argument to prove unsolvability.
While Gandy believed that Newman's statement above is "misleading", this opinion is not shared by all. Turing had a lifelong interest in machines: "Alan had dreamt of inventing typewriters as a boy; [his mother] Mrs. Turing had a typewriter; and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'" (Hodges p. 96). While at Princeton pursuing his PhD, Turing built a Boolean-logic multiplier (see below). His PhD thesis, titled "Systems of Logic Based on Ordinals", contains the following definition of "a computable function":
It was stated above that 'a function is effectively calculable if its values can be found by some purely mechanical process'. We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine. It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability with effective calculability. It is not difficult, though somewhat laborious, to prove that these three definitions [the 3rd is the λ-calculus] are equivalent.
Alan Turing invented the "a-machine" (automatic machine) in 1936.[7] Turing submitted his paper on 31 May 1936 to the London Mathematical Society for its Proceedings (cf. Hodges 1983:112), but it was published in early 1937 and offprints were available in February 1937 (cf. Hodges 1983:129). It was Turing's doctoral advisor, Alonzo Church, who later coined the term "Turing machine" in a review.[10] With this model, Turing was able to answer two questions in the negative:
Thus by providing a mathematical description of a very simple device capable of arbitrary computations, he was able to prove properties of computation in general—and in particular, the uncomputability of the Entscheidungsproblem ('decision problem').[13]
When Turing returned to the UK he ultimately became jointly responsible for breaking the German secret codes created by encryption machines called "The Enigma"; he also became involved in the design of the ACE (Automatic Computing Engine). "[Turing's] ACE proposal was effectively self-contained, and its roots lay not in the EDVAC [the USA's initiative], but in his own universal machine" (Hodges p. 318). Arguments still continue concerning the origin and nature of what has been named by Kleene (1952) Turing's Thesis. But what Turing did prove with his computational-machine model appears in his paper "On Computable Numbers, with an Application to the Entscheidungsproblem" (1937):
[that] the Hilbert Entscheidungsproblem can have no solution ... I propose, therefore to show that there can be no general process for determining whether a given formula U of the functional calculus K is provable, i.e. that there can be no machine which, supplied with any one U of these formulae, will eventually say whether U is provable.
Turing's example (his second proof): If one is to ask for a general procedure to tell us: "Does this machine ever print 0", the question is "undecidable".
In 1937, while at Princeton working on his PhD thesis, Turing built a digital (Boolean-logic) multiplier from scratch, making his own electromechanical relays (Hodges p. 138). "Alan's task was to embody the logical design of a Turing machine in a network of relay-operated switches ..." (Hodges p. 138). While Turing might have been just initially curious and experimenting, quite-earnest work in the same direction was going on in Germany (Konrad Zuse (1938)) and in the United States (Howard Aiken and George Stibitz (1937)); the fruits of their labors were used by both the Axis and Allied militaries in World War II (cf. Hodges p. 298–299). In the early to mid-1950s Hao Wang and Marvin Minsky reduced the Turing machine to a simpler form (a precursor to the Post–Turing machine of Martin Davis); simultaneously European researchers were reducing the new-fangled electronic computer to a computer-like theoretical object equivalent to what was now being called a "Turing machine". In the late 1950s and early 1960s, the coincidentally parallel developments of Melzak and Lambek (1961), Minsky (1961), and Shepherdson and Sturgis (1961) carried the European work further and reduced the Turing machine to a more friendly, computer-like abstract model called the counter machine; Elgot and Robinson (1964), Hartmanis (1971), and Cook and Reckhow (1973) carried this work even further with the register machine and random-access machine models—but basically all are just multi-tape Turing machines with an arithmetic-like instruction set.
Today, the counter, register and random-access machines and their sire the Turing machine continue to be the models of choice for theorists investigating questions in the theory of computation. In particular, computational complexity theory makes use of the Turing machine:
Depending on the objects one likes to manipulate in the computations (numbers like nonnegative integers or alphanumeric strings), two models have obtained a dominant position in machine-based complexity theory:
the off-line multitape Turing machine..., which represents the standard model for string-oriented computation, and
the random access machine (RAM) as introduced by Cook and Reckhow ..., which models the idealised Von Neumann-style computer.
Only in the related area of analysis of algorithms is this role taken over by the RAM model.
|
https://en.wikipedia.org/wiki/Turing_machine
|
The following articles contain lists of problems:
|
https://en.wikipedia.org/wiki/Lists_of_problems
|
List of unsolved problems may refer to several notable conjectures or open problems in various academic fields:
|
https://en.wikipedia.org/wiki/List_of_unsolved_problems
|
In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A sufficiently efficient reduction from one problem to another may be used to show that the second problem is at least as difficult as the first.
Intuitively, problem A is reducible to problem B, if an algorithm for solving problem B efficiently (if it exists) could also be used as a subroutine to solve problem A efficiently. When this is true, solving A cannot be harder than solving B. "Harder" means having a higher estimate of the required computational resources in a given context (e.g., higher time complexity, greater memory requirement, expensive need for extra hardware processor cores for a parallel solution compared to a single-threaded solution, etc.). The existence of a reduction from A to B can be written in the shorthand notation A ≤m B, usually with a subscript on the ≤ to indicate the type of reduction being used (m: many-one reduction, p: polynomial reduction).
The mathematical structure generated on a set of problems by the reductions of a particular type generally forms a preorder, whose equivalence classes may be used to define degrees of unsolvability and complexity classes.
There are two main situations where we need to use reductions:
A very simple example of a reduction is from multiplication to squaring. Suppose all we know how to do is to add, subtract, take squares, and divide by two. We can use this knowledge, combined with the following formula, to obtain the product of any two numbers: a × b = ((a + b)² − a² − b²) / 2.
We also have a reduction in the other direction; obviously, if we can multiply two numbers, we can square a number. This seems to imply that these two problems are equally hard. This kind of reduction corresponds toTuring reduction.
However, the reduction becomes much harder if we add the restriction that we can only use the squaring function one time, and only at the end. In this case, even if we're allowed to use all the basic arithmetic operations, including multiplication, no reduction exists in general, because in order to get the desired result as a square we have to compute its square root first, and this square root could be an irrational number like 2{\displaystyle {\sqrt {2}}} that cannot be constructed by arithmetic operations on rational numbers. Going in the other direction, however, we can certainly square a number with just one multiplication, only at the end. Using this limited form of reduction, we have shown the unsurprising result that multiplication is harder in general than squaring. This corresponds to many-one reduction.
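The multiplication-to-squaring reduction is easy to sketch in code (a minimal illustration; the function name is ours):

```python
def multiply_via_squares(a, b):
    """Compute a*b using only the allowed primitives:
    addition, subtraction, squaring, and halving,
    via the identity a*b = ((a+b)^2 - a^2 - b^2) / 2."""
    return ((a + b) ** 2 - a ** 2 - b ** 2) // 2

# multiply_via_squares(6, 7) -> 42
```

Note that squaring is invoked three times as a subroutine, which is what makes this the Turing-reduction flavour rather than the restricted "square once, only at the end" form discussed above.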
A reduction is a preordering, that is a reflexive and transitive relation, on P(N)×P(N), where P(N) is the power set of the natural numbers.
As described in the example above, there are two main types of reductions used in computational complexity, the many-one reduction and the Turing reduction. Many-one reductions map instances of one problem to instances of another; Turing reductions compute the solution to one problem, assuming the other problem is easy to solve. The many-one reduction is a stronger type of Turing reduction, and is more effective at separating problems into distinct complexity classes. However, the increased restrictions on many-one reductions make them more difficult to find.
A problem is complete for a complexity class if every problem in the class reduces to that problem, and it is also in the class itself. In this sense the problem represents the class, since any solution to it can, in combination with the reductions, be used to solve every problem in the class.
However, in order to be useful, reductions must be easy. For example, it's quite possible to reduce a difficult-to-solve NP-complete problem like the boolean satisfiability problem to a trivial problem, like determining if a number equals zero, by having the reduction machine solve the problem in exponential time and output zero only if there is a solution. However, this does not achieve much, because even though we can solve the new problem, performing the reduction is just as hard as solving the old problem. Likewise, a reduction computing a noncomputable function can reduce an undecidable problem to a decidable one. As Michael Sipser points out in Introduction to the Theory of Computation: "The reduction must be easy, relative to the complexity of typical problems in the class [...] If the reduction itself were difficult to compute, an easy solution to the complete problem wouldn't necessarily yield an easy solution to the problems reducing to it."
Therefore, the appropriate notion of reduction depends on the complexity class being studied. When studying the complexity class NP and harder classes such as the polynomial hierarchy, polynomial-time reductions are used. When studying classes within P such as NC and NL, log-space reductions are used. Reductions are also used in computability theory to show whether problems are or are not solvable by machines at all; in this case, reductions are restricted only to computable functions.
In case of optimization (maximization or minimization) problems, we often think in terms of approximation-preserving reduction. Suppose we have two optimization problems such that instances of one problem can be mapped onto instances of the other, in a way that nearly optimal solutions to instances of the latter problem can be transformed back to yield nearly optimal solutions to the former. This way, if we have an optimization algorithm (or approximation algorithm) that finds near-optimal (or optimal) solutions to instances of problem B, and an efficient approximation-preserving reduction from problem A to problem B, by composition we obtain an optimization algorithm that yields near-optimal solutions to instances of problem A. Approximation-preserving reductions are often used to prove hardness of approximation results: if some optimization problem A is hard to approximate (under some complexity assumption) within a factor better than α for some α, and there is a β-approximation-preserving reduction from problem A to problem B, we can conclude that problem B is hard to approximate within factor α/β.
The following example shows how to use reduction from the halting problem to prove that a language is undecidable. Suppose H(M, w) is the problem of determining whether a given Turing machine M halts (by accepting or rejecting) on input string w. This language is known to be undecidable. Suppose E(M) is the problem of determining whether the language a given Turing machine M accepts is empty (in other words, whether M accepts any strings at all). We show that E is undecidable by a reduction from H.
To obtain a contradiction, suppose R is a decider for E. We will use this to produce a decider S for H (which we know does not exist). Given input M and w (a Turing machine and some input string), define S(M, w) with the following behavior: S creates a Turing machine N that accepts only if the input string to N is w and M halts on input w, and does not halt otherwise. The decider S can now evaluate R(N) to check whether the language accepted by N is empty. If R accepts N, then the language accepted by N is empty, so in particular M does not halt on input w, so S can reject. If R rejects N, then the language accepted by N is nonempty, so M does halt on input w, so S can accept. Thus, if we had a decider R for E, we would be able to produce a decider S for the halting problem H(M, w) for any machine M and input w. Since we know that such an S cannot exist, it follows that the language E is also undecidable.
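The construction can be mirrored in executable form. The sketch below is our own illustration, not from any textbook: "machines" are modelled as Python generator functions that yield once per computation step, and a fuel cap stands in for actually running M on w (no real halting decider can exist, which is the point of the proof).

```python
def run_with_fuel(M, w, fuel=1000):
    """Toy 'simulator': report whether machine M halts on input w
    within `fuel` steps.  The cap only makes the sketch executable;
    it is NOT a halting decider."""
    it = M(w)
    for _ in range(fuel):
        try:
            next(it)
        except StopIteration:
            return True
    return False

def make_N(M, w):
    """The machine N from the reduction: N accepts input x only if
    x == w and M halts on input w; otherwise it rejects."""
    def N(x):
        return x == w and run_with_fuel(M, w)
    return N

def toy_R(N, w):
    """Stand-in 'decider' for emptiness E: in this toy setting N can
    accept nothing but w, so L(N) is empty iff N rejects w."""
    return not N(w)

def S(M, w, R=toy_R):
    """The would-be halting decider built from an emptiness decider R:
    M halts on w  <=>  L(N) is nonempty  <=>  R rejects N."""
    return not R(make_N(M, w), w)
```

With a machine that halts (a generator that stops after one step) S returns True, and with one that loops forever (cut off by the fuel cap) it returns False, tracing exactly the accept/reject argument above.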
|
https://en.wikipedia.org/wiki/Reduction_(complexity)
|
In philosophy, unknowability is the possibility of inherently inaccessible knowledge. It addresses the epistemology of that which cannot be known. Some related concepts include the limits of knowledge, ignorabimus, unknown unknowns, the halting problem, and chaos theory.
Nicholas Rescher provides the most recent focused scholarship for this area in Unknowability: An Inquiry into the Limits of Knowledge,[1] where he offered three high-level categories: logical unknowability, conceptual unknowability, and in-principle unknowability.
Speculation about what is knowable and unknowable has been part of the philosophical tradition since the inception of philosophy. In particular, Baruch Spinoza's Theory of Attributes[2] argues that a human's finite mind cannot understand infinite substance; accordingly, infinite substance, as it is in itself, is in-principle unknowable to the finite mind.
Immanuel Kant brought focus to unknowability theory in his use of the noumenon concept. He postulated that, while we can know the noumenal exists, it is not itself sensible and must therefore remain unknowable.
Modern inquiry encompasses undecidable problems and questions such as the halting problem, which by their very nature cannot possibly be answered. This area of study has a long and somewhat diffuse history as the challenge arises in many areas of scholarly and practical investigations.
Rescher organizes unknowability in three major categories:
In-principle unknowability may also be due to a need for more energy and matter than is available in the universe to answer a question, or due to fundamental reasons associated with the quantum nature of matter. In the physics of special and general relativity, the light cone marks the boundary of physically knowable events.[3][4]
The halting problem – namely, the problem of determining if arbitrary computer programs will ever finish running – is a prominent example of an unknowability associated with the established mathematical field of computability theory. In 1936, Alan Turing proved that the halting problem is undecidable. This means that there is no algorithm that can take as input a program and determine whether it will halt. In 1970, Yuri Matiyasevich proved that the Diophantine problem (closely related to Hilbert's tenth problem) is also undecidable, by reducing the halting problem to it.[5] This means that there is no algorithm that can take as input a Diophantine equation and always determine whether it has a solution in integers.
The undecidability of the halting problem and the Diophantine problem has a number of implications for mathematics and computer science. For example, it means that there is no general algorithm for proving that a given mathematical statement is true or false. It also means that there is no general algorithm for finding solutions to Diophantine equations.
In principle, many problems can be reduced to the halting problem. See the list of undecidable problems.
Gödel's incompleteness theorems demonstrate the in-principle unknowability of methods for proving the consistency and completeness of foundational mathematical systems.
There are various graduations of unknowability associated with frameworks of discussion. For example:
Treatment of knowledge has been wide and diverse. Wikipedia itself is an initiative to capture and record knowledge using contemporary technological tools. Earlier attempts to capture and record knowledge include writing deep tracts on specific topics as well as the use of encyclopedias to organize and summarize entire fields or even the entirety of human knowledge.
An associated topic that comes up frequently is that of Limits of Knowledge.
Examples of scholarly discussions involving limits of knowledge include:
Gregory Chaitin discusses unknowability in many of his works.
Popular discussion of unknowability grew with the use of the phrase There are unknown unknowns by United States Secretary of Defense Donald Rumsfeld at a news briefing on February 12, 2002. In addition to unknown unknowns there are known unknowns and unknown knowns. These category labels appeared in discussion of identification of chemical substances.[10][11][12]
Chaos theory is a theory of dynamics that argues that, for sufficiently complex systems, even if we know initial conditions fairly well, measurement errors and computational limitations render fully correct long-term prediction impossible, hence guaranteeing ultimate unknowability of physical system behaviors.
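A standard toy demonstration of this sensitivity (our illustration, using the logistic map rather than any system discussed above): two initial conditions differing by 10⁻¹⁰ give indistinguishable short-term forecasts but completely unrelated long-term ones.

```python
def logistic_orbit(x0, steps, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-10, 50)  # error far below realistic measurement precision

# Short-term the two 'forecasts' agree; long-term they are unrelated,
# since the tiny initial error roughly doubles at every step.
```

After a few dozen iterations the separation between the two orbits is of order 1, i.e. as large as the state space itself.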
|
https://en.wikipedia.org/wiki/Unknowability
|
In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual dependence between the two variables. More specifically, it quantifies the "amount of information" (in units such as shannons (bits), nats or hartleys) obtained about one random variable by observing the other random variable. The concept of mutual information is intimately linked to that of entropy of a random variable, a fundamental notion in information theory that quantifies the expected "amount of information" held in a random variable.
Not limited to real-valued random variables and linear dependence like the correlation coefficient, MI is more general and determines how different the joint distribution of the pair (X,Y){\displaystyle (X,Y)} is from the product of the marginal distributions of X{\displaystyle X} and Y{\displaystyle Y}. MI is the expected value of the pointwise mutual information (PMI).
The quantity was defined and analyzed by Claude Shannon in his landmark paper "A Mathematical Theory of Communication", although he did not call it "mutual information". This term was coined later by Robert Fano.[2] Mutual information is also known as information gain.
Let (X,Y){\displaystyle (X,Y)} be a pair of random variables with values over the space X×Y{\displaystyle {\mathcal {X}}\times {\mathcal {Y}}}. If their joint distribution is P(X,Y){\displaystyle P_{(X,Y)}} and the marginal distributions are PX{\displaystyle P_{X}} and PY{\displaystyle P_{Y}}, the mutual information is defined as
where DKL{\displaystyle D_{\mathrm {KL} }} is the Kullback–Leibler divergence, and PX⊗PY{\displaystyle P_{X}\otimes P_{Y}} is the outer product distribution which assigns probability PX(x)⋅PY(y){\displaystyle P_{X}(x)\cdot P_{Y}(y)} to each (x,y){\displaystyle (x,y)}.
Expressed in terms of the entropy H(⋅){\displaystyle H(\cdot )} and the conditional entropy H(⋅|⋅){\displaystyle H(\cdot |\cdot )} of the random variables X{\displaystyle X} and Y{\displaystyle Y}, one also has (see relation to conditional and joint entropy):
Notice, as per a property of the Kullback–Leibler divergence, that I(X;Y){\displaystyle I(X;Y)} is equal to zero precisely when the joint distribution coincides with the product of the marginals, i.e. when X{\displaystyle X} and Y{\displaystyle Y} are independent (and hence observing Y{\displaystyle Y} tells you nothing about X{\displaystyle X}). I(X;Y){\displaystyle I(X;Y)} is non-negative; it is a measure of the price for encoding (X,Y){\displaystyle (X,Y)} as a pair of independent random variables when in reality they are not.
If the natural logarithm is used, the unit of mutual information is the nat. If the log base 2 is used, the unit of mutual information is the shannon, also known as the bit. If the log base 10 is used, the unit of mutual information is the hartley, also known as the ban or the dit.
The mutual information of two jointly discrete random variables X{\displaystyle X} and Y{\displaystyle Y} is calculated as a double sum:[3]: 20
where P(X,Y){\displaystyle P_{(X,Y)}} is the joint probability mass function of X{\displaystyle X} and Y{\displaystyle Y}, and PX{\displaystyle P_{X}} and PY{\displaystyle P_{Y}} are the marginal probability mass functions of X{\displaystyle X} and Y{\displaystyle Y} respectively.
In the case of jointly continuous random variables, the double sum is replaced by a double integral:[3]: 251
where P(X,Y){\displaystyle P_{(X,Y)}} is now the joint probability density function of X{\displaystyle X} and Y{\displaystyle Y}, and PX{\displaystyle P_{X}} and PY{\displaystyle P_{Y}} are the marginal probability density functions of X{\displaystyle X} and Y{\displaystyle Y} respectively.
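For the discrete case, the double sum can be evaluated directly from a joint probability table. The helper below is a minimal sketch of ours (base-2 logarithms, so the result is in shannons/bits):

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) for a discrete joint distribution given as {(x, y): p}.
    Computes the marginals p_X and p_Y, then the double sum
    sum_{x,y} p(x,y) * log2( p(x,y) / (p_X(x) * p_Y(y)) ).
    Terms with p(x,y) = 0 contribute nothing and are skipped."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Perfectly correlated fair bits share one full bit of information:
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))                       # 1.0
# Independent fair bits share none:
print(mutual_information({(a, b): 0.25 for a in (0, 1) for b in (0, 1)}))   # 0.0
```

The two printed values illustrate the extreme cases discussed below: fully dependent variables share the entire entropy of either one, and independent variables share nothing.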
Intuitively, mutual information measures the information thatX{\displaystyle X}andY{\displaystyle Y}share: It measures how much knowing one of these variables reduces uncertainty about the other. For example, ifX{\displaystyle X}andY{\displaystyle Y}are independent, then knowingX{\displaystyle X}does not give any information aboutY{\displaystyle Y}and vice versa, so their mutual information is zero. At the other extreme, ifX{\displaystyle X}is a deterministic function ofY{\displaystyle Y}andY{\displaystyle Y}is a deterministic function ofX{\displaystyle X}then all information conveyed byX{\displaystyle X}is shared withY{\displaystyle Y}: knowingX{\displaystyle X}determines the value ofY{\displaystyle Y}and vice versa. As a result, the mutual information is the same as the uncertainty contained inY{\displaystyle Y}(orX{\displaystyle X}) alone, namely theentropyofY{\displaystyle Y}(orX{\displaystyle X}). A very special case of this is whenX{\displaystyle X}andY{\displaystyle Y}are the same random variable.
Mutual information is a measure of the inherent dependence expressed in thejoint distributionofX{\displaystyle X}andY{\displaystyle Y}relative to the marginal distribution ofX{\displaystyle X}andY{\displaystyle Y}under the assumption of independence. Mutual information therefore measures dependence in the following sense:I(X;Y)=0{\displaystyle \operatorname {I} (X;Y)=0}if and only ifX{\displaystyle X}andY{\displaystyle Y}are independent random variables. This is easy to see in one direction: ifX{\displaystyle X}andY{\displaystyle Y}are independent, thenp(X,Y)(x,y)=pX(x)⋅pY(y){\displaystyle p_{(X,Y)}(x,y)=p_{X}(x)\cdot p_{Y}(y)}, and therefore:
Moreover, mutual information is nonnegative (i.e.I(X;Y)≥0{\displaystyle \operatorname {I} (X;Y)\geq 0}see below) andsymmetric(i.e.I(X;Y)=I(Y;X){\displaystyle \operatorname {I} (X;Y)=\operatorname {I} (Y;X)}see below).
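Both extremes can be checked numerically. Below is a minimal Python sketch (the joint distributions are illustrative examples, not taken from the text) that computes I(X;Y) for a discrete pair directly from the definition:

```python
import math

def mutual_information(joint):
    """Mutual information in bits of a discrete pair, given the joint
    pmf as a dict mapping (x, y) -> probability."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Independent pair: the joint factorizes, so I(X;Y) = 0
indep = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}

# Deterministic pair: X = Y, uniform on {0, 1}, so I(X;Y) = H(X) = 1 bit
dep = {(0, 0): 0.5, (1, 1): 0.5}

print(mutual_information(indep))  # 0.0
print(mutual_information(dep))    # 1.0
```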
UsingJensen's inequalityon the definition of mutual information we can show thatI(X;Y){\displaystyle \operatorname {I} (X;Y)}is non-negative, i.e.[3]: 28
The proof is given considering the relationship with entropy, as shown below.
IfC{\displaystyle C}is independent of(A,B){\displaystyle (A,B)}, then
Mutual information can be equivalently expressed as:
whereH(X){\displaystyle \mathrm {H} (X)}andH(Y){\displaystyle \mathrm {H} (Y)}are the marginalentropies,H(X∣Y){\displaystyle \mathrm {H} (X\mid Y)}andH(Y∣X){\displaystyle \mathrm {H} (Y\mid X)}are theconditional entropies, andH(X,Y){\displaystyle \mathrm {H} (X,Y)}is thejoint entropyofX{\displaystyle X}andY{\displaystyle Y}.
Notice the analogy to the union, difference, and intersection of two sets: in this respect, all the formulas given above are apparent from the Venn diagram reported at the beginning of the article.
In terms of a communication channel in which the outputY{\displaystyle Y}is a noisy version of the inputX{\displaystyle X}, these relations are summarised in the figure:
BecauseI(X;Y){\displaystyle \operatorname {I} (X;Y)}is non-negative, consequently,H(X)≥H(X∣Y){\displaystyle \mathrm {H} (X)\geq \mathrm {H} (X\mid Y)}. Here we give the detailed deduction ofI(X;Y)=H(Y)−H(Y∣X){\displaystyle \operatorname {I} (X;Y)=\mathrm {H} (Y)-\mathrm {H} (Y\mid X)}for the case of jointly discrete random variables:
The proofs of the other identities above are similar. The proof of the general case (not just discrete) is similar, with integrals replacing sums.
Intuitively, if entropyH(Y){\displaystyle \mathrm {H} (Y)}is regarded as a measure of uncertainty about a random variable, thenH(Y∣X){\displaystyle \mathrm {H} (Y\mid X)}is a measure of whatX{\displaystyle X}doesnotsay aboutY{\displaystyle Y}. This is "the amount of uncertainty remaining aboutY{\displaystyle Y}afterX{\displaystyle X}is known", and thus the right side of the second of these equalities can be read as "the amount of uncertainty inY{\displaystyle Y}, minus the amount of uncertainty inY{\displaystyle Y}which remains afterX{\displaystyle X}is known", which is equivalent to "the amount of uncertainty inY{\displaystyle Y}which is removed by knowingX{\displaystyle X}". This corroborates the intuitive meaning of mutual information as the amount of information (that is, reduction in uncertainty) that knowing either variable provides about the other.
Note that in the discrete caseH(Y∣Y)=0{\displaystyle \mathrm {H} (Y\mid Y)=0}and thereforeH(Y)=I(Y;Y){\displaystyle \mathrm {H} (Y)=\operatorname {I} (Y;Y)}. ThusI(Y;Y)≥I(X;Y){\displaystyle \operatorname {I} (Y;Y)\geq \operatorname {I} (X;Y)}, and one can formulate the basic principle that a variable contains at least as much information about itself as any other variable can provide.
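The identities relating mutual information to the marginal, conditional, and joint entropies can be verified numerically; a short sketch with an arbitrary example joint distribution:

```python
import math

def H(pmf):
    """Shannon entropy in bits of a pmf given as a dict of probabilities."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

I = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items())

# I(X;Y) = H(X) + H(Y) - H(X,Y)
assert abs(I - (H(px) + H(py) - H(joint))) < 1e-12
# I(X;Y) = H(Y) - H(Y|X), with H(Y|X) = H(X,Y) - H(X)
assert abs(I - (H(py) - (H(joint) - H(px)))) < 1e-12
# H(Y) = I(Y;Y) in the discrete case: a variable carries full information
# about itself, since the joint of (Y, Y) puts mass py[y] on each pair (y, y)
self_joint = {(y, y): p for y, p in py.items()}
I_yy = sum(p * math.log2(p / (py[a] * py[b]))
           for (a, b), p in self_joint.items())
assert abs(H(py) - I_yy) < 1e-12
```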
For jointly discrete or jointly continuous pairs(X,Y){\displaystyle (X,Y)}, mutual information is the Kullback–Leibler divergence of the joint distributionp(X,Y){\displaystyle p_{(X,Y)}}from the product of the marginal distributions,pX⋅pY{\displaystyle p_{X}\cdot p_{Y}}, that is,
Furthermore, letp(X,Y)(x,y)=pX∣Y=y(x)⋅pY(y){\displaystyle p_{(X,Y)}(x,y)=p_{X\mid Y=y}(x)\cdot p_{Y}(y)}be the conditional mass or density function. Then, we have the identity
The proof for jointly discrete random variables is as follows:
Similarly this identity can be established for jointly continuous random variables.
Note that here the Kullback–Leibler divergence involves integration over the values of the random variableX{\displaystyle X}only, and the expressionDKL(pX∣Y∥pX){\displaystyle D_{\text{KL}}(p_{X\mid Y}\parallel p_{X})}still denotes a random variable becauseY{\displaystyle Y}is random. Thus mutual information can also be understood as theexpectationof the Kullback–Leibler divergence of theunivariate distributionpX{\displaystyle p_{X}}ofX{\displaystyle X}from theconditional distributionpX∣Y{\displaystyle p_{X\mid Y}}ofX{\displaystyle X}givenY{\displaystyle Y}: the more different the distributionspX∣Y{\displaystyle p_{X\mid Y}}andpX{\displaystyle p_{X}}are on average, the greater theinformation gain.
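This expectation-of-KL reading can be illustrated in a few lines (the joint distribution is a hypothetical example):

```python
import math

joint = {(0, 0): 0.3, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.3}
px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

# Direct definition: KL divergence of the joint from the product of marginals
I = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items())

# The same value as the expectation over Y of D_KL(p_{X|Y=y} || p_X)
I_as_expected_kl = sum(
    py[y] * sum((joint[(x, y)] / py[y])
                * math.log2((joint[(x, y)] / py[y]) / px[x])
                for x in (0, 1))
    for y in (0, 1))

assert abs(I - I_as_expected_kl) < 1e-12
```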
If samples from a joint distribution are available, a Bayesian approach can be used to estimate the mutual information of that distribution. The first work to do this, which also showed how to do Bayesian estimation of many other information-theoretic properties besides mutual information, was.[5] Subsequent researchers have rederived[6] and extended[7] this analysis. See[8] for a recent paper based on a prior specifically tailored to estimation of mutual information per se. In addition, an estimation method accounting for continuous and multivariate outputs,Y{\displaystyle Y}, was recently proposed.[9]
The Kullback–Leibler divergence formulation of the mutual information is predicated on the assumption that one is interested in comparingp(x,y){\displaystyle p(x,y)}to the fully factorizedouter productp(x)⋅p(y){\displaystyle p(x)\cdot p(y)}. In many problems, such asnon-negative matrix factorization, one is interested in less extreme factorizations; specifically, one wishes to comparep(x,y){\displaystyle p(x,y)}to a low-rank matrix approximation in some unknown variablew{\displaystyle w}; that is, to what degree one might have
Alternately, one might be interested in knowing how much more informationp(x,y){\displaystyle p(x,y)}carries over its factorization. In such a case, the excess information that the full distributionp(x,y){\displaystyle p(x,y)}carries over the matrix factorization is given by the Kullback-Leibler divergence
The conventional definition of the mutual information is recovered in the extreme case that the processW{\displaystyle W}has only one value forw{\displaystyle w}.
Several variations on mutual information have been proposed to suit various needs. Among these are normalized variants and generalizations to more than two variables.
Many applications require ametric, that is, a distance measure between pairs of points. The quantity
satisfies the properties of a metric (triangle inequality,non-negativity,indiscernibilityand symmetry), where equalityX=Y{\displaystyle X=Y}is understood to mean thatX{\displaystyle X}can be completely determined fromY{\displaystyle Y}.[10]
This distance metric is also known as thevariation of information.
IfX,Y{\displaystyle X,Y}are discrete random variables then all the entropy terms are non-negative, so0≤d(X,Y)≤H(X,Y){\displaystyle 0\leq d(X,Y)\leq \mathrm {H} (X,Y)}and one can define a normalized distance
Plugging in the definitions shows that
This is known as the Rajski Distance.[11]In a set-theoretic interpretation of information (see the figure forConditional entropy), this is effectively theJaccard distancebetweenX{\displaystyle X}andY{\displaystyle Y}.
Finally,
is also a metric.
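A short sketch computing the variation of information d(X,Y) = H(X,Y) − I(X;Y) and the normalized (Rajski) distance for a hypothetical joint distribution:

```python
import math

joint = {('a', 0): 0.3, ('a', 1): 0.2, ('b', 0): 0.1, ('b', 1): 0.4}
px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

H = lambda pmf: -sum(p * math.log2(p) for p in pmf.values() if p > 0)
Hxy = H(joint)
I = H(px) + H(py) - Hxy

d = Hxy - I        # variation of information, equals H(X|Y) + H(Y|X)
D = d / Hxy        # normalized distance

# Rajski distance: D = 1 - I / H(X,Y), and it lies in [0, 1]
assert abs(D - (1 - I / Hxy)) < 1e-12
assert 0.0 <= D <= 1.0
```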
Sometimes it is useful to express the mutual information of two random variables conditioned on a third.
For jointlydiscrete random variablesthis takes the form
which can be simplified as
For jointlycontinuous random variablesthis takes the form
which can be simplified as
Conditioning on a third random variable may either increase or decrease the mutual information, but it is always true that
for discrete, jointly distributed random variablesX,Y,Z{\displaystyle X,Y,Z}. This result has been used as a basic building block for proving otherinequalities in information theory.
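For jointly discrete variables, the conditional mutual information can be computed directly from the joint pmf. A sketch with a hypothetical three-variable distribution in which X and Y are independent given Z = 0 but identical given Z = 1:

```python
import math

# Hypothetical joint pmf of (X, Y, Z): given Z = 0, X and Y are independent
# fair bits; given Z = 1, X and Y are equal fair bits.
joint = {(0, 0, 0): 0.125, (0, 1, 0): 0.125,
         (1, 0, 0): 0.125, (1, 1, 0): 0.125,
         (0, 0, 1): 0.25, (1, 1, 1): 0.25}

def marginal(keep):
    """Marginal pmf over the coordinates listed in `keep`."""
    out = {}
    for xyz, p in joint.items():
        key = tuple(xyz[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

pz, pxz, pyz = marginal([2]), marginal([0, 2]), marginal([1, 2])

# I(X;Y|Z) = sum over (x,y,z) of p(x,y,z) log[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ]
cmi = sum(p * math.log2(pz[(z,)] * p / (pxz[(x, z)] * pyz[(y, z)]))
          for (x, y, z), p in joint.items())
print(cmi)  # 0.5: one bit of shared information on the Z = 1 half
```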
Several generalizations of mutual information to more than two random variables have been proposed, such astotal correlation(or multi-information) anddual total correlation. The expression and study of multivariate higher-degree mutual information was achieved in two seemingly independent works: McGill (1954)[12]who called these functions "interaction information", and Hu Kuo Ting (1962).[13]Interaction information is defined for one variable as follows:
and forn>1,{\displaystyle n>1,}
Some authors reverse the order of the terms on the right-hand side of the preceding equation, which changes the sign when the number of random variables is odd. (And in this case, the single-variable expression becomes the negative of the entropy.) Note that
The multivariate mutual information functions generalize the pairwise independence case, which states thatX1,X2{\displaystyle X_{1},X_{2}}are independent if and only ifI(X1;X2)=0{\displaystyle I(X_{1};X_{2})=0}, to arbitrarily many variables: n variables are mutually independent if and only if the2n−n−1{\displaystyle 2^{n}-n-1}mutual information functions vanish,I(X1;…;Xk)=0{\displaystyle I(X_{1};\ldots ;X_{k})=0}withn≥k≥2{\displaystyle n\geq k\geq 2}(theorem 2[14]). In this sense, the conditionsI(X1;…;Xk)=0{\displaystyle I(X_{1};\ldots ;X_{k})=0}can be used as a refined statistical independence criterion.
For 3 variables, Brenner et al. applied multivariate mutual information toneural codingand called its negativity "synergy",[15]and Watkinson et al. applied it to genetic expression.[16]For arbitrary k variables, Tapia et al. applied multivariate mutual information to gene expression.[17][14]It can be zero, positive, or negative.[13]Positivity corresponds to relations generalizing the pairwise correlations, nullity corresponds to a refined notion of independence, and negativity detects high-dimensional "emergent" relations and clustered data points.[17]
One high-dimensional generalization scheme which maximizes the mutual information between the joint distribution and other target variables is found to be useful infeature selection.[18]
Mutual information is also used in the area of signal processing as ameasure of similaritybetween two signals. For example, the FMI metric[19]is an image fusion performance measure that uses mutual information to measure the amount of information that the fused image contains about the source images. TheMatlabcode for this metric can be found at.[20]A Python package for computing all multivariate mutual informations,conditional mutual information, joint entropies, total correlations, and information distance in a dataset of n variables is available.[21]
Directed information,I(Xn→Yn){\displaystyle \operatorname {I} \left(X^{n}\to Y^{n}\right)}, measures the amount of information that flows from the processXn{\displaystyle X^{n}}toYn{\displaystyle Y^{n}}, whereXn{\displaystyle X^{n}}denotes the vectorX1,X2,...,Xn{\displaystyle X_{1},X_{2},...,X_{n}}andYn{\displaystyle Y^{n}}denotesY1,Y2,...,Yn{\displaystyle Y_{1},Y_{2},...,Y_{n}}. The termdirected informationwas coined byJames Masseyand is defined as
Note that ifn=1{\displaystyle n=1}, the directed information becomes the mutual information. Directed information has many applications in problems wherecausalityplays an important role, such ascapacity of channelwith feedback.[22][23]
Normalized variants of the mutual information are provided by thecoefficients of constraint,[24]uncertainty coefficient[25]or proficiency:[26]
The two coefficients have a value ranging in [0, 1], but are not necessarily equal. This measure is not symmetric. If one desires a symmetric measure they can consider the followingredundancymeasure:
which attains a minimum of zero when the variables are independent and a maximum value of
when one variable becomes completely redundant with the knowledge of the other. See alsoRedundancy (information theory).
Another symmetrical measure is thesymmetric uncertainty(Witten & Frank 2005), given by
which represents theharmonic meanof the two uncertainty coefficientsCXY,CYX{\displaystyle C_{XY},C_{YX}}.[25]
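The harmonic-mean relation can be checked numerically (the example joint distribution is hypothetical):

```python
import math

joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.10, (1, 1): 0.40}
px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}
H = lambda pmf: -sum(p * math.log2(p) for p in pmf.values() if p > 0)
I = H(px) + H(py) - H(joint)

c_xy, c_yx = I / H(py), I / H(px)      # the two uncertainty coefficients
symmetric_u = 2 * I / (H(px) + H(py))  # symmetric uncertainty

# Symmetric uncertainty is the harmonic mean of the two coefficients
assert abs(symmetric_u - 2 / (1 / c_xy + 1 / c_yx)) < 1e-12
assert 0.0 <= symmetric_u <= 1.0
```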
If we consider mutual information as a special case of thetotal correlationordual total correlation, the normalized versions are, respectively,
This normalized version, also known as theInformation Quality Ratio (IQR), quantifies the amount of information of a variable based on another variable against total uncertainty:[27]
There is a normalization[28]which derives from first thinking of mutual information as an analogue tocovariance(thusShannon entropyis analogous tovariance). The normalized mutual information is then calculated akin to thePearson correlation coefficient,
In the traditional formulation of the mutual information,
eacheventorobjectspecified by(x,y){\displaystyle (x,y)}is weighted by the corresponding probabilityp(x,y){\displaystyle p(x,y)}. This assumes that all objects or events are equivalentapart fromtheir probability of occurrence. However, in some applications it may be the case that certain objects or events are moresignificantthan others, or that certain patterns of association are more semantically important than others.
For example, the deterministic mapping{(1,1),(2,2),(3,3)}{\displaystyle \{(1,1),(2,2),(3,3)\}}may be viewed as stronger than the deterministic mapping{(1,3),(2,1),(3,2)}{\displaystyle \{(1,3),(2,1),(3,2)\}}, although these relationships would yield the same mutual information. This is because the mutual information is not sensitive at all to any inherent ordering in the variable values (Cronbach 1954,Coombs, Dawes & Tversky 1970,Lockhead 1970), and is therefore not sensitive at all to theformof the relational mapping between the associated variables. If it is desired that the former relation, showing agreement on all variable values, be judged stronger than the latter relation, then it is possible to use the followingweighted mutual information(Guiasu 1977),
which places a weightw(x,y){\displaystyle w(x,y)}on the probability of each variable value co-occurrence,p(x,y){\displaystyle p(x,y)}. This allows that certain probabilities may carry more or less significance than others, thereby allowing the quantification of relevantholisticorPrägnanzfactors. In the above example, using larger relative weights forw(1,1){\displaystyle w(1,1)},w(2,2){\displaystyle w(2,2)}, andw(3,3){\displaystyle w(3,3)}would have the effect of assessing greaterinformativenessfor the relation{(1,1),(2,2),(3,3)}{\displaystyle \{(1,1),(2,2),(3,3)\}}than for the relation{(1,3),(2,1),(3,2)}{\displaystyle \{(1,3),(2,1),(3,2)\}}, which may be desirable in some cases of pattern recognition, and the like. This weighted mutual information is a form of weighted KL-Divergence, which is known to take negative values for some inputs,[29]and there are examples where the weighted mutual information also takes negative values.[30]
A probability distribution can be viewed as apartition of a set. One may then ask: if a set were partitioned randomly, what would the distribution of probabilities be? What would the expectation value of the mutual information be? Theadjusted mutual informationor AMI subtracts the expectation value of the MI, so that the AMI is zero when two different distributions are random, and one when two distributions are identical. The AMI is defined in analogy to theadjusted Rand indexof two different partitions of a set.
Using the ideas ofKolmogorov complexity, one can consider the mutual information of two sequences independent of any probability distribution:
To establish that this quantity is symmetric up to a logarithmic factor (IK(X;Y)≈IK(Y;X){\displaystyle \operatorname {I} _{K}(X;Y)\approx \operatorname {I} _{K}(Y;X)}) one requires thechain rule for Kolmogorov complexity(Li & Vitányi 1997). Approximations of this quantity viacompressioncan be used to define adistance measureto perform ahierarchical clusteringof sequences without having anydomain knowledgeof the sequences (Cilibrasi & Vitányi 2005).
Unlike correlation coefficients, such as theproduct moment correlation coefficient, mutual information contains information about all dependence—linear and nonlinear—and not just linear dependence as the correlation coefficient measures. However, in the narrow case that the joint distribution forX{\displaystyle X}andY{\displaystyle Y}is abivariate normal distribution(implying in particular that both marginal distributions are normally distributed), there is an exact relationship betweenI{\displaystyle \operatorname {I} }and the correlation coefficientρ{\displaystyle \rho }(Gel'fand & Yaglom 1957).
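The exact relationship in question (Gel'fand & Yaglom 1957) can be written, using natural logarithms, as

```latex
\operatorname{I}(X;Y) \;=\; -\tfrac{1}{2}\log\!\left(1-\rho^{2}\right),
```

so the mutual information of a bivariate normal pair is zero exactly when ρ = 0 and diverges as |ρ| → 1.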
The equation above can be derived as follows for a bivariate Gaussian:
Therefore,
WhenX{\displaystyle X}andY{\displaystyle Y}are limited to be in a discrete number of states, observation data is summarized in acontingency table, with row variableX{\displaystyle X}(ori{\displaystyle i}) and column variableY{\displaystyle Y}(orj{\displaystyle j}). Mutual information is one of the measures ofassociationorcorrelationbetween the row and column variables.
Other measures of association includePearson's chi-squared teststatistics,G-teststatistics, etc. In fact, with the same log base, mutual information will be equal to theG-testlog-likelihood statistic divided by2N{\displaystyle 2N}, whereN{\displaystyle N}is the sample size.
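The relation between mutual information and the G-test statistic can be verified on a small contingency table (the counts below are illustrative):

```python
import math

# Illustrative 2x2 contingency table of observed counts
table = [[10, 20], [30, 40]]
N = sum(sum(row) for row in table)
rows = [sum(row) for row in table]          # row totals
cols = [sum(col) for col in zip(*table)]    # column totals

# G-test statistic: G = 2 * sum O * ln(O / E), with E = row * col / N
G = 2 * sum(o * math.log(o / (rows[i] * cols[j] / N))
            for i, row in enumerate(table) for j, o in enumerate(row) if o)

# Empirical mutual information in nats (same log base as G above)
I = sum((o / N) * math.log((o / N) / ((rows[i] / N) * (cols[j] / N)))
        for i, row in enumerate(table) for j, o in enumerate(row) if o)

assert abs(I - G / (2 * N)) < 1e-9  # I = G / (2N)
```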
In many applications, one wants to maximize mutual information (thus increasing dependencies), which is often equivalent to minimizingconditional entropy. Examples include:
https://en.wikipedia.org/wiki/Mutual_information
Theconditional quantum entropyis anentropy measureused inquantum information theory. It is a generalization of theconditional entropyofclassical information theory. For a bipartite stateρAB{\displaystyle \rho ^{AB}}, the conditional entropy is writtenS(A|B)ρ{\displaystyle S(A|B)_{\rho }}, orH(A|B)ρ{\displaystyle H(A|B)_{\rho }}, depending on the notation being used for thevon Neumann entropy. The quantum conditional entropy was defined in terms of a conditional density operatorρA|B{\displaystyle \rho _{A|B}}byNicolas CerfandChris Adami,[1][2]who showed that quantum conditional entropies can be negative, something that is forbidden in classical physics. The negativity of quantum conditional entropy is a sufficient criterion for quantumnon-separability.
In what follows, we use the notationS(⋅){\displaystyle S(\cdot )}for thevon Neumann entropy, which will simply be called "entropy".
Given a bipartite quantum stateρAB{\displaystyle \rho ^{AB}}, the entropy of the joint system AB isS(AB)ρ=defS(ρAB){\displaystyle S(AB)_{\rho }\ {\stackrel {\mathrm {def} }{=}}\ S(\rho ^{AB})}, and the entropies of the subsystems areS(A)ρ=defS(ρA)=S(trBρAB){\displaystyle S(A)_{\rho }\ {\stackrel {\mathrm {def} }{=}}\ S(\rho ^{A})=S(\mathrm {tr} _{B}\rho ^{AB})}andS(B)ρ{\displaystyle S(B)_{\rho }}. The von Neumann entropy measures an observer's uncertainty about the value of the state, that is, how much the state is amixed state.
By analogy with the classical conditional entropy, one defines the conditional quantum entropy asS(A|B)ρ=defS(AB)ρ−S(B)ρ{\displaystyle S(A|B)_{\rho }\ {\stackrel {\mathrm {def} }{=}}\ S(AB)_{\rho }-S(B)_{\rho }}.
An equivalent operational definition of the quantum conditional entropy (as a measure of thequantum communicationcost or surplus when performingquantum statemerging) was given byMichał Horodecki,Jonathan Oppenheim, andAndreas Winter.[3]
Unlike the classicalconditional entropy, the conditional quantum entropy can be negative. This is true even though the (quantum) von Neumann entropy of single variable is never negative. The negative conditional entropy is also known as thecoherent information, and gives the additional number of bits above the classical limit that can be transmitted in a quantum dense coding protocol. Positive conditional entropy of a state thus means the state cannot reach even the classical limit, while the negative conditional entropy provides for additional information.
https://en.wikipedia.org/wiki/Conditional_quantum_entropy
Inprobability theoryandinformation theory, thevariation of informationorshared information distanceis a measure of the distance between two clusterings (partitions of elements). It is closely related tomutual information; indeed, it is a simple linear expression involving the mutual information. Unlike the mutual information, however, the variation of information is a truemetric, in that it obeys thetriangle inequality.[1][2][3]
Suppose we have twopartitionsX{\displaystyle X}andY{\displaystyle Y}of asetA{\displaystyle A}, namelyX={X1,X2,…,Xk}{\displaystyle X=\{X_{1},X_{2},\ldots ,X_{k}\}}andY={Y1,Y2,…,Yl}{\displaystyle Y=\{Y_{1},Y_{2},\ldots ,Y_{l}\}}.
Let:
Then the variation of information between the two partitions is:
This is equivalent to theshared information distancebetween the random variablesiandjwith respect to the uniform probability measure onA{\displaystyle A}defined byμ(B):=|B|/n{\displaystyle \mu (B):=|B|/n}forB⊆A{\displaystyle B\subseteq A}.
We can rewrite this definition in terms that explicitly highlight the information content of this metric.
The set of all partitions of a set forms a compactlatticewhere the partial order induces two operations, the meet∧{\displaystyle \wedge }and the join∨{\displaystyle \vee }, where the maximum1¯{\displaystyle {\overline {\mathrm {1} }}}is the partition with only one block, i.e., all elements grouped together, and the minimum is0¯{\displaystyle {\overline {\mathrm {0} }}}, the partition consisting of all elements as singletons. The meet of two partitionsX{\displaystyle X}andY{\displaystyle Y}is the partition formed by all pairwise intersections of one blockXi{\displaystyle X_{i}}ofX{\displaystyle X}with one blockYj{\displaystyle Y_{j}}ofY{\displaystyle Y}. It then follows thatX∧Y⊆X{\displaystyle X\wedge Y\subseteq X}andX∧Y⊆Y{\displaystyle X\wedge Y\subseteq Y}.
Let's define the entropy of a partitionX{\displaystyle X}as
wherepi=|Xi|/n{\displaystyle p_{i}=|X_{i}|/n}. Clearly,H(1¯)=0{\displaystyle H({\overline {\mathrm {1} }})=0}andH(0¯)=logn{\displaystyle H({\overline {\mathrm {0} }})=\log \,n}. The entropy of a partition is a monotone function on the lattice of partitions, in the sense thatX⊆Y⇒H(X)≥H(Y){\displaystyle X\subseteq Y\Rightarrow H(X)\geq H(Y)}.
Then the VI distance betweenX{\displaystyle X}andY{\displaystyle Y}is given by
The differenced(X,Y)≡|H(X)−H(Y)|{\displaystyle d(X,Y)\equiv |H\left(X\right)-H\left(Y\right)|}is a pseudo-metric asd(X,Y)=0{\displaystyle d(X,Y)=0}doesn't necessarily imply thatX=Y{\displaystyle X=Y}. From the definition of1¯{\displaystyle {\overline {\mathrm {1} }}}, it isVI(X,1)=H(X){\displaystyle \mathrm {VI} (X,\mathrm {1} )\,=\,H\left(X\right)}.
If in theHasse diagramwe draw an edge from every partition to the maximum1¯{\displaystyle {\overline {\mathrm {1} }}}and assign it a weight equal to the VI distance between the given partition and1¯{\displaystyle {\overline {\mathrm {1} }}}, we can interpret the VI distance as basically an average of differences of edge weights to the maximum.
ForH(X){\displaystyle H(X)}as defined above, it holds that the joint information of two partitions coincides with the entropy of the meet
and we also have thatd(X,X∧Y)=H(X∧Y|X){\displaystyle d(X,X\wedge Y)\,=\,H(X\wedge Y|X)}coincides with the conditional entropy of the meet (intersection)X∧Y{\displaystyle X\wedge Y}relative toX{\displaystyle X}.
The variation of information satisfies
whereH(X){\displaystyle H(X)}is theentropyofX{\displaystyle X}, andI(X,Y){\displaystyle I(X,Y)}ismutual informationbetweenX{\displaystyle X}andY{\displaystyle Y}with respect to the uniform probability measure onA{\displaystyle A}. This can be rewritten as
whereH(X,Y){\displaystyle H(X,Y)}is thejoint entropyofX{\displaystyle X}andY{\displaystyle Y}, or
whereH(X|Y){\displaystyle H(X|Y)}andH(Y|X){\displaystyle H(Y|X)}are the respectiveconditional entropies.
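These identities suggest a direct computation of VI from block sizes. A minimal sketch for partitions represented as lists of disjoint sets (the example partitions are hypothetical):

```python
import math

def partition_entropy(parts, n):
    """Entropy of a partition of an n-element set, from its block sizes."""
    return -sum(len(b) / n * math.log(len(b) / n) for b in parts)

def variation_of_information(X, Y, n):
    """VI between two partitions (lists of disjoint sets) of an n-element set."""
    meet = [b & c for b in X for c in Y if b & c]
    # The joint entropy H(X,Y) equals the entropy of the meet,
    # so VI = 2 H(meet) - H(X) - H(Y)
    return (2 * partition_entropy(meet, n)
            - partition_entropy(X, n) - partition_entropy(Y, n))

X = [{0, 1, 2}, {3, 4, 5}]
Y = [{0, 1}, {2, 3}, {4, 5}]
print(variation_of_information(X, X, 6))      # 0.0: identical partitions
print(variation_of_information(X, Y, 6) > 0)  # True
```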
The variation of information can also be bounded, either in terms of the number of elements:
Or with respect to a maximum number of clusters,K∗{\displaystyle K^{*}}:
To verify the triangle inequalityVI(X;Z)≤VI(X;Y)+VI(Y;Z){\displaystyle \mathrm {VI} (X;Z)\leq \mathrm {VI} (X;Y)+\mathrm {VI} (Y;Z)}, expand using the identityVI(X;Y)=H(X|Y)+H(Y|X){\displaystyle \mathrm {VI} (X;Y)=H(X|Y)+H(Y|X)}. It suffices to proveH(X|Z)≤H(X|Y)+H(Y|Z){\displaystyle H(X|Z)\leq H(X|Y)+H(Y|Z)}. The right side has a lower boundH(X|Y)+H(Y|Z)≥H(X|Y,Z)+H(Y|Z)=H(X,Y|Z){\displaystyle H(X|Y)+H(Y|Z)\geq H(X|Y,Z)+H(Y|Z)=H(X,Y|Z)}, which is no less than the left side.
https://en.wikipedia.org/wiki/Variation_of_information
Ininformation theory, theentropy power inequality(EPI) is a result that relates to so-called "entropy power" ofrandom variables. It shows that the entropy power of suitablywell-behavedrandom variables is asuperadditivefunction. The entropy power inequality was proved in 1948 byClaude Shannonin his seminal paper "A Mathematical Theory of Communication". Shannon also provided a sufficient condition for equality to hold; Stam (1959) showed that the condition is in fact necessary.
For a random vectorX:Ω→Rn{\displaystyle X:\Omega \to \mathbb {R} ^{n}}withprobability density functionf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }, thedifferential entropyofX{\displaystyle X}, denotedh(X){\displaystyle h(X)}, is defined to be
and the entropy power ofX{\displaystyle X}, denotedN(X){\displaystyle N(X)}, is defined to be
In particular,N(X)=|K|1/n{\displaystyle N(X)=|K|^{1/n}}whenX{\displaystyle X}is normally distributed with covariance matrixK{\displaystyle K}.
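This fact is easy to check in one dimension, where the entropy power of a Gaussian recovers its variance:

```python
import math

def entropy_power_gaussian_1d(variance):
    # Differential entropy of N(0, variance) in nats
    h = 0.5 * math.log(2 * math.pi * math.e * variance)
    # Entropy power: N(X) = exp(2 h(X) / n) / (2 pi e), here with n = 1
    return math.exp(2 * h) / (2 * math.pi * math.e)

# For a one-dimensional Gaussian the entropy power equals the variance,
# consistent with N(X) = |K|^{1/n}
print(round(entropy_power_gaussian_1d(4.0), 9))  # 4.0
```

Since variances of independent Gaussians add, this also exhibits the equality case of the entropy power inequality: N(X+Y) = N(X) + N(Y) for independent Gaussians.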
LetX{\displaystyle X}andY{\displaystyle Y}beindependent random variableswith probability density functions in theLp{\displaystyle L^{p}}spaceLp(Rn){\displaystyle L^{p}(\mathbb {R} ^{n})}for somep>1{\displaystyle p>1}. Then
Moreover, equality holdsif and only ifX{\displaystyle X}andY{\displaystyle Y}aremultivariate normalrandom variables with proportionalcovariance matrices.
The entropy power inequality can be rewritten in an equivalent form that does not explicitly depend on the definition of entropy power (see Costa and Cover reference below).
LetX{\displaystyle X}andY{\displaystyle Y}beindependent random variables, as above. Then, letX′{\displaystyle X'}andY′{\displaystyle Y'}be independent random variables with Gaussian distributions such that
Then,
https://en.wikipedia.org/wiki/Entropy_power_inequality
Alikelihood function(often simply called thelikelihood) measures how well astatistical modelexplainsobserved databy calculating the probability of seeing that data under differentparametervalues of the model. It is constructed from thejoint probability distributionof therandom variablethat (presumably) generated the observations.[1][2][3]When evaluated on the actual data points, it becomes a function solely of the model parameters.
Inmaximum likelihood estimation, theargument that maximizesthe likelihood function serves as apoint estimatefor the unknown parameter, while theFisher information(often approximated by the likelihood'sHessian matrixat the maximum) gives an indication of the estimate'sprecision.
In contrast, inBayesian statistics, the estimate of interest is theconverseof the likelihood, the so-calledposterior probabilityof the parameter given the observed data, which is calculated viaBayes' rule.[4]
The likelihood function, parameterized by a (possibly multivariate) parameterθ{\textstyle \theta }, is usually defined differently fordiscrete and continuousprobability distributions(a more general definition is discussed below). Given a probability density or mass function
x↦f(x∣θ),{\displaystyle x\mapsto f(x\mid \theta ),}
wherex{\textstyle x}is a realization of the random variableX{\textstyle X}, the likelihood function isθ↦f(x∣θ),{\displaystyle \theta \mapsto f(x\mid \theta ),}often writtenL(θ∣x).{\displaystyle {\mathcal {L}}(\theta \mid x).}
In other words, whenf(x∣θ){\textstyle f(x\mid \theta )}is viewed as a function ofx{\textstyle x}withθ{\textstyle \theta }fixed, it is a probability density function, and when viewed as a function ofθ{\textstyle \theta }withx{\textstyle x}fixed, it is a likelihood function. In thefrequentist paradigm, the notationf(x∣θ){\textstyle f(x\mid \theta )}is often avoided and insteadf(x;θ){\textstyle f(x;\theta )}orf(x,θ){\textstyle f(x,\theta )}are used to indicate thatθ{\textstyle \theta }is regarded as a fixed unknown quantity rather than as arandom variablebeing conditioned on.
The likelihood function doesnotspecify the probability thatθ{\textstyle \theta }is the truth, given the observed sampleX=x{\textstyle X=x}. Such an interpretation is a common error, with potentially disastrous consequences (seeprosecutor's fallacy).
LetX{\textstyle X}be a discreterandom variablewithprobability mass functionp{\textstyle p}depending on a parameterθ{\textstyle \theta }. Then the function
L(θ∣x)=pθ(x)=Pθ(X=x),{\displaystyle {\mathcal {L}}(\theta \mid x)=p_{\theta }(x)=P_{\theta }(X=x),}
considered as a function ofθ{\textstyle \theta }, is thelikelihood function, given theoutcomex{\textstyle x}of the random variableX{\textstyle X}. Sometimes the probability of "the valuex{\textstyle x}ofX{\textstyle X}for the parameter valueθ{\textstyle \theta }" is written asP(X=x|θ)orP(X=x;θ). The likelihood is the probability that a particular outcomex{\textstyle x}is observed when the true value of the parameter isθ{\textstyle \theta }, equivalent to the probability mass onx{\textstyle x}; it isnota probability density over the parameterθ{\textstyle \theta }. The likelihood,L(θ∣x){\textstyle {\mathcal {L}}(\theta \mid x)}, should not be confused withP(θ∣x){\textstyle P(\theta \mid x)}, which is the posterior probability ofθ{\textstyle \theta }given the datax{\textstyle x}.
Consider a simple statistical model of a coin flip: a single parameterpH{\textstyle p_{\text{H}}}that expresses the "fairness" of the coin. The parameter is the probability that a coin lands heads up ("H") when tossed.pH{\textstyle p_{\text{H}}}can take on any value within the range 0.0 to 1.0. For a perfectlyfair coin,pH=0.5{\textstyle p_{\text{H}}=0.5}.
Imagine flipping a fair coin twice, and observing two heads in two tosses ("HH"). Assuming that each successive coin flip isi.i.d., then the probability of observing HH is
P(HH∣pH=0.5)=0.52=0.25.{\displaystyle P({\text{HH}}\mid p_{\text{H}}=0.5)=0.5^{2}=0.25.}
Equivalently, the likelihood of observing "HH" assumingpH=0.5{\textstyle p_{\text{H}}=0.5}is
L(pH=0.5∣HH)=0.25.{\displaystyle {\mathcal {L}}(p_{\text{H}}=0.5\mid {\text{HH}})=0.25.}
This is not the same as saying thatP(pH=0.5∣HH)=0.25{\textstyle P(p_{\text{H}}=0.5\mid HH)=0.25}, a conclusion which could only be reached viaBayes' theoremgiven knowledge about the marginal probabilitiesP(pH=0.5){\textstyle P(p_{\text{H}}=0.5)}andP(HH){\textstyle P({\text{HH}})}.
Now suppose that the coin is not a fair coin, but instead thatpH=0.3{\textstyle p_{\text{H}}=0.3}. Then the probability of two heads on two flips is
P(HH∣pH=0.3)=0.32=0.09.{\displaystyle P({\text{HH}}\mid p_{\text{H}}=0.3)=0.3^{2}=0.09.}
Hence
L(pH=0.3∣HH)=0.09.{\displaystyle {\mathcal {L}}(p_{\text{H}}=0.3\mid {\text{HH}})=0.09.}
More generally, for each value ofpH{\textstyle p_{\text{H}}}, we can calculate the corresponding likelihood. The result of such calculations is displayed in Figure 1. The integral ofL{\textstyle {\mathcal {L}}}over [0, 1] is 1/3; likelihoods need not integrate or sum to one over the parameter space.
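The coin example, including the observation that the likelihood does not integrate to one over the parameter, can be reproduced in a few lines (the step count for the numerical integral is an arbitrary choice):

```python
# Likelihood of observing "HH" in two independent tosses, as a function of p_H
L = lambda p: p ** 2

print(L(0.5))             # 0.25
print(round(L(0.3), 2))   # 0.09

# The likelihood is not a density in p_H: its integral over [0, 1] is 1/3
steps = 100_000
area = sum(L((i + 0.5) / steps) for i in range(steps)) / steps  # midpoint rule
print(round(area, 4))     # 0.3333
```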
LetX{\textstyle X}be arandom variablefollowing anabsolutely continuous probability distributionwithdensity functionf{\textstyle f}(a function ofx{\textstyle x}) which depends on a parameterθ{\textstyle \theta }. Then the function
L(θ∣x)=fθ(x),{\displaystyle {\mathcal {L}}(\theta \mid x)=f_{\theta }(x),}
considered as a function ofθ{\textstyle \theta }, is thelikelihood function(ofθ{\textstyle \theta }, given theoutcomeX=x{\textstyle X=x}). Again,L{\textstyle {\mathcal {L}}}is not a probability density or mass function overθ{\textstyle \theta }, despite being a function ofθ{\textstyle \theta }given the observationX=x{\textstyle X=x}.
The use of theprobability densityin specifying the likelihood function above is justified as follows. Given an observationxj{\textstyle x_{j}}, the likelihood for the interval[xj,xj+h]{\textstyle [x_{j},x_{j}+h]}, whereh>0{\textstyle h>0}is a constant, is given byL(θ∣x∈[xj,xj+h]){\textstyle {\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])}. Observe thatargmaxθL(θ∣x∈[xj,xj+h])=argmaxθ1hL(θ∣x∈[xj,xj+h]),{\displaystyle \mathop {\operatorname {arg\,max} } _{\theta }{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])=\mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h]),}sinceh{\textstyle h}is positive and constant. Becauseargmaxθ1hL(θ∣x∈[xj,xj+h])=argmaxθ1hPr(xj≤x≤xj+h∣θ)=argmaxθ1h∫xjxj+hf(x∣θ)dx,{\displaystyle \mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])=\mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}\Pr(x_{j}\leq x\leq x_{j}+h\mid \theta )=\mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx,}
wheref(x∣θ){\textstyle f(x\mid \theta )}is the probability density function, it follows that
argmaxθL(θ∣x∈[xj,xj+h])=argmaxθ1h∫xjxj+hf(x∣θ)dx.{\displaystyle \mathop {\operatorname {arg\,max} } _{\theta }{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])=\mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx.}
The firstfundamental theorem of calculusprovides thatlimh→0+1h∫xjxj+hf(x∣θ)dx=f(xj∣θ).{\displaystyle \lim _{h\to 0^{+}}{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx=f(x_{j}\mid \theta ).}
ThenargmaxθL(θ∣xj)=argmaxθ[limh→0+L(θ∣x∈[xj,xj+h])]=argmaxθ[limh→0+1h∫xjxj+hf(x∣θ)dx]=argmaxθf(xj∣θ).{\displaystyle {\begin{aligned}\mathop {\operatorname {arg\,max} } _{\theta }{\mathcal {L}}(\theta \mid x_{j})&=\mathop {\operatorname {arg\,max} } _{\theta }\left[\lim _{h\to 0^{+}}{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])\right]\\[4pt]&=\mathop {\operatorname {arg\,max} } _{\theta }\left[\lim _{h\to 0^{+}}{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx\right]\\[4pt]&=\mathop {\operatorname {arg\,max} } _{\theta }f(x_{j}\mid \theta ).\end{aligned}}}
Therefore,argmaxθL(θ∣xj)=argmaxθf(xj∣θ),{\displaystyle \mathop {\operatorname {arg\,max} } _{\theta }{\mathcal {L}}(\theta \mid x_{j})=\mathop {\operatorname {arg\,max} } _{\theta }f(x_{j}\mid \theta ),}and so maximizing the probability density atxj{\textstyle x_{j}}amounts to maximizing the likelihood of the specific observationxj{\textstyle x_{j}}.
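This equivalence can be illustrated numerically. In the following sketch (with a hypothetical observation and a unit-variance normal density), the likelihood of a location parameter θ given a single observation x is just the density evaluated at x, and a grid search over θ recovers the observation itself as the maximizer:

```python
import math

def normal_pdf(x, theta, sigma=1.0):
    # Density of N(theta, sigma^2) evaluated at x; as a function of theta
    # for fixed x, this is the likelihood L(theta | x).
    return math.exp(-0.5 * ((x - theta) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

x_obs = 1.7  # a single hypothetical observation
thetas = [i / 1000 for i in range(-3000, 5001)]  # candidate parameter values
theta_hat = max(thetas, key=lambda t: normal_pdf(x_obs, t))
print(theta_hat)  # 1.7: the density at x_obs is maximized when theta = x_obs
```

The maximizer coincides with the observation because the normal density at x is largest when the mean equals x, matching the argmax identity above.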
Inmeasure-theoretic probability theory, thedensity functionis defined as theRadon–Nikodym derivativeof the probability distribution relative to a common dominating measure.[5]The likelihood function is this density interpreted as a function of the parameter, rather than the random variable.[6]Thus, we can construct a likelihood function for any distribution, whether discrete, continuous, a mixture, or otherwise. (Likelihoods are comparable, e.g. for parameter estimation, only if they are Radon–Nikodym derivatives with respect to the same dominating measure.)
The above discussion of the likelihood for discrete random variables uses thecounting measure, under which the probability density at any outcome equals the probability of that outcome.
The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability massespk(θ){\textstyle p_{k}(\theta )}and a densityf(x∣θ){\textstyle f(x\mid \theta )}, where the sum of all thep{\textstyle p}'s added to the integral off{\textstyle f}is always one. Assuming that it is possible to distinguish an observation corresponding to one of the discrete probability masses from one which corresponds to the density component, the likelihood function for an observation from the continuous component can be dealt with in the manner shown above. For an observation from the discrete component, the likelihood function is simplyL(θ∣x)=pk(θ),{\displaystyle {\mathcal {L}}(\theta \mid x)=p_{k}(\theta ),}wherek{\textstyle k}is the index of the discrete probability mass corresponding to observationx{\textstyle x}, because maximizing the probability mass (or probability) atx{\textstyle x}amounts to maximizing the likelihood of the specific observation.
The fact that the likelihood function can be defined in a way that includes contributions that are not commensurate (the density and the probability mass) arises from the way in which the likelihood function is defined up to a constant of proportionality, where this "constant" can change with the observationx{\textstyle x}, but not with the parameterθ{\textstyle \theta }.
In the context of parameter estimation, the likelihood function is usually assumed to obey certain conditions, known as regularity conditions. These conditions areassumedin various proofs involving likelihood functions, and need to be verified in each particular application. For maximum likelihood estimation, the existence of a global maximum of the likelihood function is of the utmost importance. By theextreme value theorem, it suffices that the likelihood function iscontinuouson acompactparameter space for the maximum likelihood estimator to exist.[7]While the continuity assumption is usually met, the compactness assumption about the parameter space is often not, as the bounds of the true parameter values might be unknown. In that case,concavityof the likelihood function plays a key role.
More specifically, if the likelihood function is twice continuously differentiable on thek-dimensional parameter spaceΘ{\textstyle \Theta }assumed to be anopenconnectedsubset ofRk,{\textstyle \mathbb {R} ^{k}\,,}there exists a unique maximumθ^∈Θ{\textstyle {\hat {\theta }}\in \Theta }if thematrix of second partialsH(θ)≡[∂2L∂θi∂θj]i,j=1k{\displaystyle \mathbf {H} (\theta )\equiv \left[\,{\frac {\partial ^{2}L}{\,\partial \theta _{i}\,\partial \theta _{j}\,}}\,\right]_{i,j=1}^{k}\;}isnegative definitefor everyθ∈Θ{\textstyle \,\theta \in \Theta \,}at which the gradient∇L≡[∂L∂θi]i=1k{\textstyle \;\nabla L\equiv \left[\,{\frac {\partial L}{\,\partial \theta _{i}\,}}\,\right]_{i=1}^{k}\;}vanishes,
and if the likelihood function approaches a constant on theboundaryof the parameter space,∂Θ,{\textstyle \;\partial \Theta \;,}i.e.,limθ→∂ΘL(θ)=0,{\displaystyle \lim _{\theta \to \partial \Theta }L(\theta )=0\;,}which may include the points at infinity ifΘ{\textstyle \,\Theta \,}is unbounded. Mäkeläinen and co-authors prove this result usingMorse theorywhile informally appealing to a mountain pass property.[8]Mascarenhas restates their proof using themountain pass theorem.[9]
In the proofs ofconsistencyand asymptotic normality of the maximum likelihood estimator, additional assumptions are made about the probability densities that form the basis of a particular likelihood function. These conditions were first established by Chanda.[10]In particular, foralmost allx{\textstyle x}, and for allθ∈Θ,{\textstyle \,\theta \in \Theta \,,}∂logf∂θr,∂2logf∂θr∂θs,∂3logf∂θr∂θs∂θt{\displaystyle {\frac {\partial \log f}{\partial \theta _{r}}}\,,\quad {\frac {\partial ^{2}\log f}{\partial \theta _{r}\partial \theta _{s}}}\,,\quad {\frac {\partial ^{3}\log f}{\partial \theta _{r}\,\partial \theta _{s}\,\partial \theta _{t}}}\,}exist for allr,s,t=1,2,…,k{\textstyle \,r,s,t=1,2,\ldots ,k\,}in order to ensure the existence of aTaylor expansion. Second, for almost allx{\textstyle x}and for everyθ∈Θ{\textstyle \,\theta \in \Theta \,}it must be that|∂f∂θr|<Fr(x),|∂2f∂θr∂θs|<Frs(x),|∂3f∂θr∂θs∂θt|<Hrst(x){\displaystyle \left|{\frac {\partial f}{\partial \theta _{r}}}\right|<F_{r}(x)\,,\quad \left|{\frac {\partial ^{2}f}{\partial \theta _{r}\,\partial \theta _{s}}}\right|<F_{rs}(x)\,,\quad \left|{\frac {\partial ^{3}f}{\partial \theta _{r}\,\partial \theta _{s}\,\partial \theta _{t}}}\right|<H_{rst}(x)}whereH{\textstyle H}is such that∫−∞∞Hrst(z)dz≤M<∞.{\textstyle \,\int _{-\infty }^{\infty }H_{rst}(z)\mathrm {d} z\leq M<\infty \;.}This boundedness of the derivatives is needed to allow fordifferentiation under the integral sign. And lastly, it is assumed that theinformation matrix,I(θ)=∫−∞∞∂logf∂θr∂logf∂θsfdz{\displaystyle \mathbf {I} (\theta )=\int _{-\infty }^{\infty }{\frac {\partial \log f}{\partial \theta _{r}}}\ {\frac {\partial \log f}{\partial \theta _{s}}}\ f\ \mathrm {d} z}ispositive definiteand|I(θ)|{\textstyle \,\left|\mathbf {I} (\theta )\right|\,}is finite. This ensures that thescorehas a finite variance.[11]
The above conditions are sufficient, but not necessary. That is, a model that does not meet these regularity conditions may or may not have a maximum likelihood estimator of the properties mentioned above. Further, in case of non-independently or non-identically distributed observations additional properties may need to be assumed.
In Bayesian statistics, almost identical regularity conditions are imposed on the likelihood function in order to prove asymptotic normality of theposterior probability,[12][13]and therefore to justify aLaplace approximationof the posterior in large samples.[14]
Alikelihood ratiois the ratio of any two specified likelihoods, frequently written as:Λ(θ1:θ2∣x)=L(θ1∣x)L(θ2∣x).{\displaystyle \Lambda (\theta _{1}:\theta _{2}\mid x)={\frac {{\mathcal {L}}(\theta _{1}\mid x)}{{\mathcal {L}}(\theta _{2}\mid x)}}.}
The likelihood ratio is central tolikelihoodist statistics: thelaw of likelihoodstates that the degree to which data (considered as evidence) supports one parameter value versus another is measured by the likelihood ratio.
Infrequentist inference, the likelihood ratio is the basis for atest statistic, the so-calledlikelihood-ratio test. By theNeyman–Pearson lemma, this is the mostpowerfultest for comparing twosimple hypothesesat a givensignificance level. Numerous other tests can be viewed as likelihood-ratio tests or approximations thereof.[15]The asymptotic distribution of the log-likelihood ratio, considered as a test statistic, is given byWilks' theorem.
The likelihood ratio is also of central importance inBayesian inference, where it is known as theBayes factor, and is used inBayes' rule. Stated in terms ofodds, Bayes' rule states that theposteriorodds of two alternatives,A1{\displaystyle A_{1}}andA2{\displaystyle A_{2}}, given an eventB{\displaystyle B}, is thepriorodds, times the likelihood ratio. As an equation:O(A1:A2∣B)=O(A1:A2)⋅Λ(A1:A2∣B).{\displaystyle O(A_{1}:A_{2}\mid B)=O(A_{1}:A_{2})\cdot \Lambda (A_{1}:A_{2}\mid B).}
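The odds form of Bayes' rule can be checked with a small numerical sketch. The numbers below are hypothetical: two hypotheses with equal prior odds, and an event that is four times as likely under the first as under the second:

```python
# Hypothetical setup: P(B | A1) = 0.8, P(B | A2) = 0.2, equal priors.
prior_odds = 0.5 / 0.5          # O(A1 : A2)
likelihood_ratio = 0.8 / 0.2    # Lambda(A1 : A2 | B)
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)  # 4.0, i.e. posterior probability of A1 is 4/5
```

Equal prior odds mean the posterior odds are just the likelihood ratio; an unequal prior would scale the result accordingly.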
The likelihood ratio is not directly used in AIC-based statistics. Instead, what is used is the relative likelihood of models (see below).
Inevidence-based medicine, likelihood ratiosare used in diagnostic testingto assess the value of performing adiagnostic test.
Since the actual value of the likelihood function depends on the sample, it is often convenient to work with a standardized measure. Suppose that themaximum likelihood estimatefor the parameterθisθ^{\textstyle {\hat {\theta }}}. Relative plausibilities of otherθvalues may be found by comparing the likelihoods of those other values with the likelihood ofθ^{\textstyle {\hat {\theta }}}. Therelative likelihoodofθis defined to be[16][17][18][19][20]R(θ)=L(θ∣x)L(θ^∣x).{\displaystyle R(\theta )={\frac {{\mathcal {L}}(\theta \mid x)}{{\mathcal {L}}({\hat {\theta }}\mid x)}}.}Thus, the relative likelihood is the likelihood ratio (discussed above) with the fixed denominatorL(θ^){\textstyle {\mathcal {L}}({\hat {\theta }})}. This corresponds to standardizing the likelihood to have a maximum of 1.
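As a concrete sketch with hypothetical data, consider 7 successes in 10 Bernoulli trials. The MLE is θ̂ = 0.7, and the relative likelihood of any other θ is the ratio of its likelihood to the likelihood at θ̂ (the binomial coefficient cancels):

```python
def binom_lik(theta, k=7, n=10):
    # Likelihood up to the binomial coefficient, which cancels in ratios
    return theta**k * (1 - theta) ** (n - k)

theta_hat = 0.7  # MLE = k / n

def R(theta):
    # Relative likelihood: likelihood ratio with fixed denominator L(theta_hat)
    return binom_lik(theta) / binom_lik(theta_hat)

print(R(0.7))            # 1.0 at the MLE, by construction
print(round(R(0.5), 3))  # ~0.439: theta = 0.5 is still fairly plausible here
```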
Alikelihood regionis the set of all values ofθwhose relative likelihood is greater than or equal to a given threshold. In terms of percentages, ap% likelihood regionforθis defined to be[16][18][21]
{θ:R(θ)≥p100}.{\displaystyle \left\{\theta :R(\theta )\geq {\frac {p}{100}}\right\}.}
Ifθis a single real parameter, ap% likelihood region will usually comprise anintervalof real values. If the region does comprise an interval, then it is called alikelihood interval.[16][18][22]
Likelihood intervals, and more generally likelihood regions, are used forinterval estimationwithin likelihoodist statistics: they are similar toconfidence intervalsin frequentist statistics andcredible intervalsin Bayesian statistics. Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms ofcoverage probability(frequentism) orposterior probability(Bayesianism).
Given a model, likelihood intervals can be compared to confidence intervals. Ifθis a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) forθwill be the same as a 95% confidence interval (19/20 coverage probability).[16][21]In a slightly different formulation suited to the use of log-likelihoods (seeWilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately achi-squared distributionwith degrees-of-freedom (df) equal to the difference in df's between the two models (therefore, thee−2likelihood interval is the same as the 0.954 confidence interval; assuming difference in df's to be 1).[21][22]
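The 14.65% likelihood interval / 95% confidence interval correspondence can be verified in the simplest case. For n iid observations from N(θ, σ²) with σ known, the relative likelihood is R(θ) = exp(−n(θ − x̄)²/(2σ²)), so the p% likelihood region is an interval around the sample mean; the sketch below uses hypothetical summary statistics:

```python
import math

n, sigma, xbar = 25, 2.0, 10.0  # hypothetical sample summary

def likelihood_interval(p_percent):
    # Solve R(theta) >= p/100 for theta, given the normal relative likelihood
    half_width = sigma / math.sqrt(n) * math.sqrt(-2 * math.log(p_percent / 100))
    return xbar - half_width, xbar + half_width

lo, hi = likelihood_interval(14.65)
print(lo, hi)  # matches xbar +/- 1.96 * sigma / sqrt(n), the 95% confidence interval
```

The agreement follows because −2 log(0.1465) ≈ 3.84, the 95th percentile of the chi-squared distribution with 1 degree of freedom.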
In many cases, the likelihood is a function of more than one parameter but interest focuses on the estimation of only one, or at most a few of them, with the others being considered asnuisance parameters. Several alternative approaches have been developed to eliminate such nuisance parameters, so that a likelihood can be written as a function of only the parameter (or parameters) of interest: the main approaches are profile, conditional, and marginal likelihoods.[23][24]These approaches are also useful when a high-dimensional likelihood surface needs to be reduced to one or two parameters of interest in order to allow agraph.
It is possible to reduce the dimensions by concentrating the likelihood function for a subset of parameters by expressing the nuisance parameters as functions of the parameters of interest and replacing them in the likelihood function.[25][26]In general, for a likelihood function depending on the parameter vectorθ{\textstyle \mathbf {\theta } }that can be partitioned intoθ=(θ1:θ2){\textstyle \mathbf {\theta } =\left(\mathbf {\theta } _{1}:\mathbf {\theta } _{2}\right)}, and where a correspondenceθ^2=θ^2(θ1){\textstyle \mathbf {\hat {\theta }} _{2}=\mathbf {\hat {\theta }} _{2}\left(\mathbf {\theta } _{1}\right)}can be determined explicitly, concentration reduces thecomputational burdenof the original maximization problem.[27]
For instance, in alinear regressionwith normally distributed errors,y=Xβ+u{\textstyle \mathbf {y} =\mathbf {X} \beta +u}, the coefficient vector could bepartitionedintoβ=[β1:β2]{\textstyle \beta =\left[\beta _{1}:\beta _{2}\right]}(and consequently thedesign matrixX=[X1:X2]{\textstyle \mathbf {X} =\left[\mathbf {X} _{1}:\mathbf {X} _{2}\right]}). Maximizing with respect toβ2{\textstyle \beta _{2}}yields an optimal value functionβ2(β1)=(X2TX2)−1X2T(y−X1β1){\textstyle \beta _{2}(\beta _{1})=\left(\mathbf {X} _{2}^{\mathsf {T}}\mathbf {X} _{2}\right)^{-1}\mathbf {X} _{2}^{\mathsf {T}}\left(\mathbf {y} -\mathbf {X} _{1}\beta _{1}\right)}. Using this result, the maximum likelihood estimator forβ1{\textstyle \beta _{1}}can then be derived asβ^1=(X1T(I−P2)X1)−1X1T(I−P2)y{\displaystyle {\hat {\beta }}_{1}=\left(\mathbf {X} _{1}^{\mathsf {T}}\left(\mathbf {I} -\mathbf {P} _{2}\right)\mathbf {X} _{1}\right)^{-1}\mathbf {X} _{1}^{\mathsf {T}}\left(\mathbf {I} -\mathbf {P} _{2}\right)\mathbf {y} }whereP2=X2(X2TX2)−1X2T{\textstyle \mathbf {P} _{2}=\mathbf {X} _{2}\left(\mathbf {X} _{2}^{\mathsf {T}}\mathbf {X} _{2}\right)^{-1}\mathbf {X} _{2}^{\mathsf {T}}}is theprojection matrixofX2{\textstyle \mathbf {X} _{2}}. This result is known as theFrisch–Waugh–Lovell theorem.
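A minimal numerical sketch of concentration (a one-regressor special case of the above, with an intercept as the nuisance parameter and hypothetical data): for y = β₁x + β₂ + u under Gaussian errors, maximizing over β₂ for fixed β₁ gives the optimal value function β₂(β₁) = mean(y − β₁x), and substituting it back reduces the problem to a regression on demeaned data:

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]  # hypothetical observations

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n

# Concentrated (profile) estimate of beta1: OLS on demeaned variables
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
# Optimal value function for the nuisance intercept, evaluated at b1
b2 = ybar - b1 * xbar
print(b1, b2)
```

Solving the full two-parameter normal equations directly gives the same pair, which is the content of the Frisch–Waugh–Lovell theorem in this simplest case.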
Since graphically the procedure of concentration is equivalent to slicing the likelihood surface along the ridge of values of the nuisance parameterβ2{\textstyle \beta _{2}}that maximizes the likelihood function, creating anisometricprofileof the likelihood function for a givenβ1{\textstyle \beta _{1}}, the result of this procedure is also known asprofile likelihood.[28][29]In addition to being graphed, the profile likelihood can also be used to computeconfidence intervalsthat often have better small-sample properties than those based on asymptoticstandard errorscalculated from the full likelihood.[30][31]
Sometimes it is possible to find asufficient statisticfor the nuisance parameters, and conditioning on this statistic results in a likelihood which does not depend on the nuisance parameters.[32]
One example occurs in 2×2 tables, where conditioning on all four marginal totals leads to a conditional likelihood based on the non-centralhypergeometric distribution. This form of conditioning is also the basis forFisher's exact test.
Sometimes we can remove the nuisance parameters by considering a likelihood based on only part of the information in the data, for example by using the set of ranks rather than the numerical values. Another example occurs in linearmixed models, where considering a likelihood for the residuals only after fitting the fixed effects leads toresidual maximum likelihoodestimation of the variance components.
A partial likelihood is an adaption of the full likelihood such that only a part of the parameters (the parameters of interest) occur in it.[33]It is a key component of theproportional hazards model: using a restriction on the hazard function, the likelihood does not contain the shape of the hazard over time.
The likelihood, given two or moreindependentevents, is the product of the likelihoods of each of the individual events:Λ(A∣X1∧X2)=Λ(A∣X1)⋅Λ(A∣X2).{\displaystyle \Lambda (A\mid X_{1}\land X_{2})=\Lambda (A\mid X_{1})\cdot \Lambda (A\mid X_{2}).}This follows from the definition of independence in probability: the probabilities of two independent events happening, given a model, is the product of the probabilities.
This is particularly important when the events are fromindependent and identically distributed random variables, such as independent observations orsampling with replacement. In such a situation, the likelihood function factors into a product of individual likelihood functions.
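The factorization is easy to see for a hypothetical sequence of independent Bernoulli(θ) observations, where the joint likelihood is literally the product of the per-observation likelihoods:

```python
data = [1, 0, 1, 1, 0]  # hypothetical coin flips
theta = 0.6

# Likelihood contribution of each independent observation
individual = [theta if xi == 1 else 1 - theta for xi in data]

joint = 1.0
for li in individual:
    joint *= li
print(joint)  # equals theta^3 * (1 - theta)^2
```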
The empty product has value 1, which corresponds to the likelihood, given no event, being 1: before any data, the likelihood is always 1. This is similar to auniform priorin Bayesian statistics, but in likelihoodist statistics this is not animproper priorbecause likelihoods are not integrated.
Log-likelihood functionis the logarithm of the likelihood function, often denoted by a lowercaselorℓ{\displaystyle \ell }, to contrast with the uppercaseLorL{\textstyle {\mathcal {L}}}for the likelihood. Because logarithms arestrictly increasingfunctions, maximizing the likelihood is equivalent to maximizing the log-likelihood. But for practical purposes it is more convenient to work with the log-likelihood function inmaximum likelihood estimation, in particular since most commonprobability distributions—notably theexponential family—are onlylogarithmically concave,[34][35]andconcavityof theobjective functionplays a key role in themaximization.
Given the independence of each event, the overall log-likelihood of intersection equals the sum of the log-likelihoods of the individual events. This is analogous to the fact that the overalllog-probabilityis the sum of the log-probability of the individual events. In addition to the mathematical convenience from this, the adding process of log-likelihood has an intuitive interpretation, often expressed as "support" from the data. When the parameters are estimated using the log-likelihood for themaximum likelihood estimation, each data point is used by being added to the total log-likelihood. As the data can be viewed as evidence that supports the estimated parameters, this process can be interpreted as "support from independent evidence adds", and the log-likelihood is the "weight of evidence". Interpreting negative log-probability asinformation contentorsurprisal, the support (log-likelihood) of a model, given an event, is the negative of the surprisal of the event, given the model: a model is supported by an event to the extent that the event is unsurprising, given the model.
A logarithm of a likelihood ratio is equal to the difference of the log-likelihoods:logL(A)L(B)=logL(A)−logL(B)=ℓ(A)−ℓ(B).{\displaystyle \log {\frac {{\mathcal {L}}(A)}{{\mathcal {L}}(B)}}=\log {\mathcal {L}}(A)-\log {\mathcal {L}}(B)=\ell (A)-\ell (B).}
Just as the likelihood given no event is 1, the log-likelihood given no event is 0, which corresponds to the value of the empty sum: without any data, there is no support for any models.
Thegraphof the log-likelihood is called thesupport curve(in theunivariatecase).[36]In the multivariate case, the concept generalizes into asupport surfaceover theparameter space.
It has a relation to, but is distinct from, thesupport of a distribution.
The term was coined byA. W. F. Edwards[36]in the context ofstatistical hypothesis testing, i.e. whether or not the data "support" one hypothesis (or parameter value) being tested more than any other.
The log-likelihood function being plotted is used in the computation of thescore(the gradient of the log-likelihood) andFisher information(the curvature of the log-likelihood). Thus, the graph has a direct interpretation in the context ofmaximum likelihood estimationandlikelihood-ratio tests.
If the log-likelihood function issmooth, itsgradientwith respect to the parameter, known as thescoreand writtensn(θ)≡∇θℓn(θ){\textstyle s_{n}(\theta )\equiv \nabla _{\theta }\ell _{n}(\theta )}, exists and allows for the application ofdifferential calculus. The basic way to maximize a differentiable function is to find thestationary points(the points where thederivativeis zero); since the derivative of a sum is just the sum of the derivatives, but the derivative of a product requires theproduct rule, it is easier to compute the stationary points of the log-likelihood of independent events than for the likelihood of independent events.
The equations defined by the stationary point of the score function serve asestimating equationsfor the maximum likelihood estimator.sn(θ)=0{\displaystyle s_{n}(\theta )=\mathbf {0} }In that sense, the maximum likelihood estimator is implicitly defined by the value at0{\textstyle \mathbf {0} }of theinverse functionsn−1:Ed→Θ{\textstyle s_{n}^{-1}:\mathbb {E} ^{d}\to \Theta }, whereEd{\textstyle \mathbb {E} ^{d}}is thed-dimensionalEuclidean space, andΘ{\textstyle \Theta }is the parameter space. Using theinverse function theorem, it can be shown thatsn−1{\textstyle s_{n}^{-1}}iswell-definedin anopen neighborhoodabout0{\textstyle \mathbf {0} }with probability going to one, andθ^n=sn−1(0){\textstyle {\hat {\theta }}_{n}=s_{n}^{-1}(\mathbf {0} )}is a consistent estimate ofθ{\textstyle \theta }. As a consequence there exists a sequence{θ^n}{\textstyle \left\{{\hat {\theta }}_{n}\right\}}such thatsn(θ^n)=0{\textstyle s_{n}({\hat {\theta }}_{n})=\mathbf {0} }asymptoticallyalmost surely, andθ^n→pθ0{\textstyle {\hat {\theta }}_{n}\xrightarrow {\text{p}} \theta _{0}}.[37]A similar result can be established usingRolle's theorem.[38][39]
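A small sketch of solving the estimating equation sₙ(θ) = 0 numerically, for a hypothetical iid Bernoulli sample. The score is strictly decreasing on (0, 1), so a simple bisection finds its root, which is the sample mean:

```python
data = [1, 0, 1, 1, 0, 1, 0, 1]  # hypothetical observations (five 1s, three 0s)

def score(theta):
    # Gradient of the Bernoulli log-likelihood: sum of x/theta - (1-x)/(1-theta)
    return sum(xi / theta - (1 - xi) / (1 - theta) for xi in data)

# Bisection on (0, 1): score > 0 means theta is below the root
lo, hi = 1e-6, 1 - 1e-6
for _ in range(100):
    mid = (lo + hi) / 2
    if score(mid) > 0:
        lo = mid
    else:
        hi = mid
theta_hat = (lo + hi) / 2
print(theta_hat)  # ~0.625, the sample mean 5/8
```

In this case the root is available in closed form, but the same root-finding view applies to models whose score equation has no explicit solution.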
The second derivative evaluated atθ^{\textstyle {\hat {\theta }}}, known asFisher information, determines the curvature of the likelihood surface,[40]and thus indicates theprecisionof the estimate.[41]
The log-likelihood is also particularly useful forexponential familiesof distributions, which include many of the commonparametric probability distributions. The probability distribution function (and thus likelihood function) for exponential families contain products of factors involvingexponentiation. The logarithm of such a function is a sum of products, again easier to differentiate than the original function.
An exponential family is one whose probability density function is of the form (for some functions, writing⟨−,−⟩{\textstyle \langle -,-\rangle }for theinner product):
p(x∣θ)=h(x)exp(⟨η(θ),T(x)⟩−A(θ)).{\displaystyle p(x\mid {\boldsymbol {\theta }})=h(x)\exp {\Big (}\langle {\boldsymbol {\eta }}({\boldsymbol {\theta }}),\mathbf {T} (x)\rangle -A({\boldsymbol {\theta }}){\Big )}.}
Each of these terms has an interpretation,[a]but simply switching from probability to likelihood and taking logarithms yields the sum:
ℓ(θ∣x)=⟨η(θ),T(x)⟩−A(θ)+logh(x).{\displaystyle \ell ({\boldsymbol {\theta }}\mid x)=\langle {\boldsymbol {\eta }}({\boldsymbol {\theta }}),\mathbf {T} (x)\rangle -A({\boldsymbol {\theta }})+\log h(x).}
Theη(θ){\textstyle {\boldsymbol {\eta }}({\boldsymbol {\theta }})}andh(x){\textstyle h(x)}each correspond to achange of coordinates, so in these coordinates, the log-likelihood of an exponential family is given by the simple formula:
ℓ(η∣x)=⟨η,T(x)⟩−A(η).{\displaystyle \ell ({\boldsymbol {\eta }}\mid x)=\langle {\boldsymbol {\eta }},\mathbf {T} (x)\rangle -A({\boldsymbol {\eta }}).}
In words, the log-likelihood of an exponential family is the inner product of the natural parameterη{\displaystyle {\boldsymbol {\eta }}}and thesufficient statisticT(x){\displaystyle \mathbf {T} (x)}, minus the normalization factor (log-partition function)A(η){\displaystyle A({\boldsymbol {\eta }})}. Thus, for example, the maximum likelihood estimate can be computed by taking derivatives of the sufficient statisticTand the log-partition functionA.
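The decomposition can be verified numerically for the Poisson distribution, which is an exponential family with η = log λ, T(x) = x, A(η) = exp(η) and h(x) = 1/x! (arbitrary values of λ and x below):

```python
import math

def poisson_log_pmf(x, lam):
    # Standard Poisson log-pmf: x log(lam) - lam - log(x!)
    return x * math.log(lam) - lam - math.lgamma(x + 1)

def expfam_loglik(eta, x):
    # Exponential-family form: <eta, T(x)> - A(eta) + log h(x)
    return eta * x - math.exp(eta) - math.lgamma(x + 1)

lam, x = 3.5, 2
print(math.isclose(poisson_log_pmf(x, lam), expfam_loglik(math.log(lam), x)))  # True
```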
Thegamma distributionis an exponential family with two parameters,α{\textstyle \alpha }andβ{\textstyle \beta }. The likelihood function is
L(α,β∣x)=βαΓ(α)xα−1e−βx.{\displaystyle {\mathcal {L}}(\alpha ,\beta \mid x)={\frac {\beta ^{\alpha }}{\Gamma (\alpha )}}x^{\alpha -1}e^{-\beta x}.}
Finding the maximum likelihood estimate ofβ{\textstyle \beta }for a single observed valuex{\textstyle x}looks rather daunting. Its logarithm is much simpler to work with:
logL(α,β∣x)=αlogβ−logΓ(α)+(α−1)logx−βx.{\displaystyle \log {\mathcal {L}}(\alpha ,\beta \mid x)=\alpha \log \beta -\log \Gamma (\alpha )+(\alpha -1)\log x-\beta x.\,}
To maximize the log-likelihood, we first take thepartial derivativewith respect toβ{\textstyle \beta }:
∂logL(α,β∣x)∂β=αβ−x.{\displaystyle {\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x)}{\partial \beta }}={\frac {\alpha }{\beta }}-x.}
If there are a number of independent observationsx1,…,xn{\textstyle x_{1},\ldots ,x_{n}}, then the joint log-likelihood will be the sum of individual log-likelihoods, and the derivative of this sum will be a sum of derivatives of each individual log-likelihood:
∂logL(α,β∣x1,…,xn)∂β=∂logL(α,β∣x1)∂β+⋯+∂logL(α,β∣xn)∂β=nαβ−∑i=1nxi.{\displaystyle {\begin{aligned}&{\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{1},\ldots ,x_{n})}{\partial \beta }}\\&={\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{1})}{\partial \beta }}+\cdots +{\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{n})}{\partial \beta }}\\&={\frac {n\alpha }{\beta }}-\sum _{i=1}^{n}x_{i}.\end{aligned}}}
To complete the maximization procedure for the joint log-likelihood, the equation is set to zero and solved forβ{\textstyle \beta }:
β^=αx¯.{\displaystyle {\widehat {\beta }}={\frac {\alpha }{\bar {x}}}.}
Hereβ^{\textstyle {\widehat {\beta }}}denotes the maximum-likelihood estimate, andx¯=1n∑i=1nxi{\textstyle \textstyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}}is thesample meanof the observations.
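The closed form β̂ = α/x̄ can be checked against a brute-force maximization of the joint log-likelihood, here with a hypothetical sample and α held fixed at a known value:

```python
import math

alpha = 2.0                         # shape parameter, assumed known
data = [1.2, 0.7, 2.5, 1.9, 0.4]    # hypothetical observations

def loglik(beta):
    # Joint log-likelihood: sum of per-observation gamma log-densities
    return sum(alpha * math.log(beta) - math.lgamma(alpha)
               + (alpha - 1) * math.log(xi) - beta * xi for xi in data)

xbar = sum(data) / len(data)
beta_closed = alpha / xbar          # closed-form MLE

# Fine grid search over beta as a numerical cross-check
grid = [i / 10000 for i in range(1, 30000)]
beta_grid = max(grid, key=loglik)
print(beta_closed, beta_grid)       # agree to grid resolution
```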
The term "likelihood" has been in use in English since at least lateMiddle English.[42]Its formal use to refer to a specificfunctionin mathematical statistics was proposed byRonald Fisher,[43]in two research papers published in 1921[44]and 1922.[45]The 1921 paper introduced what is today called a "likelihood interval"; the 1922 paper introduced the term "method of maximum likelihood". Quoting Fisher:
[I]n 1922, I proposed the term 'likelihood,' in view of the fact that, with respect to [the parameter], it is not a probability, and does not obey the laws of probability, while at the same time it bears to the problem of rational choice among the possible values of [the parameter] a relation similar to that which probability bears to the problem of predicting events in games of chance. . . . Whereas, however, in relation to psychological judgment, likelihood has some resemblance to probability, the two concepts are wholly distinct. . . ."[46]
The concept of likelihood should not be confused with probability, as Fisher himself repeatedly emphasized:
I stress this because in spite of the emphasis that I have always laid upon the difference between probability and likelihood there is still a tendency to treat likelihood as though it were a sort of probability. The first result is thus that there are two different measures of rational belief appropriate to different cases. Knowing the population we can express our incomplete knowledge of, or expectation of, the sample in terms of probability; knowing the sample we can express our incomplete knowledge of the population in terms of likelihood.[47]
Fisher's invention of statistical likelihood was in reaction against an earlier form of reasoning calledinverse probability.[48]His use of the term "likelihood" fixed the meaning of the term within mathematical statistics.
A. W. F. Edwards(1972) established the axiomatic basis for use of the log-likelihood ratio as a measure of relative support for one hypothesis against another. Thesupport functionis then the natural logarithm of the likelihood function. Both terms are used inphylogenetics, but were not adopted in a general treatment of the topic of statistical evidence.[49]
Among statisticians, there is no consensus about what thefoundation of statisticsshould be. There are four main paradigms that have been proposed for the foundation:frequentism,Bayesianism,likelihoodism, andAIC-based.[50]For each of the proposed foundations, the interpretation of likelihood is different. The four interpretations are described in the subsections below.
InBayesian inference, one can speak about the likelihood of any proposition orrandom variablegiven another random variable, for example the likelihood of a parameter value or of astatistical model(seemarginal likelihood), given specified data or other evidence;[51][52][53][54]nevertheless, the likelihood function remains the same entity, with the additional interpretations of (i) aconditional densityof the data given the parameter (since the parameter is then a random variable) and (ii) a measure or amount of information brought by the data about the parameter value or even the model.[51][52][53][54][55]Due to the introduction of a probability structure on the parameter space or on the collection of models, it is possible that a parameter value or a statistical model have a large likelihood value for given data, and yet have a lowprobability, or vice versa.[53][55]This is often the case in medical contexts.[56]FollowingBayes' Rule, the likelihood when seen as a conditional density can be multiplied by theprior probabilitydensity of the parameter and then normalized, to give aposterior probabilitydensity.[51][52][53][54][55]More generally, the likelihood of an unknown quantityX{\textstyle X}given another unknown quantityY{\textstyle Y}is proportional to theprobability ofY{\textstyle Y}givenX{\textstyle X}.[51][52][53][54][55]
In frequentist statistics, the likelihood function is itself astatisticthat summarizes a single sample from a population, whose calculated value depends on a choice of several parametersθ1...θp, wherepis the count of parameters in some already-selectedstatistical model. The value of the likelihood serves as a figure of merit for the choice used for the parameters, and the parameter set with maximum likelihood is the best choice, given the data available.
The specific calculation of the likelihood is the probability that the observed sample would be assigned, assuming that the model chosen and the values of the several parametersθgive an accurate approximation of thefrequency distributionof the population that the observed sample was drawn from. Heuristically, it makes sense that a good choice of parameters is one that gives the sample actually observed the maximum possiblepost-hocprobability of having happened.Wilks' theoremquantifies the heuristic rule by showing that the difference in the logarithm of the likelihood generated by the estimate's parameter values and the logarithm of the likelihood generated by population's "true" (but unknown) parameter values is asymptoticallyχ2distributed.
Each independent sample's maximum likelihood estimate is a separate estimate of the "true" parameter set describing the population sampled. Successive estimates from many independent samples will cluster together with the population's "true" set of parameter values hidden somewhere in their midst. The difference in the logarithms of the maximum likelihood and adjacent parameter sets' likelihoods may be used to draw aconfidence regionon a plot whose co-ordinates are the parametersθ1...θp. The region surrounds the maximum-likelihood estimate, and all points (parameter sets) within that region differ at most in log-likelihood by some fixed value. Theχ2distributiongiven byWilks' theoremconverts the region's log-likelihood differences into the "confidence" that the population's "true" parameter set lies inside. The art of choosing the fixed log-likelihood difference is to make the confidence acceptably high while keeping the region acceptably small (narrow range of estimates).
As more data are observed, instead of being used to make independent estimates, they can be combined with the previous samples to make a single combined sample, and that large sample may be used for a new maximum likelihood estimate. As the size of the combined sample increases, the size of the likelihood region with the same confidence shrinks. Eventually, either the size of the confidence region is very nearly a single point, or the entire population has been sampled; in both cases, the estimated parameter set is essentially the same as the population parameter set.
Under the AIC paradigm, likelihood is interpreted within the context of information theory.[57][58][59]
|
https://en.wikipedia.org/wiki/Likelihood_function
|
A randomness test (or test for randomness), in data evaluation, is a test used to analyze the distribution of a set of data to see whether it can be described as random (patternless). In stochastic modeling, as in some computer simulations, the hoped-for randomness of potential input data can be verified, by a formal test for randomness, to show that the data are valid for use in simulation runs. In some cases, data reveals an obvious non-random pattern, as with so-called "runs in the data" (such as expecting random 0–9 but finding "4 3 2 1 0 4 3 2 1..." and rarely going above 4). If a selected set of data fails the tests, then parameters can be changed or other randomized data can be used which does pass the tests for randomness.
The issue of randomness is an important philosophical and theoretical question. Tests for randomness can be used to determine whether a data set has a recognisable pattern, which would indicate that the process that generated it is significantly non-random. For the most part, statistical analysis has, in practice, been much more concerned with finding regularities in data as opposed to testing for randomness. Many "random number generators" in use today are defined by algorithms, and so are actually pseudo-random number generators. The sequences they produce are called pseudo-random sequences. These generators do not always generate sequences which are sufficiently random, but instead can produce sequences which contain patterns. For example, the infamous RANDU routine fails many randomness tests dramatically, including the spectral test.
Stephen Wolfram used randomness tests on the output of Rule 30 to examine its potential for generating random numbers,[1] though it was shown to have an effective key size far smaller than its actual size[2] and to perform poorly on a chi-squared test.[3] The use of an ill-conceived random number generator can put the validity of an experiment in doubt by violating statistical assumptions. Though there are commonly used statistical testing techniques such as NIST standards, Yongge Wang showed that NIST standards are not sufficient. Furthermore, Yongge Wang[4] designed statistical-distance-based and law-of-the-iterated-logarithm-based testing techniques. Using this technique, Yongge Wang and Tony Nicol[5] detected the weakness in commonly used pseudorandom generators such as the well-known Debian version of the OpenSSL pseudorandom generator, which was fixed in 2008.
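A minimal example of such a test is a chi-squared check of digit frequencies. The sketch below (data and seed chosen purely for illustration) flags the "never goes above 4" pattern mentioned earlier while not flagging Python's own generator:

```python
import random

def chi_squared_digits(digits):
    """Chi-squared statistic for uniformity of decimal digits 0-9."""
    n = len(digits)
    expected = n / 10
    counts = [0] * 10
    for d in digits:
        counts[d] += 1
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(42)
good = [rng.randrange(10) for _ in range(10000)]
bad = [i % 5 for i in range(10000)]   # "runs in the data": never exceeds 4

# With 9 degrees of freedom, the 99% quantile of chi-squared is about 21.7;
# a statistic far above that is strong evidence of non-randomness.
print(chi_squared_digits(good))   # modest value, consistent with uniformity
print(chi_squared_digits(bad))    # huge: digits 5-9 never occur
```

Full test suites such as Diehard or TestU01 apply many such statistics at once, since passing one test says little on its own.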
There have been a fairly small number of different types of (pseudo-)random number generators used in practice. They can be found in the list of random number generators, and have included:
These different generators have varying degrees of success in passing the accepted test suites. Several widely used generators fail the tests more or less badly, while other 'better' and prior generators (in the sense that they passed all current batteries of tests and they already existed) have been largely ignored.
There are many practical measures of randomness for a binary sequence. These include measures based on statistical tests, transforms, and complexity, or a mixture of these. A well-known and widely used collection of tests was the Diehard Battery of Tests, introduced by Marsaglia; this was extended to the TestU01 suite by L'Ecuyer and Simard. The use of the Hadamard transform to measure randomness was proposed by S. Kak and developed further by Phillips, Yuen, Hopkins, Beth and Dai, Mund, and Marsaglia and Zaman.[6]
Several of these tests, which are of linear complexity, provide spectral measures of randomness. T. Beth and Z-D. Dai purported to show that Kolmogorov complexity and linear complexity are practically the same,[7] although Y. Wang later showed their claims are incorrect.[8] Nevertheless, Wang also demonstrated that for Martin-Löf random sequences, the Kolmogorov complexity is essentially the same as linear complexity.
These practical tests make it possible to compare the randomness of strings. On probabilistic grounds, all strings of a given length have the same randomness. However, different strings have a different Kolmogorov complexity. For example, consider the following two strings.
String 1 admits a short linguistic description: "32 repetitions of '01'". This description has 22 characters, and it can be efficiently constructed out of some basis sequences.[clarification needed] String 2 has no obvious simple description other than writing down the string itself, which has 64 characters,[clarification needed] and it has no comparably efficient basis function representation. Using linear Hadamard spectral tests (see Hadamard transform), the first of these sequences will be found to be of much less randomness than the second one, which agrees with intuition.
|
https://en.wikipedia.org/wiki/Randomness_test
|
In cryptography, an SP-network, or substitution–permutation network (SPN), is a series of linked mathematical operations used in block cipher algorithms such as AES (Rijndael), 3-Way, Kalyna, Kuznyechik, PRESENT, SAFER, SHARK, and Square.
Such a network takes a block of the plaintext and the key as inputs, and applies several alternating rounds or layers of substitution boxes (S-boxes) and permutation boxes (P-boxes) to produce the ciphertext block. The S-boxes and P-boxes transform (sub-)blocks of input bits into output bits. It is common for these transformations to be operations that are efficient to perform in hardware, such as exclusive or (XOR) and bitwise rotation. The key is introduced in each round, usually in the form of "round keys" derived from it. (In some designs, the S-boxes themselves depend on the key.)
Decryption is done by simply reversing the process (using the inverses of the S-boxes and P-boxes and applying the round keys in reversed order).
An S-box substitutes a small block of bits (the input of the S-box) by another block of bits (the output of the S-box). This substitution should be one-to-one, to ensure invertibility (hence decryption). In particular, the length of the output should be the same as the length of the input (the picture on the right has S-boxes with 4 input and 4 output bits), which is different from S-boxes in general that could also change the length, as in the Data Encryption Standard (DES), for example. An S-box is usually not simply a permutation of the bits. Rather, in a good S-box each output bit will be affected by every input bit. More precisely, in a good S-box each output bit will be changed with 50% probability by every input bit. Since each output bit changes with 50% probability, about half of the output bits will actually change with an input bit change (cf. Strict avalanche criterion).[1]
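This 50%-flip property can be checked exhaustively for a small S-box. The sketch below measures, for a 4-bit S-box (the table is PRESENT's, used here only as a convenient example), how often each output bit flips when a single input bit is flipped:

```python
# For each (input bit, output bit) pair, compute the probability that
# flipping the input bit flips the output bit, over all 16 inputs.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def flip_probabilities(sbox, bits=4):
    probs = [[0.0] * bits for _ in range(bits)]
    for i in range(bits):              # input bit flipped
        for x in range(2 ** bits):
            diff = sbox[x] ^ sbox[x ^ (1 << i)]
            for j in range(bits):      # output bit observed
                if diff & (1 << j):
                    probs[i][j] += 1 / 2 ** bits
    return probs

for row in flip_probabilities(SBOX):
    print([round(p, 2) for p in row])  # ideal: every entry near 0.5
```

Small S-boxes cannot hit exactly 0.5 for every pair, which is why the strict avalanche criterion is stated as an ideal to approximate.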
A P-box is a permutation of all the bits: it takes the outputs of all the S-boxes of one round, permutes the bits, and feeds them into the S-boxes of the next round. A good P-box has the property that the output bits of any S-box are distributed to as many S-box inputs as possible.
At each round, the round key (obtained from the key with some simple operations, for instance, using S-boxes and P-boxes) is combined using some group operation, typically XOR.
A single typical S-box or a single P-box alone does not have much cryptographic strength: an S-box could be thought of as a substitution cipher, while a P-box could be thought of as a transposition cipher. However, a well-designed SP network with several alternating rounds of S- and P-boxes already satisfies Shannon's confusion and diffusion properties:
Although a Feistel network that uses S-boxes (such as DES) is quite similar to SP networks, there are some differences that make either one or the other more applicable in certain situations. For a given amount of confusion and diffusion, an SP network has more "inherent parallelism"[2] and so, given a CPU with many execution units, can be computed faster than a Feistel network.[3] CPUs with few execution units, such as most smart cards, cannot take advantage of this inherent parallelism. SP ciphers also require S-boxes to be invertible (to perform decryption); Feistel inner functions have no such restriction and can be constructed as one-way functions.
|
https://en.wikipedia.org/wiki/Substitution%E2%80%93permutation_network
|
The lifting scheme is a technique for both designing wavelets and performing the discrete wavelet transform (DWT). In an implementation, it is often worthwhile to merge these steps and design the wavelet filters while performing the wavelet transform. This is then called the second-generation wavelet transform. The technique was introduced by Wim Sweldens.[1]
The lifting scheme factorizes any discrete wavelet transform with finite filters into a series of elementary convolution operators, so-called lifting steps, which reduces the number of arithmetic operations by nearly a factor of two. Treatment of signal boundaries is also simplified.[2]
The discrete wavelet transform applies several filters separately to the same signal. In contrast to that, for the lifting scheme, the signal is divided like a zipper. Then a series of convolution–accumulate operations across the divided signals is applied.
The simplest version of a forward wavelet transform expressed in the lifting scheme is shown in the figure above. P means the predict step, which will be considered in isolation. The predict step calculates the wavelet function in the wavelet transform. This is a high-pass filter. The update step calculates the scaling function, which results in a smoother version of the data.
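The Haar wavelet gives the smallest worked example of these two steps: the predict step takes differences (the high-pass details) and the update step forms averages (the low-pass smooth signal), and both steps are trivially undone, giving perfect reconstruction. A minimal sketch:

```python
# Haar wavelet via lifting: split, predict (detail), update (smooth).
def haar_lift(signal):
    even, odd = signal[0::2], signal[1::2]         # the "zipper" split
    detail = [o - e for e, o in zip(even, odd)]        # predict step
    smooth = [e + d / 2 for e, d in zip(even, detail)] # update step
    return smooth, detail

def haar_unlift(smooth, detail):
    even = [s - d / 2 for s, d in zip(smooth, detail)] # undo update
    odd = [e + d for e, d in zip(even, detail)]        # undo predict
    out = []
    for e, o in zip(even, odd):                        # re-interleave
        out += [e, o]
    return out

x = [5.0, 7.0, 3.0, 1.0, 2.0, 4.0, 8.0, 6.0]
s, d = haar_lift(x)
print(s, d)
assert haar_unlift(s, d) == x    # perfect reconstruction
```

Each step is inverted simply by replacing additions with subtractions in reverse order, which is the structural reason every lifting transform is invertible.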
As mentioned above, the lifting scheme is an alternative technique for performing the DWT using biorthogonal wavelets. In order to perform the DWT using the lifting scheme, the corresponding lifting and scaling steps must be derived from the biorthogonal wavelets. The analysis filters (g, h) of the particular wavelet are first written in a polyphase matrix
where det P(z) = z^(−m).
The polyphase matrix is a 2 × 2 matrix containing the analysis low-pass and high-pass filters, each split up into their even and odd polynomial coefficients and normalized. From here the matrix is factored into a series of 2 × 2 upper- and lower-triangular matrices, each with diagonal entries equal to 1. The upper-triangular matrices contain the coefficients for the predict steps, and the lower-triangular matrices contain the coefficients for the update steps. A matrix consisting of all zeros with the exception of the diagonal values may be extracted to derive the scaling-step coefficients. The polyphase matrix is factored into the form
where a is the coefficient for the predict step, and b is the coefficient for the update step.
An example of a more complicated extraction having multiple predict and update steps, as well as scaling steps, is shown below; a is the coefficient for the first predict step, b is the coefficient for the first update step, c is the coefficient for the second predict step, d is the coefficient for the second update step, k1 is the odd-sample scaling coefficient, and k2 is the even-sample scaling coefficient:
According to matrix theory, any matrix having polynomial entries and a determinant of 1 can be factored as described above. Therefore, every wavelet transform with finite filters can be decomposed into a series of lifting and scaling steps. Daubechies and Sweldens discuss lifting-step extraction in further detail.[3]
To perform the CDF 9/7 transform, a total of four lifting steps are required: two predict and two update steps.
The lifting factorization leads to the following sequence of filtering steps.[3]
Every transform by the lifting scheme can be inverted. Every perfect-reconstruction filter bank can be decomposed into lifting steps by the Euclidean algorithm. That is, "lifting-decomposable filter bank" and "perfect-reconstruction filter bank" denote the same thing. Any two perfect-reconstruction filter banks can be transformed into each other by a sequence of lifting steps. For a better understanding: if P and Q are polyphase matrices with the same determinant, then the lifting sequence from P to Q is the same as the one from the lazy polyphase matrix I to P−1·Q.
Speedup is by a factor of two. This is only possible because lifting is restricted to perfect-reconstruction filter banks. That is, lifting somehow squeezes out redundancies caused by perfect reconstruction.
The transformation can be performed immediately in the memory of the input data (in place, in situ) with only constant memory overhead.
The convolution operations can be replaced by any other operation. For perfect reconstruction only the invertibility of the addition operation is relevant. This way rounding errors in convolution can be tolerated and bit-exact reconstruction is possible. However, the numeric stability may be reduced by the non-linearities. This must be respected if the transformed signal is processed further, as in lossy compression. Although every reconstructable filter bank can be expressed in terms of lifting steps, a general description of the lifting steps is not obvious from a description of a wavelet family. However, for instance, for simple cases of the Cohen–Daubechies–Feauveau wavelet, there is an explicit formula for their lifting steps.
A lifting modifies biorthogonal filters in order to increase the number of vanishing moments of the resulting biorthogonal wavelets, and hopefully their stability and regularity. Increasing the number of vanishing moments decreases the amplitude of wavelet coefficients in regions where the signal is regular, which produces a sparser representation. However, increasing the number of vanishing moments with a lifting also increases the wavelet support, which is an adverse effect that increases the number of large coefficients produced by isolated singularities. Each lifting step maintains the filter biorthogonality but provides no control on the Riesz bounds and thus on the stability of the resulting wavelet biorthogonal basis. When a basis is orthogonal, the dual basis is equal to the original basis. Having a dual basis that is similar to the original basis is, therefore, an indication of stability. As a result, stability is generally improved when dual wavelets have as many vanishing moments as the original wavelets and a support of similar size. This is why a lifting procedure also increases the number of vanishing moments of dual wavelets. It can also improve the regularity of the dual wavelet. A lifting design is computed by adjusting the number of vanishing moments. The stability and regularity of the resulting biorthogonal wavelets are measured a posteriori, hoping for the best. This is the main weakness of this wavelet design procedure.
The generalized lifting scheme was developed by Joel Solé and Philippe Salembier and published in Solé's PhD dissertation.[4] It is based on the classical lifting scheme and generalizes it by breaking out a restriction hidden in the scheme structure. The classical lifting scheme has three kinds of operations:
The scheme is invertible due to its structure. In the receiver, the update step is computed first, with its result added back to the even samples, and then it is possible to compute exactly the same prediction to add to the odd samples. In order to recover the original signal, the lazy wavelet transform has to be inverted. The generalized lifting scheme has the same three kinds of operations. However, this scheme avoids the addition-subtraction restriction of classical lifting, which has some consequences. For example, the design of all steps must guarantee the scheme's invertibility (which is not guaranteed once the addition-subtraction restriction is dropped).
The generalized lifting scheme is a dyadic transform that follows these rules:
Obviously, these mappings cannot be arbitrary functions. In order to guarantee the invertibility of the scheme itself, all mappings involved in the transform must be invertible. In the case where the mappings are between finite sets (discrete bounded-value signals), this condition is equivalent to saying that the mappings are injective (one-to-one). Moreover, if a mapping goes from one set to a set of the same cardinality, it should be bijective.
In the generalized lifting scheme the addition/subtraction restriction is avoided by including this step in the mapping. In this way the classical lifting scheme is generalized.
Some designs have been developed for the prediction-step mapping. The update-step design has not been considered as thoroughly, because it remains to be answered how exactly the update step is useful. The main application of this technique is image compression.[5][6][7][8]
|
https://en.wikipedia.org/wiki/Lifting_scheme
|
In cryptography, format-preserving encryption (FPE) refers to encrypting in such a way that the output (the ciphertext) is in the same format as the input (the plaintext). The meaning of "format" varies. Typically only finite sets of characters are used; numeric, alphabetic or alphanumeric. For example:
For such finite domains, and for the purposes of the discussion below, the cipher is equivalent to a permutation of N integers {0, ..., N−1}, where N is the size of the domain.
One motivation for using FPE comes from the problems associated with integrating encryption into existing applications, with well-defined data models. A typical example would be a credit card number, such as 1234567812345670 (16 bytes long, digits only).
Adding encryption to such applications might be challenging if data models are to be changed, as it usually involves changing field length limits or data types. For example, output from a typical block cipher would turn a credit card number into a hexadecimal (e.g. 0x96a45cbcf9c2a9425cde9e274948cb67, 34 bytes, hexadecimal digits) or Base64 value (e.g. lqRcvPnCqUJc3p4nSUjLZw==, 24 bytes, alphanumeric and special characters), which will break any existing applications expecting the credit card number to be a 16-digit number.
Apart from simple formatting problems, using AES-128-CBC, this credit card number might get encrypted to the hexadecimal value 0xde015724b081ea7003de4593d792fd8b695b39e095c98f3a220ff43522a2df02. In addition to the problems caused by creating invalid characters and increasing the size of the data, data encrypted using the CBC mode of an encryption algorithm also changes its value when it is decrypted and encrypted again. This happens because the random seed value that is used to initialize the encryption algorithm and is included as part of the encrypted value is different for each encryption operation. Because of this, it is impossible to use data that has been encrypted with the CBC mode as a unique key to identify a row in a database.
FPE attempts to simplify the transition process by preserving the formatting and length of the original data, allowing a drop-in replacement of plaintext values with their ciphertexts in legacy applications.
Although a truly random permutation is the ideal FPE cipher, for large domains it is infeasible to pre-generate and remember a truly random permutation. So the problem of FPE is to generate a pseudorandom permutation from a secret key, in such a way that the computation time for a single value is small (ideally constant, but most importantly smaller than O(N)).
An n-bit block cipher technically is an FPE on the set {0, ..., 2^n−1}. If an FPE is needed on one of these standard-sized sets (for example, n = 64 for DES and n = 128 for AES), a block cipher of the right size can be used.
However, in typical usage, a block cipher is used in a mode of operation that allows it to encrypt arbitrarily long messages, and with an initialization vector as discussed above. In this mode, a block cipher is not an FPE.
In cryptographic literature (see most of the references below), the measure of a "good" FPE is whether an attacker can distinguish the FPE from a truly random permutation. Various types of attackers are postulated, depending on whether they have access to oracles or known ciphertext/plaintext pairs.
In most of the approaches listed here, a well-understood block cipher (such as AES) is used as a primitive to take the place of an ideal random function. This has the advantage that incorporation of a secret key into the algorithm is easy. Where AES is mentioned in the following discussion, any other good block cipher would work as well.
Implementing FPE with security provably related to that of the underlying block cipher was first undertaken in a paper by cryptographers John Black and Phillip Rogaway,[1] which described three ways to do this. They proved that each of these techniques is as secure as the block cipher that is used to construct it. This means that if the AES algorithm is used to create an FPE algorithm, then the resulting FPE algorithm is as secure as AES because an adversary capable of defeating the FPE algorithm can also defeat the AES algorithm. Therefore, if AES is secure, then the FPE algorithms constructed from it are also secure. In all of the following, E denotes the AES encryption operation that is used to construct an FPE algorithm and F denotes the FPE encryption operation.
One simple way to create an FPE algorithm on {0, ..., N−1} is to assign a pseudorandom weight to each integer, then sort by weight. The weights are defined by applying an existing block cipher to each integer. Black and Rogaway call this technique a "prefix cipher" and showed it was provably as good as the block cipher used.
Thus, to create an FPE on the domain {0, 1, 2, 3}, given a key K, apply AES(K) to each integer, giving, for example,
Sorting [0,1,2,3] by weight gives [3,1,2,0], so the cipher is
This method is only useful for small values of N. For larger values, the size of the lookup table and the required number of encryptions to initialize the table gets too big to be practical.
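A toy sketch of the prefix cipher, with keyed SHA-256 standing in for the block cipher that would supply the weights (the key and domain size are illustrative):

```python
import hashlib

# Prefix cipher: assign each integer a pseudorandom weight, then sort
# by weight; an integer's ciphertext is the rank of its weight.
def prefix_cipher(key, n):
    weight = lambda x: hashlib.sha256(key + bytes([x])).digest()
    table = sorted(range(n), key=weight)   # indices ordered by weight
    enc = [0] * n
    for rank, x in enumerate(table):
        enc[x] = rank                      # x encrypts to its rank
    return enc

enc = prefix_cipher(b"secret key", 4)
print(enc)                       # some permutation of {0, 1, 2, 3}
assert sorted(enc) == [0, 1, 2, 3]
```

The whole table must be built up front, which is exactly the cost the paragraph above describes.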
If there is a set M of allowed values within the domain of a pseudorandom permutation P (for example, P can be a block cipher like AES), an FPE algorithm can be created from the block cipher by repeatedly applying the block cipher until the result is one of the allowed values (within M).
The recursion is guaranteed to terminate. (Because P is one-to-one and the domain is finite, repeated application of P forms a cycle, so starting with a point in M the cycle will eventually terminate in M.)
This has the advantage that the elements of M do not have to be mapped to a consecutive sequence {0, ..., N−1} of integers. It has the disadvantage, when M is much smaller than P's domain, that too many iterations might be required for each operation. If P is a block cipher of a fixed size, such as AES, this is a severe restriction on the sizes of M for which this method is efficient.
For example, an application may want to encrypt 100-bit values with AES in a way that creates another 100-bit value. With this technique, AES-128-ECB encryption can be applied until it reaches a value which has all of its 28 highest bits set to 0, which will take an average of 2^28 iterations to happen.
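A sketch of cycle walking over a small domain; a seeded shuffle stands in for the keyed permutation P, and M is taken, purely for illustration, to be the values below 100:

```python
import random

# P: a fixed pseudorandom permutation of {0, ..., 255} (a seeded
# shuffle standing in for a keyed block cipher), plus its inverse.
domain = list(range(256))
perm = domain[:]
random.Random(1).shuffle(perm)          # P
inv = [0] * 256
for x, y in enumerate(perm):
    inv[y] = x                          # P^-1

M = set(range(100))                     # the allowed values

def encrypt(x):
    y = perm[x]
    while y not in M:                   # walk the cycle until inside M
        y = perm[y]
    return y

def decrypt(y):
    x = inv[y]
    while x not in M:                   # walk back the same cycle
        x = inv[x]
    return x

assert all(encrypt(m) in M for m in M)
assert all(decrypt(encrypt(m)) == m for m in M)
```

Decryption works because every intermediate point on the walk lies outside M, so walking backwards with P^-1 stops at exactly the original plaintext.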
It is also possible to make an FPE algorithm using a Feistel network. A Feistel network needs a source of pseudo-random values for the sub-keys for each round, and the output of the AES algorithm can be used as these pseudo-random values. When this is done, the resulting Feistel construction is good if enough rounds are used.[2]
One way to implement an FPE algorithm using AES and a Feistel network is to use as many bits of AES output as are needed to equal the length of the left or right halves of the Feistel network. If a 24-bit value is needed as a sub-key, for example, it is possible to use the lowest 24 bits of the output of AES for this value.
This may not result in the output of the Feistel network preserving the format of the input, but it is possible to iterate the Feistel network in the same way that the cycle-walking technique does to ensure that format can be preserved. Because it is possible to adjust the size of the inputs to a Feistel network, it is possible to make it very likely that this iteration ends very quickly on average. In the case of credit card numbers, for example, there are 10^15 possible 16-digit credit card numbers (accounting for the redundant check digit), and because 10^15 ≈ 2^49.8, using a 50-bit-wide Feistel network along with cycle walking will create an FPE algorithm that encrypts fairly quickly on average.
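A sketch combining a small Feistel network with cycle walking to encrypt 4-digit values. HMAC-SHA256 stands in for the AES-derived round function, and the key, round count, and widths are illustrative choices only:

```python
import hashlib
import hmac

# Domain size 10^4 <= 2^14, so a 14-bit Feistel (7-bit halves) is
# iterated until the result lands back inside the 4-digit domain.
KEY = b"demo key"
HALF = 7
MASK = (1 << HALF) - 1
N = 10 ** 4

def round_fn(half, rnd):
    # HMAC-SHA256 as a stand-in pseudo-random round function.
    return hmac.new(KEY, bytes([rnd, half]), hashlib.sha256).digest()[0] & MASK

def feistel_enc(x, rounds=10):
    l, r = x >> HALF, x & MASK
    for i in range(rounds):
        l, r = r, l ^ round_fn(r, i)
    return (l << HALF) | r

def feistel_dec(x, rounds=10):
    l, r = x >> HALF, x & MASK
    for i in reversed(range(rounds)):
        l, r = r ^ round_fn(l, i), l
    return (l << HALF) | r

def encrypt(x):
    y = feistel_enc(x)
    while y >= N:              # cycle-walk back into the 4-digit domain
        y = feistel_enc(y)
    return y

def decrypt(y):
    x = feistel_dec(y)
    while x >= N:
        x = feistel_dec(x)
    return x

assert decrypt(encrypt(1234)) == 1234
```

Since 10^4 / 2^14 ≈ 0.61, each encryption needs fewer than two Feistel applications on average, mirroring the credit-card reasoning above.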
A Thorp shuffle is like an idealized card-shuffle, or equivalently a maximally-unbalanced Feistel cipher where one side is a single bit. It is easier to prove security for unbalanced Feistel ciphers than for balanced ones.[3]
For domain sizes that are a power of two, and an existing block cipher with a smaller block size, a new cipher may be created using VIL mode as described by Bellare and Rogaway.[4]
The Hasty Pudding Cipher uses custom constructions (not depending on existing block ciphers as primitives) to encrypt arbitrary finite small domains.
The FFSEM mode of AES (specification[5]) that has been accepted for consideration by NIST uses the Feistel network construction of Black and Rogaway described above, with AES for the round function, with one slight modification: a single key is used and is tweaked slightly for each round.
As of February 2010, FFSEM has been superseded by the FFX mode written by Mihir Bellare, Phillip Rogaway, and Terence Spies (specification,[6][7] NIST Block Cipher Modes Development, 2010).
In the JPEG 2000 standard, the marker codes (in the range 0xFF90 through 0xFFFF) should not appear in the plaintext or ciphertext. The simple modulo-0xFF90 technique cannot be applied to solve the JPEG 2000 encryption problem. For example, the ciphertext words 0x23FF and 0x9832 are valid, but their combination 0x23FF9832 becomes invalid since it introduces the marker code 0xFF98. Similarly, the simple cycle-walking technique cannot be applied to solve the JPEG 2000 encryption problem, since two valid ciphertext blocks may give invalid ciphertext when they are combined. For example, if the first ciphertext block ends with bytes "...30FF" and the second ciphertext block starts with bytes "9832...", then the marker code "0xFF98" would appear in the ciphertext.
Two mechanisms for format-preserving encryption of JPEG 2000 were given in the paper "Efficient and Secure Encryption Schemes for JPEG2000"[8] by Hongjun Wu and Di Ma. To perform format-preserving encryption of JPEG 2000, the technique is to exclude the byte "0xFF" in the encryption and decryption. One JPEG 2000 encryption mechanism then performs modulo-n addition with a stream cipher; another JPEG 2000 encryption mechanism performs the cycle-walking technique with a block cipher.
Several FPE constructs are based on adding the output of a standard cipher, modulo n, to the data to be encrypted, with various methods of unbiasing the result. The modulo-n addition shared by many of the constructs is the immediately obvious solution to the FPE problem (thus its use in a number of cases), with the main differences being the unbiasing mechanisms used.
Section 8 of FIPS 74, Federal Information Processing Standards Publication 1981 Guidelines for Implementing and Using the NBS Data Encryption Standard,[9] describes a way to use the DES encryption algorithm in a manner that preserves the format of the data via modulo-n addition followed by an unbiasing operation. This standard was withdrawn on May 19, 2005, so the technique should be considered obsolete in terms of being a formal standard.
Another early mechanism for format-preserving encryption was Peter Gutmann's "Encrypting data with a restricted range of values",[10] which again performs modulo-n addition on any cipher with some adjustments to make the result uniform, with the resulting encryption being as strong as the underlying encryption algorithm on which it is based.
The paper "Using Datatype-Preserving Encryption to Enhance Data Warehouse Security"[11] by Michael Brightwell and Harry Smith describes a way to use the DES encryption algorithm in a way that preserves the format of the plaintext. This technique doesn't appear to apply an unbiasing step as do the other modulo-n techniques referenced here.
The paper "Format-Preserving Encryption"[12] by Mihir Bellare and Thomas Ristenpart describes using "nearly balanced" Feistel networks to create secure FPE algorithms.
The paper "Format Controlling Encryption Using Datatype Preserving Encryption"[13]by Ulf Mattsson describes other ways to create FPE algorithms.
An example of an FPE algorithm is FNR (Flexible Naor and Reingold).[14]
NIST Special Publication 800-38G, "Recommendation for Block Cipher Modes of Operation: Methods for Format-Preserving Encryption"[15]specifies two methods: FF1 and FF3. Details on the proposals submitted for each can be found at the NIST Block Cipher Modes Development site,[16]including patent and test vector information. Sample values are available for both FF1 and FF3.[17]
Another mode was included in the draft NIST guidance but was removed before final publication.
Korea has also developed an FPE standard, FEA-1 and FEA-2.
Open-source implementations of FF1 and FF3 are publicly available in C, Go, Java, Node.js, Python, C#/.Net, and Rust.
|
https://en.wikipedia.org/wiki/Format-preserving_encryption
|
The Lai–Massey scheme is a cryptographic structure used in the design of block ciphers,[1][2] an alternative to the Feistel network for converting a non-invertible keyed round function into an invertible keyed cipher. It is used in IDEA and IDEA NXT. The scheme was originally introduced by Xuejia Lai[3] with the assistance of James L. Massey, hence the scheme's name, Lai–Massey.
The Lai–Massey scheme is similar to a Feistel network in design, but in addition to using a non-invertible round function whose input and output is half the data block size, each round uses a full-width invertible half-round function. Either, or preferably both, of the functions may take a key input as well.
Initially, the inputs are passed through the half-round function. In each round, the difference between the inputs is passed to the round function along with a sub-key, and the result from the round function is then added to each input. The inputs are then passed through the half-round function. This is then repeated a fixed number of times, and the final output is the encrypted data. Due to its design, it has an advantage over a substitution–permutation network since the round function does not need to be inverted (only the half-round function does), which makes the scheme easier to invert and allows the round function to be arbitrarily complex. The encryption and decryption processes are fairly similar; decryption instead requires a reversal of the key schedule, an inverted half-round function, and that the round function's output be subtracted instead of added.
Let F be the round function and H a half-round function, and let K0, K1, ..., Kn be the sub-keys for the rounds 0, 1, ..., n respectively.
Then the basic operation is as follows:
Split the plaintext block into two equal pieces, (L0, R0).
For each round i = 0, 1, ..., n, compute
where Ti = F(Li′ − Ri′, Ki), and (L0′, R0′) = H(L0, R0).
Then the ciphertext is (Ln+1, Rn+1) = (Ln+1′, Rn+1′).
Decryption of a ciphertext (L_{n+1}, R_{n+1}) is accomplished by computing, for i = n, n−1, …, 0,
(L'_i, R'_i) = H^{−1}(L'_{i+1} − T_i, R'_{i+1} − T_i)
where T_i = F(L'_{i+1} − R'_{i+1}, K_i), and (L'_{n+1}, R'_{n+1}) = H^{−1}(L_{n+1}, R_{n+1}).
Then (L_0, R_0) = (L'_0, R'_0) is the plaintext again.
The Lai–Massey scheme offers security properties similar to those of the Feistel structure. It also shares its advantage over a substitution–permutation network that the round function F does not have to be invertible.
The half-round function is required to prevent a trivial distinguishing attack (without it, L_0 − R_0 = L_{n+1} − R_{n+1}). It commonly applies an orthomorphism σ to the left-hand side, that is,
H(L, R) = (σ(L), R)
where both σ and x ↦ σ(x) − x are permutations (in the mathematical sense, that is, bijections – not permutation boxes). Since there are no orthomorphisms for bit blocks (groups of size 2^n), "almost orthomorphisms" are used instead.
H may depend on the key. If it does not, the last application can be omitted, since its inverse is known anyway. The last application is commonly called "round n.5" for a cipher that otherwise has n rounds.
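The operations above can be sketched in a few lines of Python. This is a toy model for illustration only, not a secure cipher: the halves live in Z_M for an odd modulus M, so that σ(x) = 2x mod M is a genuine orthomorphism (both σ and σ(x) − x = x are bijections, sidestepping the bit-block problem noted above), and the round function F is an arbitrary keyed squaring map invented here for demonstration.

```python
# Toy Lai–Massey scheme (illustrative only, NOT a secure cipher).
M = 65521  # an odd (prime) modulus, assumed for this sketch

def F(x, k):
    """Round function: any keyed map; it need not be invertible."""
    return (x * x + k) % M

def sigma(x):
    return (2 * x) % M                  # orthomorphism on Z_M

def sigma_inv(x):
    return (x * pow(2, -1, M)) % M      # multiply by 2^{-1} mod M

def H(l, r):                            # half-round: orthomorphism on the left half
    return sigma(l), r

def H_inv(l, r):
    return sigma_inv(l), r

def encrypt(l, r, keys):
    l, r = H(l, r)                              # initial half-round
    for k in keys:
        t = F((l - r) % M, k)                   # T_i = F(L'_i - R'_i, K_i)
        l, r = H((l + t) % M, (r + t) % M)      # add T_i to both halves, then H
    return l, r

def decrypt(l, r, keys):
    l, r = H_inv(l, r)
    for k in reversed(keys):
        t = F((l - r) % M, k)   # the difference survives adding T_i to both halves
        l, r = H_inv((l - t) % M, (r - t) % M)
    return l, r

c = encrypt(123, 456, [7, 8, 9])
print(decrypt(c[0], c[1], [7, 8, 9]))   # (123, 456)
```

Note how decryption reverses the key schedule and replaces the additions by subtractions, exactly as described above, while F itself is never inverted.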
|
https://en.wikipedia.org/wiki/Lai%E2%80%93Massey_scheme
|
Competitive analysis is a method invented for analyzing online algorithms, in which the performance of an online algorithm (which must satisfy an unpredictable sequence of requests, completing each request without being able to see the future) is compared to the performance of an optimal offline algorithm that can view the sequence of requests in advance. An algorithm is competitive if its competitive ratio – the ratio between its performance and the offline algorithm's performance – is bounded. Unlike traditional worst-case analysis, where the performance of an algorithm is measured only for "hard" inputs, competitive analysis requires that an algorithm perform well both on hard and easy inputs, where "hard" and "easy" are defined by the performance of the optimal offline algorithm.
For many algorithms, performance depends not only on the size of the inputs, but also on their values. For example, sorting an array of elements varies in difficulty depending on the initial order. Such data-dependent algorithms are analysed for average-case and worst-case data. Competitive analysis is a way of doing worst-case analysis for online and randomized algorithms, which are typically data dependent.
In competitive analysis, one imagines an "adversary" which deliberately chooses difficult data, to maximize the ratio of the cost of the algorithm being studied and some optimal algorithm. When considering a randomized algorithm, one must further distinguish between an oblivious adversary, which has no knowledge of the random choices made by the algorithm pitted against it, and an adaptive adversary, which has full knowledge of the algorithm's internal state at any point during its execution. (For a deterministic algorithm, there is no difference; either adversary can simply compute what state that algorithm must have at any time in the future, and choose difficult data accordingly.)
For example, the quicksort algorithm chooses one element, called the "pivot", that is, on average, not too far from the center value of the data being sorted. Quicksort then separates the data into two piles, one of which contains all elements with value less than the value of the pivot, and the other containing the rest of the elements. If quicksort chooses the pivot in some deterministic fashion (for instance, always choosing the first element in the list), then it is easy for an adversary to arrange the data beforehand so that quicksort will perform in worst-case time. If, however, quicksort chooses some random element to be the pivot, then an adversary without knowledge of what random numbers are coming up cannot arrange the data to guarantee worst-case execution time for quicksort.
The classic online problem first analysed with competitive analysis (Sleator & Tarjan 1985) is the list update problem: given a list of items and a sequence of requests for the various items, minimize the cost of accessing the list, where the elements closer to the front of the list cost less to access. (Typically, the cost of accessing an item is equal to its position in the list.) After an access, the list may be rearranged. Most rearrangements have a cost. The Move-To-Front algorithm simply moves the requested item to the front after the access, at no cost. The Transpose algorithm swaps the accessed item with the item immediately before it, also at no cost. Classical methods of analysis showed that Transpose is optimal in certain contexts. In practice, Move-To-Front performed much better. Competitive analysis was used to show that an adversary can make Transpose perform arbitrarily badly compared to an optimal algorithm, whereas Move-To-Front can never be made to incur more than twice the cost of an optimal algorithm.
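The adversarial gap between Transpose and Move-To-Front is easy to demonstrate by simulation. The sketch below (illustrative; the item values and request sequence are invented for the demonstration) alternately requests the last two items of a four-item list: under Transpose they simply keep swapping places at the back, so every access costs the full list length, while Move-To-Front quickly brings them to the front.

```python
# List update problem: the cost of an access is the item's 1-based position;
# the rearrangement performed by each policy is free.
def access_cost(start, policy, requests):
    lst, cost = list(start), 0
    for r in requests:
        i = lst.index(r)
        cost += i + 1
        if policy == "mtf":
            lst.insert(0, lst.pop(i))                   # move to front
        elif policy == "transpose" and i > 0:
            lst[i - 1], lst[i] = lst[i], lst[i - 1]     # swap with predecessor
    return cost

adversarial = [3, 2] * 10          # alternately request the two rear items
print(access_cost([0, 1, 2, 3], "transpose", adversarial))  # 80: always cost 4
print(access_cost([0, 1, 2, 3], "mtf", adversarial))        # 44: soon cost 2
```

Lengthening the list and the request sequence makes Transpose's ratio against the optimum grow without bound, while Move-To-Front stays within its factor-of-two guarantee.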
In the case of online requests from a server, competitive algorithms are used to overcome uncertainties about the future. That is, the algorithm does not "know" the future, while the imaginary adversary (the "competitor") "knows". Similarly, competitive algorithms were developed for distributed systems, where the algorithm has to react to a request arriving at one location, without "knowing" what has just happened in a remote location. This setting was presented in (Awerbuch, Kutten & Peleg 1992).
|
https://en.wikipedia.org/wiki/Competitive_analysis_(online_algorithm)
|
The k-server problem is a problem of theoretical computer science in the category of online algorithms, one of two abstract problems on metric spaces that are central to the theory of competitive analysis (the other being metrical task systems). In this problem, an online algorithm must control the movement of a set of k servers, represented as points in a metric space, and handle requests that are also in the form of points in the space. As each request arrives, the algorithm must determine which server to move to the requested point. The goal of the algorithm is to keep the total distance all servers move small, relative to the total distance the servers could have moved by an optimal adversary who knows in advance the entire sequence of requests.
The problem was first posed by Mark Manasse, Lyle A. McGeoch and Daniel Sleator (1988).[1] The most prominent open question concerning the k-server problem is the so-called k-server conjecture, also posed by Manasse et al. This conjecture states that there is an algorithm for solving the k-server problem in an arbitrary metric space and for any number k of servers that has competitive ratio exactly k. Manasse et al. were able to prove their conjecture when k = 2, and for more general values of k for some metric spaces restricted to have exactly k + 1 points. Chrobak and Larmore (1991) proved the conjecture for tree metrics. The special case of metrics in which all distances are equal is called the paging problem because it models the problem of page replacement algorithms in memory caches, and was already known to have a k-competitive algorithm (Sleator and Tarjan 1985). Fiat et al. (1990) first proved that there exists an algorithm with finite competitive ratio for any constant k and any metric space, and finally Koutsoupias and Papadimitriou (1995) proved that the Work Function Algorithm (WFA) has competitive ratio 2k − 1. However, despite the efforts of many other researchers, reducing the competitive ratio to k or providing an improved lower bound remains open as of 2014. The most commonly believed scenario is that the Work Function Algorithm is k-competitive. In this direction, in 2000 Bartal and Koutsoupias showed that this is true for some special cases (if the metric space is a line, a weighted star, or any metric of k + 2 points).
The k-server conjecture also has a version for randomized algorithms, which asks whether there exists a randomized algorithm with competitive ratio O(log k) in any arbitrary metric space (with at least k + 1 points).[2] In 2011, a randomized algorithm with competitive bound Õ(log^2 k · log^3 n) was found.[3][4] In 2017, a randomized algorithm with competitive bound O(log^6 k) was announced,[5] but was later retracted.[6] In 2022 it was shown that the randomized version of the conjecture is false.[2][7][8]
To make the problem more concrete, imagine sending customer support technicians to customers when they have trouble with their equipment. In our example problem there are two technicians, Mary and Noah, serving three customers, in San Francisco, California; Washington, DC; and Baltimore, Maryland. As a k-server problem, the servers are the technicians, so k = 2 and this is a 2-server problem. Washington and Baltimore are 35 miles (56 km) apart, while San Francisco is 3,000 miles (4,800 km) away from both, and initially Mary and Noah are both in San Francisco.
Consider an algorithm for assigning servers to requests that always assigns the closest server to the request, and suppose that each weekday morning the customer in Washington needs assistance while each weekday afternoon the customer in Baltimore needs assistance, and that the customer in San Francisco never needs assistance. Then, our algorithm will assign one of the servers (say Mary) to the Washington area, after which she will always be the closest server and always be assigned to all customer requests. Thus, every day our algorithm incurs the cost of traveling between Washington and Baltimore and back, 70 miles (110 km). After a year of this request pattern, the algorithm will have incurred 20,500 miles (33,000 km) travel: 3,000 to send Mary to the East Coast, and 17,500 for the trips between Washington and Baltimore. On the other hand, an optimal adversary who knows the future request schedule could have sent both Mary and Noah to Washington and Baltimore respectively, paying 6,000 miles (9,700 km) of travel once but then avoiding any future travel costs. The competitive ratio of our algorithm on this input is 20,500/6,000 or approximately 3.4, and by adjusting the parameters of this example the competitive ratio of this algorithm can be made arbitrarily large.
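The travel-cost arithmetic can be checked with a short simulation of the greedy "closest server" policy (a sketch using the distances and the 250-weekday year from the example above; the greedy total comes out as 20,465 miles rather than the rounded 20,500, because Mary's first 3,000-mile trip replaces one morning leg to Washington).

```python
# Greedy "closest server" policy on the 2-server example.
SF, DC, BAL = "SF", "DC", "BAL"
dist = {frozenset((DC, BAL)): 35,
        frozenset((SF, DC)): 3000,
        frozenset((SF, BAL)): 3000}

def d(a, b):
    return 0 if a == b else dist[frozenset((a, b))]

servers = [SF, SF]              # Mary and Noah both start in San Francisco
requests = [DC, BAL] * 250      # 250 weekdays: DC each morning, Baltimore each afternoon

greedy_cost = 0
for r in requests:
    i = min(range(len(servers)), key=lambda j: d(servers[j], r))
    greedy_cost += d(servers[i], r)
    servers[i] = r              # the chosen server moves to the request

opt_cost = d(SF, DC) + d(SF, BAL)   # clairvoyant: park one server in each East Coast city
print(greedy_cost)                  # 20465 = 3000 + 499 * 35
print(opt_cost)                     # 6000
print(round(greedy_cost / opt_cost, 1))   # 3.4
```

Stretching the request sequence over more days drives the ratio up without limit, which is the sense in which greedy assignment is not competitive.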
Thus we see that always assigning the closest server can be far from optimal. On the other hand, it seems foolish for an algorithm that does not know future requests to send both of its technicians away from San Francisco, as the next request could be in that city and it would have to send someone back immediately. So it seems that it is difficult or impossible for a k-server algorithm to perform well relative to its adversary. However, for the 2-server problem, there exists an algorithm that always has a total travel distance of at most twice the adversary's distance.
The k-server conjecture states that similar solutions exist for problems with any larger number of technicians.
|
https://en.wikipedia.org/wiki/K-server_problem
|
In computer science, an online algorithm[1] is one that can process its input piece-by-piece in a serial fashion, i.e., in the order that the input is fed to the algorithm, without having the entire input available from the start. In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand.
In operations research, the area in which online algorithms are developed is called online optimization.
As an example, consider the sorting algorithms selection sort and insertion sort: selection sort repeatedly selects the minimum element from the unsorted remainder and places it at the front, which requires access to the entire input; it is thus an offline algorithm. On the other hand, insertion sort considers one input element per iteration and produces a partial solution without considering future elements. Thus insertion sort is an online algorithm.
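The online character of insertion sort can be made explicit by writing it as a generator that consumes the input one element at a time and yields a valid partial solution after every element (a sketch using the standard bisect module):

```python
import bisect

def online_insertion_sort(stream):
    """Consume elements one at a time; never looks ahead in the input."""
    sorted_so_far = []
    for x in stream:                      # each element "arrives" online
        bisect.insort(sorted_so_far, x)   # insert at its correct position
        yield list(sorted_so_far)         # a correct partial answer at every step

for step in online_insertion_sort([5, 2, 4, 1]):
    print(step)        # [5] then [2, 5] then [2, 4, 5] then [1, 2, 4, 5]
```

Selection sort cannot be written this way: choosing the minimum of the unsorted remainder requires the whole input up front.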
Note that the final result of an insertion sort is optimal, i.e., a correctly sorted list. For many problems, online algorithms cannot match the performance of offline algorithms. If the ratio between the performance of an online algorithm and an optimal offline algorithm is bounded, the online algorithm is called competitive.[1]
Not every offline algorithm has an efficient online counterpart.
In grammar theory, online algorithms are associated with straight-line grammars.
Because it does not know the whole input, an online algorithm is forced to make decisions that may later turn out not to be optimal, and the study of online algorithms has focused on the quality of decision-making that is possible in this setting. Competitive analysis formalizes this idea by comparing the relative performance of an online and offline algorithm for the same problem instance. Specifically, the competitive ratio of an algorithm is defined as the worst-case ratio of its cost divided by the optimal cost, over all possible inputs. The competitive ratio of an online problem is the best competitive ratio achieved by an online algorithm. Intuitively, the competitive ratio of an algorithm gives a measure of the quality of solutions produced by this algorithm, while the competitive ratio of a problem shows the importance of knowing the future for this problem.
A problem exemplifying the concepts of online algorithms is the Canadian traveller problem. The goal of this problem is to minimize the cost of reaching a target in a weighted graph where some of the edges are unreliable and may have been removed from the graph. However, that an edge has been removed (failed) is only revealed to the traveller when she/he reaches one of the edge's endpoints. The worst case for this problem is simply that all of the unreliable edges fail and the problem reduces to the usual shortest path problem. An alternative analysis of the problem can be made with the help of competitive analysis. For this method of analysis, the offline algorithm knows in advance which edges will fail and the goal is to minimize the ratio between the online and offline algorithms' performance. This problem is PSPACE-complete.
There are many formal problems that offer more than one online algorithm as a solution.
|
https://en.wikipedia.org/wiki/Online_algorithm
|
Buffer overflow protection is any of various techniques used during software development to enhance the security of executable programs by detecting buffer overflows on stack-allocated variables, and preventing them from causing program misbehavior or from becoming serious security vulnerabilities. A stack buffer overflow occurs when a program writes to a memory address on the program's call stack outside of the intended data structure, which is usually a fixed-length buffer. Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than what is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, which could lead to program crashes, incorrect operation, or security issues.
Typically, buffer overflow protection modifies the organization of stack-allocated data so it includes a canary value that, when destroyed by a stack buffer overflow, shows that a buffer preceding it in memory has been overflowed. By verifying the canary value, execution of the affected program can be terminated, preventing it from misbehaving or from allowing an attacker to take control over it. Other buffer overflow protection techniques include bounds checking, which checks accesses to each allocated block of memory so they cannot go beyond the actually allocated space, and tagging, which ensures that memory allocated for storing data cannot contain executable code.
Overfilling a buffer allocated on the stack is more likely to influence program execution than overfilling a buffer on the heap because the stack contains the return addresses for all active function calls. However, similar implementation-specific protections also exist against heap-based overflows.
There are several implementations of buffer overflow protection, including those for the GNU Compiler Collection, LLVM, Microsoft Visual Studio, and other compilers.
A stack buffer overflow occurs when a program writes to a memory address on the program's call stack outside of the intended data structure, which is usually a fixed-length buffer. Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than what is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, and in cases where the overflow was triggered by mistake, will often cause the program to crash or operate incorrectly. Stack buffer overflow is a type of the more general programming malfunction known as buffer overflow (or buffer overrun). Overfilling a buffer on the stack is more likely to derail program execution than overfilling a buffer on the heap because the stack contains the return addresses for all active function calls.[1]
Stack buffer overflow can be caused deliberately as part of an attack known as stack smashing. If the affected program is running with special privileges, or if it accepts data from untrusted network hosts (for example, a public web server), then the bug is a potential security vulnerability that allows an attacker to inject executable code into the running program and take control of the process. This is one of the oldest and more reliable methods for attackers to gain unauthorized access to a computer.[2]
Typically, buffer overflow protection modifies the organization of data in the stack frame of a function call to include a "canary" value that, when destroyed, shows that a buffer preceding it in memory has been overflowed. This provides the benefit of preventing an entire class of attacks. According to some researchers,[3] the performance impact of these techniques is negligible.
Stack-smashing protection is unable to protect against certain forms of attack. For example, it cannot protect against buffer overflows in the heap. There is no sane way to alter the layout of data within a structure; structures are expected to be the same between modules, especially with shared libraries. Any data in a structure after a buffer is impossible to protect with canaries; thus, programmers must be very careful about how they organize their variables and use their structures.
Canaries, or canary words or stack cookies, are known values that are placed between a buffer and control data on the stack to monitor buffer overflows. When the buffer overflows, the first data to be corrupted will usually be the canary, and a failed verification of the canary data will therefore alert of an overflow, which can then be handled, for example, by invalidating the corrupted data. A canary value should not be confused with a sentinel value.
The terminology is a reference to the historic practice of using canaries in coal mines, since they would be affected by toxic gases earlier than the miners, thus providing a biological warning system. Canaries are alternately known as stack cookies, which is meant to evoke the image of a "broken cookie" when the value is corrupted.
There are three types of canaries in use: terminator, random, and random XOR. Current versions of StackGuard support all three, while ProPolice supports terminator and random canaries.
Terminator canaries use the observation that most buffer overflow attacks are based on certain string operations which end at string terminators. The reaction to this observation is that the canaries are built of null terminators, CR, LF, and FF. As a result, the attacker must write a null character before writing the return address to avoid altering the canary. This prevents attacks using strcpy() and other methods that return upon copying a null character, while the undesirable result is that the canary is known. Even with the protection, an attacker could potentially overwrite the canary with its known value and control information with mismatched values, thus passing the canary check code, which is executed shortly before the specific processor's return-from-call instruction.
Random canaries are randomly generated, usually from an entropy-gathering daemon, in order to prevent an attacker from knowing their value. Usually, it is not logically possible or plausible to read the canary for exploiting; the canary is a secure value known only by those who need to know it – the buffer overflow protection code in this case.
Normally, a random canary is generated at program initialization, and stored in a global variable. This variable is usually padded by unmapped pages, so that attempting to read it using any kind of trick that exploits bugs to read off RAM causes a segmentation fault, terminating the program. It may still be possible to read the canary if the attacker knows where it is or can get the program to read from the stack.
Random XOR canaries are random canaries that are XOR-scrambled using all or part of the control data. In this way, once the canary or the control data is clobbered, the canary value is wrong.
Random XOR canaries have the same vulnerabilities as random canaries, except that the "read from stack" method of getting the canary is a bit more complicated. The attacker must get the canary, the algorithm, and the control data in order to re-generate the original canary needed to spoof the protection.
In addition, random XOR canaries can protect against a certain type of attack involving overflowing a buffer in a structure into a pointer, changing the pointer to point at a piece of control data. Because of the pointer, the control data or return value can be changed without overflowing over the canary; but because of the XOR encoding, the canary will still be wrong if the control data or return value is changed.
Although these canaries protect the control data from being altered by clobbered pointers, they do not protect any other data or the pointers themselves. Function pointers especially are a problem here, as they can be overflowed into and can execute shellcode when called.
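The mechanics can be illustrated with a toy model (pure simulation, not how a compiler actually emits the check): a "stack frame" is modelled as a byte array laid out as buffer, then canary, then return address, mirroring how stack-smashing protectors place the guard between local buffers and control data. All sizes and the address value are invented for the sketch.

```python
import os

# Toy stack-frame model: [16-byte buffer][8-byte canary][8-byte return address]
BUF, CAN, RET = 16, 8, 8

def make_frame(canary):
    frame = bytearray(BUF + CAN + RET)
    frame[BUF:BUF + CAN] = canary
    frame[BUF + CAN:] = (0x401234).to_bytes(RET, "little")   # pretend return address
    return frame

def unsafe_copy(frame, data):
    frame[:len(data)] = data          # no bounds check, like strcpy()

def canary_intact(frame, canary):
    """The check a protected function's epilogue performs before returning."""
    return bytes(frame[BUF:BUF + CAN]) == canary

canary = os.urandom(CAN)              # a "random canary" from an entropy source
frame = make_frame(canary)
unsafe_copy(frame, b"A" * 15)         # fits in the buffer
print(canary_intact(frame, canary))   # True: safe to return

frame = make_frame(canary)
unsafe_copy(frame, b"A" * 32)         # overflows through canary into return address
print(canary_intact(frame, canary))   # False (overwhelmingly likely): abort instead
```

In a real protected binary the failed check does not return at all; it calls an abort routine, which is why the overflow cannot redirect execution through the corrupted return address.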
Bounds checking is a compiler-based technique that adds run-time bounds information for each allocated block of memory, and checks all pointers against those at run-time. For C and C++, bounds checking can be performed at pointer calculation time[4] or at dereference time.[5][6][7]
Implementations of this approach use either a central repository, which describes each allocated block of memory,[4][5][6] or fat pointers,[7] which contain both the pointer and additional data, describing the region that they point to.
Tagging[8] is a compiler-based or hardware-based (requiring a tagged architecture) technique for tagging the type of a piece of data in memory, used mainly for type checking. By marking certain areas of memory as non-executable, it effectively prevents memory allocated to store data from containing executable code. Also, certain areas of memory can be marked as non-allocated, preventing buffer overflows.
Historically, tagging has been used for implementing high-level programming languages;[9] with appropriate support from the operating system, tagging can also be used to detect buffer overflows.[10] An example is the NX bit hardware feature, supported by Intel, AMD and ARM processors.
Stack-smashing protection was first implemented by StackGuard in 1997, and published at the 1998 USENIX Security Symposium.[11] StackGuard was introduced as a set of patches to the Intel x86 backend of GCC 2.7. StackGuard was maintained for the Immunix Linux distribution from 1998 to 2003, and was extended with implementations for terminator, random and random XOR canaries. StackGuard was suggested for inclusion in GCC 3.x at the GCC 2003 Summit Proceedings,[12] but this was never achieved.
From 2001 to 2005, IBM developed GCC patches for stack-smashing protection, known as ProPolice.[13] It improved on the idea of StackGuard by placing buffers after local pointers and function arguments in the stack frame. This helped avoid the corruption of pointers, preventing access to arbitrary memory locations.
Red Hat engineers identified problems with ProPolice, though, and in 2005 re-implemented stack-smashing protection for inclusion in GCC 4.1.[14][15] This work introduced the -fstack-protector flag, which protects only some vulnerable functions, and the -fstack-protector-all flag, which protects all functions whether they need it or not.[16]
In 2012, Google engineers implemented the -fstack-protector-strong flag to strike a better balance between security and performance.[17] This flag protects more kinds of vulnerable functions than -fstack-protector does, but not every function, providing better performance than -fstack-protector-all. It is available in GCC since version 4.9.[18]
All Fedora packages are compiled with -fstack-protector since Fedora Core 5, and -fstack-protector-strong since Fedora 20.[19][20] Most packages in Ubuntu are compiled with -fstack-protector since 6.10.[21] Every Arch Linux package is compiled with -fstack-protector since 2011.[22] All Arch Linux packages built since 4 May 2014 use -fstack-protector-strong.[23] Stack protection is only used for some packages in Debian,[24] and only for the FreeBSD base system since 8.0.[25] Stack protection is standard in certain operating systems, including OpenBSD,[26] Hardened Gentoo[27] and DragonFly BSD.[citation needed]
StackGuard and ProPolice cannot protect against overflows in automatically allocated structures that overflow into function pointers. ProPolice at least will rearrange the allocation order to get such structures allocated before function pointers. A separate mechanism for pointer protection was proposed in PointGuard[28] and is available on Microsoft Windows.[29]
The compiler suite from Microsoft implements buffer overflow protection since version 2003 through the /GS command-line switch, which is enabled by default since version 2005.[30] Using /GS- disables the protection.
Stack-smashing protection can be turned on by the compiler flag -qstackprotect.[31]
Clang supports the same -fstack-protector options as GCC[32] and a stronger "safe stack" (-fsanitize=safe-stack) system with similarly low performance impact.[33] Clang also has three buffer overflow detectors, namely AddressSanitizer (-fsanitize=address),[6] UBSan (-fsanitize=bounds),[34] and the unofficial SafeCode (last updated for LLVM 3.0).[35]
These systems have different tradeoffs in terms of performance penalty, memory overhead, and classes of detected bugs. Stack protection is standard in certain operating systems, including OpenBSD.[36]
Intel's C and C++ compiler supports stack-smashing protection with options similar to those provided by GCC and Microsoft Visual Studio.[37]
Fail-Safe C[7] is an open-source memory-safe ANSI C compiler that performs bounds checking based on fat pointers and object-oriented memory access.[38]
Invented by Mike Frantzen, StackGhost is a simple tweak to the register window spill/fill routines which makes buffer overflows much more difficult to exploit. It uses a unique hardware feature of the Sun Microsystems SPARC architecture (that being: deferred on-stack in-frame register window spill/fill) to detect modifications of return pointers (a common way for an exploit to hijack execution paths) transparently, automatically protecting all applications without requiring binary or source modifications. The performance impact is negligible, less than one percent. The resulting gdb issues were resolved by Mark Kettenis two years later, allowing enabling of the feature. Following this event, the StackGhost code was integrated (and optimized) into OpenBSD/SPARC.
|
https://en.wikipedia.org/wiki/Buffer_overflow_protection
|
A heap overflow, heap overrun, or heap smashing is a type of buffer overflow that occurs in the heap data area. Heap overflows are exploitable in a different manner to that of stack-based overflows. Memory on the heap is dynamically allocated at runtime and typically contains program data. Exploitation is performed by corrupting this data in specific ways to cause the application to overwrite internal structures such as linked list pointers. The canonical heap overflow technique overwrites dynamic memory allocation linkage (such as malloc metadata) and uses the resulting pointer exchange to overwrite a program function pointer.
For example, on older versions of Linux, two buffers allocated next to each other on the heap could result in the first buffer overwriting the second buffer's metadata. By setting the second buffer's in-use bit to zero and setting its length field to a small negative value (which allows null bytes to be copied), when the program calls free() on the first buffer it will attempt to merge these two buffers into a single buffer. When this happens, the buffer that is assumed to be freed will be expected to hold two pointers, FD and BK, in the first 8 bytes of the formerly allocated buffer. BK gets written into FD and can be used to overwrite a pointer.
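The FD/BK pointer exchange can be modelled abstractly. The sketch below simulates the old, unprotected unlink logic; the addresses (0x1000, 0x5000, 0x6000) and the 8/12-byte offsets are purely illustrative, not the real glibc layout.

```python
# Toy model of the classic "unlink" write primitive in an unprotected allocator.
# unlink(P) does: FD = P->fd; BK = P->bk; FD->bk = BK; BK->fd = FD.
FD_OFF, BK_OFF = 8, 12      # assumed offsets of fd/bk inside a free chunk

mem = {}                    # word-addressed "memory"

def write(addr, val): mem[addr] = val
def read(addr): return mem.get(addr, 0)

def unlink(p):
    fd = read(p + FD_OFF)
    bk = read(p + BK_OFF)
    write(fd + BK_OFF, bk)  # FD->bk = BK  <-- becomes an arbitrary write
    write(bk + FD_OFF, fd)  # BK->fd = FD

# The overflow forges the second chunk's fd/bk so that fd + BK_OFF lands on a
# function pointer (0x5000 here) and bk is the attacker's payload address (0x6000):
chunk = 0x1000
write(chunk + FD_OFF, 0x5000 - BK_OFF)
write(chunk + BK_OFF, 0x6000)

unlink(chunk)               # triggered when free() coalesces the two chunks
print(hex(read(0x5000)))    # 0x6000: the "function pointer" now targets the payload
```

Modern allocators defeat exactly this by verifying FD->bk == P and BK->fd == P ("safe unlinking") before performing the writes.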
An accidental overflow may result in data corruption or unexpected behavior by any process that accesses the affected memory area. On operating systems without memory protection, this could be any process on the system.
For example, a Microsoft JPEG GDI+ buffer overflow vulnerability could allow remote execution of code on the affected machine.[1]
iOS jailbreaking often uses heap overflows to gain arbitrary code execution.
As with buffer overflows, there are primarily three ways to protect against heap overflows. Several modern operating systems such as Windows and Linux provide some implementation of all three.
Since version 2.3.6, the GNU libc includes protections that can detect heap overflows after the fact, for example by checking pointer consistency when calling unlink. However, those protections against prior exploits were almost immediately shown to also be exploitable.[2][3] In addition, Linux has included support for ASLR since 2005, although PaX introduced a better implementation years before. Also, Linux has included support for the NX bit since 2004.
Microsoft has included protections against heap-resident buffer overflows since April 2003 in Windows Server 2003 and August 2004 in Windows XP with Service Pack 2. These mitigations were safe unlinking and heap entry header cookies. Later versions of Windows such as Vista, Server 2008 and Windows 7 include: removal of commonly targeted data structures, heap entry metadata randomization, expanded role of the heap header cookie, randomized heap base address, function pointer encoding, termination on heap corruption, and algorithm variation. Normal Data Execution Prevention (DEP) and ASLR also help to mitigate this attack.[4]
The most common detection method for heap overflows is online dynamic analysis. This method observes the runtime execution of programs to identify vulnerabilities through the detection of security breaches.[5]
|
https://en.wikipedia.org/wiki/Heap_overflow
|
Buffer overflow protection is any of various techniques used during software development to enhance the security of executable programs by detecting buffer overflows on stack-allocated variables, and preventing them from causing program misbehavior or from becoming serious security vulnerabilities. A stack buffer overflow occurs when a program writes to a memory address on the program's call stack outside of the intended data structure, which is usually a fixed-length buffer. Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, which could lead to program crashes, incorrect operation, or security issues.
Typically, buffer overflow protection modifies the organization of stack-allocated data so it includes a canary value that, when destroyed by a stack buffer overflow, shows that a buffer preceding it in memory has been overflowed. By verifying the canary value, execution of the affected program can be terminated, preventing it from misbehaving or from allowing an attacker to take control over it. Other buffer overflow protection techniques include bounds checking, which checks accesses to each allocated block of memory so they cannot go beyond the actually allocated space, and tagging, which ensures that memory allocated for storing data cannot contain executable code.
Overfilling a buffer allocated on the stack is more likely to influence program execution than overfilling a buffer on the heap, because the stack contains the return addresses for all active function calls. However, similar implementation-specific protections also exist against heap-based overflows.
There are several implementations of buffer overflow protection, including those for the GNU Compiler Collection, LLVM, Microsoft Visual Studio, and other compilers.
A stack buffer overflow occurs when a program writes to a memory address on the program's call stack outside of the intended data structure, which is usually a fixed-length buffer. Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, and in cases where the overflow was triggered by mistake, will often cause the program to crash or operate incorrectly. Stack buffer overflow is a type of the more general programming malfunction known as buffer overflow (or buffer overrun). Overfilling a buffer on the stack is more likely to derail program execution than overfilling a buffer on the heap because the stack contains the return addresses for all active function calls.[1]
Stack buffer overflow can be caused deliberately as part of an attack known as stack smashing. If the affected program is running with special privileges, or if it accepts data from untrusted network hosts (for example, a public web server), then the bug is a potential security vulnerability that allows an attacker to inject executable code into the running program and take control of the process. This is one of the oldest and more reliable methods for attackers to gain unauthorized access to a computer.[2]
Typically, buffer overflow protection modifies the organization of data in the stack frame of a function call to include a "canary" value that, when destroyed, shows that a buffer preceding it in memory has been overflowed. This provides the benefit of preventing an entire class of attacks. According to some researchers,[3] the performance impact of these techniques is negligible.
Stack-smashing protection is unable to protect against certain forms of attack. For example, it cannot protect against buffer overflows in the heap. There is no sane way to alter the layout of data within a structure; structures are expected to be the same between modules, especially with shared libraries. Any data in a structure after a buffer is impossible to protect with canaries; thus, programmers must be very careful about how they organize their variables and use their structures.
Canaries, or canary words, or stack cookies, are known values that are placed between a buffer and control data on the stack to monitor buffer overflows. When the buffer overflows, the first data to be corrupted will usually be the canary, and a failed verification of the canary data will therefore alert of an overflow, which can then be handled, for example, by invalidating the corrupted data. A canary value should not be confused with a sentinel value.
The terminology is a reference to the historic practice of using canaries in coal mines, since they would be affected by toxic gases earlier than the miners, thus providing a biological warning system. Canaries are alternately known as stack cookies, which is meant to evoke the image of a "broken cookie" when the value is corrupted.
There are three types of canaries in use: terminator, random, and random XOR. Current versions of StackGuard support all three, while ProPolice supports terminator and random canaries.
Terminator canaries use the observation that most buffer overflow attacks are based on certain string operations which end at string terminators. The reaction to this observation is that the canaries are built of null terminators, CR, LF, and FF. As a result, the attacker must write a null character before writing the return address to avoid altering the canary. This prevents attacks using strcpy() and other methods that return upon copying a null character, while the undesirable result is that the canary is known. Even with the protection, an attacker could potentially overwrite the canary with its known value and the control information with mismatched values, thus passing the canary check code, which is executed shortly before the specific processor's return-from-call instruction.
Random canaries are randomly generated, usually from an entropy-gathering daemon, in order to prevent an attacker from knowing their value. Usually, it is not logically possible or plausible to read the canary for exploiting; the canary is a secure value known only by those who need to know it: the buffer overflow protection code in this case.
Normally, a random canary is generated at program initialization and stored in a global variable. This variable is usually padded by unmapped pages, so that attempting to read it using any kind of trick that exploits bugs to read off RAM causes a segmentation fault, terminating the program. It may still be possible to read the canary if the attacker knows where it is or can get the program to read from the stack.
Random XOR canaries are random canaries that are XOR-scrambled using all or part of the control data. In this way, once the canary or the control data is clobbered, the canary value is wrong.
Random XOR canaries have the same vulnerabilities as random canaries, except that the "read from stack" method of getting the canary is a bit more complicated. The attacker must get the canary, the algorithm, and the control data in order to re-generate the original canary needed to spoof the protection.
In addition, random XOR canaries can protect against a certain type of attack involving overflowing a buffer in a structure into a pointer to change the pointer to point at a piece of control data. Because of the XOR encoding, the canary will be wrong if the control data or return value is changed. Because of the pointer, the control data or return value can be changed without overflowing over the canary.
Although these canaries protect the control data from being altered by clobbered pointers, they do not protect any other data or the pointers themselves. Function pointers especially are a problem here, as they can be overflowed into and can execute shellcode when called.
Bounds checking is a compiler-based technique that adds run-time bounds information for each allocated block of memory, and checks all pointers against those bounds at run-time. For C and C++, bounds checking can be performed at pointer calculation time[4] or at dereference time.[5][6][7]
Implementations of this approach use either a central repository, which describes each allocated block of memory,[4][5][6] or fat pointers,[7] which contain both the pointer and additional data describing the region that they point to.
Tagging[8] is a compiler-based or hardware-based (requiring a tagged architecture) technique for tagging the type of a piece of data in memory, used mainly for type checking. By marking certain areas of memory as non-executable, it effectively prevents memory allocated to store data from containing executable code. Also, certain areas of memory can be marked as non-allocated, preventing buffer overflows.
Historically, tagging has been used for implementing high-level programming languages;[9] with appropriate support from the operating system, tagging can also be used to detect buffer overflows.[10] An example is the NX bit hardware feature, supported by Intel, AMD and ARM processors.
Stack-smashing protection was first implemented by StackGuard in 1997, and published at the 1998 USENIX Security Symposium.[11] StackGuard was introduced as a set of patches to the Intel x86 backend of GCC 2.7. StackGuard was maintained for the Immunix Linux distribution from 1998 to 2003, and was extended with implementations for terminator, random and random XOR canaries. StackGuard was suggested for inclusion in GCC 3.x at the GCC 2003 Summit Proceedings,[12] but this was never achieved.
From 2001 to 2005, IBM developed GCC patches for stack-smashing protection, known as ProPolice.[13] It improved on the idea of StackGuard by placing buffers after local pointers and function arguments in the stack frame. This helped avoid the corruption of pointers, preventing access to arbitrary memory locations.
Red Hat engineers identified problems with ProPolice, though, and in 2005 re-implemented stack-smashing protection for inclusion in GCC 4.1.[14][15] This work introduced the -fstack-protector flag, which protects only some vulnerable functions, and the -fstack-protector-all flag, which protects all functions whether they need it or not.[16]
In 2012, Google engineers implemented the -fstack-protector-strong flag to strike a better balance between security and performance.[17] This flag protects more kinds of vulnerable functions than -fstack-protector does, but not every function, providing better performance than -fstack-protector-all. It is available in GCC since version 4.9.[18]
All Fedora packages are compiled with -fstack-protector since Fedora Core 5, and -fstack-protector-strong since Fedora 20.[19][20] Most packages in Ubuntu are compiled with -fstack-protector since 6.10.[21] Every Arch Linux package is compiled with -fstack-protector since 2011.[22] All Arch Linux packages built since 4 May 2014 use -fstack-protector-strong.[23] Stack protection is only used for some packages in Debian,[24] and only for the FreeBSD base system since 8.0.[25] Stack protection is standard in certain operating systems, including OpenBSD,[26] Hardened Gentoo[27] and DragonFly BSD.[citation needed]
StackGuard and ProPolice cannot protect against overflows in automatically allocated structures that overflow into function pointers. ProPolice at least will rearrange the allocation order to get such structures allocated before function pointers. A separate mechanism for pointer protection was proposed in PointGuard[28] and is available on Microsoft Windows.[29]
The compiler suite from Microsoft implements buffer overflow protection since version 2003 through the /GS command-line switch, which is enabled by default since version 2005.[30] Using /GS- disables the protection.
In IBM's XL compilers, stack-smashing protection can be turned on by the compiler flag -qstackprotect.[31]
Clang supports the same -fstack-protector options as GCC[32] and a stronger "safe stack" (-fsanitize=safe-stack) system with similarly low performance impact.[33] Clang also has three buffer overflow detectors, namely AddressSanitizer (-fsanitize=address),[6] UBSan (-fsanitize=bounds),[34] and the unofficial SafeCode (last updated for LLVM 3.0).[35]
These systems have different tradeoffs in terms of performance penalty, memory overhead, and classes of detected bugs. Stack protection is standard in certain operating systems, including OpenBSD.[36]
Intel's C and C++ compiler supports stack-smashing protection with options similar to those provided by GCC and Microsoft Visual Studio.[37]
Fail-Safe C[7] is an open-source memory-safe ANSI C compiler that performs bounds checking based on fat pointers and object-oriented memory access.[38]
Invented by Mike Frantzen, StackGhost is a simple tweak to the register window spill/fill routines which makes buffer overflows much more difficult to exploit. It uses a unique hardware feature of the Sun Microsystems SPARC architecture (namely, deferred on-stack in-frame register window spill/fill) to detect modifications of return pointers (a common way for an exploit to hijack execution paths) transparently, automatically protecting all applications without requiring binary or source modifications. The performance impact is negligible, less than one percent. The resulting gdb issues were resolved by Mark Kettenis two years later, allowing enabling of the feature. Following this event, the StackGhost code was integrated (and optimized) into OpenBSD/SPARC.
|
https://en.wikipedia.org/wiki/Stack-smashing_protection
|
Uncontrolled format string is a type of code injection vulnerability discovered around 1989 that can be used in security exploits.[1] Originally thought harmless, format string exploits can be used to crash a program or to execute harmful code. The problem stems from the use of unchecked user input as the format string parameter in certain C functions that perform formatting, such as printf(). A malicious user may use the %s and %x format tokens, among others, to print data from the call stack or possibly other locations in memory. One may also write arbitrary data to arbitrary locations using the %n format token, which commands printf() and similar functions to write the number of bytes formatted to an address stored on the stack.
A typical exploit uses a combination of these techniques to take control of the instruction pointer (IP) of a process,[2] for example by forcing a program to overwrite the address of a library function or the return address on the stack with a pointer to some malicious shellcode. The padding parameters to format specifiers are used to control the number of bytes output, and the %x token is used to pop bytes from the stack until the beginning of the format string itself is reached. The start of the format string is crafted to contain the address that the %n format token can then overwrite with the address of the malicious code to execute.
This is a common vulnerability because format bugs were previously thought harmless and resulted in vulnerabilities in many common tools. MITRE's CVE project lists roughly 500 vulnerable programs as of June 2007, and a trend analysis ranks it the 9th most-reported vulnerability type between 2001 and 2006.[3]
Format string bugs most commonly appear when a programmer wishes to output a string containing user-supplied data (either to a file, to a buffer, or to the user). The programmer may mistakenly write printf(buffer) instead of printf("%s", buffer). The first version interprets buffer as a format string, and parses any formatting instructions it may contain. The second version simply prints a string to the screen, as the programmer intended. Both versions behave identically in the absence of format specifiers in the string, which makes it easy for the mistake to go unnoticed by the developer.
Format bugs arise because C's argument passing conventions are not type-safe. In particular, the varargs mechanism allows functions to accept any number of arguments (e.g. printf) by "popping" as many arguments off the call stack as they wish, trusting the early arguments to indicate how many additional arguments are to be popped, and of what types.
Format string bugs can occur in other programming languages besides C, such as Perl, although they appear with less frequency and usually cannot be exploited to execute code of the attacker's choice.[4]
Format bugs were first noted in 1989 by the fuzz testing work done at the University of Wisconsin, which discovered an "interaction effect" in the C shell (csh) between its command history mechanism and an error routine that assumed safe string input.[5]
The use of format string bugs as an attack vector was discovered in September 1999 by Tymm Twillman during a security audit of the ProFTPD daemon.[6] The audit uncovered an snprintf that directly passed user-generated data without a format string. Extensive tests with contrived arguments to printf-style functions showed that use of this for privilege escalation was possible. This led to the first posting in September 1999 on the Bugtraq mailing list regarding this class of vulnerabilities, including a basic exploit.[6] It was still several months, however, before the security community became aware of the full dangers of format string vulnerabilities, as exploits for other software using this method began to surface. The first exploits that brought the issue to common awareness (by providing remote root access via code execution) were published simultaneously on the Bugtraq list in June 2000 by Przemysław Frasunek[7] and a person using the nickname tf8.[8] They were shortly followed by an explanation, posted by a person using the nickname lamagra.[9] "Format bugs" was posted to the Bugtraq list by Pascal Bouchareine in July 2000.[10] The seminal paper "Format String Attacks"[11] by Tim Newsham was published in September 2000, and other detailed technical explanation papers were published in September 2001, such as Exploiting Format String Vulnerabilities by team Teso.[2]
Many compilers can statically check format strings and produce warnings for dangerous or suspect formats. In the GNU Compiler Collection, the relevant compiler flags are -Wall, -Wformat, -Wno-format-extra-args, -Wformat-security, -Wformat-nonliteral, and -Wformat=2.[12]
Most of these are only useful for detecting bad format strings that are known at compile time. If the format string may come from the user or from a source external to the application, the application must validate the format string before using it. Care must also be taken if the application generates or selects format strings on the fly. If the GNU C library is used, the -D_FORTIFY_SOURCE=2 parameter can be used to detect certain types of attacks occurring at run time. The -Wformat-nonliteral check is more stringent.
Contrary to many other security issues, the root cause of format string vulnerabilities is relatively easy to detect in x86-compiled executables: for printf-family functions, proper use implies a separate argument for the format string and the arguments to be formatted. Faulty uses of such functions can be spotted by simply counting the number of arguments passed to the function; an "argument deficiency"[2] is then a strong indicator that the function was misused.
Counting the number of arguments is often made easy on x86 due to a calling convention where the caller removes the arguments that were pushed onto the stack by adding to the stack pointer after the call, so a simple examination of the stack correction yields the number of arguments passed to the printf-family function.[2]
|
https://en.wikipedia.org/wiki/Uncontrolled_format_string
|
Intel MPX (Memory Protection Extensions) is a discontinued set of extensions to the x86 instruction set architecture. With compiler, runtime library and operating system support, Intel MPX claimed to enhance the security of software by checking pointer references whose normal compile-time intentions are maliciously exploited at runtime due to buffer overflows. In practice, too many flaws have been discovered in the design for it to be useful, and support has been deprecated or removed from most compilers and operating systems. Intel has listed MPX as removed from 2019 and onward hardware in section 2.5 of its Intel 64 and IA-32 Architectures Software Developer's Manual, Volume 1.[1]
Intel MPX introduces new bounds registers, and new instruction set extensions that operate on these registers. Additionally, there is a new set of "bound tables" that store bounds beyond what can fit in the bounds registers.[2][3][4][5][6]
MPX uses four new 128-bit bounds registers, BND0 to BND3, each storing a pair of 64-bit lower bound (LB) and upper bound (UB) values of a buffer. The upper bound is stored in ones' complement form, with BNDMK (create bounds) and BNDCU (check upper bound) performing the conversion. The architecture includes two configuration registers, BNDCFGx (BNDCFGU in user space and BNDCFGS in kernel mode), and a status register, BNDSTATUS, which provides a memory address and error code in case of an exception.[7][8]
Two-level address translation is used for storing bounds in memory. The top layer consists of a Bounds Directory (BD) created on the application startup. Each BD entry is either empty or contains a pointer to a dynamically created Bounds Table (BT), which in turn contains a set of pointer bounds along with the linear addresses of the pointers. The bounds load (BNDLDX) and store (BNDSTX) instructions transparently perform the address translation and access bounds in the proper BT entry.[7][8]
Intel MPX was introduced as part of the Skylake microarchitecture.[9]
Intel's Goldmont microarchitecture also supports Intel MPX.[9]
A study performed a detailed cross-layer dissection of the MPX system stack, compared it with three prominent software-based memory protection mechanisms (AddressSanitizer, SAFECode, and SoftBound), and presented several conclusions.[8]
In addition, a review concluded MPX was not production-ready, and that AddressSanitizer was a better option.[8] A review by Kostya Serebryany at Google, AddressSanitizer's developer,[22] had similar findings.[23]
Another study[24] exploring the scope of the Spectre and Meltdown security vulnerabilities discovered that Meltdown can be used to bypass Intel MPX, using the Bound Range Exceeded (#BR) hardware exception. According to their publication, the researchers were able to leak information through a Flush+Reload covert channel from an out-of-bounds access on an array safeguarded by the MPX system. Their proof of concept has not been publicly disclosed.
|
https://en.wikipedia.org/wiki/Intel_MPX
|
Microsoft Windows SDK, and its predecessors Platform SDK and .NET Framework SDK, are software development kits (SDKs) from Microsoft that contain documentation, header files, libraries, samples and tools required to develop applications for Microsoft Windows and .NET Framework.[1] These libraries are also distributed as Windows System Files.
The Platform SDK specializes in developing applications for Windows 2000, XP and Windows Server 2003. The .NET Framework SDK is dedicated to developing applications for .NET Framework 1.1 and .NET Framework 2.0. The Windows SDK is the successor of the two and supports developing applications for Windows XP and later, as well as .NET Framework 3.0 and later.[2]
Platform SDK is the successor of the original Microsoft Windows SDK for Windows 3.1x and Microsoft Win32 SDK for Windows 9x. It was released in 1999 and is the oldest SDK. Platform SDK contains compilers, tools, documentation, header files, libraries and samples needed for software development on IA-32, x64 and IA-64 CPU architectures. .NET Framework SDK, however, came into being with .NET Framework. Starting with Windows Vista, the Platform SDK, .NET Framework SDK, Tablet PC SDK and Windows Media SDK were replaced by a new unified kit called Windows SDK. However, the .NET Framework 1.1 SDK is not included, since .NET Framework 1.1 does not ship with Windows Vista. (The Windows Media Center SDK for Windows Vista ships separately.) The DirectX SDK was merged into the Windows SDK with the release of Windows 8.[3]
The Windows SDK allows the user to specify the components to be installed and where to install them. It integrates with Visual Studio, so that multiple copies of the components that both have are not installed; however, there are compatibility caveats if either of the two is not from the same era.[4][5] Information shown can be filtered by content, such as showing only new Windows Vista content, only .NET Framework content, or showing content for a specific language or technology.
Windows SDKs are available for free; they were once available on the Microsoft Download Center but were moved to MSDN in 2012.
A developer might want to use an older SDK for a particular reason. For example, the Windows Server 2003 Platform SDK released in February 2003 was the last SDK to provide full support of Visual Studio 6.0. Some older PSDK versions can still be downloaded from the Microsoft Download center; others can be ordered on CD/DVD.[6]
Last Platform SDK to officially install on Windows 95
Also known as Microsoft Platform SDK for Windows 2000 RC2.
Includes Alpha to AXP64 cross toolset.
Last Platform SDK to fully support Visual C++ 5.0
Also known as Microsoft Platform SDK for Whistler Beta 1.
Includes preliminary tools for Itanium.
Last Platform SDK to officially support developing for Windows 95. (Does not officially install on Windows 95.)
Last Platform SDK to unofficially support developing for Windows 95. (Does not officially install on Windows 95.)
Includes ARM64 support for the Visual Studio 17.4 release
The Windows SDK documentation includes manuals documenting:
|
https://en.wikipedia.org/wiki/Microsoft_Windows_SDK
|
Valgrind (/ˈvælɡrɪnd/)[6] is a programming tool for memory debugging, memory leak detection, and profiling.
Valgrind was originally designed to be a freely licensed memory debugging tool for Linux on x86, but has since evolved to become a generic framework for creating dynamic analysis tools such as checkers and profilers.
Valgrind is in essence a virtual machine using just-in-time compilation techniques, including dynamic recompilation. Nothing from the original program ever gets run directly on the host processor. Instead, Valgrind first translates the program into a temporary, simpler form called intermediate representation (IR), which is a processor-neutral, static single assignment-based form. After the conversion, a tool (see below) is free to do whatever transformations it would like on the IR, before Valgrind translates the IR back into machine code and lets the host processor run it. Valgrind recompiles binary code to run on host and target (or simulated) CPUs of the same architecture. It also includes a GDB stub to allow debugging of the target program as it runs in Valgrind, with "monitor commands" that allow querying the Valgrind tool for various information.
A considerable amount of performance is lost in these transformations (and usually, the code the tool inserts); usually, code run with Valgrind and the "none" tool (which does nothing to the IR) runs at 20% to 25% of the speed of the normal program.[7][8]
There are multiple tools included with Valgrind (and several external ones). The default (and most used) tool is Memcheck. Memcheck inserts extra instrumentation code around almost all instructions, which keeps track of the validity (all unallocated memory starts as invalid or "undefined", until it is initialized into a deterministic state, possibly from other memory) and addressability (whether the memory address in question points to an allocated, non-freed memory block) of memory, stored in the so-called V bits and A bits, respectively. As data is moved around or manipulated, the instrumentation code keeps track of the A and V bits, so they are always correct on a single-bit level.
In addition, Memcheck replaces the standard C++ allocators and C memory allocator with its own implementation, which also includes memory guards around all allocated blocks (with the A bits set to "invalid"). This feature enables Memcheck to detect off-by-one errors where a program reads or writes outside an allocated block by a small amount. The problems Memcheck can detect and warn about include the following:
The price of this is lost performance. Programs running under Memcheck usually run 20–30 times slower[9]than running outside Valgrind and use more memory (there is a memory penalty per allocation). Thus, few developers run their code under Memcheck (or any other Valgrind tool) all the time. They most commonly use such tools either to trace down some specific bug, or to verify that there are no latent bugs (of the kind Memcheck can detect) in the code.
In addition to Memcheck, Valgrind has several other tools:[10]
exp-sgcheck (named exp-ptrcheck prior to version 3.7) was removed in version 3.16.0. It was an experimental tool to find stack and global array overrun errors, which Memcheck cannot find.
There are also several externally developed tools available. One such tool is ThreadSanitizer, another detector of race conditions.[12][13]
As of version 3.4.0, Valgrind supports Linux on x86, x86-64 and PowerPC. Support for Linux on ARMv7 (used for example in certain smartphones) was added in version 3.6.0.[14] Support for Solaris was added in version 3.11.0.[5] Support for OS X was added in version 3.5.0.[15] Support for FreeBSD x86 and amd64 was added in version 3.18.0. Support for FreeBSD aarch64 was added in version 3.23.0. There are unofficial ports to other Unix-like platforms (like OpenBSD,[16] NetBSD[17] and QNX[18]). Support for the ARM/Android platform was added in version 3.7.0.[5]
Since version 3.9.0 there is support for Linux on MIPS64 little and big endian, for MIPS DSP ASE on MIPS32, for s390x Decimal Floating Point instructions, for POWER8 (Power ISA 2.07) instructions, for Intel AVX2 instructions, for Intel Transactional Synchronization Extensions (both RTM and HLE), and initial support for Hardware Transactional Memory on POWER.[4]
The name Valgrind refers to the main entrance to Valhalla in Norse mythology.[19][20] During development (before release) the project was named Heimdall; however, that name would have conflicted with a security package.
The original author of Valgrind is Julian Seward, who in 2006 won a Google-O'Reilly Open Source Award for his work on Valgrind.[21][22]
Several others have also made significant contributions, including Nicholas Nethercote, Bart Van Assche, Florian Krohm, Tom Hughes, Philippe Waroquiers, Mark Wielaard, Paul Floyd, Petar Jovanovic, Carl Love, Cerion Armour-Brown and Ivo Raisr.[23]
It is used by a number of Linux-based projects.[24]
In addition to the performance penalty, an important limitation of Memcheck is its inability to detect all cases of bounds errors in the use of static or stack-allocated data.[25] The following code will pass the Memcheck tool in Valgrind without incident, despite containing the errors described in the comments:
The inability to detect all errors involving the access of stack-allocated data is especially noteworthy, since certain types of stack errors make software vulnerable to the classic stack smashing exploit.
|
https://en.wikipedia.org/wiki/Valgrind
|
The software release life cycle is the process of developing, testing, and distributing a software product (e.g., an operating system). It typically consists of several stages, such as pre-alpha, alpha, beta, and release candidate, before the final version, or "gold", is released to the public.
Pre-alpha refers to the early stages of development, when the software is still being designed and built. Alpha testing is the first phase of formal testing, during which the software is tested internally using white-box techniques. Beta testing is the next phase, in which the software is tested by a larger group of users, typically outside of the organization that developed it. The beta phase is focused on reducing impacts on users and may include usability testing.
After beta testing, the software may go through one or more release candidate phases, in which it is refined and tested further, before the final version is released.
Some software, particularly in the internet and technology industries, is released in a perpetual beta state, meaning that it is continuously being updated and improved, and is never considered to be a fully completed product. This approach allows for a more agile development process and enables the software to be released and used by users earlier in the development cycle.
Pre-alpha refers to all activities performed during the software project before formal testing. These activities can includerequirements analysis,software design,software development, andunit testing. In typicalopen sourcedevelopment, there are several types of pre-alpha versions.Milestoneversions include specific sets of functions and are released as soon as the feature is complete.[citation needed]
The alpha phase of the release life cycle is the first phase ofsoftware testing(alpha is the first letter of theGreek alphabet, used as the number 1). In this phase, developers generally test the software usingwhite-box techniques. Additional validation is then performed usingblack-boxorgray-boxtechniques, by another testing team. Moving to black-box testing inside the organization is known asalpha release.[1][2]
Alpha software is not thoroughly tested by the developer before it is released to customers. Alpha software may contain serious errors, and any resulting instability could cause crashes or data loss.[3] Alpha software may not contain all of the features that are planned for the final version.[4] In general, external availability of alpha software is uncommon for proprietary software, while open source software often has publicly available alpha versions. The alpha phase usually ends with a feature freeze, indicating that no more features will be added to the software. At this time, the software is said to be feature-complete. A beta test is carried out following acceptance testing at the supplier's site (the alpha test) and immediately before the general release of the software as a product.[5]
A feature-complete (FC) version of a piece of software has all of its planned or primary features implemented but is not yet final due to bugs, performance or stability issues.[6] This occurs at the end of alpha testing in development.
Usually, feature-complete software still has to undergo beta testing and bug fixing, as well as performance or stability enhancement, before it can go to release candidate, and finally gold status.
Beta, named after the second letter of the Greek alphabet, is the software development phase following alpha. A beta phase generally begins when the software is feature-complete but likely to contain several known or unknown bugs.[7] Software in the beta phase will generally have many more bugs in it than completed software, as well as speed or performance issues, and may still cause crashes or data loss. The focus of beta testing is reducing impacts on users, often incorporating usability testing. The process of delivering a beta version to the users is called beta release and is typically the first time that the software is available outside of the organization that developed it. Software beta releases can be either open or closed, depending on whether they are openly available or only available to a limited audience. Beta version software is often useful for demonstrations and previews within an organization and to prospective customers. Some developers refer to this stage as a preview, preview release, prototype, technical preview or technology preview (TP),[8] or early access.
Beta testers are people who actively report issues with beta software. They are usually customers or representatives of prospective customers of the organization that develops the software. Beta testers tend to volunteer their services free of charge but often receive versions of the product they test, discounts on the release version, or other incentives.[9][10]
Some software is kept in so-called perpetual beta, where new features are continually added to the software without establishing a final "stable" release. As the Internet has facilitated the rapid and inexpensive distribution of software, companies have begun to take a looser approach to the use of the word beta.[11]
Developers may release either a closed beta or an open beta; closed beta versions are released to a restricted group of individuals for a user test by invitation, while open beta testers are drawn from a larger group, or anyone interested. A private beta may be suitable for software that is capable of delivering value but is not ready to be used by everyone, whether because of scaling issues, lack of documentation, or missing vital features. The testers report any bugs that they find and sometimes suggest additional features they think should be available in the final version.
Open betas serve the dual purpose of demonstrating a product to potential consumers, and testing among a wide user base is likely to bring to light obscure errors that a much smaller testing team might not find.[citation needed]
A release candidate (RC), also known as gamma testing or "going silver", is a beta version with the potential to be a stable product, which is ready to release unless significant bugs emerge. In this stage of product stabilization, all product features have been designed, coded, and tested through one or more beta cycles with no known showstopper-class bugs. A release is called code complete when the development team agrees that no entirely new source code will be added to this release. There could still be source code changes to fix defects, changes to documentation and data files, and peripheral code for test cases or utilities.[citation needed]
Also called production release, the stable release is the last release candidate (RC) which has passed all stages of verification and tests. Any known remaining bugs are considered acceptable. This release goes to production.
Some software products (e.g. Linux distributions like Debian) also have long-term support (LTS) releases, which are based on full releases that have already been tried and tested and receive only security updates.[citation needed]
Once released, the software is generally known as a "stable release". The formal term often depends on the method of release: physical media, online release, or a web application.[12]
The term "release to manufacturing" (RTM), also known as "going gold", is a term used when a software product is ready to be delivered. This build may be digitally signed, allowing the end user to verify the integrity and authenticity of the software purchase. The RTM build is known as the "gold master" or GM[13]is sent for mass duplication or disc replication if applicable. The terminology is taken from the audio record-making industry, specifically the process ofmastering. RTM precedes general availability (GA) when the product is released to the public. A golden master build (GM) is typically the final build of a piece of software in the beta stages for developers. Typically, foriOS, it is the final build before a major release, however, there have been a few exceptions.
RTM is typically used in retail mass-production software contexts, as opposed to specialized software production or projects in commercial or government production and distribution. In such contexts, the software is sold as part of a bundle in a related computer hardware sale, and the software and related hardware are ultimately to be made available and sold on a mass/public basis at retail stores; RTM indicates that the software has met a defined quality level and is ready for mass retail distribution. In other contexts, RTM can also mean that the software has been delivered or released to a client or customer for installation or distribution to the related hardware end users' computers or machines. The term does not define the delivery mechanism or volume; it only states that the quality is sufficient for mass distribution. The deliverable from the engineering organization is frequently in the form of a golden master media used for duplication or to produce the image for the web.
General availability (GA) is the marketing stage at which all necessary commercialization activities have been completed and a software product is available for purchase, depending, however, on language, region, and electronic vs. media availability.[14] Commercialization activities could include security and compliance tests, as well as localization and worldwide availability. The time between RTM and GA can range from days to months, due to the time needed to complete all commercialization activities required by GA. At this stage, the software has "gone live".
Release to the Web (RTW) or Web release is a means of software delivery that utilizes the Internet for distribution. No physical media are produced in this type of release mechanism by the manufacturer. Web releases have become more common as Internet usage has grown.[citation needed]
During its supported lifetime, the software is sometimes subjected to service releases, patches or service packs, sometimes also called "interim releases" or "maintenance releases" (MR). For example, Microsoft released three major service packs for the 32-bit editions of Windows XP and two service packs for the 64-bit editions.[15] Such service releases contain a collection of updates, fixes, and enhancements, delivered in the form of a single installable package. They may also implement new features. Some software is released with the expectation of regular support. Classes of software that generally involve protracted support as the norm include anti-virus suites and massively multiplayer online games. Continuing with the Windows XP example, Microsoft offered paid updates for five more years after the end of extended support, meaning that support ended on April 8, 2019.[16]
When software is no longer sold or supported, the product is said to have reached end-of-life, to be discontinued, retired, deprecated, abandoned, or obsolete, but user loyalty may continue its existence for some time, even long after its platform is obsolete; examples include the Common Desktop Environment[17] and the Sinclair ZX Spectrum.[18]
After the end-of-life date, the developer will usually not implement any new features, fix existing defects, bugs, or vulnerabilities (whether known before that date or not), or provide any support for the product. If the developer wishes, they may release the source code, so that the platform may be maintained by volunteers.
Usage of the "alpha/beta" test terminology originated atIBM.[citation needed]Similar terminologies for IBM's software development were used by people involved with IBM from at least the 1950s (and probably earlier). "A" test was theverificationof a new product before the public announcement. The "B" test was the verification before releasing the product to be manufactured. The "C" test was the final test before the general availability of the product. As software became a significant part of IBM's offerings, the alpha test terminology was used to denote the pre-announcement test and the beta test was used to show product readiness for general availability. Martin Belsky, a manager on some of IBM's earlier software projects claimed to have invented the terminology. IBM dropped the alpha/beta terminology during the 1960s, but by then it had received fairly wide notice. The usage of "beta test" to refer to testing done by customers was not done in IBM. Rather, IBM used the term "field test".
Major public betas developed afterward, with early customers having purchased a "pioneer edition" of the WordVision word processor for the IBM PC for $49.95. In 1984, Stephen Manes wrote that "in a brilliant marketing coup, Bruce and James Program Publishers managed to get people to pay for the privilege of testing the product."[19] In September 2000, a boxed version of Apple's Mac OS X Public Beta operating system was released.[20] Between September 2005 and May 2006, Microsoft released community technology previews (CTPs) for Windows Vista.[21] From 2009 to 2011, Minecraft was in public beta.
In February 2005, ZDNet published an article about the phenomenon of a beta version often staying for years and being used as if it were at the production level.[22] It noted that Gmail and Google News, for example, had been in beta for a long time although widely used; Google News left beta in January 2006, followed by Google Apps (now named Google Workspace), including Gmail, in July 2009.[12] Since the introduction of Windows 8, Microsoft has called pre-release software a preview rather than beta. All pre-release builds released through the Windows Insider Program launched in 2014 are termed "Insider Preview builds". "Beta" may also indicate something more like a release candidate, or a form of time-limited demo or marketing technique.[23]
https://en.wikipedia.org/wiki/Software_release_life_cycle
Backporting is the action of taking parts from a newer version of a software system or software component and porting them to an older version of the same software. It forms part of the maintenance step in a software development process, and it is commonly used for fixing security issues in older versions of the software and also for providing new features to older versions.
The simplest and probably most common situation of backporting is a fixed security hole in a newer version of a piece of software. Consider this simplified example: Software v1.0 contains a security hole that has been fixed in Software v2.0, but v1.0 is still in use.
By taking the modification that fixes Software v2.0 and changing it so that it applies to Software v1.0, one has effectively backported the fix.[1]
In real-life situations, the modifications that a single aspect of the software has undergone may range from simple (only a few lines of code have changed) to heavy and massive (many modifications spread across multiple files of the code). In the latter case, backporting may become tedious and inefficient and should only be undertaken if the older version of the software is really needed in favour of the newer (if, for example, the newer version still suffers stability problems that prevent its use in mission-critical situations).[2]
The process of backporting consists, roughly, of identifying the modification in the newer version of the software, isolating it from unrelated changes, and adapting and applying it to the older version.[1]
Usually, multiple such modifications are bundled in a patchset.
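As a rough sketch of the idea, with hypothetical file names and a one-line fix, the change that fixes the newer version can be captured with diff and carried to the old version with patch:

```shell
# Hypothetical example: the same insecure call exists in v1.0 and in the
# vulnerable v2.0 tree, and v2.0 has already been fixed upstream.
mkdir -p v2.0-vulnerable v2.0-fixed v1.0
printf 'gets(buf);\n' > v2.0-vulnerable/input.c
printf 'fgets(buf, sizeof buf, stdin);\n' > v2.0-fixed/input.c
printf 'gets(buf);\n' > v1.0/input.c

# Capture the fix as a unified diff (diff exits 1 when the files differ).
diff -u v2.0-vulnerable/input.c v2.0-fixed/input.c > fix.patch || true

# Apply ("backport") the fix to the old release line.
patch v1.0/input.c < fix.patch
```

In practice the old code will often have diverged, so the extracted diff usually has to be adapted by hand before it applies cleanly.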
Backports can be provided by the core developer group of the software. Since backporting needs access to the source code of a piece of software, this is the only way that backporting is done for closed source software – the backports will usually be incorporated in binary upgrades along the old version line of the software. With open-source software, backports are sometimes created by software distributors and later sent upstream (that is, submitted to the core developers of the afflicted software).[2]
https://en.wikipedia.org/wiki/Backporting
Dribbleware, in the context of computer software, is a product for which patches are frequently released.[1] The term usually has negative connotations and can refer to software which hasn't been tested properly prior to release, or for which planned features could not be implemented.
Dribbleware is not necessarily due to poor programming; it can be indicative of a product whose development was rushed to meet a release date.
https://en.wikipedia.org/wiki/Dribbleware
The computer tool patch is a Unix program that updates text files according to instructions contained in a separate file, called a patch file. The patch file (also called a patch for short) is a text file that consists of a list of differences and is produced by running the related diff program with the original and updated file as arguments. Updating files with patch is often referred to as applying the patch or simply patching the files.
The original patch program was written by Larry Wall (who went on to create the Perl programming language) and posted to mod.sources[1] (which later became comp.sources.unix) in May 1985.
patch was added to XPG4, which later became POSIX.[2] Wall's code remains the basis of the "patch" programs provided in OpenBSD,[3] FreeBSD,[4] and schilytools.[5] The Open Software Foundation, which merged into The Open Group, is said to have maintained a derived version.
The GNU project/FSF maintains its own patch, forked from Larry Wall's version. Its repository is separate from that of GNU diffutils, but the documentation is managed together.[6]
Developed by a programmer for other programmers, patch was frequently used for updating source code to a newer version. Because of this, many people came to associate patches with source code, whereas patches can in fact be applied to any text. Patched files do not accumulate unneeded text, as the English meaning of the word might suggest; patch is as capable of removing text as it is of adding it.
Patches described here should not be confused with binary patches, which, although conceptually similar, are distributed to update the binary files comprising a program to a new release.
The diff files that serve as input to patch are readable text files, which means that they can be easily reviewed or modified by humans before use.
In addition to the "diff" program, diffs can also be produced by other programs, such asSubversion,CVS,RCS,MercurialandGit.
Patches have been a crucial component of many source control systems, including CVS.
When more advanced diffs are used, patches can be applied even to files that have been modified in the meantime, as long as those modifications do not interfere with the patch. This is achieved by using "context diffs" and "unified diffs" (also known as "unidiffs"), which surround each change with context, which is the text immediately before and after the changed part. Patch can then use this context to locate the region to be patched even if it has been displaced by changes earlier in the file, using the line numbers in the diffs as a starting point. Because of this property, context and unified diffs are the preferred form of patches for submission to many software projects.
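For illustration, a small unified diff might look like the following (file name and contents are hypothetical). The hunk header gives the old and new line ranges, and the unchanged lines around the "-" and "+" lines are the context that patch uses to locate the change:

```diff
--- greet.c
+++ greet.c
@@ -1,6 +1,6 @@
 #include <stdio.h>
 
 int main(void) {
-    printf("hello\n");
+    printf("hello, world\n");
     return 0;
 }
```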
The above features make diff and patch especially popular for exchanging modifications toopen-source software. Outsiders can download the latest publicly available source code, make modifications to it, and send them, in diff form, to the development team. Using diffs, the development team has the ability to effectively review the patches before applying them, and can apply them to a newer code base than the one the outside developer had access to.
To create a patch, one could run the following command in a shell:
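A typical invocation uses diff with the -u option (unified format) and redirects the output to a patch file; the file names below are placeholders, created here so the example is self-contained:

```shell
# Placeholder files standing in for the original and the modified version.
printf 'one\ntwo\n' > oldFile
printf 'one\n2\n'   > newFile

# -u selects the unified diff format; diff exits with status 1
# when the files differ, which is expected here.
diff -u oldFile newFile > mods.diff || true
```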
To apply a patch, one could run the following command in a shell:
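A typical invocation names the file to be patched and feeds the diff on standard input; the setup below uses illustrative placeholder files, mirroring the mods.diff of the surrounding text, so the example can be run as-is:

```shell
# Self-contained setup: a target file and a unified diff describing a change to it.
printf 'one\ntwo\n' > target.txt
printf 'one\n2\n'   > changed.txt
diff -u target.txt changed.txt > mods.diff || true

# Apply the changes described in mods.diff to the specified file.
patch target.txt < mods.diff
```

With a diff generated from the base directory of a source tree, the file argument can be omitted and `patch -p1 < mods.diff` strips the leading directory component from the paths recorded in the diff, as described below.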
This tells patch to apply the changes to the specified files described in mods.diff. Patches to files in subdirectories require the additional -p number option, where number is 1 if the base directory of the source tree is included in the diff, and 0 otherwise.
Patches can be undone, or reversed, with the '-R' option:
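A minimal self-contained illustration (file names are placeholders): a change is applied and then reversed, leaving the file in its original state.

```shell
# Setup: a target file and a unified diff describing a change to it.
printf 'one\ntwo\n' > target.txt
printf 'one\n2\n'   > changed.txt
diff -u target.txt changed.txt > mods.diff || true

patch target.txt < mods.diff       # apply the change
patch -R target.txt < mods.diff    # reverse it; target.txt is restored
```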
In some cases, when the file is not identical to the version the diff was generated against, the patch cannot be applied cleanly. For example, if lines of text are inserted at the beginning, the line numbers referred to in the patch will be incorrect. patch is able to recover from this by looking at nearby lines to relocate the text to be patched. It will also recover when lines of context (for context and unified diffs) are altered; this is described as fuzz.
Originally written for Unix and Unix-like systems, patch has also been ported to Windows and many other platforms. Windows ports of patch are provided by GnuWin32 and UnxUtils.
A patch command is also part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2.[7]
https://en.wikipedia.org/wiki/Patch_(Unix)
In software engineering, porting is the process of adapting software so that it can execute in a computing environment that is different from the one the program was originally designed for (e.g., a different CPU, operating system, or third-party library). The term is also used when software/hardware is changed to make it usable in different environments.[1][2]
Software is portable when the cost of porting it to a new platform is significantly less than the cost of writing it from scratch. The lower the cost of porting software relative to its implementation cost, the more portable it is said to be. This is distinct from cross-platform software, which is designed from the ground up without any single "native" platform.
The term "port" is derived from the Latinportāre, meaning "to carry".[3]When code is not compatible with a particularoperating systemorarchitecture, the code must be "carried" to the new system.
The term is not generally applied to the process of adapting software to run with less memory on the same CPU and operating system.
Software developers often claim that the software they write is portable, meaning that little effort is needed to adapt it to a new environment. The amount of effort actually needed depends on several factors, including the extent to which the original environment (the source platform) differs from the new environment (the target platform), the experience of the original authors in knowing which programming language constructs and third party library calls are unlikely to be portable, and the amount of effort invested by the original authors in only using portable constructs (platform-specific constructs often provide a cheaper solution).
The number of significantly different CPUs and operating systems used on the desktop today is much smaller than in the past. The dominance of the x86 architecture means that most desktop software is never ported to a different CPU. In that same market, the choice of operating systems has effectively been reduced to three: Microsoft Windows, macOS, and Linux. However, in the embedded systems and mobile markets, portability remains a significant issue, with ARM being a widely used alternative.
International standards, such as those promulgated by the ISO, greatly facilitate porting by specifying details of the computing environment in a way that helps reduce differences between different standards-conforming platforms. Writing software that stays within the bounds specified by these standards represents a practical although nontrivial effort. Porting such a program between two standards-compliant platforms (such as POSIX.1) can be just a matter of loading the source code and recompiling it on the new platform, but practitioners often find that various minor corrections are required, due to subtle platform differences. Most standards suffer from "gray areas" where differences in interpretation of standards lead to small variations from platform to platform.
There also exists an ever-increasing number of tools to facilitate porting, such as the GNU Compiler Collection, which provides consistent programming languages on different platforms, and Autotools, which automates the detection of minor variations in the environment and adapts the software accordingly before compilation.
The compilers for some high-level programming languages (e.g. Eiffel, Esterel) gain portability by outputting source code in another high-level intermediate language (such as C) for which compilers for many platforms are generally available.
Two activities related to (but distinct from) porting are emulating and cross-compiling.
Instead of translating directly into machine code, modern compilers translate to a machine-independent intermediate code in order to enhance portability of the compiler and minimize design efforts. The intermediate language defines a virtual machine that can execute all programs written in the intermediate language (a machine is defined by its language and vice versa).[4] The intermediate code instructions are translated into equivalent machine code sequences by a code generator to create executable code. It is also possible to skip the generation of machine code by actually implementing an interpreter or JIT compiler for the virtual machine.[5]
The use of intermediate code enhances portability of the compiler, because only the machine-dependent code (the interpreter or the code generator) of the compiler itself needs to be ported to the target machine. The remainder of the compiler can be imported as intermediate code and then further processed by the ported code generator or interpreter, thus producing the compiler software or directly executing the intermediate code on the interpreter. The machine-independent part can be developed and tested on another machine (the host machine). This greatly reduces design efforts, because the machine-independent part needs to be developed only once to create portable intermediate code.[6]
An interpreter is less complex and therefore easier to port than a code generator, because it is not able to do code optimizations due to its limited view of the program code (it sees only one instruction at a time, whereas optimization requires a view of a whole instruction sequence). Some interpreters are extremely easy to port, because they make only minimal assumptions about the instruction set of the underlying hardware. As a result, the virtual machine is even simpler than the target CPU.[7]
Writing the compiler sources entirely in the programming language the compiler is supposed to translate makes the approach known as compiler bootstrapping feasible on the target machine.
The difficult part of coding the optimization routines is done using the high-level language instead of the assembly language of the target.
According to the designers of the BCPL language, interpreted code (in the BCPL case) is more compact than machine code, typically by a factor of two to one. Interpreted code, however, runs about ten times slower than compiled code on the same machine.[8]
The designers of the Java programming language try to take advantage of the compactness of interpreted code, because a Java program may need to be transmitted over the Internet before execution can start on the target's Java virtual machine (JVM).
Porting is also the term used when a video game designed to run on one platform, be it an arcade, video game console, or personal computer, is converted to run on a different platform, perhaps with some minor differences.[9] From the beginning of video games through to the 1990s, "ports", at the time often known as "conversions", were often not true ports, but rather reworked versions of the games due to the limitations of different systems. For example, the 1982 game The Hobbit, a text adventure augmented with graphic images, has significantly different graphic styles across the range of personal computers that its ports were developed for.[10] However, many 21st century video games are developed using software (often in C++) that can output code for one or more consoles as well as for a PC without the need for actual porting (instead relying on the common porting of individual component libraries).[10]
Porting arcade games to home systems with inferior hardware was difficult. The ported version of Pac-Man for the Atari 2600 omitted many of the visual features of the original game to compensate for the lack of ROM space, and the hardware struggled when multiple ghosts appeared on the screen, creating a flickering effect. The poor performance of the Atari 2600 Pac-Man is cited by some scholars as a cause of the video game crash of 1983.[11]
Many early ports suffered significant gameplay quality issues because computers greatly differed.[12] Richard Garriott stated in 1984 at Origins Game Fair that Origin Systems developed video games for the Apple II first, then ported them to the Commodore 64 and Atari 8-bit computers, because the latter machines' sprites and other sophisticated features made porting from them to the Apple "far more difficult, perhaps even impossible".[13] Reviews complained of ports that suffered from "Apple conversionitis",[14] retaining the Apple's "lousy sound and black-white-green-purple graphics";[15][16] after Garriott's statement, when Dan Bunten asked "Atari and Commodore people in the audience, are you happy with the Apple rewrites?", the audience shouted "No!" Garriott responded, "[otherwise] the Apple version will never get done. From a publisher's point of view that's not money wise".[13]
Others worked differently. Ozark Softscape, for example, wrote M.U.L.E. for the Atari first because it preferred to develop for the most advanced computers, removing or altering features as necessary during porting. Such a policy was not always feasible; Bunten stated that "M.U.L.E. can't be done for an Apple",[12] and that the non-Atari versions of The Seven Cities of Gold were inferior.[17] Compute!'s Gazette wrote in 1986 that when porting from Atari to Commodore, the original was usually superior. The latter's games' quality improved when developers began creating new software for it in late 1983, the magazine stated.[18]
In porting arcade games, the terms "arcade perfect" or "arcade accurate" were often used to describe how closely the gameplay, graphics, and other assets of the ported version matched the arcade version. Many arcade ports in the early 1980s were far from arcade perfect, as home consoles and computers lacked the sophisticated hardware of arcade games, but games could still approximate the gameplay. Notably, Space Invaders on the Atari VCS became the console's killer app despite its differences,[19] while the later Pac-Man port was notorious for its deviations from the arcade version.[20] Arcade-accurate games became more prevalent starting in the 1990s as home consoles caught up to the power of arcade systems. Notably, the Neo Geo system from SNK, which was introduced as a multi-game arcade system, would also be offered as a home console with the same specifications. This allowed arcade perfect games to be played at home.[10]
A "console port" is a game that was originally or primarily made for a console before a version is created which can be played on apersonal computer. The process of porting games from console to PC is often regarded more cynically than other types of port due to the more powerful hardware some PCs have even at console launch being underutilized, partially due to console hardware being fixed throughout eachgenerationas newer PCs constantly become even more powerful. While broadly similar today, some architectural differences persist, such as the use ofunified memoryand smallerOSson consoles. Other objections arise fromuser interfacedifferences conventional to consoles, such asgamepads,TFUIsaccompanied by narrowFoV, fixedcheckpoints,onlinerestricted to officialserversorP2P, poor or nomoddingsupport, as well as the generally greater reliance among console developers on internalhard codinganddefaultsinstead of externalAPIsandconfigurability, all of which may require expensive deep reaching redesign to avoid a "lazy" feeling port to PC.[21]
https://en.wikipedia.org/wiki/Porting
A vulnerability database (VDB) is a platform aimed at collecting, maintaining, and disseminating information about discovered computer security vulnerabilities. The database will customarily describe the identified vulnerability, assess its potential impact on affected systems, and list any workarounds or updates to mitigate the issue. A VDB will assign a unique identifier to each vulnerability cataloged, such as a number (e.g. 123456) or alphanumeric designation (e.g. VDB-2020-12345). Information in the database can be made available via web pages, exports, or API. A VDB can provide the information for free, for pay, or a combination thereof.
The first vulnerability database was the "Repaired Security Bugs in Multics", published by February 7, 1973, by Jerome H. Saltzer. He described the list as "a list of all known ways in which a user may break down or circumvent the protection mechanisms of Multics".[1] The list was initially kept somewhat private, with the intent of withholding vulnerability details until solutions could be made available. The published list contained two local privilege escalation vulnerabilities and three local denial of service attacks.[2]
Major vulnerability databases such as the ISS X-Force database, the Symantec / SecurityFocus BID database, and the Open Source Vulnerability Database (OSVDB)[a] aggregate a broad range of publicly disclosed vulnerabilities, including Common Vulnerabilities and Exposures (CVE). The primary purpose of CVE, run by MITRE, is to attempt to aggregate public vulnerabilities and give them a standardized format and unique identifiers.[3] Many vulnerability databases develop the received intelligence from CVE and investigate further, providing vulnerability risk scores, impact ratings, and the requisite workaround. In the past, CVE was paramount for linking vulnerability databases so critical patches and debugs could be shared to inhibit hackers from accessing sensitive information on private systems.[4] The National Vulnerability Database (NVD), run by the National Institute of Standards and Technology (NIST), is operated separately from the MITRE-run CVE database, but only includes vulnerability information from CVE. The NVD serves as an enhancement to that data by providing Common Vulnerability Scoring System (CVSS) risk scoring and Common Platform Enumeration (CPE) data.
The Open Source Vulnerability Database provided an accurate, technical, and unbiased index of vulnerability security. The comprehensive database cataloged over 121,000 vulnerabilities. The OSVDB was founded in August 2002 and was launched in March 2004. In its early days, newly identified vulnerabilities were investigated by site members, and explanations were detailed on the website. However, as demand for the service grew, the need for dedicated staff resulted in the inception of the Open Security Foundation (OSF), a non-profit organisation founded in 2005 to provide funding for security projects and primarily the OSVDB.[5] The OSVDB closed in April 2016.[6]
The U.S. National Vulnerability Database is a comprehensive cyber security vulnerability database formed in 2005 that reports on CVE.[7] The NVD is a primary cyber security referral tool for individuals and industries alike, providing informative resources on current vulnerabilities. The NVD holds in excess of 100,000 records. Like the OSVDB, the NVD publishes impact ratings and categorises material into an index to provide users with an intelligible search system.[8] Other countries have their own vulnerability databases, such as the Chinese National Vulnerability Database and Russia's Data Security Threats Database.
A variety of commercial companies also maintain their own vulnerability databases, offering customers services which deliver new and updated vulnerability data in machine-readable format as well as through web portals. Examples include A.R.P. Syndicate's Exploit Observer, Symantec's DeepSight[9] portal and vulnerability data feed, Secunia's (purchased by Flexera) vulnerability manager[10] and Accenture's vulnerability intelligence service[11] (formerly iDefense).
Exploit Observer[12] uses its Vulnerability & Exploit Data Aggregation System (VEDAS) to collect exploits and vulnerabilities from a wide array of global sources, including Chinese and Russian databases.[13]
Vulnerability databases advise organisations to develop, prioritise, and deploy patches or other mitigations that rectify critical vulnerabilities. However, hasty patching to thwart further exploitation can itself introduce new susceptibilities. Depending upon their level, users and organisations are granted appropriate access to a vulnerability database, which discloses the known vulnerabilities that may affect them. The justification for limiting access is to impede hackers from learning about corporate system vulnerabilities that could be further exploited.[14]
Vulnerability databases contain a vast array of identified vulnerabilities. However, few organisations possess the expertise, staff, and time to review and remedy all potential system susceptibilities, so vulnerability scoring is a method of quantitatively ranking the severity of a vulnerability. A multitude of scoring methods exist across vulnerability databases, such as US-CERT's and the SANS Institute's Critical Vulnerability Analysis Scale, but the Common Vulnerability Scoring System (CVSS) is the prevailing technique for most vulnerability databases, including OSVDB, vFeed[15] and NVD. The CVSS is based upon three metric groups: base, temporal, and environmental, which together produce a vulnerability rating.[16]
The base metrics cover the immutable properties of a vulnerability, such as the potential impact of the exposure of confidential information, the accessibility of information, and the aftermath of the irretrievable deletion of information.
The temporal metrics denote the mutable aspects of a vulnerability, for example the credibility of a reported exploit, the current state of a system violation, and the availability of any workarounds that could be applied.[17]
The environmental metrics rate the potential loss to individuals or organisations from a vulnerability. They also describe the primary target of a vulnerability, ranging from personal systems to large organisations, and the number of potentially affected individuals.[18]
The complication with utilising different scoring systems is that there is no consensus on the severity of a vulnerability, so different organisations may overlook critical system exploits. The key benefit of a standardised scoring system like CVSS is that published vulnerability scores can be assessed, pursued, and remedied rapidly, and organisations and individuals alike can determine the impact of a vulnerability on their own systems. The benefits vulnerability databases deliver to consumers and organisations grow as information systems become increasingly embedded: as our dependence on them increases, so does the opportunity for data exploitation.[19]
Although the functionality of a database may appear unblemished, without rigorous testing small flaws can allow hackers to infiltrate a system's cyber security. Frequently, databases are deployed without stringent security controls, leaving sensitive material easily accessible.[20]
Database attacks are the most recurrent form of cyber security breach recorded in vulnerability databases. SQL and NoSQL injections penetrate traditional information systems and big data platforms, respectively, by inserting malicious statements that give hackers unregulated system access.[21]
Established databases ordinarily fail to implement crucial patches suggested by vulnerability databases, due to excessive workload and the need for exhaustive testing to ensure a patch actually fixes the flawed component. Database operators concentrate their efforts on major system deficiencies, leaving hackers unmitigated system access through neglected patches.[22]
All databases require audit trails to record when data is amended or accessed. When systems are created without the necessary auditing, the exploitation of system vulnerabilities is challenging to identify and resolve. Vulnerability databases promulgate the significance of audit trails as a deterrent to cyber attacks.[23]
Data protection is essential to any business, as personal and financial information is a key asset and the theft of sensitive material can discredit a firm's reputation. The implementation of data protection strategies is imperative to guard confidential information. Some hold the view that it is the initial apathy of software designers that, in turn, necessitates the existence of vulnerability databases. If systems were designed with greater diligence, they might be impenetrable to SQL and NoSQL injections, making vulnerability databases redundant.[24]
https://en.wikipedia.org/wiki/Vulnerability_database
Delta encoding is a way of storing or transmitting data in the form of differences (deltas) between sequential data rather than complete files; more generally this is known as data differencing. Delta encoding is sometimes called delta compression, particularly where archival histories of changes are required (e.g., in revision control software).
The differences are recorded in discrete files called "deltas" or "diffs". In situations where differences are small – for example, the change of a few words in a large document or the change of a few records in a large table – delta encoding greatly reduces data redundancy. Collections of unique deltas are substantially more space-efficient than their non-encoded equivalents.
From a logical point of view, the difference between two data values is the information required to obtain one value from the other – see relative entropy. The difference between identical values (under some equivalence) is often called 0 or the neutral element.
Perhaps the simplest example is storing values of bytes as differences (deltas) between sequential values, rather than the values themselves. So, instead of 2, 4, 6, 9, 7, we would store 2, 2, 2, 3, −2. This reduces the variance (range) of the values when neighboring samples are correlated, enabling a lower bit usage for the same data. The IFF 8SVX sound format applies this encoding to raw sound data before compressing it. Not even all 8-bit sound samples compress better when delta encoded, and the usefulness of delta encoding is smaller still for 16-bit and better samples. Therefore, compression algorithms often choose to delta encode only when the compression is better than without. However, in video compression, delta frames can considerably reduce frame size and are used in virtually every video compression codec.
A delta can be defined in two ways, as a symmetric delta or a directed delta. A symmetric delta can be expressed as Δ(v1, v2) = (v1 \ v2) ∪ (v2 \ v1), where v1 and v2 represent two versions.
A directed delta, also called a change, is a sequence of (elementary) change operations which, when applied to one version v1, yields another version v2 (note the correspondence to transaction logs in databases). In computer implementations, they typically take the form of a language with two commands: copy data from v1 and write literal data.
A variation of delta encoding which encodes differences between the prefixes or suffixes of strings is called incremental encoding. It is particularly effective for sorted lists with small differences between strings, such as a list of words from a dictionary.
The nature of the data to be encoded influences the effectiveness of a particular compression algorithm.
Delta encoding performs best when data has small or constant variation; for an unsorted data set, there may be little to no compression possible with this method.
In delta encoded transmission over a network, where only a single copy of the file is available at each end of the communication channel, special error control codes are used to detect which parts of the file have changed since its previous version.
For example, rsync uses a rolling checksum algorithm based on Mark Adler's adler-32 checksum.
The following C code performs a simple form of delta encoding and decoding on a sequence of characters:
Another instance of use of delta encoding is RFC 3229, "Delta encoding in HTTP", which proposes that HTTP servers should be able to send updated Web pages in the form of differences between versions (deltas), which should decrease Internet traffic, as most pages change slowly over time rather than being completely rewritten repeatedly:
This document describes how delta encoding can be supported as a compatible extension to HTTP/1.1.
Many HTTP (Hypertext Transport Protocol) requests cause the retrieval of slightly modified instances of resources for which the client already has a cache entry. Research has shown that such modifying updates are frequent, and that the modifications are typically much smaller than the actual entity. In such cases, HTTP would make more efficient use of network bandwidth if it could transfer a minimal description of the changes, rather than the entire new instance of the resource.
[...] We believe that it might be possible to support rsync using the "instance manipulation" framework described later in this document, but this has not been worked out in any detail.
The suggested rsync-based framework was implemented in the rproxy system as a pair of HTTP proxies.[1] Like the basic vcdiff-based implementation, both systems are rarely used.
Delta copying is a fast way of copying a file that is partially changed, when a previous version is present on the destination location. With delta copying, only the changed part of a file is copied. It is usually used in backup or file copying software, often to save bandwidth when copying between computers over a private network or the internet. One notable open-source example is rsync.[2][3][4]
Many of the online backup services adopt this methodology, often known simply as deltas, in order to give their users previous versions of the same file from previous backups. This reduces associated costs, not only in the amount of data that has to be stored as differing versions (since the whole of each changed version of a file still has to be offered for users to access), but also in the uploading (and sometimes the downloading) of each file that has been updated (only the smaller delta has to be transferred, rather than the whole file).
For large software packages, there is usually little data changed between versions. Many vendors choose to use delta transfers to save time and bandwidth.
Diff is a file comparison program, mainly used for text files. By default, it generates symmetric deltas that are reversible. Two formats used for software patches, context and unified, provide additional context lines that allow for tolerating shifts in line numbers.
The Git source code control system employs delta compression in an auxiliary "git repack" operation. Objects in the repository that have not yet been delta-compressed ("loose objects") are compared against a heuristically chosen subset of all other objects, and the common data and differences are concatenated into a "pack file" which is then compressed using conventional methods. In common use cases, where source or data files are changed incrementally between commits, this can result in significant space savings. The repack operation is typically performed as part of the "git gc"[5] process, which is triggered automatically when the number of loose objects or pack files exceeds configured thresholds.
The format is documented in the pack-format page of the Git documentation. It implements a directed delta.[6]
One general format for directed delta encoding is VCDIFF, described in RFC 3284. Free software implementations include Xdelta and open-vcdiff.
Generic Diff Format (GDIFF) is another directed delta encoding format. It was submitted to W3C in 1997.[7] In many cases, VCDIFF achieves a better compression ratio than GDIFF.
Bsdiff is a binary diff program using suffix sorting. For executables that contain many changes in pointer addresses, it performs better than VCDIFF-type "copy and literal" encodings. The intent is to find a way to generate a small diff without needing to parse assembly code (as in Google's Courgette). Bsdiff achieves this by allowing "copy" matches with errors, which are then corrected using an extra "add" array of bytewise differences. Since this array is mostly either zero or repeated values for offset changes, it takes up little space after compression.[8]
Bsdiff is useful for delta updates. Google uses bsdiff in Chromium and Android. The deltarpm feature of the RPM Package Manager is based on a heavily modified bsdiff that can use a hash table for matching.[9] FreeBSD also uses bsdiff for updates.[10]
Since the 4.3 release of bsdiff in 2005, various improvements and fixes have been produced for it. Google maintains multiple versions of the code for each of its products.[11] FreeBSD takes many of Google's compatible changes, mainly a vulnerability fix and a switch to the faster divsufsort suffix-sorting routine.[12] Debian has a series of performance tweaks to the program.[13]
ddelta is a rewrite of bsdiff proposed for use in Debian's delta updates. Among other efficiency improvements, it uses a sliding window to reduce memory and CPU cost.[14]
https://en.wikipedia.org/wiki/Delta_encoding
System Modification Program/Extended (SMP/E), the proprietary version of System Modification Program (SMP), "is a tool designed to manage the installation of software products on [a] z/OS system and to track the modifications" to those products.[1]: 1[2][3][4][5]
SMP/E manages multiple software versions, helps apply patches and updates (PTFs), facilitates orderly testing and, if necessary, reversion to a previous state, allows a "trial run" pseudo-installation to verify that actual installation will work, keeps audit and security records to assure only approved software updates occur, and otherwise provides highly evolved, centralized control over all software installation on z/OS.
Although it is possible to design and ship software products that install on z/OS without SMP/E, most mainframe administrators prefer SMP/E-enabled products, at least for non-trivial packages. Using SMP/E typically requires some working knowledge of Job Control Language (JCL), although most products supply sample JCL. The rigorous software management discipline associated with SMP/E typically extends to product documentation as well, with IBM and other vendors supplying a standardized "Program Directory" manual for each software product that precisely aligns with the SMP/E work processes. The Program Directory provides detailed information on pre-requisites and co-requisites, for example.
Use of SMP/E to manage system updates helps ensure system integrity, by making sure that the system is in a consistent state and that changes to that state are properly audited.[6]
IBM introduced SMP in OS/360 and OS/VS[7] to replace semi-manual processes involving tools such as IEBEDIT[8] and IMAPTFLE.[9] IBM introduced three subsequent free releases of SMP, with significant changes between releases, especially from SMP3 to SMP4.[10] All four releases store tracking data in partitioned data sets (PDSs).
IBM introduced SMP/E[11] for OS/VS; however, SMP/E Release 2 is the last release to support OS/VS1. SMP/E stores tracking data in VSAM datasets rather than the PDSs that SMP releases 1 through 4 use. While originally a separate product, SMP/E is bundled with z/OS.
IBM ultimately introduced similar tools for other operating systems, e.g., Maintain System History Program (MSHP) for DOS/VS and Virtual Machine Serviceability Enhancements Staged (VM/SP SES), now VMSES/E, for VM/SP through z/VM.[12]
All IBM and most non-IBM software is assigned at least one seven-character FMID (Function Modification ID) that identifies the piece of software and its release number. This first FMID is called the Base FMID. For example, DB2 Version 9's Base FMID is HDB9910. Separately installable features also have FMIDs (called Dependent FMIDs) that relate in some way to the base product – the Dependent FMID for DB2 Version 9's English language panels is JDB9910.
A software package is composed of elements, individual components such as object files (MOD), macros (MAC), sample programs (SAMP), etc.[1]: p.37
The CSI (Consolidated Software Inventory) is a dataset containing the information that SMP/E needs to track the contents of the distribution and target libraries. The CSI contains "metadata" identifying the installed FMIDs and elements, the ID of the most recent update, and pointers to the associated libraries.
A SYSMOD (System Modification) is any modification to the system.[1]: p.38
Each SYSMOD is assigned a seven-character SYSMOD ID to uniquely identify it. When the SYSMOD is installed, this ID is recorded in the CSI entry for the element being added or replaced, where it is called the RMID (replacement module ID).
A simple declarative language called MCS (Modification Control Statements) provides the information SMP/E needs to identify the SYSMOD and to install it. Each SYSMOD is prefixed with a number of MCS statements that, for example, identify it as an APAR fix or PTF, supply the SYSMOD ID, identify the applicable FMID, etc.[13]: pp.5ff
Prerequisites or prereqs are SYSMODs that must be installed before a second one can be installed. Corequisites or coreqs are two or more SYSMODs that must be installed together; none can be installed without the others. A SYSMOD supersedes, or sups, another if it functionally replaces the first. This prereq, coreq, and sup information is provided in the MCS. A requisite chain is the "sequence of SYSMODs that are directly or indirectly identified as requisites for a given SYSMOD"; for example, if A is a prereq for B, and B is a prereq for C, then A and B are the requisite chain for C and both need to be installed before C, although not necessarily in a separate run of SMP/E.[1]: pp.231, 226, 236, 232 Requisite chains can frequently become extremely involved and comprise hundreds of SYSMODs.
HOLDDATA is a set of MCS statements indicating that specific SYSMODs contain errors or require manual processing outside the scope of SMP/E before they can be installed.[1]: p.229 The user is required to take action to fix the problem, if possible, before installing held SYSMODs.
SMP/E manages two types of libraries. Target libraries (TLIBs) contain the executable code and other information used to run the system. Originally there were a limited number of target libraries: SYS1.LINKLIB for executable programs, SYS1.MACLIB for standard macros, etc., but as of 2012 each software product usually has its own set of target libraries. Distribution libraries (DLIBs) contain the master copy of each element for a system. Each product (FMID) has its own set of distribution libraries, which are normally used only by SMP/E. Libraries in OS/360 and successors, unlike directories in Unix, usually contain only one type and format of data. A software package may have object libraries (MOD), ISPF panels (PNL), macro libraries (MAC), and many more.
SMP/E is a single large program which runs as a batch job. A series of ISPF panels can be used to interactively build the SMP/E job stream based on user input.[11][14]
One common sequence of steps is called RECEIVE-APPLY-ACCEPT, from the commands used for each step.
The SMP/E RECEIVE command processes SYSMODs from a source outside of SMP/E. Previously this might have been a PUT tape distributed by IBM roughly monthly. More recently it might be a collection of SYSMODs downloaded over the internet. The RECEIVE process uses the MCS to create an entry in the CSI for each SYSMOD, marking its status as "RECEIVED", and stores the MCS information and the actual SYSMOD data.
The REJECT command can be used to delete SYSMODs in "RECEIVED" status.
The APPLY command installs one or more received SYSMODs into the appropriate target libraries. The SYSMODs to be applied can be selected by various criteria: for example, a single SYSMOD can be selected by SYSMOD ID, all SYSMODs received in a group can be selected by SOURCEID, or all received but un-applied SYSMODs can be applied. The requisite chains for the specified SYSMODs are checked, and SYSMODs without the proper requisites, in hold status, or that have been superseded are flagged as errors and are not installed. Commonly SMP/E is instructed to also automatically apply any requisites in "RECEIVED" status to minimize these errors. Installed SYSMODs have their status changed to "APPLIED" in the CSI. APPLY CHECK can be used to check the SYSMODs to be installed without actually performing the installation.
The RESTORE command can be used to remove an applied SYSMOD that has not been accepted.
The ACCEPT command installs SYSMODs permanently into the distribution libraries and marks their status as "ACCEPTED" in the CSI. Normally ACCEPT is done once the SYSMODs are known to be performing correctly, before the next APPLY of service. There is no way in SMP/E to undo an ACCEPT operation except to delete all installation libraries, including the CSIs, and start the installation again.
SMP/E is a large, complex program; features and datasets are added with every release. The major SMP/E datasets are documented in the SMP/E reference manuals.[1][13]
https://en.wikipedia.org/wiki/SMP/E
Automatic bug-fixing is the automatic repair of software bugs without the intervention of a human programmer.[1][2][3] It is also commonly referred to as automatic patch generation, automatic bug repair, or automatic program repair.[3] The typical goal of such techniques is to automatically generate correct patches to eliminate bugs in software programs without causing software regression.[4]
Automatic bug fixing is performed according to a specification of the expected behavior, which can be, for instance, a formal specification or a test suite.[5]
A test suite, in which input/output pairs specify the functionality of the program (possibly captured in assertions), can be used as a test oracle to drive the search. This oracle can in fact be divided between the bug oracle, which exposes the faulty behavior, and the regression oracle, which encapsulates the functionality any program repair method must preserve. Note that a test suite is typically incomplete and does not cover all possible cases. Therefore, it is often possible for a validated patch to produce expected outputs for all inputs in the test suite but incorrect outputs for other inputs.[6] The existence of such validated but incorrect patches is a major challenge for generate-and-validate techniques.[6] Recent successful automatic bug-fixing techniques often rely on information beyond the test suite, such as information learned from previous human patches, to further identify correct patches among validated patches.[7]
Another way to specify the expected behavior is to use formal specifications.[8][9] Verification against full specifications that capture the whole program behavior, including functionality, is less common, because such specifications are typically not available in practice and the computational cost of such verification is prohibitive. For specific classes of errors, however, implicit partial specifications are often available. For example, there are targeted bug-fixing techniques validating that the patched program can no longer trigger overflow errors in the same execution path.[10]
Generate-and-validate approaches compile and test each candidate patch to collect all validated patches that produce expected outputs for all inputs in the test suite.[5][6] Such a technique typically starts with a test suite of the program, i.e., a set of test cases, at least one of which exposes the bug.[5][7][11][12] An early generate-and-validate bug-fixing system is GenProg.[5] The effectiveness of generate-and-validate techniques remains controversial, because they typically do not provide patch correctness guarantees.[6] Nevertheless, the reported results of recent state-of-the-art techniques are generally promising. For example, on 69 systematically collected real-world bugs in eight large C software programs, the state-of-the-art bug-fixing system Prophet generates correct patches for 18 of the 69 bugs.[7]
One way to generate candidate patches is to apply mutation operators on the original program. Mutation operators manipulate the original program, potentially via its abstract syntax tree representation, or a more coarse-grained representation such as operating at the statement or block level. Earlier genetic improvement approaches operate at the statement level and carry out simple delete/replace operations such as deleting an existing statement or replacing an existing statement with another statement in the same source file.[5][13] Recent approaches use more fine-grained operators at the abstract syntax tree level to generate a more diverse set of candidate patches.[12] Notably, the statement deletion mutation operator, and more generally removing code, is a reasonable repair strategy, or at least a good fault localization strategy.[14]
Another way to generate candidate patches consists of using fix templates. Fix templates are typically predefined changes for fixing specific classes of bugs.[15] Examples of fix templates include inserting a conditional statement to check whether the value of a variable is null, to fix a null pointer exception, or changing an integer constant by one, to fix an off-by-one error.[15]
Repair techniques exist that are based on symbolic execution. For example, SemFix[16] uses symbolic execution to extract a repair constraint. Angelix[17] introduced the concept of an angelic forest in order to deal with multiline patches.
Under certain assumptions, it is possible to state the repair problem as a synthesis problem.
SemFix[16] uses component-based synthesis.[18] Dynamoth uses dynamic synthesis.[19] S3[20] is based on syntax-guided synthesis.[21] SearchRepair[22] converts potential patches into an SMT formula and queries for candidate patches that allow the patched program to pass all supplied test cases.
Machine learning techniques can improve the effectiveness of automatic bug-fixing systems.[7] One example of such techniques learns from past successful patches from human developers, collected from open source repositories on GitHub and SourceForge.[7] It then uses the learned information to recognize and prioritize potentially correct patches among all generated candidate patches.[7] Alternatively, patches can be directly mined from existing sources. Example approaches include mining patches from donor applications[10] or from QA web sites.[23]
Getafix[24] is a language-agnostic approach developed and used in production at Facebook. Given a sample of code commits where engineers fixed a certain kind of bug, it learns human-like fix patterns that apply to future bugs of the same kind. Besides using Facebook's own code repositories as training data, Getafix learnt some fixes from open source Java repositories. When new bugs are detected, Getafix applies its previously learnt patterns to produce candidate fixes and ranks them within seconds. It presents only the top-ranked fix for final validation by tools or an engineer, in order to save resources and, ideally, to be fast enough that no human time has yet been spent on fixing the same bug.
For specific classes of errors, targeted automatic bug-fixing techniques use specialized templates.
Compared to generate-and-validate techniques, template-based techniques tend to have better bug-fixing accuracy but a much narrower scope.[6][27]
There are multiple uses of automatic bug fixing.
In essence, automatic bug fixing is a search activity, whether deductive-based or heuristic-based. The search space of automatic bug fixing is composed of all edits that can possibly be made to a program. There have been studies to understand the structure of this search space. Qi et al.[30] showed that the original fitness function of GenProg is no better than random search at driving the search. Long et al.'s[31] study indicated that correct patches can be considered sparse in the search space and that incorrect, overfitting patches are vastly more abundant (see also the discussion of overfitting below).
Sometimes, in test-suite based program repair, tools generate patches that pass the test suite yet are actually incorrect; this is known as the "overfitting" problem.[32] "Overfitting" in this context refers to the fact that the patch overfits to the test inputs. There are different kinds of overfitting: incomplete fixing means that only some buggy inputs are fixed, while regression introduction means some previously working features are broken after the patch (because they were poorly tested). Early prototypes for automatic repair suffered heavily from overfitting: on the Manybugs C benchmark, Qi et al.[6] reported that 104 of 110 plausible GenProg patches were overfitting. In the context of synthesis-based repair, Le et al.[33] obtained more than 80% overfitting patches.
One way to avoid overfitting is to filter out the generated patches. This can be done based on dynamic analysis.[34] Alternatively, Tian et al. propose heuristic approaches to assess patch correctness.[35][36]
Automatic bug-fixing techniques that rely on a test suite do not provide patch correctness guarantees, because the test suite is incomplete and does not cover all cases.[6] A weak test suite may cause generate-and-validate techniques to produce validated but incorrect patches that have negative effects such as eliminating desirable functionalities, causing memory leaks, and introducing security vulnerabilities.[6] One possible approach is to amplify the failing test suite by automatically generating further test cases that are then labelled as passing or failing. To minimize the human labelling effort, an automatic test oracle can be trained that gradually learns to automatically classify test cases as passing or failing and only engages the bug-reporting user for uncertain cases.[37]
A limitation of generate-and-validate repair systems is the search space explosion.[31] For a program, there are a large number of statements to change and for each statement there are a large number of possible modifications. State-of-the-art systems address this problem by assuming that a small modification is enough for fixing a bug, resulting in a search space reduction.
The limitation of approaches based on symbolic analysis[16][17] is that real world programs are often converted to intractably large formulas, especially when modifying statements with side effects.
Benchmarks of bugs typically focus on one specific programming language.
In C, the Manybugs benchmark collected by the GenProg authors contains 69 real-world defects and is widely used to evaluate many other bug-fixing tools for C.[13][7][12][17]
In Java, the main benchmark is Defects4J, now extensively used in most research papers on program repair for Java.[38][39] Alternative benchmarks exist, such as the Quixbugs benchmark,[40] which contains original bugs for program repair. Other benchmarks of Java bugs include Bugs.jar,[41] based on past commits.
Automatic bug-fixing is an active research topic in computer science. There are many implementations of various bug-fixing techniques, especially for C and Java programs. Note that most of these implementations are research prototypes for demonstrating their techniques; it is unclear whether they are ready for industrial usage.
|
https://en.wikipedia.org/wiki/Automatic_bug_fixing
|
Shavlik Technologies was a privately held company founded in 1993 by Mark Shavlik, who was also one of the original developers of Windows NT in the late 1980s and early 1990s at Microsoft.[1]
The company provided software and services for network vulnerability assessment and for managing network security patches. Mark Shavlik left his role as CEO when Shavlik Technologies was acquired by VMware in May 2011, then held the position of Vice President and General Manager at VMware until March 2013. Today Mark Shavlik is the CEO of security automation product creator Senserva.[1]
In April 2013, LANDESK purchased the Shavlik business unit and all rights to the Shavlik products from VMware. During the same period, LANDESK announced a partnership that made VMware an Alliance Partner.[2]
In 2017 LANDESK merged with HEAT Software, creating a new IT software company called Ivanti. Today, while the Shavlik name has been retired, the same Shavlik products are vital to the Ivanti security portfolio.[3]
Prior to the acceptance of Windows NT as a legitimate enterprise operating system in the late 1990s, most enterprise software was written for Unix or some other mainframe operating system. Shavlik's roots were in providing consulting services to help organizations make the leap to Microsoft operating systems, which contributed to it delivering products on NT. Shavlik later extended its services business into software security consulting, primarily with businesses in highly regulated industries such as banking and healthcare. The services centered on providing a Certified Information Systems Security Professional (CISSP) to perform security audits and penetration testing.
In the early 2000s the failure to keep software up to date by applying patches was a common flag on audits. One of the central challenges in addressing the problem was that companies did not have an easy way to determine which machines were out of date, nor a methodology to deploy updates. During this era, Microsoft wrestled with addressing this issue internally. It wanted a tool to detect which NT servers in a large NT server environment were missing patches so "hot fixes" (see Hotfix) could be installed on those machines. However, because these NT servers were critical to operations, Microsoft required that this process be completed without installing any extra software, such as an agent, on the servers.
In an effort to address the "hot fix" issue, Shavlik built the first agentless patch scanner for Windows NT.[4] The product was named HFNetChk (the acronym designating HotFix Network Check). The HFNetChk release was followed by another partnership wherein Shavlik helped build the Microsoft Baseline Security Analyzer (MBSA). This tool did minimal patch scanning along with some basic OS configuration checks. It was delivered by Microsoft as part of the Windows 2000 Server Toolkit.
HFNetChk Pro 3.0, which was never released externally, introduced the ability to not only scan for missing patches but also to deploy those patches. This eliminated the need for an IT administrator to apply patches manually.
In 2003, Shavlik brought HFNetChk to market for the first time. Version 4 featured a Visual Basic "web friendly" user interface. Previous versions of HFNetChk were operated via a command-line interface.
In January 2003, the SQL Slammer worm exploited a vulnerability in SQL Server that allowed a denial of service and slowed traffic on many internet hosts to a crawl. The worm spread rapidly, affecting 75,000 systems in the first ten minutes. Microsoft had made a patch available six months prior, indicating that the failure to patch, not the vulnerability itself, led to the widespread security breach.[5]
Shavlik's HFNetChk was the first product in the market that could scan for and deploy missing patches on Windows machines. In the aftermath of the SQL Slammer worm, and after a series of other highly publicized exploits hit in 2003–2004, Shavlik made the decision to move away from consulting and to fully invest in software development for patch management products.
Shavlik added standalone and integrated anti-virus capabilities to version 5 of HFNetChk and changed the product name to HFNetChk Protect, eventually dropping "HFNetChk" from the name.[6]
During the Version 6 timeframe, Protect introduced the capability to patch offline virtual machines and VM templates. This project was the first in a series of partnerships Shavlik entered into with VMware, and the capability meant that Protect could agentlessly patch machines in both physical and virtual environments.
With Version 7 and its various point releases, a new user interface was introduced as well as physical and virtual asset inventory. Agent support was integrated into Protect and was no longer offered as a separately licensed product. Shavlik also shifted more of its detection logic out of Protect and into the content.
Version 8 of Protect fixed many stability issues: in response to a number of customer complaints, Shavlik focused on making the product more stable. Version 9 introduced hypervisor patching for VMware implementations as well as the ability to patch off-network machines via the cloud.
Shavlik's technological advancements have been significant enough to attract attention from Microsoft, resulting in cooperative efforts between the two companies and the development of the Microsoft Baseline Security Analyzer (MBSA), which is based on Shavlik's HFNetChk (the acronym designating HotFix Network Checker), released in 2001.[7] This technology has evolved, but is still the core technology in the current product offerings and has been licensed by multiple OEM partners to provide patch management capabilities to a variety of IT management solutions with a combined install base of millions of users across the globe.[8]
In the late 2000s, the industry saw the applications being exploited by hackers shift from Microsoft operating systems and other Microsoft applications to third-party applications like Java, Adobe, music players, and non-Microsoft web browsers. During this time, products like Microsoft System Center Configuration Manager (SCCM) provided Windows patch capabilities via Windows Server Update Services (WSUS); however, SCCM did not (and still does not) patch third-party products. According to global analyst firm Gartner, this left administrators with limited choices: don't patch third-party products, leaving the network at risk; author and test a custom patch each time a third-party product requires an update; or deploy the patches manually to each affected machine.[9]
In April 2010, Shavlik released SCUPdates – a catalog of patch content that automated the process of building third-party patches and delivering them to Windows clients via an integration with Microsoft System Center Updates Publisher (SCUP) and SCCM. In tandem with the initial SCUPdates release, Microsoft and Shavlik also announced Shavlik's inclusion into the Microsoft System Center Alliance.[10]
In 2010 Shavlik released IT.Shavlik, which provided a web-based front-end to the traditional Shavlik toolkit of asset inventory, patch scanning, and patch deployment. This Software as a Service (SaaS) application simplified the workflow for inventory and systems patching beyond what was possible with the on-premises Protect solution.
In early 2009, Shavlik formed an OEM partnership with VMware to build a cloud-based application designed to help IT administrators in smaller businesses deploy a virtual environment. VMware Go (vGo) was intended to be an "onramp to virtualization," serving smaller customers until they were ready to upgrade to the more sophisticated vCenter suite. vGo was originally brought to market as a free-use cloud-based product.
VMware and Shavlik invested heavily in vGo, and the product was expanded to include asset inventory, patch scanning, and an IT advisor recommendation engine. Later, in an attempt to monetize vGo's services, a paid version called VMware Go Pro introduced patch deployment. This led to the migration of users from IT.Shavlik to VMware Go.
VMware's interest in VMware Go as well as the virtual infrastructure patching capabilities within Protect led to its acquisition of Shavlik Technologies in May 2011. The terms of the acquisition were not publicly disclosed.[12]
In January 2013, VMware announced its intent to "sharpen its focus" on the software-defined data center and hybrid cloud services.[13]As part of this realignment, VMware sought to sell off products that weren't contributing to its core business such as its SlideRocket presentation software and other "non-key cloud and virtualization technologies."[14]The Shavlik product line found itself on that list.
In April 2013, LANDesk Software purchased the Shavlik business unit and all rights to the Shavlik products from VMware. At the same time LANDesk announced a partnership which added VMware to LANDesk's list of Alliance Partners.[15] Shavlik's move to LANDesk triggered new investment in Shavlik Patch for Microsoft System Center (formerly SCUPdates) as well as other products that enhance the experience for companies using SCCM.
In early 2017, Clearlake Capital acquired LANDesk and Shavlik, along with HEAT Software, AppSense and Wavelink; the combined company uses a new corporate name and product brand, Ivanti.[16][17]
|
https://en.wikipedia.org/wiki/Shavlik_Technologies
|
A white hat (or a white-hat hacker, a whitehat) is an ethical security hacker.[1][2] Ethical hacking is a term meant to imply a broader category than just penetration testing.[3][4] Under the owner's consent, white-hat hackers aim to identify any vulnerabilities or security issues the current system has.[5] The white hat is contrasted with the black hat, a malicious hacker; this definitional dichotomy comes from Western films, where heroic and antagonistic cowboys might traditionally wear a white and a black hat, respectively.[6] There is a third kind of hacker known as a grey hat who hacks with good intentions but at times without permission.[7]
White-hat hackers may also work in teams called "sneakers and/or hacker clubs",[8]red teams, ortiger teams.[9]
One of the first instances of an ethical hack being used was a "security evaluation" conducted by the United States Air Force, in which the Multics operating systems were tested for "potential use as a two-level (secret/top secret) system." The evaluation determined that while Multics was "significantly better than other conventional systems," it also had "...vulnerabilities in hardware security, software security and procedural security" that could be uncovered with "a relatively low level of effort."[10] The authors performed their tests under a guideline of realism, so their results would accurately represent the kinds of access an intruder could potentially achieve. They performed tests involving simple information-gathering exercises, as well as outright attacks upon the system that might damage its integrity; both results were of interest to the target audience. There are several other now-unclassified reports describing ethical hacking activities within the US military.
By 1981 The New York Times described white-hat activities as part of a "mischievous but perversely positive 'hacker' tradition". When a National CSS employee revealed the existence of his password cracker, which he had used on customer accounts, the company chastised him not for writing the software but for not disclosing it sooner. The letter of reprimand stated "The Company realizes the benefit to NCSS and encourages the efforts of employees to identify security weaknesses to the VP, the directory, and other sensitive software in files".[11]
On October 20, 2016, the Department of Defense (DOD) announced "Hack The Pentagon."[12][13]
The idea to bring this tactic of ethical hacking to assess the security of systems and point out vulnerabilities was formulated by Dan Farmer and Wietse Venema. To raise the overall level of security on the Internet and intranets, they proceeded to describe how they were able to gather enough information about their targets to have been able to compromise security if they had chosen to do so. They provided several specific examples of how this information could be gathered and exploited to gain control of the target, and how such an attack could be prevented. They gathered up all the tools they had used during their work, packaged them in a single, easy-to-use application, and gave it away to anyone who chose to download it. Their program, called Security Administrator Tool for Analyzing Networks, or SATAN, was met with a great amount of media attention around the world in 1992.[9]
While penetration testing concentrates on attacking software and computer systems from the start – scanning ports, examining known defects in protocols and applications running on the system, and patch installations, for example – ethical hacking may include other things. A full-scale ethical hack might include emailing staff to ask for password details or rummaging through executive dustbins, usually without the knowledge and consent of the targets; only the owners, CEOs, and board members (stakeholders) who asked for a security review of this magnitude are aware. To try to replicate some of the destructive techniques a real attack might employ, ethical hackers may arrange for cloned test systems, or organize a hack late at night while systems are less critical.[14] In most recent cases these hacks perpetuate a long-term con (days, if not weeks, of human infiltration into an organization). Some examples include leaving USB flash drives with hidden auto-start software in a public area, as if someone had lost the small drive and an unsuspecting employee found it and took it.
Some other methods of carrying out these include:
The methods identified exploit known security vulnerabilities and attempt to evade security to gain entry into secured areas. They can do this by hiding software and system 'back-doors' that can be used as a link to information or access that a non-ethical hacker, also known as a 'black hat' or 'grey hat', may want to reach.
Belgium legalized white hat hacking in February 2023.[15]
In July 2021, the Chinese government moved from a system of voluntary reporting to one of legally mandating that all white hat hackers first report any vulnerabilities to the government before taking any further steps to address the vulnerability or make it known to the public.[16] Commentators described the change as creating a "dual purpose" in which white hat activity also serves the country's intelligence agencies.[16]
Struan Robertson, legal director at Pinsent Masons LLP and editor of OUT-LAW.com, says "Broadly speaking, if the access to a system is authorized, the hacking is ethical and legal. If it isn't, there's an offense under the Computer Misuse Act. The unauthorized access offense covers everything from guessing the password to accessing someone's webmail account, to cracking the security of a bank. The maximum penalty for unauthorized access to a computer is two years in prison and a fine. There are higher penalties – up to 10 years in prison – when the hacker also modifies data". Unauthorized access even to expose vulnerabilities for the benefit of many is not legal, says Robertson. "There's no defense in our hacking laws that your behavior is for the greater good. Even if it's what you believe."[4]
The United States National Security Agency offers certifications such as the CNSS 4011. Such a certification covers orderly, ethical hacking techniques and team management. Aggressor teams are called "red" teams. Defender teams are called "blue" teams.[8] When the agency recruited at DEF CON in 2020, it promised applicants that "If you have a few, shall we say, indiscretions in your past, don't be alarmed. You shouldn't automatically assume you won't be hired".[17]
A good "white hat" is a competitive, skillful employee for an enterprise, since they can act as a countermeasure, finding the bugs that threaten the enterprise network environment. Therefore, a good "white hat" can bring unexpected benefits by reducing risk across systems, applications, and endpoints for an enterprise.[18]
Recent research has indicated that white-hat hackers are increasingly becoming an important aspect of a company's network security protection. Moving beyond just penetration testing, white-hat hackers are building and changing their skill sets, since the threats are also changing. Their skills now involve social engineering, mobile technology, and social networking.[19]
|
https://en.wikipedia.org/wiki/White_hat_(computer_security)
|