In number theory, Ramanujan's sum, usually denoted $c_q(n)$, is a function of two positive integer variables $q$ and $n$ defined by the formula
$$c_q(n)=\sum_{\substack{1\le a\le q\\(a,q)=1}}e^{2\pi i\frac{a}{q}n},$$
where $(a,q)=1$ means that $a$ only takes on values coprime to $q$.
Srinivasa Ramanujan mentioned the sums in a 1918 paper.[1] In addition to the expansions discussed in this article, Ramanujan's sums are used in the proof of Vinogradov's theorem that every sufficiently large odd number is the sum of three primes.[2]
For integers $a$ and $b$, $a\mid b$ is read "$a$ divides $b$" and means that there is an integer $c$ such that $\frac{b}{a}=c$. Similarly, $a\nmid b$ is read "$a$ does not divide $b$". The summation symbol
$$\sum_{d\mid m}f(d)$$
means that $d$ goes through all the positive divisors of $m$, e.g.
$$\sum_{d\mid 12}f(d)=f(1)+f(2)+f(3)+f(4)+f(6)+f(12).$$
$(a,b)$ is the greatest common divisor,
$\phi(n)$ is Euler's totient function,
$\mu(n)$ is the Möbius function, and
$\zeta(s)$ is the Riemann zeta function.
These formulas come from the definition, Euler's formula $e^{ix}=\cos x+i\sin x$, and elementary trigonometric identities.
and so on (OEIS: A000012, OEIS: A033999, OEIS: A099837, OEIS: A176742, ..., OEIS: A100051, ...). $c_q(n)$ is always an integer.
Let $\zeta_q=e^{\frac{2\pi i}{q}}$. Then $\zeta_q$ is a root of the equation $x^q-1=0$. Each of its powers $\zeta_q,\zeta_q^2,\ldots,\zeta_q^q=\zeta_q^0=1$ is also a root. Therefore, since there are $q$ of them, they are all of the roots. The numbers $\zeta_q^n$ where $1\le n\le q$ are called the $q$-th roots of unity. $\zeta_q$ is called a primitive $q$-th root of unity because the smallest value of $n$ that makes $\zeta_q^n=1$ is $q$. The other primitive $q$-th roots of unity are the numbers $\zeta_q^a$ where $(a,q)=1$. Therefore, there are $\phi(q)$ primitive $q$-th roots of unity.
Thus, the Ramanujan sum $c_q(n)$ is the sum of the $n$-th powers of the primitive $q$-th roots of unity:
$$c_q(n)=\sum_{\substack{1\le a\le q\\(a,q)=1}}\left(\zeta_q^a\right)^n.$$
It is a fact[3] that the powers of $\zeta_q$ are precisely the primitive roots for all the divisors of $q$.
Example. Let $q=12$. Then $\zeta_{12},\zeta_{12}^5,\zeta_{12}^7,\zeta_{12}^{11}$ are the primitive twelfth roots of unity; $\zeta_{12}^2,\zeta_{12}^{10}$ are the primitive sixth roots; $\zeta_{12}^3,\zeta_{12}^9$ are the primitive fourth roots; $\zeta_{12}^4,\zeta_{12}^8$ are the primitive third roots; $\zeta_{12}^6$ is the primitive second root; and $\zeta_{12}^{12}$ is the primitive first root of unity.
Therefore, if
$$\eta_q(n)=\sum_{k=1}^{q}\zeta_q^{kn}$$
is the sum of the $n$-th powers of all the roots, primitive and imprimitive,
$$\eta_q(n)=\sum_{d\mid q}c_d(n),$$
and by Möbius inversion,
$$c_q(n)=\sum_{d\mid q}\mu\!\left(\frac{q}{d}\right)\eta_d(n).$$
It follows from the identity $x^q-1=(x-1)(x^{q-1}+x^{q-2}+\cdots+x+1)$ that
$$\eta_q(n)={\begin{cases}q&q\mid n\\0&q\nmid n\end{cases}}$$
and this leads to the formula
$$c_q(n)=\sum_{d\mid(q,n)}\mu\!\left(\frac{q}{d}\right)d,$$
published by Kluyver in 1906.[4]
This shows that $c_q(n)$ is always an integer. Compare it with the formula
$$\phi(q)=\sum_{d\mid q}\mu\!\left(\frac{q}{d}\right)d.$$
It is easily shown from the definition that $c_q(n)$ is multiplicative when considered as a function of $q$ for a fixed value of $n$:[5] i.e.
$$c_{qr}(n)=c_q(n)\,c_r(n)\qquad\text{whenever }(q,r)=1.$$
From the definition (or Kluyver's formula) it is straightforward to prove that, if $p$ is a prime number,
$$c_p(n)={\begin{cases}-1&p\nmid n\\p-1&p\mid n\end{cases}}$$
and if $p^k$ is a prime power where $k>1$,
$$c_{p^k}(n)={\begin{cases}0&p^{k-1}\nmid n\\-p^{k-1}&p^{k-1}\mid n,\ p^k\nmid n\\p^{k-1}(p-1)&p^k\mid n.\end{cases}}$$
This result and the multiplicative property can be used to prove
$$c_q(n)=\mu\!\left(\frac{q}{(q,n)}\right)\frac{\phi(q)}{\phi\!\left(\frac{q}{(q,n)}\right)}.$$
This is called von Sterneck's arithmetic function.[6] The equivalence of it and Ramanujan's sum is due to Hölder.[7][8]
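Since the section leans on several equivalent expressions for $c_q(n)$, a short self-contained Python sketch (the function names are ours, not from any library) can make the equivalence concrete: it computes $c_q(n)$ directly from the definition as a sum over primitive $q$-th roots of unity, and again via Kluyver's divisor-sum formula, and checks that the two agree.

```python
import cmath
from math import gcd

def c_definition(q, n):
    """Ramanujan's sum from the definition: sum of e^(2*pi*i*a*n/q) over (a, q) = 1."""
    total = sum(cmath.exp(2j * cmath.pi * a * n / q)
                for a in range(1, q + 1) if gcd(a, q) == 1)
    return round(total.real)  # the imaginary parts cancel; the sum is a real integer

def mobius(n):
    """Moebius function via trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0       # n has a squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def c_kluyver(q, n):
    """Kluyver's formula: c_q(n) = sum over d | gcd(q, n) of mu(q/d) * d."""
    g = gcd(q, n)
    return sum(mobius(q // d) * d for d in range(1, g + 1) if g % d == 0)

# The two computations agree, and c_q(n) is visibly an integer:
assert all(c_definition(q, n) == c_kluyver(q, n)
           for q in range(1, 25) for n in range(1, 25))
print(c_definition(12, 1), c_kluyver(12, 1))  # both print 0, i.e. mu(12)
```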
For all positive integers $q$,
$$c_1(q)=1,\qquad c_q(1)=\mu(q),\qquad c_q(q)=\phi(q).$$
For a fixed value of $q$ the absolute value of the sequence $\{c_q(1),c_q(2),\ldots\}$ is bounded by $\phi(q)$, and for a fixed value of $n$ the absolute value of the sequence $\{c_1(n),c_2(n),\ldots\}$ is bounded by $n$.
If $q>1$,
$$\sum_{n=a}^{a+q-1}c_q(n)=0.$$
Let $m_1,m_2>0$, $m=\operatorname{lcm}(m_1,m_2)$. Then[9] Ramanujan's sums satisfy an orthogonality property:
$$\frac{1}{m}\sum_{k=1}^{m}c_{m_1}(k)\,c_{m_2}(k)={\begin{cases}\phi(m)&m_1=m_2=m,\\0&\text{otherwise.}\end{cases}}$$
Let $n,k>0$. Then[10]
known as the Brauer–Rademacher identity.
If $n>0$ and $a$ is any integer, we also have[11]
due to Cohen.
If $f(n)$ is an arithmetic function (i.e. a complex-valued function of the integers or natural numbers), then a convergent infinite series of the form
$$f(n)=\sum_{q=1}^{\infty}a_q c_q(n)$$
or of the form
$$f(q)=\sum_{n=1}^{\infty}a_n c_q(n),$$
where the $a_k\in\mathbb{C}$, is called a Ramanujan expansion[12] of $f(n)$.
Ramanujan found expansions of some of the well-known functions of number theory. All of these results are proved in an "elementary" manner (i.e. only using formal manipulations of series and the simplest results about convergence).[13][14][15]
The expansion of the zero function depends on a result from the analytic theory of prime numbers, namely that the series
$$\sum_{q=1}^{\infty}\frac{\mu(q)}{q}$$
converges to 0, and the results for $r(n)$ and $r'(n)$ depend on theorems in an earlier paper.[16]
All the formulas in this section are from Ramanujan's 1918 paper.
The generating functions of the Ramanujan sums are Dirichlet series:
$$\zeta(s)\sum_{\delta\mid q}\mu\!\left(\frac{q}{\delta}\right)\delta^{1-s}=\sum_{n=1}^{\infty}\frac{c_q(n)}{n^s}$$
is a generating function for the sequence $c_q(1),c_q(2),\ldots$ where $q$ is kept constant, and
$$\frac{\sigma_{r-1}(n)}{n^{r-1}\zeta(r)}=\sum_{q=1}^{\infty}\frac{c_q(n)}{q^r}$$
is a generating function for the sequence $c_1(n),c_2(n),\ldots$ where $n$ is kept constant.
There is also the double Dirichlet series
$$\sum_{q=1}^{\infty}\sum_{n=1}^{\infty}\frac{c_q(n)}{q^s n^r}=\frac{\zeta(r)\,\zeta(r+s-1)}{\zeta(s)}.$$
The polynomial with Ramanujan sums as coefficients can be expressed in terms of the cyclotomic polynomial:[17]
$\sigma_k(n)$ is the divisor function (i.e. the sum of the $k$-th powers of the divisors of $n$, including 1 and $n$). $\sigma_0(n)$, the number of divisors of $n$, is usually written $d(n)$, and $\sigma_1(n)$, the sum of the divisors of $n$, is usually written $\sigma(n)$.
If $s>0$,
$$\sigma_s(n)=n^s\zeta(s+1)\sum_{q=1}^{\infty}\frac{c_q(n)}{q^{s+1}}.$$
Setting $s=1$ gives
$$\sigma(n)=\frac{\pi^2}{6}\,n\sum_{q=1}^{\infty}\frac{c_q(n)}{q^2}.$$
If the Riemann hypothesis is true, and $-\tfrac{1}{2}<s<\tfrac{1}{2}$,
$d(n)=\sigma_0(n)$ is the number of divisors of $n$, including 1 and $n$ itself.
where $\gamma=0.5772\ldots$ is the Euler–Mascheroni constant.
Euler's totient function $\phi(n)$ is the number of positive integers less than $n$ and coprime to $n$. Ramanujan defines a generalization of it: if
$$n=p_1^{a_1}p_2^{a_2}p_3^{a_3}\cdots$$
is the prime factorization of $n$, and $s$ is a complex number, let
$$\phi_s(n)=n^s(1-p_1^{-s})(1-p_2^{-s})(1-p_3^{-s})\cdots,$$
so that $\phi_1(n)=\phi(n)$ is Euler's function.[18]
He proves that
and uses this to show that
Letting $s=1$,
Note that the constant is the inverse[19] of the one in the formula for $\sigma(n)$.
Von Mangoldt's function $\Lambda(n)=0$ unless $n=p^k$ is a power of a prime number, in which case it is the natural logarithm $\log p$.
For all $n>0$,
$$0=c_1(n)+\frac{c_2(n)}{2}+\frac{c_3(n)}{3}+\cdots.$$
This is equivalent to the prime number theorem.[20][21]
$r_{2s}(n)$ is the number of ways of representing $n$ as the sum of $2s$ squares, counting different orders and signs as different (e.g., $r_2(13)=8$, as $13=(\pm2)^2+(\pm3)^2=(\pm3)^2+(\pm2)^2$).
Ramanujan defines a function $\delta_{2s}(n)$ and references a paper[22] in which he proved that $r_{2s}(n)=\delta_{2s}(n)$ for $s=1,2,3,$ and 4. For $s>4$ he shows that $\delta_{2s}(n)$ is a good approximation to $r_{2s}(n)$.
$s=1$ has a special formula:
In the following formulas the signs repeat with a period of 4.
and therefore,
$r'_{2s}(n)$ is the number of ways $n$ can be represented as the sum of $2s$ triangular numbers (i.e. the numbers $1, 3=1+2, 6=1+2+3, 10=1+2+3+4, 15,\ldots$; the $n$-th triangular number is given by the formula $n(n+1)/2$).
The analysis here is similar to that for squares. Ramanujan refers to the same paper as he did for the squares, where he showed that there is a function $\delta'_{2s}(n)$ such that $r'_{2s}(n)=\delta'_{2s}(n)$ for $s=1,2,3,$ and 4, and that for $s>4$, $\delta'_{2s}(n)$ is a good approximation to $r'_{2s}(n)$.
Again, $s=1$ requires a special formula:
If $s$ is a multiple of 4,
Therefore,
Let
Then for $s>1$,
These sums are obviously of great interest, and a few of their properties have been discussed already. But, so far as I know, they have never been considered from the point of view which I adopt in this paper; and I believe that all the results which it contains are new.
The majority of my formulae are "elementary" in the technical sense of the word — they can (that is to say) be proved by a combination of processes involving only finite algebra and simple general theorems concerning infinite series.
Source: https://en.wikipedia.org/wiki/Ramanujan%27s_sum
In machine learning, a hyperparameter is a parameter that can be set in order to define any configurable part of a model's learning process. Hyperparameters can be classified as either model hyperparameters (such as the topology and size of a neural network) or algorithm hyperparameters (such as the learning rate and the batch size of an optimizer). These are named hyperparameters in contrast to parameters, which are characteristics that the model learns from the data.
Hyperparameters are not required by every model or algorithm. Some simple algorithms such as ordinary least squares regression require none. However, the LASSO algorithm, for example, adds a regularization hyperparameter to ordinary least squares which must be set before training.[1] Even models and algorithms without a strict requirement to define hyperparameters may not produce meaningful results if these are not carefully chosen. However, optimal values for hyperparameters are not always easy to predict. Some hyperparameters may have no meaningful effect, or one important variable may be conditional upon the value of another. Often a separate process of hyperparameter tuning is needed to find a suitable combination for the data and task.
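For example, in scikit-learn the LASSO regularization strength is exposed as the constructor argument alpha, which must be fixed before fitting; a minimal sketch (the synthetic data here is ours, purely for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.5, 0.0, -2.0, 0.0, 0.5]) + rng.normal(scale=0.1, size=100)

# alpha is a hyperparameter: chosen before training, not learned from the data.
model = Lasso(alpha=0.1)
model.fit(X, y)

# The fitted coefficients are parameters: learned from the data during fit().
print(model.coef_)
```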
As well as improving model performance, hyperparameters can be used by researchers to introduce robustness and reproducibility into their work, especially if it uses models that incorporate random number generation.
The time required to train and test a model can depend upon the choice of its hyperparameters.[2] A hyperparameter is usually of continuous or integer type, leading to mixed-type optimization problems.[2] The existence of some hyperparameters is conditional upon the value of others, e.g. the size of each hidden layer in a neural network can be conditional upon the number of layers.[2]
The objective function is typically non-differentiable with respect to hyperparameters.[clarification needed] As a result, in most instances, hyperparameters cannot be learned using gradient-based optimization methods (such as gradient descent), which are commonly employed to learn model parameters. These hyperparameters are those parameters describing a model representation that cannot be learned by common optimization methods, but nonetheless affect the loss function. An example would be the tolerance hyperparameter for errors in support vector machines.
Sometimes, hyperparameters cannot be learned from the training data because they aggressively increase the capacity of a model and can push the loss function to an undesired minimum (overfitting to the data), as opposed to correctly mapping the richness of the structure in the data. For example, if we treat the degree of a polynomial equation fitting a regression model as a trainable parameter, the degree would increase until the model perfectly fit the data, yielding low training error but poor generalization performance.
Most performance variation can be attributed to just a few hyperparameters.[3][2][4] The tunability of an algorithm, hyperparameter, or interacting hyperparameters is a measure of how much performance can be gained by tuning it.[5] For an LSTM, while the learning rate followed by the network size are its most crucial hyperparameters,[6] batching and momentum have no significant effect on its performance.[7]
Although some research has advocated the use of mini-batch sizes in the thousands, other work has found the best performance with mini-batch sizes between 2 and 32.[8]
An inherent stochasticity in learning directly implies that the empirical hyperparameter performance is not necessarily its true performance.[2] Methods that are not robust to simple changes in hyperparameters, random seeds, or even different implementations of the same algorithm cannot be integrated into mission-critical control systems without significant simplification and robustification.[9]
Reinforcement learning algorithms, in particular, require measuring their performance over a large number of random seeds, and also measuring their sensitivity to choices of hyperparameters.[9] Their evaluation with a small number of random seeds does not capture performance adequately due to high variance.[9] Some reinforcement learning methods, e.g. DDPG (Deep Deterministic Policy Gradient), are more sensitive to hyperparameter choices than others.[9]
Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined loss function on given test data.[2] The objective function takes a tuple of hyperparameters and returns the associated loss.[2] Typically these methods are not gradient based, and instead apply concepts from derivative-free optimization or black-box optimization.
Apart from tuning hyperparameters, machine learning involves storing and organizing the parameters and results, and making sure they are reproducible.[10] In the absence of a robust infrastructure for this purpose, research code often evolves quickly and compromises essential aspects like bookkeeping and reproducibility.[11] Online collaboration platforms for machine learning go further by allowing scientists to automatically share, organize and discuss experiments, data, and algorithms.[12] Reproducibility can be particularly difficult for deep learning models.[13] For example, research has shown that deep learning models depend very heavily even on the random seed selection of the random number generator.[14]
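A common minimal step toward such reproducibility is pinning every random seed the experiment touches; a generic sketch (which libraries need seeding depends on the project):

```python
import random

import numpy as np

def set_seeds(seed: int) -> None:
    """Pin the RNGs used by the experiment so repeated runs are identical."""
    random.seed(seed)     # Python's built-in RNG
    np.random.seed(seed)  # NumPy's legacy global RNG

set_seeds(42)
print(np.random.rand(3))  # same three numbers on every run
```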
Source: https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)
In mathematics, the Jacobi elliptic functions are a set of basic elliptic functions. They are found in the description of the motion of a pendulum, as well as in the design of electronic elliptic filters. While trigonometric functions are defined with reference to a circle, the Jacobi elliptic functions are a generalization which refer to other conic sections, the ellipse in particular. The relation to trigonometric functions is contained in the notation, for example, by the matching notation $\operatorname{sn}$ for $\sin$. The Jacobi elliptic functions are used more often in practical problems than the Weierstrass elliptic functions as they do not require notions of complex analysis to be defined and/or understood. They were introduced by Carl Gustav Jakob Jacobi (1829). Carl Friedrich Gauss had already studied special Jacobi elliptic functions in 1797, the lemniscate elliptic functions in particular,[1] but his work was published much later.
There are twelve Jacobi elliptic functions denoted by $\operatorname{pq}(u,m)$, where $\mathrm{p}$ and $\mathrm{q}$ are any of the letters $\mathrm{c}$, $\mathrm{s}$, $\mathrm{n}$, and $\mathrm{d}$. (Functions of the form $\operatorname{pp}(u,m)$ are trivially set to unity for notational completeness.) $u$ is the argument, and $m$ is the parameter, both of which may be complex. In fact, the Jacobi elliptic functions are meromorphic in both $u$ and $m$.[2] The distribution of the zeros and poles in the $u$-plane is well known. However, questions of the distribution of the zeros and poles in the $m$-plane remain to be investigated.[2]
In the complex plane of the argument $u$, the twelve functions form a repeating lattice of simple poles and zeroes.[3] Depending on the function, one repeating parallelogram, or unit cell, will have sides of length $2K$ or $4K$ on the real axis, and $2K'$ or $4K'$ on the imaginary axis, where $K=K(m)$ and $K'=K(1-m)$ are known as the quarter periods, with $K(\cdot)$ being the elliptic integral of the first kind. The nature of the unit cell can be determined by inspecting the "auxiliary rectangle" (generally a parallelogram), which is a rectangle formed by the origin $(0,0)$ at one corner, and $(K,K')$ as the diagonally opposite corner. As in the diagram, the four corners of the auxiliary rectangle are named $\mathrm{s}$, $\mathrm{c}$, $\mathrm{d}$, and $\mathrm{n}$, going counter-clockwise from the origin. The function $\operatorname{pq}(u,m)$ will have a zero at the $\mathrm{p}$ corner and a pole at the $\mathrm{q}$ corner. The twelve functions correspond to the twelve ways of arranging these poles and zeroes in the corners of the rectangle.
When the argument $u$ and parameter $m$ are real, with $0<m<1$, $K$ and $K'$ will be real, the auxiliary parallelogram will in fact be a rectangle, and the Jacobi elliptic functions will all be real valued on the real line.
Since the Jacobi elliptic functions are doubly periodic in $u$, they factor through a torus; in effect, their domain can be taken to be a torus, just as cosine and sine are in effect defined on a circle. Instead of having only one circle, we now have the product of two circles, one real and the other imaginary. The complex plane can be replaced by a complex torus. The circumference of the first circle is $4K$ and the second $4K'$, where $K$ and $K'$ are the quarter periods. Each function has two zeroes and two poles at opposite positions on the torus. Among the points $0$, $K$, $K+iK'$, $iK'$ there is one zero and one pole.
The Jacobi elliptic functions are then doubly periodic, meromorphic functions satisfying the following properties:
The elliptic functions can be given in a variety of notations, which can make the subject unnecessarily confusing. Elliptic functions are functions of two variables. The first variable might be given in terms of the amplitude $\varphi$, or more commonly, in terms of $u$ given below. The second variable might be given in terms of the parameter $m$, or as the elliptic modulus $k$, where $k^2=m$, or in terms of the modular angle $\alpha$, where $m=\sin^2\alpha$. The complements of $k$ and $m$ are defined as $m'=1-m$ and $k'={\sqrt{m'}}$. These four terms are used below without comment to simplify various expressions.
The twelve Jacobi elliptic functions are generally written as $\operatorname{pq}(u,m)$, where $\mathrm{p}$ and $\mathrm{q}$ are any of the letters $\mathrm{c}$, $\mathrm{s}$, $\mathrm{n}$, and $\mathrm{d}$. Functions of the form $\operatorname{pp}(u,m)$ are trivially set to unity for notational completeness. The "major" functions are generally taken to be $\operatorname{cn}(u,m)$, $\operatorname{sn}(u,m)$ and $\operatorname{dn}(u,m)$, from which all other functions can be derived, and expressions are often written solely in terms of these three functions; however, various symmetries and generalizations are often most conveniently expressed using the full set. (This notation is due to Gudermann and Glaisher and is not Jacobi's original notation.)
Throughout this article, $\operatorname{pq}(u,t^2)=\operatorname{pq}(u;t)$.
The functions are notationally related to each other by the multiplication rule: (arguments suppressed)
from which other commonly used relationships can be derived:
The multiplication rule follows immediately from the identification of the elliptic functions with the Neville theta functions.[5]
Also note that:
There is a definition, relating the elliptic functions to the inverse of the incomplete elliptic integral of the first kind $F$. These functions take the parameters $u$ and $m$ as inputs. The $\varphi$ that satisfies
$$u=F(\varphi,m)=\int_0^{\varphi}\frac{\mathrm{d}\theta}{\sqrt{1-m\sin^2\theta}}$$
is called the Jacobi amplitude:
$$\operatorname{am}(u,m)=\varphi.$$
In this framework, the elliptic sine $\operatorname{sn}u$ (Latin: sinus amplitudinis) is given by
$$\operatorname{sn}(u,m)=\sin\operatorname{am}(u,m),$$
the elliptic cosine $\operatorname{cn}u$ (Latin: cosinus amplitudinis) is given by
$$\operatorname{cn}(u,m)=\cos\operatorname{am}(u,m),$$
and the delta amplitude $\operatorname{dn}u$ (Latin: delta amplitudinis)[note 1] by
$$\operatorname{dn}(u,m)={\sqrt{1-m\sin^2\operatorname{am}(u,m)}}.$$
In the above, the value $m$ is a free parameter, usually taken to be real such that $0\leq m\leq 1$ (but it can be complex in general), and so the elliptic functions can be thought of as being given by two variables, $u$ and the parameter $m$. The remaining nine elliptic functions are easily built from the above three ($\operatorname{sn}$, $\operatorname{cn}$, $\operatorname{dn}$), and are given in a section below. Note that when $\varphi=\pi/2$, $u$ then equals the quarter period $K$.
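These definitions can be checked numerically with SciPy, whose ellipj routine returns sn, cn, dn and the amplitude for real arguments; a small sketch:

```python
import numpy as np
from scipy.special import ellipj, ellipkinc

m, phi = 0.7, 0.9
u = ellipkinc(phi, m)          # u = F(phi, m), incomplete integral of the first kind
sn, cn, dn, am = ellipj(u, m)  # sn(u, m), cn(u, m), dn(u, m), am(u, m)

print(np.isclose(am, phi))                     # am(F(phi, m), m) = phi
print(np.isclose(sn, np.sin(phi)))             # sn = sin(am)
print(np.isclose(cn, np.cos(phi)))             # cn = cos(am)
print(np.isclose(dn, np.sqrt(1 - m * sn**2)))  # dn = sqrt(1 - m sin^2(am))
```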
In the most general setting, $\operatorname{am}(u,m)$ is a multivalued function (in $u$) with infinitely many logarithmic branch points (the branches differ by integer multiples of $2\pi$), namely the points $2sK(m)+(4t+1)K(1-m)i$ and $2sK(m)+(4t+3)K(1-m)i$, where $s,t\in\mathbb{Z}$.[6] This multivalued function can be made single-valued by cutting the complex plane along the line segments joining these branch points (the cutting can be done in non-equivalent ways, giving non-equivalent single-valued functions), thus making $\operatorname{am}(u,m)$ analytic everywhere except on the branch cuts. In contrast, $\sin\operatorname{am}(u,m)$ and other elliptic functions have no branch points, give consistent values for every branch of $\operatorname{am}$, and are meromorphic in the whole complex plane. Since every elliptic function is meromorphic in the whole complex plane (by definition), $\operatorname{am}(u,m)$ (when considered as a single-valued function) is not an elliptic function.
However, a particular cutting for $\operatorname{am}(u,m)$ can be made in the $u$-plane by line segments from $2sK(m)+(4t+1)K(1-m)i$ to $2sK(m)+(4t+3)K(1-m)i$ with $s,t\in\mathbb{Z}$; then it only remains to define $\operatorname{am}(u,m)$ at the branch cuts by continuity from some direction. Then $\operatorname{am}(u,m)$ becomes single-valued and singly periodic in $u$ with the minimal period $4iK(1-m)$, and it has singularities at the logarithmic branch points mentioned above. If $m\in\mathbb{R}$ and $m\leq 1$, $\operatorname{am}(u,m)$ is continuous in $u$ on the real line. When $m>1$, the branch cuts of $\operatorname{am}(u,m)$ in the $u$-plane cross the real line at $2(2s+1)K(1/m)/{\sqrt{m}}$ for $s\in\mathbb{Z}$; therefore for $m>1$, $\operatorname{am}(u,m)$ is not continuous in $u$ on the real line and jumps by $2\pi$ at the discontinuities.
But defining $\operatorname{am}(u,m)$ this way gives rise to very complicated branch cuts in the $m$-plane (not the $u$-plane); they have not been fully described as of yet.
Let
$$E(\varphi,m)=\int_0^{\varphi}{\sqrt{1-m\sin^2\theta}}\,\mathrm{d}\theta$$
be the incomplete elliptic integral of the second kind with parameter $m$.
Then the Jacobi epsilon function can be defined as
$${\mathcal{E}}(u,m)=E(\operatorname{am}(u,m),m)$$
for $u\in\mathbb{R}$ and $0<m<1$, and by analytic continuation in each of the variables otherwise: the Jacobi epsilon function is meromorphic in the whole complex plane (in both $u$ and $m$). Alternatively, throughout both the $u$-plane and $m$-plane,[7]
$${\mathcal{E}}(u,m)=\int_0^u\operatorname{dn}(t,m)^2\,\mathrm{d}t.$$
${\mathcal{E}}$ is well-defined in this way because all residues of $t\mapsto\operatorname{dn}(t,m)^2$ are zero, so the integral is path-independent. So the Jacobi epsilon relates the incomplete elliptic integral of the first kind to the incomplete elliptic integral of the second kind:
$${\mathcal{E}}(F(\varphi,m),m)=E(\varphi,m).$$
The Jacobi epsilon function is not an elliptic function, but it appears when differentiating the Jacobi elliptic functions with respect to the parameter.
The Jacobi zn function is defined by
$$\operatorname{zn}(u,m)={\mathcal{E}}(u,m)-\frac{E(m)}{K(m)}u.$$
It is a singly periodic function which is meromorphic in $u$, but not in $m$ (due to the branch cuts of $E$ and $K$). Its minimal period in $u$ is $2K(m)$. It is related to the Jacobi zeta function by $Z(\varphi,m)=\operatorname{zn}(F(\varphi,m),m)$.
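For real $u$ and $0<m<1$, the epsilon and zn functions reduce to standard SciPy special functions, so the definitions above can be checked directly; a sketch:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipe, ellipeinc, ellipj, ellipk

m, u = 0.5, 1.3
sn, cn, dn, am = ellipj(u, m)

eps = ellipeinc(am, m)                 # epsilon(u, m) = E(am(u, m), m)
zn = eps - ellipe(m) / ellipk(m) * u   # zn(u, m) = epsilon(u, m) - E(m)/K(m) * u

# epsilon also equals the integral of dn(t, m)^2 from 0 to u:
eps_quad, _ = quad(lambda t: ellipj(t, m)[2] ** 2, 0, u)
print(np.isclose(eps, eps_quad))  # True
print(zn)
```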
Historically, the Jacobi elliptic functions were first defined by using the amplitude. In more modern texts on elliptic functions, the Jacobi elliptic functions are defined by other means, for example by ratios of theta functions (see below), and the amplitude is ignored.
In modern terms, the relation to elliptic integrals would be expressed by $\operatorname{sn}(F(\varphi,m),m)=\sin\varphi$ (or $\operatorname{cn}(F(\varphi,m),m)=\cos\varphi$) instead of $\operatorname{am}(F(\varphi,m),m)=\varphi$.
$\cos\varphi$ and $\sin\varphi$ are defined on the unit circle, with radius $r=1$ and angle $\varphi=$ arc length of the unit circle measured from the positive $x$-axis. Similarly, Jacobi elliptic functions are defined on the unit ellipse,[citation needed] with $a=1$. Let
then:
For each angle $\varphi$ the parameter
$$u=F(\varphi,m)$$
(the incomplete elliptic integral of the first kind) is computed.
On the unit circle ($a=b=1$), $u$ would be an arc length.
However, the relation of $u$ to the arc length of an ellipse is more complicated.[8]
Let $P=(x,y)=(r\cos\varphi,r\sin\varphi)$ be a point on the ellipse, and let $P'=(x',y')=(\cos\varphi,\sin\varphi)$ be the point where the unit circle intersects the line between $P$ and the origin $O$.
Then the familiar relations from the unit circle,
$$x'=\cos\varphi,\qquad y'=\sin\varphi,$$
read for the ellipse:
$$x'=\operatorname{cn}(u,m),\qquad y'=\operatorname{sn}(u,m).$$
So the projections of the intersection point $P'$ of the line $OP$ with the unit circle on the $x$- and $y$-axes are simply $\operatorname{cn}(u,m)$ and $\operatorname{sn}(u,m)$. These projections may be interpreted as 'definition as trigonometry'. In short:
$$\operatorname{cn}(u,m)=\cos(\operatorname{am}(u,m)),\qquad\operatorname{sn}(u,m)=\sin(\operatorname{am}(u,m)).$$
For the $x$ and $y$ value of the point $P$ with $u$ and parameter $m$ we get, after inserting the relation
$$r(\varphi,m)=\frac{1}{\operatorname{dn}(u,m)}$$
into $x=r(\varphi,m)\cos(\varphi)$, $y=r(\varphi,m)\sin(\varphi)$, that
$$x=\frac{\operatorname{cn}(u,m)}{\operatorname{dn}(u,m)},\qquad y=\frac{\operatorname{sn}(u,m)}{\operatorname{dn}(u,m)}.$$
The following table summarizes the expressions for all Jacobi elliptic functions $\operatorname{pq}(u,m)$ in the variables $(x,y,r)$ and $(\varphi,\operatorname{dn})$ with $r={\sqrt{x^2+y^2}}$.
Equivalently, Jacobi's elliptic functions can be defined in terms of the theta functions.[9] With $z,\tau\in\mathbb{C}$ such that $\operatorname{Im}\tau>0$, let
and let $\theta_2(\tau)=\theta_2(0|\tau)$, $\theta_3(\tau)=\theta_3(0|\tau)$, $\theta_4(\tau)=\theta_4(0|\tau)$. Then with $K=K(m)$, $K'=K(1-m)$, $\zeta=\pi u/(2K)$ and $\tau=iK'/K$,
The Jacobi zn function can be expressed by theta functions as well:
where $'$ denotes the partial derivative with respect to the first variable.
In fact, the definition of the Jacobi elliptic functions in Whittaker & Watson is stated a little bit differently than the one given above (but it's equivalent to it) and relies on modular inversion: The function $\lambda$, defined by
assumes every value in $\mathbb{C}-\{0,1\}$ once and only once[10] in
where $\mathbb{H}$ is the upper half-plane in the complex plane, $\partial F_1$ is the boundary of $F_1$ and
In this way, each $m\,{\overset{\text{def}}{=}}\,\lambda(\tau)\in\mathbb{C}-\{0,1\}$ can be associated with one and only one $\tau$. Then Whittaker & Watson define the Jacobi elliptic functions by
where $\zeta=u/\theta_3(\tau)^2$.
In the book, they place an additional restriction on $m$ (that $m\notin(-\infty,0)\cup(1,\infty)$), but it is in fact not a necessary restriction (see the Cox reference). Also, if $m=0$ or $m=1$, the Jacobi elliptic functions degenerate to non-elliptic functions, as described below.
The Jacobi elliptic functions can be defined very simply using the Neville theta functions:[11]
Simplifications of complicated products of the Jacobi elliptic functions are often made easier using these identities.
The Jacobi imaginary transformations relate various functions of the imaginary variable $iu$ or, equivalently, relations between various values of the $m$ parameter. In terms of the major functions:[12]: 506
Using the multiplication rule, all other functions may be expressed in terms of the above three. The transformations may be generally written as $\operatorname{pq}(u,m)=\gamma_{\operatorname{pq}}\operatorname{pq}'(iu,1-m)$. The following table gives the $\gamma_{\operatorname{pq}}\operatorname{pq}'(iu,1-m)$ for the specified $\operatorname{pq}(u,m)$.[11] (The arguments $(iu,1-m)$ are suppressed.)
Since the hyperbolic trigonometric functions are proportional to the circular trigonometric functions with imaginary arguments, it follows that the Jacobi functions will yield the hyperbolic functions for $m=1$.[5]: 249 In the figure, the Jacobi curve has degenerated to two vertical lines at $x=1$ and $x=-1$.
The Jacobi real transformations[5]: 308 yield expressions for the elliptic functions in terms of alternate values of $m$. The transformations may be generally written as $\operatorname{pq}(u,m)=\gamma_{\operatorname{pq}}\operatorname{pq}'(ku,1/m)$. The following table gives the $\gamma_{\operatorname{pq}}\operatorname{pq}'(ku,1/m)$ for the specified $\operatorname{pq}(u,m)$.[11] (The arguments $(ku,1/m)$ are suppressed.)
Jacobi's real and imaginary transformations can be combined in various ways to yield three more simple transformations.[5]: 214 The real and imaginary transformations are two transformations in a group ($D_3$ or anharmonic group) of six transformations. If
$$\mu_R(m)=\frac{1}{m}$$
is the transformation for the $m$ parameter in the real transformation, and
$$\mu_I(m)=1-m$$
is the transformation of $m$ in the imaginary transformation, then the other transformations can be built up by successive application of these two basic transformations, yielding only three more possibilities:
These five transformations, along with the identity transformation ($\mu_U(m)=m$), yield the six-element group. With regard to the Jacobi elliptic functions, the general transformation can be expressed using just three functions:
where $i=$ U, I, IR, R, RI, or RIR identifies the transformation, $\gamma_i$ is a multiplication factor common to these three functions, and the prime indicates the transformed function. The other nine transformed functions can be built up from the above three. The reason the cs, ns, ds functions were chosen to represent the transformation is that the other functions will be ratios of these three (except for their inverses) and the multiplication factors will cancel.
The following table lists the multiplication factors for the three ps functions, the transformed $m$'s, and the transformed function names for each of the six transformations.[5]: 214 (As usual, $k^2=m$, $1-k^2=k_1^2=m'$, and the arguments ($\gamma_i u$, $\mu_i(m)$) are suppressed.)
Thus, for example, we may build the following table for the RIR transformation.[11] The transformation is generally written $\operatorname{pq}(u,m)=\gamma_{\operatorname{pq}}\,\operatorname{pq'}(k'u,-m/m')$. (The arguments $(k'u,-m/m')$ are suppressed.)
The value of the Jacobi transformations is that any set of Jacobi elliptic functions with any real-valued parameter $m$ can be converted into another set for which $0<m\leq 1/2$ and, for real values of $u$, the function values will be real.[5]: 215
In the following, the second variable is suppressed and is equal to $m$:
where both identities are valid for all $u,v,m\in\mathbb{C}$ such that both sides are well-defined.
With
we have
where all the identities are valid for all $u,m\in\mathbb{C}$ such that both sides are well-defined.
Introducing complex numbers, our ellipse has an associated hyperbola:
from applying Jacobi's imaginary transformation[11] to the elliptic functions in the above equation for $x$ and $y$.
It follows that we can put $x=\operatorname{dn}(u,1-m)$, $y=\operatorname{sn}(u,1-m)$. So our ellipse has a dual ellipse with $m$ replaced by $1-m$. This leads to the complex torus mentioned in the introduction.[13] Generally, $m$ may be a complex number, but when $m$ is real and $m<0$, the curve is an ellipse with major axis in the $x$ direction. At $m=0$ the curve is a circle, and for $0<m<1$, the curve is an ellipse with major axis in the $y$ direction. At $m=1$, the curve degenerates into two vertical lines at $x=\pm1$. For $m>1$, the curve is a hyperbola. When $m$ is complex but not real, $x$ or $y$ or both are complex and the curve cannot be described on a real $x$–$y$ diagram.
Reversing the order of the two letters of the function name results in the reciprocals of the three functions above:
$$\operatorname{ns}(u)=\frac{1}{\operatorname{sn}(u)},\qquad\operatorname{nc}(u)=\frac{1}{\operatorname{cn}(u)},\qquad\operatorname{nd}(u)=\frac{1}{\operatorname{dn}(u)}.$$
Similarly, the ratios of the three primary functions correspond to the first letter of the numerator followed by the first letter of the denominator:
$$\operatorname{sc}(u)=\frac{\operatorname{sn}(u)}{\operatorname{cn}(u)},\qquad\operatorname{sd}(u)=\frac{\operatorname{sn}(u)}{\operatorname{dn}(u)},\qquad\operatorname{cd}(u)=\frac{\operatorname{cn}(u)}{\operatorname{dn}(u)},$$
and so on. More compactly, we have
$$\operatorname{pq}(u)=\frac{\operatorname{pr}(u)}{\operatorname{qr}(u)},$$
where p and q are any of the letters s, c, d, and r is any of the letters s, c, d, n.
In the complex plane of the argument $u$, the Jacobi elliptic functions form a repeating pattern of poles (and zeroes). The residues of the poles all have the same absolute value, differing only in sign. Each function $\operatorname{pq}(u,m)$ has an "inverse function" (in the multiplicative sense) $\operatorname{qp}(u,m)$ in which the positions of the poles and zeroes are exchanged. The periods of repetition are generally different in the real and imaginary directions, hence the use of the term "doubly periodic" to describe them.
For the Jacobi amplitude and the Jacobi epsilon function:
$$\operatorname{am}(u+2K(m),m)=\operatorname{am}(u,m)+\pi,\qquad{\mathcal{E}}(u+2K(m),m)={\mathcal{E}}(u,m)+2E(m),$$
where $E(m)$ is the complete elliptic integral of the second kind with parameter $m$.
The double periodicity of the Jacobi elliptic functions may be expressed as:
where $\alpha$ and $\beta$ are any pair of integers. $K(\cdot)$ is the complete elliptic integral of the first kind, also known as the quarter period. The power of negative unity ($\gamma$) is given in the following table:
When the factor $(-1)^{\gamma}$ is equal to $-1$, the equation expresses quasi-periodicity. When it is equal to unity, it expresses full periodicity. It can be seen, for example, that for the entries containing only $\alpha$, when $\alpha$ is even, full periodicity is expressed by the above equation, and the function has full periods of $4K(m)$ and $2iK(1-m)$. Likewise, functions with entries containing only $\beta$ have full periods of $2K(m)$ and $4iK(1-m)$, while those with $\alpha+\beta$ have full periods of $4K(m)$ and $4iK(1-m)$.
In the diagram on the right, which plots one repeating unit for each function, indicating phase along with the location of poles and zeroes, a number of regularities can be noted: the inverse of each function is opposite the diagonal, and has the same size unit cell, with poles and zeroes exchanged. The pole and zero arrangement in the auxiliary rectangle formed by $(0,0)$, $(K,0)$, $(0,K')$ and $(K,K')$ is in accordance with the description of the pole and zero placement in the introduction above. Also, the size of the white ovals indicating poles is a rough measure of the absolute value of the residue for that pole. The residues of the poles closest to the origin in the figure (i.e. in the auxiliary rectangle) are listed in the following table:
When applicable, poles displaced above by $2K$ or displaced to the right by $2K'$ have the same value but with signs reversed, while those diagonally opposite have the same value. Note that poles and zeroes on the left and lower edges are considered part of the unit cell, while those on the upper and right edges are not.
The information about poles can in fact be used to characterize the Jacobi elliptic functions:[14]
The function $u\mapsto\operatorname{sn}(u,m)$ is the unique elliptic function having simple poles at $2rK+(2s+1)iK'$ (with $r,s\in\mathbb{Z}$) with residues $(-1)^r/{\sqrt{m}}$, taking the value $0$ at $0$.
The function $u\mapsto\operatorname{cn}(u,m)$ is the unique elliptic function having simple poles at $2rK+(2s+1)iK'$ (with $r,s\in\mathbb{Z}$) with residues $(-1)^{r+s-1}i/{\sqrt{m}}$, taking the value $1$ at $0$.
The function $u\mapsto\operatorname{dn}(u,m)$ is the unique elliptic function having simple poles at $2rK+(2s+1)iK'$ (with $r,s\in\mathbb{Z}$) with residues $(-1)^{s-1}i$, taking the value $1$ at $0$.
Setting $m=-1$ gives the lemniscate elliptic functions $\operatorname{sl}$ and $\operatorname{cl}$:
When $m=0$ or $m=1$, the Jacobi elliptic functions are reduced to non-elliptic functions:
For the Jacobi amplitude, $\operatorname{am}(u,0)=u$ and $\operatorname{am}(u,1)=\operatorname{gd}u$, where $\operatorname{gd}$ is the Gudermannian function.
In general, if neither of p, q is d, then $\operatorname{pq}(u,1)=\operatorname{pq}(\operatorname{gd}(u),0)$.
$$\operatorname{sn}\left(\frac{u}{2},m\right)=\pm{\sqrt{\frac{1-\operatorname{cn}(u,m)}{1+\operatorname{dn}(u,m)}}}$$
$$\operatorname{cn}\left(\frac{u}{2},m\right)=\pm{\sqrt{\frac{\operatorname{cn}(u,m)+\operatorname{dn}(u,m)}{1+\operatorname{dn}(u,m)}}}$$
$$\operatorname{dn}\left(\frac{u}{2},m\right)=\pm{\sqrt{\frac{m'+\operatorname{dn}(u,m)+m\operatorname{cn}(u,m)}{1+\operatorname{dn}(u,m)}}}$$
Half K formula
$$\operatorname{sn}\left[{\tfrac{1}{2}}K(k);k\right]=\frac{\sqrt{2}}{{\sqrt{1+k}}+{\sqrt{1-k}}}$$
$$\operatorname{cn}\left[{\tfrac{1}{2}}K(k);k\right]=\frac{{\sqrt{2}}\,{\sqrt[{4}]{1-k^2}}}{{\sqrt{1+k}}+{\sqrt{1-k}}}$$
$$\operatorname{dn}\left[{\tfrac{1}{2}}K(k);k\right]={\sqrt[{4}]{1-k^2}}$$
Third K formula
To get $x_3$, we take the tangent of twice the arctangent of the modulus.
Also this equation leads to the sn-value of the third of $K$:
These equations lead to the other values of the Jacobi functions:
Fifth K formula
The following equation has the following solution:
To get the sn-values, we put the solution $x$ into the following expressions:
Relations between squares of the functions can be derived from two basic relationships (arguments $(u,m)$ suppressed):
$$\operatorname{cn}^2+\operatorname{sn}^2=1$$
$$\operatorname{cn}^2+m'\operatorname{sn}^2=\operatorname{dn}^2$$
where $m+m'=1$. Multiplying by any function of the form $\operatorname{nq}$ yields more general equations:
$$\operatorname{cq}^2+\operatorname{sq}^2=\operatorname{nq}^2$$
$$\operatorname{cq}^2+m'\operatorname{sq}^2=\operatorname{dq}^2$$
With $q=d$, these correspond trigonometrically to the equations for the unit circle ($x^2+y^2=r^2$) and the unit ellipse ($x^2+m'y^2=1$), with $x=\operatorname{cd}$, $y=\operatorname{sd}$ and $r=\operatorname{nd}$. Using the multiplication rule, other relationships may be derived. For example:
$$-\operatorname{dn}^2+m'=-m\operatorname{cn}^2=m\operatorname{sn}^2-m$$
$$-m'\operatorname{nd}^2+m'=-mm'\operatorname{sd}^2=m\operatorname{cd}^2-m$$
$$m'\operatorname{sc}^2+m'=m'\operatorname{nc}^2=\operatorname{dc}^2-m$$
$$\operatorname{cs}^2+m'=\operatorname{ds}^2=\operatorname{ns}^2-m$$
The functions satisfy the two square relations (dependence on $m$ suppressed)
$$\operatorname{cn}^2(u)+\operatorname{sn}^2(u)=1,$$
$$\operatorname{dn}^2(u)+m\operatorname{sn}^2(u)=1.$$
From this we see that $(\operatorname{cn},\operatorname{sn},\operatorname{dn})$ parametrizes an elliptic curve which is the intersection of the two quadrics defined by the above two equations. We now may define a group law for points on this curve by the addition formulas for the Jacobi functions:[3]
$$\operatorname{cn}(x+y)=\frac{\operatorname{cn}(x)\operatorname{cn}(y)-\operatorname{sn}(x)\operatorname{sn}(y)\operatorname{dn}(x)\operatorname{dn}(y)}{1-m\operatorname{sn}^2(x)\operatorname{sn}^2(y)},$$
$$\operatorname{sn}(x+y)=\frac{\operatorname{sn}(x)\operatorname{cn}(y)\operatorname{dn}(y)+\operatorname{sn}(y)\operatorname{cn}(x)\operatorname{dn}(x)}{1-m\operatorname{sn}^2(x)\operatorname{sn}^2(y)},$$
$$\operatorname{dn}(x+y)=\frac{\operatorname{dn}(x)\operatorname{dn}(y)-m\operatorname{sn}(x)\operatorname{sn}(y)\operatorname{cn}(x)\operatorname{cn}(y)}{1-m\operatorname{sn}^2(x)\operatorname{sn}^2(y)}.$$
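The addition formulas are easy to sanity-check numerically against SciPy's ellipj; a brief sketch:

```python
import numpy as np
from scipy.special import ellipj

m, x, y = 0.3, 0.7, 1.1
snx, cnx, dnx, _ = ellipj(x, m)
sny, cny, dny, _ = ellipj(y, m)
sn_s, cn_s, dn_s, _ = ellipj(x + y, m)

denom = 1 - m * snx**2 * sny**2
print(np.isclose(sn_s, (snx * cny * dny + sny * cnx * dnx) / denom))    # True
print(np.isclose(cn_s, (cnx * cny - snx * sny * dnx * dny) / denom))    # True
print(np.isclose(dn_s, (dnx * dny - m * snx * sny * cnx * cny) / denom))  # True
```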
The Jacobi epsilon and zn functions satisfy a quasi-addition theorem:
$${\mathcal{E}}(x+y,m)={\mathcal{E}}(x,m)+{\mathcal{E}}(y,m)-m\operatorname{sn}(x,m)\operatorname{sn}(y,m)\operatorname{sn}(x+y,m),$$
$$\operatorname{zn}(x+y,m)=\operatorname{zn}(x,m)+\operatorname{zn}(y,m)-m\operatorname{sn}(x,m)\operatorname{sn}(y,m)\operatorname{sn}(x+y,m).$$
Double-angle formulae can be easily derived from the above equations by setting $x=y$.[3] Half-angle formulae[11][3] are all of the form
$$\operatorname{pq}({\tfrac{1}{2}}u,m)^2=f_{\mathrm{p}}/f_{\mathrm{q}}$$
where:
$$f_{\mathrm{c}}=\operatorname{cn}(u,m)+\operatorname{dn}(u,m)$$
$$f_{\mathrm{s}}=1-\operatorname{cn}(u,m)$$
$$f_{\mathrm{n}}=1+\operatorname{dn}(u,m)$$
$$f_{\mathrm{d}}=(1+\operatorname{dn}(u,m))-m(1-\operatorname{cn}(u,m))$$
The derivatives of the three basic Jacobi elliptic functions (with respect to the first variable, with $m$ fixed) are:
$$\frac{\mathrm{d}}{\mathrm{d}z}\operatorname{sn}(z)=\operatorname{cn}(z)\operatorname{dn}(z),$$
$$\frac{\mathrm{d}}{\mathrm{d}z}\operatorname{cn}(z)=-\operatorname{sn}(z)\operatorname{dn}(z),$$
$$\frac{\mathrm{d}}{\mathrm{d}z}\operatorname{dn}(z)=-m\operatorname{sn}(z)\operatorname{cn}(z).$$
These can be used to derive the derivatives of all other functions, as shown in the table below (arguments $(u,m)$ suppressed):
Also
With the addition theorems above, and for a given $m$ with $0<m<1$, the major functions are therefore solutions to the following nonlinear ordinary differential equations:
The function which exactly solves the pendulum differential equation,
with initial angle $\theta_0$ and zero initial angular velocity is
where $m=\sin(\theta_0/2)^2$, $c>0$ and $t\in\mathbb{R}$.
With the first argument $z$ fixed, the derivatives with respect to the second variable $m$ are as follows:
Let the nome be $q=\exp(-\pi K'(m)/K(m))=e^{i\pi\tau}$, $\operatorname{Im}(\tau)>0$, $m=k^2$, and let $v=\pi u/(2K(m))$. Then the functions have expansions as Lambert series
when $\left|\operatorname{Im}(u/K)\right|<\operatorname{Im}(iK'/K)$.
Bivariate power series expansions have been published by Schett.[15]
The theta function ratios provide an efficient way of computing the Jacobi elliptic functions. There is an alternative method, based on the arithmetic–geometric mean and Landen's transformations:[6]
Initialize
where $0<m<1$. Define
where $n\geq 1$. Then define
for $u\in\mathbb{R}$ and a fixed $N\in\mathbb{N}$. If
for $n\geq 1$, then
as $N\to\infty$. This is notable for its rapid convergence. It is then trivial to compute all Jacobi elliptic functions from the Jacobi amplitude $\operatorname{am}$ on the real line.[note 2]
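The recurrences elided above are the standard descending-AGM scheme (initialize $a_0=1$, $b_0=\sqrt{1-m}$, $c_0=\sqrt{m}$; iterate arithmetic and geometric means; then unwind the amplitude); a Python sketch under that standard formulation:

```python
import math

def jacobi_am(u, m, N=10):
    """Jacobi amplitude am(u, m) for real u and 0 < m < 1 via the AGM."""
    a, b, c = [1.0], [math.sqrt(1.0 - m)], [math.sqrt(m)]
    for n in range(N):
        a.append((a[n] + b[n]) / 2.0)     # arithmetic mean
        b.append(math.sqrt(a[n] * b[n]))  # geometric mean
        c.append((a[n] - b[n]) / 2.0)     # c_n -> 0 quadratically
    phi = (2 ** N) * a[N] * u             # phi_N
    for n in range(N, 0, -1):             # unwind: phi_{n-1} from phi_n
        phi = (phi + math.asin(c[n] / a[n] * math.sin(phi))) / 2.0
    return phi

# All twelve functions follow from the amplitude, e.g. the three major ones:
u, m = 1.3, 0.5
phi = jacobi_am(u, m)
sn, cn = math.sin(phi), math.cos(phi)
dn = math.sqrt(1.0 - m * sn * sn)
print(sn, cn, dn)
```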
In conjunction with the addition theorems for elliptic functions (which hold for complex numbers in general) and the Jacobi transformations, the method of computation described above can be used to compute all Jacobi elliptic functions in the whole complex plane.
Another method of fast computation of the Jacobi elliptic functions via the arithmetic–geometric mean, avoiding the computation of the Jacobi amplitude, is due to Herbert E. Salzer:[16]
Let
Set
Then
asN→∞{\displaystyle N\to \infty }.
Yet another method for rapidly converging computation of the Jacobi elliptic sine function, found in the literature, is shown below.[17]
Let:
Then set:
Then:
The Jacobi elliptic functions can be expanded in terms of the hyperbolic functions. When $m$ is close to unity, such that $m'^2$ and higher powers of $m'$ can be neglected, we have:[18][19]
For the Jacobi amplitude,
$$\operatorname{am}(u,m)\approx\operatorname{gd}(u)+\frac{1}{4}m'(\sinh(u)\cosh(u)-u)\operatorname{sech}(u).$$
Assume real numbers $a,p$ with $0<a<p$ and the nome $q=e^{\pi i\tau}$, $\operatorname{Im}(\tau)>0$, with elliptic modulus $k(\tau)={\sqrt{1-k'(\tau)^2}}=(\vartheta_{10}(0;\tau)/\vartheta_{00}(0;\tau))^2$. If $K[\tau]=K(k(\tau))$, where $K(x)=\pi/2\cdot{}_2F_1(1/2,1/2;1;x^2)$ is the complete elliptic integral of the first kind, then the following continued fraction expansion holds[20]
Known continued fractions involving $\operatorname{sn}(t)$, $\operatorname{cn}(t)$ and $\operatorname{dn}(t)$ with elliptic modulus $k$ are:
For $z\in\mathbb{C}$, $|k|<1$:[21]: 374
For $z\in\mathbb{C}\setminus\{0\}$, $|k|<1$:[21]: 375
For $z\in\mathbb{C}\setminus\{0\}$, $|k|<1$:[22]: 220
For $z\in\mathbb{C}\setminus\{0\}$, $|k|<1$:[21]: 374
For $z\in\mathbb{C}$, $|k|<1$:[21]: 375
The inverses of the Jacobi elliptic functions can be defined similarly to the inverse trigonometric functions; if $x=\operatorname{sn}(\xi,m)$, then $\xi=\operatorname{arcsn}(x,m)$. They can be represented as elliptic integrals,[23][24][25] and power series representations have been found.[26][3]
The Peirce quincuncial projection is a map projection based on Jacobian elliptic functions.
Source: https://en.wikipedia.org/wiki/Jacobi_elliptic_functions
Poly1305 is a universal hash family designed by Daniel J. Bernstein in 2002 for use in cryptography.[1][2]
As with any universal hash family, Poly1305 can be used as a one-time message authentication code to authenticate a single message using a secret key shared between sender and recipient,[3] similar to the way that a one-time pad can be used to conceal the content of a single message using a secret key shared between sender and recipient.
Originally Poly1305 was proposed as part of Poly1305-AES,[2] a Carter–Wegman authenticator[4][5][1] that combines the Poly1305 hash with AES-128 to authenticate many messages using a single short key and distinct message numbers.
Poly1305 was later applied with a single-use key generated for each message using XSalsa20 in the NaCl crypto_secretbox_xsalsa20poly1305 authenticated cipher,[6] and then using ChaCha in the ChaCha20-Poly1305 authenticated cipher[7][8][1] deployed in TLS on the internet.[9]
Poly1305 takes a 16-byte secret key $r$ and an $L$-byte message $m$ and returns a 16-byte hash $\operatorname{Poly1305}_r(m)$.
To do this, Poly1305 breaks the message $m$ into 16-byte chunks, interprets them as the coefficients of a polynomial (as described below), evaluates that polynomial at the point $r$ modulo the prime $2^{130}-5$, and reduces the result modulo $2^{128}$ to a 16-byte little-endian hash.[2][1]
The coefficients $c_i$ of the polynomial $c_1r^q+c_2r^{q-1}+\cdots+c_qr$, where $q=\lceil L/16\rceil$, are:
$$c_i=m[16i-16]+2^8m[16i-15]+2^{16}m[16i-14]+\cdots+2^{120}m[16i-1]+2^{128},$$
with the exception that, if $L\not\equiv 0\pmod{16}$, then:
$$c_q=m[16q-16]+2^8m[16q-15]+\cdots+2^{8(L\bmod 16)-8}m[L-1]+2^{8(L\bmod 16)}.$$
The secret key $r=(r[0],r[1],r[2],\dotsc,r[15])$ is restricted to have the bytes $r[3],r[7],r[11],r[15]\in\{0,1,2,\dotsc,15\}$, i.e., to have their top four bits clear; and to have the bytes $r[4],r[8],r[12]\in\{0,4,8,\dotsc,252\}$, i.e., to have their bottom two bits clear.
Thus there are $2^{106}$ distinct possible values of $r$.
If $s$ is a secret 16-byte string interpreted as a little-endian integer, then
$$a:=\bigl(\operatorname{Poly1305}_r(m)+s\bigr)\bmod 2^{128}$$
is called the authenticator for the message $m$.
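Putting the pieces above together, here is a compact Python sketch of the one-time authenticator (our own function names; it evaluates the polynomial by Horner's rule at $r$ modulo the prime $2^{130}-5$, then adds $s$ modulo $2^{128}$):

```python
P = (1 << 130) - 5  # the prime 2^130 - 5

def clamp(r_bytes: bytes) -> int:
    """Interpret r little-endian and clear the bits required of a Poly1305 key."""
    return int.from_bytes(r_bytes, "little") & 0x0FFFFFFC0FFFFFFC0FFFFFFC0FFFFFFF

def poly1305_hash(r_bytes: bytes, msg: bytes) -> int:
    r, h = clamp(r_bytes), 0
    for i in range(0, len(msg), 16):
        chunk = msg[i:i + 16]
        # c_i: the chunk as a little-endian integer plus 2^(8 * len(chunk)),
        # matching the coefficient definition above (2^128 for full chunks).
        c = int.from_bytes(chunk, "little") + (1 << (8 * len(chunk)))
        h = (h + c) * r % P  # Horner's rule: ends as c_1 r^q + ... + c_q r mod P
    return h % (1 << 128)

def poly1305_mac(r_bytes: bytes, s_bytes: bytes, msg: bytes) -> bytes:
    s = int.from_bytes(s_bytes, "little")
    a = (poly1305_hash(r_bytes, msg) + s) % (1 << 128)  # the authenticator
    return a.to_bytes(16, "little")
```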
If a sender and recipient share the 32-byte secret key $(r,s)$ in advance, chosen uniformly at random, then the sender can transmit an authenticated message $(a,m)$.
When the recipient receives an alleged authenticated message $(a',m')$ (which may have been modified in transit by an adversary), they can verify its authenticity by testing whether
$$a'\mathrel{\stackrel{?}{=}}\bigl(\operatorname{Poly1305}_r(m')+s\bigr)\bmod 2^{128}.$$
Without knowledge of $(r,s)$, the adversary has probability $8\lceil L/16\rceil/2^{106}$ of finding any $(a',m')\neq(a,m)$ that will pass verification.
However, the same key $(r,s)$ must not be reused for two messages. If the adversary learns
$$a_1=\bigl(\operatorname{Poly1305}_r(m_1)+s\bigr)\bmod 2^{128},$$
$$a_2=\bigl(\operatorname{Poly1305}_r(m_2)+s\bigr)\bmod 2^{128}$$
for $m_1\neq m_2$, they can subtract
$$a_1-a_2\equiv\operatorname{Poly1305}_r(m_1)-\operatorname{Poly1305}_r(m_2)\pmod{2^{128}}$$
and find a root of the resulting polynomial to recover a small list of candidates for the secret evaluation point $r$, and from that the secret pad $s$.
The adversary can then use this to forge additional messages with high probability.
The original Poly1305-AES proposal[2] uses the Carter–Wegman structure[4][5] to authenticate many messages by taking $a_i:=H_r(m_i)+p_i$ to be the authenticator on the $i$-th message $m_i$, where $H_r$ is a universal hash family and $p_i$ is an independent uniform random hash value that serves as a one-time pad to conceal it.
Poly1305-AES uses AES-128 to generate $p_i:=\operatorname{AES}_k(i)$, where $i$ is encoded as a 16-byte little-endian integer.
Specifically, a Poly1305-AES key is a 32-byte pair $(r,k)$ of a 16-byte evaluation point $r$, as above, and a 16-byte AES key $k$.
The Poly1305-AES authenticator on a message $m_i$ is
$$a_i:=\bigl(\operatorname{Poly1305}_r(m_i)+\operatorname{AES}_k(i)\bigr)\bmod 2^{128},$$
where 16-byte strings and integers are identified by little-endian encoding.
Note that $r$ is reused between messages.
Without knowledge of $(r,k)$, the adversary has low probability of forging any authenticated messages that the recipient will accept as genuine.
Suppose the adversary sees $C$ authenticated messages and attempts $D$ forgeries, and can distinguish $\operatorname{AES}_k$ from a uniform random permutation with advantage at most $\delta$.
(Unless AES is broken, $\delta$ is very small.)
The adversary's chance of success at a single forgery is at most:
$$\delta+\frac{(1-C/2^{128})^{-(C+1)/2}\cdot 8D\lceil L/16\rceil}{2^{106}}.$$
The message number $i$ must never be repeated with the same key $(r,k)$.
If it is, the adversary can recover a small list of candidates for $r$ and $\operatorname{AES}_k(i)$, as with the one-time authenticator, and use that to forge messages.
The NaCl crypto_secretbox_xsalsa20poly1305 authenticated cipher uses a message number $i$ with the XSalsa20 stream cipher to generate a per-message key stream, the first 32 bytes of which are taken as a one-time Poly1305 key $(r_i,s_i)$ and the rest of which is used for encrypting the message.
It then uses Poly1305 as a one-time authenticator for the ciphertext of the message.[6] ChaCha20-Poly1305 does the same but with ChaCha instead of XSalsa20.[8] XChaCha20-Poly1305, using XChaCha20 instead of XSalsa20, has also been described.[10]
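In practice these constructions are consumed through an AEAD interface rather than by calling Poly1305 directly; for instance, with the Python cryptography package (one real library among several):

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()  # 32-byte key
nonce = os.urandom(12)                 # 96-bit nonce; must never repeat under one key
aad = b"associated data (authenticated but not encrypted)"

aead = ChaCha20Poly1305(key)
ct = aead.encrypt(nonce, b"attack at dawn", aad)  # ciphertext || 16-byte Poly1305 tag
pt = aead.decrypt(nonce, ct, aad)                 # raises InvalidTag if anything was altered
assert pt == b"attack at dawn"
```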
The security of Poly1305 and its derivatives against forgery follows from its bounded difference probability as a universal hash family:
If $m_1$ and $m_2$ are messages of up to $L$ bytes each, and $d$ is any 16-byte string interpreted as a little-endian integer, then
$$\Pr[\operatorname{Poly1305}_r(m_1)-\operatorname{Poly1305}_r(m_2)\equiv d\pmod{2^{128}}]\leq\frac{8\lceil L/16\rceil}{2^{106}},$$
where $r$ is a uniform random Poly1305 key.[2]: Theorem 3.3, p. 8
This property is sometimes called $\epsilon$-almost-Δ-universality over $\mathbb{Z}/2^{128}\mathbb{Z}$, or $\epsilon$-AΔU,[11] where $\epsilon=8\lceil L/16\rceil/2^{106}$ in this case.
With a one-time authenticator a = (Poly1305_r(m) + s) mod 2^128, the adversary's success probability for any forgery attempt (a′, m′) on a message m′ of up to L bytes is:
Pr[a′ = Poly1305_r(m′) + s ∣ a = Poly1305_r(m) + s]
 = Pr[a′ = Poly1305_r(m′) + a − Poly1305_r(m)]
 = Pr[Poly1305_r(m′) − Poly1305_r(m) = a′ − a]
 ≤ 8⌈L/16⌉ / 2^106.
Here arithmetic inside the Pr[⋯] is taken in Z/2^128 Z for simplicity.
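For concreteness, here is a minimal pure-Python sketch of the one-time authenticator a = (Poly1305_r(m) + s) mod 2^128 described above, following the little-endian conventions of this section. It is illustrative rather than a vetted implementation; the function names are ours, and in Poly1305-AES the pad s would be AES_k(i):

```python
# Minimal sketch of the one-time Poly1305 authenticator (illustrative only).
# r and s are 16-byte strings; in Poly1305-AES, s would be AES_k(i).

P1305 = (1 << 130) - 5  # the prime 2^130 - 5 underlying Poly1305

def clamp(r: bytes) -> int:
    # Poly1305 requires certain bits of r to be zero ("clamping").
    return int.from_bytes(r, "little") & 0x0ffffffc0ffffffc0ffffffc0fffffff

def poly1305_onetime(m: bytes, r: bytes, s: bytes) -> bytes:
    acc = 0
    rr = clamp(r)
    for i in range(0, len(m), 16):
        # Each (possibly short) 16-byte chunk gets a 1 byte appended, so
        # chunks of different lengths encode to distinct integers.
        c = int.from_bytes(m[i:i + 16] + b"\x01", "little")
        acc = (acc + c) * rr % P1305
    tag = (acc + int.from_bytes(s, "little")) % (1 << 128)
    return tag.to_bytes(16, "little")
```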
For NaCl crypto_secretbox_xsalsa20poly1305 and ChaCha20-Poly1305, the adversary's success probability at forgery is the same for each message independently as for a one-time authenticator, plus the adversary's distinguishing advantage δ against XSalsa20 or ChaCha as pseudorandom functions used to generate the per-message key.
In other words, the probability that the adversary succeeds at a single forgery after D attempts on messages of up to L bytes is at most:
δ + 8D⌈L/16⌉ / 2^106.
The security of Poly1305-AES against forgery follows from the Carter–Wegman–Shoup structure, which instantiates a Carter–Wegman authenticator with a permutation to generate the per-message pad.[12] If an adversary sees C authenticated messages and attempts D forgeries of messages of up to L bytes, and if the adversary has distinguishing advantage at most δ against AES-128 as a pseudorandom permutation, then the probability that the adversary succeeds at any one of the D forgeries is at most:[2]
δ + (1 − C/2^128)^(−(C+1)/2) · 8D⌈L/16⌉ / 2^106.
For instance, assume that messages are packets of up to 1024 bytes; that the attacker sees 2^64 messages authenticated under a Poly1305-AES key; that the attacker attempts a whopping 2^75 forgeries; and that the attacker cannot break AES with probability above δ. Then, with probability at least 0.999999 − δ, all 2^75 forgeries are rejected.
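The arithmetic behind this example can be checked with a quick numerical sketch (plain Python; the variable names are ours). The factor (1 − C/2^128)^(−(C+1)/2) is evaluated via log1p/exp because 1 − C/2^128 rounds to 1.0 in double precision:

```python
import math

C = 2**64   # authenticated messages seen
D = 2**75   # forgery attempts
L = 1024    # maximum message length in bytes

# (1 - C/2^128)^(-(C+1)/2), computed as exp(-(C+1)/2 * log(1 - C/2^128))
growth = math.exp(-((C + 1) / 2) * math.log1p(-C / 2.0**128))

bound = growth * 8 * D * math.ceil(L / 16) / 2.0**106
print(growth)  # ~1.65 (about e^0.5)
print(bound)   # ~3.9e-07, consistent with "at least 0.999999 - delta"
```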
Poly1305-AES can be computed at high speed on various CPUs: for an n-byte message, no more than 3.1n + 780 Athlon cycles are needed,[2] for example.
The author has released optimized source code for Athlon, Pentium Pro/II/III/M, PowerPC, and UltraSPARC, in addition to non-optimized reference implementations in C and C++ as public domain software.[13]
Poly1305 is supported by a number of cryptography libraries.
|
https://en.wikipedia.org/wiki/Poly1305-AES
|
The usage of a language is the ways in which its written and spoken variations are routinely employed by its speakers; that is, it refers to "the collective habits of a language's native speakers",[1] as opposed to idealized models of how a language works (or should work) in the abstract. For instance, Fowler characterized usage as "the way in which a word or phrase is normally and correctly used" and as the "points of grammar, syntax, style, and the choice of words."[2] In everyday usage, language is used differently, depending on the situation and individual.[3] Individual language users can shape language structures and language usage based on their community.[4]
In the descriptive tradition of language analysis, by way of contrast, "correct" tends to mean functionally adequate for the purposes of the speaker or writer using it, and adequately idiomatic to be accepted by the listener or reader; usage is also, however, a concern for the prescriptive tradition, for which "correctness" is a matter of arbitrating style.[5][6]
Common usage may be used as one of the criteria for laying out prescriptive norms for codified standard language usage.[7]
Everyday language users, including editors and writers, look to dictionaries, style guides, usage guides, and other published authoritative works to help inform their language decisions. This takes place because of the perception that Standard English is determined by language authorities.[8] For many language users, the dictionary is the source of correct language use, as far as accurate vocabulary and spelling go.[9] Modern dictionaries are not generally prescriptive, but they often include "usage notes" which may describe words as "formal", "informal", "slang", and so on.[10] "Despite occasional usage notes, lexicographers generally disclaim any intent to guide writers and editors on the thorny points of English usage."[1]
According to Jeremy Butterfield, "The first person we know of who made usage refer to language was Daniel Defoe, at the end of the seventeenth century". Defoe proposed the creation of a language society of 36 individuals who would set prescriptive language rules for the approximately six million English speakers.[5]
The Latin equivalent usus was a crucial term in the research of Danish linguists Otto Jespersen and Louis Hjelmslev.[11] They used the term to designate usage that has widespread or significant acceptance among speakers of a language, regardless of its conformity to the sanctioned standard language norms.[12]
|
https://en.wikipedia.org/wiki/Usage_(language)
|
Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to 10 metres (33 ft). It employs UHF radio waves in the ISM bands, from 2.402 GHz to 2.48 GHz.[3] It is mainly used as an alternative to wired connections, to exchange files between nearby portable devices, to connect cell phones and music players with wireless headphones, wireless speakers, HiFi systems and car audio, and for wireless transmission between TVs and soundbars.
Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1 but no longer maintains the standard. The Bluetooth SIG oversees the development of the specification, manages the qualification program, and protects the trademarks.[4] A manufacturer must meet Bluetooth SIG standards to market a product as a Bluetooth device.[5] A network of patents applies to the technology, which is licensed to individual qualifying devices. As of 2021, 4.7 billion Bluetooth integrated circuit chips are shipped annually.[6] Bluetooth was first demonstrated in space in 2024, an early test envisioned to enhance IoT capabilities.[7]
The name "Bluetooth" was proposed in 1997 by Jim Kardach ofIntel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales fromFrans G. Bengtsson'sThe Long Ships, a historical novel about Vikings and the 10th-century Danish kingHarald Bluetooth. Upon discovering a picture of therunestone of Harald Bluetooth[8]in the bookA History of the VikingsbyGwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.[9][10][11]
According to Bluetooth's official website,
Bluetooth was only intended as a placeholder until marketing could come up with something really cool.
Later, when it came time to select a serious name, Bluetooth was to be replaced with either RadioWire or PAN (Personal Area Networking). PAN was the front runner, but an exhaustive search discovered it already had tens of thousands of hits throughout the internet.
A full trademark search on RadioWire couldn't be completed in time for launch, making Bluetooth the only choice. The name caught on fast and before it could be changed, it spread throughout the industry, becoming synonymous with short-range wireless technology.[12]
Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united the disparate Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.[13]
The Bluetooth logo is a bind rune merging the Younger Futhark runes ᚼ (Hagall) and ᛒ (Bjarkan), Harald's initials.[14][15]
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO at Ericsson Mobile in Lund, Sweden. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, SE 8902098-6, issued 12 June 1989, and SE 9202239, issued 24 July 1992. Nils Rydbeck tasked Tord Wingren with specifying and Dutchman Jaap Haartsen and Sven Mattisson with developing.[16] Both were working for Ericsson in Lund.[17] Principal design and development began in 1994 and by 1997 the team had a workable solution.[18] From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.[19][20][21][22]
In 1997, Adalio Sanchez, then head of IBM ThinkPad product R&D, approached Nils Rydbeck about collaborating on integrating a mobile phone into a ThinkPad notebook. The two assigned engineers from Ericsson and IBM studied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal.
Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join, and then Intel also recruited Toshiba and Nokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM.
The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of Show Technology Award" at COMDEX. The first Bluetooth mobile phone was the unreleased prototype Ericsson T36, though it was the revised Ericsson model T39 that actually made it to store shelves in June 2001. However, Ericsson released the R520m in the first quarter of 2001,[23] making the R520m the first commercially available Bluetooth phone. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001, which was the first notebook with integrated Bluetooth.
Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices.
Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, since Wi-Fi was not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations with Motorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle[24] ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices, which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time.
In 2012, Jaap Haartsen was nominated by the European Patent Office for the European Inventor Award.[18]
Bluetooth operates at frequencies between 2.402 and 2.480 GHz, or 2.400 and 2.4835 GHz, including guard bands 2 MHz wide at the bottom end and 3.5 MHz wide at the top.[25] This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4 GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1 MHz. It usually performs 1600 hops per second, with adaptive frequency-hopping (AFH) enabled.[25] Bluetooth Low Energy uses 2 MHz spacing, which accommodates 40 channels.[26]
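As a concrete illustration of this channelization, the mapping from channel index to centre frequency can be written down directly (a sketch; the function names are ours, not from the specification):

```python
def classic_channel_mhz(k: int) -> int:
    """BR/EDR: 79 channels, 1 MHz apart, starting at 2402 MHz."""
    assert 0 <= k < 79
    return 2402 + k

def ble_channel_mhz(k: int) -> int:
    """Bluetooth Low Energy: 40 channels at 2 MHz spacing."""
    assert 0 <= k < 40
    return 2402 + 2 * k

print(classic_channel_mhz(0), classic_channel_mhz(78))  # 2402 2480
print(ble_channel_mhz(0), ble_channel_mhz(39))          # 2402 2480
```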
Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1 Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK (EDR2) and 8-DPSK (EDR3) schemes, transferring 2 and 3 Mbit/s respectively.
In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSK modulation on 4 MHz channels with forward error correction (FEC).[27]
Bluetooth is a packet-based protocol with a master/slave architecture. One master may communicate with up to seven slaves in a piconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5 μs; two clock ticks make up a slot of 625 μs, and two slots make up a slot pair of 1250 μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases the master's transmission begins in even slots and the slave's in odd slots.
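The slot arithmetic just described is simple enough to state in code; the following is a toy model only, not a real baseband scheduler:

```python
# Toy model of BR/EDR slot timing: 625 us slots (two 312.5 us clock ticks);
# in the single-slot case the master transmits in even slots, the slave in odd.
SLOT_US = 625.0

def slot_start_us(n: int) -> float:
    return n * SLOT_US

def single_slot_transmitter(n: int) -> str:
    return "master" if n % 2 == 0 else "slave"

for n in range(4):
    print(n, slot_start_us(n), single_slot_transmitter(n))
# 0 0.0 master / 1 625.0 slave / 2 1250.0 master / 3 1875.0 slave
```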
The above excludes Bluetooth Low Energy, introduced in the 4.0 specification,[28] which uses the same spectrum but somewhat differently.
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.[29]
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-cost transceiver microchips in each device.[30] Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, a quasi-optical wireless path must be viable.[31]
Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having larger range.[2] The actual range of a given link depends on several qualities of both communicating devices and the air and obstacles in between. The primary attributes affecting range are the data rate, protocol (Bluetooth Classic or Bluetooth Low Energy), transmission power, receiver sensitivity, and the relative orientations and gains of both antennas.[32]
The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products.
Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device.[33]In general, however, Class 1 devices have sensitivities similar to those of Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.[34][35][36]
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles.
Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and control the communication from the start. Adherence to profiles saves the time of transmitting the parameters anew before the bi-directional link becomes effective. There is a wide range of Bluetooth profiles describing many different types of applications or use cases for devices.[37]
Bluetooth exists in numerous products such as telephones, speakers, tablets, media players, robotics systems, laptops, and game console equipment, as well as some high-definition headsets, modems, hearing aids[53] and even watches.[54] Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth protocols simplify the discovery and setup of services between devices.[55] Bluetooth devices can advertise all of the services they provide.[56] This makes using services easier, because more of the security, network address and permission configuration can be automated than with many other network types.[55]
A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While some desktop computers and most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle".
Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.[57]
For Microsoft platforms, Windows XP Service Pack 2 and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR.[58] Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft.[59] Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR.[58] Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).[58] The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP, DUN, HID, HCRP. The Windows XP stack can be replaced by a third-party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced.[58] Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.
Apple products have worked with Bluetooth since Mac OS X v10.2, which was released in 2002.[60]
Linux has two popular Bluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm.[61] Fluoride, earlier known as Bluedroid, is included in Android OS and was originally developed by Broadcom.[62] There is also the Affix stack, developed by Nokia. It was once popular, but has not been updated since 2005.[63]
FreeBSD has included Bluetooth since its v5.0 release, implemented through netgraph.[64][65]
NetBSD has included Bluetooth since its v4.0 release.[66][67] Its Bluetooth stack was ported to OpenBSD as well; however, OpenBSD later removed it as unmaintained.[68][69]
DragonFly BSD has had NetBSD's Bluetooth implementation since 1.11 (2008).[70][71] A netgraph-based implementation from FreeBSD has also been available in the tree, possibly disabled until 2014-11-15, and may require more work.[72][73]
The specifications were formalized by the Bluetooth Special Interest Group (SIG) and formally announced on 20 May 1998.[74] In 2014 it had a membership of over 30,000 companies worldwide.[75] It was established by Ericsson, IBM, Intel, Nokia and Toshiba, and later joined by many other companies.
All versions of the Bluetooth standards are backward-compatible with all earlier versions.[76]
The Bluetooth Core Specification Working Group (CSWG) produces mainly four kinds of specifications:
Major enhancements include:
This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for faster data transfer. The data rate of EDR is 3 Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1 Mbit/s.[79] EDR uses a combination of GFSK and phase-shift keying modulation (PSK) with two variants, π/4-DQPSK and 8-DPSK.[81] EDR can provide a lower power consumption through a reduced duty cycle.
The specification is published as Bluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.[82]
Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.[81]
The headline feature of v2.1 is secure simple pairing (SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.[83]
Version 2.1 allows various other improvements, including extended inquiry response (EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode.
Version 3.0 + HS of the Bluetooth Core Specification[81] was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated 802.11 link.
The main new feature is AMP (Alternative MAC/PHY), the addition of 802.11 as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0[84] or earlier Core Specification Addendum 1.[85]
The high-speed (AMP) feature of Bluetooth v3.0 was originally intended for UWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.[86]
On 16 March 2009, the WiMedia Alliance announced it was entering into technology transfer agreements for the WiMedia Ultra-wideband (UWB) specifications. WiMedia transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG), Wireless USB Promoter Group and the USB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.[87][88][89][90][91]
In October 2009, the Bluetooth Special Interest Group suspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small but significant number of former WiMedia members had not and would not sign up to the necessary agreements for the IP transfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer-term roadmap.[92][93][94]
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which was adopted as of 30 June 2010. It includes Classic Bluetooth, Bluetooth high speed and Bluetooth Low Energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.
Bluetooth Low Energy, previously known as Wibree,[95] is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation, dual-mode and single-mode, as well as enhanced past versions.[96] The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.[97]
Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. In terms of lengthening the battery life of Bluetooth devices, BLE represents a significant progression.
Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.
General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well as the Generic Attribute Profile (GATT) and Security Manager (SM) services with AES encryption.
Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer.
Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012.
Core Specification Addendum 4 has an adoption date of 12 February 2013.
The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE and bulk data exchange rates, and they aid developer innovation by allowing devices to support multiple roles simultaneously.[106]
New features of this specification include:
Some features were already available in a Core Specification Addendum (CSA) before the release of v4.1.
Released on 2 December 2014,[108] it introduces features for the Internet of things.
The major areas of improvement are:
Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.[109][110]
The Bluetooth SIG released Bluetooth 5 on 6 December 2016.[111] Its new features are mainly focused on new Internet of Things technology. Sony was the first to announce Bluetooth 5.0 support with its Xperia XZ Premium in February 2017 during the Mobile World Congress 2017.[112] The Samsung Galaxy S8 launched with Bluetooth 5 support in April 2017. In September 2017, the iPhone 8, 8 Plus and iPhone X launched with Bluetooth 5 support as well. Apple also integrated Bluetooth 5 in its new HomePod offering released on 9 February 2018.[113] Marketing drops the point number, so that it is just "Bluetooth 5" (unlike Bluetooth 4.0);[114] the change is for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market."
Bluetooth 5 provides, for BLE, options that can double the data rate (2 Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 increases the capacity of connectionless services, such as location-relevant navigation,[115] of low-energy Bluetooth connections.[116][117][118]
The major areas of improvement are:
Features added in CSA5 – integrated in v5.0:
The following features were removed in this version of the specification:
The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.[120]
The major areas of improvement are:
Features added in Core Specification Addendum (CSA) 6 – integrated in v5.1:
The following features were removed in this version of the specification:
On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification version 5.2. The new specification adds new features:[121]
The Bluetooth SIG published the Bluetooth Core Specification version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are:[128]
The following features were removed in this version of the specification:
The Bluetooth SIG released the Bluetooth Core Specification version 5.4 on 7 February 2023. This new version adds the following features:[129]
The Bluetooth SIG released the Bluetooth Core Specification version 6.0 on 27 August 2024.[130] This version adds the following features:[131]
The Bluetooth SIG released the Bluetooth Core Specification version 6.1 on 7 May 2025.[132]
To extend the compatibility of Bluetooth devices, devices that adhere to the standard use an interface called HCI (Host Controller Interface) between the host and the controller.
High-level protocols such as SDP (the protocol used to find other Bluetooth devices within communication range, also responsible for detecting the function of devices in range), RFCOMM (the protocol used to emulate serial port connections) and TCS (the telephony control protocol) interact with the baseband controller through the L2CAP (Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of packets.
Logically, the hardware that makes up a Bluetooth device consists of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller and to interface with the host device; some functions may be delegated to hardware. The Link Controller is responsible for baseband processing and the management of ARQ and physical-layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g. SBC) and data encryption. The CPU of the device is responsible for handling the host device's Bluetooth-related instructions, in order to simplify its operation. To do this, the CPU runs software called the Link Manager, which communicates with other devices through the LMP protocol.
A Bluetooth device is a short-range wireless device. Bluetooth devices are fabricated on RF CMOS integrated circuit (RF circuit) chips.[133][134]
Bluetooth is defined as a layered protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols.[135] Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use these protocols: HCI and RFCOMM.[citation needed]
The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other link managers and communicates with them via the Link Manager Protocol (LMP). To perform its function as a service provider, the LM uses the services included in the Link Controller (LC).
The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services:
The Host Controller Interface provides a command interface between the controller and the host.
The Logical Link Control and Adaptation Protocol (L2CAP) is used to multiplex multiple logical connections between two devices using different higher-level protocols. It provides segmentation and reassembly of on-air packets.
In Basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU, and 48 bytes as the minimum mandatory supported MTU.
In Retransmission and Flow Control modes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks.
Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. These modes effectively deprecate the original Retransmission and Flow Control modes:
Reliability in any of these modes is optionally and/or additionally guaranteed by the lower-layer Bluetooth BR/EDR air interface by configuring the number of retransmissions and flush timeout (time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer.
Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.
The Service Discovery Protocol (SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine which Bluetooth profiles the headset can use (Headset Profile, Hands-Free Profile (HFP), Advanced Audio Distribution Profile (A2DP) etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by a Universally Unique Identifier (UUID), with official services (Bluetooth profiles) assigned a short-form UUID (16 bits rather than the full 128).
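For illustration, a short-form 16-bit UUID is expanded into a full 128-bit UUID by substituting it into the Bluetooth Base UUID (00000000-0000-1000-8000-00805F9B34FB). A sketch, taking 0x110B as the assigned AudioSink (A2DP) service class value:

```python
import uuid

BASE = uuid.UUID("00000000-0000-1000-8000-00805F9B34FB")

def expand_uuid16(short: int) -> uuid.UUID:
    """Place the 16-bit value into bits 96..111 of the base UUID."""
    return uuid.UUID(int=BASE.int | (short << 96))

# 0x110B is the short UUID assigned to the AudioSink (A2DP) service class.
print(expand_uuid16(0x110B))  # 0000110b-0000-1000-8000-00805f9b34fb
```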
Radio Frequency Communications (RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream. RFCOMM provides for binary data transport and emulates EIA-232 (formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation.
RFCOMM provides a simple, reliable data stream to the user, similar to TCP. It is used directly by many telephony-related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth.
Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.
The Bluetooth Network Encapsulation Protocol (BNEP) is used for transferring another protocol stack's data via an L2CAP channel.
Its main purpose is the transmission of IP packets in the Personal Area Networking Profile.
BNEP performs a similar function to SNAP in Wireless LAN.
The Audio/Video Control Transport Protocol (AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.
The Audio/Video Distribution Transport Protocol (AVDTP) is used by the advanced audio distribution (A2DP) profile to stream music to stereo headsets over an L2CAP channel; it is also intended for the video distribution profile in Bluetooth transmission.
The Telephony Control Protocol – Binary (TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."
TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest.
Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to code protocols only when necessary. The adopted protocols include:
Depending on packet type, individual packets may be protected by error correction, either 1/3-rate forward error correction (FEC) or 2/3-rate FEC. In addition, packets with CRC will be retransmitted until acknowledged by automatic repeat request (ARQ).
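The 1/3-rate FEC here is, concretely, a bit-repetition code: each bit is transmitted three times and the receiver takes a majority vote per triple (the 2/3-rate code is a shortened Hamming code and is not shown). A toy sketch of the repetition code:

```python
def fec13_encode(bits):
    # Each bit is repeated three times on the air.
    return [b for b in bits for _ in range(3)]

def fec13_decode(coded):
    # Majority vote per triple corrects any single bit error in that triple.
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

msg = [1, 0, 1, 1]
coded = fec13_encode(msg)
coded[4] ^= 1                      # flip one transmitted bit
assert fec13_decode(coded) == msg  # the error is corrected
```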
Any Bluetooth device in discoverable mode transmits the following information on demand:
Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device.
Every device has a unique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
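For example, such a 48-bit address is conventionally rendered as six colon-separated hex octets; a sketch (the address below is made up):

```python
addr = 0x001A7DDA7113  # a hypothetical 48-bit device address as an integer

# Emit the six octets from most to least significant, colon-separated.
print(":".join(f"{(addr >> s) & 0xFF:02X}" for s in range(40, -8, -8)))
# 00:1A:7D:DA:71:13
```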
Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names, and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range named T610 (see Bluejacking).
Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range).
To resolve this conflict, Bluetooth uses a process called bonding, and a bond is generated through a process called pairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively.
Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship.
During pairing, the two devices establish a relationship by creating a shared secret known as a link key. If both devices store the same link key, they are said to be paired or bonded. A device that wants to communicate only with a bonded device can cryptographically authenticate the identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticated ACL link between the devices may be encrypted to protect exchanged data against eavesdropping. Users can delete link keys from either device, which removes the bond between the devices, so it is possible for one device to have a stored link key for a device it is no longer paired with.
Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases.
Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms:
SSP is considered simple for the following reasons:
Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simple XOR attacks to retrieve the encryption key.
Bluetooth v2.1 addresses this in the following ways:
Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device—however, if the device is removable, this means that the link key moves with the device.
Bluetooth implements confidentiality, authentication and key derivation with custom algorithms based on the SAFER+ block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm.[136] The E0 stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices.
An overview of Bluetooth vulnerabilities and exploits was published in 2007 by Andreas Becker.[137]
In September 2008, the National Institute of Standards and Technology (NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, the NIST document includes security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers.[138]
Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See the pairing mechanisms section for more about these changes.
Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages, e.g., "You've just been bluejacked!"[139]Bluejacking does not involve the removal or alteration of any data from the device.[140]
Some form of DoS is also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full-screen notification for every connection request, interrupting every other activity, especially on less powerful devices.
In 2001, Jakobsson and Wetzel from Bell Laboratories discovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme.[141] In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data.[142] In a subsequent experiment, Martin Herfurt from the trifinite.group was able to do a field trial at the CeBIT fairgrounds, showing the importance of the problem to the world. A new attack called BlueBug was used for this experiment.[143] In 2004 the first purported virus using Bluetooth to spread itself among mobile phones appeared on the Symbian OS.[144] The virus was first described by Kaspersky Lab and requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology or Symbian OS, since the virus has never spread outside of this system. In August 2004, a world-record-setting experiment (see also Bluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended to 1.78 km (1.11 mi) with directional antennas and signal amplifiers.[145] This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use.[146]
In January 2005, a mobile malware worm known as Lasco surfaced. The worm began targeting mobile phones using Symbian OS (Series 60 platform) using Bluetooth-enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth-enabled devices to infect. Additionally, the worm infects other .SIS files on the device, allowing replication to another device through the use of removable media (Secure Digital, CompactFlash, etc.). The worm can render the mobile device unstable.[147]
In April 2005, University of Cambridge security researchers published results of their actual implementation of passive attacks against the PIN-based pairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.[148]
In June 2005, Yaniv Shaked[149] and Avishai Wool[150] published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof if the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary.[151]
In August 2005, police in Cambridgeshire, England, issued warnings about thieves using Bluetooth-enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way.[152]
In April 2006, researchers from Secure Network and F-Secure published a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.[153]
In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and link-key cracker, which is based on the research of Wool and Shaked.[154]
In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, including Microsoft Windows, Linux, Apple iOS, and Google Android. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers, allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.[155]
In July 2018, Lior Neumann and Eli Biham, researchers at the Technion – Israel Institute of Technology, identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.[156][157]
Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.[158]
In August 2019, security researchers at the Singapore University of Technology and Design, Helmholtz Center for Information Security, and University of Oxford discovered a vulnerability, called KNOB (Key Negotiation of Bluetooth), in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".[159][160] Google released an Android security patch on 5 August 2019, which removed this vulnerability.[161]
In November 2023, researchers from Eurecom revealed a new class of attacks known as BLUFFS (Bluetooth Low Energy Forward and Future Secrecy Attacks). These six new attacks expand on and work in conjunction with the previously known KNOB and BIAS (Bluetooth Impersonation AttackS) attacks. While the previous KNOB and BIAS attacks allowed an attacker to decrypt and spoof Bluetooth packets within a session, BLUFFS extends this capability to all sessions generated by a device (including past, present, and future). All devices running Bluetooth versions 4.2 up to and including 5.4 are affected.[162][163]
Bluetooth uses the radio frequency spectrum in the 2.402 GHz to 2.480 GHz range,[164] which is non-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included by IARC in the possible carcinogen list. Maximum power output from a Bluetooth radio is 100 mW for Class 1, 2.5 mW for Class 2, and 1 mW for Class 3 devices. Even the maximum power output of Class 1 is a lower level than the lowest-powered mobile phones.[165] UMTS and W-CDMA output 250 mW, GSM1800/1900 outputs 1000 mW, and GSM850/900 outputs 2000 mW.
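For comparison with other radio specifications, these class limits are often quoted in dBm; the conversion is dBm = 10·log10(P / 1 mW), as in this small sketch:

```python
import math

def mw_to_dbm(p_mw: float) -> float:
    return 10 * math.log10(p_mw)

for name, p in [("Class 1", 100.0), ("Class 2", 2.5), ("Class 3", 1.0)]:
    print(f"{name}: {p} mW = {mw_to_dbm(p):.0f} dBm")
# Class 1: 100.0 mW = 20 dBm / Class 2: 2.5 mW = 4 dBm / Class 3: 1.0 mW = 0 dBm
```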
The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.[166]
The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World.[167] The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.[168]
|
https://en.wikipedia.org/wiki/Bluetooth#Pairing
|
In computing, autonomous peripheral operation is a hardware feature found in some microcontroller architectures to off-load certain tasks into embedded autonomous peripherals in order to minimize latencies and improve throughput in hard real-time applications, as well as to save energy in ultra-low-power designs.
Forms of autonomous peripherals in microcontrollers were first introduced in the 1990s. Allowing embedded peripherals to work independently of the CPU and even interact with each other in certain pre-configurable ways off-loads event-driven communication into the peripherals, helping improve real-time performance due to lower latency and allowing for potentially higher data throughput due to the added parallelism. Since 2009, the scheme has been improved in newer implementations to continue functioning in sleep modes as well, thereby allowing the CPU (and other unaffected peripheral blocks) to remain dormant for longer periods of time in order to save energy. This is partially driven by the emerging IoT market.[1]
Conceptually, autonomous peripheral operation can be seen as a generalization of and mixture between direct memory access (DMA) and hardware interrupts. Peripherals that issue event signals are called event generators or producers, whereas target peripherals are called event users or consumers. In some implementations, peripherals can be configured to pre-process the incoming data and perform various peripheral-specific functions like comparing, windowing, filtering or averaging in hardware, without having to pass the data through the CPU for processing.
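As a conceptual illustration only, the producer/consumer pattern can be modelled in software like this; real microcontrollers implement the routing in configurable silicon (often an event-routing matrix), so no CPU dispatch is involved:

```python
# Toy software model of hardware event routing (names are ours).
class EventChannel:
    """One routing resource connecting an event producer to its consumers."""
    def __init__(self):
        self._actions = []

    def connect(self, action):
        self._actions.append(action)   # configure a consumer action

    def fire(self):
        for action in self._actions:   # in hardware: a signal edge, not a loop
            action()

# A timer overflow (producer) directly triggers an ADC conversion (consumer),
# with no interrupt handler in between.
timer_overflow = EventChannel()
timer_overflow.connect(lambda: print("ADC: start conversion"))
timer_overflow.fire()
```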
Known implementations include:
|
https://en.wikipedia.org/wiki/Autonomous_peripheral_operation
|
Nearables(alsonearable technology) is a term for a type ofsmart object, invented byEstimote Inc.. The term is used to describe everyday items that have small, wireless computing devices attached to them. These devices can be equipped with a variety of sensors and work as transmitters to broadcast digital data through a variety of methods, but they usually use theBluetooth Smartprotocol. These objects provide mobile devices within their range with information about their location, state, and immediate surroundings. The word 'nearables' is a reference towearable technology– electronic devices worn as part of clothing or jewelry.[1]
The term 'nearables' was first introduced byEstimote Inc.in 2014 as part of a marketing campaign associated with a product launch of the next generation of Bluetooth Smart beacons.[2]Using the language of Estimote, 'nearables' were an implementation of theiBeaconstandard that provided orientation, temperature, and motion information – enabling functionality for Internet of Things applications.[3]
Nearables are a further development of theInternet of Things(also referred to as Internet of Everything). It's a vision of a wide, global network of interconnected devices, using the existing Internet infrastructure to provide services beyond standardmachine-to-machine communications. Although the term Internet of Things was coined byKevin Ashtonin 1999,[4]the idea can be traced to the late 1980s, whenMark Weiserintroduced the idea ofubiquitous computing.[5]
Location-based services emerged in the 1990s with widespread adoption of mobile phones and development of location- and proximity-based technologies, such asGPSandRFID. This, in turn, led to first attempts at wirelessproximity marketingin 2000s with early versions ofBluetooth,NFCandWi-Fistandards as predominant technologies. However, it was not until 2013, whenApple Inc.announced theiBeaconprotocol for Bluetooth Smart-enabled devices, that the idea of creating smart objects by attaching wireless beacons to them started gaining traction.[6]
In August 2014 Estimote Inc. launched Estimote Stickers, a new generation of small Bluetooth Smart-based beacons. The term 'nearables' was inspired by the wearable computers gaining increasing popularity in 2013 and 2014. Two such computers were the Pebble smartwatch and Google Glass. Originally, nearables were described as smart, connected objects that broadcast data about their location, motion and temperature.[7]
In its first interpretation, nearables are not devices themselves. Any object (or a living being, like a human or animal) can become a nearable after a wireless, electronic sensor is attached to it and starts broadcasting data to nearby mobile devices. Due to the continued miniaturization of sensor technology, a single transmitter could be equipped with a whole set of sensors, for example: accelerometer, thermometer, ambient light sensor, humidity sensor or magnetometer. In the second interpretation, the actual nearable devices can be part of a vast array of smart interconnected objects, programmed to improve an individual's vicinity in every way, usually in a smart home environment, where self-learning software lets the devices act intuitively on the needs of individuals.
The first examples of nearables were objects tagged with Bluetooth Smart beacons equipped with an accelerometer and a temperature sensor and broadcasting their signal over a range of approximately 50 meters. They can communicate with mobile applications installed on devices with Bluetooth 4.0 that are compatible with the Bluetooth Smart protocol on the software side. At the moment of their launch, this included mainly iOS 7 and high-end Android mobile devices.
To create a nearable, one must attach an electronic device, working as both a sensor and a transmitter, to an object. Since the only limitation is the size of the device, both items and living beings can act as nearables. The most often cited examples, however, include retail and home automation environments.[8]
|
https://en.wikipedia.org/wiki/Nearables
|
In mathematics and computing, the Levenberg–Marquardt algorithm (LMA or just LM), also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The LMA interpolates between the Gauss–Newton algorithm (GNA) and the method of gradient descent. The LMA is more robust than the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be slower than the GNA. LMA can also be viewed as Gauss–Newton using a trust region approach.
The algorithm was first published in 1944 by Kenneth Levenberg,[1] while working at the Frankford Army Arsenal. It was rediscovered in 1963 by Donald Marquardt,[2] who worked as a statistician at DuPont, and independently by Girard,[3] Wynne[4] and Morrison.[5]
The LMA is used in many software applications for solving generic curve-fitting problems. By using the Gauss–Newton algorithm it often converges faster than first-order methods.[6] However, like other iterative optimization algorithms, the LMA finds only a local minimum, which is not necessarily the global minimum.
The primary application of the Levenberg–Marquardt algorithm is in the least-squares curve fitting problem: given a set of m{\displaystyle m} empirical pairs (xi,yi){\displaystyle \left(x_{i},y_{i}\right)} of independent and dependent variables, find the parameters β{\displaystyle {\boldsymbol {\beta }}} of the model curve f(x,β){\displaystyle f\left(x,{\boldsymbol {\beta }}\right)} so that the sum of the squares of the deviations S(β){\displaystyle S\left({\boldsymbol {\beta }}\right)} is minimized:
S(β)=∑i=1m[yi−f(xi,β)]2{\displaystyle S\left({\boldsymbol {\beta }}\right)=\sum _{i=1}^{m}\left[y_{i}-f\left(x_{i},{\boldsymbol {\beta }}\right)\right]^{2}}
Like other numeric minimization algorithms, the Levenberg–Marquardt algorithm is an iterative procedure. To start a minimization, the user has to provide an initial guess for the parameter vector β{\displaystyle {\boldsymbol {\beta }}}. In cases with only one minimum, an uninformed standard guess like βT=(1,1,…,1){\displaystyle {\boldsymbol {\beta }}^{\text{T}}={\begin{pmatrix}1,\ 1,\ \dots ,\ 1\end{pmatrix}}} will work fine; in cases with multiple minima, the algorithm converges to the global minimum only if the initial guess is already somewhat close to the final solution.
In each iteration step, the parameter vector β{\displaystyle {\boldsymbol {\beta }}} is replaced by a new estimate β+δ{\displaystyle {\boldsymbol {\beta }}+{\boldsymbol {\delta }}}. To determine δ{\displaystyle {\boldsymbol {\delta }}}, the function f(xi,β+δ){\displaystyle f\left(x_{i},{\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)} is approximated by its linearization:
f(xi,β+δ)≈f(xi,β)+Jiδ{\displaystyle f\left(x_{i},{\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)\approx f\left(x_{i},{\boldsymbol {\beta }}\right)+\mathbf {J} _{i}{\boldsymbol {\delta }}}
where
Ji=∂f(xi,β)∂β{\displaystyle \mathbf {J} _{i}={\frac {\partial f\left(x_{i},{\boldsymbol {\beta }}\right)}{\partial {\boldsymbol {\beta }}}}}
is the gradient (row-vector in this case) of f{\displaystyle f} with respect to β{\displaystyle {\boldsymbol {\beta }}}.
The sum S(β){\displaystyle S\left({\boldsymbol {\beta }}\right)} of square deviations has its minimum at a zero gradient with respect to β{\displaystyle {\boldsymbol {\beta }}}. The above first-order approximation of f(xi,β+δ){\displaystyle f\left(x_{i},{\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)} gives
S(β+δ)≈∑i=1m[yi−f(xi,β)−Jiδ]2{\displaystyle S\left({\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)\approx \sum _{i=1}^{m}\left[y_{i}-f\left(x_{i},{\boldsymbol {\beta }}\right)-\mathbf {J} _{i}{\boldsymbol {\delta }}\right]^{2}}
or in vector notation,
S(β+δ)≈‖y−f(β)−Jδ‖2{\displaystyle S\left({\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)\approx \left\|\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)-\mathbf {J} {\boldsymbol {\delta }}\right\|^{2}}
Taking the derivative of this approximation of S(β+δ){\displaystyle S\left({\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)} with respect to δ{\displaystyle {\boldsymbol {\delta }}} and setting the result to zero gives
(JTJ)δ=JT[y−f(β)]{\displaystyle \left(\mathbf {J} ^{\mathrm {T} }\mathbf {J} \right){\boldsymbol {\delta }}=\mathbf {J} ^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]}
where J{\displaystyle \mathbf {J} } is the Jacobian matrix, whose i{\displaystyle i}-th row equals Ji{\displaystyle \mathbf {J} _{i}}, and where f(β){\displaystyle \mathbf {f} \left({\boldsymbol {\beta }}\right)} and y{\displaystyle \mathbf {y} } are vectors with i{\displaystyle i}-th component f(xi,β){\displaystyle f\left(x_{i},{\boldsymbol {\beta }}\right)} and yi{\displaystyle y_{i}} respectively. The above expression obtained for β{\displaystyle {\boldsymbol {\beta }}} comes under the Gauss–Newton method. The Jacobian matrix as defined above is not (in general) a square matrix, but a rectangular matrix of size m×n{\displaystyle m\times n}, where n{\displaystyle n} is the number of parameters (size of the vector β{\displaystyle {\boldsymbol {\beta }}}). The matrix multiplication (JTJ){\displaystyle \left(\mathbf {J} ^{\mathrm {T} }\mathbf {J} \right)} yields the required n×n{\displaystyle n\times n} square matrix and the matrix-vector product on the right hand side yields a vector of size n{\displaystyle n}. The result is a set of n{\displaystyle n} linear equations, which can be solved for δ{\displaystyle {\boldsymbol {\delta }}}.
Levenberg's contribution is to replace this equation by a "damped version":
(JTJ+λI)δ=JT[y−f(β)]{\displaystyle \left(\mathbf {J} ^{\mathrm {T} }\mathbf {J} +\lambda \mathbf {I} \right){\boldsymbol {\delta }}=\mathbf {J} ^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]}
where I{\displaystyle \mathbf {I} } is the identity matrix, giving as the increment δ{\displaystyle {\boldsymbol {\delta }}} to the estimated parameter vector β{\displaystyle {\boldsymbol {\beta }}}.
The (non-negative) damping factor λ{\displaystyle \lambda } is adjusted at each iteration. If reduction of S{\displaystyle S} is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss–Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, λ{\displaystyle \lambda } can be increased, giving a step closer to the gradient-descent direction. Note that the gradient of S{\displaystyle S} with respect to β{\displaystyle {\boldsymbol {\beta }}} equals −2(JT[y−f(β)])T{\displaystyle -2\left(\mathbf {J} ^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]\right)^{\mathrm {T} }}. Therefore, for large values of λ{\displaystyle \lambda }, the step will be taken approximately in the direction opposite to the gradient. If either the length of the calculated step δ{\displaystyle {\boldsymbol {\delta }}} or the reduction of the sum of squares from the latest parameter vector β+δ{\displaystyle {\boldsymbol {\beta }}+{\boldsymbol {\delta }}} falls below predefined limits, iteration stops, and the last parameter vector β{\displaystyle {\boldsymbol {\beta }}} is considered to be the solution.
When the damping factor λ{\displaystyle \lambda } is large relative to ‖JTJ‖{\displaystyle \|\mathbf {J} ^{\mathrm {T} }\mathbf {J} \|}, inverting JTJ+λI{\displaystyle \mathbf {J} ^{\mathrm {T} }\mathbf {J} +\lambda \mathbf {I} } is not necessary, as the update is well-approximated by the small gradient step λ−1JT[y−f(β)]{\displaystyle \lambda ^{-1}\mathbf {J} ^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]}.
To make the solution scale invariant, Marquardt's algorithm solved a modified problem with each component of the gradient scaled according to the curvature. This provides larger movement along the directions where the gradient is smaller, which avoids slow convergence in the direction of small gradient. Fletcher in his 1971 paper A modified Marquardt subroutine for non-linear least squares simplified the form, replacing the identity matrix I{\displaystyle \mathbf {I} } with the diagonal matrix consisting of the diagonal elements of JTJ{\displaystyle \mathbf {J} ^{\text{T}}\mathbf {J} }:
(JTJ+λdiag(JTJ))δ=JT[y−f(β)]{\displaystyle \left(\mathbf {J} ^{\mathrm {T} }\mathbf {J} +\lambda \operatorname {diag} \left(\mathbf {J} ^{\mathrm {T} }\mathbf {J} \right)\right){\boldsymbol {\delta }}=\mathbf {J} ^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]}
A similar damping factor appears in Tikhonov regularization, which is used to solve linear ill-posed problems, as well as in ridge regression, an estimation technique in statistics.
Various more or less heuristic arguments have been put forward for the best choice for the damping parameter λ{\displaystyle \lambda }. Theoretical arguments exist showing why some of these choices guarantee local convergence of the algorithm; however, these choices can make the global convergence of the algorithm suffer from the undesirable properties of steepest descent, in particular, very slow convergence close to the optimum.
The absolute values of any choice depend on how well-scaled the initial problem is. Marquardt recommended starting with a value λ0{\displaystyle \lambda _{0}} and a factor ν>1{\displaystyle \nu >1}. Initially one sets λ=λ0{\displaystyle \lambda =\lambda _{0}} and computes the residual sum of squares S(β){\displaystyle S\left({\boldsymbol {\beta }}\right)} after one step from the starting point, first with the damping factor λ=λ0{\displaystyle \lambda =\lambda _{0}} and secondly with λ0/ν{\displaystyle \lambda _{0}/\nu }. If both of these are worse than the initial point, then the damping is increased by successive multiplication by ν{\displaystyle \nu } until a better point is found with a new damping factor of λ0νk{\displaystyle \lambda _{0}\nu ^{k}} for some k{\displaystyle k}.
If use of the damping factor λ/ν{\displaystyle \lambda /\nu } results in a reduction in the squared residual, then this is taken as the new value of λ{\displaystyle \lambda } (and the new optimum location is taken as that obtained with this damping factor) and the process continues; if using λ/ν{\displaystyle \lambda /\nu } resulted in a worse residual, but using λ{\displaystyle \lambda } resulted in a better residual, then λ{\displaystyle \lambda } is left unchanged and the new optimum is taken as the value obtained with λ{\displaystyle \lambda } as the damping factor.
An effective strategy for the control of the damping parameter, called delayed gratification, consists of increasing the parameter by a small amount for each uphill step, and decreasing by a large amount for each downhill step. The idea behind this strategy is to avoid moving downhill too fast in the beginning of optimization, therefore restricting the steps available in future iterations and therefore slowing down convergence.[7] An increase by a factor of 2 and a decrease by a factor of 3 has been shown to be effective in most cases, while for large problems more extreme values can work better, with an increase by a factor of 1.5 and a decrease by a factor of 5.[8]
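The damping schedule described above condenses into a short loop. The following NumPy sketch is an illustrative implementation of the basic Levenberg iteration with multiplicative adjustment of λ; the function and variable names are ours, not from any particular library, and this is a teaching sketch rather than a production solver.

import numpy as np

def levenberg_marquardt(residual, jac, beta, lam=1e-3, nu=2.0, iters=100, tol=1e-10):
    """Minimize ||residual(beta)||^2 with Levenberg damping (illustrative sketch)."""
    for _ in range(iters):
        r = residual(beta)                  # r_i = y_i - f(x_i, beta)
        J = jac(beta)                       # m x n Jacobian of f at beta
        g = J.T @ r                         # right-hand side J^T [y - f(beta)]
        A = J.T @ J
        # Solve the damped normal equations (J^T J + lam I) delta = J^T r.
        delta = np.linalg.solve(A + lam * np.eye(len(beta)), g)
        if np.linalg.norm(delta) < tol:
            break
        new_beta = beta + delta
        if np.sum(residual(new_beta)**2) < np.sum(r**2):
            beta, lam = new_beta, lam / nu  # success: move toward Gauss-Newton
        else:
            lam *= nu                       # failure: move toward gradient descent
    return beta

For small problems a Jacobian computed by finite differences suffices; analytic Jacobians are preferred when they are available.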
When interpreting the Levenberg–Marquardt step as the velocity vk{\displaystyle {\boldsymbol {v}}_{k}} along a geodesic path in the parameter space, it is possible to improve the method by adding a second-order term that accounts for the acceleration ak{\displaystyle {\boldsymbol {a}}_{k}} along the geodesic
where ak{\displaystyle {\boldsymbol {a}}_{k}} is the solution of
Since this geodesic acceleration term depends only on the directional derivative fvv=∑μνvμvν∂μ∂νf(x){\displaystyle f_{vv}=\sum _{\mu \nu }v_{\mu }v_{\nu }\partial _{\mu }\partial _{\nu }f({\boldsymbol {x}})} along the direction of the velocity v{\displaystyle {\boldsymbol {v}}}, it does not require computing the full second-order derivative matrix, requiring only a small overhead in terms of computing cost.[9] Since the second-order derivative can be a fairly complex expression, it can be convenient to replace it with a finite difference approximation
where f(x){\displaystyle f({\boldsymbol {x}})} and J{\displaystyle {\boldsymbol {J}}} have already been computed by the algorithm, therefore requiring only one additional function evaluation to compute f(x+hδ){\displaystyle f({\boldsymbol {x}}+h{\boldsymbol {\delta }})}. The choice of the finite difference step h{\displaystyle h} can affect the stability of the algorithm, and a value of around 0.1 is usually reasonable in general.[8]
Since the acceleration may point in the opposite direction to the velocity, to prevent it from stalling the method in case the damping is too small, an additional criterion on the acceleration is added in order to accept a step, requiring that
where α{\displaystyle \alpha } is usually fixed to a value less than 1, with smaller values for harder problems.[8]
The addition of a geodesic acceleration term can allow a significant increase in convergence speed, and it is especially useful when the algorithm is moving through narrow canyons in the landscape of the objective function, where the allowed steps are smaller and the higher accuracy due to the second-order term gives significant improvements.[8]
In this example we try to fit the function y=acos(bX)+bsin(aX){\displaystyle y=a\cos \left(bX\right)+b\sin \left(aX\right)} using the Levenberg–Marquardt algorithm implemented in GNU Octave as the leasqr function. The graphs show progressively better fitting for the parameters a=100{\displaystyle a=100}, b=102{\displaystyle b=102} used in the initial curve. Only when the parameters in the last graph are chosen closest to the original are the curves fitting exactly. This equation is an example of very sensitive initial conditions for the Levenberg–Marquardt algorithm. One reason for this sensitivity is the existence of multiple minima — the function cos(βx){\displaystyle \cos \left(\beta x\right)} has minima at parameter value β^{\displaystyle {\hat {\beta }}} and β^+2nπ{\displaystyle {\hat {\beta }}+2n\pi }.
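A similar experiment can be run in Python with SciPy, whose least_squares routine dispatches to a MINPACK implementation of Levenberg–Marquardt when method='lm' is requested. The starting point below is deliberately chosen close to the true parameters, in line with the sensitivity discussion above.

import numpy as np
from scipy.optimize import least_squares

a_true, b_true = 100.0, 102.0
x = np.linspace(0, 1, 200)
y = a_true * np.cos(b_true * x) + b_true * np.sin(a_true * x)

def residuals(p):
    a, b = p
    return a * np.cos(b * x) + b * np.sin(a * x) - y

# With multiple minima, convergence to the true parameters is only expected
# when the initial guess is already nearby.
fit = least_squares(residuals, x0=[100.5, 101.5], method="lm")
print(fit.x)   # should be approximately [100., 102.]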
|
https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm
|
Anomalous diffusion is a diffusion process with a non-linear relationship between the mean squared displacement (MSD), ⟨r2(τ)⟩{\displaystyle \langle r^{2}(\tau )\rangle }, and time. This behavior is in stark contrast to Brownian motion, the typical diffusion process described by Albert Einstein and Marian Smoluchowski, where the MSD is linear in time (namely, ⟨r2(τ)⟩=2dDτ{\displaystyle \langle r^{2}(\tau )\rangle =2dD\tau } with d being the number of dimensions and D the diffusion coefficient).[1][2]
It has been found that equations describing normal diffusion are not capable of characterizing some complex diffusion processes, for instance, diffusion in inhomogeneous or heterogeneous media, e.g. porous media. Fractional diffusion equations were introduced in order to characterize anomalous diffusion phenomena.
Examples of anomalous diffusion in nature have been observed in ultra-cold atoms,[3] harmonic spring-mass systems,[4] scalar mixing in the interstellar medium,[5] telomeres in the nucleus of cells,[6] ion channels in the plasma membrane,[7] colloidal particles in the cytoplasm,[8][9][10] moisture transport in cement-based materials,[11] and worm-like micellar solutions.[12]
Unlike typical diffusion, anomalous diffusion is described by a power law,
⟨r2(τ)⟩=Kατα{\displaystyle \langle r^{2}(\tau )\rangle =K_{\alpha }\tau ^{\alpha }\,}
where Kα{\displaystyle K_{\alpha }} is the so-called generalized diffusion coefficient and τ{\displaystyle \tau } is the elapsed time. Anomalous diffusion is classified by the value of the exponent α{\displaystyle \alpha }: for α<1{\displaystyle \alpha <1} the process is subdiffusive, for α=1{\displaystyle \alpha =1} it reduces to ordinary Brownian diffusion, and for α>1{\displaystyle \alpha >1} it is superdiffusive (with α=2{\displaystyle \alpha =2} corresponding to ballistic motion).
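In practice α is often estimated by fitting a straight line to the MSD on log-log axes. The following sketch illustrates the procedure; a simulated Brownian path stands in for a measured trajectory, so the fitted exponent should come out close to 1.

import numpy as np

rng = np.random.default_rng(0)
steps = rng.normal(size=(10000, 2))            # 2-D Brownian increments
path = np.cumsum(steps, axis=0)

lags = np.unique(np.logspace(0, 3, 20).astype(int))
msd = [np.mean(np.sum((path[lag:] - path[:-lag])**2, axis=1)) for lag in lags]

# Slope of log(MSD) versus log(tau) estimates alpha; ~1 for Brownian motion.
alpha, log_K = np.polyfit(np.log(lags), np.log(msd), 1)
print(f"estimated alpha = {alpha:.2f}")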
In 1926, using weather balloons, Lewis Fry Richardson demonstrated that the atmosphere exhibits super-diffusion.[15] In a bounded system, the mixing length (which determines the scale of dominant mixing motions) is given by the von Kármán constant according to the equation lm=κz{\displaystyle l_{m}={\kappa }z}, where lm{\displaystyle l_{m}} is the mixing length, κ{\displaystyle {\kappa }} is the von Kármán constant, and z{\displaystyle z} is the distance to the nearest boundary.[16] Because the scale of motions in the atmosphere is not limited, as in rivers or the subsurface, a plume continues to experience larger mixing motions as it increases in size, which also increases its diffusivity, resulting in super-diffusion.[17]
Fitting the measured MSD to the power law above allows one to determine the type of anomalous diffusion. There are many possible ways to mathematically define a stochastic process that has this kind of power law; some models are given here. Currently the most studied types of anomalous diffusion processes are those involving long-range correlations between the signals, continuous-time random walks (CTRW),[18] fractional Brownian motion (fBm), and diffusion in disordered media.[19]
These processes have attracted growing interest in cell biophysics, where the mechanism behind anomalous diffusion has direct physiological importance. Of particular interest, works by the groups of Eli Barkai, Maria Garcia-Parajo, Joseph Klafter, Diego Krapf, and Ralf Metzler have shown that the motion of molecules in live cells often shows a type of anomalous diffusion that breaks the ergodic hypothesis.[20][21][22] This type of motion requires novel formalisms for the underlying statistical physics because approaches using the microcanonical ensemble and the Wiener–Khinchin theorem break down.
|
https://en.wikipedia.org/wiki/Anomalous_diffusion
|
Reproducibility, closely related to replicability and repeatability, is a major principle underpinning the scientific method. For the findings of a study to be reproducible means that results obtained by an experiment or an observational study or in a statistical analysis of a data set should be achieved again with a high degree of reliability when the study is replicated. There are different kinds of replication[1] but typically replication studies involve different researchers using the same methodology. Only after one or several such successful replications should a result be recognized as scientific knowledge.
There are different kinds of replication studies, each serving a unique role in scientific validation:
Direct Replication – The exact experiment or study is repeated under the same conditions to verify the original findings.
Conceptual Replication – A study tests the same hypothesis but uses a different methodology, materials, or population to see if the results hold in different contexts.
Computational Reproducibility – In data science and computational research, reproducibility requires making all datasets, code, and algorithms openly available so others can replicate the analysis and obtain the same results.
Reproducibility serves several critical purposes in science:
Verification of Results – Confirms that findings are not due to random chance or errors.
Building Trust in Research – Scientists, policymakers, and the public rely on reproducible studies to make informed decisions.
Advancing Knowledge – Establishes a strong foundation for future research by validating existing theories.
Avoiding Bias and Fraud – Helps detect false positives, publication bias, and data manipulation that could mislead the scientific community.
Challenges in Achieving Reproducibility
Despite its importance, many studies fail reproducibility tests, leading to what is known as the replication crisis in fields like psychology, medicine, and social sciences. Some key challenges include:
Insufficient Data Sharing – Many researchers do not make raw data, code, or methodology openly available, making replication difficult.
Small Sample Sizes – Studies with limited sample sizes may show results that do not generalize to larger populations.
Publication Bias – Journals tend to publish positive findings rather than null or negative results, leading to an incomplete scientific record.
Complex Experimental Conditions – In some cases, small variations in laboratory settings, equipment, or researcher expertise can affect outcomes, making exact replication difficult.
Medical Research – Reproducibility ensures that clinical trials and drug effectiveness studies produce reliable results before treatments reach the public.
AI and Machine Learning – Scientists emphasize reproducibility in AI by requiring open-source models and datasets to validate algorithm performance.
Climate Science – Climate models must be reproducible across different datasets and simulations to ensure accurate predictions of global warming.
Pharmaceutical Development – Drug discovery relies on reproducing experiments across multiple labs to ensure safety and efficacy.
To enhance reproducibility, researchers and institutions can adopt several best practices:
Open Data and Code – Making datasets and computational methods publicly available ensures that others can verify results.
Registered Reports – Some scientific journals now accept studies based on pre-registered research plans, reducing bias.
Standardized Methods – Using well-documented, standardized experimental protocols helps ensure consistent results.
Independent Replication Studies – Funding agencies and journals should prioritize replication studies to strengthen scientific integrity.
With a narrower scope, reproducibility has been defined in computational sciences as having the following quality: the results should be documented by making all data and code available in such a way that the computations can be executed again with identical results.
In recent decades, there has been a rising concern that many published scientific results fail the test of reproducibility, evoking a reproducibility or replication crisis.
The first to stress the importance of reproducibility in science was the Anglo-Irish chemist Robert Boyle, in England in the 17th century. Boyle's air pump was designed to generate and study vacuum, which at the time was a very controversial concept. Indeed, distinguished philosophers such as René Descartes and Thomas Hobbes denied the very possibility of vacuum existence. Historians of science Steven Shapin and Simon Schaffer, in their 1985 book Leviathan and the Air-Pump, describe the debate between Boyle and Hobbes, ostensibly over the nature of vacuum, as fundamentally an argument about how useful knowledge should be gained. Boyle, a pioneer of the experimental method, maintained that the foundations of knowledge should be constituted by experimentally produced facts, which can be made believable to a scientific community by their reproducibility. By repeating the same experiment over and over again, Boyle argued, the certainty of fact will emerge.
The air pump, which in the 17th century was a complicated and expensive apparatus to build, also led to one of the first documented disputes over the reproducibility of a particular scientific phenomenon. In the 1660s, the Dutch scientist Christiaan Huygens built his own air pump in Amsterdam, the first one outside the direct management of Boyle and his assistant at the time, Robert Hooke. Huygens reported an effect he termed "anomalous suspension", in which water appeared to levitate in a glass jar inside his air pump (in fact suspended over an air bubble), but Boyle and Hooke could not replicate this phenomenon in their own pumps. As Shapin and Schaffer describe, "it became clear that unless the phenomenon could be produced in England with one of the two pumps available, then no one in England would accept the claims Huygens had made, or his competence in working the pump". Huygens was finally invited to England in 1663, and under his personal guidance Hooke was able to replicate anomalous suspension of water. Following this Huygens was elected a Foreign Member of the Royal Society. However, Shapin and Schaffer also note that "the accomplishment of replication was dependent on contingent acts of judgment. One cannot write down a formula saying when replication was or was not achieved".[2]
The philosopher of science Karl Popper noted briefly in his famous 1934 book The Logic of Scientific Discovery that "non-reproducible single occurrences are of no significance to science".[3] The statistician Ronald Fisher wrote in his 1935 book The Design of Experiments, which set the foundations for the modern scientific practice of hypothesis testing and statistical significance, that "we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us statistically significant results".[4] Such assertions express a common dogma in modern science that reproducibility is a necessary condition (although not necessarily sufficient) for establishing a scientific fact, and in practice for establishing scientific authority in any field of knowledge. However, as noted above by Shapin and Schaffer, this dogma is not well-formulated quantitatively, unlike statistical significance for instance, and therefore it is not explicitly established how many times a fact must be replicated to be considered reproducible.
Replicability and repeatability are related terms broadly or loosely synonymous with reproducibility (for example, among the general public), but they are often usefully differentiated in more precise senses, as follows.
Two major steps are naturally distinguished in connection with the reproducibility of experimental or observational studies: repeating the analysis of the original data with the same procedures, and repeating the study to obtain new data. When new data is obtained in the attempt to achieve the same results, the term replicability is often used, and the new study is a replication or replicate of the original one. When the same results are obtained by analyzing the data set of the original study again with the same procedures, many authors use the term reproducibility in a narrow, technical sense coming from its use in computational research. Repeatability is related to the repetition of the experiment within the same study by the same researchers.
Reproducibility in the original, wide sense is only acknowledged if a replication performed by an independent researcher team is successful.
The terms reproducibility and replicability sometimes appear even in the scientific literature with reversed meaning,[5][6] as different research fields settled on their own definitions for the same terms.[7]
In chemistry, the terms reproducibility and repeatability are used with a specific quantitative meaning.[8] In inter-laboratory experiments, a concentration or other quantity of a chemical substance is measured repeatedly in different laboratories to assess the variability of the measurements. Then, the standard deviation of the difference between two values obtained within the same laboratory is called repeatability. The standard deviation for the difference between two measurements from different laboratories is called reproducibility.[9] These measures are related to the more general concept of variance components in metrology.
The term reproducible research refers to the idea that scientific results should be documented in such a way that their deduction is fully transparent. This requires a detailed description of the methods used to obtain the data[10][11] and making the full dataset and the code to calculate the results easily accessible.[12][13][14][15][16][17] This is the essential part of open science.
To make any research project computationally reproducible, general practice involves all data and files being clearly separated, labelled, and documented. All operations should be fully documented and automated as much as practicable, avoiding manual intervention where feasible. The workflow should be designed as a sequence of smaller steps that are combined so that the intermediate outputs from one step directly feed as inputs into the next step. Version control should be used as it lets the history of the project be easily reviewed and allows for the documenting and tracking of changes in a transparent manner.
A basic workflow for reproducible research involves data acquisition, data processing and data analysis. Data acquisition primarily consists of obtaining primary data from a primary source such as surveys, field observations, experimental research, or obtaining data from an existing source. Data processing involves the processing and review of the raw data collected in the first stage, and includes data entry, data manipulation and filtering and may be done using software. The data should be digitized and prepared for data analysis. Data may be analysed with the use of software to interpret or visualise statistics or data to produce the desired results of the research such as quantitative results including figures and tables. The use of software and automation enhances the reproducibility of research methods.[18]
There are systems that facilitate such documentation, like the R Markdown language[19] or the Jupyter notebook.[20][21][22] The Open Science Framework provides a platform and useful tools to support reproducible research.
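As a minimal illustration of these practices, the following sketch fixes the random seed, separates the acquisition, processing and analysis steps, and writes each intermediate output to a labelled file together with provenance information. The file names and the simulated "raw data" are placeholders only.

import json, sys
import numpy as np

SEED = 20240101                      # fixed seed: reruns give identical results
rng = np.random.default_rng(SEED)

# Step 1: data acquisition (simulated here; normally read from a raw-data file).
raw = rng.normal(loc=5.0, scale=2.0, size=1000)
np.savetxt("01_raw_data.csv", raw)

# Step 2: data processing - filter outliers, save the intermediate output.
clean = raw[np.abs(raw - raw.mean()) < 3 * raw.std()]
np.savetxt("02_clean_data.csv", clean)

# Step 3: data analysis - compute and record results plus provenance.
results = {
    "n": int(clean.size),
    "mean": float(clean.mean()),
    "seed": SEED,
    "python": sys.version,
}
with open("03_results.json", "w") as f:
    json.dump(results, f, indent=2)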
Psychology has seen a renewal of internal concerns about irreproducible results (see the entry on the replicability crisis for empirical results on success rates of replications). Researchers showed in a 2006 study that, of 141 authors of a publication from the American Psychological Association (APA) empirical articles, 103 (73%) did not respond with their data over a six-month period.[23] In a follow-up study published in 2015, it was found that 246 out of 394 contacted authors of papers in APA journals did not share their data upon request (62%).[24] In a 2012 paper, it was suggested that researchers should publish data along with their works, and a dataset was released alongside as a demonstration.[25] In 2017, an article published in Scientific Data suggested that this may not be sufficient and that the whole analysis context should be disclosed.[26]
In economics, concerns have been raised in relation to the credibility and reliability of published research. In other sciences, reproducibility is regarded as fundamental and is often a prerequisite to research being published; however, in economic sciences it is not seen as a priority of the greatest importance. Most peer-reviewed economic journals do not take any substantive measures to ensure that published results are reproducible; however, the top economics journals have been moving to adopt mandatory data and code archives.[27] There are few or no incentives for researchers to share their data, and authors would have to bear the costs of compiling data into reusable forms. Economic research is often not reproducible as only a portion of journals have adequate disclosure policies for datasets and program code, and even if they do, authors frequently do not comply with them or they are not enforced by the publisher. A study of 599 articles published in 37 peer-reviewed journals revealed that while some journals have achieved significant compliance rates, a significant portion have only partially complied, or not complied at all. On an article level, the average compliance rate was 47.5%, and on a journal level, the average compliance rate was 38%, ranging from 13% to 99%.[28]
A 2018 study published in the journal PLOS ONE found that 14.4% of a sample of public health statistics researchers had shared their data or code or both.[29]
There have been initiatives to improve reporting and hence reproducibility in the medical literature for many years, beginning with the CONSORT initiative, which is now part of a wider initiative, the EQUATOR Network.
This group has recently turned its attention to how better reporting might reduce waste in research,[30] especially biomedical research.
Reproducible research is key to new discoveries in pharmacology. A Phase I discovery will be followed by Phase II reproductions as a drug develops towards commercial production. In recent decades Phase II success has fallen from 28% to 18%. A 2011 study found that 65% of medical studies were inconsistent when re-tested, and only 6% were completely reproducible.[31]
Some efforts have been made to increase replicability beyond the social and biomedical sciences. Studies in the humanities tend to rely more on expertise and hermeneutics, which may make replicability more difficult. Nonetheless, some efforts have been made to call for more transparency and documentation in the humanities.[32]
Hideyo Noguchi became famous for correctly identifying the bacterial agent of syphilis, but also claimed that he could culture this agent in his laboratory. Nobody else has been able to produce this latter result.[33]
In March 1989, University of Utah chemists Stanley Pons and Martin Fleischmann reported the production of excess heat that could only be explained by a nuclear process ("cold fusion"). The report was astounding given the simplicity of the equipment: it was essentially an electrolysis cell containing heavy water and a palladium cathode which rapidly absorbed the deuterium produced during electrolysis. The news media reported on the experiments widely, and it was a front-page item on many newspapers around the world (see science by press conference). Over the next several months others tried to replicate the experiment, but were unsuccessful.[34]
Nikola Tesla claimed as early as 1899 to have used a high frequency current to light gas-filled lamps from over 25 miles (40 km) away without using wires. In 1904 he built Wardenclyffe Tower on Long Island to demonstrate means to send and receive power without connecting wires. The facility was never fully operational and was not completed due to economic problems, so no attempt to reproduce his first result was ever carried out.[35]
Other examples in which contrary evidence has refuted the original claim:
|
https://en.wikipedia.org/wiki/Reproducibility
|
This is a list of operator splitting topics.
|
https://en.wikipedia.org/wiki/List_of_operator_splitting_topics
|
An integer is the number zero (0), a positive natural number (1, 2, 3, ...), or the negation of a positive natural number (−1, −2, −3, ...).[1] The negations or additive inverses of the positive natural numbers are referred to as negative integers.[2] The set of all integers is often denoted by the boldface Z or blackboard bold Z{\displaystyle \mathbb {Z} }.[3][4]
The set of natural numbers N{\displaystyle \mathbb {N} } is a subset of Z{\displaystyle \mathbb {Z} }, which in turn is a subset of the set of all rational numbers Q{\displaystyle \mathbb {Q} }, itself a subset of the real numbers R{\displaystyle \mathbb {R} }.[a] Like the set of natural numbers, the set of integers Z{\displaystyle \mathbb {Z} } is countably infinite. An integer may be regarded as a real number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5+1/2, 5/4, and √2 are not.[8]
The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes qualified as rational integers to distinguish them from the more general algebraic integers. In fact, (rational) integers are algebraic integers that are also rational numbers.
The word integer comes from the Latin integer meaning "whole" or (literally) "untouched", from in ("not") plus tangere ("to touch"). "Entire" derives from the same origin via the French word entier, which means both entire and integer.[9] Historically the term was used for a number that was a multiple of 1,[10][11] or to the whole part of a mixed number.[12][13] Only positive integers were considered, making the term synonymous with the natural numbers. The definition of integer expanded over time to include negative numbers as their usefulness was recognized.[14] For example Leonhard Euler in his 1765 Elements of Algebra defined integers to include both positive and negative numbers.[15]
The phrase the set of the integers was not used before the end of the 19th century, when Georg Cantor introduced the concept of infinite sets and set theory. The use of the letter Z to denote the set of integers comes from the German word Zahlen ("numbers")[3][4] and has been attributed to David Hilbert.[16] The earliest known use of the notation in a textbook occurs in Algèbre written by the collective Nicolas Bourbaki, dating to 1947.[3][17] The notation was not adopted immediately. For example, another textbook used the letter J,[18] and a 1960 paper used Z to denote the non-negative integers.[19] But by 1961, Z was generally used by modern algebra texts to denote the positive and negative integers.[20]
The symbol Z{\displaystyle \mathbb {Z} } is often annotated to denote various sets, with varying usage amongst different authors: Z+{\displaystyle \mathbb {Z} ^{+}}, Z+{\displaystyle \mathbb {Z} _{+}}, or Z>{\displaystyle \mathbb {Z} ^{>}} for the positive integers, Z0+{\displaystyle \mathbb {Z} ^{0+}} or Z≥{\displaystyle \mathbb {Z} ^{\geq }} for non-negative integers, and Z≠{\displaystyle \mathbb {Z} ^{\neq }} for non-zero integers. Some authors use Z∗{\displaystyle \mathbb {Z} ^{*}} for non-zero integers, while others use it for non-negative integers, or for {−1, 1} (the group of units of Z{\displaystyle \mathbb {Z} }). Additionally, Zp{\displaystyle \mathbb {Z} _{p}} is used to denote either the set of integers modulo p (i.e., the set of congruence classes of integers), or the set of p-adic integers.[21][22]
The whole numbers were synonymous with the integers up until the early 1950s.[23][24][25] In the late 1950s, as part of the New Math movement,[26] American elementary school teachers began teaching that whole numbers referred to the natural numbers, excluding negative numbers, while integer included the negative numbers.[27][28] The whole numbers remain ambiguous to the present day.[29]
Like the natural numbers, Z{\displaystyle \mathbb {Z} } is closed under the operations of addition and multiplication, that is, the sum and product of any two integers is an integer. However, with the inclusion of the negative natural numbers (and importantly, 0), Z{\displaystyle \mathbb {Z} }, unlike the natural numbers, is also closed under subtraction.[30]
The integers form a ring which is the most basic one, in the following sense: for any ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring Z{\displaystyle \mathbb {Z} }. This unique homomorphism is injective if and only if the characteristic of the ring is zero. It follows that every ring of characteristic zero contains a subring isomorphic to Z{\displaystyle \mathbb {Z} }, which is its smallest subring.
Z{\displaystyle \mathbb {Z} } is not closed under division, since the quotient of two integers (e.g., 1 divided by 2) need not be an integer. Although the natural numbers are closed under exponentiation, the integers are not (since the result can be a fraction when the exponent is negative).
The following table lists some of the basic properties of addition and multiplication for any integers a, b, and c:
The first five properties listed above for addition say that Z{\displaystyle \mathbb {Z} }, under addition, is an abelian group. It is also a cyclic group, since every non-zero integer can be written as a finite sum 1 + 1 + ... + 1 or (−1) + (−1) + ... + (−1). In fact, Z{\displaystyle \mathbb {Z} } under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z{\displaystyle \mathbb {Z} }.
The first four properties listed above for multiplication say that Z{\displaystyle \mathbb {Z} } under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse (as is the case of the number 2), which means that Z{\displaystyle \mathbb {Z} } under multiplication is not a group.
All the rules from the above property table (except for the last), when taken together, say that Z{\displaystyle \mathbb {Z} } together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. The only equalities of expressions that are true in Z{\displaystyle \mathbb {Z} } for all values of the variables are those that are true in any unital commutative ring. Certain non-zero integers map to zero in certain rings.
The lack of zero divisors in the integers (last property in the table) means that the commutative ring Z{\displaystyle \mathbb {Z} } is an integral domain.
The lack of multiplicative inverses, which is equivalent to the fact that Z{\displaystyle \mathbb {Z} } is not closed under division, means that Z{\displaystyle \mathbb {Z} } is not a field. The smallest field containing the integers as a subring is the field of rational numbers. The process of constructing the rationals from the integers can be mimicked to form the field of fractions of any integral domain. Conversely, starting from an algebraic number field (an extension of rational numbers), its ring of integers can be extracted, which includes Z{\displaystyle \mathbb {Z} } as its subring.
Although ordinary division is not defined on Z{\displaystyle \mathbb {Z} }, division "with remainder" is defined on them. It is called Euclidean division, and possesses the following important property: given two integers a and b with b ≠ 0, there exist unique integers q and r such that a = q × b + r and 0 ≤ r < |b|, where |b| denotes the absolute value of b. The integer q is called the quotient and r is called the remainder of the division of a by b. The Euclidean algorithm for computing greatest common divisors works by a sequence of Euclidean divisions.
The above says that Z{\displaystyle \mathbb {Z} } is a Euclidean domain. This implies that Z{\displaystyle \mathbb {Z} } is a principal ideal domain, and any positive integer can be written as the product of primes in an essentially unique way.[31] This is the fundamental theorem of arithmetic.
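In code, care is needed because many languages return remainders that can be negative, unlike the 0 ≤ r < |b| convention above. A small Python sketch of Euclidean division in that convention:

def euclidean_division(a, b):
    """Return (q, r) with a == q*b + r and 0 <= r < abs(b)."""
    if b == 0:
        raise ZeroDivisionError
    q, r = divmod(a, b)          # Python's r has the sign of b
    if r < 0:                    # shift into the range 0 <= r < |b|
        q, r = q + 1, r - b
    return q, r

print(euclidean_division(7, -3))    # (-2, 1): 7 == (-2)*(-3) + 1
print(euclidean_division(-7, 3))    # (-3, 2): -7 == (-3)*3 + 2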
Z{\displaystyle \mathbb {Z} } is a totally ordered set without upper or lower bound. The ordering of Z{\displaystyle \mathbb {Z} } is given by: ... −3 < −2 < −1 < 0 < 1 < 2 < 3 < ...
An integer is positive if it is greater than zero, and negative if it is less than zero. Zero is defined as neither negative nor positive.
The ordering of integers is compatible with the algebraic operations in the following way: if a < b and c < d, then a + c < b + d; and if a < b and 0 < c, then ac < bc.
Thus it follows that Z{\displaystyle \mathbb {Z} } together with the above ordering is an ordered ring.
The integers are the only nontrivial totally ordered abelian group whose positive elements are well-ordered.[32] This is equivalent to the statement that any Noetherian valuation ring is either a field or a discrete valuation ring.
In elementary school teaching, integers are often intuitively defined as the union of the (positive) natural numbers, zero, and the negations of the natural numbers. This can be formalized as follows.[33] First construct the set of natural numbers according to the Peano axioms, call this P{\displaystyle P}. Then construct a set P−{\displaystyle P^{-}} which is disjoint from P{\displaystyle P} and in one-to-one correspondence with P{\displaystyle P} via a function ψ{\displaystyle \psi }. For example, take P−{\displaystyle P^{-}} to be the ordered pairs (1,n){\displaystyle (1,n)} with the mapping ψ=n↦(1,n){\displaystyle \psi =n\mapsto (1,n)}. Finally let 0 be some object not in P{\displaystyle P} or P−{\displaystyle P^{-}}, for example the ordered pair (0,0). Then the integers are defined to be the union P∪P−∪{0}{\displaystyle P\cup P^{-}\cup \{0\}}.
The traditional arithmetic operations can then be defined on the integers in a piecewise fashion, for each of positive numbers, negative numbers, and zero. For example, negation is defined as follows:
−x={ψ(x),ifx∈Pψ−1(x),ifx∈P−0,ifx=0{\displaystyle -x={\begin{cases}\psi (x),&{\text{if }}x\in P\\\psi ^{-1}(x),&{\text{if }}x\in P^{-}\\0,&{\text{if }}x=0\end{cases}}}
The traditional style of definition leads to many different cases (each arithmetic operation needs to be defined on each combination of types of integer) and makes it tedious to prove that integers obey the various laws of arithmetic.[34]
In modern set-theoretic mathematics, a more abstract construction[35][36] allowing one to define arithmetical operations without any case distinction is often used instead.[37] The integers can thus be formally constructed as the equivalence classes of ordered pairs of natural numbers (a,b).[38]
The intuition is that (a,b) stands for the result of subtracting b from a.[38] To confirm our expectation that 1 − 2 and 4 − 5 denote the same number, we define an equivalence relation ~ on these pairs with the following rule:
(a,b)∼(c,d){\displaystyle (a,b)\sim (c,d)}
precisely when
a+d=b+c.{\displaystyle a+d=b+c.}
Addition and multiplication of integers can be defined in terms of the equivalent operations on the natural numbers;[38] by using [(a,b)] to denote the equivalence class having (a,b) as a member, one has:
[(a,b)]+[(c,d)]:=[(a+c,b+d)]{\displaystyle [(a,b)]+[(c,d)]:=[(a+c,b+d)]}
[(a,b)]⋅[(c,d)]:=[(ac+bd,ad+bc)]{\displaystyle [(a,b)]\cdot [(c,d)]:=[(ac+bd,ad+bc)]}
The negation (or additive inverse) of an integer is obtained by reversing the order of the pair:
−[(a,b)]:=[(b,a)]{\displaystyle -[(a,b)]:=[(b,a)]}
Hence subtraction can be defined as the addition of the additive inverse:
[(a,b)]−[(c,d)]:=[(a+d,b+c)]{\displaystyle [(a,b)]-[(c,d)]:=[(a+d,b+c)]}
The standard ordering on the integers is given by:
[(a,b)]<[(c,d)]{\displaystyle [(a,b)]<[(c,d)]} if and only if a+d<b+c.{\displaystyle a+d<b+c.}
It is easily verified that these definitions are independent of the choice of representatives of the equivalence classes.
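The construction translates directly into code. The following sketch (the class name is ours, chosen for illustration) stores a pair of naturals and normalizes each pair so that at least one component is zero, making the representative canonical:

class Int:
    """Integer as an equivalence class of pairs (a, b) of naturals, meaning a - b."""
    def __init__(self, a, b):
        # Normalize: subtract min(a, b), so the representative is (n, 0) or (0, n).
        m = min(a, b)
        self.a, self.b = a - m, b - m

    def __add__(self, other):           # [(a,b)] + [(c,d)] = [(a+c, b+d)]
        return Int(self.a + other.a, self.b + other.b)

    def __mul__(self, other):           # [(a,b)] * [(c,d)] = [(ac+bd, ad+bc)]
        return Int(self.a * other.a + self.b * other.b,
                   self.a * other.b + self.b * other.a)

    def __neg__(self):                  # -[(a,b)] = [(b,a)]
        return Int(self.b, self.a)

    def __eq__(self, other):            # (a,b) ~ (c,d) iff a + d = b + c
        return self.a + other.b == self.b + other.a

    def __repr__(self):
        return str(self.a - self.b)

one_minus_two = Int(1, 2)               # represents -1
four_minus_five = Int(4, 5)             # also represents -1
print(one_minus_two == four_minus_five) # True
print(Int(3, 0) * -Int(2, 0))           # -6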
Every equivalence class has a unique member that is of the form (n,0) or (0,n) (or both at once). The natural number n is identified with the class [(n,0)] (i.e., the natural numbers are embedded into the integers by the map sending n to [(n,0)]), and the class [(0,n)] is denoted −n (this covers all remaining classes, and gives the class [(0,0)] a second time since −0 = 0).
Thus, [(a,b)] is denoted by a − b if a ≥ b, and by −(b − a) if a < b.
If the natural numbers are identified with the corresponding integers (using the embedding mentioned above), this convention creates no ambiguity.
This notation recovers the familiar representation of the integers as {..., −2, −1, 0, 1, 2, ...}.
Some examples are:
In theoretical computer science, other approaches for the construction of integers are used by automated theorem provers and term rewrite engines. Integers are represented as algebraic terms built using a few basic operations (e.g., zero, succ, pred) and using natural numbers, which are assumed to be already constructed (using the Peano approach).
There exist at least ten such constructions of signed integers.[39] These constructions differ in several ways: the number of basic operations used for the construction; the number (usually between 0 and 2) and the types of arguments accepted by these operations; the presence or absence of natural numbers as arguments of some of these operations; and whether these operations are free constructors or not, i.e., whether the same integer can be represented using only one or several algebraic terms.
The technique for the construction of integers presented in the previous section corresponds to the particular case where there is a single basic operation pair (x,y){\displaystyle (x,y)} that takes as arguments two natural numbers x{\displaystyle x} and y{\displaystyle y}, and returns an integer (equal to x−y{\displaystyle x-y}). This operation is not free, since the integer 0 can be written pair(0,0), or pair(1,1), or pair(2,2), etc. This technique of construction is used by the proof assistant Isabelle; however, many other tools use alternative construction techniques, notably those based upon free constructors, which are simpler and can be implemented more efficiently in computers.
An integer is often a primitive data type in computer languages. However, integer data types can only represent a subset of all integers, since practical computers are of finite capacity. Also, in the common two's complement representation, the inherent definition of sign distinguishes between "negative" and "non-negative" rather than "negative, positive, and 0". (It is, however, certainly possible for a computer to determine whether an integer value is truly positive.) Fixed length integer approximation data types (or subsets) are denoted int or Integer in several programming languages (such as Algol68, C, Java, Delphi, etc.).
Variable-length representations of integers, such as bignums, can store any integer that fits in the computer's memory. Other integer data types are implemented with a fixed size, usually a number of bits which is a power of 2 (4, 8, 16, etc.) or a memorable number of decimal digits (e.g., 9 or 10).
The set of integers is countably infinite, meaning it is possible to pair each integer with a unique natural number. An example of such a pairing is 0 ↔ 1, 1 ↔ 2, −1 ↔ 3, 2 ↔ 4, −2 ↔ 5, 3 ↔ 6, −3 ↔ 7, and so on: a positive integer n is paired with 2n and a non-positive integer −n with 2n + 1.
More technically, the cardinality of Z{\displaystyle \mathbb {Z} } is said to equal ℵ0 (aleph-null). The pairing between elements of Z{\displaystyle \mathbb {Z} } and N{\displaystyle \mathbb {N} } is called a bijection.
This article incorporates material from Integer on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
https://en.wikipedia.org/wiki/Integer
|
In mathematics, in functional analysis, several different wavelets are known by the name Poisson wavelet. In one context, the term "Poisson wavelet" is used to denote a family of wavelets labeled by the set of positive integers, the members of which are associated with the Poisson probability distribution. These wavelets were first defined and studied by Karlene A. Kosanovich, Allan R. Moser and Michael J. Piovoso in 1995–96.[1][2] In another context, the term refers to a certain wavelet which involves a form of the Poisson integral kernel.[3] In still another context, the terminology is used to describe a family of complex wavelets indexed by positive integers which are connected with the derivatives of the Poisson integral kernel.[4]
For each positive integer n the Poisson wavelet ψn(t){\displaystyle \psi _{n}(t)} is defined by
ψn(t)=(t−nn!)tn−1e−t{\displaystyle \psi _{n}(t)=\left({\frac {t-n}{n!}}\right)t^{n-1}e^{-t}} for t≥0{\displaystyle t\geq 0}, and ψn(t)=0{\displaystyle \psi _{n}(t)=0} for t<0{\displaystyle t<0}.
To see the relation between the Poisson wavelet and the Poisson distribution, let X be a discrete random variable having the Poisson distribution with parameter (mean) t and, for each non-negative integer n, let Prob(X = n) = pn(t). Then we have
pn(t)=tnn!e−t.{\displaystyle p_{n}(t)={\frac {t^{n}}{n!}}e^{-t}.}
The Poisson wavelet ψn(t){\displaystyle \psi _{n}(t)} is now given by
ψn(t)=−ddtpn(t)=pn(t)−pn−1(t).{\displaystyle \psi _{n}(t)=-{\frac {d}{dt}}p_{n}(t)=p_{n}(t)-p_{n-1}(t).}
The Poisson wavelet family can be used to construct the family of Poisson wavelet transforms of functions defined on the time domain. Since the Poisson wavelets also satisfy the admissibility condition, functions in the time domain can be reconstructed from their Poisson wavelet transforms using the formula for inverse continuous-time wavelet transforms.
If f(t) is a function in the time domain, its n-th Poisson wavelet transform is given by the standard continuous wavelet transform with ψn{\displaystyle \psi _{n}} as the analyzing wavelet:
(Wnf)(a,b)=1|a|∫−∞∞f(t)ψn(t−ba)dt.{\displaystyle (W_{n}f)(a,b)={\frac {1}{\sqrt {|a|}}}\int _{-\infty }^{\infty }f(t)\,\psi _{n}\left({\frac {t-b}{a}}\right)dt.}
In the reverse direction, given the n-th Poisson wavelet transform (Wnf)(a,b){\displaystyle (W_{n}f)(a,b)} of a function f(t) in the time domain, the function f(t) can be reconstructed as follows:
Poisson wavelet transforms have been applied in multi-resolution analysis, system identification, and parameter estimation. They are particularly useful in studying problems in which the functions in the time domain consist of linear combinations of decaying exponentials with time delay.
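A quick numerical check of the definition above is straightforward. The sketch below evaluates ψ_n on a grid and verifies by crude quadrature that it integrates to approximately zero, as admissibility requires; it relies on the formula exactly as stated above, and inherits the same caveats.

import numpy as np
from math import factorial

def poisson_wavelet(n, t):
    # psi_n(t) = ((t - n)/n!) * t^(n-1) * e^(-t) for t >= 0, else 0 (see above)
    t = np.asarray(t, dtype=float)
    out = (t - n) / factorial(n) * t**(n - 1) * np.exp(-t)
    return np.where(t >= 0, out, 0.0)

# The wavelet should integrate to zero (admissibility); check by quadrature.
t = np.linspace(0.0, 80.0, 400001)
dt = t[1] - t[0]
for n in (1, 2, 3):
    print(n, poisson_wavelet(n, t).sum() * dt)   # all approximately 0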
The Poisson wavelet is defined by the function[3]
This can be expressed in the form
The function P(t){\displaystyle P(t)} appears as an integral kernel in the solution of a certain initial value problem of the Laplace operator.
This is the initial value problem: Given any s(x){\displaystyle s(x)} in Lp(R){\displaystyle L^{p}(\mathbb {R} )}, find a harmonic function ϕ(x,y){\displaystyle \phi (x,y)} defined in the upper half-plane satisfying the following conditions:
The problem has the following solution: There is exactly one function ϕ(x,y){\displaystyle \phi (x,y)} satisfying the two conditions, and it is given by
where Py(t)=1yP(ty)=1πyt2+y2{\displaystyle P_{y}(t)={\frac {1}{y}}P\left({\frac {t}{y}}\right)={\frac {1}{\pi }}{\frac {y}{t^{2}+y^{2}}}} and where "⋆{\displaystyle \star }" denotes the convolution operation. The function Py(t){\displaystyle P_{y}(t)} is the integral kernel for the function ϕ(x,y){\displaystyle \phi (x,y)}. The function ϕ(x,y){\displaystyle \phi (x,y)} is the harmonic continuation of s(x){\displaystyle s(x)} into the upper half plane.
The Poisson wavelet is a family of complex valued functions indexed by the set of positive integers and defined by[4][5]
The function ψn(t){\displaystyle \psi _{n}(t)} can be expressed as an n-th derivative as follows:
Writing the function(1−it)−1{\displaystyle (1-it)^{-1}}in terms of the Poisson integral kernelP(t)=11+t2{\displaystyle P(t)={\frac {1}{1+t^{2}}}}as
we have
Thus ψn(t){\displaystyle \psi _{n}(t)} can be interpreted as a function proportional to the derivatives of the Poisson integral kernel.
The Fourier transform of ψn(t){\displaystyle \psi _{n}(t)} is given by
where u(ω){\displaystyle u(\omega )} is the unit step function.
|
https://en.wikipedia.org/wiki/Poisson_wavelet
|
Truecasing, also called capitalization recovery,[1] capitalization correction,[2] or case restoration,[3] is the problem in natural language processing (NLP) of determining the proper capitalization of words where such information is unavailable. This commonly comes up due to the standard practice (in English and many other languages) of automatically capitalizing the first word of a sentence. It can also arise in badly cased or noncased text (for example, all-lowercase or all-uppercase text messages).
Truecasing is unnecessary in languages whose scripts do not have a distinction between uppercase and lowercase letters. This includes all languages not written in the Latin, Greek, Cyrillic or Armenian alphabets, such as Korean, Japanese, Chinese, Thai, Hebrew, Arabic, Hindi, and Georgian.
Truecasing aids in other NLP tasks, such as named entity recognition (NER), automatic content extraction (ACE), and machine translation.[4] Proper capitalization allows easier detection of proper nouns, which are the starting points of NER and ACE. Some translation systems use statistical machine learning techniques, which could make use of the information contained in capitalization to increase accuracy.
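A common baseline truecaser simply restores, for each word, the casing most frequently seen for that word in a well-cased corpus, ignoring sentence-initial tokens during training since their capitalization is uninformative. The sketch below is a minimal illustration of that idea; the two-sentence corpus is invented.

from collections import Counter, defaultdict

def train_truecaser(corpus):
    """Count the casings seen for each word (keyed by its lowercase form)."""
    casings = defaultdict(Counter)
    for sentence in corpus:
        # Skip position 0: sentence-initial capitalization is uninformative.
        for word in sentence.split()[1:]:
            casings[word.lower()][word] += 1
    return casings

def truecase(sentence, casings):
    return " ".join(
        casings[w.lower()].most_common(1)[0][0] if w.lower() in casings else w
        for w in sentence.split()
    )

corpus = ["we visited Paris in June", "the Paris office reopened in June"]
model = train_truecaser(corpus)
print(truecase("paris is lovely in june", model))   # "Paris is lovely in June"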
|
https://en.wikipedia.org/wiki/Truecasing
|
Metaknowledge or meta-knowledge is knowledge about knowledge.[1]
Some authors divide meta-knowledge into orders:
Other authors call zero order meta-knowledge first order knowledge, and call first order meta-knowledge second order knowledge; meta-knowledge is also known as higher order knowledge.[3]
Meta-knowledge is a fundamental conceptual instrument in such research and scientific domains as knowledge engineering, knowledge management, and others dealing with the study of, and operations on, knowledge, seen as a unified object/entity, abstracted from local conceptualizations and terminologies.
Examples of first-level individual meta-knowledge are methods of planning, modeling, tagging, learning, and every modification of domain knowledge.
Indeed, universal meta-knowledge frameworks have to be valid for the organization of meta-levels of individual meta-knowledge.
Meta-knowledge may be automatically harvested from electronic publication archives, to reveal patterns in research, relationships between researchers and institutions and to identify contradictory results.[1]
|
https://en.wikipedia.org/wiki/Meta-knowledge
|
Solomonoff's theory of inductive inference proves that, under its common sense assumptions (axioms), the best possible scientific model is the shortest algorithm that generates the empirical data under consideration. In addition to the choice of data, other assumptions are that, to avoid the post-hoc fallacy, the programming language must be chosen prior to the data[1] and that the environment being observed is generated by an unknown algorithm. This is also called a theory of induction. Due to its basis in the dynamical (state-space model) character of Algorithmic Information Theory, it encompasses statistical as well as dynamical information criteria for model selection. It was introduced by Ray Solomonoff, based on probability theory and theoretical computer science.[2][3][4] In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.
Solomonoff proved that this induction is incomputable (or more precisely, lower semi-computable), but noted that "this incomputability is of a very benign kind", and that it "in no way inhibits its use for practical prediction" (as it can be approximated from below more accurately with more computational resources).[3] It is only "incomputable" in the benign sense that no scientific consensus is able to prove that the best current scientific theory is the best of all possible theories. However, Solomonoff's theory does provide an objective criterion for deciding among the current scientific theories explaining a given set of observations.
Solomonoff's induction naturally formalizes Occam's razor[5][6][7][8][9] by assigning larger prior credences to theories that require a shorter algorithmic description.
The theory has philosophical foundations and was founded by Ray Solomonoff around 1960.[10] It is a mathematically formalized combination of Occam's razor[5][6][7][8][9] and the Principle of Multiple Explanations.[11] All computable theories which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Marcus Hutter's universal artificial intelligence builds upon this to calculate the expected value of an action.
Solomonoff's induction has been argued to be the computational formalization of pure Bayesianism.[4] To understand, recall that Bayesianism derives the posterior probability P[T|D]{\displaystyle \mathbb {P} [T|D]} of a theory T{\displaystyle T} given data D{\displaystyle D} by applying Bayes' rule, which yields
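{\displaystyle \mathbb {P} [T|D]={\frac {\mathbb {P} [D|T]\,\mathbb {P} [T]}{\sum _{A}\mathbb {P} [D|A]\,\mathbb {P} [A]}}}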
where theories A{\displaystyle A} are alternatives to theory T{\displaystyle T}. For this equation to make sense, the quantities P[D|T]{\displaystyle \mathbb {P} [D|T]} and P[D|A]{\displaystyle \mathbb {P} [D|A]} must be well-defined for all theories T{\displaystyle T} and A{\displaystyle A}. In other words, any theory must define a probability distribution over observable data D{\displaystyle D}. Solomonoff's induction essentially boils down to demanding that all such probability distributions be computable.
Interestingly, the set of computable probability distributions is a subset of the set of all programs, which is countable. Similarly, the sets of observable data considered by Solomonoff were finite. Without loss of generality, we can thus consider that any observable data is a finite bit string. As a result, Solomonoff's induction can be defined by only invoking discrete probability distributions.
Solomonoff's induction then allows one to make probabilistic predictions of future data F{\displaystyle F}, by simply obeying the laws of probability. Namely, we have P[F|D]=ET[P[F|T,D]]=∑TP[F|T,D]P[T|D]{\displaystyle \mathbb {P} [F|D]=\mathbb {E} _{T}[\mathbb {P} [F|T,D]]=\sum _{T}\mathbb {P} [F|T,D]\mathbb {P} [T|D]}. This quantity can be interpreted as the average of the predictions P[F|T,D]{\displaystyle \mathbb {P} [F|T,D]} of all theories T{\displaystyle T} given past data D{\displaystyle D}, weighted by their posterior credences P[T|D]{\displaystyle \mathbb {P} [T|D]}.
The proof of the "razor" is based on the known mathematical properties of a probability distribution over a countable set. These properties are relevant because the infinite set of all programs is a denumerable set. The sum S of the probabilities of all programs must be exactly equal to one (as per the definition of probability), so the probabilities must roughly decrease as we enumerate the infinite set of all programs; otherwise S would be strictly greater than one. To be more precise, for every ϵ{\displaystyle \epsilon } > 0, there is some length l such that the probability of all programs longer than l is at most ϵ{\displaystyle \epsilon }. This does not, however, preclude very long programs from having very high probability.
Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity. The universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.
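The following toy sketch illustrates the flavour of this length-based weighting without being Solomonoff's actual (incomputable) induction: the hypothesis class, the description lengths and all names are illustrative assumptions, with each hypothesis weighted by 2^(−description length) and refuted hypotheses dropped.

# Toy sketch of Solomonoff-style prediction (illustrative only; the real induction is incomputable).
# "Programs" are replaced by a tiny hand-made hypothesis class with assumed description lengths.
def constant_zeros(i): return 0
def constant_ones(i): return 1
def alternating(i): return i % 2
HYPOTHESES = [  # (name, description length in bits, generator of an infinite bit sequence)
    ("all zeros", 2, constant_zeros),
    ("all ones", 2, constant_ones),
    ("alternating 0101...", 4, alternating),
]
def predict_next(observed_bits):
    """Posterior-weighted probability that the next bit is 1."""
    posterior = {}
    for name, length, gen in HYPOTHESES:
        prior = 2.0 ** (-length)  # shorter description -> larger prior (Occam's razor)
        consistent = all(gen(i) == b for i, b in enumerate(observed_bits))
        posterior[name] = prior if consistent else 0.0  # deterministic hypotheses are kept or refuted
    total = sum(posterior.values())
    if total == 0.0:
        return 0.5  # every hypothesis refuted: fall back to ignorance
    n = len(observed_bits)
    gens = {name: gen for name, _, gen in HYPOTHESES}
    return sum(w * gens[name](n) for name, w in posterior.items()) / total
print(predict_next([0, 1, 0, 1, 0, 1]))  # 0.0: only the 'alternating' hypothesis survives, and it predicts 0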
The remarkable property of Solomonoff's induction is its completeness. In essence, the completeness theorem guarantees that the expected cumulative errors made by the predictions based on Solomonoff's induction are upper-bounded by the Kolmogorov complexity of the (stochastic) data generating process. The errors can be measured using the Kullback–Leibler divergence or the square of the difference between the induction's prediction and the probability assigned by the (stochastic) data generating process.
Unfortunately, Solomonoff also proved that Solomonoff's induction is uncomputable. In fact, he showed that computability and completeness are mutually exclusive: any complete theory must be uncomputable. The proof of this is derived from a game between the induction and the environment. Essentially, any computable induction can be tricked by a computable environment, by choosing the computable environment that negates the computable induction's prediction. This fact can be regarded as an instance of the no free lunch theorem.
Though Solomonoff's inductive inference is not computable, several AIXI-derived algorithms approximate it in order to make it run on a modern computer. The more computing power they are given, the closer their predictions are to the predictions of inductive inference (their mathematical limit is Solomonoff's inductive inference).[12][13][14]
Another direction of inductive inference is based on E. Mark Gold's model of learning in the limit from 1967, and more and more models of learning have since been developed.[15] The general scenario is the following: given a class S of computable functions, is there a learner (that is, a recursive functional) which, for any input of the form (f(0), f(1), ..., f(n)), outputs a hypothesis (an index e with respect to a previously agreed acceptable numbering of all computable functions; the indexed function may be required to be consistent with the given values of f)? A learner M learns a function f if almost all its hypotheses are the same index e, which generates the function f; M learns S if M learns every f in S. Basic results are that all recursively enumerable classes of functions are learnable while the class REC of all computable functions is not learnable.[citation needed] Many related models have been considered, and the learning of classes of recursively enumerable sets from positive data has been a topic studied from Gold's pioneering paper in 1967 onwards. A far-reaching extension of Gold's approach is developed by Schmidhuber's theory of generalized Kolmogorov complexities,[16] which are kinds of super-recursive algorithms.
|
https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference
|
Slopsquatting is a type of cybersquatting: the practice of registering a non-existent software package name that a large language model (LLM) may hallucinate in its output, so that someone may unknowingly copy-paste and install the software package without realizing it is fake.[1] Attempting to install a non-existent package should result in an error, but some have exploited this for their gain, in a manner similar to typosquatting.[2]
The term was coined by Python Software Foundation Developer-in-Residence Seth Larson and popularized in April 2025 by Andrew Nesbitt on Mastodon.[1]
The potential for slopsquatting was detailed in the academic paper "We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs".[1][3] Some of the paper's main findings are that 19.7% of the LLM-recommended packages did not exist, that open-source models hallucinated far more frequently (21.7% on average, compared to commercial models at 5.2%), that CodeLlama 7B and CodeLlama 34B hallucinated in over a third of outputs, and that across all models the researchers observed over 205,000 unique hallucinated package names.
In 2024, security researcher Bar Lanyado noted that LLMs hallucinated a package named "huggingface-cli".[4][5] While this name is identical to the command used for the command-line version of HuggingFace Hub, it is not the name of the package. The software is correctly installed with the command pip install -U "huggingface_hub[cli]". Lanyado tested the potential for slopsquatting by uploading an empty package under this hallucinated name. In three months, it had received over 30,000 downloads.[5] The hallucinated package name was also used in the README file of a repository for research conducted by Alibaba.[6]
Feross Aboukhadijeh, CEO of security firm Socket, warns that software engineers who practice vibe coding may be susceptible to slopsquatting, either by using the generated code without reviewing it or by letting the AI assistant tool install the non-existent package.[2] There has not yet been a reported case of slopsquatting being used in a cyber attack.
|
https://en.wikipedia.org/wiki/Slopsquatting
|
This article presents a timeline of events in the history of computer operating systems from 1951 to the current day. For a narrative explaining the overall developments, see the History of operating systems.
|
https://en.wikipedia.org/wiki/Timeline_of_operating_systems
|
In signal processing, a filter is a device or process that removes some unwanted components or features from a signal. Filtering is a class of signal processing, the defining feature of filters being the complete or partial suppression of some aspect of the signal. Most often, this means removing some frequencies or frequency bands. However, filters do not exclusively act in the frequency domain; especially in the field of image processing many other targets for filtering exist. Correlations can be removed for certain frequency components and not for others without having to act in the frequency domain. Filters are widely used in electronics and telecommunication, in radio, television, audio recording, radar, control systems, music synthesis, image processing, computer graphics, and structural dynamics.
There are many different bases of classifying filters and these overlap in many different ways; there is no simple hierarchical classification. Filters may be:
Linear continuous-time circuits are perhaps the most common meaning of filter in the signal processing world, and simply "filter" is often taken to be synonymous. These circuits are generally designed to remove certain frequencies and allow others to pass. Circuits that perform this function are generally linear in their response, or at least approximately so. Any nonlinearity would potentially result in the output signal containing frequency components not present in the input signal.
The modern design methodology for linear continuous-time filters is called network synthesis. Some important filter families designed in this way are:
The difference between these filter families is that they all use a different polynomial function to approximate the ideal filter response. This results in each having a different transfer function.
Another, older and less-used methodology is the image parameter method. Filters designed by this methodology are archaically called "wave filters". Some important filters designed by this method are:
Some terms used to describe and classify linear filters:
One important application of filters is in telecommunication.
Many telecommunication systems use frequency-division multiplexing, where the system designers divide a wide frequency band into many narrower frequency bands called "slots" or "channels", and each stream of information is allocated one of those channels.
The people who design the filters at each transmitter and each receiver try to balance passing the desired signal through as accurately as possible, keeping interference to and from other cooperating transmitters and noise sources outside the system as low as possible, at reasonable cost.
Multilevel and multiphase digital modulation systems require filters that have flat phase delay (that is, are linear phase in the passband) to preserve pulse integrity in the time domain,[1] giving less intersymbol interference than other kinds of filters.
On the other hand, analog audio systems using analog transmission can tolerate much larger ripples in phase delay, and so designers of such systems often deliberately sacrifice linear phase to get filters that are better in other ways: better stop-band rejection, lower passband amplitude ripple, lower cost, etc.
Filters can be built in a number of different technologies. The same transfer function can be realised in several different ways; that is, the mathematical properties of the filter are the same but the physical properties are quite different. Often the components in different technologies are directly analogous to each other and fulfill the same role in their respective filters. For instance, the resistors, inductors and capacitors of electronics correspond respectively to dampers, masses and springs in mechanics. Likewise, there are corresponding components in distributed-element filters.
Digital signal processing allows the inexpensive construction of a wide variety of filters. The signal is sampled and an analog-to-digital converter turns the signal into a stream of numbers. A computer program running on a CPU or a specialized DSP (or less often running on a hardware implementation of the algorithm) calculates an output number stream. This output can be converted to a signal by passing it through a digital-to-analog converter. There are problems with noise introduced by the conversions, but these can be controlled and limited for many useful filters. Due to the sampling involved, the input signal must be of limited frequency content or aliasing will occur.
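As a minimal sketch of this digital approach (the filter choice, signal and parameter values below are illustrative examples, not taken from the article), a first-order IIR low-pass filter can be applied to a sampled signal as follows:

import math
def lowpass_iir(samples, cutoff_hz, sample_rate_hz):
    # First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1]),
    # with alpha derived from the RC time constant of the analogue prototype.
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate_hz
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out
# Example: a 5 Hz tone plus a 400 Hz interfering tone, sampled at 2 kHz.
fs = 2000
signal = [math.sin(2 * math.pi * 5 * n / fs) + 0.5 * math.sin(2 * math.pi * 400 * n / fs) for n in range(fs)]
filtered = lowpass_iir(signal, cutoff_hz=20, sample_rate_hz=fs)
# The 400 Hz component is strongly attenuated; the 5 Hz component passes almost unchanged.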
In the late 1930s, engineers realized that small mechanical systems made of rigid materials such as quartz would acoustically resonate at radio frequencies, i.e. from audible frequencies (sound) up to several hundred megahertz. Some early resonators were made of steel, but quartz quickly became favored. The biggest advantage of quartz is that it is piezoelectric. This means that quartz resonators can directly convert their own mechanical motion into electrical signals. Quartz also has a very low coefficient of thermal expansion, which means that quartz resonators can produce stable frequencies over a wide temperature range. Quartz crystal filters have much higher quality factors than LCR filters. When higher stabilities are required, the crystals and their driving circuits may be mounted in a "crystal oven" to control the temperature. For very narrow band filters, sometimes several crystals are operated in series.
A large number of crystals can be collapsed into a single component, by mounting comb-shaped evaporations of metal on a quartz crystal. In this scheme, a "tapped delay line" reinforces the desired frequencies as the sound waves flow across the surface of the quartz crystal. The tapped delay line has become a general scheme of making high-Q filters in many different ways.
SAW (surface acoustic wave) filters are electromechanical devices commonly used in radio frequency applications. Electrical signals are converted to a mechanical wave in a device constructed of a piezoelectric crystal or ceramic; this wave is delayed as it propagates across the device, before being converted back to an electrical signal by further electrodes. The delayed outputs are recombined to produce a direct analog implementation of a finite impulse response filter. This hybrid filtering technique is also found in an analog sampled filter.
SAW filters are limited to frequencies up to 3 GHz. The filters were developed by Professor Ted Paige and others.[2]
BAW (bulk acoustic wave) filters are electromechanical devices. BAW filters can implement ladder or lattice filters. BAW filters typically operate at frequencies from around 2 to around 16 GHz, and may be smaller or thinner than equivalent SAW filters. Two main variants of BAW filters are making their way into devices: thin-film bulk acoustic resonators (FBAR) and solidly mounted bulk acoustic resonators (SMR).
Another method of filtering, at microwave frequencies from 800 MHz to about 5 GHz, is to use a synthetic single-crystal yttrium iron garnet sphere made of a chemical combination of yttrium and iron (YIGF, or yttrium iron garnet filter). The garnet sits on a strip of metal driven by a transistor, and a small loop antenna touches the top of the sphere. An electromagnet changes the frequency that the garnet will pass. The advantage of this method is that the garnet can be tuned over a very wide frequency by varying the strength of the magnetic field.
For even higher frequencies and greater precision, the vibrations of atoms must be used. Atomic clocks use caesium masers as ultra-high-Q filters to stabilize their primary oscillators. Another method, used at high, fixed frequencies with very weak radio signals, is to use a ruby maser tapped delay line.
The transfer function of a filter is most often defined in the domain of the complex frequencies. The passage back and forth to/from this domain is operated by the Laplace transform and its inverse (therefore, below, the term "input signal" should be understood as "the Laplace transform of" the time representation of the input signal, and so on).
The transfer function H(s){\displaystyle H(s)} of a filter is the ratio of the output signal Y(s){\displaystyle Y(s)} to the input signal X(s){\displaystyle X(s)} as a function of the complex frequency s{\displaystyle s}:
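{\displaystyle H(s)={\frac {Y(s)}{X(s)}}}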
withs=σ+jω{\displaystyle s=\sigma +j\omega }.
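For example, a first-order RC low-pass filter (a standard textbook case, given here only as an illustration) has the transfer function H(s)=1/(1+sRC){\displaystyle H(s)={\frac {1}{1+sRC}}}, which passes frequencies well below the cutoff ωc = 1/(RC) and attenuates those above it.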
For filters that are constructed of discrete components (lumped elements):
Distributed-element filters do not, in general, have rational-function transfer functions, but can approximate them.
The construction of a transfer function involves the Laplace transform, and therefore null initial conditions must be assumed, because
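{\displaystyle {\mathcal {L}}\left\{{\frac {\mathrm {d} f}{\mathrm {d} t}}\right\}=s{\mathcal {L}}\{f(t)\}-f(0)}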
And whenf(0) = 0 we can get rid of the constants and use the usual expression
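{\displaystyle {\mathcal {L}}\left\{{\frac {\mathrm {d} f}{\mathrm {d} t}}\right\}=s{\mathcal {L}}\{f(t)\}}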
An alternative to transfer functions is to give the behavior of the filter as a convolution of the time-domain input with the filter's impulse response. The convolution theorem, which holds for Laplace transforms, guarantees equivalence with transfer functions.
Certain filters may be specified by family and bandform. A filter's family is specified by the approximating polynomial used, and each leads to certain characteristics of the transfer function of the filter. Some common filter families and their particular characteristics are:
Each family of filters can be specified to a particular order. The higher the order, the more the filter will approach the "ideal" filter; but also the longer the impulse response is and the longer the latency will be. An ideal filter has full transmission in the pass band, complete attenuation in the stop band, and an abrupt transition between the two bands, but this filter has infinite order (i.e., the response cannot be expressed as a linear differential equation with a finite sum) and infinite latency (i.e., its compact support in the Fourier transform forces its time response to be everlasting).
Here is an image comparing Butterworth, Chebyshev, and elliptic filters. The filters in this illustration are all fifth-order low-pass filters. The particular implementation – analog or digital, passive or active – makes no difference; their output would be the same. As is clear from the image, elliptic filters are sharper than the others, but they show ripples on the whole bandwidth.
Any family can be used to implement a particular bandform, which determines which frequencies are transmitted and which, outside the passband, are more or less attenuated. The transfer function completely specifies the behavior of a linear filter, but not the particular technology used to implement it. In other words, there are a number of different ways of achieving a particular transfer function when designing a circuit. A particular bandform of filter can be obtained by transformation of a prototype filter of that family.
Impedance matching structures invariably take on the form of a filter, that is, a network of non-dissipative elements. For instance, in a passive electronics implementation, it would likely take the form of a ladder topology of inductors and capacitors. The design of matching networks shares much in common with filters and the design invariably will have a filtering action as an incidental consequence. Although the prime purpose of a matching network is not to filter, it is often the case that both functions are combined in the same circuit. The need for impedance matching does not arise while signals are in the digital domain.
Similar comments can be made regarding power dividers and directional couplers. When implemented in a distributed-element format, these devices can take the form of a distributed-element filter. There are four ports to be matched and widening the bandwidth requires filter-like structures to achieve this. The inverse is also true: distributed-element filters can take the form of coupled lines.[3]
|
https://en.wikipedia.org/wiki/Filter_(signal_processing)
|
METEOR (Metric for Evaluation of Translation with Explicit ORdering) is a metric for the evaluation of machine translation output. The metric is based on the harmonic mean of unigram precision and recall, with recall weighted higher than precision. It also has several features that are not found in other metrics, such as stemming and synonymy matching, along with the standard exact word matching. The metric was designed to fix some of the problems found in the more popular BLEU metric, and also produce good correlation with human judgement at the sentence or segment level. This differs from the BLEU metric in that BLEU seeks correlation at the corpus level.
Results have been presented which give correlation of up to 0.964 with human judgement at the corpus level, compared to BLEU's achievement of 0.817 on the same data set. At the sentence level, the maximum correlation with human judgement achieved was 0.403.[1]
As with BLEU, the basic unit of evaluation is the sentence. The algorithm first creates an alignment (see illustrations) between two sentences: the candidate translation string and the reference translation string. The alignment is a set of mappings between unigrams. A mapping can be thought of as a line between a unigram in one string and a unigram in another string. The constraints are as follows: every unigram in the candidate translation must map to zero or one unigram in the reference. Mappings are selected to produce an alignment as defined above. If there are two alignments with the same number of mappings, the alignment is chosen with the fewest crosses, that is, with fewer intersections of two mappings. From the two alignments shown, alignment (a) would be selected at this point. Stages are run consecutively and each stage only adds to the alignment those unigrams which have not been matched in previous stages. Once the final alignment is computed, the score is computed as follows. Unigram precision P is calculated as:
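{\displaystyle P={\frac {m}{w_{t}}}}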
Where m is the number of unigrams in the candidate translation that are also found in the reference translation, and wt{\displaystyle w_{t}} is the number of unigrams in the candidate translation. Unigram recall R is computed as:
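{\displaystyle R={\frac {m}{w_{r}}}}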
Where m is as above, and wr{\displaystyle w_{r}} is the number of unigrams in the reference translation. Precision and recall are combined using the harmonic mean in the following fashion, with recall weighted 9 times more than precision:
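{\displaystyle F_{mean}={\frac {10PR}{R+9P}}}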
The measures that have been introduced so far only account for congruity with respect to single words but not with respect to larger segments that appear in both the reference and the candidate sentence. In order to take these into account, longer n-gram matches are used to compute a penalty p for the alignment. The more mappings there are that are not adjacent in the reference and the candidate sentence, the higher the penalty will be.
In order to compute this penalty, unigrams are grouped into the fewest possible chunks, where a chunk is defined as a set of unigrams that are adjacent in the hypothesis and in the reference. The longer the adjacent mappings between the candidate and the reference, the fewer chunks there are. A translation that is identical to the reference will give just one chunk. The penalty p is computed as follows:
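{\displaystyle p=0.5\left({\frac {c}{u_{m}}}\right)^{3}}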
Where c is the number of chunks, and um{\displaystyle u_{m}} is the number of unigrams that have been mapped. The final score for a segment is calculated as M below. The penalty has the effect of reducing the Fmean{\displaystyle F_{mean}} by up to 50% if there are no bigram or longer matches.
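{\displaystyle M=F_{mean}(1-p)}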
To calculate a score over a whole corpus, or collection of segments, the aggregate values for P, R and p are taken and then combined using the same formula. The algorithm also works for comparing a candidate translation against more than one reference translation. In this case the algorithm compares the candidate against each of the references and selects the highest score.
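The following sketch puts the formulas above together in a simplified form (exact word matching only, no stemming or synonymy, and a greedy left-to-right alignment instead of the cross-minimizing alignment described above; all names are illustrative):

def simple_meteor(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    # Greedy one-to-one alignment: each candidate token maps to the first unused identical reference token.
    used = [False] * len(ref)
    mapping = []  # (candidate index, reference index) pairs
    for i, tok in enumerate(cand):
        for j, rtok in enumerate(ref):
            if not used[j] and tok == rtok:
                used[j] = True
                mapping.append((i, j))
                break
    m = len(mapping)
    if m == 0:
        return 0.0
    precision, recall = m / len(cand), m / len(ref)
    f_mean = 10 * precision * recall / (recall + 9 * precision)  # recall weighted 9 times more
    mapping.sort()
    chunks = 1  # count maximal runs of mappings that are adjacent in both strings
    for (ci, ri), (cj, rj) in zip(mapping, mapping[1:]):
        if not (cj == ci + 1 and rj == ri + 1):
            chunks += 1
    penalty = 0.5 * (chunks / m) ** 3
    return f_mean * (1 - penalty)
print(simple_meteor("the cat sat on the mat", "the cat sat on the mat"))  # identical strings give one chunk and a score close to 1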
|
https://en.wikipedia.org/wiki/METEOR
|
Radioactive decay (also known as nuclear decay, radioactivity, radioactive disintegration, or nuclear disintegration) is the process by which an unstable atomic nucleus loses energy by radiation. A material containing unstable nuclei is considered radioactive. Three of the most common types of decay are alpha, beta, and gamma decay. The weak force is the mechanism that is responsible for beta decay, while the other two are governed by the electromagnetic and nuclear forces.[1]
Radioactive decay is a random process at the level of single atoms. According to quantum theory, it is impossible to predict when a particular atom will decay, regardless of how long the atom has existed.[2][3][4] However, for a significant number of identical atoms, the overall decay rate can be expressed as a decay constant or as a half-life. The half-lives of radioactive atoms have a huge range: from nearly instantaneous to far longer than the age of the universe.
The decaying nucleus is called the parent radionuclide (or parent radioisotope), and the process produces at least one daughter nuclide. Except for gamma decay or internal conversion from a nuclear excited state, the decay is a nuclear transmutation resulting in a daughter containing a different number of protons or neutrons (or both). When the number of protons changes, an atom of a different chemical element is created.
There are 28 naturally occurring chemical elements on Earth that are radioactive, consisting of 35 radionuclides (seven elements have two different radionuclides each) that date before the time of formation of the Solar System. These 35 are known as primordial radionuclides. Well-known examples are uranium and thorium, but also included are naturally occurring long-lived radioisotopes, such as potassium-40. Each of the heavy primordial radionuclides participates in one of the four decay chains.
Henri Poincaré laid the seeds for the discovery of radioactivity through his interest in and studies of X-rays, which significantly influenced physicist Henri Becquerel.[5] Radioactivity was discovered in 1896 by Becquerel and independently by Marie Curie, while working with phosphorescent materials.[6][7][8][9][10] These materials glow in the dark after exposure to light, and Becquerel suspected that the glow produced in cathode-ray tubes by X-rays might be associated with phosphorescence. He wrapped a photographic plate in black paper and placed various phosphorescent salts on it. All results were negative until he used uranium salts. The uranium salts caused a blackening of the plate in spite of the plate being wrapped in black paper. These radiations were given the name "Becquerel Rays".
It soon became clear that the blackening of the plate had nothing to do with phosphorescence, as the blackening was also produced by non-phosphorescent salts of uranium and by metallic uranium. It became clear from these experiments that there was a form of invisible radiation that could pass through paper and was causing the plate to react as if exposed to light.
At first, it seemed as though the new radiation was similar to the then recently discovered X-rays. Further research by Becquerel, Ernest Rutherford, Paul Villard, Pierre Curie, Marie Curie, and others showed that this form of radioactivity was significantly more complicated. Rutherford was the first to realize that all such elements decay in accordance with the same mathematical exponential formula. Rutherford and his student Frederick Soddy were the first to realize that many decay processes resulted in the transmutation of one element to another. Subsequently, the radioactive displacement law of Fajans and Soddy was formulated to describe the products of alpha and beta decay.[11][12]
The early researchers also discovered that many other chemical elements, besides uranium, have radioactive isotopes. A systematic search for the total radioactivity in uranium ores also guided Pierre and Marie Curie to isolate two new elements: polonium and radium. Except for the radioactivity of radium, the chemical similarity of radium to barium made these two elements difficult to distinguish.
Marie and Pierre Curie's study of radioactivity is an important factor in science and medicine. After their research on Becquerel's rays led them to the discovery of both radium and polonium, they coined the term "radioactivity"[13] to define the emission of ionizing radiation by some heavy elements.[14] (Later the term was generalized to all elements.) Their research on the penetrating rays in uranium and the discovery of radium launched an era of using radium for the treatment of cancer. Their exploration of radium could be seen as the first peaceful use of nuclear energy and the start of modern nuclear medicine.[13]
The dangers of ionizing radiation due to radioactivity and X-rays were not immediately recognized.
The discovery of X-rays by Wilhelm Röntgen in 1895 led to widespread experimentation by scientists, physicians, and inventors. Many people began recounting stories of burns, hair loss and worse in technical journals as early as 1896. In February of that year, Professor Daniel and Dr. Dudley of Vanderbilt University performed an experiment involving X-raying Dudley's head that resulted in his hair loss. A report by Dr. H.D. Hawks, of his suffering severe hand and chest burns in an X-ray demonstration, was the first of many other reports in Electrical Review.[15]
Other experimenters, including Elihu Thomson and Nikola Tesla, also reported burns. Thomson deliberately exposed a finger to an X-ray tube over a period of time and suffered pain, swelling, and blistering.[16] Other effects, including ultraviolet rays and ozone, were sometimes blamed for the damage,[17] and many physicians still claimed that there were no effects from X-ray exposure at all.[16]
Despite this, there were some early systematic hazard investigations, and as early as 1902 William Herbert Rollins wrote almost despairingly that his warnings about the dangers involved in the careless use of X-rays were not being heeded, either by industry or by his colleagues. By this time, Rollins had proved that X-rays could kill experimental animals, could cause a pregnant guinea pig to abort, and that they could kill a foetus. He also stressed that "animals vary in susceptibility to the external action of X-light" and warned that these differences be considered when patients were treated by means of X-rays.[citation needed]
However, the biological effects of radiation due to radioactive substances were less easy to gauge. This gave the opportunity for many physicians and corporations to market radioactive substances as patent medicines. Examples were radium enema treatments, and radium-containing waters to be drunk as tonics. Marie Curie protested against this sort of treatment, warning that "radium is dangerous in untrained hands".[18] Curie later died from aplastic anaemia, likely caused by exposure to ionizing radiation. By the 1930s, after a number of cases of bone necrosis and death of radium treatment enthusiasts, radium-containing medicinal products had been largely removed from the market (radioactive quackery).
Only a year after Röntgen's discovery of X-rays, the American engineer Wolfram Fuchs (1896) gave what is probably the first protection advice, but it was not until 1925 that the first International Congress of Radiology (ICR) was held and considered establishing international protection standards. The effects of radiation on genes, including the effect on cancer risk, were recognized much later. In 1927, Hermann Joseph Muller published research showing genetic effects and, in 1946, was awarded the Nobel Prize in Physiology or Medicine for his findings.
The second ICR was held in Stockholm in 1928 and proposed the adoption of the röntgen unit, and the International X-ray and Radium Protection Committee (IXRPC) was formed. Rolf Sievert was named chairman, but a driving force was George Kaye of the British National Physical Laboratory. The committee met in 1931, 1934, and 1937.
After World War II, the increased range and quantity of radioactive substances being handled as a result of military and civil nuclear programs led to large groups of occupational workers and the public being potentially exposed to harmful levels of ionising radiation. This was considered at the first post-war ICR convened in London in 1950, when the present International Commission on Radiological Protection (ICRP) was born.[19] Since then the ICRP has developed the present international system of radiation protection, covering all aspects of radiation hazards.
In 2020, Hauptmann and another 15 international researchers from eight nations (among them institutes of biostatistics, registry research, centers of cancer epidemiology and radiation epidemiology, and also the U.S. National Cancer Institute (NCI), the International Agency for Research on Cancer (IARC) and the Radiation Effects Research Foundation of Hiroshima) studied definitively, through meta-analysis, the damage resulting from the "low doses" that have afflicted survivors of the atomic bombings of Hiroshima and Nagasaki and also those exposed in numerous accidents at nuclear plants that have occurred. These scientists reported, in JNCI Monographs: Epidemiological Studies of Low Dose Ionizing Radiation and Cancer Risk, that the new epidemiological studies directly support excess cancer risks from low-dose ionizing radiation.[20] In 2021, Italian researcher Sebastiano Venturi reported the first correlations between radio-caesium and pancreatic cancer, with a role for caesium in biology, in pancreatitis and in diabetes of pancreatic origin.[21]
The International System of Units (SI) unit of radioactive activity is the becquerel (Bq), named in honor of the scientist Henri Becquerel. One Bq is defined as one transformation (or decay or disintegration) per second.
An older unit of radioactivity is the curie, Ci, which was originally defined as "the quantity or mass of radium emanation in equilibrium with one gram of radium (element)".[22] Today, the curie is defined as 3.7×10¹⁰ disintegrations per second, so that 1 curie (Ci) = 3.7×10¹⁰ Bq.
For radiological protection purposes, although the United States Nuclear Regulatory Commission permits the use of the unit curie alongside SI units,[23] the European Union's European units of measurement directives required that its use for "public health ... purposes" be phased out by 31 December 1985.[24]
The effects of ionizing radiation are often measured in units of gray for mechanical effects or sievert for damage to tissue.
Radioactive decay results in a reduction of summed rest mass, once the released energy (the disintegration energy) has escaped in some way. Although decay energy is sometimes defined as associated with the difference between the mass of the parent nuclide products and the mass of the decay products, this is true only of rest mass measurements, where some energy has been removed from the product system. This is true because the decay energy must always carry mass with it, wherever it appears (see mass in special relativity) according to the formula E = mc². The decay energy is initially released as the energy of emitted photons plus the kinetic energy of massive emitted particles (that is, particles that have rest mass). If these particles come to thermal equilibrium with their surroundings and photons are absorbed, then the decay energy is transformed to thermal energy, which retains its mass.
Decay energy, therefore, remains associated with a certain measure of the mass of the decay system, called invariant mass, which does not change during the decay, even though the energy of decay is distributed among decay particles. The energy of photons, the kinetic energy of emitted particles, and, later, the thermal energy of the surrounding matter, all contribute to the invariant mass of the system. Thus, while the sum of the rest masses of the particles is not conserved in radioactive decay, the system mass and system invariant mass (and also the system total energy) is conserved throughout any decay process. This is a restatement of the equivalent laws of conservation of energy and conservation of mass.
Early researchers found that an electric or magnetic field could split radioactive emissions into three types of beams. The rays were given the names alpha, beta, and gamma, in increasing order of their ability to penetrate matter. Alpha decay is observed only in heavier elements of atomic number 52 (tellurium) and greater, with the exception of beryllium-8 (which decays to two alpha particles). The other two types of decay are observed in all the elements. Lead, atomic number 82, is the heaviest element to have any isotopes stable (to the limit of measurement) to radioactive decay. Radioactive decay is seen in all isotopes of all elements of atomic number 83 (bismuth) or greater. Bismuth-209, however, is only very slightly radioactive, with a half-life greater than the age of the universe; radioisotopes with extremely long half-lives are considered effectively stable for practical purposes.
In analyzing the nature of the decay products, it was obvious from the direction of the electromagnetic forces applied to the radiations by external magnetic and electric fields that alpha particles carried a positive charge, beta particles carried a negative charge, and gamma rays were neutral. From the magnitude of deflection, it was clear that alpha particles were much more massive than beta particles. Passing alpha particles through a very thin glass window and trapping them in a discharge tube allowed researchers to study the emission spectrum of the captured particles, and ultimately proved that alpha particles are helium nuclei. Other experiments showed that beta radiation, resulting from decay, and cathode rays were both high-speed electrons. Likewise, gamma radiation and X-rays were found to be high-energy electromagnetic radiation.
The relationship between the types of decays also began to be examined: for example, gamma decay was almost always found to be associated with other types of decay, and occurred at about the same time, or afterwards. Gamma decay as a separate phenomenon, with its own half-life (now termed isomeric transition), was found in natural radioactivity to be a result of the gamma decay of excited metastable nuclear isomers, which were in turn created from other types of decay. Although alpha, beta, and gamma radiations were most commonly found, other types of emission were eventually discovered. Shortly after the discovery of the positron in cosmic ray products, it was realized that the same process that operates in classical beta decay can also produce positrons (positron emission), along with neutrinos (classical beta decay produces antineutrinos).
In electron capture, some proton-rich nuclides were found to capture their own atomic electrons instead of emitting positrons, and subsequently these nuclides emit only a neutrino and a gamma ray from the excited nucleus (and often also Auger electrons and characteristic X-rays, as a result of the re-ordering of electrons to fill the place of the missing captured electron). These types of decay involve the nuclear capture of electrons or the emission of electrons or positrons, and thus act to move a nucleus toward the ratio of neutrons to protons that has the least energy for a given total number of nucleons. This consequently produces a more stable (lower energy) nucleus.
A hypothetical process of positron capture, analogous to electron capture, is theoretically possible in antimatter atoms, but has not been observed, as complex antimatter atoms beyond antihelium are not experimentally available.[25] Such a decay would require antimatter atoms at least as complex as beryllium-7, which is the lightest known isotope of normal matter to undergo decay by electron capture.[26]
Shortly after the discovery of the neutron in 1932, Enrico Fermi realized that certain rare beta-decay reactions immediately yield neutrons as an additional decay particle, so-called beta-delayed neutron emission. Neutron emission usually happens from nuclei that are in an excited state, such as the excited ¹⁷O* produced from the beta decay of ¹⁷N. The neutron emission process itself is controlled by the nuclear force and therefore is extremely fast, sometimes referred to as "nearly instantaneous". Isolated proton emission was eventually observed in some elements. It was also found that some heavy elements may undergo spontaneous fission into products that vary in composition. In a phenomenon called cluster decay, specific combinations of neutrons and protons other than alpha particles (helium nuclei) were found to be spontaneously emitted from atoms.
Other types of radioactive decay were found to emit previously seen particles but via different mechanisms. An example is internal conversion, which results in an initial electron emission, and then often further characteristic X-ray and Auger electron emissions, although the internal conversion process involves neither beta nor gamma decay. A neutrino is not emitted, and none of the electron(s) and photon(s) emitted originate in the nucleus, even though the energy to emit all of them does originate there. Internal conversion decay, like isomeric transition gamma decay and neutron emission, involves the release of energy by an excited nuclide, without the transmutation of one element into another.
Rare events that involve a combination of two beta-decay-type events happening simultaneously are known (see below). Any decay process that does not violate the conservation of energy or momentum laws (and perhaps other particle conservation laws) is permitted to happen, although not all have been detected. An interesting example discussed in a final section is bound state beta decay of rhenium-187. In this process, the beta electron-decay of the parent nuclide is not accompanied by beta electron emission, because the beta particle has been captured into the K-shell of the emitting atom. An antineutrino is emitted, as in all negative beta decays.
If energy circumstances are favorable, a given radionuclide may undergo many competing types of decay, with some atoms decaying by one route, and others decaying by another. An example is copper-64, which has 29 protons and 35 neutrons, and which decays with a half-life of 12.7004(13) hours.[27] This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay to the other particle, which has opposite isospin. This particular nuclide (though not all nuclides in this situation) is more likely to decay through beta plus decay (61.52(26)%[27]) than through electron capture (38.48(26)%[27]). The excited energy states resulting from these decays which fail to end in a ground energy state also produce later internal conversion and gamma decay in almost 0.5% of the time.
The daughter nuclide of a decay event may also be unstable (radioactive). In this case, it too will decay, producing radiation. The resulting second daughter nuclide may also be radioactive. This can lead to a sequence of several decay events called a decay chain (see this article for specific details of important natural decay chains). Eventually, a stable nuclide is produced. Any decay daughters that are the result of an alpha decay will also result in helium atoms being created.
Some radionuclides may have several different paths of decay. For example, 35.94(6)%[27] of bismuth-212 decays, through alpha-emission, to thallium-208, while 64.06(6)%[27] of bismuth-212 decays, through beta-emission, to polonium-212. Both thallium-208 and polonium-212 are radioactive daughter products of bismuth-212, and both decay directly to stable lead-208.
According to the Big Bang theory, stable isotopes of the lightest three elements (H, He, and traces of Li) were produced very shortly after the emergence of the universe, in a process called Big Bang nucleosynthesis. These lightest stable nuclides (including deuterium) survive to today, but any radioactive isotopes of the light elements produced in the Big Bang (such as tritium) have long since decayed. Isotopes of elements heavier than boron were not produced at all in the Big Bang, and these first five elements do not have any long-lived radioisotopes. Thus, all radioactive nuclei are, therefore, relatively young with respect to the birth of the universe, having formed later in various other types of nucleosynthesis in stars (in particular, supernovae), and also during ongoing interactions between stable isotopes and energetic particles. For example, carbon-14, a radioactive nuclide with a half-life of only 5700(30) years,[27] is constantly produced in Earth's upper atmosphere due to interactions between cosmic rays and nitrogen.
Nuclides that are produced by radioactive decay are called radiogenic nuclides, whether they themselves are stable or not. There exist stable radiogenic nuclides that were formed from short-lived extinct radionuclides in the early Solar System.[28][29] The extra presence of these stable radiogenic nuclides (such as xenon-129 from extinct iodine-129) against the background of primordial stable nuclides can be inferred by various means.
Radioactive decay has been put to use in the technique of radioisotopic labeling, which is used to track the passage of a chemical substance through a complex system (such as a living organism). A sample of the substance is synthesized with a high concentration of unstable atoms. The presence of the substance in one or another part of the system is determined by detecting the locations of decay events.
On the premise that radioactive decay is truly random (rather than merely chaotic), it has been used in hardware random-number generators. Because the process is not thought to vary significantly in mechanism over time, it is also a valuable tool in estimating the absolute ages of certain materials. For geological materials, the radioisotopes and some of their decay products become trapped when a rock solidifies, and can then later be used (subject to many well-known qualifications) to estimate the date of the solidification. These qualifications include checking the results of several simultaneous processes and their products against each other, within the same sample. In a similar fashion, and also subject to qualification, the date of formation of organic matter within a certain period related to the isotope's half-life may be estimated from the rate of formation of carbon-14 in various eras, because the carbon-14 becomes trapped when the organic matter grows and incorporates the new carbon-14 from the air. Thereafter, the amount of carbon-14 in organic matter decreases according to decay processes that may also be independently cross-checked by other means (such as checking the carbon-14 in individual tree rings, for example).
The Szilard–Chalmers effect is the breaking of a chemical bond as a result of kinetic energy imparted by radioactive decay. It operates by the absorption of neutrons by an atom and subsequent emission of gamma rays, often with significant amounts of kinetic energy. This kinetic energy, by Newton's third law, pushes back on the decaying atom, which causes it to move with enough speed to break a chemical bond.[30] This effect can be used to separate isotopes by chemical means.
The Szilard–Chalmers effect was discovered in 1934 by Leó Szilárd and Thomas A. Chalmers.[31] They observed that after bombardment by neutrons, the breaking of a bond in liquid ethyl iodide allowed radioactive iodine to be removed.[32]
Radioactive primordial nuclides found in the Earth are residues from ancient supernova explosions that occurred before the formation of the Solar System. They are the fraction of radionuclides that survived from that time, through the formation of the primordial solar nebula, through planet accretion, and up to the present time. The naturally occurring short-lived radiogenic radionuclides found in today's rocks are the daughters of those radioactive primordial nuclides. Another minor source of naturally occurring radioactive nuclides are cosmogenic nuclides, which are formed by cosmic ray bombardment of material in the Earth's atmosphere or crust. The decay of the radionuclides in rocks of the Earth's mantle and crust contributes significantly to Earth's internal heat budget.
While the underlying process of radioactive decay is subatomic, historically and in most practical cases it is encountered in bulk materials with very large numbers of atoms. This section discusses models that connect events at the atomic level to observations in aggregate.
The decay rate, or activity, of a radioactive substance is characterized by the following time-independent parameters:
Although these are constants, they are associated with the statistical behavior of populations of atoms. In consequence, predictions using these constants are less accurate for minuscule samples of atoms.
In principle a half-life, a third-life, or even a (1/√2)-life could be used in exactly the same way as half-life; but the mean life and half-life t1/2 have been adopted as standard times associated with exponential decay.
Those parameters can be related to the following time-dependent parameters:
These are related as follows:
where N0 is the initial amount of active substance, i.e. substance that has the same percentage of unstable particles as when the substance was formed.
The mathematics of radioactive decay depend on a key assumption that a nucleus of a radionuclide has no "memory" or way of translating its history into its present behavior. A nucleus does not "age" with the passage of time. Thus, the probability of its breaking down does not increase with time but stays constant, no matter how long the nucleus has existed. This constant probability may differ greatly between one type of nucleus and another, leading to the many different observed decay rates. However, whatever the probability is, it does not change over time. This is in marked contrast to complex objects that do show aging, such as automobiles and humans. These aging systems do have a chance of breakdown per unit of time that increases from the moment they begin their existence.
Aggregate processes, like the radioactive decay of a lump of atoms, for which the single-event probability of realization is very small but in which the number of time-slices is so large that there is nevertheless a reasonable rate of events, are modelled by the Poisson distribution, which is discrete. Radioactive decay and nuclear particle reactions are two examples of such aggregate processes.[33] The mathematics of Poisson processes reduce to the law of exponential decay, which describes the statistical behaviour of a large number of nuclei, rather than one individual nucleus. In the following formalism, the number of nuclei or the nuclei population N is of course a discrete variable (a natural number), but for any physical sample N is so large that it can be treated as a continuous variable. Differential calculus is used to model the behaviour of nuclear decay.
Consider the case of a nuclide A that decays into another B by some process A → B (emission of other particles, like electron neutrinos νe and electrons e− as in beta decay, is irrelevant in what follows). The decay of an unstable nucleus is entirely random in time, so it is impossible to predict when a particular atom will decay. However, it is equally likely to decay at any instant in time. Therefore, given a sample of a particular radioisotope, the number of decay events −dN expected to occur in a small interval of time dt is proportional to the number of atoms present N, that is[34]
Particular radionuclides decay at different rates, so each has its own decay constant λ. The expected decay −dN/N is proportional to an increment of time, dt:
−dNN=λdt{\displaystyle -{\frac {\mathrm {d} N}{N}}=\lambda \mathrm {d} t}
The negative sign indicates that N decreases as time increases, as the decay events follow one after another. The solution to this first-order differential equation is the function:
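{\displaystyle N(t)=N_{0}\,e^{-\lambda t}}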
where N0 is the value of N at time t = 0, with the decay constant expressed as λ.[34]
We have for all time t:
where Ntotal is the constant number of particles throughout the decay process, which is equal to the initial number of A nuclides since this is the initial substance.
If the number of non-decayed A nuclei is:
then the number of nuclei of B (i.e. the number of decayed A nuclei) is
The number of decays observed over a given interval obeys Poisson statistics. If the average number of decays is ⟨N⟩, the probability of a given number of decays N is[34]
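{\displaystyle P(N)={\frac {\langle N\rangle ^{N}e^{-\langle N\rangle }}{N!}}}
As an illustration of how the exponential law emerges from many independent random decays, the short sketch below (example values only, not taken from the article) gives each surviving nucleus a constant probability λ·dt of decaying in each small time step and compares the simulated population with N0·e^(−λt):

import math, random
def simulate_decay(n0, lam, t_max, dt):
    # Monte Carlo decay: every surviving nucleus decays with probability lam*dt per step.
    n, t, history = n0, 0.0, []
    while t <= t_max:
        history.append((t, n))
        n -= sum(1 for _ in range(n) if random.random() < lam * dt)
        t += dt
    return history
lam = 0.1  # decay constant, per time unit (example value)
for t, n in simulate_decay(n0=10000, lam=lam, t_max=30, dt=1.0)[::10]:
    print(f"t={t:5.1f}  simulated N={n:6d}  analytic N0*exp(-lam*t)={10000 * math.exp(-lam * t):8.1f}")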
Now consider the case of a chain of two decays: one nuclide A decaying into another B by one process, then B decaying into another C by a second process, i.e. A → B → C. The previous equation cannot be applied to the decay chain, but can be generalized as follows. Since A decays into B, then B decays into C, the activity of A adds to the total number of B nuclides in the present sample, before those B nuclides decay and reduce the number of nuclides leading to the later sample. In other words, the number of second-generation nuclei B increases as a result of the decay of the first-generation nuclei A, and decreases as a result of its own decay into the third-generation nuclei C.[35] The sum of these two terms gives the law for a decay chain for two nuclides:
The rate of change of NB, that is dNB/dt, is related to the changes in the amounts of A and B: NB can increase as B is produced from A and decrease as B produces C.
Re-writing using the previous results:
dNBdt=−λBNB+λANA0e−λAt{\displaystyle {\frac {\mathrm {d} N_{B}}{\mathrm {d} t}}=-\lambda _{B}N_{B}+\lambda _{A}N_{A0}e^{-\lambda _{A}t}}
The subscripts simply refer to the respective nuclides, i.e. NA is the number of nuclides of type A, NA0 is the initial number of nuclides of type A, λA is the decay constant for A, and similarly for nuclide B. Solving this equation for NB gives:
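{\displaystyle N_{B}={\frac {\lambda _{A}}{\lambda _{B}-\lambda _{A}}}N_{A0}\left(e^{-\lambda _{A}t}-e^{-\lambda _{B}t}\right)}
(with the usual initial condition that no B nuclei are present at t = 0).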
In the case where B is a stable nuclide (λB = 0), this equation reduces to the previous solution:
as shown above for one decay. The solution can be found by the integrating factor method, where the integrating factor is e^(λBt). This case is perhaps the most useful since it can derive both the one-decay equation (above) and the equation for multi-decay chains (below) more directly.
For the general case of any number of consecutive decays in a decay chain, i.e. A1 → A2 ··· → Ai ··· → AD, where D is the number of decays and i is a dummy index (i = 1, 2, 3, ..., D), each nuclide population can be found in terms of the previous population. In this case N2 = 0, N3 = 0, ..., ND = 0 at t = 0. Using the above result in a recursive form:
The general solution to the recursive problem is given by Bateman's equations:[36]
ND=N1(0)λD∑i=1Dλicie−λitci=∏j=1,i≠jDλjλj−λi{\displaystyle {\begin{aligned}N_{D}&={\frac {N_{1}(0)}{\lambda _{D}}}\sum _{i=1}^{D}\lambda _{i}c_{i}e^{-\lambda _{i}t}\\[3pt]c_{i}&=\prod _{j=1,i\neq j}^{D}{\frac {\lambda _{j}}{\lambda _{j}-\lambda _{i}}}\end{aligned}}}
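As a numerical check of Bateman's formula, the following sketch (illustrative names and arbitrary example decay constants, which must be pairwise distinct to avoid division by zero) evaluates the population of the last member of a chain, assuming only the first nuclide is present at t = 0:

import math
def bateman_last(n1_0, lambdas, t):
    # N_D(t) = (N_1(0) / lambda_D) * sum_i lambda_i * c_i * exp(-lambda_i * t),
    # with c_i = prod_{j != i} lambda_j / (lambda_j - lambda_i), as in the formula above.
    d = len(lambdas)
    total = 0.0
    for i in range(d):
        c_i = 1.0
        for j in range(d):
            if j != i:
                c_i *= lambdas[j] / (lambdas[j] - lambdas[i])
        total += lambdas[i] * c_i * math.exp(-lambdas[i] * t)
    return n1_0 / lambdas[d - 1] * total
# Example: a three-member chain A1 -> A2 -> A3 with decay constants 0.5, 0.1 and 0.01 per time unit.
print(bateman_last(n1_0=1.0e6, lambdas=[0.5, 0.1, 0.01], t=10.0))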
In all of the above examples, the initial nuclide decays into just one product.[37] Consider the case of one initial nuclide that can decay into either of two products, that is A → B and A → C in parallel. For example, in a sample of potassium-40, 89.3% of the nuclei decay to calcium-40 and 10.7% to argon-40. We have for all time t:
which is constant, since the total number of nuclides remains constant. Differentiating with respect to time:
defining the total decay constant λ in terms of the sum of the partial decay constants λB and λC:
Solving this equation forNA:
where NA0 is the initial number of nuclide A. When measuring the production of one nuclide, one can only observe the total decay constant λ. The decay constants λB and λC determine the probability for the decay to result in products B or C as follows:
because the fraction λB/λ of nuclei decay into B while the fraction λC/λ of nuclei decay into C.
The above equations can also be written using quantities related to the number of nuclide particles N in a sample;
where NA = 6.02214076×10²³ mol⁻¹[38] is the Avogadro constant, M is the molar mass of the substance in kg/mol, and the amount of the substance n is in moles.
For the one-decay solution A → B:
the equation indicates that the decay constant λ has units of t⁻¹, and can thus also be represented as 1/τ, where τ is a characteristic time of the process called the time constant.
In a radioactive decay process, this time constant is also the mean lifetime for decaying atoms. Each atom "lives" for a finite amount of time before it decays, and it may be shown that this mean lifetime is the arithmetic mean of all the atoms' lifetimes, and that it is τ, which again is related to the decay constant as follows:
This form is also true for two-decay processes simultaneously A → B + C, inserting the equivalent values of decay constants (as given above)
into the decay solution leads to:

$$\frac{1}{\tau} = \lambda = \lambda_B + \lambda_C$$
A more commonly used parameter is the half-life T_{1/2}. Given a sample of a particular radionuclide, the half-life is the time taken for half the radionuclide's atoms to decay. For the case of one-decay nuclear reactions:
the half-life is related to the decay constant as follows: set N = N_0/2 and t = T_{1/2} to obtain

$$T_{1/2} = \frac{\ln 2}{\lambda} = \tau \ln 2$$
This relationship between the half-life and the decay constant shows that highly radioactive substances are quickly spent, while those that radiate weakly endure longer. Half-lives of known radionuclides vary by almost 54 orders of magnitude, from more than 2.25(9)×10^24 years (6.9×10^31 s) for the very nearly stable nuclide 128Te, to 8.6(6)×10^−23 seconds for the highly unstable nuclide 5H.[27]
The factor of ln(2) in the above relations results from the fact that the concept of "half-life" is merely a way of selecting a different base, other than the natural base e, for the lifetime expression. The time constant τ is the 1/e-life, the time until only 1/e remains, about 36.8%, rather than the 50% in the half-life of a radionuclide. Thus, τ is longer than t_{1/2}. The following equation can be shown to be valid:

$$N(t) = N_0 e^{-t/\tau} = N_0\, 2^{-t/T_{1/2}}$$
Since radioactive decay is exponential with a constant probability, each process could as easily be described with a different constant time period that (for example) gave its "(1/3)-life" (how long until only 1/3 is left) or "(1/10)-life" (how long until only 10% is left), and so on. Thus, the choice of τ and t_{1/2} as marker times is only a matter of convenience and convention. They reflect a fundamental principle only in so much as they show that the same proportion of a given radioactive substance will decay during any time period that one chooses.
Mathematically, the nth life for the above situation would be found in the same way as above: by setting N = N_0/n and t = T_{1/n}, and substituting into the decay solution to obtain

$$T_{1/n} = \frac{\ln n}{\lambda} = \tau \ln n$$
Carbon-14 has a half-life of 5700(30) years[27] and a decay rate of 14 disintegrations per minute (dpm) per gram of natural carbon.
If an artifact is found to have radioactivity of 4 dpm per gram of its present C, we can find the approximate age of the object using the above equation:

$$t = \frac{1}{\lambda}\ln\!\left(\frac{N_0}{N}\right) = \frac{T_{1/2}}{\ln 2}\ln\!\left(\frac{14}{4}\right) \approx 10{,}300\ \text{years},$$

where the ratio N_0/N is taken as the ratio of the original activity to the present activity (14 dpm to 4 dpm per gram).
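The arithmetic of this example, as a short sketch:

```python
# Carbon-14 dating arithmetic from the example above.
import math

T_half = 5700.0                  # years, half-life of C-14
lam = math.log(2) / T_half       # decay constant per year
t = math.log(14.0 / 4.0) / lam   # solve N = N0 * exp(-lam * t) for t
print(round(t))                  # roughly 10300 years
```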
The radioactive decay modes of electron capture and internal conversion are known to be slightly sensitive to chemical and environmental effects that change the electronic structure of the atom, which in turn affects the presence of 1s and 2s electrons that participate in the decay process. A small number of nuclides are affected.[39] For example, chemical bonds can affect the rate of electron capture to a small degree (in general, less than 1%) depending on the proximity of electrons to the nucleus. In 7Be, a difference of 0.9% has been observed between half-lives in metallic and insulating environments.[40] This relatively large effect is because beryllium is a small atom whose valence electrons are in 2s atomic orbitals, which are subject to electron capture in 7Be because (like all s atomic orbitals in all atoms) they naturally penetrate into the nucleus.
In 1992, Jung et al. of the Darmstadt Heavy-Ion Research group observed an accelerated β− decay of 163Dy66+. Although neutral 163Dy is a stable isotope, the fully ionized 163Dy66+ undergoes β− decay into the K and L shells to 163Ho66+ with a half-life of 47 days.[41]
Rhenium-187 is another spectacular example. 187Re normally undergoes beta decay to 187Os with a half-life of 41.6×10^9 years,[42] but studies using fully ionised 187Re atoms (bare nuclei) have found that this can decrease to only 32.9 years.[43] This is attributed to "bound-state β− decay" of the fully ionised atom – the electron is emitted into the "K-shell" (1s atomic orbital), which cannot occur for neutral atoms in which all low-lying bound states are occupied.[44]
A number of experiments have found that decay rates of other modes of artificial and naturally occurring radioisotopes are, to a high degree of precision, unaffected by external conditions such as temperature, pressure, the chemical environment, and electric, magnetic, or gravitational fields.[45] Comparison of laboratory experiments over the last century, studies of the Oklo natural nuclear reactor (which exemplified the effects of thermal neutrons on nuclear decay), and astrophysical observations of the luminosity decays of distant supernovae (which occurred far away, so their light has taken a great deal of time to reach us), for example, strongly indicate that unperturbed decay rates have been constant (at least to within the limitations of small experimental errors) as a function of time as well.[citation needed]
Recent results suggest the possibility that decay rates might have a weak dependence on environmental factors. It has been suggested that measurements of decay rates of silicon-32, manganese-54, and radium-226 exhibit small seasonal variations (of the order of 0.1%).[46][47][48] However, such measurements are highly susceptible to systematic errors, and a subsequent paper[49] has found no evidence for such correlations in seven other isotopes (22Na, 44Ti, 108Ag, 121Sn, 133Ba, 241Am, 238Pu), and sets upper limits on the size of any such effects. The decay of radon-222 was once reported to exhibit large 4% peak-to-peak seasonal variations,[50] which were proposed to be related to either solar flare activity or the distance from the Sun, but detailed analysis of the experiment's design flaws, along with comparisons to other, much more stringent and systematically controlled experiments, refute this claim.[51]
An unexpected series of experimental results for the rate of decay of heavy highly charged radioactive ions circulating in a storage ring has provoked theoretical activity in an effort to find a convincing explanation. The rates of weak decay of two radioactive species with half-lives of about 40 s and 200 s were found to have a significant oscillatory modulation, with a period of about 7 s.[52] The observed phenomenon is known as the GSI anomaly, as the storage ring is a facility at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. As the decay process produces an electron neutrino, some of the proposed explanations for the observed rate oscillation invoke neutrino properties. Initial ideas related to flavour oscillation met with skepticism.[53] A more recent proposal involves mass differences between neutrino mass eigenstates.[54]
A nuclide is considered to "exist" if it has a half-life greater than 2×10^−14 s. This is an arbitrary boundary; systems with shorter half-lives are considered resonances, such as a system undergoing a nuclear reaction. This time scale is characteristic of the strong interaction, which creates the nuclear force. Only nuclides are considered to decay and produce radioactivity.[55]: 568
Nuclides can be stable or unstable. Unstable nuclides decay, possibly in several steps, until they become stable. There are 251 known stable nuclides. The number of unstable nuclides discovered has grown, with about 3000 known in 2006.[55]
The most common, and consequently historically the most important, forms of natural radioactive decay involve the emission of alpha particles, beta particles, and gamma rays. Each of these corresponds to a fundamental interaction predominantly responsible for the radioactivity:[56]: 142
In alpha decay, a particle containing two protons and two neutrons, equivalent to a helium nucleus, breaks out of the parent nucleus. The process represents a competition between the electromagnetic repulsion between the protons in the nucleus and the attractive nuclear force, a residual of the strong interaction. The alpha particle is an especially strongly bound nucleus, helping it win the competition more often.[57]: 872 However, some nuclei break up or fission into larger fragments, and artificial nuclei decay with the emission of single protons, double protons, and other combinations.[55]
Beta decay transforms a neutron into a proton or vice versa. When a neutron inside a parent nuclide decays to a proton, an electron, an antineutrino, and a nuclide with a higher atomic number result. When a proton in a parent nuclide transforms to a neutron, a positron, a neutrino, and a nuclide with a lower atomic number result. These changes are a direct manifestation of the weak interaction.[57]: 874
Gamma decay resembles other kinds of electromagnetic emission: it corresponds to transitions between an excited quantum state and a lower-energy state. Any of the particle decay mechanisms often leaves the daughter in an excited state, which then decays via gamma emission.[57]: 876
Other forms of decay include neutron emission, electron capture, internal conversion, and cluster decay.[58]
|
https://en.wikipedia.org/wiki/Radioactive_decay
|
The Center for Internet Security (CIS) is a US 501(c)(3) nonprofit organization,[2] formed in October 2000.[1] Its mission statement professes that the function of CIS is to "help people, businesses, and governments protect themselves against pervasive cyber threats."
The organization is headquartered in East Greenbush, New York, US, with members including large corporations, government agencies, and academic institutions.[1]
CIS has several program areas, including MS-ISAC, CIS Controls, CIS Benchmarks, CIS Communities, and CIS CyberMarket. Through these program areas, CIS works with a wide range of entities, including those in academia, the government, and both the private sector and the general public, to increase their online security by providing them with products and services that improve security efficiency and effectiveness.[5][6]
The Multi-State Information Sharing and Analysis Center (MS-ISAC) is a "round-the-clock cyber threat monitoring and mitigation center for state and local governments" operated by CIS under a cooperative agreement with the U.S. Department of Homeland Security[7] (DHS), Cybersecurity and Infrastructure Security Agency[8] (CISA).[9] The MS-ISAC was established in late 2002, and officially launched in January 2003, by William F. Pelgrin, then Chief Security Officer of the state of New York.[10] Beginning from a small group of participating states in the Northeast, MS-ISAC came to include all 50 U.S. states and the District of Columbia, as well as U.S. State, Local, Tribal, and Territorial (SLTT) governments. In order to facilitate its expanding scope, in late 2010, MS-ISAC "transitioned into a not-for-profit status under the auspices of the Center for Internet Security."[10][11] In March 2025, CISA ended funding for MS-ISAC.[12]
MS-ISAC "helps government agencies combat cyberthreats and works closely with federal law enforcement",[13][14]and is designated by DHS as a keycyber securityresource for the nation's SLTT governments.
The main objectives of MS-ISAC are described as follows:[15]
The MS-ISAC offers a variety of federally funded, no-cost cybersecurity products and services to its members through the DHS CISA cooperative agreement. It also offers fee-based products and services for SLTT members who want protection beyond what is offered under the cooperative agreement. In 2021, the MS-ISAC announced[16] it was undergoing a digital transformation, making major infrastructure upgrades including the implementation of a new cloud-based threat intelligence platform, security information and event management (SIEM) capability, a security orchestration, automation, and response (SOAR) tool, and data lake capabilities for threat hunting.
Some of the offerings for SLTTs include:
The Elections Infrastructure Information Sharing and Analysis Center (EI-ISAC), as established by the Election Infrastructure Subsector Government Coordinating Council (GCC), is a critical resource for cyber threat prevention, protection, response and recovery for the nation's state, local, territorial, and tribal (SLTT) election offices. The EI-ISAC is operated by the Center for Internet Security, Inc. under the same cooperative agreement with DHS CISA as the MS-ISAC. By nature of election offices being SLTT organizations, each EI-ISAC member is automatically an MS-ISAC member and can take full advantage of the products and services provided to both ISACs.
The mission of the EI-ISAC is to improve the overall cybersecurity posture of SLTT election offices; collaboration and information sharing among members, the U.S. Department of Homeland Security (DHS) and other federal partners, and private-sector partners are the keys to success. The EI-ISAC provides a central resource for gathering information on cyber threats to election infrastructure and for two-way sharing of information between and among the public and private sectors in order to identify, protect against, detect, respond to, and recover from attacks on public and private election infrastructure. The EI-ISAC comprises representatives from SLTT election offices and contractors supporting SLTT election infrastructure.[21]
Formerly known as the SANS Critical Security Controls (SANS Top 20) and the CIS Critical Security Controls, the CIS Controls, as they are called today, are a set of 18 prioritized safeguards to mitigate the most prevalent cyber-attacks against today's modern systems and networks. The CIS Controls are grouped into Implementation Groups[22] (IGs), which allow organizations to use a risk assessment to determine the appropriate level of IG (one through three) that should be implemented for their organization. The CIS Controls can be downloaded from CIS, as can various mappings to other frameworks such as the National Institute of Standards and Technology (NIST) Cybersecurity Framework[23] (CSF), NIST Special Publication (SP) 800-53,[24] and many others. CIS also offers a free hosted software product called the CIS Controls Assessment Tool[25] (CIS-CAT) that allows organizations to track and prioritize the implementation of the CIS Controls.
The CIS Controls advocate "a defense-in-depth model to help prevent and detect malware".[26] A May 2017 study showed that "on average, organizations fail 55% of compliance checks established by the Center for Internet Security", with more than half of these violations being high-severity issues.[27] In March 2015, CIS launched CIS Hardened Images for Amazon Web Services, in response to "a growing concern surrounding the data safety of information housed on virtual servers in the cloud".[28] The resources were made available as Amazon Machine Images for six "CIS benchmarks-hardened systems", including Microsoft Windows, Linux and Ubuntu, with additional images and cloud providers added later.[28] CIS released Companion Guides to the CIS Controls, recommendations for actions to counter cybersecurity attacks, with new guides having been released in October and December 2015.[29] In April 2018, CIS launched an information security risk assessment method to implement the CIS Controls, called CIS RAM, which is based upon the risk assessment standard by the DoCRA (Duty of Care Risk Analysis) Council.[30] CIS RAM v2.0[31] was released in October 2021.[32] CIS RAM v2.1 was released in 2022.
CIS Benchmarks are a collaboration of the Consensus Community and CIS SecureSuite members (a class of CIS members with access to additional sets of tools and resources).[33] The Consensus Community is made up of experts in the field of IT security who use their knowledge and experience to help the global Internet community. CIS SecureSuite members are made up of several different types of companies ranging in size, including government agencies, colleges and universities, nonprofits, IT auditors and consultants, security software vendors and other organizations. CIS Benchmarks and other tools that CIS provides at no cost allow IT workers to create reports that compare their system security to a universal consensus standard. This fosters a new structure for internet security that everyone is accountable for and that is shared by top executives, technology professionals and other internet users throughout the globe. Further, CIS provides internet security tools with a scoring feature that rates the configuration security of the system at hand. For example, CIS provides SecureSuite members with access to CIS-CAT Pro, a "cross-platform Java app" which scans target systems and "produces a report comparing your settings to the published benchmarks".[5] This is intended to encourage and motivate users to improve the scores given by the software, which bolsters the security of their internet and systems. The universal consensus standard that CIS employs draws upon the accumulated knowledge of skillful technology professionals. Since internet security professionals volunteer to contribute to this consensus, this reduces costs for CIS and makes it cost-effective.[34]
CIS CyberMarket is a "collaborative purchasing program that serves U.S. State, Local, Tribal, and Territorial (SLTT) government organizations, nonprofit entities, and public health and education institutions to improve cybersecurity through cost-effective group procurement".[35]The intent of the CIS CyberMarket is to combine the purchasing power of governmental and nonprofit sectors to help participants improve their cybersecurity condition at a lower cost than they would have been able to attain on their own. The program assists with the "time intensive, costly, complex, and daunting" task of maintaining cybersecurity by working with the public and private sectors to bring their partners cost-effective tools and services. The combined purchasing opportunities are reviewed by domain experts.[15]
There are three main objectives of the CIS CyberMarket:
CIS CyberMarket, like the MS-ISAC, serves government entities and non-profits in achieving greater cyber security. On its "resources" page, multiple newsletters and documents are available free of charge, including the "Cybersecurity Handbook for Cities and Counties".[36]
CIS Communities are "a volunteer, global community of IT professionals" who "continuously refine and verify" CIS best practices and cybersecurity tools.[37] To develop and structure its benchmarks, CIS uses a strategy in which members of the organization first form into teams. These teams then each collect suggestions, advice, official work and recommendations from a few participating organizations. Then, the teams analyze their data and information to determine the most vital configuration settings, those that would improve internet system security the most in as many work settings as possible. Each member of a team constantly works with their teammates and critically analyzes and critiques a rough draft until a consensus forms among the team. Before a benchmark is released to the general public, it is made available for download and testing among the community. After reviewing all of the feedback from testing and making any necessary adjustments or changes, the final benchmark and other relevant security tools are made available to the public for download through the CIS website. This process is extensive and carefully executed; thousands of security professionals across the globe participate in it. According to ISACA, "during the development of the CIS Benchmark for Sun Microsystems Solaris, more than 2,500 users downloaded the benchmark and monitoring tools."[38]
The organizations that participated in the founding of CIS in October 2000 include ISACA, the American Institute of Certified Public Accountants (AICPA), the Institute of Internal Auditors (IIA), the International Information Systems Security Certification Consortium (ISC2) and the SANS Institute (System Administration, Networking and Security). CIS has since grown to have hundreds of members with varying degrees of membership, and cooperates and works with a variety of organizations and members at both the national and international levels. Some of these organizations include those in both the public and private sectors, government, ISACs and law enforcement.[1]
|
https://en.wikipedia.org/wiki/Center_for_Internet_Security
|
A non-monotonic logic is a formal logic whose entailment relation is not monotonic. In other words, non-monotonic logics are devised to capture and represent defeasible inferences, i.e., a kind of inference in which reasoners draw tentative conclusions, enabling reasoners to retract their conclusion(s) based on further evidence.[1] Most studied formal logics have a monotonic entailment relation, meaning that adding a formula to the hypotheses never produces a pruning of its set of conclusions. Intuitively, monotonicity indicates that learning a new piece of knowledge cannot reduce the set of what is known. Monotonic logics cannot handle various reasoning tasks such as reasoning by default (conclusions may be derived only because of lack of evidence of the contrary), abductive reasoning (conclusions are only deduced as most likely explanations), some important approaches to reasoning about knowledge (the ignorance of a conclusion must be retracted when the conclusion becomes known), and similarly, belief revision (new knowledge may contradict old beliefs).
Abductive reasoning is the process of deriving a sufficient explanation of the known facts. An abductive logic should not be monotonic because the likely explanations are not necessarily correct. For example, the likely explanation for seeing wet grass is that it rained; however, this explanation has to be retracted when learning that the real cause of the grass being wet was a sprinkler. Since the old explanation (it rained) is retracted because of the addition of a piece of knowledge (a sprinkler was active), any logic that models explanations is non-monotonic.
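A toy sketch of this behaviour (all names here are illustrative, not a standard reasoner):

```python
# Non-monotonicity in miniature: adding a fact removes a conclusion.
def conclusions(facts):
    concl = set(facts)
    # Defeasible rule: wet grass is explained by rain unless a
    # sprinkler is known to have been active.
    if "wet_grass" in concl and "sprinkler_on" not in concl:
        concl.add("it_rained")
    return concl

print(conclusions({"wet_grass"}))                   # infers 'it_rained'
print(conclusions({"wet_grass", "sprinkler_on"}))   # inference retracted
```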
If a logic includes formulae that mean that something is not known, this logic should not be monotonic. Indeed, learning something that was previously not known leads to the removal of the formula specifying that this piece of knowledge is not known. This second change (a removal caused by an addition) violates the condition of monotonicity. A logic for reasoning about knowledge is the autoepistemic logic.
Belief revision is the process of changing beliefs to accommodate a new belief that might be inconsistent with the old ones. In the assumption that the new belief is correct, some of the old ones have to be retracted in order to maintain consistency. This retraction in response to an addition of a new belief makes any logic for belief revision non-monotonic. The belief revision approach is an alternative to paraconsistent logics, which tolerate inconsistency rather than attempting to remove it.
Proof-theoretic formalization of a non-monotonic logic begins with adoption of certain non-monotonic rules of inference, and then prescribes contexts in which these non-monotonic rules may be applied in admissible deductions. This typically is accomplished by means of fixed-point equations that relate the sets of premises and the sets of their non-monotonic conclusions. Default logic and autoepistemic logic are the most common examples of non-monotonic logics that have been formalized that way.[2]
Model-theoretic formalization of a non-monotonic logic begins with restriction of the semantics of a suitable monotonic logic to some special models, for instance, to minimal models,[3][4] and then derives a set of non-monotonic rules of inference, possibly with some restrictions on the contexts in which these rules may be applied, so that the resulting deductive system is sound and complete with respect to the restricted semantics.[5] Unlike some proof-theoretic formalizations, which suffered from well-known paradoxes and were often hard to evaluate with respect to their consistency with the intuitions they were supposed to capture, model-theoretic formalizations were paradox-free and left little, if any, room for confusion about which non-monotonic patterns of reasoning they covered. Examples of proof-theoretic formalizations of non-monotonic reasoning which revealed undesirable or paradoxical properties, or failed to capture the desired intuitive comprehensions, but which were later successfully formalized by model-theoretic means (that is, consistent with the respective intuitive comprehensions and with no paradoxical properties) include first-order circumscription, the closed-world assumption,[5] and autoepistemic logic.[2]
|
https://en.wikipedia.org/wiki/Non-monotonic_logic
|
In distributed computing, a remote procedure call (RPC) is when a computer program causes a procedure (subroutine) to execute in a different address space (commonly on another computer on a shared computer network), written as if it were a normal (local) procedure call, without the programmer explicitly writing the details of the remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program or remote. This is a form of client–server interaction (caller is client, executor is server), typically implemented via a request–response message-passing system. In the object-oriented programming paradigm, RPCs are represented by remote method invocation (RMI). The RPC model implies a level of location transparency, namely that calling procedures are largely the same whether they are local or remote, but usually they are not identical, so local calls can be distinguished from remote calls. Remote calls are usually orders of magnitude slower and less reliable than local calls, so distinguishing them is important.
RPCs are a form of inter-process communication (IPC), in that different processes have different address spaces: if on the same host machine, they have distinct virtual address spaces, even though the physical address space is the same; while if they are on different hosts, the physical address space is also different. Many different (often incompatible) technologies have been used to implement the concept.
Request–response protocols date to early distributed computing in the late 1960s, theoretical proposals of remote procedure calls as the model of network operations date to the 1970s, and practical implementations date to the early 1980s. Bruce Jay Nelson is generally credited with coining the term "remote procedure call" in 1981.[1]
Remote procedure calls used in modern operating systems trace their roots back to the RC 4000 multiprogramming system,[2] which used a request–response communication protocol for process synchronization.[3] The idea of treating network operations as remote procedure calls goes back at least to the 1970s in early ARPANET documents.[4] In 1978, Per Brinch Hansen proposed Distributed Processes, a language for distributed computing based on "external requests" consisting of procedure calls between processes.[5]
One of the earliest practical implementations was in 1982 by Brian Randell and colleagues for their Newcastle Connection between UNIX machines.[6] This was soon followed by "Lupine" by Andrew Birrell and Bruce Nelson in the Cedar environment at Xerox PARC.[7][8][9] Lupine automatically generated stubs, providing type-safe bindings, and used an efficient protocol for communication.[8] One of the first business uses of RPC was by Xerox under the name "Courier" in 1981. The first popular implementation of RPC on Unix was Sun's RPC (now called ONC RPC), used as the basis for the Network File System (NFS).
In the 1990s, with the popularity of object-oriented programming, an alternative model of remote method invocation (RMI) was widely implemented, such as in the Common Object Request Broker Architecture (CORBA, 1991) and Java remote method invocation. RMIs, in turn, fell in popularity with the rise of the internet, particularly in the 2000s.
RPC is a request–response protocol. An RPC is initiated by the client, which sends a request message to a known remote server to execute a specified procedure with supplied parameters. The remote server sends a response to the client, and the application continues its process. While the server is processing the call, the client is blocked (it waits until the server has finished processing before resuming execution), unless the client sends an asynchronous request to the server, such as an XMLHttpRequest. There are many variations and subtleties in various implementations, resulting in a variety of different (incompatible) RPC protocols.
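The pattern can be sketched with Python's standard xmlrpc module (a minimal illustration of the blocking request–response cycle, not a production RPC stack); server and client run as separate processes:

```python
# --- server process ---
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b                      # executes in the server's address space

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(add, "add")  # expose the procedure by name
server.serve_forever()

# --- client process (run separately) ---
# from xmlrpc.client import ServerProxy
# proxy = ServerProxy("http://localhost:8000/")
# print(proxy.add(2, 3))              # written like a local call; the client
#                                     # blocks until the response arrives
```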
An important difference between remote procedure calls and local calls is that remote calls can fail because of unpredictable network problems. Also, callers generally must deal with such failures without knowing whether the remote procedure was actually invoked. Idempotent procedures (those that have no additional effects if called more than once) are easily handled, but enough difficulties remain that code to call remote procedures is often confined to carefully written low-level subsystems.
To let different clients access servers, a number of standardized RPC systems have been created. Most of these use an interface description language (IDL) to let various platforms call the RPC. The IDL files can then be used to generate code to interface between the client and servers.
Notable RPC implementations and analogues include:
|
https://en.wikipedia.org/wiki/Remote_procedure_call
|
In mathematics, the convolution power is the n-fold iteration of the convolution with itself. Thus if x is a function on Euclidean space R^d and n is a natural number, then the convolution power is defined by

$$x^{*n} = \underbrace{x * x * \cdots * x * x}_{n},\qquad x^{*0} = \delta_0,$$
where ∗ denotes the convolution operation of functions on R^d and δ_0 is the Dirac delta distribution. This definition makes sense if x is an integrable function (in L^1), a rapidly decreasing distribution (in particular, a compactly supported distribution) or is a finite Borel measure.
If x is the distribution function of a random variable on the real line, then the nth convolution power of x gives the distribution function of the sum of n independent random variables with identical distribution x. The central limit theorem states that if x is in L^1 and L^2 with mean zero and variance σ^2, then
where Φ is the cumulative standard normal distribution on the real line. Equivalently, x^{*n}/(σ√n) tends weakly to the standard normal distribution.
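Both ideas can be sketched numerically with the discrete convolution of a fair-die distribution (names are illustrative):

```python
# n-fold convolution power of a discrete distribution; the result is the
# distribution of a sum of n i.i.d. variables and is visibly bell-shaped.
import numpy as np

def conv_power(x, n):
    out = np.array([1.0])            # discrete delta: identity for convolution
    for _ in range(n):
        out = np.convolve(out, x)
    return out

die = np.ones(6) / 6.0               # uniform distribution on {0, ..., 5}
p = conv_power(die, 10)              # distribution of the sum of 10 rolls
print(p.sum(), p.argmax())           # total mass 1.0, peak near the mean 25
```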
In some cases, it is possible to define powers x^{*t} for arbitrary real t > 0. If μ is a probability measure, then μ is infinitely divisible provided there exists, for each positive integer n, a probability measure μ_{1/n} such that

$$\mu_{1/n}^{*n} = \mu.$$
That is, a measure is infinitely divisible if it is possible to define all nth roots. Not every probability measure is infinitely divisible, and a characterization of infinitely divisible measures is of central importance in the abstract theory of stochastic processes. Intuitively, a measure should be infinitely divisible provided it has a well-defined "convolution logarithm." The natural candidates for measures having such a logarithm are those of (generalized) Poisson type, given in the form

$$\pi_{\alpha,\mu} = e^{-\alpha}\sum_{n=0}^{\infty} \frac{\alpha^n}{n!}\,\mu^{*n}.$$
In fact, the Lévy–Khinchin theorem states that a necessary and sufficient condition for a measure to be infinitely divisible is that it must lie in the closure, with respect to the vague topology, of the class of Poisson measures (Stroock 1993, §3.2).
Many applications of the convolution power rely on being able to define the analog of analytic functions as formal power series with powers replaced instead by the convolution power. Thus if $F(z) = \sum_{n=0}^{\infty} a_n z^n$ is an analytic function, then one would like to be able to define

$$F^{*}(x) = a_0\,\delta_0 + \sum_{n=1}^{\infty} a_n\, x^{*n}.$$
If x ∈ L^1(R^d) or more generally is a finite Borel measure on R^d, then the latter series converges absolutely in norm provided that the norm of x is less than the radius of convergence of the original series defining F(z). In particular, it is possible for such measures to define the convolutional exponential

$$\exp^{*}(x) = \delta_0 + \sum_{n=1}^{\infty} \frac{x^{*n}}{n!}.$$
It is not generally possible to extend this definition to arbitrary distributions, although a class of distributions on which this series still converges in an appropriate weak sense is identified by Ben Chrouda, El Oued & Ouerdiane (2002).
If x is itself suitably differentiable, then from the properties of convolution, one has

$$\mathcal{D}\left(x^{*n}\right) = (\mathcal{D}x) * x^{*(n-1)},$$
where $\mathcal{D}$ denotes the derivative operator. Specifically, this holds if x is a compactly supported distribution or lies in the Sobolev space W^{1,1}, ensuring that the derivative is sufficiently regular for the convolution to be well-defined.
In the configuration random graph, the size distribution of connected components can be expressed via the convolution power of the excess degree distribution (Kryven (2017)):
Here, w(n) is the size distribution for connected components, $u_1(k) = \frac{k+1}{\mu_1}u(k+1)$ is the excess degree distribution, and u(k) denotes the degree distribution.
As convolution algebras are special cases of Hopf algebras, the convolution power is a special case of the (ordinary) power in a Hopf algebra. In applications to quantum field theory, the convolution exponential, convolution logarithm, and other analytic functions based on the convolution are constructed as formal power series in the elements of the algebra (Brouder, Frabetti & Patras 2008). If, in addition, the algebra is a Banach algebra, then convergence of the series can be determined as above. In the formal setting, familiar identities such as

$$x = \log^{*}(\exp^{*} x)$$
continue to hold. Moreover, by the permanence of functional relations, they hold at the level of functions, provided all expressions are well-defined in an open set by convergent series.
|
https://en.wikipedia.org/wiki/Convolution_power
|
ALGOL 68 (short for Algorithmic Language 1968) is an imperative programming language member of the ALGOL family that was conceived as a successor to the ALGOL 60 language, designed with the goal of a much wider scope of application and more rigorously defined syntax and semantics.
The complexity of the language's definition, which runs to several hundred pages filled with non-standard terminology, made compiler implementation difficult, and it was said it had "no implementations and no users". This was only partly true; ALGOL 68 did find use in several niche markets, notably in the United Kingdom, where it was popular on International Computers Limited (ICL) machines, and in teaching roles. Outside these fields, use was relatively limited.
Nevertheless, the contributions of ALGOL 68 to the field of computer science have been deep, wide-ranging and enduring, although many of these contributions were only publicly identified when they had reappeared in subsequently developed programming languages. Many languages were developed specifically as a response to the perceived complexity of the language, the most notable being Pascal, or were reimplementations for specific roles, like Ada.
Many languages of the 1970s trace their design specifically to ALGOL 68, selecting some features while abandoning others that were considered too complex or out of scope for given roles. Among these is the language C, which was directly influenced by ALGOL 68, especially by its strong typing and structures. Most modern languages trace at least some of their syntax to either C or Pascal, and thus directly or indirectly to ALGOL 68.
ALGOL 68 features include expression-based syntax, user-declared types and structures/tagged unions, a reference model of variables and reference parameters, string, array and matrix slicing, and concurrency.
ALGOL 68 was designed by the International Federation for Information Processing (IFIP) Working Group 2.1 on Algorithmic Languages and Calculi. On December 20, 1968, the language was formally adopted by the group, and then approved for publication by the General Assembly of IFIP.
ALGOL 68 was defined using a formalism, a two-level formal grammar, invented by Adriaan van Wijngaarden. Van Wijngaarden grammars use a context-free grammar to generate an infinite set of productions that will recognize a particular ALGOL 68 program; notably, they are able to express the kind of requirements that in many other programming language technical standards are labelled semantics, and must be expressed in ambiguity-prone natural language prose, and then implemented in compilers as ad hoc code attached to the formal language parser.
ALGOL 68 was the first (and possibly one of the last) major language for which a full formal definition was made before it was implemented.
The main aims and principles of design of ALGOL 68:
ALGOL 68 has been criticized, most prominently by some members of its design committee such as C. A. R. Hoare and Edsger Dijkstra, for abandoning the simplicity of ALGOL 60, becoming a vehicle for complex or overly general ideas, and doing little to make the compiler writer's task easier, in contrast to deliberately simple contemporaries (and competitors) such as C, S-algol and Pascal.
In 1970, ALGOL 68-R became the first working compiler for ALGOL 68.
In the 1973 revision, certain features – such as proceduring, gommas[13] and formal bounds – were omitted.[14] Cf. The language of the unrevised report.r0
Though European defence agencies (in Britain, the Royal Signals and Radar Establishment (RSRE)) promoted the use of ALGOL 68 for its expected security advantages, the American side of the NATO alliance decided to develop a different project, the language Ada, making its use obligatory for US defense contracts.
ALGOL 68 also had a notable influence in the Soviet Union, details of which can be found in Andrey Terekhov's 2014 paper "ALGOL 68 and Its Impact on the USSR and Russian Programming"[15] and in "Алгол 68 и его влияние на программирование в СССР и России" (ALGOL 68 and its influence on programming in the USSR and Russia).[16]
Steve Bourne, who was on the ALGOL 68 revision committee, took some of its ideas to his Bourne shell (and thereby, to descendant Unix shells such as Bash) and to C (and thereby to descendants such as C++).
The complete history of the project can be found in C. H. Lindsey's "A History of ALGOL 68".[17]
For a full-length treatment of the language, see "Programming ALGOL 68 Made Easy"[18] by Dr. Sian Mountbatten, or "Learning ALGOL 68 Genie"[19] by Marcel van der Veer, which includes the Revised Report.
ALGOL 68, as the name implies, is a follow-on to the ALGOL language that was first formalized in 1960. That same year the International Federation for Information Processing (IFIP) formed and started the Working Group on ALGOL, or WG2.1. This group released an updated ALGOL 60 specification in Rome in April 1962. At a follow-up meeting in March 1964, it was agreed that the group should begin work on two follow-on standards, ALGOL X, which would be a redefinition of the language with some additions, and ALGOL Y, which would have the ability to modify its own programs in the style of the language LISP.[20]
The first meeting of the ALGOL X group was held at Princeton University in May 1965. A report of the meeting noted two broadly supported themes, the introduction of strong typing and interest in Euler's concepts of 'trees' or 'lists' for handling collections.[21] Although intended as a "short-term solution to existing difficulties",[22] ALGOL X got as far as having a compiler made for it. This compiler was written by Douglas T. Ross of the Massachusetts Institute of Technology (MIT) with the Automated Engineering Design (AED-0) system, also termed ALGOL Extended for Design.[23][24]
At the second meeting, in October in France, three formal proposals were presented: Niklaus Wirth's ALGOL W along with comments about record structures by C. A. R. (Tony) Hoare, a similar language by Gerhard Seegmüller, and a paper by Adriaan van Wijngaarden on "Orthogonal design and description of a formal language". The latter, written in almost indecipherable "W-Grammar", proved to be a decisive shift in the evolution of the language. The meeting closed with an agreement that van Wijngaarden would re-write the Wirth/Hoare submission using his W-Grammar.[21]
This seemingly simple task ultimately proved more difficult than expected, and the follow-up meeting had to be delayed six months. When it met in April 1966 in Kootwijk, van Wijngaarden's draft remained incomplete and Wirth and Hoare presented a version using more traditional descriptions. It was generally agreed that their paper was "the right language in the wrong formalism".[25] As these approaches were explored, it became clear there was a difference in the way parameters were described that would have real-world effects, and while Wirth and Hoare protested that further delays might become endless, the committee decided to wait for van Wijngaarden's version. Wirth then implemented their current definition as ALGOL W.[26]
At the next meeting, in Warsaw in October 1966,[27] there was an initial report from the I/O Subcommittee, who had met at the Oak Ridge National Laboratory and the University of Illinois but had not yet made much progress. The two proposals from the previous meeting were again explored, and this time a new debate emerged about the use of pointers; ALGOL W used them only to refer to records, while van Wijngaarden's version could point to any object. To add confusion, John McCarthy presented a new proposal for operator overloading and the ability to string together 'and' and 'or' constructs, and Klaus Samelson wanted to allow anonymous functions. In the resulting confusion, there was some discussion of abandoning the entire effort.[26] The confusion continued through what was supposed to be the ALGOL Y meeting in Zandvoort in May 1967.[21]
A draft report was finally published in February 1968. This was met by "shock, horror and dissent",[21] mostly due to the hundreds of pages of unreadable grammar and odd terminology. Charles H. Lindsey attempted to figure out what "language was hidden inside of it",[28] a process that took six man-weeks of effort. The resulting paper, "ALGOL 68 with fewer tears",[29] was widely circulated. At a wider information processing meeting in Zürich in May 1968, attendees complained that the language was being forced upon them and that IFIP was "the true villain of this unreasonable situation", as the meetings were mostly closed and there was no formal feedback mechanism. Wirth and Peter Naur formally resigned their authorship positions in WG2.1 at that time.[28]
The next WG2.1 meeting took place in Tirrenia in June 1968. It was supposed to discuss the release of compilers and other issues, but instead devolved into a discussion of the language. Van Wijngaarden responded by saying (or threatening) that he would release only one more version of the report. By this point Naur, Hoare, and Wirth had left the effort, and several more were threatening to do so.[30] Several more meetings followed: North Berwick in August 1968, and Munich in December, which produced the release of the official Report in January 1969 but also resulted in a contentious Minority Report being written. Finally, at Banff, Alberta in September 1969, the project was generally considered complete and the discussion was primarily on errata and a greatly expanded Introduction to the Report.[31]
The effort took five years, burned out many of the greatest names in computer science, and on several occasions became deadlocked over issues both in the definition and in the group as a whole. Hoare released a "Critique of ALGOL 68" almost immediately,[32] which has been widely referenced in many works. Wirth went on to further develop the ALGOL W concept and released this as Pascal in 1970.
The first implementation of the standard, based on the late-1968 draft Report, was introduced by the Royal Radar Establishment in the UK as ALGOL 68-R in July 1970. This was, however, a subset of the full language, and Barry Mailloux, the final editor of the Report, joked that "It is a question of morality. We have a Bible and you are sinning!"[33] This version nevertheless became very popular on the ICL machines, and became a widely used language in military coding, especially in the UK.[34]
Among the changes in 68-R was the requirement for all variables to be declared before their first use. This had a significant advantage in that it allowed the compiler to be one-pass, as space for the variables in the activation record was set aside before it was used. However, this change also had the side effect of demanding that PROCs be declared twice, once as a declaration of the types, and then again as the body of code. Another change was to eliminate the assumed VOID mode, an expression that returns no value (named a statement in other languages), demanding that the word VOID be added where it would have been assumed. Further, 68-R eliminated the explicit parallel processing commands based on PAR.[33]
The first full implementation of the language was introduced in 1974 by CDC Netherlands for the Control Data mainframe series. This saw limited use, mostly for teaching in Germany and the Netherlands.[34]
A version similar to 68-R was introduced from Carnegie Mellon University in 1976 as 68S, and was again a one-pass compiler based on various simplifications of the original and intended for use on smaller machines like the DEC PDP-11. It too was used mostly for teaching purposes.[34]
A version for IBM mainframes did not become available until 1978, when one was released from Cambridge University. This was "nearly complete". Lindsey released a version for small machines, including the IBM PC, in 1984.[34]
Three open source Algol 68 implementations are known:[35]
"Van Wijngaarden once characterized the four authors, somewhat tongue-in-cheek, as: Koster:transputter, Peck: syntaxer, Mailloux: implementer, Van Wijngaarden: party ideologist." – Koster.
1968: On 20 December 1968, the "Final Report" (MR 101) was adopted by the Working Group, then subsequently approved by the General Assembly of UNESCO's IFIP for publication. Translations of the standard were made for Russian, German, French and Bulgarian, and then later Japanese and Chinese.[50] The standard was also made available in Braille.
1984: TC 97 considered ALGOL 68 for standardisation as "New Work Item" TC97/N1642.[2][3] West Germany, Belgium, the Netherlands, the USSR and Czechoslovakia were willing to participate in preparing the standard, but the USSR and Czechoslovakia "were not the right kinds of member of the right ISO committees"[4] and Algol 68's ISO standardisation stalled.[5]
1988: Subsequently, ALGOL 68 became one of the GOST standards in Russia.
The standard language contains about sixty reserved words, typically bolded in print, and some with "brief symbol" equivalents:
The basic language construct is the unit. A unit may be a formula, an enclosed clause, a routine text or one of several technically needed constructs (assignation, jump, skip, nihil). The technical term enclosed clause unifies some of the inherently bracketing constructs known as block, do statement, switch statement in other contemporary languages. When keywords are used, generally the reversed character sequence of the introducing keyword is used for terminating the enclosure, e.g. (IF~THEN~ELSE~FI, CASE~IN~OUT~ESAC, FOR~WHILE~DO~OD). This Guarded Command syntax was reused by Stephen Bourne in the common Unix Bourne shell. An expression may also yield a multiple value, which is constructed from other values by a collateral clause. This construct just looks like the parameter pack of a procedure call.
The basic data types (called modes in Algol 68 parlance) are real, int, compl (complex number), bool, char, bits and bytes. For example:
However, the declaration REAL x; is just syntactic sugar for REF REAL x = LOC REAL;. That is, x is really the constant identifier for a reference to a newly generated local REAL variable.
Furthermore, instead of defining both float and double, or int and long and short, etc., ALGOL 68 provides modifiers, so that the presently common double would be written as LONG REAL or LONG LONG REAL instead, for example. The prelude constants max real and min long int are provided to adapt programs to different implementations.
All variables need to be declared, but declaration does not have to precede the first use.
primitive-declarer: INT, REAL, COMPL, COMPLEXG, BOOL, CHAR, STRING, BITS, BYTES, FORMAT, FILE, PIPEG, CHANNEL, SEMA
Complex types can be created from simpler ones using various type constructors:
Other declaration symbols include: FLEX, HEAP, LOC, REF, LONG, SHORT, EVENTS
A name for a mode (type) can be declared using a MODE declaration,
which is similar to TYPEDEF in C/C++ and TYPE in Pascal:
This is similar to the following C code:
For ALGOL 68, only the NEWMODE mode-indication appears to the left of the equals symbol, and most notably the construction is made, and can be read, from left to right without regard to priorities. Also, the lower bound of Algol 68 arrays is one by default, but can be any integer from -max int to max int.
Mode declarations allow types to be recursive: defined directly or indirectly in terms of themselves.
This is subject to some restrictions – for instance, these declarations are illegal:
while these are valid:
The coercions produce a coercee from a coercend according to three criteria: the a priori mode of the coercend before the application of any coercion, the a posteriori mode of the coercee required after those coercions, and the syntactic position or "sort" of the coercee. Coercions may be cascaded.
The six possible coercions are termed deproceduring, dereferencing, uniting, widening, rowing, and voiding. Each coercion, except for uniting, prescribes a corresponding dynamic effect on the associated values. Hence, many primitive actions can be programmed implicitly by coercions.
Context strength – allowed coercions:
ALGOL 68 has a hierarchy of contexts which determine the kind of coercions available at a particular point in the program. These contexts are:
Also:
Widening is always applied in the INT to REAL to COMPL direction, provided the modes have the same size. For example: an INT will be coerced to a REAL, but not vice versa. Examples:
A variable can also be coerced (rowed) to an array of length 1.
For example:
UNION(INT,REAL) var := 1
IF~THEN...FI and FROM~BY~TO~WHILE~DO...OD, etc.
For more details about Primaries, Secondaries, Tertiaries and Quaternaries, refer to Operator precedence.
Pragmats are directives in the program, typically hints to the compiler; in newer languages these are called "pragmas" (no 't'). e.g.
Comments can be inserted in a variety of ways:
Normally, comments cannot be nested in ALGOL 68. This restriction can be circumvented by using different comment delimiters (e.g. use hash only for temporary code deletions).
ALGOL 68 being an expression-oriented programming language, the value returned by an assignment statement is a reference to the destination. Thus, the following is valid ALGOL 68 code:
This notion is present in C and Perl, among others. Note that, as in earlier languages such as Algol 60 and FORTRAN, spaces are allowed in identifiers, so that half pi is a single identifier (thus avoiding the underscores versus camel case versus all lower-case issues).
As another example, to express the mathematical idea of a sum of f(i) from i=1 to n, the following ALGOL 68 integer expression suffices:
Note that, being an integer expression, the former block of code can be used in any context where an integer value can be used. A block of code returns the value of the last expression it evaluated; this idea is present in Lisp, among other languages.
Compound statements are all terminated by distinctive closing brackets:
This scheme not only avoids the dangling else problem but also avoids having to use BEGIN and END in embedded statement sequences.
Choice clause example with Brief symbols:
Choice clause example with Bold symbols:
Choice clause example mixing Bold and Brief symbols:
Algol 68 allowed the switch to be of either type INT or (uniquely) UNION. The latter allows enforcing strong typing onto UNION variables. Cf. union below for an example.
This was considered the "universal" loop; the full syntax is:
The construct has several unusual aspects:
Subsequent "extensions" to the standard Algol68 allowed theTOsyntactic element to be replaced withUPTOandDOWNTOto achieve a small optimisation. The same compilers also incorporated:
Further examples can be found in the code examples below.
ALGOL 68 supports arrays with any number of dimensions, and it allows for the slicing of whole or partial rows or columns.
Matrices can be sliced either way, e.g.:
ALGOL 68 supports multiple field structures (STRUCT) and united modes. Reference variables may point to any MODE, including array slices and structure fields.
For an example of all this, here is the traditional linked list declaration:
Usage example for UNION CASE of NODE:
Procedure (PROC) declarations require type specifications for both the parameters and the result (VOID if none):
or, using the "brief" form of the conditional statement:
The return value of a proc is the value of the last expression evaluated in the procedure. References to procedures (ref proc) are also permitted. Call-by-reference parameters are provided by specifying references (such as ref real) in the formal argument list. The following example defines a procedure that applies a function (specified as a parameter) to each element of an array:
This simplicity of code was unachievable in ALGOL 68's predecessor ALGOL 60.
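The ALGOL 68 listing itself is not reproduced above; as a rough Python analogue of the higher-order procedure just described (illustrative only, not ALGOL 68 syntax):

```python
# Apply a function, passed as a parameter, to each element of an array,
# updating it in place (mimicking the effect of ALGOL 68's ref parameters).
def apply_to_each(f, a):
    for i in range(len(a)):
        a[i] = f(a[i])

data = [1.0, 2.0, 3.0]
apply_to_each(lambda x: x * x, data)
print(data)  # [1.0, 4.0, 9.0]
```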
The programmer may define new operators, and both those and the pre-defined ones may be overloaded, and their priorities may be changed by the coder. The following example defines operator MAX with both dyadic and monadic versions (scanning across the elements of an array).
These are technically not operators; rather, they are considered "units associated with names".
-, ABS, ARG, BIN, ENTIER, LENG, LEVEL, ODD, REPR, ROUND, SHORTEN
-:=, +:=, *:=, /:=, %:=, %*:=, +=:
Specific details:
These are technically not operators; rather, they are considered "units associated with names".
Note: Quaternaries include the names SKIP and ~.
:=: (alternatively IS) tests if two pointers are equal; :/=: (alternatively ISNT) tests if they are unequal.
Consider trying to compare two pointer values, such as the following variables, declared as pointers-to-integer:
Now consider how to decide whether these two are pointing to the same location, or whether one of them is pointing to NIL. The following expression
will dereference both pointers down to values of type INT, and compare those, since the = operator is defined for INT, but not for REF INT. It is not legal to define = for operands of type REF INT and INT at the same time, because then calls become ambiguous, due to the implicit coercions that can be applied: should the operands be left as REF INT and that version of the operator called? Or should they be dereferenced further to INT and that version used instead? Therefore the following expression can never be made legal:
Hence the need for separate constructs not subject to the normal coercion rules for operands to operators. But there is a gotcha. The following expressions:
while legal, will probably not do what might be expected. They will always return FALSE, because they are comparing the actual addresses of the variables ip and jp, rather than what they point to. To achieve the right effect, one would have to write
Most of Algol's "special" characters (⊂, ≡, ␣, ×, ÷, ≤, ≥, ≠, ¬, ⊃, ≡, ∨, ∧, →, ↓, ↑, ⌊, ⌈, ⎩, ⎧, ⊥, ⏨, ¢, ○ and □) can be found on theIBM 2741keyboard with theAPL"golf-ball" print head inserted; these became available in the mid-1960s while ALGOL 68 was being drafted. These characters are also part of theUnicodestandard and most of them are available in several popularfonts.
Transput is the term used to refer to ALGOL 68's input and output facilities. It includes pre-defined procedures for unformatted, formatted and binary transput. Files and other transput devices are handled in a consistent and machine-independent manner. The following example prints out some unformatted output to the standard output device:
Note the predefined procedures newpage and newline passed as arguments.
The TRANSPUT is considered to be of BOOKS, CHANNELS and FILES:
"Formatted transput" in ALGOL 68's transput has its own syntax and patterns (functions), withFORMATs embedded between two $ characters.[53]
Examples:
ALGOL 68 supports programming of parallel processing. Using the keyword PAR, a collateral clause is converted to a parallel clause, where the synchronisation of actions is controlled using semaphores. In A68G the parallel actions are mapped to threads when available on the hosting operating system. In A68S a different paradigm of parallel processing was implemented (see below).
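A loose analogue of a PAR clause, sketched with threads in Python (an illustration of the idea of running the branches of a clause in parallel and joining them, not of ALGOL 68 semantics):

```python
# Run two branches concurrently, then wait for both, as a PAR clause would.
import threading

results = {}

def branch(name, n):
    results[name] = sum(range(n))   # stand-in for each parallel action

threads = [threading.Thread(target=branch, args=("a", 1000)),
           threading.Thread(target=branch, args=("b", 2000))]
for t in threads:
    t.start()
for t in threads:
    t.join()                        # the clause completes when all branches do
print(results)
```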
For its technical intricacies, ALGOL 68 needs a cornucopia of methods to deny the existence of something:
The term NIL IS var always evaluates to TRUE for any variable (but see above for the correct use of IS :/=:), whereas it is not known to which value a comparison x < SKIP evaluates for any integer x.
ALGOL 68 leaves intentionally undefined what happens in case of integer overflow, the integer bit representation, and the degree of numerical accuracy for floating point.
Both official reports included some advanced features that were not part of the standard language. These were indicated with an ℵ and considered effectively private. Examples include "≮" and "≯" for templates, the OUTTYPE/INTYPE for crude duck typing, and the STRAIGHTOUT and STRAIGHTIN operators for "straightening" nested arrays and structures.
This sample program implements the Sieve of Eratosthenes to find all the prime numbers that are less than 100. NIL is the ALGOL 68 analogue of the null pointer in other languages. The notation x OF y accesses a member x of a STRUCT y.
Note: The Soviet-era computers Эльбрус-1 (Elbrus-1) and Эльбрус-2 were created using the high-level language Эль-76 (AL-76) rather than traditional assembly. Эль-76 resembles ALGOL 68; the main difference is that the dynamic binding types in Эль-76 are supported at the hardware level. Эль-76 is used for application, job control and system programming.[57]
Both ALGOL 68C and ALGOL 68-R are written in ALGOL 68, effectively making ALGOL 68 an application of itself. Other applications include:
A feature of ALGOL 68, inherited from the ALGOL tradition, is its different representations. Programs in the strict language (which is rigorously defined in the Report) denote production trees in the form of a sequence of grammar symbols, and should be represented using some representation language, of which there are many, tailored to different purposes.
The Revised Report defines a reference language, and it recommends that representation languages intended to be read by humans be close enough to the reference language that symbols can be distinguished "without further elucidation". These representation languages are called implementations of the reference language.
For example, the construct in the strict language bold-begin-symbol could be represented as begin in a publication language, as BEGIN in a programming language, or as the bytes 0xC000 in some hardware language. Similarly, the strict-language differs-from symbol could be represented as ≠ or as /=.
ALGOL 68's reserved words are effectively in a different namespace from identifiers, and spaces are allowed in identifiers in most stropping regimes, so this next fragment is legal:
The programmer who writes executable code does not always have an option of BOLD typeface or underlining in the code, as this may depend on hardware and cultural issues. Different methods to denote these identifiers have been devised. This is called a stropping regime. For example, all or some of the following may be available programming representations:
All implementations must recognize at least POINT, UPPER and RES inside PRAGMAT sections. Of these, POINT and UPPER stropping are quite common. QUOTE (single apostrophe quoting) was the original recommendation[citation needed].
It may seem that RES stropping is a contradiction to the specification, as there are no reserved words in Algol 68. This is not so. In RES stropping the representation of the bold word (or keyword) begin is begin, and the representation of the identifier begin is begin_. Note that the underscore character is just a representation artifact and not part of the represented identifier. In contrast, in non-stropped languages with reserved words, like for example C, it is not possible to represent an identifier if, since the representation if_ represents the identifier if_, not if.
The following characters were recommended for portability, and termed "worthy characters" in the Report on the Standard Hardware Representation of Algol 68:
This reflected a problem in the 1960s where some hardware did not support lower-case or some other non-ASCII characters; indeed, the 1973 report stated: "Four worthy characters — "|", "_", "[", and "]" — are often coded differently, even at installations which nominally use the same character set."
ALGOL 68 allows every natural language to define its own set of ALGOL 68 keywords. As a result, programmers are able to write programs using keywords from their native language. Below is an example of a simple procedure that calculates "the day following"; the code is in two languages: English and German.[citation needed]
Russian/Soviet example: In English ALGOL 68's case statement reads CASE ~ IN ~ OUT ~ ESAC; in Cyrillic this reads выб ~ в ~ либо ~ быв.
Except where noted (with a superscript), the language described above is that of the "Revised Report (r1)".
The original language (as per the "Final Report" r0) differs in the syntax of the mode cast, and it had the feature of proceduring, i.e. coercing the value of a term into a procedure which evaluates the term. Proceduring was intended to make evaluations lazy. The most useful application could have been the short-circuited evaluation of Boolean operators. In:
b is only evaluated if a is true.
As defined in ALGOL 68, it did not work as expected, for example in the code:
against the programmer's naïve expectations the print would be executed, as it is only the value of the elaborated enclosed clause after ANDF that was procedured. Textual insertion of the commented-out PROC BOOL: makes it work.
Some implementations emulate the expected behaviour for this special case by extension of the language.
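The idea of turning an operand into a procedure so that it is evaluated only on demand can be imitated in most languages by passing a callable instead of a value. A minimal Python sketch (the operator name andf is invented for illustration, not ALGOL 68 syntax):

    def andf(a, b_thunk):
        """Short-circuiting AND: b_thunk is a zero-argument callable (a "procedured"
        operand), so it is evaluated only when a is true."""
        return b_thunk() if a else False

    def noisy_check():
        print("evaluating the second operand")
        return True

    print(andf(False, noisy_check))   # second operand never evaluated
    print(andf(True, noisy_check))    # prints the message, then True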
Before revision, the programmer could decide to have the arguments of a procedure evaluated serially instead of collaterally by using semicolons instead of commas (gommas).
For example in:
The first argument to test is guaranteed to be evaluated before the second, but in the usual:
then the compiler could evaluate the arguments in whatever order it felt like.
After the revision of the report, some extensions to the language have been proposed to widen the applicability:
So far, only partial parametrisation has been implemented, in Algol 68 Genie.
The S3 language that was used to write the ICL VME operating system and much other system software on the ICL 2900 Series was a direct derivative of Algol 68. However, it omitted many of the more complex features, and replaced the basic modes with a set of data types that mapped directly to the 2900 Series hardware architecture.
ALGOL 68R from RRE was the first ALGOL 68 subset implementation, running on the ICL 1900. Based on the original language, the main subset restrictions were definition before use and no parallel processing. This compiler was popular in UK universities in the 1970s, where many computer science students learnt ALGOL 68 as their first programming language; the compiler was renowned for good error messages.
ALGOL 68RS (RS) from RSRE was a portable compiler system written in ALGOL 68RS (bootstrapped from ALGOL 68R), and implemented on a variety of systems including the ICL 2900/Series 39, Multics and DEC VAX/VMS. The language was based on the Revised Report, but with similar subset restrictions to ALGOL 68R. This compiler survives in the form of an Algol68-to-C compiler.
In ALGOL 68S (S) from Carnegie Mellon University the power of parallel processing was improved by adding an orthogonal extension, eventing. Any variable declaration containing the keyword EVENT made assignments to this variable eligible for parallel evaluation, i.e. the right-hand side was made into a procedure which was moved to one of the processors of the C.mmp multiprocessor system. Accesses to such variables were delayed after termination of the assignment.
Cambridge ALGOL 68C (C) was a portable compiler that implemented a subset of ALGOL 68, restricting operator definitions and omitting garbage collection, flexible rows and formatted transput.
Algol 68 Genie (G) by M. van der Veer is an ALGOL 68 implementation for today's computers and operating systems.
"Despite good intentions, a programmer may violate portability by inadvertently employing a local extension. To guard against this, each implementation should provide a PORTCHECK pragmat option. While this option is in force, the compiler prints a message for each construct that it recognizes as violating some portability constraint."[69]
|
https://en.wikipedia.org/wiki/Algol68
|
In databases and transaction processing (transaction management), snapshot isolation is a guarantee that all reads made in a transaction will see a consistent snapshot of the database (in practice it reads the last committed values that existed at the time it started), and the transaction itself will successfully commit only if no updates it has made conflict with any concurrent updates made since that snapshot.
Snapshot isolation has been adopted by several major database management systems, such as InterBase, Firebird, Oracle, MySQL,[1] PostgreSQL, SQL Anywhere, MongoDB[2] and Microsoft SQL Server (2005 and later). The main reason for its adoption is that it allows better performance than serializability, yet still avoids most of the concurrency anomalies that serializability avoids (but not all). In practice snapshot isolation is implemented within multiversion concurrency control (MVCC), where generational values of each data item (versions) are maintained: MVCC is a common way to increase concurrency and performance by generating a new version of a database object each time the object is written, and allowing transactions to read from the several most recent relevant versions (of each object). Snapshot isolation has been used[3] to criticize the ANSI SQL-92 standard's definition of isolation levels, as it exhibits none of the "anomalies" that the SQL standard prohibited, yet is not serializable (the anomaly-free isolation level defined by ANSI).
In spite of its distinction from serializability, snapshot isolation is sometimes referred to as serializable by Oracle.
A transaction executing under snapshot isolation appears to operate on a personal snapshot of the database, taken at the start of the transaction. When the transaction concludes, it will successfully commit only if the values updated by the transaction have not been changed externally since the snapshot was taken. Such a write–write conflict will cause the transaction to abort.
In a write skew anomaly, two transactions (T1 and T2) concurrently read an overlapping data set (e.g. values V1 and V2), concurrently make disjoint updates (e.g. T1 updates V1, T2 updates V2), and finally concurrently commit, neither having seen the update performed by the other. Were the system serializable, such an anomaly would be impossible, as either T1 or T2 would have to occur "first", and be visible to the other. In contrast, snapshot isolation permits write skew anomalies.
As a concrete example, imagine V1 and V2 are two balances held by a single person, Phil. The bank will allow either V1 or V2 to run a deficit, provided the total held in both is never negative (i.e. V1 + V2 ≥ 0). Both balances are currently $100. Phil initiates two transactions concurrently, T1 withdrawing $200 from V1, and T2 withdrawing $200 from V2.
If the database guaranteed serializable transactions, the simplest way of coding T1 is to deduct $200 from V1, and then verify that V1 + V2 ≥ 0 still holds, aborting if not. T2 similarly deducts $200 from V2 and then verifies V1 + V2 ≥ 0. Since the transactions must serialize, either T1 happens first, leaving V1 = −$100, V2 = $100, and preventing T2 from succeeding (since V1 + (V2 − $200) is now −$200), or T2 happens first and similarly prevents T1 from committing.
If the database is under snapshot isolation (MVCC), however, T1 and T2 operate on private snapshots of the database: each deducts $200 from an account, and then verifies that the new total is zero, using the other account value that held when the snapshot was taken. Since neither update conflicts, both commit successfully, leaving V1 = V2 = −$100, and V1 + V2 = −$200.
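A minimal sketch of this scenario in Python (a toy in-memory model, not a real database; the names Transaction, withdraw and commit are invented for illustration) shows how first-committer-wins write–write conflict detection fails to catch write skew:

    # Toy model of snapshot isolation: each transaction reads from a private
    # snapshot and commits only if no committed transaction wrote the same keys.
    db = {"V1": 100, "V2": 100}
    committed_writes = []              # keys written by already-committed transactions

    class Transaction:
        def __init__(self):
            self.snapshot = dict(db)   # private snapshot taken at start
            self.writes = {}

        def withdraw(self, account, amount):
            new_value = self.snapshot[account] - amount
            others = sum(v for k, v in self.snapshot.items() if k != account)
            if new_value + others >= 0:          # constraint checked on the snapshot
                self.writes[account] = new_value

        def commit(self):
            if any(k in committed_writes for k in self.writes):
                return False                     # write-write conflict: abort
            db.update(self.writes)
            committed_writes.extend(self.writes)
            return True

    t1, t2 = Transaction(), Transaction()        # both start from the same snapshot
    t1.withdraw("V1", 200)
    t2.withdraw("V2", 200)
    print(t1.commit(), t2.commit())              # True True -- both commit
    print(db)                                    # {'V1': -100, 'V2': -100}: write skew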
Some systems built using multiversion concurrency control (MVCC) may support (only) snapshot isolation to allow transactions to proceed without worrying about concurrent operations, and more importantly without needing to re-verify all read operations when the transaction finally commits. This is convenient because MVCC maintains a series of recent history consistent states. The only information that must be stored during the transaction is a list of updates made, which can be scanned for conflicts fairly easily before being committed. However, MVCC systems (such as MarkLogic) will use locks to serialize writes together with MVCC to obtain some of the performance gains and still support the stronger "serializability" level of isolation.
Potential inconsistency problems arising from write skew anomalies can be fixed by adding (otherwise unnecessary) updates to the transactions in order to enforce the serializability property.[4][5][6][7]
In the example above, we can materialize the conflict by adding a new table which makes the hidden constraint explicit, mapping each person to their total balance. Phil would start off with a total balance of $200, and each transaction would attempt to subtract $200 from this, creating a write–write conflict that would prevent the two from succeeding concurrently. However, this approach violates the normal form.
Alternatively, we can promote one of the transaction's reads to a write. For instance, T2 could set V1 = V1, creating an artificial write–write conflict with T1 and, again, preventing the two from succeeding concurrently. This solution may not always be possible.
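Continuing the toy sketch above (again purely illustrative), promoting T2's read of V1 into a write makes the conflict detectable by the same first-committer-wins check:

    # Reset the toy state from the previous sketch, then re-run with the promoted read.
    db.update({"V1": 100, "V2": 100})
    committed_writes.clear()

    t1, t2 = Transaction(), Transaction()
    t1.withdraw("V1", 200)
    t2.withdraw("V2", 200)
    t2.writes["V1"] = t2.snapshot["V1"]   # artificial update: V1 = V1
    print(t1.commit())                    # True
    print(t2.commit())                    # False -- write-write conflict on V1, so T2 aborts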
In general, therefore, snapshot isolation puts some of the problem of maintaining non-trivial constraints onto the user, who may not appreciate either the potential pitfalls or the possible solutions. The upside to this transfer is better performance.
Snapshot isolation is called "serializable" mode in Oracle[8][9][10] and PostgreSQL versions prior to 9.1,[11][12][13] which may cause confusion with the "real serializability" mode. There are arguments both for and against this decision; what is clear is that users must be aware of the distinction to avoid possible undesired anomalous behavior in their database system logic.
Snapshot isolation arose from work on multiversion concurrency control databases, where multiple versions of the database are maintained concurrently to allow readers to execute without colliding with writers. Such a system allows a natural definition and implementation of such an isolation level.[3] InterBase, later owned by Borland, was acknowledged to provide SI rather than full serializability in version 4,[3] and likely permitted write-skew anomalies since its first release in 1985.[14]
Unfortunately, the ANSI SQL-92 standard was written with a lock-based database in mind, and hence is rather vague when applied to MVCC systems. Berenson et al. wrote a paper in 1995[3] critiquing the SQL standard, and cited snapshot isolation as an example of an isolation level that did not exhibit the standard anomalies described in the ANSI SQL-92 standard, yet still had anomalous behaviour when compared with serializable transactions.
In 2008, Cahill et al. showed that write-skew anomalies could be prevented by detecting and aborting "dangerous" triplets of concurrent transactions.[15] This implementation of serializability is well-suited to multiversion concurrency control databases, and has been adopted in PostgreSQL 9.1,[12][13][16] where it is known as Serializable Snapshot Isolation (SSI). When used consistently, this eliminates the need for the above workarounds. The downside over snapshot isolation is an increase in aborted transactions. This can perform better or worse than snapshot isolation with the above workarounds, depending on workload.
|
https://en.wikipedia.org/wiki/Snapshot_isolation
|
Feature comparison of backup software. For a more general comparison see List of backup software.
|
https://en.wikipedia.org/wiki/Comparison_of_backup_software
|
Polkit (formerly PolicyKit) is a component for controlling system-wide privileges in Unix-like operating systems. It provides an organized way for non-privileged processes to communicate with privileged ones. Polkit allows a level of control of centralized system policy. It is developed and maintained by David Zeuthen from Red Hat and hosted by the freedesktop.org project. It is published as free software under the terms of version 2 of the GNU Lesser General Public License.[3]
Since version 0.105, released in April 2012,[4][5] the name of the project was changed from PolicyKit to polkit to emphasize that the system component was rewritten[6] and that the API had changed, breaking backward compatibility.[7]
Fedora became the first distribution to include PolicyKit, and it has since been used in other distributions, including Ubuntu since version 8.04 and openSUSE since version 10.3. Some distributions, like Fedora,[8] have already switched to the rewritten polkit.
It is also possible to use polkit to execute commands with elevated privileges using the command pkexec followed by the command intended to be executed (with root permission).[9] However, it may be preferable to use sudo, as this command provides more flexibility and security, in addition to being easier to configure.[10]
The polkitd daemon implements Polkit functionality.[11]
A memory corruption vulnerability, PwnKit (CVE-2021-4034[12]), discovered in the pkexec command (installed on all major Linux distributions) was announced on January 25, 2022.[13][14] The vulnerability dates back to the original distribution from 2009. The vulnerability received a CVSS score of 7.8 ("High severity") reflecting serious factors involved in a possible exploit: unprivileged users can gain full root privileges, regardless of the underlying machine architecture or whether the polkit daemon is running or not.
|
https://en.wikipedia.org/wiki/Polkit
|
A networked control system (NCS) is a control system wherein the control loops are closed through a communication network. The defining feature of an NCS is that control and feedback signals are exchanged among the system's components in the form of information packages through a network.
The functionality of a typical NCS is established by the use of four basic elements:
The most important feature of an NCS is that it connects cyberspace to physical space, enabling the execution of several tasks from long distance. In addition, NCSs eliminate unnecessary wiring, reducing the complexity and the overall cost of designing and implementing control systems. They can also be easily modified or upgraded by adding sensors, actuators, and controllers to them with relatively low cost and no major change in their structure. Furthermore, featuring efficient sharing of data between their controllers, NCSs are able to easily fuse global information to make intelligent decisions over large physical spaces.
Their potential applications are numerous and cover a wide range of industries, such as space and terrestrial exploration, access in hazardous environments, factory automation, remote diagnostics and troubleshooting, experimental facilities, domestic robots, aircraft, automobiles, manufacturing plant monitoring, nursing homes and tele-operations. While the potential applications of NCSs are numerous, the proven applications are few, and the real opportunity in the area of NCSs is in developing real-world applications that realize the area's potential.
The advent and development of the Internet, combined with the advantages provided by NCSs, attracted the interest of researchers around the globe. Along with the advantages, several challenges also emerged, giving rise to many important research topics. New control strategies, kinematics of the actuators in the systems, reliability and security of communications, bandwidth allocation, development of data communication protocols, corresponding fault detection and fault-tolerant control strategies, real-time information collection and efficient processing of sensor data are some of the related topics studied in depth.
The insertion of the communication network in the feedback control loop makes the analysis and design of an NCS complex, since it imposes additional time delays in control loops or the possibility of packet loss. Depending on the application, time delays could impose severe degradation on the system performance.
To alleviate the time-delay effect, Y. Tipsuwan and M-Y. Chow, in ADAC Lab at North Carolina State University, proposed the gain scheduler middleware (GSM) methodology and applied it in iSpace. S. Munir and W.J. Book (Georgia Institute of Technology) used a Smith predictor, a Kalman filter and an energy regulator to perform teleoperation through the Internet.[1][2]
K.C. Lee, S. Lee and H.H. Lee used a genetic algorithm to design a controller used in an NCS. Many other researchers provided solutions using concepts from several control areas such as robust control, optimal stochastic control, model predictive control, fuzzy logic, etc.
A critical and important issue in the design of distributed NCSs of successively increasing complexity is to meet the requirements on system reliability and dependability while guaranteeing high system performance over a wide operating range. As a result, network-based fault detection and diagnosis techniques, which are essential for monitoring system performance, receive more and more attention.
|
https://en.wikipedia.org/wiki/Networked_control_system
|
In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably".[1] An overfitted model is a mathematical model that contains more parameters than can be justified by the data.[2] In the special case where the model consists of a polynomial function, these parameters represent the degree of a polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., the noise) as if that variation represented underlying model structure.[3]: 45
Underfitting occurs when a mathematical model cannot adequately capture the underlying structure of the data. An under-fitted model is a model where some parameters or terms that would appear in a correctly specified model are missing.[2] Underfitting would occur, for example, when fitting a linear model to nonlinear data. Such a model will tend to have poor predictive performance.
The possibility of over-fitting exists because the criterion used for selecting the model is not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set of training data, and yet its suitability might be determined by its ability to perform well on unseen data; overfitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend.
As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. (For an illustration, see Figure 2.) Such a model, though, will typically fail severely when making predictions.
Overfitting is directly related to approximation error of the selected function class and the optimization error of the optimization procedure. A function class that is too large, in a suitable sense, relative to the dataset size is likely to overfit.[4] Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new dataset than on the dataset used for fitting (a phenomenon sometimes known as shrinkage).[2] In particular, the value of the coefficient of determination will shrink relative to the original data.
To lessen the chance or amount of overfitting, several techniques are available (e.g., model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout). The basis of some techniques is to either (1) explicitly penalize overly complex models or (2) test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter.
In statistics, an inference is drawn from a statistical model, which has been selected via some procedure. Burnham & Anderson, in their much-cited text on model selection, argue that to avoid overfitting, we should adhere to the "Principle of Parsimony".[3] The authors also state the following.[3]: 32–33
Overfitted models ... are often free of bias in the parameter estimators, but have estimated (and actual) sampling variances that are needlessly large (the precision of the estimators is poor, relative to what could have been accomplished with a more parsimonious model). False treatment effects tend to be identified, and false variables are included with overfitted models. ... A best approximating model is achieved by properly balancing the errors of underfitting and overfitting.
Overfitting is more likely to be a serious concern when there is little theory available to guide the analysis, in part because then there tend to be a large number of models to select from. The book Model Selection and Model Averaging (2008) puts it this way.[5]
Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?
In regression analysis, overfitting occurs frequently.[6] As an extreme example, if there are p variables in a linear regression with p data points, the fitted line can go exactly through every point.[7] For logistic regression or Cox proportional hazards models, there are a variety of rules of thumb (e.g. 5–9,[8] 10[9] and 10–15[10]; the guideline of 10 observations per independent variable is known as the "one in ten rule"). In the process of regression model selection, the mean squared error of the random regression function can be split into random noise, approximation bias, and variance in the estimate of the regression function. The bias–variance tradeoff is often used to overcome overfit models.
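As a hedged illustration of this extreme case (synthetic data, numpy assumed available), fitting a degree-(n−1) polynomial to n noisy points reproduces the training data almost exactly while typically generalizing worse than the simpler model that matches the true relationship:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 8)
    y = 2 * x + rng.normal(scale=0.2, size=x.size)    # truly linear data plus noise

    overfit = np.polyfit(x, y, deg=len(x) - 1)        # as many parameters as data points
    simple = np.polyfit(x, y, deg=1)                  # the model matching the true structure

    x_new = np.linspace(0, 1, 100)                    # unseen inputs
    print(np.max(np.abs(np.polyval(overfit, x) - y)))               # ~0: training points hit exactly
    print(np.max(np.abs(np.polyval(overfit, x_new) - 2 * x_new)))   # typically much larger off the grid
    print(np.max(np.abs(np.polyval(simple, x_new) - 2 * x_new)))    # small error for the simple fit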
With a large set of explanatory variables that actually have no relation to the dependent variable being predicted, some variables will in general be falsely found to be statistically significant and the researcher may thus retain them in the model, thereby overfitting the model. This is known as Freedman's paradox.
Usually, a learning algorithm is trained using some set of "training data": exemplary situations for which the desired output is known. The goal is that the algorithm will also perform well on predicting the output when fed "validation data" that was not encountered during its training.
Overfitting is the use of models or procedures that violate Occam's razor, for example by including more adjustable parameters than are ultimately optimal, or by using a more complicated approach than is ultimately optimal. For an example where there are too many adjustable parameters, consider a dataset where training data for y can be adequately predicted by a linear function of two independent variables. Such a function requires only three parameters (the intercept and two slopes). Replacing this simple function with a new, more complex quadratic function, or with a new, more complex linear function on more than two independent variables, carries a risk: Occam's razor implies that any given complex function is a priori less probable than any given simple function. If the new, more complicated function is selected instead of the simple function, and if there was not a large enough gain in training-data fit to offset the complexity increase, then the new complex function "overfits" the data and the complex overfitted function will likely perform worse than the simpler function on validation data outside the training dataset, even though the complex function performed as well, or perhaps even better, on the training dataset.[11]
When comparing different types of models, complexity cannot be measured solely by counting how many parameters exist in each model; the expressivity of each parameter must be considered as well. For example, it is nontrivial to directly compare the complexity of a neural net (which can track curvilinear relationships) with m parameters to a regression model with n parameters.[11]
Overfitting is especially likely in cases where learning was performed too long or where training examples are rare, causing the learner to adjust to very specific random features of the training data that have no causal relation to the target function. In this process of overfitting, the performance on the training examples still increases while the performance on unseen data becomes worse.
As a simple example, consider a database of retail purchases that includes the item bought, the purchaser, and the date and time of purchase. It's easy to construct a model that will fit the training set perfectly by using the date and time of purchase to predict the other attributes, but this model will not generalize at all to new data because those past times will never occur again.
Generally, a learning algorithm is said to overfit relative to a simpler one if it is more accurate in fitting known data (hindsight) but less accurate in predicting new data (foresight). One can intuitively understand overfitting from the fact that information from all past experience can be divided into two groups: information that is relevant for the future, and irrelevant information ("noise"). Everything else being equal, the more difficult a criterion is to predict (i.e., the higher its uncertainty), the more noise exists in past information that needs to be ignored. The problem is determining which part to ignore. A learning algorithm that can reduce the risk of fitting noise is called "robust."
The most obvious consequence of overfitting is poor performance on the validation dataset. Other negative consequences include:
The optimal function usually needs verification on bigger or completely new datasets. There are, however, methods like minimum spanning tree or life-time of correlation that apply the dependence between correlation coefficients and time series (window width). Whenever the window width is big enough, the correlation coefficients are stable and do not depend on the window width size anymore. Therefore, a correlation matrix can be created by calculating a coefficient of correlation between investigated variables. This matrix can be represented topologically as a complex network where direct and indirect influences between variables are visualized.
Dropout regularisation (random removal of training set data) can also improve robustness and therefore reduce over-fitting by probabilistically removing inputs to a layer.
Underfitting is the inverse of overfitting, meaning that the statistical model or machine learning algorithm is too simplistic to accurately capture the patterns in the data. A sign of underfitting is that there is a high bias and low variance detected in the current model or algorithm used (the inverse of overfitting: low bias and high variance). This can be gathered from the bias–variance tradeoff, which is the method of analyzing a model or algorithm for bias error, variance error, and irreducible error. With a high bias and low variance, the result of the model is that it will inaccurately represent the data points and thus insufficiently be able to predict future data results (see Generalization error). As shown in Figure 5, the linear line could not represent all the given data points due to the line not resembling the curvature of the points. We would expect to see a parabola-shaped line as shown in Figure 6 and Figure 1. If we were to use Figure 5 for analysis, we would get false predictive results contrary to the results if we analyzed Figure 6.
Burnham & Anderson state the following.[3]: 32
... an underfitted model would ignore some important replicable (i.e., conceptually replicable in most other samples) structure in the data and thus fail to identify effects that were actually supported by the data. In this case, bias in the parameter estimators is often substantial, and the sampling variance is underestimated, both factors resulting in poor confidence interval coverage. Underfitted models tend to miss important treatment effects in experimental settings.
There are multiple ways to deal with underfitting:
Benign overfitting describes the phenomenon of a statistical model that seems to generalize well to unseen data, even when it has been fit perfectly on noisy training data (i.e., obtains perfect predictive accuracy on the training set). The phenomenon is of particular interest in deep neural networks, but is studied from a theoretical perspective in the context of much simpler models, such as linear regression. In particular, it has been shown that overparameterization is essential for benign overfitting in this setting. In other words, the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size.[16]
|
https://en.wikipedia.org/wiki/Underfitting
|
Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel.[1][2][3] This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. It is a variant of the Diffie–Hellman protocol using elliptic-curve cryptography.
The following example illustrates how a shared key is established. Suppose Alice wants to establish a shared key with Bob, but the only channel available for them may be eavesdropped by a third party. Initially, the domain parameters (that is, (p,a,b,G,n,h){\displaystyle (p,a,b,G,n,h)} in the prime case or (m,f(x),a,b,G,n,h){\displaystyle (m,f(x),a,b,G,n,h)} in the binary case) must be agreed upon. Also, each party must have a key pair suitable for elliptic curve cryptography, consisting of a private key d{\displaystyle d} (a randomly selected integer in the interval [1,n−1]{\displaystyle [1,n-1]}) and a public key represented by a point Q{\displaystyle Q} (where Q=d⋅G{\displaystyle Q=d\cdot G}, that is, the result of adding G{\displaystyle G} to itself d{\displaystyle d} times). Let Alice's key pair be (dA,QA){\displaystyle (d_{\text{A}},Q_{\text{A}})} and Bob's key pair be (dB,QB){\displaystyle (d_{\text{B}},Q_{\text{B}})}. Each party must know the other party's public key prior to execution of the protocol.
Alice computes point (xk,yk)=dA⋅QB{\displaystyle (x_{k},y_{k})=d_{\text{A}}\cdot Q_{\text{B}}}. Bob computes point (xk,yk)=dB⋅QA{\displaystyle (x_{k},y_{k})=d_{\text{B}}\cdot Q_{\text{A}}}. The shared secret is xk{\displaystyle x_{k}} (the x coordinate of the point). Most standardized protocols based on ECDH derive a symmetric key from xk{\displaystyle x_{k}} using some hash-based key derivation function.
The shared secret calculated by both parties is equal, because dA⋅QB=dA⋅dB⋅G=dB⋅dA⋅G=dB⋅QA{\displaystyle d_{\text{A}}\cdot Q_{\text{B}}=d_{\text{A}}\cdot d_{\text{B}}\cdot G=d_{\text{B}}\cdot d_{\text{A}}\cdot G=d_{\text{B}}\cdot Q_{\text{A}}}.
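A minimal Python sketch of the exchange over a deliberately tiny (and completely insecure) curve, using toy parameters chosen here for illustration rather than any standardized curve, makes this equality concrete:

    # Toy short-Weierstrass curve y^2 = x^3 + x + 6 over GF(11); NOT secure, illustration only.
    # This curve group has prime order 13, so any point other than infinity generates it.
    p, a = 11, 1
    G = (2, 7)                                   # generator; check: 7^2 = 2^3 + 2 + 6 (mod 11)

    def ec_add(P, Q):
        """Add two curve points; None stands for the point at infinity."""
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                          # P + (-P) = infinity
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow((2 * y1) % p, -1, p) % p
        else:
            lam = (y2 - y1) * pow((x2 - x1) % p, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def ec_mul(k, P):
        """Double-and-add scalar multiplication k*P."""
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P)
            P = ec_add(P, P)
            k >>= 1
        return R

    d_A, d_B = 4, 5                              # private keys (toy values)
    Q_A, Q_B = ec_mul(d_A, G), ec_mul(d_B, G)    # public keys exchanged over the open channel
    shared_A, shared_B = ec_mul(d_A, Q_B), ec_mul(d_B, Q_A)
    assert shared_A == shared_B                  # both parties compute the same point
    print(shared_A[0])                           # the shared secret is the x coordinate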
The only information about her key that Alice initially exposes is her public key. So, no party except Alice can determine Alice's private key (Alice of course knows it by having selected it), unless that party can solve the elliptic curve discrete logarithm problem. Bob's private key is similarly secure. No party other than Alice or Bob can compute the shared secret, unless that party can solve the elliptic curve Diffie–Hellman problem.
The public keys are either static (and trusted, say via a certificate) or ephemeral (also known as ECDHE, where the final 'E' stands for "ephemeral"). Ephemeral keys are temporary and not necessarily authenticated, so if authentication is desired, authenticity assurances must be obtained by other means. Authentication is necessary to avoid man-in-the-middle attacks. If one of either Alice's or Bob's public keys is static, then man-in-the-middle attacks are thwarted. Static public keys provide neither forward secrecy nor key-compromise impersonation resilience, among other advanced security properties. Holders of static private keys should validate the other public key, and should apply a secure key derivation function to the raw Diffie–Hellman shared secret to avoid leaking information about the static private key. For schemes with other security properties, see MQV.
If Alice maliciously chooses invalid curve points for her key and Bob does not validate that Alice's points are part of the selected group, she can collect enough residues of Bob's key to derive his private key. Several TLS libraries were found to be vulnerable to this attack.[4]
The shared secret is uniformly distributed on a subset of [0,p){\displaystyle [0,p)} of size (n+1)/2{\displaystyle (n+1)/2}. For this reason, the secret should not be used directly as a symmetric key, but it can be used as entropy for a key derivation function.
Let A,B∈Fp{\displaystyle A,B\in F_{p}} such that B(A2−4)≠0{\displaystyle B(A^{2}-4)\neq 0}. The Montgomery form elliptic curve EM,A,B{\displaystyle E_{M,A,B}} is the set of all (x,y)∈Fp×Fp{\displaystyle (x,y)\in F_{p}\times F_{p}} satisfying the equation By2=x(x2+Ax+1){\displaystyle By^{2}=x(x^{2}+Ax+1)} along with the point at infinity denoted as ∞{\displaystyle \infty }. This is called the affine form of the curve. The set of all Fp{\displaystyle F_{p}}-rational points of EM,A,B{\displaystyle E_{M,A,B}}, denoted as EM,A,B(Fp){\displaystyle E_{M,A,B}(F_{p})}, is the set of all (x,y)∈Fp×Fp{\displaystyle (x,y)\in F_{p}\times F_{p}} satisfying By2=x(x2+Ax+1){\displaystyle By^{2}=x(x^{2}+Ax+1)} along with ∞{\displaystyle \infty }. Under a suitably defined addition operation, EM,A,B(Fp){\displaystyle E_{M,A,B}(F_{p})} is a group with ∞{\displaystyle \infty } as the identity element. It is known that the order of this group is a multiple of 4. In fact, it is usually possible to obtain A{\displaystyle A} and B{\displaystyle B} such that the order of EM,A,B{\displaystyle E_{M,A,B}} is 4q{\displaystyle 4q} for a prime q{\displaystyle q}. For more extensive discussions of Montgomery curves and their arithmetic one may follow.[5][6][7]
For computational efficiency, it is preferable to work with projective coordinates. The projective form of the Montgomery curve EM,A,B{\displaystyle E_{M,A,B}} is BY2Z=X(X2+AXZ+Z2){\displaystyle BY^{2}Z=X(X^{2}+AXZ+Z^{2})}. For a point P=[X:Y:Z]{\displaystyle P=[X:Y:Z]} on EM,A,B{\displaystyle E_{M,A,B}}, the x{\displaystyle x}-coordinate map x{\displaystyle x} is the following:[7] x(P)=[X:Z]{\displaystyle x(P)=[X:Z]} if Z≠0{\displaystyle Z\neq 0} and x(P)=[1:0]{\displaystyle x(P)=[1:0]} if P=[0:1:0]{\displaystyle P=[0:1:0]}. Bernstein[8][9] introduced the map x0{\displaystyle x_{0}} as follows: x0(X:Z)=XZp−2{\displaystyle x_{0}(X:Z)=XZ^{p-2}}, which is defined for all values of X{\displaystyle X} and Z{\displaystyle Z} in Fp{\displaystyle F_{p}}. Following Miller,[10] Montgomery[5] and Bernstein,[9] the Diffie-Hellman key agreement can be carried out on a Montgomery curve as follows. Let Q{\displaystyle Q} be a generator of a prime order subgroup of EM,A,B(Fp){\displaystyle E_{M,A,B}(F_{p})}. Alice chooses a secret key s{\displaystyle s} and has public key x0(sQ){\displaystyle x_{0}(sQ)};
Bob chooses a secret key t{\displaystyle t} and has public key x0(tQ){\displaystyle x_{0}(tQ)}. The shared secret key of Alice and Bob is x0(stQ){\displaystyle x_{0}(stQ)}. Using classical computers, the best known method of obtaining x0(stQ){\displaystyle x_{0}(stQ)} from Q,x0(sQ){\displaystyle Q,x_{0}(sQ)} and x0(tQ){\displaystyle x_{0}(tQ)} requires about O(p1/2){\displaystyle O(p^{1/2})} time using Pollard's rho algorithm.[11]
The most famous example of a Montgomery curve is Curve25519, which was introduced by Bernstein.[9] For Curve25519, p=2255−19,A=486662{\displaystyle p=2^{255}-19,A=486662} and B=1{\displaystyle B=1}.
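In practice one would use an existing X25519 implementation rather than raw curve arithmetic. A minimal sketch using the pyca/cryptography package (assuming it is installed; the info label is arbitrary), with HKDF applied to the raw shared secret rather than using it directly as a key:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Ephemeral key pairs for both parties (ECDHE over Curve25519).
    alice_private = X25519PrivateKey.generate()
    bob_private = X25519PrivateKey.generate()

    # Each side combines its private key with the other side's public key.
    alice_shared = alice_private.exchange(bob_private.public_key())
    bob_shared = bob_private.exchange(alice_private.public_key())
    assert alice_shared == bob_shared

    # Derive a symmetric key from the raw shared secret.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"example handshake").derive(alice_shared)
    print(key.hex())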
The other Montgomery curve which is part of TLS 1.3 is Curve448, which was introduced by Hamburg.[12] For Curve448, p=2448−2224−1,A=156326{\displaystyle p=2^{448}-2^{224}-1,A=156326} and B=1{\displaystyle B=1}. A couple of Montgomery curves named M[4698] and M[4058], competitive with Curve25519 and Curve448 respectively, have been proposed in.[13] For M[4698], p=2251−9,A=4698,B=1{\displaystyle p=2^{251}-9,A=4698,B=1} and for M[4058], p=2444−17,A=4058,B=1{\displaystyle p=2^{444}-17,A=4058,B=1}. At the 256-bit security level, three Montgomery curves named M[996558], M[952902] and M[1504058] have been proposed in.[14] For M[996558], p=2506−45,A=996558,B=1{\displaystyle p=2^{506}-45,A=996558,B=1}, for M[952902], p=2510−75,A=952902,B=1{\displaystyle p=2^{510}-75,A=952902,B=1} and for M[1504058], p=2521−1,A=1504058,B=1{\displaystyle p=2^{521}-1,A=1504058,B=1} respectively. Apart from these two, other proposals of Montgomery curves can be found at.[15]
|
https://en.wikipedia.org/wiki/Elliptic-curve_Diffie%E2%80%93Hellman
|
On the World Wide Web, a link farm is any group of websites that all hyperlink to other sites in the group for the purpose of increasing SEO rankings.[1] In graph theoretic terms, a link farm is a clique. Although some link farms can be created by hand, most are created through automated programs and services. A link farm is a form of spamming the index of a web search engine (sometimes called spamdexing). Other link exchange systems are designed to allow individual websites to selectively exchange links with other relevant websites, and are not considered a form of spamdexing.
Search engines require ways to confirm page relevancy. A known method is to examine for one-way links coming directly from relevant websites. The process of building links should not be confused with being listed on link farms, as the latter requires reciprocal return links, which often renders the overall backlink advantage useless. This is due to oscillation, causing confusion over which is the vendor site and which is the promoting site.
Link farms were first developed by search engine optimizers (SEOs) in 1999 to take advantage of the Inktomi search engine's dependence upon link popularity. Although link popularity is used by some search engines to help establish a ranking order for search results, the Inktomi engine at the time maintained two indexes. Search results were produced from the primary index, which was limited to approximately 100 million listings. Pages with few inbound links fell out of the Inktomi index on a monthly basis.
Inktomi was targeted for manipulation through link farms because it was then used by several independent but popular search engines. Yahoo!, then the most popular search service, also used Inktomi results to supplement its directory search feature. The link farms helped stabilize listings, primarily for online business Websites that had few natural links from larger, more stable sites in the Inktomi index.
Link farm exchanges were at first handled on an informal basis, but several service companies were founded to provide automated registration, categorization, and link page updates to member Websites.
When the Google search engine became popular, search engine optimizers learned that Google's ranking algorithm depended in part on a link-weighting scheme called PageRank. Rather than simply count all inbound links equally, the PageRank algorithm determines that some links may be more valuable than others, and therefore assigns them more weight than others. Link farming was adapted to help increase the PageRank of member pages.[2][3]
However, the link farms became susceptible to manipulation by unscrupulous webmasters who joined the services, received inbound linkage, and then found ways to hide their outbound links or to avoid posting any links on their sites at all. Link farm managers had to implement quality controls and monitor member compliance with their rules to ensure fairness.
Alternative link farm products emerged, particularly link-finding software that identified potential reciprocal link partners, sent them template-based emails offering to exchange links, and created directory-like link pages for Websites, in the hope of building their link popularity and PageRank. These link farms are sometimes considered a spamdexing strategy.
Search engines countered the link farm movement by identifying specific attributes associated with link farm pages and filtering those pages from indexing and search results. In some cases, entire domains were removed from the search engine indexes in order to prevent them from influencing search results.
A private blog network (PBN) is a group of blogs that are owned by the same entity. A blog network can either be a group of loosely connected blogs, or a group of blogs that are owned by the same company. The purpose of such a network is usually to promote other sites outside the network and therefore increase the search engine rankings or advertising revenue generated from online advertising on the sites the PBN links to.
In September 2014, Google targeted private blog networks (PBNs) with manual action ranking penalties.[4] This served to dissuade search engine optimizers and online marketers from using PBNs to increase their online rankings. The "thin content" warnings are closely tied to Panda, which focuses on thin content and on-page quality. PBNs have a history of being targeted by Google and therefore may not be the safest option. Since Google is on the search for blog networks, they are not always linked together. In fact, interlinking your blogs could help Google, and a single exposed blog could reveal the whole blog network by looking at the outbound links.
A blog network may also refer to a central website, such as WordPress, where a user creates an account and is then able to use their own blog. The created blog forms part of a network because it uses either a subdomain or a subfolder of the main domain, although in all other ways it can be entirely autonomous. This is also known as a hosted blog platform and usually uses the free WordPress Multisite software.
Hosted blog networks are also known as Web 2.0 networks, since they became more popular with the rise of the second phase of web development.
|
https://en.wikipedia.org/wiki/Link_farm
|
In mathematics, Legendre's formula gives an expression for the exponent of the largest power of a prime p that divides the factorial n!. It is named after Adrien-Marie Legendre. It is also sometimes known as de Polignac's formula, after Alphonse de Polignac.
For any prime number p and any positive integer n, let νp(n){\displaystyle \nu _{p}(n)} be the exponent of the largest power of p that divides n (that is, the p-adic valuation of n). Then
where ⌊x⌋{\displaystyle \lfloor x\rfloor } is the floor function. While the sum on the right side is an infinite sum, for any particular values of n and p it has only finitely many nonzero terms: for every i large enough that pi>n{\displaystyle p^{i}>n}, one has ⌊npi⌋=0{\displaystyle \textstyle \left\lfloor {\frac {n}{p^{i}}}\right\rfloor =0}. This reduces the infinite sum above to
where L=⌊logpn⌋{\displaystyle L=\lfloor \log _{p}n\rfloor }.
For n = 6, one has 6!=720=24⋅32⋅51{\displaystyle 6!=720=2^{4}\cdot 3^{2}\cdot 5^{1}}. The exponents ν2(6!)=4,ν3(6!)=2{\displaystyle \nu _{2}(6!)=4,\nu _{3}(6!)=2} and ν5(6!)=1{\displaystyle \nu _{5}(6!)=1} can be computed by Legendre's formula as follows:
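(The worked equations themselves are not reproduced here.) As a hedged sketch of the same computation in Python (the helper name legendre_valuation is invented for illustration), reproducing the exponents 4, 2 and 1 above:

    from math import factorial

    def legendre_valuation(n, p):
        """Exponent of the prime p in n!, by Legendre's formula: sum of floor(n / p^i)."""
        total, power = 0, p
        while power <= n:
            total += n // power
            power *= p
        return total

    for p in (2, 3, 5):
        print(p, legendre_valuation(6, p))        # 2 -> 4, 3 -> 2, 5 -> 1

    # Cross-check against a direct factorisation of 6! = 720.
    m, count = factorial(6), 0
    while m % 2 == 0:
        m //= 2
        count += 1
    print(count)                                  # 4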
Since n!{\displaystyle n!} is the product of the integers 1 through n, we obtain at least one factor of p in n!{\displaystyle n!} for each multiple of p in {1,2,…,n}{\displaystyle \{1,2,\dots ,n\}}, of which there are ⌊np⌋{\displaystyle \textstyle \left\lfloor {\frac {n}{p}}\right\rfloor }. Each multiple of p2{\displaystyle p^{2}} contributes an additional factor of p, each multiple of p3{\displaystyle p^{3}} contributes yet another factor of p, etc. Adding up the number of these factors gives the infinite sum for νp(n!){\displaystyle \nu _{p}(n!)}.
One may also reformulate Legendre's formula in terms of the base-p expansion of n. Let sp(n){\displaystyle s_{p}(n)} denote the sum of the digits in the base-p expansion of n; then
For example, writing n = 6 in binary as 6₁₀ = 110₂, we have that s2(6)=1+1+0=2{\displaystyle s_{2}(6)=1+1+0=2} and so
Similarly, writing 6 in ternary as 6₁₀ = 20₃, we have that s3(6)=2+0=2{\displaystyle s_{3}(6)=2+0=2} and so
Write n=nℓpℓ+⋯+n1p+n0{\displaystyle n=n_{\ell }p^{\ell }+\cdots +n_{1}p+n_{0}} in base p. Then ⌊npi⌋=nℓpℓ−i+⋯+ni+1p+ni{\displaystyle \textstyle \left\lfloor {\frac {n}{p^{i}}}\right\rfloor =n_{\ell }p^{\ell -i}+\cdots +n_{i+1}p+n_{i}}, and therefore
Legendre's formula can be used to prove Kummer's theorem. As one special case, it can be used to prove that if n is a positive integer then 4 divides (2nn){\displaystyle {\binom {2n}{n}}} if and only if n is not a power of 2.
It follows from Legendre's formula that the p-adic exponential function has radius of convergence p−1/(p−1){\displaystyle p^{-1/(p-1)}}.
|
https://en.wikipedia.org/wiki/Legendre%27s_formula
|
Crowd computing is a form of distributed work where tasks that are hard for computers to do are handled by large numbers of humans distributed across the internet.
It is an overarching term encompassing tools that enable idea sharing, non-hierarchical decision making and utilization of "cognitive surplus", the ability of the world's population to collaborate on large, sometimes global projects.[1] Crowd computing combines elements of crowdsourcing, automation, distributed computing, and machine learning.
Prof. Rob Miller of MIT further defines crowd computing as “harnessing the power of people out in the web to do tasks that are hard for individual users or computers to do alone. Like cloud computing, crowd computing offers elastic, on-demand human resources that can drive new applications and new ways of thinking about technology.”[2]
The practice predates the internet. At the end of the 18th century, the British Royal Astronomers distributed spreadsheets by mail, asking the crowd to help them create maps of the stars and the seas. In the United States during the 1930s, the government employed hundreds of "human computers" to work on the WPA and the Manhattan Project.[3]
The modern day microchip made using large crowds for mechanical computation less attractive in the second half of the twentieth century. However, as the volume of data online grew, it became clear to companies like Amazon and Google that there were some things humans were simply better at doing than machines.[4]
|
https://en.wikipedia.org/wiki/Crowd_computing
|
A random seed (or seed state, or just seed) is a number (or vector) used to initialize a pseudorandom number generator.
A pseudorandom number generator's number sequence is completely determined by the seed: thus, if a pseudorandom number generator is later reinitialized with the same seed, it will produce the same sequence of numbers.
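A minimal Python sketch of this determinism (the seed value 42 is arbitrary):

    import random

    random.seed(42)
    first_run = [random.random() for _ in range(3)]

    random.seed(42)                 # reinitialize with the same seed
    second_run = [random.random() for _ in range(3)]

    print(first_run == second_run)  # True: the same seed reproduces the same sequence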
For a seed to be used in a pseudorandom number generator, it does not need to be random. Because of the nature of number generating algorithms, so long as the original seed is ignored, the rest of the values that the algorithm generates will follow a probability distribution in a pseudorandom manner.
The choice of a good random seed is crucial in the field of computer security. When a secret encryption key is pseudorandomly generated, having the seed will allow one to obtain the key. High entropy is important for selecting good random seed data.[1]
Random seeds need to be chosen carefully in order to ensure random number generation. If a seed is chosen that doesn't provide actual random results, the numbers given by the PRNG (pseudorandom number generator) will not work properly in an application that needs them. Charting the output values of a PRNG with a scatter plot is a good way to find out if the seed is working. If the graph shows static, then the PRNG is giving random results, but if a pattern appears, the seed needs to be fixed.[2][3]
If the same random seed is deliberately shared, it becomes a secret key, so two or more systems using matching pseudorandom number algorithms and matching seeds can generate matching sequences of non-repeating numbers which can be used to synchronize remote systems, such as GPS satellites and receivers.[3]
Random seeds are often generated from the state of the computer system (such as the time), a cryptographically secure pseudorandom number generator or from a hardware random number generator.
|
https://en.wikipedia.org/wiki/Random_seed
|
Algorithmic inference gathers new developments in the statistical inference methods made feasible by the powerful computing devices widely available to any data analyst. Cornerstones in this field are computational learning theory, granular computing, bioinformatics, and, long ago, structural probability (Fraser 1966).
The main focus is on the algorithms which compute statistics rooting the study of a random phenomenon, along with the amount of data they must feed on to produce reliable results. This shifts the interest of mathematicians from the study of the distribution laws to the functional properties of the statistics, and the interest of computer scientists from the algorithms for processing data to the information they process.
Concerning the identification of the parameters of a distribution law, the mature reader may recall lengthy disputes in the mid 20th century about the interpretation of their variability in terms of fiducial distribution (Fisher 1956), structural probabilities (Fraser 1966), priors/posteriors (Ramsey 1925), and so on. From an epistemology viewpoint, this entailed a companion dispute as to the nature of probability: is it a physical feature of phenomena to be described through random variables or a way of synthesizing data about a phenomenon? Opting for the latter, Fisher defines a fiducial distribution law of parameters of a given random variable that he deduces from a sample of its specifications. With this law he computes, for instance, "the probability that μ (mean of a Gaussian variable – our note) is less than any assigned value, or the probability that it lies between any assigned values, or, in short, its probability distribution, in the light of the sample observed".
Fisher fought hard to defend the difference and superiority of his notion of parameter distribution in comparison to
analogous notions, such as Bayes' posterior distribution, Fraser's constructive probability and Neyman's confidence intervals. For half a century, Neyman's confidence intervals won out for all practical purposes, crediting the phenomenological nature of probability. With this perspective, when you deal with a Gaussian variable, its mean μ is fixed by the physical features of the phenomenon you are observing, where the observations are random operators, hence the observed values are specifications of a random sample. Because of their randomness, you may compute from the sample specific intervals containing the fixed μ with a given probability that you denote confidence.
Let X be a Gaussian variable[1] with parameters μ{\displaystyle \mu } and σ2{\displaystyle \sigma ^{2}} and {X1,…,Xm}{\displaystyle \{X_{1},\ldots ,X_{m}\}} a sample drawn from it. Working with statistics
and
is the sample mean, we recognize that
follows a Student's t distribution (Wilks 1962) with parameter (degrees of freedom) m − 1, so that
Gauging T between two quantiles and inverting its expression as a function of μ{\displaystyle \mu } you obtain confidence intervals for μ{\displaystyle \mu }.
With the sample specification:
having size m = 10, you compute the statistics sμ=43.37{\displaystyle s_{\mu }=43.37} and sσ2=46.07{\displaystyle s_{\sigma ^{2}}=46.07}, and obtain a 0.90 confidence interval for μ{\displaystyle \mu } with extremes (3.03, 5.65).
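A hedged sketch of this classic interval computation in Python, interpreting sμ as the sum of the observations and sσ2 as the sum of squared deviations from the sample mean (an assumption on my part, made because it reproduces the interval (3.03, 5.65) quoted above):

    from scipy.stats import t

    m = 10
    s_mu = 43.37          # sum of the observations
    s_sigma2 = 46.07      # sum of squared deviations from the sample mean

    x_bar = s_mu / m                      # sample mean
    s2 = s_sigma2 / (m - 1)               # unbiased sample variance
    half_width = t.ppf(0.95, df=m - 1) * (s2 / m) ** 0.5   # two-sided 0.90 interval

    print(round(x_bar - half_width, 2), round(x_bar + half_width, 2))   # 3.03 5.65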
From a modeling perspective the entire dispute looks like a chicken-and-egg dilemma: either fixed data first and the probability distribution of their properties as a consequence, or fixed properties first and the probability distribution of the observed data as a corollary.
The classic solution has one benefit and one drawback. The former was appreciated particularly back when people still did computations with sheet and pencil. Per se, the task of computing a Neyman confidence interval for the fixed parameter θ is hard: you do not know θ, but you look for disposing around it an interval with a possibly very low probability of failing. The analytical solution is allowed for a very limited number of theoretical cases. Vice versa a large variety of instances may be quickly solved in an approximate way via the central limit theorem in terms of a confidence interval around a Gaussian distribution – that's the benefit.
The drawback is that the central limit theorem is applicable when the sample size is sufficiently large. Therefore, it is less and less applicable with the sample involved in modern inference instances. The fault is not in the sample size on its own part. Rather, this size is not sufficiently large because of the complexity of the inference problem.
With the availability of large computing facilities, scientists refocused from isolated parameter inference to complex function inference, i.e. regarding sets of highly nested parameters identifying functions. In these cases we speak about learning of functions (in terms for instance of regression, neuro-fuzzy systems or computational learning) on the basis of highly informative samples. A first effect of having a complex structure linking data is the reduction of the number of sample degrees of freedom, i.e. the burning of a part of the sample points, so that the effective sample size to be considered in the central limit theorem is too small. Focusing on the sample size ensuring a limited learning error with a given confidence level, the consequence is that the lower bound on this size grows with complexity indices such as VC dimension or detail of a class to which the function we want to learn belongs.
A sample of 1,000 independent bits is enough to ensure an absolute error of at most 0.081 on the estimation of the parameterpof the underlying Bernoulli variable with a confidence of at least 0.99. The same size cannot guarantee a threshold less than 0.088 with the same confidence 0.99 when the error is identified with the probability that a 20-year-old man living in New York does not fit the ranges of height, weight and waistline observed on 1,000 Big Apple inhabitants. The accuracy shortage occurs because both the VC dimension and the detail of the class of parallelepipeds, among which the one observed from the 1,000 inhabitants' ranges falls, are equal to 6.
With insufficiently large samples, the approach fixed sample – random properties suggests inference procedures in three steps: (i) identify a sampling mechanism for the random variable under study; (ii) write master equations relating the observed statistics to the unknown parameters and the sample seeds; (iii) transfer the probability law of the seeds to the parameters.
For example, a sampling mechanism $(U, g_{(a,k)})$ for a Pareto variable $X$ with parameters $a$ and $k$, with seed $U$ uniform on $[0,1]$, reads

$$x = k\,u^{-1/a},$$

or, equivalently, $g_{(a,k)}(u) = k\,u^{-1/a}$.
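To make the mechanism concrete, here is a minimal sketch that explains uniform seeds through $g_{(a,k)}$ to produce a Pareto sample (function and variable names are ours):

```python
import numpy as np

def pareto_sampling_mechanism(a, k, m, rng=None):
    """Generate a Pareto(a, k) sample by mapping uniform seeds u
    through the explaining function g_(a,k)(u) = k * u**(-1/a)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=m)      # the seeds of the sample
    return k * u ** (-1.0 / a)
```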
With these relations we may inspect the values of the parameters that could have generated a sample with the observed statistic, starting from a particular setting of the seeds of the sample. Hence, to the population of sample seeds corresponds a population of parameters. In order to ensure that this population has clean properties, it is enough to draw the seed values randomly and to involve either sufficient statistics or, simply, well-behaved statistics w.r.t. the parameters in the master equations.
For example, the statistics $s_1 = \sum_{i=1}^m \log x_i$ and $s_2 = \min_{i=1,\ldots,m}\{x_i\}$ prove to be sufficient for the parameters $a$ and $k$ of a Pareto random variable $X$. Thanks to the (equivalent form of the) sampling mechanism $g_{(a,k)}$ we may read them as

$$s_1 = m\log k - \frac{1}{a}\sum_{i=1}^m \log u_i \quad\text{and}\quad s_2 = k\left(\max_{i=1,\ldots,m}\{u_i\}\right)^{-1/a},$$

respectively, where $s_1$ and $s_2$ are the observed statistics and $u_1,\ldots,u_m$ a set of uniform seeds. Transferring to the parameters the probability (density) affecting the seeds, you obtain the distribution law of the random parameters $A$ and $K$ compatible with the statistics you have observed.
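In code, the population bootstrap amounts to redrawing seed vectors and solving the two master equations for $(a, k)$ each time; inverting the equations above gives $a = (m\log u_{\max} - \sum_i \log u_i)/(s_1 - m\log s_2)$ and $k = s_2\,u_{\max}^{1/a}$. The following sketch is illustrative, not the authors' implementation:

```python
import numpy as np

def pareto_population_bootstrap(s1, s2, m, n_boot=10_000, seed=0):
    """Draw candidate (a, k) pairs compatible with the observed statistics
    s1 = sum(log x_i) and s2 = min(x_i), by solving the master equations
    for freshly drawn uniform seeds."""
    rng = np.random.default_rng(seed)
    a_values, k_values = np.empty(n_boot), np.empty(n_boot)
    for j in range(n_boot):
        u = rng.uniform(size=m)
        u_max = u.max()
        a = (m * np.log(u_max) - np.log(u).sum()) / (s1 - m * np.log(s2))
        k = s2 * u_max ** (1.0 / a)
        a_values[j], k_values[j] = a, k
    return a_values, k_values
```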
Compatibility denotes parameters of compatible populations, i.e. of populations that could have generated a sample giving rise to the observed statistics. You may formalize this notion as follows:
For a random variable and a sample drawn from it, a compatible distribution is a distribution having the same sampling mechanism $\mathcal{M}_X = (Z, g_{\boldsymbol{\theta}})$ as $X$, with a value $\boldsymbol{\theta}$ of the random parameter $\mathbf{\Theta}$ derived from a master equation rooted in a well-behaved statistic.
You may find the distribution law of the Pareto parameters $A$ and $K$ as an implementation example of the population bootstrap method, as in the figure on the left.
Implementing the twisting argument method, you get the distribution law $F_M(\mu)$ of the mean $M$ of a Gaussian variable $X$ on the basis of the statistic $s_M = \sum_{i=1}^m x_i$ when $\Sigma^2$ is known to be equal to $\sigma^2$ (Apolloni, Malchiodi & Gaito 2006). Its expression is

$$F_M(\mu) = 1 - \Phi\left(\frac{s_M - m\mu}{\sqrt{m}\,\sigma}\right),$$

shown in the figure on the right, where $\Phi$ is the cumulative distribution function of a standard normal distribution.
Computing a confidence interval for $M$ given its distribution function is straightforward: we need only find two quantiles (for instance the $\delta/2$ and $1-\delta/2$ quantiles, in case we are interested in a confidence interval of level δ symmetric in the tails' probabilities), as indicated on the left in the diagram showing the behavior of the two bounds for different values of the statistic $s_M$.
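Inverting the closed form of $F_M$ makes the quantiles explicit: $F_M(\mu_q) = q$ gives $\mu_q = s_M/m + (\sigma/\sqrt{m})\,\Phi^{-1}(q)$, so the interval computation reduces to a few lines (a sketch with illustrative names):

```python
from scipy.stats import norm

def mean_confidence_interval(s_M, m, sigma, delta=0.10):
    """Confidence interval for the Gaussian mean M from the twisting-argument
    law F_M(mu) = 1 - Phi((s_M - m*mu) / (sqrt(m) * sigma))."""
    quantile = lambda q: s_M / m + sigma / m**0.5 * norm.ppf(q)
    return quantile(delta / 2), quantile(1 - delta / 2)
```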
The Achilles heel of Fisher's approach lies in the joint distribution of more than one parameter, say the mean and variance of a Gaussian distribution. On the contrary, with the last approach (and the above-mentioned methods: population bootstrap and twisting argument) we may learn the joint distribution of many parameters. For instance, focusing on the distribution of two or more parameters, the figures below report two confidence regions where the function to be learnt falls with a confidence of 90%. The former concerns the probability with which an extended support vector machine attributes a binary label 1 to the points of the $(x,y)$ plane. The two surfaces are drawn on the basis of a set of sample points, in turn labelled according to a specific distribution law (Apolloni et al. 2008). The latter concerns the confidence region of the hazard rate of breast cancer recurrence computed from a censored sample (Apolloni, Malchiodi & Gaito 2006).
|
https://en.wikipedia.org/wiki/Algorithmic_inference
|
TrustRank is an algorithm that conducts link analysis to separate useful webpages from spam and helps search engines rank pages in SERPs (Search Engine Results Pages). It is a semi-automated process, which means that it needs some human assistance in order to function properly. Search engines use many different algorithms and ranking factors when measuring the quality of webpages; TrustRank is one of them.
Because manual review of the Internet is impractical and very expensive, TrustRank was introduced in order to help achieve this task much more quickly and cheaply. It was first introduced by researchers Zoltan Gyongyi and Hector Garcia-Molina of Stanford University and Jan Pedersen of Yahoo! in their paper "Combating Web Spam with TrustRank" in 2004.[1] Today, this algorithm is a part of major web search engines like Yahoo! and Google.[2]
One of the most important factors that help web search engines determine the quality of a web page when returning results is backlinks. Search engines take the number and quality of backlinks into consideration when assigning a place to a certain web page in SERPs. Many web spam pages are created only with the intention of misleading search engines. These pages, chiefly created for commercial reasons, use various techniques to achieve higher-than-deserved rankings in the search engines' result pages. While human experts can easily identify spam, search engines are still being improved daily in order to do it without the help of humans.
One popular method for improving rankings is to increase the perceived importance of a document through complex linking schemes. Google's PageRank and other search ranking algorithms have been subjected to such manipulation.
TrustRank seeks to combat spam by filtering the web based upon reliability. The method calls for selecting a small set of seed pages to be evaluated by an expert. Once the reputable seed pages are manually identified, a crawl extending outward from the seed set seeks out similarly reliable and trustworthy pages. The trust score assigned by TrustRank diminishes with increased distance between a document and the seed set.
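In the original paper's formulation, trust is propagated as a biased PageRank whose teleportation vector is concentrated on the manually vetted seed pages. The sketch below illustrates that computation on a small link graph; the damping value and the dense-matrix encoding are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def trustrank(adjacency, seeds, beta=0.85, iterations=50):
    """Propagate trust from a vetted seed set: t = beta * T^t @ t + (1 - beta) * d,
    where T is the row-normalized link matrix and d puts all static trust
    on the seed pages."""
    A = np.asarray(adjacency, dtype=float)     # A[i, j] = 1 if page i links to j
    out_degree = A.sum(axis=1, keepdims=True)
    T = np.divide(A, out_degree, out=np.zeros_like(A), where=out_degree > 0)
    d = np.zeros(A.shape[0])
    d[list(seeds)] = 1.0 / len(seeds)          # static trust only on seeds
    t = d.copy()
    for _ in range(iterations):
        t = beta * T.T @ t + (1 - beta) * d
    return t                                   # trust decays with distance from seeds
```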
The logic works in the opposite direction as well, in what is called Anti-Trust Rank: the closer a site is to spam resources, the more likely it is to be spam as well.[3]
The researchers who proposed the TrustRank methodology have continued to refine their work by evaluating related topics, such as measuring spam mass.
|
https://en.wikipedia.org/wiki/TrustRank
|
Language acquisition is the process by which humans acquire the capacity to perceive and comprehend language. In other words, it is how human beings gain the ability to be aware of language, to understand it, and to produce and use words and sentences to communicate.
Language acquisition involves structures, rules, and representation. The capacity to successfully use language requires human beings to acquire a range of tools, including phonology, morphology, syntax, semantics, and an extensive vocabulary. Language can be vocalized as in speech, or manual as in sign.[1] Human language capacity is represented in the brain. Even though human language capacity is finite, one can say and understand an infinite number of sentences, which is based on a syntactic principle called recursion. Evidence suggests that every individual has three recursive mechanisms that allow sentences to be extended indefinitely. These three mechanisms are: relativization, complementation and coordination.[2]
There are two main guiding principles in first-language acquisition: speech perception always precedes speech production, and the gradually evolving system by which a child learns a language is built up one step at a time, beginning with the distinction between individual phonemes.[3]
For many years, linguists interested in child language acquisition have questioned how language is acquired. Lidz et al. state, "The question of how these structures are acquired, then, is more properly understood as the question of how a learner takes the surface forms in the input and converts them into abstract linguistic rules and representations."[4]
Language acquisition usually refers to first-language acquisition, which studies infants' acquisition of their native language, whether that is a spoken language or a sign language,[1] though it can also refer to bilingual first language acquisition (BFLA), an infant's simultaneous acquisition of two native languages.[5][6][7][8][9][10][11] This is distinguished from second-language acquisition, which deals with the acquisition (in both children and adults) of additional languages. On top of speech, reading and writing a language with an entirely different script increases the complexities of true foreign language literacy. Language acquisition is one of the quintessential human traits.[12][13]
Some early observation-based ideas about language acquisition were proposed by Plato, who felt that word-meaning mapping in some form was innate. Additionally, Sanskrit grammarians debated for over twelve centuries whether humans' ability to recognize the meaning of words was god-given (possibly innate) or passed down by previous generations and learned from already established conventions: a child learning the word for cow by listening to trusted speakers talking about cows.[14]
Philosophers in ancient societies were interested in how humans acquired the ability to understand and produce language well before empirical methods for testing those theories were developed, but for the most part they seemed to regard language acquisition as a subset of man's ability to acquire knowledge and learn concepts.[15]
Empiricists, like Thomas Hobbes and John Locke, argued that knowledge (and, for Locke, language) emerges ultimately from abstracted sense impressions. These arguments lean towards the "nurture" side of the debate: that language is acquired through sensory experience. That line of thought led to Rudolf Carnap's Aufbau, an attempt to derive all knowledge from sense data, using the notion of "remembered as similar" to bind them into clusters, which would eventually map onto language.[16]
Proponents of behaviorism argued that language may be learned through a form of operant conditioning. In Verbal Behavior (1957), B. F. Skinner suggested that the successful use of a sign, such as a word or lexical unit, given a certain stimulus, reinforces its "momentary" or contextual probability. Since operant conditioning is contingent on reinforcement by rewards, a child would learn that a specific combination of sounds means a specific thing through repeated successful associations made between the two. A "successful" use of a sign would be one in which the child is understood (for example, a child saying "up" when they want to be picked up) and rewarded with the desired response from another person, thereby reinforcing the child's understanding of the meaning of that word and making it more likely that they will use that word in a similar situation in the future. Some empiricist theories of language acquisition include statistical learning theory, relational frame theory, functionalist linguistics, social interactionist theory, and usage-based language acquisition.
Skinner's behaviorist idea was strongly attacked by Noam Chomsky in a review article in 1959, calling it "largely mythology" and a "serious delusion."[17] Arguments against Skinner's idea of language acquisition through operant conditioning include the fact that children often ignore language corrections from adults. Instead, children typically follow a pattern of using an irregular form of a word correctly, making errors later on, and eventually returning to the proper use of the word. For example, a child may correctly learn the word "gave" (past tense of "give") and later on use the word "gived". Eventually, the child will typically go back to using the correct word, "gave". Chomsky claimed that this pattern is difficult to attribute to Skinner's idea of operant conditioning as the primary way children acquire language. Chomsky argued that if language were solely acquired through behavioral conditioning, children would be unlikely to first learn the proper use of a word and then suddenly use the word incorrectly.[18] Chomsky believed that Skinner failed to account for the central role of syntactic knowledge in language competence. Chomsky also rejected the term "learning", which Skinner used to claim that children "learn" language through operant conditioning.[19] Instead, Chomsky argued for a mathematical approach to language acquisition, based on a study of syntax.
The capacity to acquire and use language is a key aspect that distinguishes humans from other beings. Although it is difficult to pin down what aspects of language are uniquely human, there are a few design features that can be found in all known forms of human language, but that are missing from forms of animal communication. For example, many animals are able to communicate with each other by signaling to the things around them, but this kind of communication lacks the arbitrariness of human vernaculars (in that there is nothing about the sound of the word "dog" that would hint at its meaning). Other forms of animal communication may utilize arbitrary sounds, but are unable to combine those sounds in different ways to create completely novel messages that can then be automatically understood by another. Hockett called this design feature of human language "productivity". It is crucial to the understanding of human language acquisition that humans are not limited to a finite set of words, but, rather, must be able to understand and utilize a complex system that allows for an infinite number of possible messages. So, while many forms of animal communication exist, they differ from human language in that they have a limited range of vocabulary tokens, and the vocabulary items are not combined syntactically to create phrases.[20]
Herbert S. Terrace conducted a study on a chimpanzee known as Nim Chimpsky in an attempt to teach him American Sign Language. This study was an attempt to further research done with a chimpanzee named Washoe, who was reportedly able to acquire American Sign Language. However, upon further inspection, Terrace concluded that both experiments were failures.[21] While Nim was able to acquire signs, he never acquired a knowledge of grammar, and was unable to combine signs in a meaningful way. Researchers noticed that "signs that seemed spontaneous were, in fact, cued by teachers",[22] and not actually productive. When Terrace reviewed Project Washoe, he found similar results. He postulated that there is a fundamental difference between animals and humans in their motivation to learn language: animals, as in Nim's case, are motivated only by physical reward, while humans learn language in order to "create a new type of communication".[23]
In another language acquisition study, Jean-Marc-Gaspard Itard attempted to teach Victor of Aveyron, a feral child, how to speak. Victor was able to learn a few words, but ultimately never fully acquired language.[24] Slightly more successful was a study done on Genie, another child never introduced to society. She had been entirely isolated for the first thirteen years of her life by her father. Caretakers and researchers attempted to measure her ability to learn a language. She was able to acquire a large vocabulary, but never acquired grammatical knowledge. Researchers concluded that the theory of a critical period was true: Genie was too old to learn how to speak productively, although she was still able to comprehend language.[25]
A major debate in understanding language acquisition is how these capacities are picked up by infants from the linguistic input.[26] Input in the linguistic context is defined as "All words, contexts, and other forms of language to which a learner is exposed, relative to acquired proficiency in first or second languages". Nativists such as Chomsky have focused on the hugely complex nature of human grammars, the finiteness and ambiguity of the input that children receive, and the relatively limited cognitive abilities of an infant. From these characteristics, they conclude that the process of language acquisition in infants must be tightly constrained and guided by the biologically given characteristics of the human brain. Otherwise, they argue, it is extremely difficult to explain how children, within the first five years of life, routinely master the complex, largely tacit grammatical rules of their native language.[27] Additionally, the evidence of such rules in their native language is all indirect: adult speech to children cannot encompass all of what children know by the time they have acquired their native language.[28]
Other scholars,[who?] however, have resisted the possibility that infants' routine success at acquiring the grammar of their native language requires anything more than the forms of learning seen with other cognitive skills, including such mundane motor skills as learning to ride a bike. In particular, there has been resistance to the possibility that human biology includes any form of specialization for language. This conflict is often referred to as the "nature and nurture" debate. Of course, most scholars acknowledge that certain aspects of language acquisition must result from the specific ways in which the human brain is "wired" (a "nature" component, which accounts for the failure of non-human species to acquire human languages) and that certain others are shaped by the particular language environment in which a person is raised (a "nurture" component, which accounts for the fact that humans raised in different societies acquire different languages). The as-yet unresolved question is the extent to which the specific cognitive capacities in the "nature" component are also used outside of language.[citation needed]
Emergentist theories, such as Brian MacWhinney's competition model, posit that language acquisition is a cognitive process that emerges from the interaction of biological pressures and the environment. According to these theories, neither nature nor nurture alone is sufficient to trigger language learning; both of these influences must work together in order to allow children to acquire a language. The proponents of these theories argue that general cognitive processes subserve language acquisition and that the result of these processes is language-specific phenomena, such as word learning and grammar acquisition. The findings of many empirical studies support the predictions of these theories, suggesting that language acquisition is a more complex process than many have proposed.[29]
Although Chomsky's theory of a generative grammar has been enormously influential in the field of linguistics since the 1950s, many criticisms of the basic assumptions of generative theory have been put forth by cognitive-functional linguists, who argue that language structure is created through language use.[30] These linguists argue that the concept of a language acquisition device (LAD) is unsupported by evolutionary anthropology, which tends to show a gradual adaptation of the human brain and vocal cords to the use of language, rather than a sudden appearance of a complete set of binary parameters delineating the whole spectrum of possible grammars ever to have existed and ever to exist.[31] On the other hand, cognitive-functional theorists use this anthropological data to show how human beings have evolved the capacity for grammar and syntax to meet our demand for linguistic symbols. (Binary parameters are common to digital computers, but may not be applicable to neurological systems such as the human brain.)[citation needed]
Further, the generative theory has several constructs (such as movement, empty categories, complex underlying structures, and strict binary branching) that cannot possibly be acquired from any amount of linguistic input. It is unclear that human language is actually anything like the generative conception of it. Since language, as imagined by nativists, is unlearnably complex,[citation needed] subscribers to this theory argue that it must, therefore, be innate.[32] Nativists hypothesize that some features of syntactic categories exist even before a child is exposed to any experience: categories on which children map words of their language as they learn their native language.[33] A different theory of language, however, may yield different conclusions. While all theories of language acquisition posit some degree of innateness, they vary in how much value they place on this innate capacity to acquire language. Empiricism places less value on the innate knowledge, arguing instead that the input, combined with both general and language-specific learning capacities, is sufficient for acquisition.[34]
Since 1980, linguists studying children, such as Melissa Bowerman and Asifa Majid,[35] and psychologists following Jean Piaget, like Elizabeth Bates[36] and Jean Mandler, came to suspect that there may indeed be many learning processes involved in the acquisition process, and that ignoring the role of learning may have been a mistake.[citation needed]
In recent years, the debate surrounding the nativist position has centered on whether the inborn capabilities are language-specific or domain-general, such as those that enable the infant to visually make sense of the world in terms of objects and actions. The anti-nativist view has many strands, but a frequent theme is that language emerges from usage in social contexts, using learning mechanisms that are a part of an innate general cognitive learning apparatus. This position has been championed by David M. W. Powers,[37] Elizabeth Bates,[38] Catherine Snow, Anat Ninio, Brian MacWhinney, Michael Tomasello,[20] Michael Ramscar,[39] William O'Grady,[40] and others. Philosophers, such as Fiona Cowie[41] and Barbara Scholz with Geoffrey Pullum,[42] have also argued against certain nativist claims in support of empiricism.
The new field of cognitive linguistics has emerged as a specific counter to Chomsky's Generative Grammar and to Nativism.
Some language acquisition researchers, such as Elissa Newport, Richard Aslin, and Jenny Saffran, emphasize the possible roles of general learning mechanisms, especially statistical learning, in language acquisition. The development of connectionist models that when implemented are able to successfully learn words and syntactical conventions[43] supports the predictions of statistical learning theories of language acquisition, as do empirical studies of children's detection of word boundaries.[44] In a series of connectionist model simulations, Franklin Chang has demonstrated that such a domain-general statistical learning mechanism could explain a wide range of language structure acquisition phenomena.[45]
Statistical learning theory suggests that, when learning language, a learner would use the natural statistical properties of language to deduce its structure, including sound patterns, words, and the beginnings of grammar.[46] That is, language learners are sensitive to how often syllable combinations or words occur in relation to other syllables.[44][47][48] Infants between 21 and 23 months old are also able to use statistical learning to develop "lexical categories", such as an animal category, which infants might later map to newly learned words in the same category. These findings suggest that early experience listening to language is critical to vocabulary acquisition.[48]
The statistical abilities are effective, but also limited by what qualifies as input, what is done with that input, and by the structure of the resulting output.[46]Statistical learning (and more broadly, distributional learning) can be accepted as a component of language acquisition by researchers on either side of the "nature and nurture" debate. From the perspective of that debate, an important question is whether statistical learning can, by itself, serve as an alternative to nativist explanations for the grammatical constraints of human language.
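The core computation behind these segmentation findings is simple enough to sketch: estimate the transitional probability between adjacent syllables and treat dips as candidate word boundaries. The toy stream and the 0.5 threshold below are illustrative, not taken from any cited study.

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """Estimate P(next syllable | current syllable) from bigram counts."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Toy "language": three words uttered in random order with no pauses.
words = [["bi", "da", "ku"], ["pa", "do", "ti"], ["go", "la", "bu"]]
stream = [syllable for _ in range(200) for syllable in random.choice(words)]

tp = transitional_probabilities(stream)
# Within-word transitions (e.g. bi -> da) come out near 1.0, while transitions
# spanning a word boundary (e.g. ku -> pa) hover near 1/3: the dips mark boundaries.
boundaries = {pair for pair, p in tp.items() if p < 0.5}
```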
Chunking theories of language acquisition constitute a group of theories related to statistical learning theories, in that they assume that the input from the environment plays an essential role; however, they postulate different learning mechanisms.[clarification needed] The central idea of these theories is that language development occurs through the incremental acquisition of meaningful chunks of elementary constituents, which can be words, phonemes, or syllables. Recently, this approach has been highly successful in simulating several phenomena in the acquisition of syntactic categories[49] and the acquisition of phonological knowledge.[50]
Researchers at the Max Planck Institute for Evolutionary Anthropology have developed a computer model analyzing early toddler conversations to predict the structure of later conversations. They showed that toddlers develop their own individual rules for speaking, with 'slots' into which they put certain kinds of words. A significant outcome of this research is that rules inferred from toddler speech were better predictors of subsequent speech than traditional grammars.[51]
This approach has several features that make it unique: the models are implemented as computer programs, which enables clear-cut and quantitative predictions to be made; they learn from naturalistic input (actual child-directed utterances); and they attempt to create their own utterances. The model was tested in languages including English, Spanish, and German. Chunking for this model was shown to be most effective in learning a first language, but it was also able to create utterances when learning a second language.[52]
Relational frame theory (RFT) (Hayes, Barnes-Holmes, Roche, 2001) provides a wholly selectionist/learning account of the origin and development of language competence and complexity. Based upon the principles of Skinnerian behaviorism, RFT posits that children acquire language purely through interacting with the environment. RFT theorists introduced the concept of functional contextualism in language learning, which emphasizes the importance of predicting and influencing psychological events, such as thoughts, feelings, and behaviors, by focusing on manipulable variables in their own context. RFT distinguishes itself from Skinner's work by identifying and defining a particular type of operant conditioning known as derived relational responding, a learning process that, to date, appears to occur only in humans possessing a capacity for language. Empirical studies supporting the predictions of RFT suggest that children learn language through a system of inherent reinforcements, challenging the view that language acquisition is based upon innate, language-specific cognitive capacities.[53]
Social interactionist theory is an explanation of language development emphasizing the role of social interaction between the developing child and linguistically knowledgeable adults. It is based largely on the socio-cultural theories of Soviet psychologist Lev Vygotsky, and was made prominent in the Western world by Jerome Bruner.[54]
Unlike other approaches, it emphasizes the role of feedback and reinforcement in language acquisition. Specifically, it asserts that much of a child's linguistic growth stems from modeling of and interaction with parents and other adults, who very frequently provide instructive correction.[55] It is thus somewhat similar to behaviorist accounts of language learning. It differs substantially, though, in that it posits the existence of a social-cognitive model and other mental structures within children (a sharp contrast to the "black box" approach of classical behaviorism).
Another key idea within the theory of social interactionism is that of the zone of proximal development. This is a theoretical construct denoting the set of tasks a child is capable of performing with guidance but not alone.[56] As applied to language, it describes the set of linguistic tasks (for example, proper syntax, suitable vocabulary usage) that a child cannot carry out on its own at a given time, but can learn to carry out if assisted by an able adult.
As syntax began to be studied more closely in the early 20th century in relation to language learning, it became apparent to linguists, psychologists, and philosophers that knowing a language was not merely a matter of associating words with concepts, but that a critical aspect of language involves knowledge of how to put words together; sentences are usually needed in order to communicate successfully, not just isolated words.[15] A child will use short expressions such as Bye-bye Mummy or All-gone milk, which actually are combinations of individual nouns and an operator,[57] before they begin to produce gradually more complex sentences. In the 1990s, within the principles and parameters framework, this hypothesis was extended into a maturation-based structure building model of child language regarding the acquisition of functional categories. In this model, children are seen as gradually building up more and more complex structures, with lexical categories (like noun and verb) being acquired before functional-syntactic categories (like determiner and complementizer).[58] It is also often found that in acquiring a language, the most frequently used verbs are irregular verbs.[citation needed] In learning English, for example, young children first begin to learn the past tense of verbs individually. However, when they acquire a "rule", such as adding -ed to form the past tense, they begin to exhibit occasional overgeneralization errors (e.g. "runned", "hitted") alongside correct past tense forms. One influential[citation needed] proposal regarding the origin of this type of error suggests that the adult state of grammar stores each irregular verb form in memory and also includes a "block" on the use of the regular rule for forming that type of verb. In the developing child's mind, retrieval of that "block" may fail, causing the child to erroneously apply the regular rule instead of retrieving the irregular form.[59][60]
In bare-phrase structure (minimalist program), theory-internal considerations define the specifier position of an internal-merge projection (phases vP and CP) as the only type of host which could serve as a potential landing site for move-based elements displaced from lower down within the base-generated VP structure, e.g. A-movement such as passives ("The apple was eaten by John [John ate the apple]") or raising ("Some work does seem to remain [there does seem to remain some work]"). As a consequence, any strong version of a structure building model of child language which calls for an exclusive "external-merge/argument structure stage" prior to an "internal-merge/scope-discourse related stage" would claim that young children's stage-1 utterances lack the ability to generate and host elements derived via movement operations. In terms of a merge-based theory of language acquisition,[61] complements and specifiers are simply notations for first-merge (= "complement-of" [head-complement]) and later second-merge (= "specifier-of" [specifier-head]), with merge always forming to a head. First-merge establishes only a set {a, b} and is not an ordered pair; e.g., an {N, N}-compound of 'boat-house' would allow the ambiguous readings of either 'a kind of house' and/or 'a kind of boat'. It is only with second-merge that order is derived out of a set {a {a, b}}, which yields the recursive properties of syntax; e.g., a 'house-boat' {house {house, boat}} now reads unambiguously only as a 'kind of boat'. It is this property of recursion that allows projection and labeling of a phrase to take place;[62] in this case, that the noun 'boat' is the head of the compound, with 'house' acting as a kind of specifier/modifier. External-merge (first-merge) establishes substantive 'base structure' inherent to the VP, yielding theta/argument structure, and may go beyond the lexical-category VP to involve the functional-category light verb vP. Internal-merge (second-merge) establishes more formal aspects related to edge properties of scope and discourse-related material pegged to CP. In a phase-based theory, this twin vP/CP distinction follows the "duality of semantics" discussed within the minimalist program, and is further developed into a dual distinction regarding a probe-goal relation.[63] As a consequence, at the "external/first-merge-only" stage, young children would show an inability to interpret readings from a given ordered pair, since they would only have access to the mental parsing of a non-recursive set. (See Roeper for a full discussion of recursion in child language acquisition.)[64] In addition to word-order violations, other more ubiquitous results of a first-merge stage would show that children's initial utterances lack the recursive properties of inflectional morphology, yielding a strict non-inflectional stage-1, consistent with an incremental structure-building model of child language.
Generative grammar, associated especially with the work of Noam Chomsky, is currently one of the approaches to explaining children's acquisition of syntax.[65] Its leading idea is that human biology imposes narrow constraints on the child's "hypothesis space" during language acquisition. In the principles and parameters framework, which has dominated generative syntax since Chomsky's (1980) Lectures on Government and Binding: The Pisa Lectures, the acquisition of syntax resembles ordering from a menu: the human brain comes equipped with a limited set of choices, from which the child selects the correct options by imitating the parents' speech while making use of the context.[66]
An important argument in favor of the generative approach is the poverty of the stimulus argument. The child's input (a finite number of sentences encountered by the child, together with information about the context in which they were uttered) is, in principle, compatible with an infinite number of conceivable grammars. Moreover, children can rarely rely on corrective feedback from adults when they make a grammatical error; adults generally respond and provide feedback regardless of whether a child's utterance was grammatical or not, and children have no way of discerning if a feedback response was intended to be a correction. Additionally, when children do understand that they are being corrected, they don't always reproduce accurate restatements.[dubious–discuss][67][68] Yet, barring situations of medical abnormality or extreme privation, all children in a given speech-community converge on very much the same grammar by the age of about five years. An especially dramatic example is provided by children who, for medical reasons, are unable to produce speech and, therefore, can never be corrected for a grammatical error but nonetheless converge on the same grammar as their typically developing peers, according to comprehension-based tests of grammar.[69][70]
Considerations such as these have led Chomsky, Jerry Fodor, Eric Lenneberg and others to argue that the types of grammar the child needs to consider must be narrowly constrained by human biology (the nativist position).[71] These innate constraints are sometimes referred to as universal grammar, the human "language faculty", or the "language instinct".[72]
The comparative method of crosslinguistic research applies the comparative method used in historical linguistics to psycholinguistic research.[73] In historical linguistics the comparative method uses comparisons between historically related languages to reconstruct a proto-language and trace the history of each daughter language. The comparative method can be repurposed for research on language acquisition by comparing historically related child languages. The historical ties within each language family provide a roadmap for research. For Indo-European languages, the comparative method would first compare language acquisition within the Slavic, Celtic, Germanic, Romance and Indo-Iranian branches of the family before attempting broader comparisons between the branches. For Otomanguean languages, the comparative method would first compare language acquisition within the Oto-Pamean, Chinantecan, Tlapanecan, Popolocan, Zapotecan, Amuzgan and Mixtecan branches before attempting broader comparisons between the branches. The comparative method imposes an evaluation standard for assessing the languages used in language acquisition research.
The comparative method derives its power by assembling comprehensive datasets for each language. Descriptions of the prosody and phonology of each language inform analyses of morphology and the lexicon, which in turn inform analyses of syntax and conversational styles. Information on prosodic structure in one language informs research on the prosody of related languages and vice versa. The comparative method produces a cumulative research program in which each description contributes to a comprehensive description of language acquisition for each language within a family, as well as across the languages within each branch of the language family.
Comparative studies of language acquisition control for extraneous factors that impact language development. Speakers of historically related languages typically share a common culture that may include similar lifestyles and child-rearing practices. Historically related languages have similar phonologies and morphologies that impact early lexical and syntactic development in similar ways. The comparative method predicts that children acquiring historically related languages will exhibit similar patterns of language development, and that these common patterns may not hold in historically unrelated languages. The acquisition of Dutch will resemble the acquisition of German, but not the acquisition of Totonac or Mixtec. A claim about any universal of language acquisition must control for the shared grammatical structures that languages inherit from a common ancestor.
Several language acquisition studies have accidentally employed features of the comparative method due to the availability of datasets from historically related languages. Research on the acquisition of the Romance and Scandinavian languages used aspects of the comparative method, but did not produce detailed comparisons across different levels of grammar.[74][75][76][77] The most advanced use of the comparative method to date appears in research on the acquisition of the Mayan languages. This research has yielded detailed comparative studies on the acquisition of phonological, lexical, morphological and syntactic features in eight Mayan languages, as well as comparisons of language input and language socialization.[78][79][80][81][82][83][84][85][86]
Recent advances in functional neuroimaging technology have allowed for a better understanding of how language acquisition is manifested physically in the brain. Language acquisition almost always occurs in children during a period of rapid increase in brain volume. At this point in development, a child has many more neural connections than he or she will have as an adult, allowing the child to learn new things more readily than an adult would.[87]
Language acquisition has been studied from the perspective of developmental psychology and neuroscience,[88] which looks at learning to use and understand language parallel to a child's brain development. It has been determined, through empirical research on developmentally normal children, as well as through some extreme cases of language deprivation, that there is a "sensitive period" of language acquisition in which human infants have the ability to learn any language. Several researchers have found that from birth until the age of six months, infants can discriminate the phonetic contrasts of all languages. Researchers believe that this gives infants the ability to acquire the language spoken around them. After this age, the child is able to perceive only the phonemes specific to the language being learned. The reduced phonemic sensitivity enables children to build phonemic categories and recognize stress patterns and sound combinations specific to the language they are acquiring.[89] As Wilder Penfield noted, "Before the child begins to speak and to perceive, the uncommitted cortex is a blank slate on which nothing has been written. In the ensuing years much is written, and the writing is normally never erased. After the age of ten or twelve, the general functional connections have been established and fixed for the speech cortex." According to the sensitive or critical period models, the age at which a child acquires the ability to use language is a predictor of how well he or she is ultimately able to use language.[90] However, there may be an age at which becoming a fluent and natural user of a language is no longer possible; Penfield and Roberts (1959) cap their sensitive period at nine years old.[91] The human brain may very well be automatically wired to learn languages, but this ability does not last into adulthood in the same way that it exists during childhood.[92] By around age 12, language acquisition has typically been solidified, and it becomes more difficult to learn a language in the same way a native speaker would.[93] Just like children who speak, deaf children go through a critical period for learning language. Deaf children who acquire their first language later in life show lower performance in complex aspects of grammar.[94] At that point, it is usually a second language that a person is trying to acquire and not a first.[27]
Assuming that children are exposed to language during the critical period,[95] cognitively normal children almost never fail to acquire language. Humans are so well-prepared to learn language that it becomes almost impossible not to. Researchers are unable to experimentally test the effects of the sensitive period of development on language acquisition, because it would be unethical to deprive children of language until this period is over. However, case studies on abused, language-deprived children show that they exhibit extreme limitations in language skills, even after instruction.[96]
At a very young age, children can distinguish different sounds but cannot yet produce them. During infancy, children begin to babble. Deaf babies babble in the same patterns as hearing babies do, showing that babbling is not a result of babies simply imitating certain sounds, but is actually a natural part of the process of language development. Deaf babies do, however, often babble less than hearing babies, and they begin to babble later on in infancy: at approximately 11 months, as compared to approximately 6 months for hearing babies.[97]
Prelinguistic language abilities that are crucial for language acquisition have been seen even earlier than infancy. There have been many different studies examining different modes of language acquisition prior to birth. The study of language acquisition in fetuses began in the late 1980s, when several researchers independently discovered that very young infants could discriminate their native language from other languages. In Mehler et al. (1988),[98] infants underwent discrimination tests, and it was shown that infants as young as 4 days old could discriminate utterances in their native language from those in an unfamiliar language, but could not discriminate between two languages when neither was native to them. These results suggest that there are mechanisms for fetal auditory learning, and other researchers have found further behavioral evidence to support this notion. Fetal auditory learning through environmental habituation has been seen in a variety of different modes, such as fetal learning of familiar melodies,[99] story fragments (DeCasper & Spence, 1986),[100] recognition of mother's voice,[101] and other studies showing evidence of fetal adaptation to native linguistic environments.[102]
Prosody is the property of speech that conveys an emotional state of the utterance, as well as the intended form of speech, for example, question, statement or command. Some researchers in the field of developmental neuroscience argue that fetal auditory learning mechanisms result solely from discrimination of prosodic elements. Although this would hold merit in an evolutionary psychology perspective (i.e. recognition of mother's voice/familiar group language from emotionally valent stimuli), some theorists argue that there is more than prosodic recognition in elements of fetal learning. Newer evidence shows that fetuses not only react to the native language differently from non-native languages, but that fetuses react differently and can accurately discriminate between native and non-native vowel sounds (Moon, Lagercrantz, & Kuhl, 2013).[103] Furthermore, a 2016 study showed that newborn infants encode the edges of multisyllabic sequences better than the internal components of the sequence (Ferry et al., 2016).[104] Together, these results suggest that newborn infants have learned important properties of syntactic processing in utero, as demonstrated by infant knowledge of native language vowels and the sequencing of heard multisyllabic phrases. This ability to sequence specific vowels gives newborn infants some of the fundamental mechanisms needed in order to learn the complex organization of a language.
From a neuroscientific perspective, neural correlates have been found that demonstrate human fetal learning of speech-like auditory stimuli that most other studies have been analyzing[clarification needed] (Partanen et al., 2013).[105] In a study conducted by Partanen et al. (2013),[105] researchers presented fetuses with certain word variants and observed that these fetuses exhibited higher brain activity in response to certain word variants as compared to controls. In this same study, "a significant correlation existed between the amount of prenatal exposure and brain activity, with greater activity being associated with a higher amount of prenatal speech exposure," pointing to the important learning mechanisms present before birth that are fine-tuned to features in speech (Partanen et al., 2013).[105]
Learning a new word, that is, learning to speak this word and speak it on the appropriate occasions, depends upon many factors. First, the learner needs to be able to hear what they are attempting to pronounce. Also required is the capacity to engage in speech repetition.[106][107][108][109] Children with reduced ability to repeat non-words (a marker of speech repetition abilities) show a slower rate of vocabulary expansion than children with normal ability.[110] Several computational models of vocabulary acquisition have been proposed.[111][112][113][114][115][116][117] Various studies have shown that the size of a child's vocabulary by the age of 24 months correlates with the child's future development and language skills. If a child knows fifty or fewer words by the age of 24 months, he or she is classified as a late-talker, and future language development, like vocabulary expansion and the organization of grammar, is likely to be slower and stunted.[citation needed]
Two more crucial elements of vocabulary acquisition are word segmentation and statistical learning (described above). Word segmentation, the ability to break fluent speech down into word-sized units, can be accomplished by eight-month-old infants.[44] By the time infants are 17 months old, they are able to link meaning to segmented words.[47]
Recent evidence also suggests that motor skills and experiences may influence vocabulary acquisition during infancy. Specifically, learning to sit independently between 3 and 5 months of age has been found to predict receptive vocabulary at both 10 and 14 months of age,[118] and independent walking skills have been found to correlate with language skills at around 10 to 14 months of age.[119][120] These findings show that language acquisition is an embodied process that is influenced by a child's overall motor abilities and development. Studies have also shown a correlation between socioeconomic status and vocabulary acquisition.[121]
Children learn, on average, ten to fifteen new word meanings each day, but only one of these can be accounted for by direct instruction.[122] The other nine to fourteen word meanings must have been acquired in some other way. It has been proposed that children acquire these meanings through processes modeled by latent semantic analysis; that is, when they encounter an unfamiliar word, children use contextual information to guess its rough meaning correctly.[122] A child may expand the meaning and use of certain words that are already part of its mental lexicon in order to denominate anything that is somehow related but for which it does not know the specific word. For instance, a child may broaden the use of mummy and dada in order to indicate anything that belongs to its mother or father, or perhaps every person who resembles its own parents; another example might be to say rain while meaning I don't want to go out.[123]
There is also reason to believe that children use various heuristics to infer the meaning of words properly. Markman and others have proposed that children assume words to refer to objects with similar properties ("cow" and "pig" might both be "animals") rather than to objects that are thematically related ("cow" and "milk" are probably not both "animals").[124] Children also seem to adhere to the "whole object assumption" and think that a novel label refers to an entire entity rather than to one of its parts.[124] This assumption, along with other resources such as grammar and morphological cues or lexical constraints, may help the child in acquiring word meaning, but conclusions based on such resources may sometimes conflict.[125]
According to several linguists, neurocognitive research has confirmed many standards of language learning, such as: "learning engages the entire person (cognitive, affective, and psychomotor domains), the human brain seeks patterns in its searching for meaning, emotions affect all aspects of learning, retention and recall, past experience always affects new learning, the brain's working memory has a limited capacity, lecture usually results in the lowest degree of retention, rehearsal is essential for retention, practice [alone] does not make perfect, and each brain is unique" (Sousa, 2006, p. 274). In terms of genetics, the gene ROBO1 has been associated with phonological buffer integrity or length.[126]
Genetic research has found two major factors predicting successful language acquisition and maintenance. These include inherited intelligence, and the lack of genetic anomalies that may cause speech pathologies, such as mutations in the FOXP2 gene which cause verbal dyspraxia. The role of inherited intelligence increases with age, accounting for 20% of IQ variation in infants and for 60% in adults. It affects a vast variety of language-related abilities, from spatio-motor skills to writing fluency. There have been debates in linguistics, philosophy, psychology, and genetics, with some scholars arguing that language is fully or mostly innate, but the research evidence points to genetic factors only working in interaction with environmental ones.[127]
Although it is difficult to determine without invasive measures which exact parts of the brain become most active and important for language acquisition, fMRI and PET technology has allowed for some conclusions to be made about where language may be centered. Kuniyoshi Sakai has proposed, based on several neuroimaging studies, that there may be a "grammar center" in the brain, whereby language is primarily processed in the left lateral premotor cortex (located near the precentral sulcus and the inferior frontal sulcus). Additionally, these studies have suggested that first-language and second-language acquisition may be represented differently in the cortex.[27] In a study conducted by Newman et al., the relationship between cognitive neuroscience and language acquisition was compared through a standardized procedure involving native speakers of English and native Spanish speakers who all had a similar length of exposure to the English language (averaging about 26 years). It was concluded that the brain does in fact process languages differently[clarification needed], but rather than being related to proficiency levels, language processing relates more to the function of the brain itself.[128]
During early infancy, language processing seems to occur over many areas in the brain. However, over time, it gradually becomes concentrated into two areas: Broca's area and Wernicke's area. Broca's area is in the left frontal cortex and is primarily involved in the production of the patterns in vocal and sign language. Wernicke's area is in the left temporal cortex and is primarily involved in language comprehension. The specialization of these language centers is so extensive[clarification needed] that damage to them can result in aphasia.[129]
Kelly et al. (2015: 286) comment that "There is a dawning realization that the field of child language needs data from the broadest typological array of languages and language-learning environments."[130] This realization is part of a broader recognition in psycholinguistics of the need to document diversity.[131][132][133] Children's linguistic accomplishments are all the more impressive given the diversity that exists at every level of the language system.[134] Different levels of grammar interact in language-specific ways, so that differences in morphosyntax build on differences in prosody, which in turn reflect differences in conversational style. The diversity of adult languages results in diverse child language phenomena that challenge every acquisition theory.
One such challenge is to explain how children acquire complex vowels in Otomanguean and other languages. The complex vowels in these languages combine oral and laryngeal gestures produced with laryngeal constriction [ʔ] or laryngeal spreading [h]. The production of the laryngealized vowels is complicated by the production of tonal contrasts, which rely upon contrasts in vocal fold vibration. Otomanguean languages manage the conflict between tone and laryngeal gesture by timing the gesture at the start, middle or end of the vowel, e.g. ʔV, VʔV and Vʔ. The phonetic realization of laryngealized vowels gives rise to the question of whether children acquire laryngealized vowels as single phonemes or as sequences of phonemes. The unit analysis enlarges the vowel inventory but simplifies the syllable inventory, while the sequence analysis simplifies the vowel inventory but complicates the syllable inventory. The Otomanguean languages exhibit language-specific differences in the types and timing of the laryngeal gestures, and thus children must learn the specific laryngeal gestures that contribute to the phonological contrasts in the adult language.[135]
An acquisition challenge in morphosyntax is to explain how children acquire ergative grammatical structures. Ergative languages treat the subject of intransitive verbs like the object of transitive verbs at the level of morphology, syntax or both. At the level of morphology, ergative languages assign an ergative marker to the subject of transitive verbs. The ergative marking may be realized by case markers on nouns or agreement markers on verbs.[136][137] At the level of syntax, ergative languages have syntactic operations that treat the subject of transitive verbs differently from the subject of intransitive verbs. Languages with ergative syntax like K'iche' may restrict the use of subject questions for transitive verbs but not intransitive verbs. The acquisition challenge that ergativity creates is to explain how children acquire the language-specific manifestations of morphological and syntactic ergativity in the adult languages.[138] The Mayan language Mam has ergative agreement marking on its transitive verbs, but extends the ergative marking to both the subject of intransitive verbs and the object of transitive verbs, yielding transitive verbs with two ergative agreement markers.[139] The contexts for extended ergative marking differ in type and frequency between Mayan languages, but two-year-old children produce extended ergative marking equally proficiently despite vast differences in the frequency of extended ergative marking in the adult languages.[83]
Children acquire language through exposure to a wide variety of cultural practices.[140] Local groups vary in size and mobility depending on their means of subsistence. Some cultures require men to marry women who speak another language; their children may be exposed to their mother's language for several years before moving in with their father and learning his language. Language groups have diverse beliefs about when children say their first words and what words they say. Such beliefs shape when parents perceive that children understand language. In many cultures, children hear more speech directed to others than to themselves, yet children acquire language in all cultures.

Documenting the diversity of child languages is made more urgent by the rapid loss of languages around the world.[141][142][143] It may not be possible to document child language in half of the world's languages by the end of this century.[144][145] Documenting child language should be a part of every language documentation project, and has an important role to play in revitalizing local languages.[146][147] Documenting child language preserves cultural modes of language transmission and can emphasize their significance throughout the language community.

Some algorithms for language acquisition are based on statistical machine translation.[148] Language acquisition can be modeled as a machine learning process, which may be based on learning semantic parsers[149] or grammar induction algorithms.[150][151]
Prelingual deafness is defined as hearing loss present at birth or occurring before an individual has learned to speak. In the United States, 2 to 3 out of every 1000 children are born deaf or hard of hearing. Although it might be presumed that deaf children acquire language in different ways, since they do not receive the same auditory input as hearing children, many research findings indicate that deaf children acquire language in the same way that hearing children do and, when given the proper language input, understand and express language just as well as their hearing peers. Babies who learn sign language produce signs or gestures that are more regular and more frequent than those of hearing babies acquiring spoken language. Just as hearing babies babble, deaf babies acquiring sign language will babble with their hands, otherwise known as manual babbling. Therefore, as many studies have shown, language acquisition by deaf children parallels the acquisition of a spoken language by hearing children, because humans are biologically equipped for language regardless of the modality.

Deaf children's visual-manual language acquisition not only parallels spoken language acquisition, but by the age of 30 months most deaf children who were exposed to a visual language had a more advanced grasp of subject-pronoun copy rules than hearing children. Their vocabulary at the ages of 12–17 months exceeds that of hearing children, though it evens out when they reach the two-word stage. The use of space for absent referents and the more complex handshapes in some signs prove difficult for children between 5 and 9 years of age, because of motor development and the complexity of remembering the spatial use.

Options besides sign language for children with prelingual deafness include the use of hearing aids to strengthen remaining sensory cells or cochlear implants to stimulate the hearing nerve directly. Cochlear implants (often known simply as CIs) are hearing devices that are placed behind the ear and contain a receiver and electrodes which are placed under the skin and inside the cochlea. Despite these developments, there is still a risk that prelingually deaf children may not develop good speech and speech reception skills. Although cochlear implants produce sounds, these are unlike typical hearing, and deaf and hard-of-hearing people must undergo intensive therapy to learn how to interpret them. They must also learn how to speak given the range of hearing they may or may not have. However, deaf children of deaf parents tend to do better with language, even though they are isolated from sound and speech, because their language uses a mode of communication that is accessible to them: the visual modality.
Although cochlear implants were initially approved for adults, there is now pressure to implant children early in order to maximize auditory skills for mainstream learning, which in turn has created controversy around the topic. Due to advances in technology, cochlear implants allow some deaf people to acquire some sense of hearing; the devices have exposed exterior components and interior components that are surgically implanted. Those who receive cochlear implants earlier in life show more improvement in speech comprehension and language. Spoken language development nevertheless varies widely for those with cochlear implants, owing to a number of factors including age at implantation and the frequency, quality, and type of speech training. Some evidence suggests that speech processing occurs at a more rapid pace in some prelingually deaf children with cochlear implants than in those with traditional hearing aids. However, cochlear implants may not always work.

Research shows that people develop better language with a cochlear implant when they have a solid first language to rely on to understand the second language they are learning. In the case of prelingually deaf children with cochlear implants, a signed language like American Sign Language would be an accessible language for them to learn, supporting the use of the cochlear implant as they learn a spoken language as their L2. Without a solid, accessible first language, these children run the risk of language deprivation, especially in case the cochlear implant fails to work: they would then have no access to sound, meaning no access to the spoken language they are supposed to be learning. If neither a signed language nor a spoken language has become strong for them, they have no access to any language and run the risk of missing their critical period.

In June 2024, a cross-sectional study published in the academic journal Scientific Reports cautioned that "children with CIs exhibit significant variability in speech and language development", "with too many recipients demonstrating suboptimal outcomes", and that the relevant relationships are "not well defined for prelingually deafened children with CIs, for whom language development is ongoing." The authors found that "the relationships between spectral resolution, temporal resolution, and speech recognition are well defined in adults with cochlear implants (CIs)", in contrast to the situation with children, and concluded from their research that "[f]urther investigation is warranted to better understand the relationships between spectral resolution, temporal resolution, and speech recognition so that" medical experts "can identify the underlying mechanisms driving auditory-based speech perception in children with CIs."[152]
|
https://en.wikipedia.org/wiki/Language_acquisition
|
A synthetic language is a language that is statistically characterized by a higher morpheme-to-word ratio. Rule-wise, a synthetic language is characterized by denoting syntactic relationships between words via inflection or agglutination, with fusional languages favoring the former and agglutinative languages the latter subtype of word synthesis.

Further divisions include polysynthetic languages (most belonging to an agglutinative-polysynthetic subtype, although Navajo and other Athabaskan languages are often classified as belonging to a fusional subtype) and oligosynthetic languages (only found in constructed languages). In contrast, rule-wise, the analytic languages rely more on auxiliary verbs and word order to denote syntactic relationships between words.

Adding morphemes to a root word is used in inflection to convey a grammatical property of the word, such as denoting a subject or an object.[1] Combining two or more morphemes into one word is used in agglutinating languages instead.[2] For example, the word "fast", if inflectionally combined with "-er" to form the word "faster", remains an adjective, while the word "teach", derivationally combined with "-er" to form the word "teacher", ceases to be a verb. Some linguists consider relational morphology to be a type of derivational morphology, which may complicate the classification.[3]
Derivational and relational morphology represent opposite ends of a spectrum; a single word in a given language may exhibit varying degrees of both simultaneously, and some words in a language may show derivational morphology while others show relational morphology.

In derivational synthesis, morphemes of different types (nouns, verbs, affixes, etc.) are joined to create new words. That is, in general, the morphemes being combined are more concrete units of meaning.[3] The morphemes being synthesized in the following examples either belong to a particular grammatical class – such as adjectives, nouns, or prepositions – or are affixes that usually have a single form and meaning:
German:
Aufsicht-s-Rat-s-Mitglieder-Versammlung
supervision {} council {} members assembly
"Meeting of members of the supervisory board"
Ancient Greek:
προπαρ-οξύτόν-ησις
pro-par-oxý-tón-esis
pre {next to} sharp pitch/tone tendency
"Tendency to accent on the proparoxytone [third-to-last] position"
Polish:
przystań-ek
harbor-DIM
"Public transportation stop [without facilities]" (i.e. bus stop, tram stop, or rail halt); compare to dworzec.
English:
anti-dis-establish-ment-arian-ism
against ending {to institute} NS advocate ideology
"the movement to prevent revoking the Church of England's status as the official church [of England, Ireland, and Wales]"
Russian:
достопримечательн-ость
dosto-primečátelʹn-ostʹ
deserving notable NS
"Place of interest"
Persian:
نوازــندهــگی
navâz-ande-gi
{play music} -ing NS
"musicianship" or "playing a musical instrument"
Ukrainian:
на-вз-до-гін
na-vz-do-hin
{direction/intent} {adjective} {approach} {fast movement}
"after something or someone that is moving away"
English:
hyper-cholesterol-emia
high cholesterol blood
"the presence of high levels of cholesterol in the blood"
In relational synthesis, root words are joined to bound morphemes to show grammatical function. In other words, it involves the combination of more abstract units of meaning than derivational synthesis.[3] In the following examples, many of the morphemes are related to voice (e.g. passive voice), to whether a word is the subject or object of the sentence, to possession, to plurality, or to other abstract distinctions in a language:
Italian:
comunic-ando-ve-le
communicate-GER-you.PL-those.FEM.PL
"Communicating those [feminine plural] to you [plural]"
Spanish:
escrib-iéndo-me-lo
write-GER-me-it
"Writing it to me"
Estonian:
raske-sti-kasvata-tav
heavy-ly-educat-able
"with learning disabilities"
Catalan:
an-em-se/-nos-en/'n
go-we-ourselves-from
"Let's get out of here"
Classical Nahuatl:
ō-c-ā-lti-zquiya
PAST-3SG.OBJ-water-CAUS-IRR
"She would have bathed him"
Latin:
com-prim-unt-ur
together-crush-they-PASS
"They are crushed together"
Japanese:
見させられがたい
mi-sase-rare-gatai
see-CAUS-PASS-difficult
"It's difficult to be shown [this]"
Finnish:
juosta-ella-isin-ko-han
run-FREQ-I.COND-Q-CAS
"I wonder if I should run around [aimlessly]"
Hungarian:
ház-a-i-tok-ban
house-POSS-PL-your.PL-in
"In your houses"
Hungarian:
szeret-lek
love-{I REFL you}
"I love you"
Turkish:
Afyonkarahisar-lı-laş-tır-ama-(y)-abil-ecek-ler-imiz-den misiniz?
Afyonkarahisar {citizen of} transform PASS notbe (thematic) able FUT PL we among you-PL-FUT-Q
"Are you [plural/formal] amongst the ones whom we might not be able to make citizens of Afyonkarahisar?"
Georgian:
გადმო-გვ-ა-ხტუნ-ებ-ინ-ებ-დ-ნენ-ო
gadmo-gv-a-khtun-eb-in-eb-d-nen-o
"They said that they would be forced by them [the others] to make someone jump over in this direction." (The single word incorporates tense, subject, object, the relation between them, the direction of the action, and conditional and causative markers.)
Agglutinating languages have a high rate of agglutination in their words and sentences, meaning that the morphological construction of words consists of distinct morphemes that usually carry a single unique meaning.[4] These morphemes tend to look the same no matter what word they are in, so it is easy to separate a word into its individual morphemes.[1] Morphemes may be bound (that is, they must be attached to a word to have meaning, like affixes) or free (they can stand alone and still have meaning).

Fusional languages are similar to agglutinating languages in that they involve the combination of many distinct morphemes. However, morphemes in fusional languages often carry several meanings at once, and they tend to be fused together so that it is difficult to separate individual morphemes from one another.[1][5]

Polysynthetic languages are considered the most synthetic of the three types because they combine multiple stems as well as other morphemes into a single continuous word. These languages often turn nouns into verbs.[1] Many Native Alaskan and other Native American languages are polysynthetic.

Oligosynthetic languages are a theoretical notion created by Benjamin Whorf. Such languages would be functionally synthetic, but make use of a very limited array of morphemes (perhaps just a few hundred). Whorf proposed the concept of an oligosynthetic language type to describe the Native American language Nahuatl, although he did not pursue the idea further.[6] Though no natural language uses this process, it has found use in the world of constructed languages, in auxlangs such as Ygyde[7] and aUI.
Synthetic languages combine (synthesize) multiple concepts into each word. Analytic languages break up (analyze) concepts into separate words. These classifications comprise two ends of a spectrum along which different languages can be classified. Present-day English is seen as analytic, but it used to be fusional; certain synthetic qualities, such as the inflection of verbs to show tense, were retained.

The distinction is, therefore, a matter of degree. The most analytic languages, isolating languages, consistently have one morpheme per word, while at the other extreme, in polysynthetic languages such as some Native American languages,[8] a single inflected verb may contain as much information as an entire English sentence.

To demonstrate the nature of the isolating–analytic–synthetic–polysynthetic classification as a "continuum", some examples are shown below.

However, with rare exceptions, each syllable in Mandarin (corresponding to a single written character) represents a morpheme with an identifiable meaning, even if many such morphemes are bound. This gives rise to the common misconception that Chinese consists exclusively of "words of one syllable". As the sentence above illustrates, however, even simple Chinese words such as míngtiān 'tomorrow' (míng "next" + tiān "day") and péngyou 'friend' (a compound of péng and yǒu, both of which mean 'friend') are synthetic compound words.

The Chinese language of the classic works (of Confucius, for example) and, to a certain extent, the southern dialects are more strictly monosyllabic: each character represents one word. The evolution of modern Mandarin Chinese was accompanied by a reduction in the total number of phonemes, and words which previously were phonetically distinct became homophones. Many disyllabic words in modern Mandarin are the result of joining two related words (such as péngyou, literally "friend-friend") in order to resolve the phonetic ambiguity. A similar process is observed in some English dialects: in the Southern dialects of American English, it is not unusual for the short vowel sounds [ɪ] and [ɛ] to be indistinguishable before nasal consonants, so that the words "pen" and "pin" are homophones (see pin-pen merger). In these dialects, the ambiguity is often resolved by using the compounds "ink-pen" and "stick-pin" to clarify which "p*n" is being discussed.
The definite articles are not only suffixes but are also noun inflections expressing thought in a synthetic manner.
Haspelmath and Michaelis[9] observed that analyticity is increasing in a number of European languages. In the German example below, the first phrase uses inflection, but the second uses a preposition. The development of the prepositional construction suggests a shift from synthetic to analytic.
des Hauses
the.GEN.SG house.GEN.SG
'the house's'

von dem Haus
of the.DAT.SG house.DAT.SG
'of the house'
It has been argued that analytic grammatical structures are easier for adults learning a foreign language. Consequently, a larger proportion of non-native speakers learning a language over the course of its historical development may lead to a simpler morphology, as the preferences of adult learners get passed on to second-generation native speakers. This is especially noticeable in the grammar of creole languages. A 2010 paper in PLOS ONE suggests that evidence for this hypothesis can be seen in correlations between morphological complexity and factors such as the number of speakers of a language, geographic spread, and the degree of inter-linguistic contact.[10]

According to Ghil'ad Zuckermann, Modern Hebrew (which he calls "Israeli") "is much more analytic, both with nouns and verbs", compared with Classical Hebrew (which he calls "Hebrew").[11]
|
https://en.wikipedia.org/wiki/Synthetic_language
|
In the design of modern computers, memory geometry describes the internal structure of random-access memory. Memory geometry is of concern to consumers upgrading their computers, since older memory controllers may not be compatible with later products. Memory geometry terminology can be confusing because of the number of overlapping terms.
The geometry of a memory system can be thought of as a multi-dimensional array. Each dimension has its own characteristics and physical realization. For example, the number of data pins on a memory module is one dimension.
Memory geometry describes the logical configuration of a RAM module, but consumers will always find it easiest to grasp the physical configuration. Much of the confusion surrounding memory geometry occurs when the physical configuration obscures the logical configuration. The first defining feature of RAM is form factor: RAM modules can come in compact SO-DIMM form for space-constrained applications like laptops, printers, embedded computers, and small form factor computers, and in DIMM format, which is used in most desktops.[citation needed]

The other physical characteristics, determined by physical examination, are the number of memory chips and whether both sides of the memory "stick" are populated. Modules whose number of RAM chips is a power of two do not support memory error detection or correction; if there are extra RAM chips beyond a power of two, these are used for ECC.

RAM modules are 'keyed' by indentations on the sides and along the bottom of the module. The keying designates the technology and classification of the module, for instance whether it is DDR2 or DDR3, and whether it is suitable for desktops or for servers. Keying was designed to make it difficult to install incorrect modules in a system (but there are more requirements than are embodied in keys). It is important to make sure that the keying of the module matches the key of the slot it is intended to occupy.[citation needed]

Additional, non-memory chips on the module may be an indication that it was designed for high-capacity memory systems for servers, and that the module may be incompatible with mass-market systems.[citation needed]

As the next part of this article covers the logical architecture, which spans every populated slot in a system, the physical features of the slots themselves become important. By consulting the documentation of the motherboard, or reading the labels on the board itself, one can determine the underlying logical structure of the slots. When there is more than one slot, they are numbered, and when there is more than one channel, the different slots are separated in that way as well – usually color-coded.[citation needed]
In the 1990s, computers using cache-coherent non-uniform memory access were released, which allowed combining multiple computers that each had their own memory controller such that the software running on them could use the I/O devices, memory, and CPUs of all participating systems as if they were one unit (single system image). With AMD's release of the Opteron, which integrated the memory controller into the CPU, NUMA systems that share more than one memory controller in a single system have become common in applications that require more power than the common desktop.[citation needed]

Channels are the highest-level structure at the local memory controller level. Modern computers can have two, three, or even more channels. It is usually important that, for each module in any one channel, there is a logically identical module in the same location on each of the other populated channels.[citation needed]

Module capacity is the aggregate space in a module measured in bytes, or – more generally – in words. Module capacity is equal to the product of the number of ranks and the rank density, where the rank density is the product of rank depth and rank width.[1] The standard format for expressing this specification is (rank depth) Mbit × (rank width) × (number of ranks).[citation needed]
Ranks are sub-units of a memory module that share the same address and data buses and are selected by chip select (CS) in low-level addressing. For example, a memory module with 8 chips on each side, with each chip having an 8-bit-wide data bus, would have one rank per side for a total of 2 ranks, if we define a rank to be 64 bits wide. Consider a module composed of Micron Technology MT47H128M16 chips with the organization 128 Mi × 16, meaning 128 Mi memory depth and a 16-bit-wide data bus per chip; if the module has 8 of these chips on each side of the board, there would be a total of 16 chips × 16 bits = 256 total bits of data width. For a 64-bit-wide memory data interface, this equates to having 4 ranks, where each rank can be selected by a 2-bit chip select signal. Memory controllers such as the Intel 945 chipset list the configurations they support: "256-Mib, 512-Mib, and 1-Gib DDR2 technologies for ×8 and ×16 devices", "four ranks for all DDR2 devices up to 512-Mibit density", "eight ranks for 1-Gibit DDR2 devices". As an example, take an i945 memory controller with four Kingston KHX6400D2/1G memory modules, where each module has a capacity of 1 GiB.[2] Kingston describes each module as composed of 16 "64M × 8-bit" chips, each chip having an 8-bit-wide data bus. 16 × 8 equals 128; therefore, each module has two ranks of 64 bits each. So, from the MCH point of view, there are four 1 GiB modules. At a higher logical level, the MCH also sees two channels, each with four ranks.
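The rank arithmetic in this example can be checked directly. The following minimal C sketch (variable names are illustrative, not any real API) reproduces the Kingston module calculation:

#include <stdio.h>

int main(void) {
    /* One KHX6400D2/1G module: 16 chips, each organized 64M x 8 */
    int chips_per_module = 16;
    int chip_width_bits  = 8;   /* data bus width per chip */
    int bus_width_bits   = 64;  /* module data interface width */

    int total_width_bits = chips_per_module * chip_width_bits; /* 16 * 8 = 128 */
    int ranks            = total_width_bits / bus_width_bits;  /* 128 / 64 = 2 */

    printf("total chip width: %d bits -> %d rank(s)\n", total_width_bits, ranks);
    return 0;
}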
In contrast, banks, while similar from a logical perspective to ranks, are implemented quite differently in physical hardware. Banks are sub-units inside a single memory chip, while ranks are sub-units composed of a subset of the chips on a module. Similar to chip select, banks are selected by bank select bits, which are part of the memory interface.[citation needed]

The chip, sometimes called the "memory device", is the lowest form of organization covered by memory geometry. Chips are the component ICs that make up each module of RAM. The most important measurement of a chip is its density, measured in bits. Because the memory bus width is usually larger than the number of chips, most chips are designed to have width, meaning that they are divided into equal parts internally, and when one address "depth" is called up, more than one value is returned instead of just one. In addition to the depth, a second addressing dimension has been added at the chip level: banks. Banks allow one bank to be available while another bank is unavailable because it is refreshing.[citation needed]

Some measurements of modules are size, width, speed, and latency. A memory module consists of a multiple of the memory chips needed to equal the desired module width. So a 32-bit SIMM module could be composed of four 8-bit-wide (×8) chips. As noted in the memory channel part, one physical module can be made up of one or more logical ranks; if that 32-bit SIMM were composed of eight 8-bit chips, the SIMM would have two ranks.[citation needed]

A memory channel is made up of ranks. Physically, a memory channel with just one memory module might present itself as having one or more logical ranks.[citation needed]

This is the highest level. A typical computer has only a single memory controller with only one or two channels. The logical features section described NUMA configurations, which can take the form of a network of memory controllers. For example, each socket of a two-socket AMD K8 can have a two-channel memory controller, giving the system a total of four memory channels.
Various methods of specifying memory geometry can be encountered, giving different types of information.
(memory depth) × (memory width)
The memory width specifies the data width of the memory module interface in bits. For example, 64 would indicate a 64-bit data width, as is found on non-ECC DIMMs common in the SDR and DDR1–4 families of RAM. A memory width of 72 would indicate an ECC module, with 8 extra bits in the data width for the error-correcting code syndrome. (The ECC syndrome allows single-bit errors to be corrected.) The memory depth is the total memory capacity in bits divided by the non-parity memory width. Sometimes the memory depth is indicated in units of Meg (2^20), as in 32×64 or 64×64, indicating 32 Mi depth and 64 Mi depth respectively.
(memory density)
This is the total memory capacity of the chip.
Example: 128 Mib.
(memory depth) × (memory width)
Memory depth is the memory density divided by memory width. Example: for a memory chip with 128 Mib capacity and 8-bit wide data bus, it can be specified as: 16 Meg × 8. Sometimes the "Mi" is dropped, as in 16×8.
(memory depth per bank) × (memory width) × (number of banks)
Example: a chip with the same capacity and memory width as above but constructed with 4 banks would be specified as 4 Mi × 8 × 4.
|
https://en.wikipedia.org/wiki/Memory_geometry
|
The Content Scramble System (CSS) is a digital rights management (DRM) and encryption system employed on many commercially produced DVD-Video discs. CSS utilizes a proprietary 40-bit stream cipher algorithm. The system was introduced around 1996 and was first compromised in 1999.[1]

CSS is one of several complementary systems designed to restrict DVD-Video access.

It has been superseded by newer DRM schemes such as Content Protection for Recordable Media (CPRM), or by the Advanced Encryption Standard (AES) in the Advanced Access Content System (AACS) DRM scheme used by HD DVD and Blu-ray Disc, which have 56-bit and 128-bit key sizes, respectively, providing a much higher level of security than the 40-bit key size of CSS.

The Content Scramble System is a collection of proprietary protection mechanisms for DVD-Video discs. CSS attempts to restrict access to the content to licensed applications only. According to the DVD Copy Control Association (CCA), the consortium that grants licenses, CSS is supposed to protect the intellectual property rights of the content owner.

The details of CSS are only given to licensees for a fee. The license,[2] which binds the licensee to a non-disclosure agreement, would not permit the development of open-source software for DVD-Video playback. Instead, there is libdvdcss, a reverse-engineered implementation of CSS. Libdvdcss is a source of documentation, along with the publicly available DVD-ROM[3] and MMC[4] specifications. There has also been some effort to collect CSS details from various sources.[5]

A DVD-Video can be produced with or without CSS. A publisher may decide not to use CSS protection in order to save license and production costs.
The content scramble system deals with three participants: the disc, the drive and the player. The disc holds the purported copyright information and the encrypted feature. The drive provides the means to read the disc. The player decrypts and presents the audio and visual content of the feature. All participants must conform to the CCA's license agreement.
There are three protection methods:
The first two protection methods have been broken. Circumvention of regional protection is not possible with every drive—even if the drive grants access to the feature, prediction of title keys may fail.[5]However, DVD players exist which do not enforce regional restrictions (after being disabled manually), which makes regional restrictions less effective as a component of CSS.[6]
The DVD-ROM's main-data (§16[3]), consisting of consecutive logical blocks of 2048 bytes, is structured according to the DVD-Video format. The DVD-Video contains, among other things, an MPEG program stream which consists of so-called Packs. If CSS is applied to the disc, then a subset of all Packs is encrypted with a title-key.
A DVD-ROM contains, besides the main-data, additional data areas. CSS stores there:
CSS also uses six bytes in the frame header for each logical block of user data (§16.3,[3]§6.29.3.1.5[4]):
The drive treats a DVD-Video disc like any DVD-ROM disc. The player reads the disc's user-data and processes it according to the DVD-Video format. However, if the drive detects a disc that has been compiled with CSS, it denies access to logical blocks that are marked as copyrighted (§6.15.3[4]). The player has to execute an authentication handshake first (§4.10.2.2[4]). The authentication handshake is also used to retrieve the disc-key-block and the title-keys.

The drive may also support Regional Playback Control (RPC) to limit the playback of DVD-Video content to specific regions of the world (§3.3.26[4]). RPC Phase II drives hold an 8-bit region-code and adhere to all requirements of the CSS license agreement (§6.29.3.1.7[4]). It appears that RPC Phase II drives reject title-key requests on region mismatch; however, reading of user-data may still work.[5]

CSS employs a stream cipher and mangles the keystream with the plain-text data to produce the cipher text.[7] The stream cipher is based on two linear-feedback shift registers (LFSRs) and is set up with a 40-bit seed.
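To illustrate the mechanism only, the following C sketch implements one step of a generic 17-bit Fibonacci LFSR; the tap positions here are arbitrary illustrations and deliberately not the actual CSS parameters:

#include <stdint.h>

/* One step of a generic 17-bit Fibonacci LFSR. The feedback taps
   (bits 16 and 13) are chosen for illustration, not taken from CSS. */
uint32_t lfsr17_step(uint32_t *state) {
    uint32_t fb = ((*state >> 16) ^ (*state >> 13)) & 1u; /* XOR of the tap bits */
    *state = ((*state << 1) | fb) & 0x1FFFFu;             /* shift left, keep 17 bits */
    return fb;                                            /* one keystream bit */
}

In a CSS-like construction, two such registers of different lengths would be seeded from the 40-bit key, stepped in parallel, and their outputs combined to form the keystream that is mangled with the data.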
Mangling depends on the type of operation. There are three types:
In order to decrypt a DVD-Video, the player reads the disc-key-block and uses its player-key to decrypt the disc-key. Thereafter, the player reads the title-keys and decrypts them with the disc-key. A different title-key can be assigned for the Video Manager and for each Video Title Set. The title-keys are used to decrypt the encrypted Packs.[5]

CSS employs cryptographic keys with a size of only 40 bits. This makes CSS vulnerable to a brute-force attack. At the time CSS was introduced, it was forbidden in the United States for manufacturers to export cryptographic systems employing keys in excess of 40 bits, a key length that had already been shown to be wholly inadequate in the face of increasing computer processing power (see Data Encryption Standard).

Based on the leaked DeCSS source code, Frank A. Stevenson published in November 1999 three exploits that rendered the CSS cipher practically ineffective:[7]

The latter exploit recovers a disc-key from its hash value in less than 18 seconds on a 450 MHz Intel Pentium III.

The CSS design was prepared for the leak of a few player-keys: new discs would not contain an encrypted variant of the disc-key for these player-keys in the disc-key-block. However, Stevenson's exploits made it possible to generate all player-keys. Libdvdcss uses such a list of generated player-keys.

There are cases when no title-keys are available. A drive may deny access on region mismatch but still permit reading of the encrypted DVD-Video. Ethan Hawke presented a plain-text prediction for data repetitions in the MPEG program stream that enables the recovery of title-keys in real time directly from the encrypted DVD-Video.[8]

In Geeks Bearing Gifts, author Ted Nelson states "DVD encryption was intentionally made light by the DVD encryption committee, based on arguments in a libertarian book Computer Lib", a claim cited as originating from personal communication with an anonymous source; Nelson is the author of Computer Lib.[9]
|
https://en.wikipedia.org/wiki/Content_Scramble_System
|
Writing systems are used to record human language, and may be classified according to certain common features.

The usual name of the script is given first; the name of the languages in which the script is written follows (in brackets), particularly where the language name differs from the script name. Other informative or qualifying annotations for the script may also be provided.

Ideographic scripts (in which graphemes are ideograms representing concepts or ideas rather than a specific word in a language) and pictographic scripts (in which the graphemes are iconic pictures) are not thought to be able to express all that can be communicated by language, as argued by the linguists John DeFrancis and J. Marshall Unger. Essentially, they postulate that no true writing system can be completely pictographic or ideographic; it must be able to refer directly to a language in order to have the full expressive capacity of a language. Unger disputes claims made on behalf of Blissymbols in his 2004 book Ideogram.

Although a few pictographic or ideographic scripts exist today, there is no single way to read them, because there is no one-to-one correspondence between symbol and language. Hieroglyphs were commonly thought to be ideographic before they were translated, and to this day Chinese is often erroneously said to be ideographic.[1] In some cases of ideographic scripts, only the author of a text can read it with any certainty, and it may be said that such scripts are interpreted rather than read. They often work best as mnemonic aids for oral texts or as outlines that will be fleshed out in speech.
There are also symbol systems used to represent things other than language:
In logographic writing systems, glyphs represent words or morphemes (meaningful components of words, as in mean-ing-ful) rather than phonetic elements.

No logographic script is composed solely of logograms. All contain graphemes that represent phonetic (sound-based) elements as well. These phonetic elements may be used on their own (to represent, for example, grammatical inflections or foreign words), or may serve as phonetic complements to a logogram (used to specify the sound of a logogram that might otherwise represent more than one word). In the case of Chinese, the phonetic element is built into the logogram itself; in Egyptian and Mayan, many glyphs are purely phonetic, whereas others function as either logograms or phonetic elements, depending on context. For this reason, many such scripts may be more properly referred to as logosyllabic or complex scripts; the terminology used is largely a product of custom in the field, and is to an extent arbitrary.

In a syllabary, graphemes represent syllables or moras. (The 19th-century term syllabics usually referred to abugidas rather than true syllabaries.)

In most of these systems, some consonant-vowel combinations are written as syllables, but others are written as consonant plus vowel. In the case of Old Persian, all vowels were written regardless, so it was effectively a true alphabet despite its syllabic component. In Japanese a similar system plays a minor role in foreign borrowings; for example, [tu] is written [to]+[u], and [ti] as [te]+[i]. Paleohispanic semi-syllabaries behaved as a syllabary for the stop consonants and as an alphabet for the rest of the consonants and vowels.

The Tartessian or Southwestern script is typologically intermediate between a pure alphabet and the Paleohispanic full semi-syllabaries. Although the letter used to write a stop consonant was determined by the following vowel, as in a full semi-syllabary, the following vowel was also written, as in an alphabet. Some scholars treat Tartessian as a redundant semi-syllabary, others as a redundant alphabet. Other scripts, such as Bopomofo, are semi-syllabic in a different sense: they transcribe half syllables. That is, they have letters for syllable onsets and rimes (kan = "k-an") rather than for consonants and vowels (kan = "k-a-n").

A segmental script has graphemes which represent the phonemes (basic units of sound) of a language.
Note that there need not be (and rarely is) a one-to-one correspondence between the graphemes of the script and the phonemes of a language. A phoneme may be represented only by some combination or string of graphemes, the same phoneme may be represented by more than one distinct grapheme, the same grapheme may stand for more than one phoneme, or some combination of all of the above.
Segmental scripts may be further divided according to the types of phonemes they typically record:
An abjad is a segmental script containing symbols for consonants only, or where vowels are optionally written with diacritics ("pointing") or only written word-initially.

A true alphabet contains separate letters (not diacritic marks) for both consonants and vowels.

Linear alphabets are composed of lines on a surface, such as ink on paper.

A featural script has elements that indicate the components of articulation, such as bilabial consonants, fricatives, or back vowels. Scripts differ in how many features they indicate.

Manual alphabets are frequently found as parts of sign languages. They are not used for writing per se, but for spelling out words while signing.
These are other alphabets composed of something other than lines on a surface.
An abugida, or alphasyllabary, is a segmental script in which vowel sounds are denoted by diacritical marks or other systematic modification of the consonants. Generally, however, if a single letter is understood to have an inherent unwritten vowel, and only vowels other than this are written, then the system is classified as an abugida regardless of whether the vowels look like diacritics or full letters. The vast majority of abugidas are found from India to Southeast Asia and belong historically to the Brāhmī family; however, the term is derived from the first characters of the abugida in Ge'ez: አ (a) ቡ (bu) ጊ (gi) ዳ (da) (compare with "alphabet"). Unlike in abjads, the diacritical marks and systemic modifications of the consonants are not optional.

In at least one abugida, not only the vowel but any syllable-final consonant is written with a diacritic. That is, if [o] is represented with an under-ring and final [k] with an over-cross, [sok] would be written as s̥̽.

In a few abugidas, the vowels are basic and the consonants secondary. If no consonant is written in Pahawh Hmong, it is understood to be /k/; consonants are written after the vowel they precede in speech. In Japanese Braille, the vowels but not the consonants have independent status, and it is the vowels which are modified when the consonant is y or w.
The following list contains writing systems that are in active use by a population of at least 50,000.
These systems have not been deciphered. In some cases, such as Meroitic, the sound values of the glyphs are known, but the texts still cannot be read because the language is not understood. Several of these systems, such as the Isthmian script and the Indus script, are claimed to have been deciphered, but these claims have not been confirmed by independent researchers. In many cases it is doubtful that they are actually writing. The Vinča symbols appear to be proto-writing, and quipu may have recorded only numerical information. There are doubts that the Indus script is writing, and the Phaistos Disc has so little content or context that its nature is undetermined.

Comparatively recent manuscripts and other texts written in undeciphered (and often unidentified) writing systems; some of these may represent ciphers of known languages or hoaxes.

This section lists alphabets used to transcribe phonetic or phonemic sound, not to be confused with spelling alphabets like the ICAO spelling alphabet. Some of these are used for transcription purposes by linguists; others are pedagogical in nature or intended as general orthographic reforms.
Alphabets may exist in forms other than visible symbols on a surface. Some of these are:
See List of constructed scripts for an expanded version of this table.
|
https://en.wikipedia.org/wiki/List_of_writing_systems
|
The computer tool patch is a Unix program that updates text files according to instructions contained in a separate file, called a patch file. The patch file (also called a patch for short) is a text file that consists of a list of differences and is produced by running the related diff program with the original and updated file as arguments. Updating files with patch is often referred to as applying the patch or simply patching the files.

The original patch program was written by Larry Wall (who went on to create the Perl programming language) and posted to mod.sources[1] (which later became comp.sources.unix) in May 1985.

patch was added to XPG4, which later became POSIX.[2] Wall's code remains the basis of the "patch" programs provided in OpenBSD,[3] FreeBSD,[4] and schilytools.[5] The Open Software Foundation, which merged into The Open Group, is said to have maintained a derived version.

The GNU Project/FSF maintains its own patch, forked from the Larry Wall version. Its repository is separate from that of GNU diffutils, but the documentation is managed together.[6]

Developed by a programmer for other programmers, patch was frequently used for updating source code to a newer version. Because of this, many people came to associate patches with source code, whereas patches can in fact be applied to any text. Patched files do not accumulate any unneeded text, despite what the English meaning of the word might suggest; patch is as capable of removing text as it is of adding it.

Patches described here should not be confused with binary patches, which, although conceptually similar, are distributed to update the binary files comprising a program to a new release.
The diff files that serve as input to patch are readable text files, which means that they can be easily reviewed or modified by humans before use.
In addition to the diff program, diffs can also be produced by other programs, such as Subversion, CVS, RCS, Mercurial, and Git.

Patches have been a crucial component of many source control systems, including CVS.

When more advanced diffs are used, patches can be applied even to files that have been modified in the meantime, as long as those modifications do not interfere with the patch. This is achieved by using "context diffs" and "unified diffs" (also known as "unidiffs"), which surround each change with context, i.e. the text immediately before and after the changed part. Patch can then use this context to locate the region to be patched even if it has been displaced by changes earlier in the file, using the line numbers in the diffs as a starting point. Because of this property, context and unified diffs are the preferred form of patches for submission to many software projects.
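As an illustration, a minimal unified-diff hunk (with hypothetical file names and contents) looks like this: the "@@" header names the affected line ranges, context lines are prefixed with a space, and removed and added lines are prefixed with "-" and "+":

--- oldFile
+++ newFile
@@ -1,3 +1,3 @@
 first context line
-line to be removed
+line that replaces it
 last context line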
The above features make diff and patch especially popular for exchanging modifications to open-source software. Outsiders can download the latest publicly available source code, make modifications to it, and send them, in diff form, to the development team. Using diffs, the development team has the ability to effectively review the patches before applying them, and can apply them to a newer code base than the one the outside developer had access to.
To create a patch, one could run the following command in a shell:
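diff -u originalFile updatedFile > mods.diff

Here originalFile and updatedFile are placeholder names; the -u flag selects the unified diff format, and mods.diff is the patch file referred to below.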
To apply a patch, one could run the following command in a shell:
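patch < mods.diff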
This tells patch to apply the changes to the specified files described in mods.diff. Patches to files in subdirectories require the additional -p number option, where number is 1 if the base directory of the source tree is included in the diff, and 0 otherwise.
Patches can be undone, or reversed, with the '-R' option:
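patch -R < mods.diff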
In some cases, when the file is not identical to the version the diff was generated against, the patch will not apply cleanly. For example, if lines of text are inserted at the beginning, the line numbers referred to in the patch will be incorrect. patch is able to recover from this by looking at nearby lines to relocate the text to be patched. It will also recover when lines of context (for context and unified diffs) are altered; this is described as fuzz.

Originally written for Unix and Unix-like systems, patch has also been ported to Windows and many other platforms. Windows ports of patch are provided by GnuWin32 and UnxUtils.

A patch command is also part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2.[7]
|
https://en.wikipedia.org/wiki/Patch_(Unix)
|
In computer science, a readers–writer lock (also known as a single-writer lock,[1] a multi-reader lock,[2] a push lock,[3] or an MRSW lock) is a synchronization primitive that solves one of the readers–writers problems. An RW lock allows concurrent access for read-only operations, whereas write operations require exclusive access. This means that multiple threads can read the data in parallel, but an exclusive lock is needed for writing or modifying data. When a writer is writing the data, all other writers and readers will be blocked until the writer is finished writing. A common use might be to control access to a data structure in memory that cannot be updated atomically and is invalid (and should not be read by another thread) until the update is complete.
Readers–writer locks are usually constructed on top of mutexes and condition variables, or on top of semaphores.

Some RW locks allow the lock to be atomically upgraded from being locked in read-mode to write-mode, as well as being downgraded from write-mode to read-mode.[1] Upgrading a lock from read-mode to write-mode is prone to deadlocks, since whenever two threads holding reader locks both attempt to upgrade to writer locks, a deadlock is created that can only be broken by one of the threads releasing its reader lock. The deadlock can be avoided by allowing only one thread to acquire the lock in "read-mode with intent to upgrade to write" while there are no threads in write mode and possibly a non-zero number of threads in read-mode.

RW locks can be designed with different priority policies for reader vs. writer access. The lock can either be designed to always give priority to readers (read-preferring), to always give priority to writers (write-preferring), or to be unspecified with regard to priority. These policies lead to different tradeoffs with regard to concurrency and starvation.
Several implementation strategies for readers–writer locks exist, reducing them to synchronization primitives that are assumed to pre-exist.
Raynal demonstrates how to implement an R/W lock using two mutexes and a single integer counter. The counter, b, tracks the number of blocking readers. One mutex, r, protects b and is only used by readers; the other, g (for "global"), ensures mutual exclusion of writers. This requires that a mutex acquired by one thread can be released by another. The following is pseudocode for the operations:

Initialize
  Set b to 0.
  Both r and g are initially unlocked.

Begin Read
  Lock r.
  Increment b.
  If b = 1, lock g.
  Unlock r.

End Read
  Lock r.
  Decrement b.
  If b = 0, unlock g.
  Unlock r.

Begin Write
  Lock g.

End Write
  Unlock g.
This implementation is read-preferring.[4]: 76
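A minimal C sketch of this construction using POSIX threads is shown below. The type and function names are illustrative, not a standard API; note also that g can be released by the last reader, a different thread than the one that locked it, which strict POSIX mutexes do not guarantee to support (a binary semaphore would be the safer choice in production code):

#include <pthread.h>

typedef struct {
    pthread_mutex_t r;  /* protects b; used only by readers */
    pthread_mutex_t g;  /* "global" lock held during any reading or writing */
    int b;              /* number of readers currently inside */
} rw_lock_t;

void rw_init(rw_lock_t *l) {
    pthread_mutex_init(&l->r, NULL);
    pthread_mutex_init(&l->g, NULL);
    l->b = 0;
}

void rw_read_lock(rw_lock_t *l) {
    pthread_mutex_lock(&l->r);
    if (++l->b == 1)                 /* first reader locks out writers */
        pthread_mutex_lock(&l->g);
    pthread_mutex_unlock(&l->r);
}

void rw_read_unlock(rw_lock_t *l) {
    pthread_mutex_lock(&l->r);
    if (--l->b == 0)                 /* last reader lets writers in */
        pthread_mutex_unlock(&l->g);
    pthread_mutex_unlock(&l->r);
}

void rw_write_lock(rw_lock_t *l)   { pthread_mutex_lock(&l->g); }
void rw_write_unlock(rw_lock_t *l) { pthread_mutex_unlock(&l->g); }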
Alternatively, an RW lock can be implemented in terms of a condition variable, cond, an ordinary (mutex) lock, g, and various counters and flags describing the threads that are currently active or waiting.[7][8][9] For a write-preferring RW lock one can use two integer counters and one Boolean flag: num_readers_active (the number of readers that have acquired the lock), num_writers_waiting (the number of writers waiting for access), and writer_active (whether a writer has acquired the lock).

Initially num_readers_active and num_writers_waiting are zero and writer_active is false.
The lock and release operations can be implemented as
Begin Read
  Lock g.
  While num_writers_waiting > 0 or writer_active is true:
    Wait on cond (this releases g while waiting).
  Increment num_readers_active.
  Unlock g.

End Read
  Lock g.
  Decrement num_readers_active.
  If num_readers_active = 0:
    Notify all threads waiting on cond.
  Unlock g.

Begin Write
  Lock g.
  Increment num_writers_waiting.
  While num_readers_active > 0 or writer_active is true:
    Wait on cond (this releases g while waiting).
  Decrement num_writers_waiting.
  Set writer_active to true.
  Unlock g.

End Write
  Lock g.
  Set writer_active to false.
  Notify all threads waiting on cond.
  Unlock g.
The read-copy-update (RCU) algorithm is one solution to the readers–writers problem. RCU is wait-free for readers. The Linux kernel implements a special solution for the case of few writers called seqlock.
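For illustration, a minimal seqlock-style sketch in C11 follows. The names seqlock_t, write_value, and read_value are illustrative, not the Linux kernel's API, and a single writer (or external writer serialization) is assumed; real implementations need additional care about data races on the payload. The writer bumps a sequence counter before and after updating the data, so the counter is odd while a write is in progress; a reader retries until it sees the same even counter value before and after reading:

#include <stdatomic.h>

typedef struct {
    atomic_uint seq;   /* initialize to 0; even: idle, odd: write in progress */
    int data;          /* the protected value */
} seqlock_t;

void write_value(seqlock_t *l, int v) {
    atomic_fetch_add_explicit(&l->seq, 1, memory_order_acquire); /* seq becomes odd */
    l->data = v;
    atomic_fetch_add_explicit(&l->seq, 1, memory_order_release); /* seq becomes even */
}

int read_value(seqlock_t *l) {
    unsigned s;
    int v;
    do {
        s = atomic_load_explicit(&l->seq, memory_order_acquire);
        v = l->data;
    } while ((s & 1u) ||
             s != atomic_load_explicit(&l->seq, memory_order_acquire));
    return v;  /* retry until a consistent snapshot is obtained */
}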
|
https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock
|
The knowledge divide is the gap between those who can find, create, manage, process, and disseminate information or knowledge, and those who are impaired in this process. According to a 2005 UNESCO World Report, the rise in the 21st century of a global information society has resulted in the emergence of knowledge as a valuable resource, increasingly determining who has access to power and profit.[1] The rapid dissemination of information on a potentially global scale as a result of new information media[2] and the globally uneven ability to assimilate knowledge and information has resulted in potentially expanding gaps in knowledge between individuals and nations.[3] The digital divide is an extension of the knowledge divide, dividing people who have access to the internet and those who do not.[citation needed] The knowledge divide also represents the inequalities of knowledge among different identities, including but not limited to race, economic status, and gender.
In the 21st century, the knowledge society has become pervasive.[4] The world's economy and individual societies are transforming at a fast pace. Together with information and communication technologies (ICT), these new paradigms have the power to reshape the global economy.[5] In order to keep pace with innovations and to come up with new ideas, people need to produce and manage knowledge. This is why knowledge has become essential for all societies. Yet while knowledge has become essential due to the growth of new technologies, the increase of mass-media information continues to widen the knowledge divide between those with educational differences.[6]

According to UNESCO and the World Bank,[7] knowledge gaps between nations may occur due to the varying degrees by which individual nations incorporate the following elements:

First, it was noticed that a great difference exists between the North and the South (rich countries vs. poor countries). The development of knowledge depends on the spread of Internet and computer technology and also on the development of education in these countries. A country that has attained a higher literacy level will tend to have a higher level of knowledge.

Indeed, UNESCO's report details many social issues in the knowledge divide related to globalization. A knowledge divide was noticed with respect to

Scholars have proposed similar approaches to closing or minimizing the knowledge divide between individuals, communities, and nations. Providing access to computers and other technologies that disseminate knowledge is not enough to bridge the digital divide; rather, importance must be put on developing digital literacy.[28] Addressing the digital divide will not be enough to close the knowledge divide, as disseminating relevant knowledge also depends on training and cognitive skills.[29]
|
https://en.wikipedia.org/wiki/Knowledge_divide
|
Operating signals are a type of brevity code used in operational communication among radio and telegraph operators. For example:
|
https://en.wikipedia.org/wiki/Operating_signals
|
PowerShell is a shell program developed by Microsoft for task automation and configuration management. As is typical for a shell, it provides a command-line interpreter for interactive use and a script interpreter for automation via a language defined for it. Originally only for Windows, where it was known as Windows PowerShell, it was made open-source and cross-platform on August 18, 2016, with the introduction of PowerShell Core.[9] The former is built on the .NET Framework; the latter on .NET (previously .NET Core).

PowerShell is bundled with current versions of Windows and can be installed on macOS and Linux.[9] Since Windows 10 build 14971, PowerShell has replaced Command Prompt as the default command shell exposed by File Explorer.[10][11]
In PowerShell, administrative tasks are generally performed via cmdlets (pronounced command-lets), which are specialized .NET classes implementing a particular operation. These work by accessing data in different data stores, like the file system or Windows Registry, which are made available to PowerShell via providers. Third-party developers can add cmdlets and providers to PowerShell.[12][13] Cmdlets may be used by scripts, which may in turn be packaged into modules. Cmdlets work in tandem with the .NET API.

PowerShell's support for .NET Remoting, WS-Management, CIM, and SSH enables administrators to perform administrative tasks on both local and remote Windows systems. PowerShell also provides a hosting API with which the PowerShell runtime can be embedded inside other applications. These applications can then use PowerShell functionality to implement certain operations, including those exposed via the graphical interface. This capability has been used by Microsoft Exchange Server 2007 to expose its management functionality as PowerShell cmdlets and providers and to implement the graphical management tools as PowerShell hosts which invoke the necessary cmdlets.[12][14] Other Microsoft applications, including Microsoft SQL Server 2008, also expose their management interface via PowerShell cmdlets.[15]

PowerShell includes its own extensive, console-based help (similar to man pages in Unix shells) accessible via the Get-Help cmdlet. Updated local help contents can be retrieved from the Internet via the Update-Help cmdlet. Alternatively, help from the web can be acquired on a case-by-case basis via the -online switch to Get-Help.
Shell programs, including PowerShell, trace their lineage to shells in older operating systems such as MS-DOS and Xenix, which exposed system functionality to the user almost exclusively via a command-line interface (CLI) – although MS-DOS 5 also came with a complementary graphical DOS Shell. The Windows 9x family came bundled with COMMAND.COM, the command-line environment of MS-DOS. The Windows NT and Windows CE families, however, came with the newer cmd.exe – a significant upgrade from COMMAND.COM. Both environments provide a CLI for internal and external commands and automation via batch files – a relatively primitive language for scripting.

To address the limitations of these shells – including the inability to directly use a software component exposed via COM – Microsoft introduced the Windows Script Host in 1998 with Windows 98, along with its command-line based host, cscript.exe. It integrates with the Active Script engine and allows scripts to be written in compatible languages, such as JScript and VBScript. These scripts can use COM components directly, but the technology has relatively inaccessible documentation and gained a reputation as a system vulnerability vector after several high-profile computer viruses exploited weaknesses in its security provisions.

Different versions of Windows provided various special-purpose command-line interpreters (such as netsh and WMIC) with their own command sets, but they were not interoperable. Windows Server 2003 further attempted to improve the command-line experience, but scripting support was still unsatisfactory.[16]
By the late 1990s, Intel had come to Microsoft asking for help in making Windows, which ran on Intel CPUs, a more appropriate platform to support the development of future Intel CPUs. At the time, Intel CPU development was accomplished on Sun Microsystems computers which ran Solaris (a Unix variant) on RISC-architecture CPUs. The ability to run Intel's many KornShell automation scripts on Windows was identified as a key capability. Internally, Microsoft began an effort to create a Windows port of Korn Shell, which was code-named Kermit.[17] Intel ultimately pivoted to a Linux-based development platform that could run on Intel CPUs, rendering the Kermit project redundant. However, with a fully funded team, Microsoft program manager Jeffrey Snover realized there was an opportunity to create a more general-purpose solution to Microsoft's problem of administrative automation.

By 2002, Microsoft had started to develop a new approach to command-line management, including a CLI called Monad (also known as Microsoft Shell or MSH). The ideas behind it were published in August 2002 in a white paper called the "Monad Manifesto" by its chief architect, Jeffrey Snover.[18] In a 2017 interview, Snover explained the genesis of PowerShell, saying that he had been trying to make Unix tools available on Windows, which didn't work due to the "core architectural difference[s] between Windows and Linux". Specifically, he noted that Linux considers everything a text file, whereas Windows considers everything an "API that returns structured data". The two were fundamentally incompatible, which led him to take a different approach.[19]
Monad was to be a new extensible CLI with a fresh design capable of automating a range of core administrative tasks. Microsoft first demonstrated Monad publicly at the Professional Development Conference in Los Angeles in October 2003. A few months later, they opened up private beta, which eventually led to a public beta. Microsoft published the first Monad publicbeta releaseon June 17, 2005, and the Beta 2 on September 11, 2005, and Beta 3 on January 10, 2006.
On April 25, 2006, not long after the initial Monad announcement, Microsoft announced that Monad had been renamedWindows PowerShell, positioning it as a significant part of its management technology offerings.[20]Release Candidate (RC) 1 of PowerShell was released at the same time. A significant aspect of both the name change and the RC was that this was now a component of Windows, rather than a mere add-on.
Release Candidate 2 of PowerShell version 1 was released on September 26, 2006, with finalrelease to the webon November 14, 2006. PowerShell for earlier versions of Windows was released on January 30, 2007.[21]PowerShell v2.0 development began before PowerShell v1.0 shipped. During the development, Microsoft shipped threecommunity technology previews (CTP). Microsoft made these releases available to the public. The last CTP release of Windows PowerShell v2.0 was made available in December 2008.
PowerShell v2.0 was completed and released to manufacturing in August 2009, as an integral part of Windows 7 and Windows Server 2008 R2. Versions of PowerShell for Windows XP, Windows Server 2003, Windows Vista and Windows Server 2008 were released in October 2009 and are available for download for both 32-bit and 64-bit platforms.[22]In an October 2009 issue ofTechNet Magazine, Microsoft called proficiency with PowerShell "the single most important skill a Windowsadministratorwill need in the coming years".[23]
Windows 10 shipped with Pester, a script validation suite for PowerShell.[24]
On August 18, 2016, Microsoft announced[25]that they had made PowerShell open-source and cross-platform with support for Windows,macOS,CentOSandUbuntu.[9]The source code was published onGitHub.[26]The move to open source created a second incarnation of PowerShell called "PowerShell Core", which runs on.NET Core. It is distinct from "Windows PowerShell", which runs on the full.NET Framework.[27]Starting with version 5.1, PowerShell Core is bundled withWindows Server 2016 Nano Server.[28][29]
A project namedPash, apunon the widely known "bash" Unix shell, has been anopen-sourceandcross-platformreimplementation of PowerShell via theMono framework.[30]Pash was created by Igor Moochnick, written inC#and was released under theGNU General Public License. Pash development stalled in 2008, was restarted onGitHubin 2012,[31]and finally ceased in 2016 when PowerShell was officially made open-source and cross-platform.[32]
A key design goal for PowerShell was to leverage the large number ofAPIsthat already existed in Windows, Windows Management Instrumentation, .NET Framework, and other software. PowerShell cmdlets generally wrap and expose existing functionality instead of implementing new functionality. The intent was to provide an administrator-friendly, more-consistent interface between administrators and a wide range of underlying functionality. With PowerShell, an administrator doesn't need to know .NET, WMI, or low-level API coding, and can instead focus on using the cmdlets exposed by PowerShell. In this regard, PowerShell creates little new functionality, instead focusing on making existing functionality more accessible to a particular audience.[33]
PowerShell's developers based the core grammar of the tool on that of thePOSIX 1003.2KornShell.[34]
However, PowerShell's language was also influenced byPHP,Perl, and many other existing languages.[35]
PowerShell can execute four kinds of named commands: cmdlets, PowerShell functions, PowerShell scripts, and standalone executable programs.[36]
If a command is a standalone executable program, PowerShell launches it in a separateprocess; if it is a cmdlet, it executes in the PowerShell process. PowerShell provides an interactivecommand-line interface, where the commands can be entered and their output displayed. The user interface offers customizabletab completion. PowerShell enables the creation ofaliasesfor cmdlets, which PowerShell textually translates into invocations of the original commands. PowerShell supports bothnamedand positionalparametersfor commands. In executing a cmdlet, the job of binding the argument value to the parameter is done by PowerShell itself, but for external executables, arguments are parsed by the external executable independently of PowerShell interpretation.[37]
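To make the alias and parameter-binding behaviour concrete, a short hypothetical session is sketched below; dir and gci are aliases that ship with PowerShell, and the paths are illustrative:

Get-ChildItem -Path C:\Windows -Filter *.exe   # parameters bound by name
Get-ChildItem C:\Windows                       # -Path bound positionally
dir C:\Windows                                 # built-in alias, textually translated to Get-ChildItem
Set-Alias ll Get-ChildItem                     # a user-defined alias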
The PowerShellExtended Type System(ETS) is based on the .NET type system, but with extended semantics (for example, propertySets and third-party extensibility). For example, it enables the creation of different views of objects by exposing only a subset of the data fields, properties, and methods, as well as specifying custom formatting and sorting behavior. These views are mapped to the original object usingXML-based configuration files.[38]
A cmdlet is a .NETclassthat derives either fromCmdletor fromPSCmdlet; the latter used when it needs to interact with the PowerShell runtime.[39]The base classes specify methods –BeginProcessing(),ProcessRecord()andEndProcessing()– which a cmdlet overrides to provide functionality based on the events that these functions represent.ProcessRecord()is called if the object receives pipeline input.[40]If a collection of objects is piped, the method is invoked for each object in the collection. The cmdlet class must have theattributeCmdletAttributewhich specifies the verb and the noun that make up the name of the cmdlet.
A cmdlet name follows aVerb-Nounnaming pattern, such asGet-ChildItem, which tends to make itself-documented.[39]Common verbs are provided as anenum.[41][42]
If a cmdlet receives either pipeline input or command-line parameter input, there must be a correspondingpropertyin the class, with amutatorimplementation. PowerShell invokes the mutator with the parameter value or pipeline input, which is saved by the mutator implementation in class variables. These values are then referred to by the methods which implement the functionality. Properties that map to command-line parameters are marked byParameterAttribute[43]and are set before the call toBeginProcessing(). Those which map to pipeline input are also flanked byParameterAttribute, but with theValueFromPipelineattribute parameter set.[44]
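Although compiled cmdlets are .NET classes, the same life cycle and pipeline binding can be sketched with a PowerShell advanced function, whose begin/process/end blocks play the roles of the three methods named above; the function below (Measure-LineCount) is hypothetical and only meant to mirror the pattern:

function Measure-LineCount {
    [CmdletBinding()]
    param(
        # Analogous to a cmdlet property marked with ParameterAttribute and
        # ValueFromPipeline, so the value is bound from pipeline input.
        [Parameter(ValueFromPipeline = $true)]
        [string]$Path
    )
    begin   { $total = 0 }                            # plays the role of BeginProcessing()
    process { $total += @(Get-Content $Path).Count }  # ProcessRecord(), called once per piped object
    end     { $total }                                # EndProcessing(), emits the result
}
Get-ChildItem *.txt | ForEach-Object FullName | Measure-LineCount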
A cmdlet can use any .NET API and may be written in any .NET language. In addition, PowerShell makes certain APIs available, such as WriteObject(), which is used to access PowerShell-specific functionality, such as writing objects to the pipeline. A cmdlet can use a .NET data-access API directly or use the PowerShell infrastructure of Providers, which make data stores addressable using unique paths. Data stores are exposed using drive letters, and hierarchies within them, addressed as directories. PowerShell ships with providers for the file system, registry, the certificate store, as well as the namespaces for command aliases, variables, and functions.[45] PowerShell also includes various cmdlets for managing various Windows systems, including the file system, or using Windows Management Instrumentation to control Windows components. Other applications can register cmdlets with PowerShell, thus allowing it to manage them, and, if they enclose any datastore (such as a database), they can add specific providers as well.[citation needed]
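A few provider paths illustrate this drive-and-directory addressing; the drives shown (HKLM:, Cert:, Env:) are among those that ship with Windows PowerShell:

Get-PSDrive                               # list the drives exposed by the loaded providers
Get-ChildItem HKLM:\SOFTWARE\Microsoft    # browse the registry as if it were a directory tree
Get-ChildItem Cert:\CurrentUser\My        # certificates via the certificate provider
Get-ChildItem Env:                        # environment variables as a data store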
A cmdlet can be added to the shell via modules or, before version 2, via snap-ins. Users are not limited to the cmdlets included in the base PowerShell installation.
The number of cmdlets included in the base PowerShell install for various versions:
To enablepipelinesemantics, similar to theUnix pipeline, a cmdlet receives input and outputs result as objects. If a cmdlet outputs multiple objects, each object of the collection is passed through the pipeline before the next object is processed.[39]. A PowerShell pipeline enables complex logic using the pipe (|) operator to connect stages. However, the PowerShell pipeline differs from Unix pipelines in that stages executewithinthe PowerShell runtime rather than as a set of processes coordinated by theoperating system. Additionally, structured .NET objects, rather thanbyte streams, are passed from one stage to the next. Usingobjectsand executing stages within the PowerShell runtime eliminates the need toserializedata structures, or to extract them by explicitlyparsingtext output.[50]An object can alsoencapsulatecertain functions that work on the contained data, which become available to the recipient command for use.[51][52]For the last cmdlet in a pipeline, PowerShell automatically pipes its output object to theOut-Defaultcmdlet, which transforms the objects into a stream of format objects and then renders those to the screen.[53][54]
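A small hypothetical pipeline shows objects (here System.Diagnostics.Process instances) flowing between stages, with property names usable at every stage:

Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |   # filter on a property of the object
    Sort-Object WorkingSet64 -Descending |         # sort by the same property
    Select-Object Name, Id, WorkingSet64           # project a subset of properties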
Because a PowerShell object is a .NET object, it has a.ToString()method which is used to serialize object state. In addition, PowerShell allows formatting definitions to be specified, so the text representation of objects can be customized by choosing which data elements to display, and in what manner. However, in order to maintainbackward compatibility, if an external executable is used in a pipeline, it receives a text stream representing the object, instead of directly integrating with the PowerShell type system.[55][56][57]
PowerShell includes adynamically typedlanguage for scriptingwhich can implement complex operations using cmdletsimperatively. The language supports variables, functions, branching (if-then-else), loops (while,do,for, andforeach), structured error/exception handling andclosures/lambda expressions,[58]as well as integration with .NET. Variables in PowerShell scripts are prefixed with$. Variables can be assigned any value, including the output of cmdlets. Strings can be enclosed either in single quotes or in double quotes: when using double quotes, variables will be expanded even if they are inside the quotation marks. Enclosing the path to a file in braces preceded by a dollar sign (as in${C:\foo.txt}) creates a reference to the contents of the file. If it is used as anL-value, anything assigned to it will be written to the file. When used as anR-value, the contents of the file will be read. If an object is assigned, it is serialized before being stored.[citation needed]
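A short illustrative snippet; the file path reuses the ${C:\foo.txt} example from the text above:

$name  = 'world'                       # variables are prefixed with $
$files = Get-ChildItem *.log           # a variable can hold cmdlet output (objects)
foreach ($f in $files) { $f.Name }     # a scripting-language loop over those objects
${C:\foo.txt} = "hello, $name"         # assigning to the braced path writes the file
$text = ${C:\foo.txt}                  # reading it returns the file's contents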
Object members can be accessed using.notation, as in C# syntax. PowerShell provides special variables, such as$args, which is an array of all the command-line arguments passed to a function from the command line, and$_, which refers to the current object in the pipeline.[59]PowerShell also providesarraysandassociative arrays. The PowerShell language also evaluates arithmetic expressions entered on the command line immediately, and it parses common abbreviations, such as GB, MB, and KB.[60][61]
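Small examples of these constructs (the values are arbitrary):

$a = 1, 2, 3                         # an array
$h = @{ Name = 'svc'; Port = 443 }   # an associative array (hashtable)
$h.Port                              # member access with . notation
1..5 | Where-Object { $_ -gt 3 }     # $_ is the current pipeline object
2GB / 1MB                            # unit abbreviations are parsed; yields 2048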
Using thefunctionkeyword, PowerShell provides for the creation of functions. A simple function has the following general look:[62]
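The listing referred to above is not reproduced here; a minimal sketch of the general shape, with an invented name and body, is:

function Get-Greeting ($Name) {
    "Hello, $Name"                   # the last evaluated value becomes the output
}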
However, PowerShell allows for advanced functions that support named parameters, positional parameters, switch parameters and dynamic parameters.[62]
The defined function is invoked in either of the following forms:[62]
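The exact forms shown in the source are not reproduced above; for the hypothetical function sketched earlier they would correspond to calling it with a named or a positional argument:

Get-Greeting -Name 'PowerShell'      # named parameter
Get-Greeting 'PowerShell'            # positional parameter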
PowerShell allows any static .NET methods to be called by providing their namespaces enclosed in brackets ([]), and then using a pair of colons (::) to indicate the static method.[63]For example:
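The example itself is missing above; a few representative calls, all against standard .NET classes, are:

[System.Math]::Sqrt(2)               # class name in brackets, :: for the static member
[Math]::Pow(2, 10)                   # the System. prefix may be omitted
[System.IO.Path]::GetTempPath()
[Console]::WriteLine('hello')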
There are dozens of ways to create objects in PowerShell. Once created, one can access the properties and instance methods of an object using the.notation.[63]
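Two common ways of creating objects, shown as a hedged sketch (the type and property names are arbitrary):

$sb = New-Object System.Text.StringBuilder           # via the New-Object cmdlet
$sb = [System.Text.StringBuilder]::new()             # via the static ::new() method (PowerShell 5+)
$null = $sb.Append('hello')                          # instance method accessed with . notation
$obj = [pscustomobject]@{ Name = 'demo'; Size = 3 }  # an ad hoc object built from a hashtable
$obj.Name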
PowerShell acceptsstrings, both raw andescaped. A string enclosed between singlequotation marksis a raw string while a string enclosed between double quotation marks is an escaped string. PowerShell treats straight and curly quotes as equivalent.[64]
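A two-line illustration of the raw/escaped distinction:

$who = 'world'
'Hello, $who'        # raw string: prints the literal text Hello, $who
"Hello, $who"        # escaped string: $who is expanded, prints Hello, world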
The following list of special characters is supported by PowerShell:[65]
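The table of special characters is not reproduced above; a few of the backtick escape sequences, shown inside double-quoted strings, are:

"line one`nline two"       # `n  newline
"column1`tcolumn2"         # `t  tab
"a literal `` backtick"    # ``  escapes the backtick itself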
For error handling, PowerShell provides a .NET-basedexception-handlingmechanism. In case of errors, objects containing information about the error (Exceptionobject) are thrown, which are caught using thetry ... catchconstruct (although atrapconstruct is supported as well). PowerShell can be configured to silently resume execution, without actually throwing the exception; this can be done either on a single command, a single session or perpetually.[66]
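A hedged sketch of the constructs mentioned (the file path is hypothetical):

try {
    Get-Item 'C:\does\not\exist.txt' -ErrorAction Stop    # promote the error to a terminating one
}
catch {
    "Caught: $($_.Exception.Message)"                      # $_ holds the error record inside catch
}
finally {
    'Cleanup runs either way'
}
Get-Item 'C:\does\not\exist.txt' -ErrorAction SilentlyContinue   # silently resume for a single command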
Scripts written using PowerShell can be made to persist across sessions in either a.ps1file or a.psm1file (the latter is used to implement a module). Later, either the entire script or individual functions in the script can be used. Scripts and functions operate analogously with cmdlets, in that they can be used as commands in pipelines, and parameters can be bound to them. Pipeline objects can be passed between functions, scripts, and cmdlets seamlessly. To prevent unintentional running of scripts, script execution is disabled by default and must be enabled explicitly.[67]Enabling of scripts can be performed either at system, user or session level. PowerShell scripts can besignedto verify their integrity, and are subject toCode Access Security.[68]
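A short illustration of script and module use; the file names are invented, and changing the execution policy may require an elevated session:

Set-ExecutionPolicy RemoteSigned -Scope CurrentUser   # allow locally written scripts to run
. .\MyScript.ps1                                      # dot-source a .ps1 into the current session
Import-Module .\MyTools.psm1                          # load functions packaged as a module
Get-Command -Module MyTools                           # list the commands the module exported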
The PowerShell language supportsbinary prefixnotation similar to thescientific notationsupported by many programming languages in the C-family.[69]
One can also use PowerShell embedded in a management application, which uses the PowerShell runtime to implement the management functionality. For this, PowerShell provides amanagedhostingAPI. Via the APIs, the application can instantiate arunspace(one instantiation of the PowerShell runtime), which runs in the application'sprocessand is exposed as aRunspaceobject.[12]The state of the runspace is encased in aSessionStateobject. When the runspace is created, the PowerShell runtime initializes the instantiation, including initializing the providers and enumerating the cmdlets, and updates theSessionStateobject accordingly. The Runspace then must be opened for either synchronous processing or asynchronous processing. After that it can be used to execute commands.[citation needed]
To execute a command, a pipeline (represented by aPipelineobject) must be created and associated with the runspace. The pipeline object is then populated with the cmdlets that make up the pipeline. For sequential operations (as in a PowerShell script), a Pipeline object is created for each statement and nested inside another Pipeline object.[12]When a pipeline is created, PowerShell invokes the pipeline processor, which resolves the cmdlets into their respectiveassemblies(thecommand processor) and adds a reference to them to the pipeline, and associates them withInputPipe,OutputPipeandErrorOutputPipeobjects, to represent the connection with the pipeline. The types are verified and parameters bound usingreflection.[12]Once the pipeline is set up, the host calls theInvoke()method to run the commands, or its asynchronous equivalent,InvokeAsync(). If the pipeline has theWrite-Hostcmdlet at the end of the pipeline, it writes the result onto the console screen. If not, the results are handed over to the host, which might either apply further processing or display the output itself.[citation needed]
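Because the hosting types are ordinary .NET classes, the flow described above can be sketched from PowerShell itself (a real host would normally do this from C# or another .NET language); the pipeline built here is arbitrary:

$rs = [System.Management.Automation.Runspaces.RunspaceFactory]::CreateRunspace()
$rs.Open()                                                     # the runspace must be opened before use
$ps = [System.Management.Automation.PowerShell]::Create()
$ps.Runspace = $rs
$null = $ps.AddCommand('Get-Process').AddCommand('Sort-Object').AddArgument('CPU')
$results = $ps.Invoke()                                        # synchronous execution of the composed pipeline
$rs.Close()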
Microsoft Exchange Server2007 uses the hosting APIs to provide its management GUI. Each operation exposed in the GUI is mapped to a sequence of PowerShell commands (or pipelines). The host creates the pipeline and executes them. In fact, the interactive PowerShell console itself is a PowerShell host, whichinterpretsthe scripts entered at command line and creates the necessaryPipelineobjects and invokes them.[citation needed]
Desired State Configuration (DSC) allows for declaratively specifying how a software environment should be configured.[70]
Upon running aconfiguration, DSC will ensure that the system gets the state described in the configuration. DSC configurations are idempotent. TheLocal Configuration Manager(LCM) periodically polls the system using the control flow described byresources(imperative pieces of DSC) to make sure that the state of a configuration is maintained.
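A minimal DSC sketch; the configuration name, node, and file path are illustrative, and applying it requires the DSC infrastructure to be available:

Configuration ExampleConfig {
    Node 'localhost' {
        File ExampleFile {
            DestinationPath = 'C:\Temp\example.txt'
            Contents        = 'managed by DSC'
            Ensure          = 'Present'
        }
    }
}
ExampleConfig -OutputPath C:\Temp\ExampleConfig                   # compiles the configuration to a MOF document
Start-DscConfiguration -Path C:\Temp\ExampleConfig -Wait -Verbose # the LCM applies and then maintains this state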
All major releases are still supported, and each major release has featured backwards compatibility with preceding versions.[dubious–discuss]
Initially using the code name "Monad", PowerShell was first shown publicly at the Professional Developers Conference in October 2003 in Los Angeles.
Named Windows PowerShell, version 1.0 was released in November 2006 forWindows XP SP2,Windows Server 2003 SP1andWindows Vista[71]and as an optional component ofWindows Server 2008.
Version 2.0 integrates withWindows 7andWindows Server 2008 R2[72]and is released forWindows XPwith Service Pack 3,Windows Server 2003with Service Pack 2, andWindows Vistawith Service Pack 1.[73][74]
The version includes changes to the language and hosting API, in addition to including more than 240 new cmdlets.[75][76]
New features include:[77][78][79]
Version 3.0 integrates withWindows 8,Windows Server 2012,Windows 7with Service Pack 1,Windows Server 2008with Service Pack 1, andWindows Server 2008 R2with Service Pack 1.[84][85]
Version 3.0 is part of a larger package,Windows Management Framework3.0 (WMF3), which also contains theWinRMservice to support remoting.[85]Microsoft made severalCommunity Technology Previewreleases of WMF3. An early community technology preview 2 (CTP 2) version of Windows Management Framework 3.0 was released on December 2, 2011.[86]Windows Management Framework 3.0 was released for general availability in December 2012[87]and is included with Windows 8 and Windows Server 2012 by default.[88]
New features include:[85][89]: 33–34
Version 4.0 integrates withWindows 8.1,Windows Server 2012 R2,Windows 7 SP1,Windows Server 2008 R2SP1 andWindows Server 2012.[90]
New features include:
Version 5.0 was re-released with Windows Management Framework (WMF) 5.0 on February 24, 2016, following an initial release with a severe bug.[94]
Key features included:
Version 5.1 was released along with the Windows 10 Anniversary Update[97] on August 2, 2016, and in Windows Server 2016.[98] PackageManagement now supports proxies, PSReadLine now has ViMode support, and two new cmdlets were added: Get-TimeZone and Set-TimeZone. The LocalAccounts module allows for adding/removing local user accounts.[99] A preview for Windows 7, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 was released on July 16, 2016,[100] and the final version was released on January 19, 2017.[101]
Version 5.1 is the first to come in two editions, "Desktop" and "Core". The "Desktop" edition is the continuation of the product line that uses the .NET Framework, and the "Core" edition runs on .NET Core and is bundled with Windows Server 2016 Nano Server. In exchange for a smaller footprint, the latter lacks some features such as the cmdlets to manage the clipboard or join a computer to a domain, WMI version 1 cmdlets, Event Log cmdlets and profiles.[29] This was the final version exclusively for Windows. Version 5.1 remains pre-installed on Windows 10, Windows 11 and Windows Server 2022, while the newer .NET (Core)-based PowerShell needs to be installed separately and can run side-by-side with the .NET Framework version.[102][103]
Renamed to PowerShell Core, version 6.0 was first announced on August 18, 2016, when Microsoft unveiled its decision to make the productcross-platform, independent of Windows, free and open source.[9]It achievedgeneral availabilityon January 10, 2018, for Windows,macOSandLinux.[104]It has its own support lifecycle and adheres to the Microsoft lifecycle policy that is introduced with Windows 10: Only the latest version of PowerShell Core is supported. Microsoft expects to release one minor version for PowerShell Core 6.0 every six months.[105]
The most significant change in this version is the expansion to the other platforms. For Windows administrators, this version did not include any major new features. In an interview with the community on January 11, 2018, the development team was asked to list the top 10 most exciting things that would happen for a Windows IT professional who would migrate from version 5.1 to version 6.0. In response, Angel Calvo of Microsoft could only name two: cross-platform and open-source.[106]PowerShell 6 changed toUTF-8as default encoding, with some exceptions.[107](version 7.4 changes more to UTF-8)[108]
According to Microsoft, one of the new features of version 6.1 is "Compatibility with 1900+ existing cmdlets in Windows 10 andWindows Server 2019."[109]Still, no details of these cmdlets can be found in the full version of the change log.[110]Microsoft later professes that this number was insufficient as PowerShell Core failed to replace Windows PowerShell 5.1 and gain traction on Windows.[111]It was, however, popular on Linux.[111]
Version 6.2 is focused primarily on performance improvements, bug fixes, and smaller cmdlet and language enhancements that improved developer productivity.[112]
Renamed to simply PowerShell, version 7 replaces the previous product lines: PowerShell Core and Windows PowerShell.[113][111]The focus in development was to make version 7 a viable replacement for version 5.1, i.e. to have near parity with it in terms of compatibility with modules that ship with Windows.[114]
New features include:[115]
Version 7.2 is the next long-term support version, after version 7.0. It uses .NET 6.0 and features universal installer packages for Linux. On Windows, updates to version 7.2 and later come via theMicrosoft Updateservice; this feature has been missing from versions 6.0 through 7.1.[116]
Version 7.3 includes some general Cmdlet updates and fixes, testing for framework dependent package in release pipeline as well as build and packaging improvements.[117]
Version 7.4 is based on .NET 8 and is considered the long term support (LTS) release.[118]
Changes include:[119]
Version 7.5, released in January 2025 and built on .NET 9.0.1, is the latest stable release. It includes enhancements for performance, usability, and security.[120] Key updates include improvements to tab completion, such as better type inference and new argument completers, as well as fixes for Invoke-WebRequest and Invoke-RestMethod. This release also adds the new ConvertTo-CliXml and ConvertFrom-CliXml cmdlets, and updates core modules like PSReadLine and Microsoft.PowerShell.PSResourceGet. Breaking changes include updates to Test-Path parameter handling and default settings for New-FileCatalog.
Prior to the GA release there were five preview releases and one release candidate of PowerShell v7.5.0,[121] with a full release blog post for this version expected soon.
Version 7.6 is based on .NET 9 and is the latest preview release. The first preview release v7.6.0-preview.2[122]was released on 2025-01-15.
Changes include: TBD[123]
The following table contains various cmdlets that ship with PowerShell that have notably similar functionality to commands in other shells. Many of these cmdlets are exposed to the user via predefined aliases to make their use familiar to users of the other shells.
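The table itself is not reproduced here, but the alias mechanism it relies on can be inspected directly (the aliases listed are those shipped with Windows PowerShell; several Unix-style ones are absent from PowerShell on Linux and macOS):

Get-Alias dir, ls, cat, cp, man        # each resolves to a command such as Get-ChildItem or Get-Content
Get-Alias -Definition Get-ChildItem    # every alias that points at a given cmdlet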
Notes
|
https://en.wikipedia.org/wiki/PowerShell
|
Roman Vatslavovich Malinovsky(Russian:Рома́н Ва́цлавович Малино́вский; 18 March 1876 – 5 November 1918) was a prominentBolshevikpolitician before theRussian revolution, while at the same time working as the best-paid agent for theOkhrana, the Tsarist secret police. They codenamed him 'Portnoi' (the tailor).
He was a brilliant orator, tall, red-haired, yellow-eyed and pockmarked,[1]"robust, ruddy complexioned, vigorous, excitable, a heavy drinker, a gifted leader of men."[2]
Malinovsky was born inPlotskprovince,Poland, at the time part of theRussian Empire. His parents were ethnicPolishpeasants, who died while he was still a child. He was jailed for several robberies from 1894 to 1899, for which he spent three years in prison and was also charged with attempted rape. In 1902, he enlisted in the prestigiousIzmaylovsky Regimentby impersonating a cousin with the same name.[3]Malinovsky began as an Okhrana agent within the regiment, reporting on fellow soldiers and officers. He was discharged from the army at the end of theRusso-Japanese Warand relocated toSaint Petersburg.
In 1906, he found a job as a lathe operator and joined the Petersburg Metalworkers' Union and theRussian Social Democratic Labour Party(RSDLP). Initially, he was inclined to support theMensheviks, who believed in trade union autonomy, rather than theBolshevikfaction, who sought to control the union. He was arrested five times as a union activist, but his Okhrana handlers arranged each time for him to be released without arousing suspicion.[4]Exiled from St Petersburg in 1910, he moved to Moscow. Here, for the first time, he was awarded a regular salary as a police informer, to supplement his wages as a metal turner, and was instructed by the Okhrana DirectorS. P. Beletskyto ensure that the different factions of the RSDLP never reunited. Malinovsky, therefore, joined the Bolsheviks. In January 1912, he travelled to Prague, whereVladimir Leninhad organised a conference to finalise the break with the Mensheviks and create a separate Bolshevik organisation. He made such a good impression on Lenin that he was elected to the Central Committee, and chosen to represent the Bolsheviks in the forthcoming elections to theFourth Duma, to which he was elected as its most prominent working-class deputy, in November 1912. He was simultaneously the Okhrana's best-paid agent, earning 8,000 rubles a year, 1,000 more than the Director of the Imperial Police.[5]He led the six-member Bolshevik group (two of whom were Okhrana agents) and was deputy chairman of the Social Democrats in the Duma. As a secret agent, he helped send several important Bolsheviks (likeSergo Ordzhonikidze,Joseph Stalin, andYakov Sverdlov) into Siberian exile.
In November 1912, he visited Lenin inKrakówand was urged not to unite with the Mensheviks. Malinovsky ignored that by reading a conciliatory speech in the Duma, to throw any suspicion off of himself.[6]On 28 December 1912, he attended aCentral Committeemeeting inVienna. He persuaded Lenin to appoint an Okhrana agent,Miron Chernomazov, as editor ofPravdaas opposed to Stalin's candidateStepan Shahumyan. The tsarist regime was determined to keep the RSDLP split, meaning that conciliators and pro-party groups were targeted for sabotage, whileliquidatorsandrecallistswere encouraged.
WhenMenshevikleaderJulius Martovfirst denounced Malinovsky as a spy in January 1913, Lenin refused to believe him and stood by Malinovsky. The accusing article was signed Ts, short for Tsederbaum, Martov's real name. Stalin threatened Martov's sister and brother-in-law, Lydia andFedor Danby saying they would regret it if the Mensheviks denounced Malinovsky.[7]
Malinovsky's efforts helped the Okhrana arrest Sergo Ordzhonikidze (14 April 1912), Yakov Sverdlov (10 February 1913) and Stalin (23 February 1913). The latter was arrested at a Bolshevik fundraising ball, which Malinovsky had persuaded him to attend by lending him a suit and silk cravat. Malinovsky was talking to Stalin when detectives took him away, and even shouted that he would free him.[8]
In July 1913, he betrayed a plan for Sverdlov and Stalin to escape, warning the police chief inTurukhansk. He was then the only Bolshevik leader not in foreign or Siberian exile. Soon after this foiled escape plan, Stalin came over to Martov's view and strongly suspected Malinovsky to be an Okhrana spy, which was confirmed correct years later, fuelling Stalin's future distrust of his comrades.
On 8 May 1914, he was forced to resign from the Duma after Russia's recently promoted Deputy Minister for the Interior, GeneralVladimir Dzhunkovsky, decided that having a police agent in such a prominent position might cause a scandal.[9]He was given a pay off of 6,000 roubles, and ordered to leave the country. He joined Lenin in Kraków, where a Bolshevik commission looked into rumours that he was a police spy. Despite testimony fromNikolai BukharinandElena Troyanovskaya, who both suspected that they had been betrayed to the police by Malinovsky when they were arrested in Moscow, respectively in 1910 and 1912, the commission accepted Malinovsky's story that he had been forced to resign when the police had blackmailed him by threatening to publicise the old charge of attempted rape.[10]When World War I broke out, he was interned in a POW camp by the Germans. Lenin, still standing by him, sent him clothes. He said: "If he is a provocateur, the police gained less from it than our Party did." This refers to his strong anti-Menshevism. Eventually, Lenin changed his mind: "What a swine: shooting's too good for him!"[11]
In 1918, he tried to join thePetrograd Soviet, butGrigory Zinovievrecognized him. In November, after a brief trial, Malinovsky was executed by a firing squad.
According to the British historianSimon Sebag Montefiore, his successful infiltration into the Bolsheviks helped fuel the paranoia of the Soviets (and, more specifically, Stalin) that eventually gave way to theGreat Terror.
According to the transcribed recollections of Nikolay Vladimirovich Veselago, a former Okhrana officer and relative of the director of the Russian police departmentStepan Petrovich Beletsky, both Malinovsky and Stalin reported on Lenin as well as on each other although Stalin was unaware that Malinovosky was also a penetration agent.[12][13][14]
|
https://en.wikipedia.org/wiki/Roman_Malinovsky
|
Relevance logic, also calledrelevant logic, is a kind ofnon-classical logicrequiring theantecedentandconsequentofimplicationsto be relevantly related. They may be viewed as a family ofsubstructuralormodallogics. It is generally, but not universally, calledrelevant logicby British and, especially, Australianlogicians, andrelevance logicby American logicians.
Relevance logic aims to capture aspects of implication that are ignored by the "material implication" operator in classical truth-functional logic, namely the notion of relevance between antecedent and consequent of a true implication. This idea is not new: C. I. Lewis was led to invent modal logic, and specifically strict implication, on the grounds that classical logic grants paradoxes of material implication such as the principle that a falsehood implies any proposition.[1][2] Hence "if I'm a donkey, then two and two is four" is true when translated as a material implication, yet it seems intuitively false since a true implication must tie the antecedent and consequent together by some notion of relevance. And whether or not the speaker is a donkey seems in no way relevant to whether two and two is four.
In terms of a syntactical constraint for apropositional calculus, it is necessary, but not sufficient, that premises and conclusion shareatomic formulae(formulae that do not contain anylogical connectives). In apredicate calculus, relevance requires sharing of variables and constants between premises and conclusion. This can be ensured (along with stronger conditions) by, e.g., placing certain restrictions on the rules of a natural deduction system. In particular, a Fitch-stylenatural deductioncan be adapted to accommodate relevance by introducing tags at the end of each line of an application of an inference indicating the premises relevant to the conclusion of the inference.Gentzen-stylesequent calculican be modified by removing the weakening rules that allow for the introduction of arbitrary formulae on the right or left side of thesequents.
A notable feature of relevance logics is that they areparaconsistent logics: the existence of a contradiction will not necessarily cause an "explosion." This follows from the fact that a conditional with a contradictory antecedent that does not share any propositional or predicate letters with the consequent cannot be true (or derivable).
Relevance logic was proposed in 1928 by Soviet philosopherIvan E. Orlov(1886 – circa 1936) in his strictly mathematical paper "The Logic of Compatibility of Propositions" published inMatematicheskii Sbornik. The basic idea of relevant implication appears in medieval logic, and some pioneering work was done byAckermann,[3]Moh,[4]andChurch[5]in the 1950s. Drawing on them,Nuel BelnapandAlan Ross Anderson(with others) wrote themagnum opusof the subject,Entailment: The Logic of Relevance and Necessityin the 1970s (the second volume being published in the nineties). They focused on both systems ofentailmentand systems of relevance, where implications of the former kinds are supposed to be both relevant and necessary.
The early developments in relevance logic focused on the stronger systems. The development of the Routley–Meyer semantics brought out a range of weaker logics. The weakest of these logics is the relevance logic B. It is axiomatized with the following axioms and rules.
The rules are the following.
Stronger logics can be obtained by adding any of the following axioms.
There are some notable logics stronger than B that can be obtained by adding axioms to B as follows.
The standard model theory for relevance logics is the Routley-Meyer ternary-relational semantics developed byRichard RoutleyandRobert Meyer. A Routley–Meyer frame F for a propositional language is a quadruple (W,R,*,0), where W is a non-empty set, R is a ternary relation on W, and * is a function from W to W, and0∈W{\displaystyle 0\in W}. A Routley-Meyer model M is a Routley-Meyer frame F together with a valuation,⊩{\displaystyle \Vdash }, that assigns a truth value to each atomic proposition relative to each pointa∈W{\displaystyle a\in W}. There are some conditions placed on Routley-Meyer frames. Definea≤b{\displaystyle a\leq b}asR0ab{\displaystyle R0ab}.
WriteM,a⊩A{\displaystyle M,a\Vdash A}andM,a⊮A{\displaystyle M,a\nVdash A}to indicate that the formulaA{\displaystyle A}is true, or not true, respectively, at pointa{\displaystyle a}inM{\displaystyle M}.
One final condition on Routley-Meyer models is the hereditariness condition.
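The condition itself does not appear in the text above; in the standard formulation it is stated for atomic propositions (the extension to complex formulas being the inductive claim of the next sentence): if $a \leq b$ and $M, a \Vdash p$, then $M, b \Vdash p$, for every propositional variable $p$.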
By an inductive argument, hereditariness can be shown to extend to complex formulas, using the truth conditions below.
The truth conditions for complex formulas are as follows.
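These clauses are not reproduced above; the standard Routley–Meyer conditions, sketched here in the notation already introduced ($*$ and $R$ are the frame's involution and ternary relation), are: $M, a \Vdash A \land B$ iff $M, a \Vdash A$ and $M, a \Vdash B$; $M, a \Vdash A \lor B$ iff $M, a \Vdash A$ or $M, a \Vdash B$; $M, a \Vdash \lnot A$ iff $M, a^{*} \nVdash A$; and $M, a \Vdash A \to B$ iff for all $b, c$ with $Rabc$, if $M, b \Vdash A$ then $M, c \Vdash B$.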
A formulaA{\displaystyle A}holds in a modelM{\displaystyle M}just in caseM,0⊩A{\displaystyle M,0\Vdash A}. A formulaA{\displaystyle A}holds on a frameF{\displaystyle F}iff A holds in every model(F,⊩){\displaystyle (F,\Vdash )}. A formulaA{\displaystyle A}is valid in a class of frames iff A holds on every frame in that class.
The class of all Routley–Meyer frames satisfying the above conditions validates that relevance logic B. One can obtain Routley-Meyer frames for other relevance logics by placing appropriate restrictions on R and on *. These conditions are easier to state using some standard definitions. LetRabcd{\displaystyle Rabcd}be defined as∃x(Rabx∧Rxcd){\displaystyle \exists x(Rabx\land Rxcd)}, and letRa(bc)d{\displaystyle Ra(bc)d}be defined as∃x(Rbcx∧Raxd){\displaystyle \exists x(Rbcx\land Raxd)}. Some of the frame conditions and the axioms they validate are the following.
The last two conditions validate forms of weakening that relevance logics were originally developed to avoid. They are included to show the flexibility of the Routley–Meyer models.
Operational models for negation-free fragments of relevance logics were developed byAlasdair Urquhartin his PhD thesis and in subsequent work. The intuitive idea behind the operational models is that points in a model are pieces of information, and combining information supporting a conditional with the information supporting its antecedent yields some information that supports the consequent. Since the operational models do not generally interpret negation, this section will consider only languages with a conditional, conjunction, and disjunction.
An operational frameF{\displaystyle F}is a triple(K,⋅,0){\displaystyle (K,\cdot ,0)}, whereK{\displaystyle K}is a non-empty set,0∈K{\displaystyle 0\in K}, and⋅{\displaystyle \cdot }is a binary operation onK{\displaystyle K}. Frames have conditions, some of which may be dropped to model different logics. The conditions Urquhart proposed to model the conditional of the relevance logic R are the following.
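The conditions are not listed in the text above; in the usual presentation (a sketch to be checked against Urquhart's original) they make $(K, \cdot, 0)$ an idempotent commutative monoid: $x \cdot x = x$, $x \cdot y = y \cdot x$, $(x \cdot y) \cdot z = x \cdot (y \cdot z)$, and $0 \cdot x = x$.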
Under these conditions, the operational frame is ajoin-semilattice.
An operational modelM{\displaystyle M}is a frameF{\displaystyle F}with a valuationV{\displaystyle V}that maps pairs of points and atomic propositions to truth values, T or F.V{\displaystyle V}can be extended to a valuation⊩{\displaystyle \Vdash }on complex formulas as follows.
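The clauses themselves are omitted above; a standard sketch of the extension (with $p$ atomic) is: $M, a \Vdash p$ iff $V(a, p) = T$; $M, a \Vdash A \land B$ iff $M, a \Vdash A$ and $M, a \Vdash B$; $M, a \Vdash A \lor B$ iff $M, a \Vdash A$ or $M, a \Vdash B$; and $M, a \Vdash A \to B$ iff for all $b \in K$, if $M, b \Vdash A$ then $M, a \cdot b \Vdash B$.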
A formulaA{\displaystyle A}holds in a modelM{\displaystyle M}iffM,0⊩A{\displaystyle M,0\Vdash A}. A formulaA{\displaystyle A}is valid in a class of modelsC{\displaystyle C}iff it holds in each modelM∈C{\displaystyle M\in C}.
The conditional fragment of R is sound and complete with respect to the class of semilattice models. The logic with conjunction and disjunction is properly stronger than the conditional, conjunction, disjunction fragment of R. In particular, the formula (A→(B∨C))∧(B→C)→(A→C){\displaystyle (A\to (B\lor C))\land (B\to C)\to (A\to C)} is valid for the operational models but it is invalid in R. The logic generated by the operational models for R has a complete axiomatic proof system, due to Kit Fine and to Gerald Charlwood. Charlwood also provided a natural deduction system for the logic, which he proved equivalent to the axiomatic system. Charlwood showed that his natural deduction system is equivalent to a system provided by Dag Prawitz.
The operational semantics can be adapted to model the conditional of E by adding a non-empty set of worldsW{\displaystyle W}and an accessibility relation≤{\displaystyle \leq }onW×W{\displaystyle W\times W}to the frames. The accessibility relation is required to be reflexive and transitive, to capture the idea that E's conditional has an S4 necessity. The valuations then map triples of atomic propositions, points, and worlds to truth values. The truth condition for the conditional is changed to the following.
The operational semantics can be adapted to model the conditional of T by adding a relation≤{\displaystyle \leq }onK×K{\displaystyle K\times K}. The relation is required to obey the following conditions.
The truth condition for the conditional is changed to the following.
There are two ways to model the contraction-less relevance logics TW and RW with the operational models. The first way is to drop the condition thatx⋅x=x{\displaystyle x\cdot x=x}. The second way is to keep the semilattice conditions on frames and add a binary relation,J{\displaystyle J}, of disjointness to the frame. For these models, the truth conditions for the conditional is changed to the following, with the addition of the ordering in the case of TW.
Urquhart showed that the semilattice logic for R is properly stronger than the positive fragment of R. Lloyd Humberstone provided an enrichment of the operational models that permitted a different truth condition for disjunction. The resulting class of models generates exactly the positive fragment of R.
An operational frameF{\displaystyle F}is a quadruple(K,⋅,+,0){\displaystyle (K,\cdot ,+,0)}, whereK{\displaystyle K}is a non-empty set,0∈K{\displaystyle 0\in K}, and {⋅{\displaystyle \cdot },+{\displaystyle +}} are binary operations onK{\displaystyle K}. Leta≤b{\displaystyle a\leq b}be defined as∃x(a+x=b){\displaystyle \exists x(a+x=b)}. The frame conditions are the following.
An operational modelM{\displaystyle M}is a frameF{\displaystyle F}with a valuationV{\displaystyle V}that maps pairs of points and atomic propositions to truth values, T or F.V{\displaystyle V}can be extended to a valuation⊩{\displaystyle \Vdash }on complex formulas as follows.
A formulaA{\displaystyle A}holds in a modelM{\displaystyle M}iffM,0⊩A{\displaystyle M,0\Vdash A}. A formulaA{\displaystyle A}is valid in a class of modelsC{\displaystyle C}iff it holds in each modelM∈C{\displaystyle M\in C}.
The positive fragment of R is sound and complete with respect to the class of these models. Humberstone's semantics can be adapted to model different logics by dropping or adding frame conditions as follows.
Some relevance logics can be given algebraic models, such as the logic R. The algebraic structures for R are de Morgan monoids, which are sextuples(D,∧,∨,¬,∘,e){\displaystyle (D,\land ,\lor ,\lnot ,\circ ,e)}where
The operationx→y{\displaystyle x\to y}interpreting the conditional of R is defined as¬(x∘¬y){\displaystyle \lnot (x\circ \lnot y)}.
A de Morgan monoid is aresiduated lattice, obeying the following residuation condition.
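The condition itself is omitted above; stated as usual for residuated structures, it reads: $x \circ y \leq z$ if and only if $x \leq y \to z$.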
An interpretationv{\displaystyle v}is ahomomorphismfrom the propositional language to a de Morgan monoidM{\displaystyle M}such that
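The defining clauses are omitted above; in the standard formulation the homomorphism condition amounts to: $v(A \land B) = v(A) \land v(B)$, $v(A \lor B) = v(A) \lor v(B)$, $v(\lnot A) = \lnot v(A)$, and $v(A \to B) = v(A) \to v(B)$.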
Given a de Morgan monoidM{\displaystyle M}and an interpretationv{\displaystyle v}, one can say that formulaA{\displaystyle A}holds onv{\displaystyle v}just in casee≤v(A){\displaystyle e\leq v(A)}. A formulaA{\displaystyle A}is valid just in case it holds on all interpretations on all de Morgan monoids. The logic R is sound and complete for de Morgan monoids.
|
https://en.wikipedia.org/wiki/Relevance_logic
|
Probability distribution fittingor simplydistribution fittingis the fitting of aprobability distributionto a series of data concerning the repeated measurement of a variable phenomenon.
The aim of distribution fitting is topredicttheprobabilityor toforecastthefrequencyof occurrence of the magnitude of the phenomenon in a certain interval.
There are many probability distributions (seelist of probability distributions) of which some can be fitted more closely to the observed frequency of the data than others, depending on the characteristics of the phenomenon and of the distribution. The distribution giving a close fit is supposed to lead to good predictions.
In distribution fitting, therefore, one needs to select a distribution that suits the data well.
The selection of the appropriate distribution depends on the presence or absence of symmetry of the data set with respect to thecentral tendency.
Symmetrical distributions
When the data are symmetrically distributed around the mean while the frequency of occurrence of data farther away from the mean diminishes, one may for example select thenormal distribution, thelogistic distribution, or theStudent's t-distribution. The first two are very similar, while the last, with one degree of freedom, has "heavier tails" meaning that the values farther away from the mean occur relatively more often (i.e. thekurtosisis higher). TheCauchy distributionis also symmetric.
Skew distributions to the right
When the larger values tend to be farther away from the mean than the smaller values, one has a skew distribution to the right (i.e. there is positive skewness); one may for example select the log-normal distribution (i.e. the log values of the data are normally distributed), the log-logistic distribution (i.e. the log values of the data follow a logistic distribution), the Gumbel distribution, the exponential distribution, the Pareto distribution, the Weibull distribution, the Burr distribution, or the Fréchet distribution. The last four distributions are bounded to the left.
Skew distributions to the left
When the smaller values tend to be farther away from the mean than the larger values, one has a skew distribution to the left (i.e. there is negative skewness); one may for example select the square-normal distribution (i.e. the normal distribution applied to the square of the data values),[1] the inverted (mirrored) Gumbel distribution,[1] the Dagum distribution (mirrored Burr distribution), or the Gompertz distribution, which is bounded to the left.
The following techniques of distribution fitting exist:[2]
It is customary to transform data logarithmically to fit symmetrical distributions (like thenormalandlogistic) to data obeying a distribution that is positively skewed (i.e. skew to the right, withmean>mode, and with a right hand tail that is longer than the left hand tail), seelognormal distributionand theloglogistic distribution. A similar effect can be achieved by taking the square root of the data.
To fit a symmetrical distribution to data obeying a negatively skewed distribution (i.e. skewed to the left, with mean < mode, and with a right-hand tail that is shorter than the left-hand tail) one could use the squared values of the data to accomplish the fit.
More generally one can raise the data to a powerpin order to fit symmetrical distributions to data obeying a distribution of any skewness, wherebyp< 1 when the skewness is positive andp> 1 when the skewness is negative. The optimal value ofpis to be found by anumerical method. The numerical method may consist of assuming a range ofpvalues, then applying the distribution fitting procedure repeatedly for all the assumedpvalues, and finally selecting the value ofpfor which the sum of squares of deviations of calculated probabilities from measured frequencies (chi squared) is minimum, as is done inCumFreq.
The generalization enhances the flexibility of probability distributions and increases their applicability in distribution fitting.[6]
The versatility of generalization makes it possible, for example, to fit approximately normally distributed data sets to a large number of different probability distributions,[7]while negatively skewed distributions can be fitted to
square normal and mirrored Gumbel distributions.[8]
Skewed distributions can be inverted (or mirrored) by replacing in the mathematical expression of thecumulative distribution function(F) by its complement: F'=1-F, obtaining thecomplementary distribution function(also calledsurvival function) that gives a mirror image. In this manner, a distribution that is skewed to the right is transformed into a distribution that is skewed to the left and vice versa.
The technique of skewness inversion increases the number of probability distributions available for distribution fitting and enlarges the distribution fitting opportunities.
Some probability distributions, like the exponential, do not support negative data values (X). Yet, when negative data are present, such distributions can still be used by replacing X by Y = X − Xm, where Xm is the minimum value of X. This replacement represents a shift of the probability distribution in the positive direction, i.e. to the right, because Xm is negative. After completing the distribution fitting of Y, the corresponding X-values are found from X = Y + Xm, which represents a back-shift of the distribution in the negative direction, i.e. to the left. The technique of distribution shifting augments the chance to find a properly fitting probability distribution.
The option exists to use two different probability distributions, one for the lower data range and one for the higher, as for example with the Laplace distribution. The ranges are separated by a break-point. The use of such composite (discontinuous) probability distributions can be opportune when the data of the phenomenon studied were obtained under two different sets of conditions.[6]
Predictions of occurrence based on fitted probability distributions are subject touncertainty, which arises from the following conditions:
An estimate of the uncertainty in the first and second case can be obtained with thebinomial probability distributionusing for example the probability of exceedancePe(i.e. the chance that the eventXis larger than a reference valueXrofX) and the probability of non-exceedancePn(i.e. the chance that the eventXis smaller than or equal to the reference valueXr, this is also calledcumulative probability). In this case there are only two possibilities: either there is exceedance or there is non-exceedance. This duality is the reason that the binomial distribution is applicable.
With the binomial distribution one can obtain aprediction interval. Such an interval also estimates the risk of failure, i.e. the chance that the predicted event still remains outside the confidence interval. The confidence or risk analysis may include thereturn periodT=1/Peas is done inhydrology.
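As a worked illustration of this binomial reasoning (the numbers are chosen only for the example): if an event has return period $T = 100$ years, so $P_e = 1/T = 0.01$ per year, the probability that it is exceeded at least once in $N = 50$ independent years is $R = 1 - (1 - P_e)^{N} = 1 - 0.99^{50} \approx 0.39$, i.e. roughly a 39% risk of failure over that horizon.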
A Bayesian approach can be used for fitting a modelP(x|θ){\displaystyle P(x|\theta )}having a prior distributionP(θ){\displaystyle P(\theta )}for the parameterθ{\displaystyle \theta }. When one has samplesX{\displaystyle X}that are independently drawn from the underlying distribution then one can derive the so-called posterior distributionP(θ|X){\displaystyle P(\theta |X)}. This posterior can be used to update the probability mass function for a new samplex{\displaystyle x}given the observationsX{\displaystyle X}, one obtains
Pθ(x|X):=∫dθP(x|θ)P(θ|X).{\displaystyle P_{\theta }(x|X):=\int d\theta \ P(x|\theta )\ P(\theta |X).}
The variance of the newly obtained probability mass function can also be determined. The variance for a Bayesian probability mass function can be defined as
σPθ(x|X)2:=∫dθ[P(x|θ)−Pθ(x|X)]2P(θ|X).{\displaystyle \sigma _{P_{\theta }(x|X)}^{2}:=\int d\theta \ \left[P(x|\theta )-P_{\theta }(x|X)\right]^{2}\ P(\theta |X).}
This expression for the variance can be substantially simplified (assuming independently drawn samples). Defining the "self probability mass function" as
Pθ(x|{X,x})=∫dθP(x|θ)P(θ|{X,x}),{\displaystyle P_{\theta }(x|\left\{X,x\right\})=\int d\theta \ P(x|\theta )\ P(\theta |\left\{X,x\right\}),}
one obtains for the variance[12]
σPθ(x|X)2=Pθ(x|X)[Pθ(x|{X,x})−Pθ(x|X)].{\displaystyle \sigma _{P_{\theta }(x|X)}^{2}=P_{\theta }(x|X)\left[P_{\theta }(x|\left\{X,x\right\})-P_{\theta }(x|X)\right].}
The expression for variance involves an additional fit that includes the samplex{\displaystyle x}of interest.
By ranking thegoodness of fitof various distributions one can get an impression of which distribution is acceptable and which is not.
From thecumulative distribution function(CDF) one can derive ahistogramand theprobability density function(PDF).
|
https://en.wikipedia.org/wiki/Probability_distribution_fitting
|
Thehistory of programming languagesspans from documentation of early mechanical computers to modern tools forsoftware development. Early programming languages were highly specialized, relying onmathematical notationand similarly obscuresyntax.[1]Throughout the 20th century, research incompilertheory led to the creation ofhigh-level programming languages, which use a more accessible syntax to communicate instructions.
The first high-level programming language was Plankalkül, created by Konrad Zuse between 1942 and 1945.[2] The first high-level language to have an associated compiler was created by Corrado Böhm in 1951, for his PhD thesis.[3] The first commercially available language was FORTRAN (FORmula TRANslation), first developed in 1954 (its first manual appeared in 1956) by a team led by John Backus at IBM.
During 1842–1849,Ada Lovelacetranslated the memoir of Italian mathematicianLuigi MenabreaaboutCharles Babbage's newest proposed machine: theAnalytical Engine; she supplemented the memoir with notes that specified in detail a method for calculatingBernoulli numberswith the engine, recognized by most of historians as the world's first published computer program.[4]
Jacquard Loomsand Charles Babbage'sDifference Engineboth were designed to utilizepunched cards,[5][6]which would describe the sequence of operations that their programmable machines should perform.
The first computercodeswere specialized for their applications: e.g.,Alonzo Churchwas able to express thelambda calculusin a formulaic way and theTuring machinewas an abstraction of the operation of a tape-marking machine.
In the 1940s, the first recognizably modern electrically powered computers were created. The limited speed andmemory capacityforced programmers to write hand-tunedassembly languageprograms. It was eventually realized that programming in assembly language required a great deal of intellectual effort.[citation needed]
An early proposal for ahigh-level programming languagewasPlankalkül, developed byKonrad Zusefor hisZ1 computerbetween 1942 and 1945 but not implemented at the time.[7]
The first functioning programming languages designed to communicate instructions to a computer were written in the early 1950s.John Mauchly'sShort Code, proposed in 1949, was one of the first high-level languages ever developed for anelectronic computer.[8]Unlikemachine code, Short Code statements representedmathematical expressionsin understandable form. However, the program had to beinterpretedinto machine code every time it ran, making the process much slower than running the equivalent machine code.
In the early 1950s,Alick GlenniedevelopedAutocode, possibly the first compiled programming language, at theUniversity of Manchester. In 1954, a second iteration of the language, known as the "Mark 1 Autocode", was developed for theMark 1byR. A. Brooker. Brooker, with the University of Manchester, also developed an autocode for theFerranti Mercuryin the 1950s. The version for theEDSAC2 was devised byDouglas HartreeofUniversity of Cambridge Mathematical Laboratoryin 1961. Known as EDSAC 2 Autocode, it was a straight development from Mercury Autocode adapted for local circumstances and was noted for itsobject codeoptimization and source-language diagnostics which were advanced for the time. A contemporary but separate thread of development,Atlas Autocodewas developed for theUniversity of ManchesterAtlas 1machine.
In 1954,FORTRANwas invented at IBM by a team led byJohn Backus; it was the first widely used high-level general purpose language to have a functional implementation, in contrast to only a design on paper.[9][10]When FORTRAN was first introduced, it was viewed with skepticism due to bugs, delays in development, and the comparative efficiency of "hand-coded" programs written in assembly.[11]However, in a hardware market that was rapidly evolving, the language eventually became known for its efficiency. It is still a popular language forhigh-performance computing[12]and is used for programs that benchmark and rank the world'sTOP500fastest supercomputers.[13]
Another early programming language was devised byGrace Hopperin the US, namedFLOW-MATIC. It was developed for theUNIVAC IatRemington Randduring the period from 1955 until 1959. Hopper found that businessdata processingcustomers were uncomfortable withmathematical notation, and in early 1955, she and her team wrote a specification for anEnglish languageprogramming language and implemented a prototype.[14]The FLOW-MATIC compiler became publicly available in early 1958 and was substantially complete in 1959.[15]Flow-Matic was a major influence in the design ofCOBOL, since only it and its direct descendantAIMACOwere in use at the time.[16]
Other languages still in use today includeLISP(1958), invented byJohn McCarthyandCOBOL(1959), created by the Short Range Committee. Another milestone in the late 1950s was the publication, by a committee of American and European computer scientists, of "a new language for algorithms"; theALGOL60 Report(the "ALGOrithmicLanguage"). This report consolidated many ideas circulating at the time and featured three key language innovations:
Another innovation, related to this, was in how the language was described:
ALGOL 60was particularly influential in the design of later languages, some of which soon became more popular. TheBurroughs large systemswere designed to be programmed in an extended subset of ALGOL.
ALGOL's key ideas were continued, producingALGOL 68:
ALGOL 68's many little-used language features (for example, concurrent and parallel blocks) and its complex system of syntactic shortcuts and automatic type coercions made it unpopular with implementers and gained it a reputation of beingdifficult.Niklaus Wirthactually walked out of the design committee to create the simplerPascallanguage.
Some notable languages that were developed in this period include:
The period from the late 1960s to the late 1970s brought a major flowering of programming languages. Most of the major languageparadigmsnow in use were invented in this period:[original research?]
Each of these languages spawned an entire family of descendants, and most modern languages count at least one of them in their ancestry.
The 1960s and 1970s also saw considerable debate over the merits of "structured programming", which essentially meant programming without the use ofgoto. A significant fraction of programmers believed that, even in languages that providegoto, it is badprogramming styleto use it except in rare circumstances. This debate was closely related to language design: some languages had nogoto, which forced the use of structured programming.
To provide even faster compile times, some languages were structured for "one-pass compilers" which expect subordinate routines to be defined first, as withPascal, where the main routine, or driver function, is the final section of the program listing.
Some notable languages that were developed in this period include:
The 1980s were years of relative consolidation inimperative languages. Rather than inventing new paradigms, all of these movements elaborated upon the ideas invented in the prior decade.C++combined object-oriented and systems programming. The United States government standardizedAda, a systems programming language intended for use by defense contractors. In Japan and elsewhere, vast sums were spent investigating so-calledfifth-generation programming languagesthat incorporated logic programming constructs. The functional languages community moved to standardize ML and Lisp. Research inMiranda, a functional language withlazy evaluation, began to take hold in this decade.
One important new trend in language design was an increased focus on programming for large-scale systems through the use ofmodules, or large-scale organizational units of code.Modula, Ada, and ML all developed notable module systems in the 1980s. Module systems were often wedded togeneric programmingconstructs: generics being, in essence, parametrized modules[citation needed](see alsoPolymorphism (computer science)).
Although major new paradigms for imperative programming languages did not appear, many researchers expanded on the ideas of prior languages and adapted them to new contexts. For example, the languages of theArgusand Emerald systems adapted object-oriented programming todistributed computingsystems.
The 1980s also brought advances in programming language implementation. Thereduced instruction set computer(RISC) movement incomputer architecturepostulated that hardware should be designed forcompilersrather than for human assembly programmers. Aided bycentral processing unit(CPU) speed improvements that enabled increasingly aggressive compiling methods, the RISC movement sparked greater interest in compiler technology for high-level languages.
Language technology continued along these lines well into the 1990s.
Some notable languages that were developed in this period include:
The rapid growth of the Internet in the mid-1990s was the next major historic event in programming languages. By opening up a radically new platform for computer systems, the Internet created an opportunity for new languages to be adopted. In particular, theJavaScriptprogramming language rose to popularity because of its early integration with the Netscape Navigator web browser. Various other scripting languages achieved widespread use in developing customized applications for web servers such as PHP. The 1990s saw no fundamental novelty inimperative languages, but much recombination and maturation of old ideas. This era began the spread offunctional languages. A big driving philosophy was programmer productivity. Manyrapid application development(RAD) languages emerged, which usually came with anintegrated development environment(IDE),garbage collection, and were descendants of older languages. All such languages wereobject-oriented. These includedObject Pascal, Objective Caml (renamedOCaml),Visual Basic, andJava. Java in particular received much attention.
More radical and innovative than the RAD languages were the newscripting languages. These did not directly descend from other languages and featured new syntaxes and more liberal incorporation of features. Many consider these scripting languages to be more productive than even the RAD languages, but often because of choices that make small programs simpler but large programs more difficult to write and maintain.[citation needed]Nevertheless, scripting languages came to be the most prominent ones used in connection with the Web.
Some programming languages included other languages in their distributions to save development time. For example, both Python and Ruby included Tcl to support GUI programming through libraries like Tkinter.
Some notable languages that were developed in this period include:
Programming language evolution continues, and more programming paradigms are used in production.
Some of the trends have included:
Big Tech companies introduced multiple new programming languages designed to serve their needs. For example:
Some notable languages developed during this period include:
Programming language evolution continues with the rise of new programming domains.
Many Big Tech companies continued introducing new programming languages designed to serve their needs and provide first-class support for their platforms. For example:
Some notable languages developed during this period include:[20][21]
Other new programming languages includeElm,Ballerina,Red,Crystal,V (Vlang),Reason.
The development of new programming languages continues, and some new languages appear with a focus on replacing current languages. These new languages try to provide the advantages of a known language like C++ (versatile and fast) while adding safety or reducing complexity. Other new languages try to offer the ease of use of Python while making performance a priority. In addition, the growth of machine learning and AI tools plays a big role in driving these languages' development: some visual languages focus on integrating AI tools, while other, textual languages focus on providing more suitable support for developing them.[22][23][24]
Some notable new programming languages include:
Some key people who helped develop programming languages:
|
https://en.wikipedia.org/wiki/History_of_programming_languages
|
The following is alist ofMicrosoft Windowscomponents.
This list is not all-inclusive.
|
https://en.wikipedia.org/wiki/List_of_Microsoft_Windows_components#Services
|
Das U-Boot(subtitled "the Universal Boot Loader" and often shortened toU-Boot; seeHistoryfor more about the name) is anopen-sourceboot loaderused inembedded devicesto perform various low-level hardware initialization tasks and boot the device's operating system kernel. It is available for a number ofcomputer architectures, includingM68000,ARM,Blackfin,MicroBlaze,AArch64,MIPS,Nios II,SuperH,PPC,Power ISA,RISC-V,LoongArchandx86.
U-Boot is both a first-stage and second-stage bootloader. It is loaded by the system's ROM (e.g. on-chip ROM of an ARM CPU) from a supported boot device, such as an SD card, SATA drive, NOR flash (e.g. usingSPIorI²C), or NAND flash. If there are size constraints, U-Boot may be split into two stages: the platform would load a small SPL (Secondary Program Loader), which is a stripped-down version of U-Boot, and the SPL would do some initial hardware configuration (e.g.DRAMinitialization using CPU cache as RAM) and load the larger, fully featured version of U-Boot.[3][4][5]Regardless of whether the SPL is used, U-Boot performs both first-stage (e.g., configuringmemory controller,SDRAM,mainboardand other I/O devices) and second-stage booting (e.g., loadingOS kerneland other related files from storage device).
U-Boot implements a subset of theUEFIspecification as defined in the Embedded Base Boot Requirements (EBBR) specification.[6]UEFI binaries likeGRUBor theLinuxkernel can be booted via the boot manager or from the command-line interface.
U-Boot runs acommand-line interfaceon a console or a serial port. Using the CLI, users can load and boot a kernel, possibly changing parameters from the default. There are also commands to read device information, read and write flash memory, download files (kernels, boot images, etc.) from the serial port or network, manipulatedevice trees, and work with environment variables (which can be written to persistent storage, and are used to control U-Boot behavior such as the default boot command and timeout before auto-booting, as well as hardware data such as the Ethernet MAC address).
Unlike PC bootloaders which obscure or automatically choose the memory locations of the kernel and other boot data, U-Boot requires its boot commands to explicitly specify the physical memory addresses as destinations for copying data (kernel, ramdisk, device tree, etc.) and for jumping to the kernel and as arguments for the kernel. Because U-Boot's commands are fairly low-level, it takes several steps to boot a kernel, but this also makes U-Boot more flexible than other bootloaders, since the same commands can be used for more general tasks. It's even possible to upgrade U-Boot using U-Boot, simply by reading the new bootloader from somewhere (local storage, or from the serial port or network) into memory, and writing that data to persistent storage where the bootloader belongs.
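As a concrete illustration of this address-based workflow, the hypothetical console session below loads a kernel and a device tree from an ext4 partition into RAM and then jumps to the kernel. It is a minimal sketch: the storage device, partition number, file names, and load addresses are placeholders that vary from board to board.

    => ext4load mmc 0:2 0x42000000 /boot/zImage        (copy the kernel image into RAM)
    => ext4load mmc 0:2 0x43000000 /boot/board.dtb     (copy the device tree into RAM)
    => setenv bootargs console=ttyS0,115200 root=/dev/mmcblk0p2 rw
    => bootz 0x42000000 - 0x43000000                   (boot; the "-" means no ramdisk)

The same load commands, pointed at a new U-Boot image and followed by a write command for the relevant storage device, are what make the in-place bootloader upgrade described above possible.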
U-Boot has support for USB, so it can use a USB keyboard to operate the console (in addition to input from the serial port), and it can access and boot from USB Mass Storage devices such as SD card readers.
U-Boot boots an operating system by reading the kernel and any other required data (e.g. device tree or ramdisk image) into memory, and then executing the kernel with the appropriate arguments.
U-Boot's commands are actually generalized commands which can be used to read or write any arbitrary data. Using these commands, data can be read from or written to any storage system that U-Boot supports, which include:
(Note: These are boot sources from which U-Boot is capable of loading data (e.g. a kernel or ramdisk image) into memory. U-Boot itself must be booted by the platform, and that must be done from a device that the platform's ROM is capable of booting from, which naturally depends on the platform.)
On some embedded device implementations, the CPU or SoC will locate and load the bootloader (such as Das U-Boot) from the boot partition (such asext4orFATfilesystems) directly.
U-Boot does not need to be able to read a filesystem in order for the kernel to use it as a root filesystem or initial ramdisk; U-Boot simply provides an appropriate parameter to the kernel, and/or copies the data to memory without understanding its contents.
However, U-Boot can also read from (and in some cases, write to) filesystems. This way, rather than requiring the data that U-Boot will load to be stored at a fixed location on the storage device, U-Boot can read the filesystem to search for and load the kernel, device tree, etc., by pathname.
U-Boot includes support for these filesystems:
Device treeis a data structure for describing hardware layout. Using Device tree, a vendor might be able to use a less modifiedmainlineU-Boot on otherwise special purpose hardware. As also adopted by the Linux kernel, Device tree is intended to ameliorate the situation in theembeddedindustry, where a vast number of product specificforks(of U-Boot and Linux) exist. The ability to run mainline software practically gives customers indemnity against lack of vendor updates.
The project started as an 8xx PowerPC bootloader called 8xxROM written by Magnus Damm.[7] In October 1999 Wolfgang Denk moved the project to SourceForge.net and renamed it to PPCBoot, because SF.net did not allow project names starting with digits.[7] Version 0.4.1 of PPCBoot was first publicly released on July 19, 2000.
In 2002 a previous version of the source code was briefly forked into a product called ARMBoot, but was merged back into the PPCBoot project shortly thereafter. On October 31, 2002, PPCBoot-2.0.0 was released. This marked the last release under the PPCBoot name, as it was renamed to reflect its ability to work on other architectures besides the PPC ISA.[8][9]
PPCBoot-2.0.0 became U-Boot-0.1.0 in November 2002, expanded to work on the x86 processor architecture. Additional architecture capabilities were added in the following months: MIPS32 in March 2003, MIPS64 in April, Nios II in October, ColdFire in December, and MicroBlaze in April 2004. The May 2004 release of U-Boot-1.1.2 worked on the products of 216 board manufacturers across the various architectures.[9]
The current nameDas U-Bootadds aGerman definite article, to create a bilingualpunon the classic 1981 German submarine filmDas Boot, which takes place on a World War II GermanU-boat. It isfree softwarereleased under the terms of theGNU General Public License. It can be built on an x86 PC for any of its intended architectures using a cross development GNUtoolchain, for example crosstool, the Embedded Linux Development Kit (ELDK) or OSELAS.Toolchain.
The importance of U-Boot in embedded Linux systems is quite succinctly stated in the bookBuilding Embedded Linux Systems, by Karim Yaghmour, whose text about U-Boot begins, "Though there are quite a few other bootloaders, 'Das U-Boot', the universal bootloader, is arguably the richest, most flexible, and most actively developed open source bootloader available."[10]
In 2025, multiple vulnerabilities discovered in 2024 were disclosed in U-Boot.[11] By abusing the filesystem support feature (ext4, SquashFS) of U-Boot through manually modified filesystem data structures, an attacker can cause an integer overflow, a stack overflow, or a heap overflow. As a result, an attacker can achieve arbitrary code execution and bypass the boot chain of trust. These issues are mitigated in version v2025.01-rc1.
|
https://en.wikipedia.org/wiki/Das_U-Boot
|
Data-centric securityis an approach to security that emphasizes the dependability of thedataitself rather than the security ofnetworks,servers, or applications. Data-centric security is evolving rapidly as enterprises increasingly rely on digital information torun their businessandbig dataprojects become mainstream.[1][2][3]It involves the separation of data anddigital rights managementthat assign encrypted files to pre-defined access control lists, ensuring access rights to critical and confidential data are aligned with documented business needs and job requirements that are attached to user identities.[4]
Data-centric security also allows organizations to overcome the disconnect between IT security technology and the objectives of business strategy by relating security services directly to the data they implicitly protect; a relationship that is often obscured by the presentation of security as an end in itself.[5]
Common processes in a data-centric security model include:[6]
From a technical point of view, information (data)-centric security relies on the implementation of the following:[7]
Data access control is the selective restriction of access to data. Accessing may mean viewing, editing, or using. Defining proper access controls requires mapping out the information: where it resides, how important it is, to whom it is important, and how sensitive the data is, and then designing appropriate controls.[8]
Encryptionis a proven data-centric technique to address the risk of data theft in smartphones, laptops, desktops and even servers, including the cloud. One limitation is that encryption is not always effective once a network intrusion has occurred and cybercriminals operate with stolen valid user credentials.[9]
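As a small illustration of encryption as a data-centric control, the Python sketch below uses the cryptography package's Fernet recipe (symmetric, authenticated encryption) to protect a record independently of where it is stored. The record contents are made up, and key management, the hard part in practice, is out of scope here.

    from cryptography.fernet import Fernet

    # In practice the key lives in a key-management service, not beside the data.
    key = Fernet.generate_key()
    f = Fernet(key)

    # The ciphertext can safely travel with the data wherever it is stored.
    token = f.encrypt(b"account=12345;balance=1000")
    print(f.decrypt(token))  # only holders of the key recover the plaintext

Because the protection is attached to the data itself rather than to a network perimeter, the record stays protected at rest and in transit; as noted above, it does not help once an attacker holds valid credentials that authorize decryption.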
Data Maskingis the process of hiding specific data within a database table or cell to ensure that data security is maintained and that sensitive information is not exposed to unauthorized personnel. This may include masking the data from users, developers, third-party and outsourcing vendors, etc.
Data masking can be achieved in multiple ways: by duplicating data to eliminate the subset of the data that needs to be hidden, or by obscuring the data dynamically as users perform requests.[10]
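A minimal sketch of the first (static) approach, in Python; the record layout and the card_number field are hypothetical:

    import copy

    def mask_record(record):
        """Return a copy of the record that is safe for non-production use."""
        masked = copy.deepcopy(record)
        # Keep only the last four digits of the card number.
        masked["card_number"] = "*" * 12 + masked["card_number"][-4:]
        # Replace the name entirely; test environments rarely need real names.
        masked["name"] = "CUSTOMER"
        return masked

    production_row = {"name": "Alice Example", "card_number": "4111111111111111"}
    print(mask_record(production_row))
    # {'name': 'CUSTOMER', 'card_number': '************1111'}

Dynamic masking applies the same kind of transformation on the fly, in the query path, instead of producing a masked copy.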
Monitoring all activity at the data layer is a key component of a data-centric security strategy. It provides visibility into the types of actions that users and tools have requested and been authorized to perform on specific data elements. Continuous monitoring at the data layer, combined with precise access control, can contribute significantly to the real-time detection of data breaches, limit the damage inflicted by a breach, and even stop the intrusion if proper controls are in place. A 2016 survey[11] shows that most organizations still do not assess database activity continuously and lack the capability to identify database breaches in a timely fashion.
Aprivacy-enhancing technology(PET) is a method of protecting data. PETs allow online users to protect the privacy of their personally identifiable information (PII) provided to and handled by services or applications. PETs use techniques to minimize possession of personal data without losing the functionality of an information system.
Cloud computingis an evolving paradigm with tremendous momentum, but its unique aspects exacerbate security and privacy challenges. Heterogeneity and diversity of cloud services and environments demand fine-grained access control policies and services that should be flexible enough to capture dynamic, context, or attribute-based access requirements and data protection.[12]Data-centric security measures can also help protect againstdata-leakageand life cycle management of information.[13]
|
https://en.wikipedia.org/wiki/Data-centric_security
|
This page is a glossary ofOperating systemsterminology.[1][2]
|
https://en.wikipedia.org/wiki/Glossary_of_operating_systems_terms
|
Apositioning systemis a system for determining thepositionof an object inspace.[1]Positioning system technologies exist ranging from interplanetary coverage with meter accuracy to workspace and laboratory coverage with sub-millimeter accuracy. A major subclass is made ofgeopositioningsystems, used for determining an object's position with respect to Earth, i.e., itsgeographical position; one of the most well-known and commonly used geopositioning systems is theGlobal Positioning System(GPS) and similarglobal navigation satellite systems(GNSS).
Interplanetary-radio communication systems not only communicate with spacecraft, but they are also used to determine their position.Radarcan track targets near the Earth, but spacecraft in deep space must have a workingtransponderon board to echo a radio signal back. Orientation information can be obtained usingstar trackers.
Global navigation satellite systems(GNSS) allow specialized radio receivers to determine their 3-D space position, as well as time, with an accuracy of 2–20 metres or tens of nanoseconds. Currently deployed systems use microwave signals that can only be received reliably outdoors and that cover most of Earth's surface, as well as near-Earth space.
The existing and planned systems are:
Networks of land-based positioning transmitters allow specializedradio receiversto determine their 2-D position on the surface of the Earth. They are generally less accurate than GNSS because their signals are not entirely restricted toline-of-sight propagation, and they have only regional coverage. However, they remain useful for special purposes and as a backup where their signals are more reliably received, including underground and indoors, and receivers can be built that consume very low battery power.LORANis an example of such a system.
Alocal positioning system(LPS) is a navigation system that provides location information in all weather, anywhere within the coverage of the network, where there is an unobstructedline of sightto three or more signalingbeaconsof which the exact position on Earth is known.[2][3][4][5]
Unlike GPS or other global navigation satellite systems, local positioning systems don't provide global coverage. Instead, they use beacons, which have a limited range, requiring the user to be near them. Beacons include cellular base stations, Wi-Fi and LiFi access points, and radio broadcast towers.
In the past, long-range LPS's have been used for navigation of ships and aircraft. Examples are theDecca Navigator SystemandLORAN.
Nowadays, local positioning systems are often used as complementary (and in some cases alternative) positioning technology to GPS, especially in areas where GPS does not reach or is weak, for example,inside buildings, orurban canyons. Local positioning using cellular andbroadcast towerscan be used on cell phones that do not have a GPS receiver. Even if the phone has a GPS receiver, battery life will be extended if cell tower location accuracy is sufficient.
They are also used in trackless amusement rides likePooh's Hunny HuntandMystic Manor.
Examples of existing systems include
Indoor positioning systems are optimized for use within individual rooms, buildings, or construction sites. They typically offer centimeter-accuracy. Some provide6-Dlocation and orientation information.
Examples of existing systems include
These are designed to cover only a restricted workspace, typically a few cubic meters, but can offer accuracy in the millimeter-range or better. They typically provide 6-D position and orientation. Example applications includevirtual realityenvironments, alignment tools forcomputer-assisted surgeryor radiology, and cinematography (motion capture,match moving).
Examples:Wii Remotewith Sensor Bar, Polhemus Tracker, Precision Motion Tracking Solutions InterSense.[6]
A high-performance positioning system is used in manufacturing processes to move an object (tool or part) smoothly and accurately in six degrees of freedom, along a desired path, at a desired orientation, with high acceleration, high deceleration, high velocity and low settling time. It is designed to quickly stop its motion and accurately place the moving object at its desired final position and orientation with minimal jitter.
Examples: high velocitymachine tools,laser scanning,wire bonding,printed circuit boardinspection,lab automationassaying,flight simulators
Multiple technologies exist to determine the position and orientation of an object or person in a room, building or in the world.
Time of flight systems determine the distance by measuring the time of propagation of pulsed signals between a transmitter and receiver. When the distances to at least three known locations have been measured, the unknown position can be determined using trilateration, as in the sketch below. The Global Positioning System is an example.
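A minimal 2-D version of that trilateration step in Python: subtracting the circle equations pairwise turns the problem into a small linear system. The anchor coordinates and distances below are made-up illustration values.

    import numpy as np

    def trilaterate(anchors, distances):
        """Solve for (x, y) given three anchor points and measured distances."""
        (x1, y1), (x2, y2), (x3, y3) = anchors
        d1, d2, d3 = distances
        # Subtracting circle i from circle 1 eliminates the quadratic terms:
        # 2(xi-x1)x + 2(yi-y1)y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
        A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                      [2 * (x3 - x1), 2 * (y3 - y1)]])
        b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                      d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
        return np.linalg.solve(A, b)

    anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
    true_position = np.array([3.0, 4.0])
    distances = [np.linalg.norm(true_position - np.array(a)) for a in anchors]
    print(trilaterate(anchors, distances))  # ~[3. 4.]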
Optical trackers, such aslaser ranging trackerssuffer fromline of sightproblems and their performance is adversely affected by ambient light and infrared radiation. On the other hand, they do not suffer from distortion effects in the presence of metals and can have high update rates because of the speed of light.[7]
Ultrasonic trackershave a more limited range because of the loss of energy with the distance traveled. Also they are sensitive to ultrasonic ambient noise and have a low update rate. But the main advantage is that they do not need line of sight.
Systems using radio waves such as the Global navigation satellite system do not suffer from ambient light, but still need line of sight.
A spatial scan system uses (optical) beacons and sensors. Two categories can be distinguished:
By aiming the sensor at the beacon the angle between them can be measured. Withtriangulationthe position of the object can be determined.
The main advantage of inertial sensing is that it does not require an external reference. Instead it measures rotation with a gyroscope or position with an accelerometer with respect to a known starting position and orientation. Because these systems measure relative positions instead of absolute positions they can suffer from accumulated errors and therefore are subject to drift, as the toy example below illustrates. A periodic re-calibration of the system will provide more accuracy.
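The Python sketch below shows the mechanism: integrating a rate sensor with even a tiny constant bias produces a heading error that grows linearly with time, which is exactly what periodic re-calibration corrects. The sample period and bias values are hypothetical.

    dt = 0.01        # sample period in seconds (hypothetical)
    bias = 0.001     # constant gyroscope bias in rad/s (hypothetical)
    true_rate = 0.0  # the platform is not actually rotating

    heading = 0.0
    for _ in range(60 * 100):             # one minute of samples
        measured_rate = true_rate + bias  # every sample is slightly wrong
        heading += measured_rate * dt     # dead reckoning integrates the error
    print(f"accumulated heading error after 60 s: {heading:.3f} rad")  # ~0.060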
This type of tracking system uses mechanical linkages between the reference and the target. Two types of linkages have been used. One is an assembly of mechanical parts that can each rotate, providing the user with multiple rotation capabilities. The orientation of the linkages is computed from the various linkage angles measured with incremental encoders or potentiometers. Other types of mechanical linkages are wires that are rolled in coils. A spring system ensures that the wires are tensed in order to measure the distance accurately. The degrees of freedom sensed by mechanical linkage trackers are dependent upon the constitution of the tracker's mechanical structure. While six degrees of freedom are most often provided, typically only a limited range of motions is possible because of the kinematics of the joints and the length of each link. Also, the weight and the deformation of the structure increase with the distance of the target from the reference and impose a limit on the working volume.[8]
Phase differencesystems measure the shift in phase of an incoming signal from an emitter on a moving target compared to the phase of an incoming signal from a reference emitter. With this the relative motion of the emitter with respect to the receiver can be calculated.
Like inertial sensing systems, phase-difference systems can suffer from accumulated errors and therefore are subject to drift, but because the phase can be measured continuously they are able to generate high data rates.Omega (navigation system)is an example.
Direct field sensing systems use a known field to derive orientation or position: A simplecompassuses theEarth's magnetic fieldto know its orientation in two directions.[8]Aninclinometeruses theearth gravitational fieldto know its orientation in the remaining third direction. The field used for positioning does not need to originate from nature, however. A system of threeelectromagnetsplaced perpendicular to each other can define a spatial reference. On the receiver, three sensors measure the components of the field's flux received as a consequence ofmagnetic coupling. Based on these measures, the system determines the position and orientation of the receiver with respect to the emitters' reference.
Optical positioning systems are based onopticscomponents, such as intotal stations.[9]
Magnetic positioningis an IPS (Indoor positioning system) solution that takes advantage of the magnetic field anomalies typical of indoor settings by using them as distinctive place recognition signatures. The first citation of positioning based on magnetic anomaly can be traced back to military applications in 1970.[10]The use of magnetic field anomalies for indoor positioning was first claimed in 1999,[11]with later publications related to robotics in the early 2000s.[12][13]
Most recent applications can employ magnetic sensor data from asmartphoneused to wirelessly locate objects or people inside a building.[14]
Because every technology has its pros and cons, most systems use more than one technology. A system based on relative position changes like the inertial system needs periodic calibration against a system with absolute position measurement. Systems combining two or more technologies are called hybrid positioning systems.[16]
Hybrid positioning systems are systems for finding the location of a mobile device using several different positioning technologies. Usually GPS (Global Positioning System) is one major component of such systems, combined with cell tower signals, wireless internet signals,Bluetoothsensors,IP addressesand network environment data.[17]
These systems are specifically designed to overcome the limitations of GPS, which is very exact in open areas but works poorly indoors or between tall buildings (the urban canyon effect). By comparison, cell tower signals are not hindered by buildings or bad weather, but usually provide less precise positioning. Wi-Fi positioning systems may give very exact positioning in urban areas with high Wi-Fi density, but they depend on a comprehensive database of Wi-Fi access points.
Hybrid positioning systems are increasingly being explored for certain civilian and commerciallocation-based servicesandlocation-based media, which need to work well in urban areas in order to be commercially and practically viable.
Early works in this area include the Place Lab project, which started in 2003 and went inactive in 2006. Later methods let smartphones combine the accuracy of GPS with the low power consumption of cell-ID transition point finding.[18]In 2022, the satellite-free positioning systemSuperGPSwith higher-resolution than GPS using existing telecommunications networks was demonstrated.[19][20]
|
https://en.wikipedia.org/wiki/Positioning_technology
|
Inalgebra, asimplicial commutative ringis acommutative monoidin thecategoryofsimplicial abelian groups, or, equivalently, asimplicial objectin thecategory of commutative rings. IfAis a simplicial commutative ring, then it can be shown thatπ0A{\displaystyle \pi _{0}A}is aringandπiA{\displaystyle \pi _{i}A}aremodulesover that ring (in fact,π∗A{\displaystyle \pi _{*}A}is agraded ringoverπ0A{\displaystyle \pi _{0}A}.)
Atopology-counterpart of this notion is acommutative ring spectrum.
LetAbe a simplicial commutative ring. Then the ring structure ofAgivesπ∗A=⊕i≥0πiA{\displaystyle \pi _{*}A=\oplus _{i\geq 0}\pi _{i}A}the structure of a graded-commutative graded ring as follows.
By the Dold–Kan correspondence, π∗A{\displaystyle \pi _{*}A} is the homology of the chain complex corresponding to A; in particular, it is a graded abelian group. Next, to multiply two elements, writing S1{\displaystyle S^{1}} for the simplicial circle, let x:(S1)∧i→A,y:(S1)∧j→A{\displaystyle x:(S^{1})^{\wedge i}\to A,\,\,y:(S^{1})^{\wedge j}\to A} be two maps. Then the composition

(S1)∧i×(S1)∧j→A×A→A,{\displaystyle (S^{1})^{\wedge i}\times (S^{1})^{\wedge j}\to A\times A\to A,}

the second map being the multiplication of A, induces (S1)∧i∧(S1)∧j→A{\displaystyle (S^{1})^{\wedge i}\wedge (S^{1})^{\wedge j}\to A}. This in turn gives an element in πi+jA{\displaystyle \pi _{i+j}A}. We have thus defined the graded multiplication πiA×πjA→πi+jA{\displaystyle \pi _{i}A\times \pi _{j}A\to \pi _{i+j}A}. It is associative because the smash product is. It is graded-commutative (i.e., xy=(−1)|x||y|yx{\displaystyle xy=(-1)^{|x||y|}yx}) since the involution S1∧S1→S1∧S1{\displaystyle S^{1}\wedge S^{1}\to S^{1}\wedge S^{1}} introduces a minus sign.
IfMis asimplicial moduleoverA(that is,Mis asimplicial abelian groupwith an action ofA), then the similar argument shows thatπ∗M{\displaystyle \pi _{*}M}has the structure of a graded module overπ∗A{\displaystyle \pi _{*}A}(cf.Module spectrum).
By definition, the category of affinederived schemesis theopposite categoryof the category of simplicial commutative rings; an object corresponding toAwill be denoted bySpecA{\displaystyle \operatorname {Spec} A}.
|
https://en.wikipedia.org/wiki/Simplicial_commutative_ring
|
Semi-structured data[1]is a form ofstructured datathat does not obey the tabular structure of data models associated withrelational databasesor other forms ofdata tables, but nonetheless containstagsor other markers to separate semantic elements and enforce hierarchies of records and fields within the data. Therefore, it is also known asself-describingstructure.
In semi-structured data, the entities belonging to the same class may have differentattributeseven though they are grouped together, and the attributes' order is not important.
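For instance, two JSON records of the same class (shown here via Python) can carry different attributes in different orders, and a consumer probes for fields instead of relying on a fixed schema; the field names are illustrative only.

    import json

    people = [
        {"name": "Ada", "email": "ada@example.org", "phones": ["555-0100"]},
        {"employer": "Acme", "name": "Grace"},  # no email, one extra attribute
    ]
    for person in people:
        print(person["name"], person.get("email", "<no email>"))
    print(json.dumps(people))  # the tags (keys) make the data self-describing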
Semi-structured data have become increasingly common since the advent of the Internet, where full-text documents and databases are not the only forms of data anymore and different applications need a medium for exchanging information. In object-oriented databases, one often finds semi-structured data.
XML,[2]other markup languages,email, andEDIare all forms of semi-structured data.OEM(Object Exchange Model)[3]was created prior to XML as a means of self-describing a data structure. XML has been popularized by web services that are developed utilizingSOAPprinciples.
Some types of data described here as "semi-structured", especially XML, suffer from the impression that they are incapable of structural rigor at the same functional level as Relational Tables and Rows. Indeed, the view of XML as inherently semi-structured (previously, it was referred to as "unstructured") has handicapped its use for a widening range of data-centric applications. Even documents, normally thought of as the epitome of semi-structure, can be designed with virtually the same rigor asdatabase schema, enforced by theXML schemaand processed by both commercial and custom software programs without reducing their usability by human readers.
In view of this fact, XML might be referred to as having "flexible structure" capable of human-centric flow and hierarchy as well as highly rigorous element structure and data typing.
The concept of XML as "human-readable", however, can only be taken so far. Some implementations/dialects of XML, such as the XML representation of the contents of a Microsoft Word document, as implemented in Office 2007 and later versions, utilize dozens or even hundreds of different kinds of tags that reflect a particular problem domain - in Word's case, formatting at the character and paragraph and document level, definitions of styles, inclusion of citations, etc. - which are nested within each other in complex ways. Understanding even a portion of such an XML document by reading it, let alone catching errors in its structure, is impossible without a very deep prior understanding of the specific XML implementation, along with assistance by software that understands the XML schema that has been employed. Such text is not "human-understandable" any more than a book written in Swahili (which uses the Latin alphabet) would be to an American or Western European who does not know a word of that language: the tags are symbols that are meaningless to a person unfamiliar with the domain.
JSON, or JavaScript Object Notation, is an open standard format that uses human-readable text to transmit data objects. JSON has been popularized by web services developed utilizing REST principles.
Databases such asMongoDBandCouchbasestore data natively in JSON format, leveraging the pros of semi-structured data architecture.
Thesemi-structured modelis adatabase modelwhere there is no separation between thedataand theschema, and the amount of structure used depends on the purpose.
The advantages of this model are the following:
The primary trade-off being made in using a semi-structureddatabase modelis that queries cannot be made as efficiently as in a more constrained structure, such as in therelational model. Typically the records in a semi-structured database are stored with unique IDs that are referenced with pointers to their location on disk. This makes navigational or path-based queries quite efficient, but for doing searches over many records (as is typical inSQL), it is not as efficient because it has to seek around the disk following pointers.
TheObject Exchange Model(OEM) is one standard to express semi-structured data, another way isXML.
|
https://en.wikipedia.org/wiki/Semi-structured_model
|
Eventual consistencyis aconsistency modelused indistributed computingto achievehigh availability. Put simply: if no new updates are made to a given data item,eventuallyall accesses to that item will return the last updated value.[1]Eventual consistency, also calledoptimistic replication,[2]is widely deployed in distributed systems and has origins in early mobile computing projects.[3]A system that has achieved eventual consistency is often said to haveconverged, or achievedreplica convergence.[4]Eventual consistency is a weak guarantee – most stronger models, likelinearizability, are trivially eventually consistent.
Eventually-consistent services are often classified as providing BASE semantics (basically-available,soft-state,eventual consistency), in contrast to traditionalACID (atomicity, consistency, isolation, durability).[5][6]In chemistry, abaseis the opposite of anacid, which helps in remembering the acronym.[7]According to the same resource, these are the rough definitions of each term in BASE:
Eventual consistency faces criticism[8]for adding complexity to distributed software applications. This complexity arises because eventual consistency provides only alivenessguarantee (ensuring reads eventually return the same value) withoutsafetyguarantees—allowing any intermediate value before convergence. Application developers find this challenging because it differs from single-threaded programming, where variables reliably return their assigned values immediately. With weak consistency guarantees, developers must carefully consider these limitations, as incorrect assumptions about consistency levels can lead to subtle bugs that only surface during network failures or high concurrency.[9]
In order to ensure replica convergence, a system must reconcile differences between multiple copies of distributed data. This consists of two parts:
The most appropriate approach to reconciliation depends on the application. A widespread approach is "last writer wins".[1]Another is to invoke a user-specified conflict handler.[4]Timestampsandvector clocksare often used to detect concurrency between updates. Some people use "first writer wins" in situations where "last writer wins" is unacceptable.[11]
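A minimal sketch of "last writer wins" in Python: each replica keeps a (timestamp, value) pair per key, and merging keeps the newer write. Real systems also need a deterministic tie-breaker (such as a replica identifier) for equal timestamps; the keys and values here are illustrative.

    def merge_lww(a, b):
        """Merge two replica states; each maps key -> (timestamp, value)."""
        merged = dict(a)
        for key, (ts, value) in b.items():
            if key not in merged or ts > merged[key][0]:
                merged[key] = (ts, value)
        return merged

    replica1 = {"cart": (5, ["book"])}
    replica2 = {"cart": (9, ["book", "pen"])}
    print(merge_lww(replica1, replica2))  # {'cart': (9, ['book', 'pen'])}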
Reconciliation of concurrent writes must occur sometime before the next read, and can be scheduled at different instants:[3][12]
Whereas eventual consistency is only alivenessguarantee (updates will be observed eventually),strong eventual consistency(SEC) adds thesafetyguarantee that any two nodes that have received the same (unordered) set of updates will be in the same state. If, furthermore, the system ismonotonic, the application will never suffer rollbacks. A common approach to ensure SEC isconflict-free replicated data types.[13]
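As a small illustration of such a data type, the Python sketch below implements a grow-only counter (G-Counter). Each replica increments only its own slot, and merging takes an element-wise maximum, so any two replicas that have received the same set of updates reach the same state regardless of delivery order.

    class GCounter:
        def __init__(self, replica_id, n_replicas):
            self.id = replica_id
            self.counts = [0] * n_replicas

        def increment(self):
            self.counts[self.id] += 1  # a replica only ever bumps its own slot

        def merge(self, other):
            # Element-wise max is commutative, associative, and idempotent,
            # which is what guarantees convergence.
            self.counts = [max(x, y) for x, y in zip(self.counts, other.counts)]

        def value(self):
            return sum(self.counts)

    a, b = GCounter(0, 2), GCounter(1, 2)
    a.increment(); b.increment(); b.increment()
    a.merge(b); b.merge(a)
    print(a.value(), b.value())  # 3 3 -- both replicas have converged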
|
https://en.wikipedia.org/wiki/Eventual_consistency
|
The environmental movement (sometimes referred to as the ecology movement) is a social movement that aims to protect the natural world from harmful environmental practices in order to create sustainable living.[1] In its recognition of humanity as a participant in (not an enemy of) ecosystems, the movement is centered on ecology, health, and human rights.
The environmental movement is an international movement, represented by a range of environmental organizations, from enterprises tograssrootsand varies from country to country. Due to its large membership, varying and strong beliefs, and occasionally speculative nature, the environmental movement is not always united in its goals. At its broadest, the movement includes private citizens, professionals,religious devotees, politicians, scientists,nonprofit organizations, and individual advocates like former Wisconsin SenatorGaylord NelsonandRachel Carsonin the 20th century.
Since the 1970s, public awareness,environmental sciences,ecology, and technology have advanced to include modern focus points likeozonedepletion,climate change,acid rain,mutation breeding,genetically modified cropsandgenetically modified livestock.
Theclimate movementcan be regarded as a sub-type of the environmental movement.
The environmental movement contains a number of subcommunities that have developed with different approaches and philosophies in different parts of the world. Notably, the early environmental movement experienced a deep tension between the philosophies of conservation and broader environmental protection.[2] In recent decades the rise to prominence of environmental justice, indigenous rights and key environmental crises like the climate crisis has led to the development of other environmentalist identities.
The environmental movement is broad in scope and can include any topic related to the environment, conservation, and biology, as well as the preservation of landscapes, flora, and fauna for a variety of purposes and uses. Examples include:
Genetically modified plantsandanimalsare said by some environmentalists to be inherently bad because they are unnatural. Others point out the possible benefits of GM crops such aswater conservationthrough corn modified to be less "thirsty" and decreased pesticide use through insect-resistant crops. They also point out that somegenetically modified livestockhave accelerated growth which means there are shorter production cycles which again results in a more efficient use of feed.[5]
Besides genetically modified crops and livestock, synthetic biology is also on the rise, and environmentalists argue that these organisms also carry risks if they were ever to end up in nature, since, unlike genetically modified organisms, synthetic biology can even use base pairs that do not exist in nature.[6]
Theanti-nuclear movementopposes the use of variousnuclear technologies. The initial anti-nuclear objective wasnuclear disarmamentand later the focus began to shift to other issues, mainly opposition to the use ofnuclear power. There have been many large anti-nucleardemonstrationsandprotests. Thepro-nuclear movementconsists of people, including former opponents of nuclear energy, who calculate that the threat to humanity from climate change is far worse than any risk associated with nuclear energy.
By the mid-1970s anti-nuclear activism had moved beyond local protests and politics to gain a wider appeal and influence. Although it lacked a single coordinating organization the anti-nuclear movement's efforts gained a great deal of attention, especially in theUnited Kingdomand United States.[7]In the aftermath of theThree Mile Island accidentin 1979, many mass demonstrations took place. The largest one was held in New York City in September 1979 and involved 200,000 people.[8][9][10]
Tree sittingis a form of activism in which the protester sits in a tree in an attempt to stop the removal of a tree or to impede the demolition of an area with the longest and most famous tree-sitter beingJulia Butterfly Hill, who spent 738 days in a California Redwood, saving a three-acre tract of forest.[11]Also notable is theYellow Finch tree sit, which was a 932-day blockade of theMountain Valley Pipelinefrom 2018 to 2021.[12][13]
Sit-inscan be used to encourage social change, such as the Greensboro sit-ins, a series of protests in 1960 to stop racial segregation, but can also be used in ecoactivism, as in theDakota Access PipelineProtest.[14]
Notable environmental protests and campaigns include:
The origins of the environmental movement in Europe and North America lay in response to increasing levels ofsmokepollutionin theatmosphereduring theIndustrial Revolution. The emergence of great factories and the concomitant immense growth incoal consumptiongave rise to an unprecedented level ofair pollutionin industrial centers; after 1900 the large volume of industrialchemicaldischarges added to the growing load of untreated human waste.[17]
Conservative critics of the movement characterize it as radical and misguided. This is especially true of critics of the United States Endangered Species Act, which has come under scrutiny lately,[when?] and the Clean Air Act, which they say conflict with private property rights, corporate profits and the nation's overall economic growth. Critics also challenge the scientific evidence for global warming. They argue that the environmental movement has diverted attention from more pressing issues.[18] Western environmental activists have also been criticized for performative activism, eco-colonialism, and enacting white savior tropes, especially celebrities who promote conservation in developing countries.[19][20]
When residents living near proposed developments organize opposition they are sometimes called"NIMBYS", short for "not in my back yard".[21]
Mithun Roy Chowdhury, President, Save Nature & Wildlife (SNW),Bangladesh, insisted that the people of Bangladesh raise their voice againstTipaimukh Dam, being constructed by theGovernment of India. He said the Tipaimukh Dam project will be another "death trap for Bangladesh like theFarakka Barrage," which would lead to anenvironmental disasterfor 50 million people in theMeghna Riverbasin. He said that this project will startdesertificationin Bangladesh.[22][23][24][25]
Bangladesh was ranked the most polluted country in the world due to defective automobiles, particularly diesel-powered vehicles, and hazardous gases from industry. The air is a hazard to Bangladesh's human health, ecology, and economic progress.[26]
China's environmental movement is characterized by the rise of environmental NGOs, policy advocacy, spontaneous alliances, and protests that often only occur at the local level.[27]Environmental protests in China are increasingly expanding their scope of concerns, calling for broader participation "in the name of the public."[28]
The Chinese have realized the ability of riots and protests to achieve results, which has led to an increase in disputes in China of 30% since 2005, to more than 50,000 events. Protests cover topics such as environmental issues, land loss, income, and political issues. They have also grown in size, from about 10 people or fewer in the mid-1990s to 52 people per incident in 2004. China has more relaxed environmental laws than other countries in Asia, so many polluting factories have relocated to China, causing pollution in China.
Water pollution,water scarcity,soil pollution,soil degradation, anddesertificationare issues currently in discussion in China. Thegroundwater tableof theNorth China Plainis dropping by 1.5 m (5 ft) per year. This groundwater table occurs in the region of China that produces 40% of the country's grain.[29][30]The Center for Legal Assistance to Pollution Victimsworks to confront legal issues associated with environmental justice by hearing court cases that expose the narratives of victims of environmental pollution.[31][page needed]As China continues domestic economic reforms and integration into global markets, there emerge new linkages between China's domesticenvironmental degradationand global ecological crisis.[32]
Comparing the experience of China, South Korea, Japan and Taiwan reveals that the impact of environmental activism is heavily modified by domestic political context, particularly the level of integration of mass-based protests and policy advocacy NGOs. Hinted by the history of neighboring Japan and South Korea, the possible convergence of NGOs and anti-pollution protests will have significant implications for Chinese environmental politics in the coming years.[33]
Environmental and public health is an ongoing struggle within India. The first seed of an environmental movement in India was the foundation in 1964 ofDasholi Gram Swarajya Sangh, a labour cooperative started byChandi Prasad Bhatt. It was inaugurated bySucheta Kriplaniand founded on land donated by Shyma Devi. This initiative was eventually followed up with theChipko movementstarting in 1974.[34][35]
The most severe single event underpinning the movement was theBhopal gas leakageon 3 December 1984.[36]40 tons ofmethyl isocyanatewas released, immediately killing 2,259 people and ultimately affecting 700,000 citizens.
India has a national campaign against Coca-Cola and Pepsi Cola plants due to their practices of drawing groundwater and contaminating fields with sludge. The movement is characterized by local struggles against intensive aquaculture farms. The most influential part of the environmental movement in India is the anti-dam movement. Dam creation has been thought of as a way for India to catch up with the West by connecting to the power grid with giant dams, coal or oil-powered plants, or nuclear plants. Jhola Aandolan, a mass movement, is fighting against the use of polyethylene carry bags and promoting cloth/jute/paper carry bags to protect the environment and nature. Activists in the Indian environmental movement consider global warming, rising sea levels, and retreating glaciers (which decrease the amount of water flowing into streams) to be the biggest challenges for them to face in the early twenty-first century.[29] The Eco Revolution movement was started by the Eco Needs Foundation[37] in 2008 in Aurangabad, Maharashtra; it seeks the participation of children, youth, researchers, and spiritual and political leaders to organise awareness programmes and conferences. Child activists against air pollution in India and greenhouse gas emissions by India include Licypriya Kangujam. From the mid to late 2010s a coalition of urban and Indigenous communities came together to protect Aarey, a forest located in the suburbs of Mumbai.[38] Farming and indigenous communities have also opposed pollution and clearing caused by mining in states such as Goa, Odisha, and Chhattisgarh.[39]
Environmental activism in the Arab world, including the Middle East and North Africa (MENA), mobilizes around issues such as industrial pollution and insistence that the government provide irrigation.[40] The League of Arab States has one specialized sub-committee, of 12 standing specialized subcommittees in the Foreign Affairs Ministerial Committees, which deals with environmental issues. While countries in the League of Arab States have demonstrated an interest in environmental issues on paper, some environmental activists have doubts about the level of commitment to environmental issues; being a part of the world community may have obliged these countries to portray concern for the environment. The initial level of environmental awareness may be the creation of a ministry of the environment; the year of establishment of a ministry is also indicative of the level of engagement. Saudi Arabia was the first to establish environmental law, in 1992, followed by Egypt in 1994. Somalia is the only country without environmental law. In 2010 the Environmental Performance Index listed Algeria as the top Arab country, at 42 of 163; Morocco was at 52 and Syria at 56. The Environmental Performance Index measures the ability of a country to actively manage and protect its environment and the health of its citizens. A weighted index is created by giving 50% weight to the environmental health objective (health) and 50% to ecosystem vitality (ecosystem); values range from 0–100. No Arab countries were in the top quartile, and 7 countries were in the lowest quartile.[41]
South Korea and Taiwan experienced similar growth in industrialization from 1965 to 1990 with few environmental controls.[42]South Korea'sHan RiverandNakdong Riverwere so polluted by unchecked dumping of industrial waste that they were close to being classified as biologically dead. Taiwan's formula for balanced growth was to prevent industrial concentration and encourage manufacturers to set up in the countryside. This led to 20% of the farmland being polluted by industrial waste and 30% of the rice grown on the island was contaminated with heavy metals. Both countries had spontaneous environmental movements drawing participants from different classes. Their demands were linked with issues of employment, occupational health, and agricultural crisis. They were also quite militant; the people learned that protesting can bring results. The polluting factories were forced to make immediate improvements to the conditions or pay compensation to victims. Some were even forced to shut down or move locations. The people were able to force the government to come out with new restrictive rules on toxins, industrial waste, and air pollution. All of these new regulations caused the migration of those polluting industries from Taiwan and South Korea to China and other countries in Southeast Asia with more relaxed environmental laws.
The modern conservation movement was manifested in the forests ofIndia, with the practical application of scientific conservation principles. Theconservation ethicthat began to evolve included three core principles: human activity damaged theenvironment, there was acivic dutyto maintain the environment for future generations, and scientific, empirically based methods should be applied to ensure this duty was carried out.James Ranald Martinwas prominent in promoting this ideology, publishing manymedico-topographicalreports that demonstrated the scale of damage wrought through large-scale deforestation and desiccation, and lobbying extensively for theinstitutionalizationof forest conservation activities inBritish Indiathrough the establishment ofForest Departments.[43]
TheMadrasBoard of Revenue started local conservation efforts in 1842, headed byAlexander Gibson, a professionalbotanistwho systematically adopted a forest conservation programme based on scientific principles. This was the first case of state management of forests in the world.[44]Eventually, the government underGovernor-GeneralLord Dalhousieintroduced the first permanent and large-scale forest conservation programme in the world in 1855, a model that soon spread toother colonies, as well as theUnited States. In 1860, the Department banned the use ofshifting cultivation.[45]Hugh Cleghorn's 1861 manual,The forests and gardens of South India, became the definitive work on the subject and was widely used by forest assistants in the subcontinent.[46][47]
Dietrich Brandis joined the British service in 1856 as superintendent of the teak forests of Pegu division in eastern Burma. During that time Burma's teak forests were controlled by militant Karen tribals. He introduced the "taungya" system,[48] in which Karen villagers provided labour for clearing, planting, and weeding teak plantations. He also formulated new forest legislation, helped establish research and training institutions, and founded the Imperial Forestry School at Dehradun.[49][50]
In 2022, a court in South Africa confirmed the constitutional right of the country's citizens to an environment that is not harmful to their health, which includes the right to clean air. The case is referred to as the "Deadly Air" case. The affected area includes one of South Africa's largest cities, Ekurhuleni, and a large portion of the Mpumalanga province.[51]
After the International Environmental Conference in Stockholm in 1972, Latin American officials returned with high hopes of growth and protection of the region's fairly untouched natural resources. Governments spent millions of dollars and created departments and pollution standards. However, the outcomes have not always been what officials had initially hoped. Activists blame this on growing urban populations and industrial growth. Many Latin American countries have had a large inflow of immigrants living in substandard housing. Enforcement of the pollution standards is lax and penalties are minimal; in Venezuela, the largest penalty for violating an environmental law is a 50,000-bolivar fine ($3,400) and three days in jail. In the 1970s and 1980s, many Latin American countries were transitioning from military dictatorships to democratic governments.[52]
In 1992, Brazil came under scrutiny with theUnited Nations Conference on Environment and Developmentin Rio de Janeiro. Brazil has a history of little environmental awareness. It has the highestbiodiversityin the world and also the highest amount ofhabitat destruction. One-third of the world's forests lie in Brazil. It is home to the largest river,The Amazon, and the largest rainforest, theAmazon Rainforest. People have raised funds to create state parks and increase the consciousness of people who have destroyed forests and polluted waterways. From 1973 to the 1990s, and then in the 2000s, indigenous communities and rubber tappers also carried out blockades that protected much rainforest.[53]It is home to several organizations that have fronted the environmental movement. The Blue Wave Foundation was created in 1989 and has partnered with advertising companies to promote national education campaigns to keep Brazil's beaches clean. Funatura was created in 1986 and is a wildlife sanctuary program.Pro-Natura Internationalis a private environmental organization created in 1986.[54]
From the late 2000s onwards community resistance saw the formerly pro-mining southeastern state of Minas Gerais cancel a number of projects that threatened to destroy forests. In northern Brazil’s Pará state the Movimento dos Trabalhadores Rurais Sem Terra (Landless Workers Movement) and others campaigned and took part in occupations and blockades against the environmentally harmful Carajás iron ore mine.[55]
The movement in theUnited Statesbegan in the late 19th century, out of concerns for protecting the natural resources of the West, with individuals such asJohn MuirandHenry David Thoreaumaking key philosophical contributions. Thoreau was interested in peoples' relationship with nature and studied this by living close to nature in a simple life. He published his experiences in the 1854 bookWalden, which argues that people should become intimately close with nature. Muir came to believe in nature's inherent right, especially after spending time hiking inYosemite Valleyand studying both the ecology and geology. He successfully lobbied congress to formYosemite National Parkand went on to set up theSierra Clubin 1892.[56]The conservationist principles as well as the belief in an inherent right of nature became the bedrock of modern environmentalism.
Rooted in the conservation movement of the early 20th century, the contemporary environmental movement can be traced back to Rachel Carson's 1962 book Silent Spring, Murray Bookchin's 1962 book Our Synthetic Environment, and Paul R. Ehrlich's 1968 The Population Bomb. American environmentalists campaigned against nuclear weapons and nuclear power in the 1960s and 1970s, acid rain in the 1980s, ozone depletion and deforestation in the 1990s, and most recently climate change and global warming.[53]
The United States passed many pieces of environmental legislation in the 1970s, such as the Clean Water Act,[57] the Clean Air Act, the Endangered Species Act, and the National Environmental Policy Act. These remain the foundations for current environmental standards.
In the 1990s, the anti-environmental 'Wise Use' movement emerged in the United States.[58]
The EU's environmental policy was formally founded by a European Council declaration, and the first five-year environment programme was adopted.[59] The polluter pays principle was well established in environmental economics before it was included in the Single European Act.[60] Following the 1973 oil crisis, the Social Democratic Party of Germany (SPD) passed groundbreaking laws on energy efficiency.[61]
During the 1930s, the Nazis had elements that were supportive of animal rights, zoos and wildlife,[62] and took several measures to ensure their protection.[63] In 1933 the government created a stringent animal-protection law, and in 1934 Das Reichsjagdgesetz (The Reich Hunting Law) was enacted, which limited hunting.[64][65] Several Nazis were environmentalists (notably Rudolf Hess), and species protection and animal welfare were significant issues in the regime.[63] In 1935, the regime enacted the "Reich Nature Protection Act" (Reichsnaturschutzgesetz). The concept of the Dauerwald (best translated as the "perpetual forest"), which included concepts such as forest management and protection, was promoted, and efforts were also made to curb air pollution.[66]
During the Spanish Revolution in 1936, anarchist-controlled territories undertook several environmental reforms, which were possibly the largest in the world at the time. Daniel Guerin notes that anarchist territories would diversify crops, extend irrigation, initiate reforestation, start tree nurseries and help to establish naturist communities.[67] Once a link was discovered between air pollution and tuberculosis, the CNT shut down several metal factories.[68]
The late 19th century saw the formation of the first wildlife conservation societies. The zoologist Alfred Newton published a series of investigations into the Desirability of establishing a 'Close-time' for the preservation of indigenous animals between 1872 and 1903. His advocacy for legislation to protect animals from hunting during the mating season led to the formation of the Plumage League (later the Royal Society for the Protection of Birds) in 1889.[69] The society acted as a protest group campaigning against the use of great crested grebe and kittiwake skins and feathers in fur clothing,[70][better source needed] and campaigned for greater protection for the indigenous birds of the island.[71] The society attracted growing support from the suburban middle classes.[72] The wider campaign for bird protection had earlier influenced the passage of the Sea Birds Preservation Act in 1869, the first nature protection law in the world.[73][74] The society also attracted support from many other influential figures. By 1900, public support for the organisation had grown, and it had over 25,000 members. The garden city movement incorporated many environmental concerns into its urban planning manifesto; the Socialist League and The Clarion movement also began to advocate measures of nature conservation.[75]
For most of the century from 1850 to 1950, however, the primary environmental cause was the mitigation of air pollution.
Systematic and general efforts on behalf of the environment only began in the late 19th century; they grew out of the amenity movement in Britain in the 1870s, which was a reaction to industrialization, the growth of cities, and worsening air and water pollution. Starting with the formation of the Commons Preservation Society in 1865, the movement championed rural preservation against the encroachments of industrialisation. Robert Hunter, solicitor for the society, worked with Hardwicke Rawnsley, Octavia Hill, and John Ruskin to lead a successful campaign to prevent the construction of railways to carry slate from the quarries, which would have ruined the unspoilt valleys of Newlands and Ennerdale. This success led to the formation of the Lake District Defence Society (later to become The Friends of the Lake District).[76][77]
In 1893 Hill, Hunter and Rawnsley agreed to set up a national body to coordinate environmental conservation efforts across the country; the "National Trust for Places of Historic Interest or Natural Beauty" was formally inaugurated in 1894.[78] The organisation obtained secure footing through the 1907 National Trust Bill, which gave the trust the status of a statutory corporation;[79] the bill was passed in August 1907.[80]
Early interest in the environment was a feature of the Romantic movement in the early 19th century. The poet William Wordsworth had travelled extensively in England's Lake District and wrote that it is a "sort of national property in which every man has a right and interest who has an eye to perceive and a heart to enjoy".[81][82]
An early "Back-to-Nature" movement, which anticipated the romantic ideal of modern environmentalism, was advocated by intellectuals such asJohn Ruskin,William Morris,George Bernard ShawandEdward Carpenter, who were all againstconsumerism,pollutionand other activities that were harmful to the natural world.[83]The movement was a reaction to the urban conditions of the industrial towns, where sanitation was awful, pollution levels intolerable and housing terribly cramped.[84]Idealists championed the rural life as a mythicalutopiaand advocated a return to it. John Ruskin argued that people should return to a "small piece of English ground, beautiful, peaceful, and fruitful. We will have no steam engines upon it ... we will have plenty of flowers and vegetables ... we will have some music and poetry; the children will learn to dance to it and sing it."[85]Ruskin moved out of London and together with his friends started to think about thepost-industrial society. The predictions Ruskin made for the post-coalutopia coincided withforecastingpublished by the economistWilliam Stanley Jevons.[86]Practical ventures in the establishment of small cooperative farms were even attempted and old rural traditions, without the "taint of manufacture or the canker of artificiality", were enthusiastically revived, including theMorris danceand themaypole.[87]
The Coal Smoke Abatement Society (now Environmental Protection UK) was formed in 1898, making it one of the oldest environmental NGOs. It was founded by the artist Sir William Blake Richmond, frustrated with the pall cast by coal smoke. Although there were earlier pieces of legislation, the Public Health Act 1875 required all furnaces and fireplaces to consume their own smoke. It also provided for sanctions against factories that emitted large amounts of black smoke. This law's provisions were extended in 1926 with the Smoke Abatement Act to include other emissions, such as soot, ash, and gritty particles, and to empower local authorities to impose their own regulations.
It was only under the impetus of the Great Smog of 1952 in London, which almost brought the city to a standstill and may have caused upwards of 6,000 deaths, that the Clean Air Act 1956 was passed and airborne pollution in the city was first tackled. Financial incentives were offered to householders to replace open coal fires with alternatives (such as installing gas fires) or, for those who preferred, to burn coke instead (a byproduct of town gas production), which produces minimal smoke. 'Smoke control areas' were introduced in some towns and cities, in which only smokeless fuels could be burnt, and power stations were relocated away from cities. The act was an important impetus for modern environmentalism and caused a rethinking of the dangers of environmental degradation to people's quality of life.[88]
Beginning as a conservation movement, the environmental movement in Australia was the first in the world to become a political movement. Australia is home to the United Tasmania Group, the world's first green party.[89][90]
The environmental movement is represented by a wide range of groups sometimes called non-governmental organizations (NGOs). These exist on local, national, and international scales. Environmental NGOs vary widely in political views and in the extent to which they seek to influence environmental policy in Australia and elsewhere. The environmental movement today consists of both large national groups and many smaller local groups with local concerns.[91] There are also 5,000 Landcare groups in the six states and two mainland territories. Other environmental issues within the scope of the movement include forest protection, climate change and opposition to nuclear activities.[92][93]
|
https://en.wikipedia.org/wiki/Environmental_movement
|
In mathematics, specifically group theory, Cauchy's theorem states that if G is a finite group and p is a prime number dividing the order of G (the number of elements in G), then G contains an element of order p. That is, there is x in G such that p is the smallest positive integer with x^p = e, where e is the identity element of G. It is named after Augustin-Louis Cauchy, who discovered it in 1845.[1][2]
The theorem is a partial converse to Lagrange's theorem, which states that the order of any subgroup of a finite group G divides the order of G. In general, not every divisor of |G| arises as the order of a subgroup of G.[3] Cauchy's theorem states that for any prime divisor p of the order of G, there is a subgroup of G whose order is p, namely the cyclic group generated by the element in Cauchy's theorem.
Cauchy's theorem is generalized by Sylow's first theorem, which implies that if p^n is the maximal power of p dividing the order of G, then G has a subgroup of order p^n (and, using the fact that a p-group is solvable, one can show that G has subgroups of order p^r for any r less than or equal to n).
Many texts prove the theorem with the use of strong induction and the class equation, though considerably less machinery is required to prove the theorem in the abelian case. One can also invoke group actions for the proof.[4]
Cauchy's theorem: Let G be a finite group and p be a prime. If p divides the order of G, then G has an element of order p.
We first prove the special case where G is abelian, and then the general case; both proofs are by induction on n = |G|, and have as starting case n = p, which is trivial because any non-identity element now has order p. Suppose first that G is abelian. Take any non-identity element a, and let H be the cyclic group it generates. If p divides |H|, then a^{|H|/p} is an element of order p. If p does not divide |H|, then it divides the order [G : H] of the quotient group G/H, which therefore contains an element of order p by the inductive hypothesis. That element is a class xH for some x in G, and if m is the order of x in G, then x^m = e in G gives (xH)^m = eH in G/H, so p divides m; as before, x^{m/p} is now an element of order p in G, completing the proof for the abelian case.
In the general case, let Z be the center of G, which is an abelian subgroup. If p divides |Z|, then Z contains an element of order p by the case of abelian groups, and this element works for G as well. So we may assume that p does not divide the order of Z. Since p does divide |G|, and G is the disjoint union of Z and of the conjugacy classes of non-central elements, there exists a conjugacy class of a non-central element a whose size is not divisible by p. But the class equation shows that this size is [G : C_G(a)], so p divides the order of the centralizer C_G(a) of a in G, which is a proper subgroup because a is not central. This subgroup contains an element of order p by the inductive hypothesis, and we are done.
This proof uses the fact that for any action of a (cyclic) group of prime order p, the only possible orbit sizes are 1 and p, which is immediate from the orbit-stabilizer theorem.
The set that our cyclic group shall act on is the set

X = \{ (x_1, x_2, \ldots, x_p) \in G^p : x_1 x_2 \cdots x_p = e \}

of p-tuples of elements of G whose product (in order) gives the identity. Such a p-tuple is uniquely determined by all its components except the last one, as the last element must be the inverse of the product of the preceding elements. One also sees that those p − 1 elements can be chosen freely, so X has |G|^{p−1} elements, which is divisible by p.
Now from the fact that in a group if ab = e then ba = e, it follows that any cyclic permutation of the components of an element of X again gives an element of X. Therefore one can define an action of the cyclic group C_p of order p on X by cyclic permutations of components; in other words, a chosen generator of C_p sends (x_1, x_2, \ldots, x_p) to (x_2, \ldots, x_p, x_1).
As remarked, orbits in X under this action either have size 1 or size p. The former happens precisely for those tuples (x, x, \ldots, x) for which x^p = e. Counting the elements of X by orbits and reducing modulo p, one sees that the number of elements satisfying x^p = e is divisible by p. But x = e is one such element, so there must be at least p − 1 other solutions for x, and these solutions are elements of order p. This completes the proof.
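As an illustrative check (our own Python sketch, not part of the original article), the counting argument can be verified directly for the small case G = S_3 and p = 3: the set X has |G|^{p−1} elements, and the number of solutions of x^p = e is divisible by p.

from itertools import product, permutations

def compose(f, g):
    # Composition of permutations given as tuples: (f o g)(i) = f[g[i]].
    return tuple(f[g[i]] for i in range(len(g)))

G = list(permutations(range(3)))   # the symmetric group S_3, |G| = 6
e = tuple(range(3))                # identity permutation
p = 3                              # a prime dividing |G|

# X: p-tuples whose ordered product is the identity; the first p - 1
# components are free, the last is forced, so |X| = |G|^(p-1).
X = [t for t in product(G, repeat=p)
     if compose(compose(t[0], t[1]), t[2]) == e]
assert len(X) == len(G) ** (p - 1)

# Fixed points of the cyclic shift are the constant tuples (x, x, x),
# i.e. the solutions of x^p = e; their number must be divisible by p.
solutions = [x for x in G if compose(compose(x, x), x) == e]
print(len(solutions))              # 3: the identity and the two 3-cycles
assert len(solutions) % p == 0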
Cauchy's theorem implies a rough classification of all elementary abelian groups (groups whose non-identity elements all have equal, finite order). If G is such a group and x ∈ G has order p, then p must be prime, since otherwise Cauchy's theorem applied to the (finite) subgroup generated by x produces an element of order less than p. Moreover, every finite subgroup of G has order a power of p (including G itself, if it is finite). This argument applies equally to p-groups, where every element's order is a power of p (but not necessarily every order is the same).
One may use the abelian case of Cauchy's Theorem in an inductive proof[5]of the first of Sylow's theorems, similar to the first proof above, although there are also proofs that avoid doing this special case separately.
|
https://en.wikipedia.org/wiki/Cauchy%27s_theorem_(group_theory)
|
In algebra, an operad algebra is an "algebra" over an operad. It is a generalization of an associative algebra over a commutative ring R, with an operad replacing R.
Given an operad O (say, a symmetric sequence in a symmetric monoidal ∞-category C), an algebra over an operad, or O-algebra for short, is, roughly, a left module over O with multiplications parametrized by O.
If O is a topological operad, then one can say an algebra over an operad is an O-monoid object in C. If C is symmetric monoidal, this recovers the usual definition.
Let C be a symmetric monoidal ∞-category with monoidal structure distributive over colimits. If f : O → O′ is a map of operads and, moreover, if f is a homotopy equivalence, then the ∞-category of algebras over O in C is equivalent to the ∞-category of algebras over O′ in C.[1]
|
https://en.wikipedia.org/wiki/Algebra_over_an_operad
|
A heliograph (from Ancient Greek ἥλιος (hḗlios) 'sun' and γράφειν (gráphein) 'to write') is a solar telegraph[1] system that signals by flashes of sunlight (generally using Morse code from the 1840s) reflected by a mirror. The flashes are produced by momentarily pivoting the mirror, or by interrupting the beam with a shutter.[2] The heliograph was a simple but effective instrument for instantaneous optical communication over long distances during the late 19th and early 20th centuries.[2] Its main uses were military, surveying and forest protection work. Heliographs were standard issue in the British and Royal Australian armies until the 1960s, and were used by the Pakistani army as late as 1975.[3]
There were many heliograph types. Most heliographs were variants of the British Army Mance Mark V version (Fig. 1). It used a flat[4] round mirror with a small unsilvered spot in the centre. The sender aligned the heliograph to the target by looking at the reflected target in the mirror and moving their head until the target was hidden by the unsilvered spot. Keeping their head still, they then adjusted the aiming rod so its cross wires bisected the target.[5] They then turned up the sighting vane, which covered the cross wires with a diagram of a cross, and aligned the mirror with the tangent and elevation screws so that the small shadow that was the reflection of the unsilvered spot hole fell on the cross target.[5] This indicated that the sunbeam was pointing at the target.
The flashes were produced by a keying mechanism that tilted the mirror up a few degrees at the push of a lever at the back of the instrument. If the Sun was in front of the sender, its rays were reflected directly from this mirror to the receiving station. If the Sun was behind the sender, the sighting rod was replaced by a second mirror, used to capture the sunlight from the main mirror and reflect it to the receiving station.[6][7] The U.S. Army Signal Corps heliograph used a flat square mirror that did not tilt; this type produced flashes with a shutter mounted on a second tripod (Fig 4).[6]
The heliograph had certain advantages. It allowed long-distance communication without a fixed infrastructure, though it could also be linked to make a fixed network extending for hundreds of miles, as in the fort-to-fort network used for the Geronimo military campaign. It was very portable, did not require any power source, and was relatively secure, since it was invisible to those not near the axis of operation and the beam was very narrow, spreading only 50 ft (15 m) per 1 mi (1.6 km) of range. However, anyone in the beam with the correct knowledge could intercept signals without being detected.[3][9] In the Second Boer War (1899–1902) in South Africa, where both sides used heliographs, tubes were sometimes used to decrease the dispersion of the beam.[3] In some other circumstances, though, a narrow beam made it difficult to stay aligned with a moving target, as when communicating from shore to a moving ship, so the British issued a dispersing lens to broaden the heliograph beam from its natural diameter of 0.5 degrees to 15 degrees.[10]
The range of a heliograph depends on the opacity of the air and the effective collecting area of the mirrors. Heliograph mirrors ranged from 1.5 to 12 in (38 to 305 mm) or more. Stations at higher altitudes benefit from thinner, clearer air, and are required in any event for great ranges, to clear the curvature of the Earth. A good approximation for ranges of 20 to 50 mi (32 to 80 km) is that the flash of a circular mirror is visible to the naked eye at a distance of 10 mi (16 km) for each inch of mirror diameter,[11] and farther when seen with a telescope. The world record distance was established by a detachment of U.S. Army signal sergeants by the inter-operation of stations in North America on Mount Ellen (Utah) and Mount Uncompahgre (Colorado), 183 mi (295 km) apart, on 17 September 1894, with Army Signal Corps heliographs carrying mirrors only 8 inches (20 cm) on a side.[12]
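As a worked example of this rule of thumb (a rough approximation only, not an exact law; the function name below is ours, for illustration):

def flash_range_miles(mirror_diameter_inches):
    # Rule of thumb from the text: roughly 10 miles of naked-eye range
    # per inch of circular mirror diameter, for ranges of about 20-50 mi.
    return 10.0 * mirror_diameter_inches

print(flash_range_miles(5))   # a 5 in mirror: roughly 50 mi in clear air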
The German professor Carl Friedrich Gauss (1777–1855), of the University of Göttingen, developed and used a predecessor of the heliograph (the heliotrope) in 1821.[2][13] His device directed a controlled beam of sunlight to a distant station to be used as a marker for geodetic survey work, and was suggested as a means of telegraphic communications.[14] This is the first reliably documented heliographic device,[15] despite much speculation about possible ancient incidents of sun-flash signalling, and the documented existence of other forms of ancient optical telegraphy.
For example, one author in 1919 chose to "hazard the theory"[16] that the Italian mainland signals from the capital of Rome, which the ancient Roman emperor Tiberius (42 B.C.–A.D. 37, reigned A.D. 14 to 37) watched for from his imperial retreat on the island of Capri,[17] were mirror flashes, but admitted that "there are no references in ancient writings to the use of signaling by mirrors", and that the documented means of ancient long-range visual telecommunications was by beacon fires and beacon smoke, not mirrors.
Similarly, the story that a shield was used as a heliograph at the famous ancient Battle of Marathon between the Greeks and Persians in 490 B.C. is a modern myth originating in the 1800s.[18] The ancient historian Herodotus never mentioned any flash.[19] What Herodotus did write was that someone was accused of having arranged to "hold up a shield as a signal".[20] Suspicion grew in the later 1900s that the flash theory was implausible.[21] The conclusion after testing the theory was: "Nobody flashed a shield at the Battle of Marathon".[22]
In a letter dated 3 June 1778, John Norris, High Sheriff of Buckinghamshire, England, notes: "Did this day heliograph intelligence from Dr [Benjamin] Franklin in Paris to Wycombe".[23] However, there is little evidence that "heliograph" here is anything other than a misspelling of "holograph". The term "heliograph" for solar telegraphy did not enter the English language until the 1870s; even the word "telegraphy" was not coined until the 1790s.
Henry Christopher Mance (1840–1926), of the British Government's Persian Gulf Telegraph Department, developed the first widely accepted heliograph about 1869,[2][24][25] while stationed at Karachi (now in modern Pakistan) in the then Bombay Presidency of British India. Mance was familiar with heliotropes from their earlier use in the mapping project of the Great Trigonometrical Survey of India (conducted 1802–1871).[12] The Mance heliograph was operated easily by one man, and since it weighed about 7 lb (3.2 kg), the operator could readily carry the device and its supporting tripod. The British Army tested the heliograph in India at a range of 35 mi (56 km) with favorable results.[26] During the Jowaki Afridi expedition sent by the British-Indian government in 1877, the heliograph was first tested in war.[27][28]
The simple and effective instrument that Mance invented was to be an important part of military communications for more than 60 years. The usefulness of heliographs was limited to daytime hours with strong sunlight, but they were the most powerful type of visual signalling device known. In pre-radio times heliography was often the only means of communication that could span ranges of as much as 100 mi (160 km) with a lightweight portable instrument.[12]
In the United States military, by mid-1878, Colonel Nelson A. Miles had established a line of heliographs connecting the far-flung military outposts of Fort Keogh and Fort Custer in the northern Montana Territory, a distance of 140 mi (230 km).[29][30][31] In 1886, now a general, Miles (1839–1925) set up a network of 27 heliograph stations in the Arizona and New Mexico territories of the old Southwest during the extended campaign and hunt for the Apache chief and guerrilla warfare leader Geronimo (1829–1909).[32] In 1890, Major W.J. Volkmar of the U.S. Army demonstrated in the Arizona and New Mexico territories the possibility of communicating over a heliograph network aggregating 2,000 mi (3,200 km) in length.[33] The network of communication begun by General Miles in 1886, and continued by Lieutenant W. A. Glassford, was perfected in 1889 at ranges of 85, 88, 95 and 125 mi (137, 142, 153 and 201 km) over a rugged and broken country that was the stronghold of the Apache, Comanche and other hostile native tribes.[12]
By 1887, heliographs in use included not only the British Mance and Begbie heliographs, but also the American Grugan, Garner and Pursell heliographs. The Grugan and Pursell heliographs used shutters, and the others used movable mirrors operated by a finger key. The Mance, Grugan and Pursell heliographs used two tripods, and the others one. The signals could be either momentary flashes or momentary obscurations.[34] In 1888, the U.S. Army Signal Corps reviewed all of these devices, as well as the Finley Helio-Telegraph,[34] and, finding none completely suitable, developed its own instrument, the U.S. Army Signal Corps heliograph, a two-tripod, shutter-based machine of 13 7/8 lb (6.3 kg) total weight, and ordered 100, for a total cost of $4,205.[35] By 1893, the number of heliographs manufactured for the American Army Signal Corps was 133.[36]
The heyday of the heliograph was probably the Second Boer War (1899–1902) in South Africa, where it was much used by both the British and the Boers.[2][3] The terrain and climate, as well as the nature of the campaign, made heliography a logical choice. For night communications, the British used some large signal lamps, brought inland on railroad cars and equipped with leaf-type shutters for keying a beam of light into dots and dashes. During the early stages of the war, British Army garrisons were besieged at Kimberley, Ladysmith, and Mafeking. With land wire telegraph lines cut, the only contact with the outside world was via light-beam communication: helio by day, and signal lamps at night.[12]
In 1909, the use of heliography for forest protection was introduced by the United States Forestry Service in the western states. By 1920, such use was widespread in the US and beginning in the neighboring Dominion of Canada to the north, and the heliograph was regarded as "next to the telephone, the most useful communication device that is at present available for forest-protection services".[6] D.P. Godwin of the U.S. Forestry Service invented a very portable (4.5 lb [2.0 kg]) heliograph of the single-tripod, shutter-plus-mirror type for forestry use.[6]
Immediately prior to the outbreak of World War I (1914–1918), the mounted cavalry regiments of the Russian Imperial Army were still being trained in heliograph communications to augment the efficiency of their scouting and reporting roles.[37] Following the two Russian Revolutions of 1917, the revolutionary Bolshevik/Communist units of the Red Army, during the subsequent Russian Civil War of 1918–1922, made use of a series of heliograph stations to disseminate intelligence efficiently. This continued even a decade later, in operations against the counter-revolutionary basmachi rebel movements in Central Asia's Turkestan region in 1926.[38]
During World War II (1939–1945), Union of South Africa and Royal Australian military forces used the heliograph, alongside the defending British military, while fighting Nazi German and Fascist Italian forces along the southern coast of the Mediterranean Sea in Libya and western Egypt during the desert North African campaign in 1940, 1941 and 1942.[2]
The heliograph remained standard equipment for military signallers in the Royal Australian and British armies until the 1940s, where it was considered a "low probability of intercept" form of communication. The Canadian Army was the last major military force to have the heliograph as an issue item, and by the time the mirror instruments were retired they were seldom used for signalling.[12] However, as recently as the 1980s, heliographs were used by insurgent Afghan mujahedeen forces following the Soviet invasion of Afghanistan.[2] Signal mirrors are still included in survival kits for emergency signaling to search and rescue aircraft.[2]
Most heliographs of the 19th and 20th centuries were completely manual.[6] The steps of aligning the heliograph on the target, co-aligning the reflected sunbeam with the heliograph, maintaining the sunbeam alignment as the sun moved, transcribing the message into flashes, modulating the sunbeam into those flashes, detecting the flashes at the receiving end, and transcribing the flashes into the message were all done manually.[6] One notable exception: many French heliographs used clockwork heliostats to automatically compensate for the sun's motion. By 1884, all active units of the "Mangin apparatus" (a dual-mode French Army military field optical telegraph that could use either lantern or sunlight) were equipped with clockwork heliostats.[39] The Mangin apparatus with heliostat was still in service in 1917.[40][41][42] Proposals to automate both the modulation of the sunbeam (by clockwork) and the detection (by electrical selenium photodetectors, or photographic means) date back to at least 1882.[43] In 1961, the United States Air Force was working on a space heliograph to signal between satellites.[44]
In May 2012, "Solar Beacon" robotic mirrors designed at theUniversity of California at Berkeleywere mounted on the twin towers of theGolden Gate Bridgeat the entrance toSan Francisco Bay, and a web site set up[45]where the public could schedule times for the mirrors to signal with sun-flashes, entering the time and their latitude, longitude and altitude.[46]The solar beacons were later moved to Sather Tower at the U.C. – Berkeley campus.[47][48]By June 2012, the public could specify a "custom show" of up to 32 "on" or "off" periods of 4 seconds each, permitting the transmission of a few characters of Morse Code.[49]The designer described the Solar Beacon as a "heliostat", not a "heliograph".[46]
The first digitally controlled heliograph was designed and built in 2015.[50][51]It was a semi-finalist in the Broadcom MASTERS competition.[52]
|
https://en.wikipedia.org/wiki/Heliograph
|
TheBlue Brain Projectwas a Swiss brain research initiative that aimed to create adigital reconstructionof the mouse brain. The project was founded in May 2005 by the Brain Mind Institute ofÉcole Polytechnique Fédérale de Lausanne(EPFL) in Switzerland. The project ended in December 2024. Its mission was to use biologically-detailed digital reconstructions andsimulations of the mammalian brainto identify the fundamental principles of brain structure and function.
The project was headed by the founding directorHenry Markram—who also launched the EuropeanHuman Brain Project—and was co-directed by Felix Schürmann, Adriana Salvatore andSean Hill. Using aBlue Genesupercomputerrunning Michael Hines'sNEURON, the simulation involved a biologically realistic model ofneurons[1][2][3]and an empirically reconstructed modelconnectome.
There were a number of collaborations, including theCajal Blue Brain, which is coordinated by theSupercomputing and Visualization Center of Madrid(CeSViMa), and others run by universities and independent laboratories.
In 2006, the project made its first model of aneocortical columnwith simplified neurons.[4]In November 2007, it completed an initial model of the rat neocortical column. This marked the end of the first phase, delivering a data-driven process for creating, validating, and researching the neocortical column.[5][4][6]
Neocortical columns are considered by some researchers to be the smallest functional units of the neocortex,[7][8] and they are thought to be responsible for higher functions such as conscious thought. In humans, each column is about 2 mm (0.079 in) in length, has a diameter of 0.5 mm (0.020 in) and contains about 60,000 neurons. Rat neocortical columns are very similar in structure but contain only 10,000 neurons and 10^8 synapses.
In 2009, Henry Markram claimed that a "detailed, functional artificial human brain can be built within the next 10 years".[9] He conceived the Human Brain Project, to which the Blue Brain Project contributed,[4] and which in 2013 received European Union funding of up to $1.3 billion.[10]
In 2015, the project simulated part of a rat brain with 30,000 neurons.[11]Also in 2015, scientists atÉcole Polytechnique Fédérale de Lausanne(EPFL) developed a quantitative model of the previously unknown relationship between the neurons and theastrocytes. This model describes the energy management of the brain through the function of the neuro-glial vascular unit (NGV). The additional layer of neuron andglial cellsis being added to Blue Brain Project models to improve functionality of the system.[12]
In 2017, Blue Brain Project discovered thatneural cliquesconnected to one another in up to eleven dimensions. The project's director suggested that the difficulty of understanding the brain is partly because the mathematics usually applied for studyingneural networkscannot detect that many dimensions. The Blue Brain Project was able to model these networks usingalgebraic topology.[13]
In 2018, Blue Brain Project released its first digital 3D brain cell atlas[14]which, according toScienceDaily, is like "going from hand-drawn maps to Google Earth", providing information about major cell types, numbers, and positions in 737 regions of the brain.[15]
In 2019, Idan Segev, one of the computational neuroscientists working on the Blue Brain Project, gave a talk titled "Brain in the computer: what did I learn from simulating the brain". In his talk, he mentioned that the whole cortex for the mouse brain was complete and that virtual EEG experiments would begin soon. He also mentioned that the model had become too heavy for the supercomputers they were using at the time, and that they were consequently exploring methods in which every neuron could be represented as an artificial neural network (see citation for details).[16]
In 2022, scientists at the Blue Brain Project used algebraic topology to create an algorithm, Topological Neuronal Synthesis, that generates a large number of unique cells using only a few examples, synthesizing millions of unique neuronal morphologies. This allows them to replicate both healthy and diseased states of the brain. In a paper Kenari et al. were able to digitally synthesize dendritic morphologies from the mouse brain using this algorithm. They mapped entire brain regions from just a few reference cells. Since it is open source, this will enable the modelling of brain diseases and eventually, the algorithm could lead to digital twins of brains.[17]
The Blue Brain Project developed a number of software tools to reconstruct and to simulate the mouse brain. All software tools mentioned below are open source software and available to everyone on GitHub.[18][19][20][21][22][23]
Blue Brain Nexus[24][25][26]is a data integration platform which uses aknowledge graphto enable users to search, deposit, and organise data. It stands on theFAIR dataprinciples to provide flexible data management solutions beyond neuroscience studies.
BluePyOpt[27]is a tool that is used to build electrical models of single neurons. For this, it usesevolutionary algorithmsto constrain the parameters to experimental electrophysiological data. Attempts to reconstruct single neurons using BluePyOpt are reported by Rosanna Migliore,[28]and Stefano Masori.[29]
CoreNEURON[30]is a supplemental tool toNEURON, which allows large scale simulation by boosting memory usage and computational speed.
NeuroMorphoVis[31]is a visualisation tool for morphologies of neurons.
SONATA[32] is a joint effort between the Blue Brain Project and the Allen Institute for Brain Science to develop a standard data format, enabling a working environment across multiple platforms with greater computational memory and efficiency.
The project was funded primarily by theSwiss governmentand theFuture and Emerging Technologies(FET) Flagship grant from theEuropean Commission,[33]and secondarily by grants and donations from private individuals. The EPFL bought the Blue Gene computer at a reduced cost because it was still a prototype and IBM was interested in exploring how applications would perform on the machine. BBP was viewed as a validation of theBlue Genesupercomputer concept.[34]
Although the Blue Brain Project is often associated with the Human Brain Project (HBP), it is important to distinguish between the two. While the Blue Brain Project was a key participant in the HBP, much of the criticism regarding targets and management issues actually pertains to the Human Brain Project rather than to the Blue Brain Project itself.[35][36]
Voices raised as early as September 2014 highlighted concerns over the trajectory of the Human Brain Project, noting challenges in meeting its high-level goals and questioning its organizational structure and the project's key promoter, Professor Henry Markram.[37][38]In 2016, the HBP underwent a restructuring with resources originally earmarked for brain simulation redistributed to support a wider array of neuroscience research groups. Since then, scientists and engineers from the Blue Brain Project have contributed to various aspects of the HBP, including the Neuroinformatics, EBRAINS, Neurorobotics, and High-Performance Computing Platforms.[39]This distinction is important because some of the criticism directed at the initial incarnation of HBP may have been misattributed to the Blue Brain Project due to their shared leadership and early involvement in the initiative.
The Cajal Blue Brain Project is coordinated by theTechnical University of Madridled byJavier de Felipeand uses the facilities of theSupercomputing and Visualization Center of Madridand its supercomputerMagerit.[40]TheCajal Institutealso participates in this collaboration. The main lines of research currently being pursued atCajal Blue Braininclude neurological experimentation and computer simulations.[41]Nanotechnology, in the form of a newly designed brain microscope, plays an important role in its research plans.[42]
Noah Hutton created the documentary film In Silico over a 10-year period. The film was released in April 2021.[43] It covers the "shifting goals and landmarks"[44] of the Blue Brain Project as well as the surrounding drama: "In the end, this isn't about science. It's about the universals of power, greed, ego, and fame."[45][46]
|
https://en.wikipedia.org/wiki/Blue_brain
|
Smoothing splinesare function estimates,f^(x){\displaystyle {\hat {f}}(x)}, obtained from a set of noisy observationsyi{\displaystyle y_{i}}of the targetf(xi){\displaystyle f(x_{i})}, in order to balance a measure ofgoodness of fitoff^(xi){\displaystyle {\hat {f}}(x_{i})}toyi{\displaystyle y_{i}}with a derivative based measure of the smoothness off^(x){\displaystyle {\hat {f}}(x)}. They provide a means for smoothing noisyxi,yi{\displaystyle x_{i},y_{i}}data. The most familiar example is the cubic smoothing spline, but there are many other possibilities, including for the case wherex{\displaystyle x}is a vector quantity.
Let \{x_i, Y_i : i = 1, \dots, n\} be a set of observations, modeled by the relation Y_i = f(x_i) + \epsilon_i, where the \epsilon_i are independent, zero-mean random variables. The cubic smoothing spline estimate \hat{f} of the function f is defined to be the unique minimizer, in the Sobolev space W_2^2 on a compact interval, of[1][2]

\sum_{i=1}^{n} \{ Y_i - \hat{f}(x_i) \}^2 + \lambda \int \hat{f}''(x)^2 \, dx,

where \lambda \ge 0 is a smoothing parameter controlling the trade-off between fidelity to the data and roughness of the estimate.
It is useful to think of fitting a smoothing spline in two steps: first, estimate the vector \hat{m} = (\hat{f}(x_1), \ldots, \hat{f}(x_n))^T of fitted values at the observation points; then, from these values, derive \hat{f}(x) for all x.
Now, treat the second step first.
Given the vectorm^=(f^(x1),…,f^(xn))T{\displaystyle {\hat {m}}=({\hat {f}}(x_{1}),\ldots ,{\hat {f}}(x_{n}))^{T}}of fitted values, the sum-of-squares part of the spline criterion is fixed. It remains only to minimize∫f^″(x)2dx{\displaystyle \int {\hat {f}}''(x)^{2}\,dx}, and the minimizer is a natural cubicsplinethat interpolates the points(xi,f^(xi)){\displaystyle (x_{i},{\hat {f}}(x_{i}))}. This interpolating spline is a linear operator, and can be written in the form
wherefi(x){\displaystyle f_{i}(x)}are a set of spline basis functions. As a result, the roughness penalty has the form
where the elements ofAare∫fi″(x)fj″(x)dx{\displaystyle \int f_{i}''(x)f_{j}''(x)dx}. The basis functions, and hence the matrixA, depend on the configuration of the predictor variablesxi{\displaystyle x_{i}}, but not on the responsesYi{\displaystyle Y_{i}}orm^{\displaystyle {\hat {m}}}.
Ais ann×nmatrix given byA=ΔTW−1Δ{\displaystyle A=\Delta ^{T}W^{-1}\Delta }.
Δis an(n-2)×nmatrix of second differences with elements:
Δii=1/hi{\displaystyle \Delta _{ii}=1/h_{i}},Δi,i+1=−1/hi−1/hi+1{\displaystyle \Delta _{i,i+1}=-1/h_{i}-1/h_{i+1}},Δi,i+2=1/hi+1{\displaystyle \Delta _{i,i+2}=1/h_{i+1}}
Wis an(n-2)×(n-2)symmetric tri-diagonal matrix with elements:
Wi−1,i=Wi,i−1=hi/6{\displaystyle W_{i-1,i}=W_{i,i-1}=h_{i}/6},Wii=(hi+hi+1)/3{\displaystyle W_{ii}=(h_{i}+h_{i+1})/3}andhi=ξi+1−ξi{\displaystyle h_{i}=\xi _{i+1}-\xi _{i}}, the distances between successive knots (or x values).
Now back to the first step. The penalized sum-of-squares can be written as

\| Y - \hat{m} \|^2 + \lambda \hat{m}^T A \hat{m},

where Y = (Y_1, \ldots, Y_n)^T.
Minimizing over \hat{m} by differentiating with respect to \hat{m} and setting the derivative to zero gives -2\{Y - \hat{m}\} + 2\lambda A \hat{m} = 0,[6] and hence \hat{m} = (I + \lambda A)^{-1} Y.
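A minimal numerical sketch of this computation (illustrative code, not from the article) assembles Δ, W and A exactly as defined above and solves the linear system for the fitted values. It assumes sorted, distinct knots with one observation each, and uses NumPy:

import numpy as np

def smoothing_spline_fitted_values(x, y, lam):
    # Fitted values m_hat = (I + lam*A)^(-1) y, with A = Delta^T W^(-1) Delta.
    n = len(x)
    h = np.diff(x)                      # h_i = x_{i+1} - x_i (knot spacings)
    Delta = np.zeros((n - 2, n))        # (n-2) x n second-difference matrix
    W = np.zeros((n - 2, n - 2))        # (n-2) x (n-2) tridiagonal matrix
    for i in range(n - 2):
        Delta[i, i] = 1.0 / h[i]
        Delta[i, i + 1] = -1.0 / h[i] - 1.0 / h[i + 1]
        Delta[i, i + 2] = 1.0 / h[i + 1]
        W[i, i] = (h[i] + h[i + 1]) / 3.0
        if i > 0:
            W[i, i - 1] = W[i - 1, i] = h[i] / 6.0
    A = Delta.T @ np.linalg.solve(W, Delta)
    return np.linalg.solve(np.eye(n) + lam * A, y)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(30)
print(smoothing_spline_fitted_values(x, y, lam=1e-4)[:5])

Small λ tracks the data closely; large λ pulls the fitted values toward a straight line, mirroring the role of the roughness penalty above.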
De Boor's approach exploits the same idea, of finding a balance between having a smooth curve and being close to the given data.[7]
The quantity to be minimized is

p \sum_{i=1}^{n} \left( \frac{Y_i - f(x_i)}{\delta_i} \right)^2 + (1 - p) \int \left( f^{(m)}(x) \right)^2 \, dx,

where p is a parameter called the smooth factor and belongs to the interval [0, 1], and \delta_i, i = 1, \dots, n, are the quantities controlling the extent of smoothing (they represent the weight \delta_i^{-2} of each point Y_i). In practice, since cubic splines are mostly used, m is usually 2. The solution for m = 2 was proposed by Christian Reinsch in 1967.[8] For m = 2, as p approaches 1, \hat{f} converges to the "natural" spline interpolant to the given data.[7] As p approaches 0, \hat{f} converges to a straight line (the smoothest curve). Since finding a suitable value of p is a task of trial and error, a redundant constant S was introduced for convenience.[8] S is used to numerically determine the value of p so that the function \hat{f} meets the following condition:

\sum_{i=1}^{n} \left( \frac{Y_i - \hat{f}(x_i)}{\delta_i} \right)^2 \le S.
The algorithm described by de Boor starts withp=0{\displaystyle p=0}and increasesp{\displaystyle p}until the condition is met.[7]Ifδi{\displaystyle \delta _{i}}is an estimation of the standard deviation forYi{\displaystyle Y_{i}}, the constantS{\displaystyle S}is recommended to be chosen in the interval[n−2n,n+2n]{\displaystyle \left[n-{\sqrt {2n}},n+{\sqrt {2n}}\right]}. HavingS=0{\displaystyle S=0}means the solution is the "natural" spline interpolant.[8]IncreasingS{\displaystyle S}means we obtain a smoother curve by getting farther from the given data.
There are two main classes of method for generalizing from smoothing with respect to a scalar x to smoothing with respect to a vector x. The first approach simply generalizes the spline smoothing penalty to the multidimensional setting. For example, if trying to estimate f(x, z) we might use the thin plate spline penalty and find the \hat{f}(x, z) minimizing

\sum_{i=1}^{n} \{ y_i - \hat{f}(x_i, z_i) \}^2 + \lambda \iint \left[ \left( \frac{\partial^2 \hat{f}}{\partial x^2} \right)^2 + 2 \left( \frac{\partial^2 \hat{f}}{\partial x \, \partial z} \right)^2 + \left( \frac{\partial^2 \hat{f}}{\partial z^2} \right)^2 \right] dx \, dz.
The thin plate spline approach can be generalized to smoothing with respect to more than two dimensions and to other orders of differentiation in the penalty.[1]As the dimension increases there are some restrictions on the smallest order of differential that can be used,[1]but actually Duchon's original paper,[9]gives slightly more complicated penalties that can avoid this restriction.
The thin plate splines are isotropic, meaning that if we rotate the x, z co-ordinate system the estimate will not change, but also that we are assuming that the same level of smoothing is appropriate in all directions. This is often considered reasonable when smoothing with respect to spatial location, but in many other cases isotropy is not an appropriate assumption and can lead to sensitivity to apparently arbitrary choices of measurement units. For example, if smoothing with respect to distance and time, an isotropic smoother will give different results if distance is measured in metres and time in seconds than if the units are changed to centimetres and hours.
The second class of generalizations to multi-dimensional smoothing deals directly with this scale invariance issue using tensor product spline constructions.[10][11][12]Such splines have smoothing penalties with multiple smoothing parameters, which is the price that must be paid for not assuming that the same degree of smoothness is appropriate in all directions.
Smoothing splines are related to, but distinct from:
Source code for spline smoothing can be found in the examples from Carl de Boor's book A Practical Guide to Splines. The examples are in the Fortran programming language. The updated sources are also available on Carl de Boor's official site.[1]
|
https://en.wikipedia.org/wiki/Spline_regression
|
In statistics and coding theory, a Hamming space is usually the set of all 2^N binary strings of length N, where different binary strings are considered to be adjacent when they differ only in one position. The total distance between any two binary strings is then the total number of positions at which the corresponding bits differ, called the Hamming distance.[1][2] Hamming spaces are named after American mathematician Richard Hamming, who introduced the concept in 1950.[3] They are used in the theory of coding signals and transmission.
More generally, a Hamming space can be defined over any alphabet (set) Q as the set of words of a fixed length N with letters from Q.[4][5] If Q is a finite field, then a Hamming space over Q is an N-dimensional vector space over Q. In the typical binary case, the field is thus GF(2) (also denoted by Z_2).[4]
In coding theory, if Q has q elements, then any subset C (usually assumed to be of cardinality at least two) of the N-dimensional Hamming space over Q is called a q-ary code of length N; the elements of C are called codewords.[4][5] In the case where C is a linear subspace of its Hamming space, it is called a linear code.[4] A typical example of a linear code is the Hamming code. Codes defined via a Hamming space necessarily have the same length for every codeword, so they are called block codes when it is necessary to distinguish them from variable-length codes, which are defined by unique factorization on a monoid.
The Hamming distance endows a Hamming space with a metric, which is essential in defining basic notions of coding theory such as error detecting and error correcting codes.[4]
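For illustration (a sketch of the definitions above; the toy code is our own choice, not from the article), the Hamming distance between words and the minimum distance of a small binary code can be computed as:

from itertools import combinations

def hamming_distance(u, v):
    # Number of positions at which the corresponding symbols differ.
    assert len(u) == len(v)
    return sum(a != b for a, b in zip(u, v))

print(hamming_distance("10110", "11100"))   # 2

# A toy binary code: the 3-repetition code {000, 111} has minimum
# distance 3, so it can correct any single-bit error.
code = ["000", "111"]
print(min(hamming_distance(u, v) for u, v in combinations(code, 2)))   # 3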
Hamming spaces over non-field alphabets have also been considered, especially over finite rings (most notably over Z_4), giving rise to modules instead of vector spaces and ring-linear codes (identified with submodules) instead of linear codes. The typical metric used in this case is the Lee distance. There exists a Gray isometry between Z_2^{2m} (i.e. GF(2^{2m})) with the Hamming distance and Z_4^m (also denoted as GR(4, m)) with the Lee distance.[6][7][8]
|
https://en.wikipedia.org/wiki/Hamming_space
|
In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method. It was first developed by Evelyn Fix and Joseph Hodges in 1951,[1] and later expanded by Thomas Cover.[2] Most often, it is used for classification, as a k-NN classifier, the output of which is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor.
The k-NN algorithm can also be generalized for regression. In k-NN regression, also known as nearest neighbor smoothing, the output is the property value for the object. This value is the average of the values of the k nearest neighbors. If k = 1, then the output is simply assigned to the value of that single nearest neighbor, also known as nearest neighbor interpolation.
For both classification and regression, a useful technique can be to assign weights to the contributions of the neighbors, so that nearer neighbors contribute more to the average than more distant ones. For example, a common weighting scheme consists of giving each neighbor a weight of 1/d, where d is the distance to the neighbor,[3] as in the sketch below.
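The following is a minimal sketch of these ideas (illustrative code, not a reference implementation; the data and function names are ours), performing k-NN classification with either plain majority voting or 1/d distance weighting:

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3, weighted=False):
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of the k nearest
    if not weighted:
        # plain plurality vote among the k nearest neighbours
        return Counter(y_train[nearest]).most_common(1)[0][0]
    votes = {}
    for i in nearest:                             # weight each vote by 1/d
        w = 1.0 / dists[i] if dists[i] > 0 else float("inf")
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w
    return max(votes, key=votes.get)

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.2, 0.1]), k=3))   # -> 0

For k-NN regression, the vote would simply be replaced by the (possibly weighted) average of the neighbors' values.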
The input consists of thekclosest training examples in adata set.
The neighbors are taken from a set of objects for which the class (fork-NN classification) or the object property value (fork-NN regression) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required.
A peculiarity (sometimes even a disadvantage) of thek-NN algorithm is its sensitivity to the local structure of the data.
In k-NN classification the function is only approximated locally and all computation is deferred until function evaluation. Since this algorithm relies on distance, if the features represent different physical units or come in vastly different scales, then feature-wise normalizing of the training data can greatly improve its accuracy.[4]
Suppose we have pairs(X1,Y1),(X2,Y2),…,(Xn,Yn){\displaystyle (X_{1},Y_{1}),(X_{2},Y_{2}),\dots ,(X_{n},Y_{n})}taking values inRd×{1,2}{\displaystyle \mathbb {R} ^{d}\times \{1,2\}}, whereYis the class label ofX, so thatX|Y=r∼Pr{\displaystyle X|Y=r\sim P_{r}}forr=1,2{\displaystyle r=1,2}(and probability distributionsPr{\displaystyle P_{r}}). Given some norm‖⋅‖{\displaystyle \|\cdot \|}onRd{\displaystyle \mathbb {R} ^{d}}and a pointx∈Rd{\displaystyle x\in \mathbb {R} ^{d}}, let(X(1),Y(1)),…,(X(n),Y(n)){\displaystyle (X_{(1)},Y_{(1)}),\dots ,(X_{(n)},Y_{(n)})}be a reordering of the training data such that‖X(1)−x‖≤⋯≤‖X(n)−x‖{\displaystyle \|X_{(1)}-x\|\leq \dots \leq \|X_{(n)}-x\|}.
The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing thefeature vectorsand class labels of the training samples.
In the classification phase,kis a user-defined constant, and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among thektraining samples nearest to that query point.
A commonly used distance metric for continuous variables is Euclidean distance. For discrete variables, such as for text classification, another metric can be used, such as the overlap metric (or Hamming distance). In the context of gene expression microarray data, for example, k-NN has been employed with correlation coefficients, such as Pearson and Spearman, as a metric.[5] Often, the classification accuracy of k-NN can be improved significantly if the distance metric is learned with specialized algorithms such as Large Margin Nearest Neighbor or Neighbourhood components analysis.
A drawback of the basic "majority voting" classification occurs when the class distribution is skewed. That is, examples of a more frequent class tend to dominate the prediction of the new example, because they tend to be common among theknearest neighbors due to their large number.[7]One way to overcome this problem is to weight the classification, taking into account the distance from the test point to each of itsknearest neighbors. The class (or value, in regression problems) of each of theknearest points is multiplied by a weight proportional to the inverse of the distance from that point to the test point. Another way to overcome skew is by abstraction in data representation. For example, in aself-organizing map(SOM), each node is a representative (a center) of a cluster of similar points, regardless of their density in the original training data.K-NN can then be applied to the SOM.
The best choice of k depends upon the data; generally, larger values of k reduce the effect of noise on the classification,[8] but make boundaries between classes less distinct. A good k can be selected by various heuristic techniques (see hyperparameter optimization). The special case where the class is predicted to be the class of the closest training sample (i.e. when k = 1) is called the nearest neighbor algorithm.
The accuracy of thek-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or if the feature scales are not consistent with their importance. Much research effort has been put intoselectingorscalingfeatures to improve classification. A particularly popular[citation needed]approach is the use ofevolutionary algorithmsto optimize feature scaling.[9]Another popular approach is to scale features by themutual informationof the training data with the training classes.[citation needed]
In binary (two-class) classification problems, it is helpful to choose k to be an odd number as this avoids tied votes. One popular way of choosing the empirically optimal k in this setting is via the bootstrap method.[10]
The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier that assigns a pointxto the class of its closest neighbour in the feature space, that isCn1nn(x)=Y(1){\displaystyle C_{n}^{1nn}(x)=Y_{(1)}}.
As the size of training data set approaches infinity, the one nearest neighbour classifier guarantees an error rate of no worse than twice theBayes error rate(the minimum achievable error rate given the distribution of the data).
The k-nearest neighbour classifier can be viewed as assigning the k nearest neighbours a weight 1/k and all others a weight of 0. This can be generalised to weighted nearest neighbour classifiers, in which the ith nearest neighbour is assigned a weight w_{ni}, with \sum_{i=1}^{n} w_{ni} = 1. An analogous result on the strong consistency of weighted nearest neighbour classifiers also holds.[11]
Let C_n^{wnn} denote the weighted nearest neighbour classifier with weights \{w_{ni}\}_{i=1}^{n}. Subject to regularity conditions on the class distributions, the excess risk has the following asymptotic expansion:[12]

\mathcal{R}(C_n^{wnn}) - \mathcal{R}(C^{\text{Bayes}}) = \left( B_1 s_n^2 + B_2 t_n^2 \right) \{ 1 + o(1) \},

for constants B_1 and B_2, where s_n^2 = \sum_{i=1}^{n} w_{ni}^2 and t_n = n^{-2/d} \sum_{i=1}^{n} w_{ni} \left\{ i^{1+2/d} - (i-1)^{1+2/d} \right\}.
The optimal weighting scheme{wni∗}i=1n{\displaystyle \{w_{ni}^{*}\}_{i=1}^{n}}, that balances the two terms in the display above, is given as follows: setk∗=⌊Bn4d+4⌋{\displaystyle k^{*}=\lfloor Bn^{\frac {4}{d+4}}\rfloor },wni∗=1k∗[1+d2−d2k∗2/d{i1+2/d−(i−1)1+2/d}]{\displaystyle w_{ni}^{*}={\frac {1}{k^{*}}}\left[1+{\frac {d}{2}}-{\frac {d}{2{k^{*}}^{2/d}}}\{i^{1+2/d}-(i-1)^{1+2/d}\}\right]}fori=1,2,…,k∗{\displaystyle i=1,2,\dots ,k^{*}}andwni∗=0{\displaystyle w_{ni}^{*}=0}fori=k∗+1,…,n{\displaystyle i=k^{*}+1,\dots ,n}.
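As an illustrative sketch of the scheme above (our own code; B is chosen arbitrarily here, since the constant is problem-dependent), the optimal weights can be computed directly, and the telescoping sum makes them total exactly 1:

import numpy as np

def optimal_knn_weights(n, d, B=1.0):
    # k* = floor(B * n^(4/(d+4))); weights follow the display above.
    k_star = int(np.floor(B * n ** (4.0 / (d + 4))))
    i = np.arange(1, k_star + 1)
    bracket = i ** (1 + 2.0 / d) - (i - 1) ** (1 + 2.0 / d)
    w = (1.0 / k_star) * (1 + d / 2.0
                          - (d / (2.0 * k_star ** (2.0 / d))) * bracket)
    return w  # the remaining neighbours (i > k*) get weight 0

w = optimal_knn_weights(n=1000, d=2)
print(len(w), w.sum())   # k* weights summing (telescopically) to 1.0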
With optimal weights the dominant term in the asymptotic expansion of the excess risk isO(n−4d+4){\displaystyle {\mathcal {O}}(n^{-{\frac {4}{d+4}}})}. Similar results are true when using abagged nearest neighbour classifier.
k-NN is a special case of a variable-bandwidth, kernel density "balloon" estimator with a uniform kernel.[13][14]
The naive version of the algorithm is easy to implement by computing the distances from the test example to all stored examples, but it is computationally intensive for large training sets. Using an approximatenearest neighbor searchalgorithm makesk-NN computationally tractable even for large data sets. Many nearest neighbor search algorithms have been proposed over the years; these generally seek to reduce the number of distance evaluations actually performed.
k-NN has some strongconsistencyresults. As the amount of data approaches infinity, the two-classk-NN algorithm is guaranteed to yield an error rate no worse than twice theBayes error rate(the minimum achievable error rate given the distribution of the data).[2]Various improvements to thek-NN speed are possible by using proximity graphs.[15]
For multi-classk-NN classification,CoverandHart(1967) prove an upper bound error rate ofR∗≤RkNN≤R∗(2−MR∗M−1){\displaystyle R^{*}\ \leq \ R_{k\mathrm {NN} }\ \leq \ R^{*}\left(2-{\frac {MR^{*}}{M-1}}\right)}whereR∗{\displaystyle R^{*}}is the Bayes error rate (which is the minimal error rate possible),RkNN{\displaystyle R_{kNN}}is the asymptotick-NN error rate, andMis the number of classes in the problem. This bound is tight in the sense that both the lower and upper bounds are achievable by some distribution.[16]ForM=2{\displaystyle M=2}and as the Bayesian error rateR∗{\displaystyle R^{*}}approaches zero, this limit reduces to "not more than twice the Bayesian error rate".
There are many results on the error rate of theknearest neighbour classifiers.[17]Thek-nearest neighbour classifier is strongly (that is for any joint distribution on(X,Y){\displaystyle (X,Y)})consistentprovidedk:=kn{\displaystyle k:=k_{n}}diverges andkn/n{\displaystyle k_{n}/n}converges to zero asn→∞{\displaystyle n\to \infty }.
LetCnknn{\displaystyle C_{n}^{knn}}denote theknearest neighbour classifier based on a training set of sizen. Under certain regularity conditions, theexcess riskyields the following asymptotic expansion[12]RR(Cnknn)−RR(CBayes)={B11k+B2(kn)4/d}{1+o(1)},{\displaystyle {\mathcal {R}}_{\mathcal {R}}(C_{n}^{knn})-{\mathcal {R}}_{\mathcal {R}}(C^{\text{Bayes}})=\left\{B_{1}{\frac {1}{k}}+B_{2}\left({\frac {k}{n}}\right)^{4/d}\right\}\{1+o(1)\},}for some constantsB1{\displaystyle B_{1}}andB2{\displaystyle B_{2}}.
The choice k∗=⌊Bn4d+4⌋{\displaystyle k^{*}=\left\lfloor Bn^{\frac {4}{d+4}}\right\rfloor } offers a trade-off between the two terms in the above display, for which the k∗{\displaystyle k^{*}}-nearest neighbour error converges to the Bayes error at the optimal (minimax) rate O(n−4d+4){\displaystyle {\mathcal {O}}\left(n^{-{\frac {4}{d+4}}}\right)}.
The k-nearest neighbor classification performance can often be significantly improved through (supervised) metric learning. Popular algorithms are neighbourhood components analysis and large margin nearest neighbor. Supervised metric learning algorithms use the label information to learn a new metric or pseudo-metric.
When the input data to an algorithm is too large to be processed and is suspected to be redundant (e.g. the same measurement in both feet and meters), the input data is transformed into a reduced representation set of features (also called a feature vector). Transforming the input data into the set of features is called feature extraction. If the features extracted are carefully chosen, it is expected that the feature set will capture the relevant information from the input data, so that the desired task can be performed using this reduced representation instead of the full-size input. Feature extraction is performed on raw data prior to applying the k-NN algorithm on the transformed data in feature space.
An example of a typical computer vision computation pipeline for face recognition using k-NN, including feature extraction and dimension reduction pre-processing steps (usually implemented with OpenCV), consists of Haar face detection, followed by mean-shift tracking analysis and PCA or Fisher LDA projection into feature space, with k-NN classification performed in the reduced space.
For high-dimensional data (e.g., with number of dimensions more than 10)dimension reductionis usually performed prior to applying thek-NN algorithm in order to avoid the effects of thecurse of dimensionality.[18]
The curse of dimensionality in the k-NN context essentially means that Euclidean distance is unhelpful in high dimensions because all vectors are almost equidistant from the search query vector (imagine multiple points lying more or less on a circle with the query point at the center: the distance from the query to all data points in the search space is almost the same).
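This concentration of distances is easy to observe empirically. The following self-contained Python sketch draws random points from the unit hypercube and shows that the relative spread of distances to a query point shrinks as the dimension grows:

```python
import random
import statistics

def concentration(dim, n_points=1000):
    """Ratio of the spread of query-to-point distances to their mean,
    for points drawn uniformly from the unit hypercube."""
    query = [0.5] * dim
    dists = []
    for _ in range(n_points):
        p = [random.random() for _ in range(dim)]
        dists.append(sum((a - b) ** 2 for a, b in zip(query, p)) ** 0.5)
    return statistics.stdev(dists) / statistics.mean(dists)

for d in (2, 10, 100, 1000):
    print(d, round(concentration(d), 4))
# The ratio shrinks as d grows: distances to all points become nearly equal.
```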
Feature extractionand dimension reduction can be combined in one step usingprincipal component analysis(PCA),linear discriminant analysis(LDA), orcanonical correlation analysis(CCA) techniques as a pre-processing step, followed by clustering byk-NN onfeature vectorsin reduced-dimension space. This process is also called low-dimensionalembedding.[19]
For very-high-dimensional datasets (e.g. when performing a similarity search on live video streams, DNA data or high-dimensionaltime series) running a fastapproximatek-NN search usinglocality sensitive hashing, "random projections",[20]"sketches"[21]or other high-dimensional similarity search techniques from theVLDBtoolbox might be the only feasible option.
Nearest neighbor rules in effect implicitly compute thedecision boundary. It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity.[22]
Data reductionis one of the most important problems for work with huge data sets. Usually, only some of the data points are needed for accurate classification. Those data are called theprototypesand can be found as follows:
A training example surrounded by examples of other classes is called a class outlier. Causes of class outliers include:
Class outliers withk-NN produce noise. They can be detected and separated for future analysis. Given two natural numbers,k>r>0, a training example is called a (k,r)NN class-outlier if itsknearest neighbors include more thanrexamples of other classes.
Condensed nearest neighbor (CNN, theHartalgorithm) is an algorithm designed to reduce the data set fork-NN classification.[23]It selects the set of prototypesUfrom the training data, such that 1NN withUcan classify the examples almost as accurately as 1NN does with the whole data set.
Given a training set X, CNN works iteratively: scan all elements of X, looking for an element x whose nearest prototype from U has a different label than x; remove x from X and add it to U; and repeat the scan until no more prototypes are added to U.
UseUinstead ofXfor classification. The examples that are not prototypes are called "absorbed" points.
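A minimal Python sketch of the CNN procedure just described (illustrative only; efficient implementations scan in order of decreasing border ratio, as described next):

```python
import math

def nearest_label(x, prototypes):
    """Label of the nearest prototype (1NN); `prototypes` is a list of (point, label)."""
    return min(prototypes, key=lambda p: math.dist(x, p[0]))[1]

def condensed_nearest_neighbour(training):
    """Hart's CNN sketch: grow the prototype set U until 1NN over U
    classifies every training example correctly.  The result depends on
    the scan order; `training` is a non-empty list of (point, label)."""
    U = [training[0]]
    changed = True
    while changed:
        changed = False
        for x, label in training:
            if nearest_label(x, U) != label:
                U.append((x, label))  # misclassified example becomes a prototype
                changed = True
    return U
```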
It is efficient to scan the training examples in order of decreasing border ratio.[24] The border ratio of a training example x is defined as

a(x) = ‖x'-y‖ / ‖x-y‖

where ‖x-y‖ is the distance to the closest example y having a different color than x, and ‖x'-y‖ is the distance from y to its closest example x' with the same label as x.
The border ratio is in the interval [0,1] because‖x'-y‖never exceeds‖x-y‖. This ordering gives preference to the borders of the classes for inclusion in the set of prototypesU. A point of a different label thanxis called external tox. The calculation of the border ratio is illustrated by the figure on the right. The data points are labeled by colors: the initial point isxand its label is red. External points are blue and green. The closest toxexternal point isy. The closest toyred point isx'. The border ratioa(x) = ‖x'-y‖ / ‖x-y‖is the attribute of the initial pointx.
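A direct transcription of this definition into Python (assuming examples are given as (point, label) pairs; names are illustrative):

```python
import math

def border_ratio(x, label, examples):
    """a(x) = ||x'-y|| / ||x-y|| for a training example x with the given label;
    `examples` is a list of (point, label) pairs containing both classes."""
    external = [p for p, l in examples if l != label]   # points with other labels
    same = [p for p, l in examples if l == label]       # points of x's own class
    y = min(external, key=lambda p: math.dist(x, p))    # closest external point to x
    x_prime = min(same, key=lambda p: math.dist(y, p))  # closest same-label point to y
    return math.dist(x_prime, y) / math.dist(x, y)
```

Since x itself is among the candidates for x', the numerator never exceeds the denominator, which is why a(x) lies in [0, 1].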
Below is an illustration of CNN in a series of figures. There are three classes (red, green and blue). Fig. 1: initially there are 60 points in each class. Fig. 2 shows the 1NN classification map: each pixel is classified by 1NN using all the data. Fig. 3 shows the 5NN classification map. White areas correspond to the unclassified regions, where 5NN voting is tied (for example, if there are two green, two red and one blue points among 5 nearest neighbors). Fig. 4 shows the reduced data set. The crosses are the class-outliers selected by the (3,2)NN rule (all the three nearest neighbors of these instances belong to other classes); the squares are the prototypes, and the empty circles are the absorbed points. The left bottom corner shows the numbers of the class-outliers, prototypes and absorbed points for all three classes. The number of prototypes varies from 15% to 20% for different classes in this example. Fig. 5 shows that the 1NN classification map with the prototypes is very similar to that with the initial data set. The figures were produced using the Mirkes applet.[24]
In k-NN regression, also known as k-NN smoothing, the k-NN algorithm is used for estimating continuous variables. One such algorithm uses a weighted average of the k nearest neighbors, weighted by the inverse of their distance. This algorithm works as follows: compute the Euclidean or Mahalanobis distance from the query example to the labeled examples; order the labeled examples by increasing distance; find a heuristically optimal number k of nearest neighbors based on RMSE, using cross-validation; and calculate an inverse-distance-weighted average with the k nearest multivariate neighbors.
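A minimal Python sketch of the inverse-distance-weighted average in the last step (the cross-validated choice of k is omitted; names are illustrative):

```python
import math

def knn_regress(query, examples, k):
    """Inverse-distance-weighted k-NN regression.
    `examples` is a list of (feature_vector, target) pairs."""
    by_dist = sorted(examples, key=lambda ex: math.dist(query, ex[0]))[:k]
    num = den = 0.0
    for point, target in by_dist:
        d = math.dist(query, point)
        if d == 0:
            return target          # exact match: return its target directly
        w = 1.0 / d
        num += w * target
        den += w
    return num / den
```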
The distance to thekth nearest neighbor can also be seen as a local density estimate and thus is also a popular outlier score inanomaly detection. The larger the distance to thek-NN, the lower the local density, the more likely the query point is an outlier.[25]Although quite simple, this outlier model, along with another classic data mining method,local outlier factor, works quite well also in comparison to more recent and more complex approaches, according to a large scale experimental analysis.[26]
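The outlier score itself is a one-liner; a sketch:

```python
import math

def knn_outlier_score(query, data, k):
    """Distance to the k-th nearest neighbour of `query` among `data`:
    the larger the score, the lower the local density around the query."""
    return sorted(math.dist(query, p) for p in data)[k - 1]
```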
A confusion matrix or "matching matrix" is often used as a tool to validate the accuracy of k-NN classification. More robust statistical methods such as the likelihood-ratio test can also be applied.
|
https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
|
Incomputing, theSystem Management BIOS(SMBIOS) specification definesdata structures(and access methods) that can be used to read management information produced by theBIOSof acomputer.[1]This eliminates the need for theoperating systemto probe hardware directly to discover what devices are present in the computer. The SMBIOS specification is produced by theDistributed Management Task Force(DMTF), a non-profitstandards development organization. The DMTF estimates that two billion client and server systems implement SMBIOS.[2]
SMBIOS was originally known as Desktop Management BIOS (DMIBIOS), since it interacted with theDesktop Management Interface(DMI).[3]
The DMTF released version 3.7.1 of the specification on May 24, 2024.[4]
Version 1 of the Desktop Management BIOS (DMIBIOS) specification was produced byPhoenix Technologiesin or before 1996.[5][6]
Version 2.0 of the Desktop Management BIOS specification was released on March 6, 1996 byAmerican Megatrends(AMI),Award Software,Dell,Intel, Phoenix Technologies, andSystemSoft Corporation. It introduced 16-bit plug-and-play functions used to access the structures from Windows 95.[7]
The last version to be published directly by vendors was 2.3 on August 12, 1998. The authors were American Megatrends, Award Software,Compaq, Dell,Hewlett-Packard, Intel,International Business Machines(IBM), Phoenix Technologies, and SystemSoft Corporation.
Circa 1999, theDistributed Management Task Force(DMTF) took ownership of the specification. The first version published by the DMTF was 2.3.1 on March 16, 1999. At approximately the same timeMicrosoftstarted to require thatOEMsand BIOS vendors support the interface/data-set in order to have Microsoftcertification.
Version 3.0.0, introduced in February 2015, added a 64-bit entry point, which can coexist with the previously defined 32-bit entry point.
Version 3.4.0 was released in August 2020.[8]
Version 3.5.0 was released in September 2021.[9]
Version 3.6.0 was released in June 2022.[10]
Version 3.7.0 was released in July 2023.[11]
The SMBIOS table consists of an entry point (two types are defined, 32-bit and 64-bit), and a variable number of structures that describe platform components and features. These structures are occasionally referred to as "tables" or "records" in third-party documentation.
As of version 3.3.0, the SMBIOS specification defines structure types describing, among other things, the BIOS, system, baseboard, chassis, processors, caches, system slots, and memory devices.[12][13]
The EFI configuration table (EFI_CONFIGURATION_TABLE) contains entries pointing to the SMBIOS 2 and/or SMBIOS 3 tables.[14]There are several ways to access the data, depending on the platform and operating system.
In theUEFI Shell, theSmbiosViewcommand can retrieve and display the SMBIOS data.[15][16]One can often enter the UEFI shell by entering the system firmware settings, and then selecting the shell as a boot option (as opposed to a DVD drive or hard drive).
ForLinux,FreeBSD, etc., thedmidecodeutility can be used.
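On Linux the kernel also exposes the raw tables under /sys/firmware/dmi/tables. The following Python sketch parses the 64-bit ("_SM3_") entry point from that location, assuming an SMBIOS 3.x system and following the entry-point layout from the specification (reading the file typically requires root privileges):

```python
import struct

def parse_smbios3_entry_point(path="/sys/firmware/dmi/tables/smbios_entry_point"):
    """Parse the 64-bit SMBIOS 3.x entry point structure (anchor "_SM3_")."""
    with open(path, "rb") as f:
        raw = f.read()
    if raw[0:5] != b"_SM3_":
        raise ValueError("not an SMBIOS 3.x entry point")
    # Fields after the 5-byte anchor, per the SMBIOS 3.x specification:
    # checksum, length, major, minor, docrev, entry point revision,
    # reserved, maximum structure table size, structure table address.
    (checksum, length, major, minor, docrev, revision, _reserved,
     max_size, table_addr) = struct.unpack_from("<BBBBBBBIQ", raw, 5)
    return {"version": f"{major}.{minor}.{docrev}",
            "max_table_size": max_size,
            "table_address": table_addr}

# Requires root on most systems:
# print(parse_smbios3_entry_point())
```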
MicrosoftspecifiesWMIas the preferred mechanism for accessing SMBIOS information fromMicrosoft Windows.[17][18]
On Windows systems that support it (XP and later), some SMBIOS information can be viewed with either theWMICutility with 'BIOS'/'MEMORYCHIP'/'BASEBOARD' and similar parameters, or by looking in the Windows Registry under HKLM\HARDWARE\DESCRIPTION\System.
Various software utilities can retrieve raw SMBIOS data, including FirmwareTablesView[19]andAIDA64.
Table and structure creation is normally up to the system firmware/BIOS. TheUEFI Platform Initialization(PI) specification includes an SMBIOS protocol (EFI_SMBIOS_PROTOCOL) that allows components to submit SMBIOS structures for inclusion, and enables the producer to create the SMBIOS table for a platform.[20]
Platform virtualization softwarecan also generate SMBIOS tables for use inside VMs, for instanceQEMU.[21]
If the SMBIOS data is not generated and filled in correctly, the machine may behave unexpectedly. For example, a Mini PC that advertises Chassis Information | Type = Tablet may behave unexpectedly under Linux: a desktop manager like GNOME will attempt to monitor a non-existent battery and shut down the screen and network interfaces when the missing battery drops below a threshold. Additionally, if the Chassis Information | Manufacturer field is not filled in correctly, work-arounds for the incorrect Type = Tablet problem cannot be applied.[22]
|
https://en.wikipedia.org/wiki/System_Management_BIOS
|
AtriplestoreorRDF storeis a purpose-builtdatabasefor the storage and retrieval oftriples[1]throughsemantic queries. A triple is a data entity composed ofsubject–predicate–object, like "Bob is 35" (i.e., Bob's age measured in years is 35) or "Bob knows Fred".
Much like arelational database, information in a triplestore is stored and retrieved via aquery language. Unlike a relational database, a triplestore is optimized for the storage and retrieval of triples. In addition to queries, triples can usually be imported and exported using theResource Description Framework(RDF) and other formats.
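As an illustration of the triple model and semantic querying, here is a minimal sketch using the Python rdflib library with an in-memory store (the example.org namespace is, of course, illustrative):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF

g = Graph()                           # an in-memory RDF store
ex = Namespace("http://example.org/")

# Store the triples "Bob is 35" and "Bob knows Fred".
g.add((ex.bob, ex.age, Literal(35)))
g.add((ex.bob, FOAF.knows, ex.fred))

# Retrieve data again with a semantic (SPARQL) query.
for row in g.query(
    "SELECT ?who WHERE { ?who <http://xmlns.com/foaf/0.1/knows> ?someone . }"
):
    print(row.who)                    # -> http://example.org/bob
```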
Some triplestores have been built as database engines from scratch, while others have been built on top of existing commercial relational database engines (such asSQL-based)[2]or NoSQLdocument-oriented databaseengines.[3]Like the early development ofonline analytical processing(OLAP) databases, this intermediate approach allowed large and powerful database engines to be constructed for little programming effort in the initial phases of triplestore development. A difficulty with implementing triplestores over SQL is that although "triples" may thus be "stored", implementing efficient querying of a graph-based RDF model (such as mapping fromSPARQL) onto SQL queries is difficult.[4]
Adding a name to the triple makes a "quad store" ornamed graph.
Agraph databasehas a more generalized structure than a triplestore, using graph structures with nodes, edges, and properties to represent and store data. Graph databases might provide index-free adjacency, meaning every element contains a direct pointer to its adjacent elements, and no index lookups are necessary. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores andnetwork databases.
|
https://en.wikipedia.org/wiki/RDF_Database
|
Instatistics,semiparametric regressionincludesregressionmodels that combineparametricandnonparametricmodels. They are often used in situations where the fully nonparametric model may not perform well or when the researcher wants to use a parametric model but the functional form with respect to a subset of the regressors or the density of the errors is not known. Semiparametric regression models are a particular type ofsemiparametric modellingand, since semiparametric models contain a parametric component, they rely on parametric assumptions and may bemisspecifiedandinconsistent, just like a fully parametric model.
Many different semiparametric regression methods have been proposed and developed. The most popular methods are the partially linear, index and varying coefficient models.
A partially linear model is given by

Yi=Xi′β+g(Zi)+ui,i=1,…,n,{\displaystyle Y_{i}=X'_{i}\beta +g\left(Z_{i}\right)+u_{i},\qquad i=1,\dots ,n,}
whereYi{\displaystyle Y_{i}}is the dependent variable,Xi{\displaystyle X_{i}}is ap×1{\displaystyle p\times 1}vector of explanatory variables,β{\displaystyle \beta }is ap×1{\displaystyle p\times 1}vector of unknown parameters andZi∈Rq{\displaystyle Z_{i}\in \operatorname {R} ^{q}}. The parametric part of the partially linear model is given by the parameter vectorβ{\displaystyle \beta }while the nonparametric part is the unknown functiong(Zi){\displaystyle g\left(Z_{i}\right)}. The data is assumed to be i.i.d. withE(ui|Xi,Zi)=0{\displaystyle E\left(u_{i}|X_{i},Z_{i}\right)=0}and the model allows for a conditionallyheteroskedasticerror processE(ui2|x,z)=σ2(x,z){\displaystyle E\left(u_{i}^{2}|x,z\right)=\sigma ^{2}\left(x,z\right)}of unknown form. This type of model was proposed by Robinson (1988) and extended to handle categorical covariates by Racine and Li (2007).
This method is implemented by obtaining an{\displaystyle {\sqrt {n}}}consistent estimator ofβ{\displaystyle \beta }and then deriving an estimator ofg(Zi){\displaystyle g\left(Z_{i}\right)}from thenonparametric regressionofYi−Xi′β^{\displaystyle Y_{i}-X'_{i}{\hat {\beta }}}onz{\displaystyle z}using an appropriate nonparametric regression method.[1]
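A minimal Python sketch of this two-step ("double residual") procedure, using k-NN smoothing as the nonparametric regression method (the function name and the use of scikit-learn are illustrative choices, not part of Robinson's paper, which uses kernel regression):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def robinson_plm(X, Z, Y, k=10):
    """Double-residual estimator for Y = X'beta + g(Z) + u.

    Nonparametric smoothing is done here with k-NN regression as a
    stand-in for the kernel regression of the original method."""
    Z = np.asarray(Z).reshape(len(Y), -1)
    smooth = lambda t: KNeighborsRegressor(k).fit(Z, t).predict(Z)
    Y_res = Y - smooth(Y)                                    # Y - E[Y|Z]
    X_res = X - np.column_stack([smooth(X[:, j]) for j in range(X.shape[1])])
    beta, *_ = np.linalg.lstsq(X_res, Y_res, rcond=None)     # OLS on the residuals
    g_hat = KNeighborsRegressor(k).fit(Z, Y - X @ beta)      # estimate of g(.)
    return beta, g_hat
```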
A single index model takes the form

Y=g(X′β0)+u,{\displaystyle Y=g\left(X'\beta _{0}\right)+u,}
whereY{\displaystyle Y},X{\displaystyle X}andβ0{\displaystyle \beta _{0}}are defined as earlier and the error termu{\displaystyle u}satisfiesE(u|X)=0{\displaystyle E\left(u|X\right)=0}. The single index model takes its name from the parametric part of the modelx′β{\displaystyle x'\beta }which is ascalarsingle index. The nonparametric part is the unknown functiong(⋅){\displaystyle g\left(\cdot \right)}.
The single index model method developed by Ichimura (1993) is as follows. Consider the situation in which y{\displaystyle y} is continuous. Given a known form for the function g(⋅){\displaystyle g\left(\cdot \right)}, β0{\displaystyle \beta _{0}} could be estimated using the nonlinear least squares method to minimize the function

∑i=1n(Yi−g(Xi′β))2.{\displaystyle \sum _{i=1}^{n}\left(Y_{i}-g\left(X'_{i}\beta \right)\right)^{2}.}
Since the functional form of g(⋅){\displaystyle g\left(\cdot \right)} is not known, it must be estimated. For a given value of β{\displaystyle \beta }, an estimate of the function G(Xi′β)=E(Yi|Xi′β){\displaystyle G\left(X'_{i}\beta \right)=E\left(Y_{i}|X'_{i}\beta \right)} can be obtained using a kernel method. Ichimura (1993) proposes estimating g(Xi′β){\displaystyle g\left(X'_{i}\beta \right)} with g^−i(Xi′β){\displaystyle {\hat {g}}_{-i}\left(X'_{i}\beta \right)}, the leave-one-out nonparametric kernel estimator of G(Xi′β){\displaystyle G\left(X'_{i}\beta \right)}.
If the dependent variable y{\displaystyle y} is binary and Xi{\displaystyle X_{i}} and ui{\displaystyle u_{i}} are assumed to be independent, Klein and Spady (1993) propose a technique for estimating β{\displaystyle \beta } using maximum likelihood methods. The log-likelihood function is given by

L(β)=∑i(1−yi)ln⁡(1−g^−i(Xi′β))+∑iyiln⁡(g^−i(Xi′β)),{\displaystyle L(\beta )=\sum _{i}\left(1-y_{i}\right)\ln \left(1-{\hat {g}}_{-i}\left(X'_{i}\beta \right)\right)+\sum _{i}y_{i}\ln \left({\hat {g}}_{-i}\left(X'_{i}\beta \right)\right),}
whereg^−i(Xi′β){\displaystyle {\hat {g}}_{-i}\left(X'_{i}\beta \right)}is theleave-one-outestimator.
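Both Ichimura's and Klein and Spady's criteria rely on such a leave-one-out kernel estimate of the regression function along the index. A minimal NumPy sketch with a Gaussian kernel (the bandwidth choice, an essential practical step, is omitted):

```python
import numpy as np

def loo_kernel_estimate(index, Y, h):
    """Leave-one-out Nadaraya-Watson estimate of E[Y | X'beta] evaluated at
    each index value X_i'beta, with a Gaussian kernel of bandwidth h."""
    K = np.exp(-0.5 * ((index[:, None] - index[None, :]) / h) ** 2)
    np.fill_diagonal(K, 0.0)   # leave observation i out of its own estimate
    return (K @ Y) / K.sum(axis=1)
```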
Hastie and Tibshirani (1993) propose a smooth coefficient model given by

Yi=α(Zi)+Xi′β(Zi)+ui=Wi′γ(Zi)+ui,{\displaystyle Y_{i}=\alpha \left(Z_{i}\right)+X'_{i}\beta \left(Z_{i}\right)+u_{i}=W'_{i}\gamma \left(Z_{i}\right)+u_{i},}
whereXi{\displaystyle X_{i}}is ak×1{\displaystyle k\times 1}vector andβ(z){\displaystyle \beta \left(z\right)}is a vector of unspecified smooth functions ofz{\displaystyle z}.
Writing Wi=(1,Xi′)′{\displaystyle W_{i}=\left(1,X'_{i}\right)'} and γ(z)=(α(z),β(z)′)′{\displaystyle \gamma \left(z\right)=\left(\alpha \left(z\right),\beta \left(z\right)'\right)'}, the coefficient function γ(⋅){\displaystyle \gamma \left(\cdot \right)} may be expressed as

γ(z)=(E[WiWi′|Zi=z])−1E[WiYi|Zi=z].{\displaystyle \gamma \left(z\right)=\left(E\left[W_{i}W'_{i}|Z_{i}=z\right]\right)^{-1}E\left[W_{i}Y_{i}|Z_{i}=z\right].}
|
https://en.wikipedia.org/wiki/Semiparametric_regression
|
Trademark stuffing is a form of keyword stuffing, an unethical search engine optimization method used by webmasters and Internet marketers in order to manipulate search engine ranking results served by websites such as Google, Yahoo! and Microsoft Bing. A key characteristic of trademark stuffing is the intent of the infringer to confuse search engines and Internet users into thinking a website or web page is owned or otherwise authorized by the trademark owner. Trademark stuffing does not include using trademarks on third-party website pages within the boundaries of fair use. When used effectively, trademark stuffing enables infringing websites to capture search engine traffic that may otherwise have been received by an authorized website or the trademark owner.
Trademark stuffing is most often used to manipulate organic search engine rankings; however, it can also be used with other forms of search engine marketing, such as within the text of pay-per-click advertisements. Using another's trademark or service mark as a keyword without permission is ill-advised, could constitute trademark infringement and could result in other claims.
Trademark stuffing may be accomplished by placing trademarked text within areas of a web page such as the page title, meta keywords and meta description tags, headings, visible body copy, image alt attributes, and hidden text.
By extension, another form of keyword stuffing involves placing trademarks within theanchor textof third party websites, then pointing the website address within the linked text back to an infringing website. An anchor link signals to Internet users that the link points to a website address relating to the trademark. Additionally, search engines are widely known to use anchor text linking data within their search engine ranking algorithms. Thus, trademark-stuffed anchor links signal relationship information to the search engines, thereby increasing the chance that an infringing website could achieve higher organic search rankings for a trademarkkeywordphrase.
|
https://en.wikipedia.org/wiki/Trademark_stuffing
|
Method engineering in the field of information systems is "the discipline to construct new methods from existing methods".[2] It focuses on "the design, construction and evaluation of methods, techniques and support tools for information systems development".[3]
Furthermore, method engineering "wants to improve the usefulness ofsystems development methodsby creating an adaptation framework whereby methods are created to match specific organisational situations".[4]
The meta-process modeling process is often supported through software tools, called computer aided method engineering (CAME) tools or MetaCASE tools (meta-level computer-assisted software engineering tools). Often the instantiation technique "has been utilised to build the repository of Computer Aided Method Engineering environments".[5] There are many tools for meta-process modeling.[6][7][8][9][10]
In the literature, different terms refer to the notion of method adaptation, including 'method tailoring', 'method fragment adaptation' and 'situational method engineering'. Method tailoring is defined as:
A process or capability in which human agents through responsive changes in, and dynamic interplays between contexts, intentions, and method fragments determine a system development approach for a specific project situation.[11]
Potentially, almost all agile methods are suitable for method tailoring. Even theDSDMmethod is being used for this purpose and has been successfully tailored in aCMMcontext.[12]Situation-appropriateness can be considered as a distinguishing characteristic between agile methods and traditional software development methods, with the latter being relatively much more rigid and prescriptive. The practical implication is that agile methods allow project teams to adapt workingpracticesaccording to the needs of individual projects. Practices are concrete activities and products that are part of a method framework. At a more extreme level, the philosophy behind the method, consisting of a number ofprinciples, could be adapted.[11]
Situational method engineering is the construction of methods which are tuned to specific situations of development projects.[13] It can be described as the creation of a new method by selecting appropriate method fragments from a repository of reusable method fragments, tailoring these fragments as appropriate, and integrating the tailored fragments into the new situational method.
This enables the creation of development methods suitable for any development situation. Each system development starts then, with a method definition phase where the development method is constructed on the spot.[4]
In case of mobile business development, there are methods available for specific parts of thebusiness modeldesign process and ICT development. Situational method engineering can be used to combine these methods into one unified method that adopts the characteristics of mobile ICT services.
The developers of theIDEFmodeling languages, Richard J. Mayer et al. (1995), have developed an early approach to method engineering from studying common method engineering practice and experience in developing other analysis anddesign methods. The following figure provides a process-oriented view of this approach. This image uses theIDEF3Process Description Capture method to describe this process where boxes with verb phrases represent activities, arrows represent precedence relationships, and "exclusive or" conditions among possible paths are represented by the junction boxes labeled with an "X.".[1]
According to this approach there are three basic strategies in method engineering.[1] These basic strategies can be developed in a similar process of concept development.
Aknowledge engineeringapproach is the predominant mechanism for method enhancement and new method development. In other words, with very few exceptions, method development involves isolating, documenting, and packaging existing practice for a given task in a form that promotes reliable success among practitioners. Expert attunements are first characterized in the form of basic intuitions and method concepts. These are often initially identified through analysis of the techniques, diagrams, and expressions used by experts. These discoveries aid in the search for existing methods that can be leveraged to support novice practitioners in acquiring the same attunements and skills.[1]
New method development is accomplished by establishing the scope of the method, refining characterizations of the method concepts and intuitions, designing a procedure that provides both task accomplishment and basic apprenticeship support to novice practitioners, and developing a language(s) of expression. Method application techniques are then developed outlining guidelines for use in a stand-alone mode and in concert with other methods. Each element of the method then undergoes iterative refinement through both laboratory and field testing.[1]
The method language design process is highly iterative and experimental in nature. Unlike procedure development, where a set of heuristics and techniques from existing practice can be identified, merged, and refined, language designers rarely encounter well-developed graphical display or textual information capture mechanisms. When potentially reusable language structures can be found, they are often poorly defined or only partially suited to the needs of the method.[1]
A critical factor in the design of a method language is clearly establishing the purpose and scope of the method. The purpose of the method establishes the needs the method must address. This is used to determine the expressive power required of the supporting language. The scope of the method establishes the range and depth of coverage which must also be established before one can design an appropriate language design strategy. Scope determination also involves deciding what cognitive activities will be supported through method application. For example, language design can be confined to only display the final results of method application (as in providing IDEF9 with graphical and textual language facilities that capture the logic and structure of constraints). Alternatively, there may be a need for in-process language support facilitating information collection and analysis. In those situations, specific language constructs may be designed to help method practitioners organize, classify, and represent information that will later be synthesized into additional representation structures intended for display.[1]
With this foundation, language designers begin the process of deciding what needs to be expressed in the language and how it should be expressed. Language design can begin by developing a textual language capable of representing the full range of information to be addressed. Graphical language structures designed to display select portions of the textual language can then be developed. Alternatively, graphical language structures may evolve prior to, or in parallel with, the development of the textual language. The sequence of these activities largely depends on the degree of understanding of the language requirements held among language developers. These may become clear only after several iterations of both graphical and textual language design.[1]
Graphical language design begins by identifying a preliminary set of schematics and the purpose or goals of each in terms of where and how they will support the method application process. The central item of focus is determined for each schematic. For example, in experimenting with alternative graphical language designs for IDEF9, a Context Schematic was envisioned as a mechanism to classify the varying environmental contexts in which constraints may apply. The central focus of this schematic was the context. After deciding on the central focus for the schematic, additional information (concepts and relations) that should be captured or conveyed is identified.[1]
Up to this point in the language design process, the primary focus has been on the information that should be displayed in a given schematic to achieve the goals of the schematic. This is where the language designer must determine which items identified for possible inclusion in the schematic are amenable to graphical representation and will serve to keep the user focused on the desired information content. With this general understanding, previously developed graphical language structures are explored to identify potential reuse opportunities. While exploring candidate graphical language designs for emerging IDEF methods, a wide range of diagrams were identified and explored. Quite often, even some of the central concepts of a method will have no graphical language element in the method.[1]
For example, the IDEF1 Information Modeling method includes the notion of an entity but has no syntactic element for an entity in the graphical language. When the language designer decides that a syntactic element should be included for a method concept, candidate symbols are designed and evaluated. Throughout the graphical language design process, the language designer applies a number of guiding principles to assist in developing high-quality designs. Among these, the language designer avoids overlapping concept classes or poorly defined ones. They also seek to establish intuitive mechanisms to convey the direction for reading the schematics.[1]
For example, schematics may be designed to be read from left to right, in a bottom-up fashion, or center-out. The potential for clutter or overwhelmingly large amounts of information on a single schematic is also considered as either condition makes reading and understanding the schematic extremely difficult.[1]
Each candidate design is then tested by developing a wide range of examples to explore the utility of the designs relative to the purpose for each schematic. Initial attempts at method development, and the development of supporting language structures in particular, are usually complicated. With successive iterations on the design, unnecessary and complex language structures are eliminated.[1]
As the graphical language design approaches a level of maturity, attention turns to the textual language. The purposes served by textual languages range from providing a mechanism for expressing information that has explicitly been left out of the graphical language to providing a mechanism for standard data exchange and automated model interpretation. Thus, the textual language supporting the method may be simple and unstructured (in terms of computer interpretability), or it may emerge as a highly structured, and complex language. The purpose of the method largely determines what level of structure will be required of the textual language.[1]
As the method language begins to approach maturity, mathematical formalization techniques are employed so the emerging language has clear syntax and semantics. The method formalization process often helps uncover ambiguities, identify awkward language structures, and streamline the language.[1]
These general activities culminate in a language that helps focus user attention on the information that needs to be discovered, analyzed, transformed, or communicated in the course of accomplishing the task for which the method was designed. Both the procedure and language components of the method also help users develop the necessary skills and attunements required to achieve consistently high quality results for the targeted task.[1]
Once the method has been developed, application techniques will be designed to successfully apply the method in stand-alone mode as well as together with other methods. Application techniques constitute the "use" component of the method which continues to evolve and grow throughout the life of the method. The method procedure, language constructs, and application techniques are reviewed and tested to iteratively refine the method.[1]
This article incorporates text fromUS Air Force,Information Integration for Concurrent Engineering (IICE) Compendium of methods reportbyRichard J. Mayeret al., 1995, a publication now in the public domain.
|
https://en.wikipedia.org/wiki/Method_engineering
|
Theparallelization contractorPACTprogramming model is a generalization of theMapReduceprogramming modeland usessecond order functionsto perform concurrent computations on large (Petabytes) data sets in parallel.
Similar to MapReduce, arbitrary user code is handed to and executed by PACTs. However, PACT generalizes several of MapReduce's concepts: it offers additional second-order functions (Input Contracts) beyond map and reduce, optional annotations that expose the behavior of the user code to the optimizer, and data flows that may be arbitrarily complex rather than a fixed map-then-reduce pipeline.
Apache Flink, an open-source parallel data processing platform, has implemented PACTs. Flink allows users to specify user functions with annotations.
Parallelization Contracts (PACTs) are data processing operators in a data flow. Therefore, a PACT has one or more data inputs and one or more outputs. A PACT consists of two components: an Input Contract (a second-order function that determines how the input data is split into independently processable subsets) and the user code (a first-order function that is called on each of those subsets).
The figure below shows how those components work together. Input Contracts split the input data into independently processable subsets. The user code is called for each of these independent subsets. All calls can be executed in parallel, because the subsets are independent.
Optionally, the user code can be annotated with additional information. These annotations disclose some information on the behavior of the black-box user function. ThePACT Compilercan utilize the information to obtain more efficient execution plans. However, while a missing annotation will not change the result of the execution, an incorrect Output Contract produces wrong results.
The currently supported Input Contracts and annotation are presented and discussed in the following.
Input Contracts split the input data of a PACT into independently processable subsets that are handed to the user function of the PACT.
Input Contracts vary in the number of data inputs and in the way independent subsets are generated.
More formally, Input Contracts are second-order functions with a first-order function (the user code), one or more input sets, and none or more key fields per input as parameters. The first-order function is called (one or) multiple times with subsets of the input set(s). Since the first-order functions have no side effects, each call is independent from each other and all calls can be done in parallel.
The second-order functionsmap()andreduce()of the MapReduce programming model are Input Contracts in the context of the PACT programming model.
The Map Input Contract works in the same way as in MapReduce. It has a single input and assigns each input record to its own subset. Hence, all records are processed independently from each other.
The Reduce Input Contract has the same semantics as the reduce function in MapReduce. It has a single input and groups together all records that have identical key fields. Each of these groups is handed as a whole to the user code and processed by it (see figure below). The PACT Programming Model does also support optional Combiners, e.g. for partial aggregations.
The Cross Input Contract works on two inputs. It builds the Cartesian product of the records of both inputs. Each element of the Cartesian product (pair of records) is handed to the user code.
The Match Input Contract works on two inputs. It matches pairs of records, one from each input, that are identical on their key fields. Hence, it resembles an equality join where the key fields of both inputs are the attributes to join on. Each matched pair of records is handed to the user code.
The CoGroup Input Contract works on two inputs as well. It can be seen as a Reduce on two inputs. On each input, the records are grouped by key (such as Reduce does) and handed to the user code. In contrast to Match, the user code is also called for a key if only one input has a pair with it.
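The semantics of the three binary Input Contracts can be illustrated with a small sequential Python sketch, where plain lists stand in for the parallel data sets and key is a user-supplied key-extraction function (this only shows which subsets are formed, not the parallel execution):

```python
from collections import defaultdict
from itertools import product

def cross(left, right):
    """Cross: every pair of the Cartesian product is handed to the user code."""
    return list(product(left, right))

def match(left, right, key):
    """Match: pairs of records, one from each input, equal on their key fields."""
    by_key = defaultdict(list)
    for r in right:
        by_key[key(r)].append(r)
    return [(l, r) for l in left for r in by_key[key(l)]]

def cogroup(left, right, key):
    """CoGroup: both inputs grouped by key; a key present in only one input
    still produces a group (unlike Match)."""
    groups = defaultdict(lambda: ([], []))
    for l in left:
        groups[key(l)][0].append(l)
    for r in right:
        groups[key(r)][1].append(r)
    return dict(groups)
```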
In contrast to MapReduce, PACT uses a more generic data model of records (Pact Record) to pass data between functions. The Pact Record can be thought of as a tuple with a free schema. The interpretation of the fields of a record is up to the user function. A Key/Value pair (as in MapReduce) is a special case of that record with only two fields (the key and the value).
For Input Contracts that operate on keys (like Reduce, Match, or CoGroup), one specifies which combination of the record's fields makes up the key. An arbitrary combination of fields may be used. See the Query Example for how programs that define Reduce and Match contracts on one or more fields can be written so as to move data between fields minimally.
The record may be sparsely filled, i.e. it may have fields that have null values. It is legal to produce a record where, for example, only fields 2 and 5 are set; fields 1, 3, and 4 are interpreted to be null. Fields that are used by a contract as key fields may, however, not be null, or an exception is raised.
User code annotations are optional in the PACT programming model. They allow the developer to make certain behaviors of the user code explicit to the optimizer. The PACT optimizer can utilize that information to obtain more efficient execution plans. However, omitting a valid annotation will not impact the correctness of the result, while an invalidly specified annotation may cause the computation of wrong results. In the following, we list the current set of available Output Contracts.
TheConstant Fieldsannotation marks fields that are not modified by the user code function. Note that for every input record a constant field may not change its content and position in any output record! In case of binary second-order functions such as Cross, Match, and CoGroup, the user can specify one annotation per input.
TheConstant Fields Exceptannotation is inverse to the Constant Fields annotation. It annotates all fields which might be modified by the annotated user-function, hence the optimizer considers any not annotated field as constant. This annotation should be used very carefully! Again, for binary second-order functions (Cross, Match, CoGroup), one annotation per input can be defined. Note that either the Constant Fields or the Constant Fields Except annotation may be used for an input.
PACT programs are constructed as data flow graphs that consist of data sources, PACTs, and data sinks. One or more data sources read files that contain the input data and generate records from those files. Those records are processed by one or more PACTs, each consisting of an Input Contract, user code, and optional code annotations. Finally, the results are written back to output files by one or more data sinks. In contrast to the MapReduce programming model, a PACT program can be arbitrary complex and has no fixed structure.
The figure below shows a PACT program with two data sources, four PACTs, and one data sink. Each data source reads data from a specified location in the file system. Both sources forward the data to respective PACTs with Map Input Contracts. The user code is not shown in the figure. The output of both Map PACTs streams into a PACT with a Match Input Contract. The last PACT has a Reduce Input Contract and forwards its result to the data sink.
For a more detailed comparison of the MapReduce and PACT programming models, see the paper "MapReduce and PACT – Comparing Data Parallel Programming Models".
|
https://en.wikipedia.org/wiki/Parallelization_contract
|
Rhyming slangis a form of slang word construction in theEnglish language. It is especially prevalent amongCockneysin England, and was first used in the early 19th century in theEast End of London; hence its alternative name,Cockney rhyming slang.[2][3]In the US, especially thecriminal underworldof theWest Coastbetween 1880 and 1920, rhyming slang has sometimes been known asAustralian slang.[4][5][6]
The construction of rhyming slang involves replacing a common word with a phrase of two or more words, the last of which rhymes with the original word; then, in almost all cases, omitting, from the end of the phrase, the secondary rhyming word (which is thereafter implied),[7][8]making the origin and meaning of the phrase elusive to listeners not in the know.[9]
The form ofCockneyslang is made clear with the following example. The rhyming phrase "apples and pears" is used to mean "stairs". Following the pattern of omission, "and pears" is dropped, thus the spoken phrase "I'm going up the apples" means "I'm going up the stairs".[10]
The following are further common examples of these phrases: "dog and bone" (phone), "trouble and strife" (wife), "plates of meat" (feet), "loaf of bread" (head), and "whistle and flute" (suit).[10][11][12]
In some examples the meaning is further obscured by adding a second iteration of rhyme and truncation to the original rhymed phrase. For example, the word "Aris" is often used to indicate the buttocks. This is the result of a double rhyme, starting with the original rough synonym "arse", which is rhymed with "bottle and glass", leading to "bottle". "Bottle" was then rhymed with "Aristotle" and truncated to "Aris". "Aris" was then rhymed with "plaster of Paris" and truncated to "plaster".[14]
Ghil'ad Zuckermann, alinguistandrevivalist, has proposed a distinction between rhyming slang based on sound only, and phono-semantic rhyming slang, which includes a semantic link between the slang expression and itsreferent(the thing it refers to).[15]: 29An example of rhyming slang based only on sound is the Cockney "tea leaf" (thief).[15]: 29An example ofphono-semanticrhyming slang is the Cockney "sorrowful tale" ((three months in) jail),[15]: 30in which case the person coining the slang term sees a semantic link, sometimes jocular, between the Cockney expression and its referent.[15]: 30
The use of rhyming slang has spread beyond the purely dialectal and some examples are to be found in the mainstream British English lexicon, although many users may be unaware of the origin of those words.[10]
Most of the words changed by this process are nouns, but a few are adjectival, e.g., "bales" of cotton (rotten), or the adjectival phrase "on one's tod" for "on one's own", afterTod Sloan, a famous jockey.[2][18]
Rhyming slang is believed to have originated in the mid-19th century in theEast Endof London, with several sources suggesting some time in the 1840s.[19]: 12[20][21]The Flash Dictionary, of unknown authorship, published in 1821 by Smeeton (48mo), contains a few rhymes.[22]: 3John Camden Hotten's 1859Dictionary of Modern Slang, Cant, and Vulgar Wordslikewise states that it originated in the 1840s ("about twelve or fifteen years ago"), but with "chaunters" and "patterers" in theSeven Dialsarea of London.[20]Hotten'sDictionaryincluded the first known "Glossary of the Rhyming Slang", which included later mainstays such as "frog and toad" (the main road) and "apples and pears" (stairs), as well as many more obscure examples, e.g. "Battle of the Nile" (a tile, a common term for a hat), "Duke of York" (take a walk), and "Top of Rome" (home).[20][23][22]
It remains a matter of speculation exactly how rhyming slang originated, for example, as a linguistic game among friends or as acryptolectdeveloped intentionally to confuse non-locals. If deliberate, it may also have been used to maintain a sense of community, or to allow traders to talk amongst themselves in marketplaces to facilitatecollusion, without customers knowing what they were saying, or by criminals to confuse the police (seethieves' cant).
The academic, lexicographer and radio personalityTerence Dolanhas suggested that rhyming slang was invented by Irish immigrants to London "so the actual English wouldn't understand what they were talking about."[24]
Many examples of rhyming slang are based on locations in London, such as "Peckham Rye", meaning "tie",[25]: 265which dates from the late nineteenth century; "Hampstead Heath", meaning "teeth"[25]: 264(usually as "Hampsteads"), which was first recorded in 1887; and "barnet" (Barnet Fair), meaning "hair",[25]: 231which dates from the 1850s.
In the 20th century, rhyming slang began to be based on the names of celebrities —Gregory Peck(neck;cheque),[25]: 74Ruby Murray[as Ruby] (curry),[25]:159Alan Whicker[as "Alan Whickers"] (knickers),[25]: 3Puff Daddy(caddy),[25]: 147Max Miller(pillow[pronounced/ˈpilə/]),[citation needed]Meryl Streep(cheap),[25]: 119Nat King Cole("dole"),[25]: 221Britney Spears(beers,tears),[25]: 27Henry Halls(balls)[25]: 82— and after pop culture references —Captain Kirk(work),[25]: 33Pop Goes the Weasel(diesel),[25]: 146Mona Lisa(pizza),[25]: 122Mickey Mouse(Scouse),[25]: 120Wallace and Gromit(vomit),[25]: 195Brady Bunch(lunch),[25]: 25Bugs Bunny(money),[25]: 29Scooby-Doo(clue),[25]: 164Winnie the Pooh(shoe),[25]: 199andSchindler's List(pissed).[25]: 163–164Some words have numerous definitions, such as dead (Father Ted, "gone to bed",brown bread),[25]: 220door(Roger Moore,Andrea Corr,George Bernard Shaw,Rory O'Moore),[25]: 221cocaine(Kurt Cobain; [as "Charlie"]Bob Marley,Boutros Boutros-Ghali,Gianluca Vialli,oatsandbarley; [as "line"]Patsy Cline; [as "powder"]Niki Lauda),[25]: 218flares("Lionel Blairs", "Tony Blairs", "Rupert Bears", "Dan Dares"),[25]: 225etc.
Many examples have passed into common usage. Some substitutions have become relatively widespread in England in their contracted form. "To have a butcher's", meaning to have a look, originates from "butcher's hook", an S-shaped hook used by butchers to hang up meat, and dates from the late nineteenth century but has existed independently in general use from around the 1930s simply as "butchers".[25]:30Similarly, "use your loaf", meaning "use your head", derives from "loaf of bread" and also dates from the late nineteenth century but came into independent use in the 1930s.[9]
Conversely usages have lapsed, or been usurped ("Hounslow Heath" for teeth, was replaced by "Hampsteads" from the heath of the same name, startingc.1887).[26]
In some cases,false etymologiesexist. For example, the term "barney" has been used to mean an altercation or fight since the late nineteenth century, although without a clear derivation.[27]In the 2001 feature filmOcean's Eleven, the explanation for the term is that it derives fromBarney Rubble,[28]the name of a cartoon character from theFlintstonestelevision program many decades later in origin.[25]:14[27]
Rhyming slang is used mainly in London in England but can, to some degree, be understood across the country. Some constructions, however, rely on particular regional accents for the rhymes to work. For instance, the term "Charing Cross" (a place in London), used to mean "horse" since the mid-nineteenth century,[9]does not work for a speaker without thelot–cloth split, common in London at that time but not nowadays. A similar example is "Joanna" meaning "piano", which is based on the pronunciation of "piano" as "pianna"/piˈænə/. Unique formations also exist in other parts of the United Kingdom, such as in theEast Midlands, where the local accent has formed "Derby Road", which rhymes with "cold".
Outside England, rhyming slang is used in many English-speaking countries in theCommonwealth of Nations, with local variations. For example, in Australian slang, the term for an English person is "pommy", which has been proposed as a rhyme on "pomegranate", pronounced "Pummy Grant", which rhymed with "immigrant".[29][30]
Rhyming slang is continually evolving, and new phrases are introduced all the time; new personalities replace old ones—pop culture introduces new words—as in "I haven't a Scooby" (fromScooby Doo, the eponymous cartoon dog of thecartoon series) meaning "I haven't a clue".[31]
Rhyming slang is often used as a substitute for words regarded as taboo, often to the extent that the association with the taboo word becomes unknown over time. "Berk" (often used to mean "foolish person") originates from the most famous of allfox hunts, the "Berkeley Hunt" meaning "cunt"; "cobblers" (often used in the context "what you said is rubbish") originates from "cobbler's awls", meaning "balls" (as in testicles); and "hampton" (usually "'ampton") meaning "prick" (as in penis) originates from "Hampton Wick" (a place in London) – the second part "wick" also entered common usage as "he gets on my wick" (he is an annoying person).[22]: 74
Lesser taboo terms include "pony and trap" for "crap" (as in defecate, but often used to denote nonsense or low quality); to blow araspberry(rude sound of derision) from raspberry tart for "fart"; "D'Oyly Carte" (an opera company) for "fart"; "Jimmy Riddle" (an American country musician) for "piddle" (as inurinate), "J. Arthur Rank" (a film mogul), "Sherman tank", "Jodrell Bank" or "ham shank" for "wank", "Bristol Cities" (contracted to 'Bristols') for "titties", etc. "Taking the Mick" or "taking the Mickey" is thought to be a rhyming slang form of "taking the piss", where "Mick" came from "Mickey Bliss".[32]
In December 2004Joe Pasquale, winner of the fourth series ofITV'sI'm a Celebrity... Get Me Out of Here!, became well known for his frequent use of the term "Jacobs", forJacob'sCream Crackers, a rhyming slang term for knackers i.e.testicles.
Rhyming slang has been widely used in popular culture including film, television, music, literature, sport and degree classification.
In theBritish undergraduate degree classificationsystem a first class honours degree is known as a "Geoff Hurst" (First) after the English 1966 World Cup footballer. An upper second class degree (a.k.a. a "2:1") is called an "Attila the Hun", and a lower second class ("2:2") a "Desmond Tutu", while a third class degree is known as a "Thora Hird" or "Douglas Hurd".[33]
Cary Grant's character teaches rhyming slang to his female companion inMr. Lucky(1943), describing it as 'Australian rhyming slang'. Rhyming slang is also used and described in a scene of the 1967 filmTo Sir, with LovestarringSidney Poitier, where the English students tell their foreign teacher that the slang is a drag and something for old people.[34]The closing song of the 1969 crime caper,The Italian Job, ("Getta Bloomin' Move On" a.k.a. "The Self Preservation Society") contains many slang terms.
Rhyming slang has been used to lend authenticity to an East End setting. Examples includeLock, Stock and Two Smoking Barrels(1998) (wherein the slang is translated via subtitles in one scene);The Limey(1999);Sexy Beast(2000);Snatch(2000);Ocean's Eleven(2001); andAustin Powers in Goldmember(2002);It's All Gone Pete Tong(2004), after BBC radio disc jockeyPete Tongwhose name is used in this context as rhyming slang for "wrong";Green Street Hooligans(2005). InMargin Call(2011), Will Emerson, played by London-born actorPaul Bettany, asks a friend on the telephone, "How's the trouble and strife?" ("wife").
Cockneys vs Zombies(2012) mocked the genesis of rhyming slang terms when a Cockney character calls zombies "Trafalgars" to even his Cockney fellows' puzzlement; he then explains it thus: "Trafalgar square – fox and hare – hairy Greek – five day week – weak and feeble – pins and needles – needle and stitch – Abercrombie and Fitch – Abercrombie: zombie".
The live-actionDisneyfilmMary Poppins Returnssong "Trip A Little Light Fantastic" involves Cockney rhyming slang in part of its lyrics, and is primarily spoken by the London lamplighters.
In the animated superhero filmSpider-Man: Across the Spider-Verse(2023), characterSpider-Punk, aCamdennative, is heard saying: "I haven't got ascooby" ("clue").[35]
Slang had a resurgence of popular interest in Britain beginning in the 1970s, resulting from its use in a number of London-based television programmes such asSteptoe and Son(1970–74); andNot On Your Nellie(1974–75), starringHylda Bakeras Nellie Pickersgill, alludes to the phrase "not on your Nellie Duff", rhyming slang for "not on your puff" i.e. not on your life. Similarly,The Sweeney(1975–78) alludes to the phrase "Sweeney Todd" for "Flying Squad", a rapid response unit of London's Metropolitan Police. InThe Fall and Rise of Reginald Perrin(1976–79), a comic twist was added to rhyming slang by way of spurious and fabricated examples which a young man had laboriously attempted to explain to his father (e.g. 'dustbins' meaning 'children', as in 'dustbin lids'='kids'; 'Teds' being 'Ted Heath' and thus 'teeth'; and even 'Chitty Chitty' being 'Chitty Chitty Bang Bang', and thus 'rhyming slang'...). It was also featured in an episode ofThe Good Lifein the first season (1975) where Tom and Barbara purchase a wood-burning range from a junk trader called Sam, who litters his language with phony rhyming slang in hopes of convincing suburban residents that he is an authentic traditional Cockney trader. He comes up with a fake story as to the origin of Cockney rhyming slang and is caught out rather quickly. InThe Jeffersonsseason 2 (1976) episode "The Breakup: Part 2",Mr. Bentleyexplains Cockney rhyming slang toGeorge Jefferson, in that "whistle and flute" means "suit", "apples and pears" means "stairs", "plates of meat" means "feet".
The use of rhyming slang was also prominent inMind Your Language(1977–79),Citizen Smith(1977–80),Minder[36][page needed](1979–94),Only Fools and Horses(1981–91), andEastEnders(1985–).Mindercould be quite uncompromising in its use of obscure forms without any clarification. Thus the non-Cockney viewer was obliged to deduce that, say, "iron" was "male homosexual" ('iron'='iron hoof'='poof'). One episode in Series 5 ofSteptoe and Sonwas entitled "Any Old Iron", for the same reason, when Albert thinks that Harold is 'on the turn'. Variations of rhyming slang were also used in sitcomBirds of a Feather, by main characters Sharon and Tracey, often to the confusion of character, Dorian Green, who was unfamiliar with the terms.
One early US show to regularly feature rhyming slang was the Saturday morning children's showThe Bugaloos(1970–72), with the character of Harmony (Wayne Laryea) often incorporating it in his dialogue.
In popular music,Spike Jonesand his City Slickers recorded "So 'Elp Me", based on rhyming slang, in 1950. The 1967Kinkssong "Harry Rag" was based on the usage of the nameHarry Wraggas rhyming slang for "fag" (i.e. acigarette). The idiom made a brief appearance in the UK-based DJ reggae music of the 1980s in the hit "Cockney Translation" bySmiley CultureofSouth London; this was followed a couple of years later by Domenick and Peter Metro's "Cockney and Yardie". London-based artists such asAudio BullysandChas & Dave(and others from elsewhere in the UK, such asThe Streets, who are from Birmingham) frequently use rhyming slang in their songs.
British-born M.C.MF Doomreleased an ode entitled "Rhymin' Slang", after settling in the UK in 2010. The track was released on the 2012JJ DoomalbumKey to the Kuffs.
Another contributor wasLonnie Doneganwho had a song called "My Old Man's a Dustman". In it he says his father has trouble putting on his boots "He's got such a job to pull them up that he calls them daisy roots".[37]
In modern literature, Cockney rhyming slang is used frequently in the novels and short stories ofKim Newman, for instance in the short story collections "The Man from the Diogenes Club" (2006) and "Secret Files of the Diogenes Club" (2007), where it is explained at the end of each book.[38]
It is also parodied inGoing PostalbyTerry Pratchett, which features a geriatric Junior Postman by the name of Tolliver Groat, a speaker of 'Dimwell Arrhythmic Rhyming Slang', the only rhyming slang on theDiscwhichdoes not actually rhyme. Thus, a wig is a 'prunes', from 'syrup of prunes', an obvious parody of the Cockneysyrupfromsyrup of figs – wig. There are numerous other parodies, though it has been pointed out that the result is even more impenetrable than a conventional rhyming slang and so may not be quite so illogical as it seems, given the assumed purpose of rhyming slang as a means of communicating in a manner unintelligible to all but the initiated.
In the bookGoodbye to All ThatbyRobert Graves, a beer is a "broken square" asWelch Fusiliersofficers walk into a pub and order broken squares when they see men from the Black Watch.The Black Watchhad a minor blemish on its record of otherwise unbroken squares. Fistfights ensued.
InDashiell Hammett'sThe Dain Curse, the protagonist exhibits familiarity with Cockney rhyming slang, referring to gambling at dice with the phrase "rats and mice."
Cockney rhyming slang is one of the main influences for the dialect spoken inA Clockwork Orange(1962).[39]The author of the novel,Anthony Burgess, also believed the phrase "as queer as a clockwork orange" was Cockney slang having heard it in a London pub in 1945, and subsequently named it in the title of his book.[40]
In Scottish football, a number of clubs have nicknames taken from rhyming slang.Partick Thistleare known as the "Harry Rags", which is taken from the rhyming slang of their 'official' nickname "the jags".Rangersare known as the "Teddy Bears", which comes from the rhyming slang for "the Gers" (shortened version of Ran-gers).Heart of Midlothianare known as the "Jambos", which comes from "Jam Tarts" which is the rhyming slang for "Hearts" which is the common abbreviation of the club's name.Hibernianare also referred to as "The Cabbage" which comes from Cabbage and Ribs being the rhyming slang for Hibs. The phrase Hampden Roar (originally describing the loud crowd noise emanating from thenational stadium) is employed as "What's the Hampden?",[41]("What's the score?",idiomfor "What's happening / what's going on?").[41][42]
Inrugby league, "meat pie" is used fortry.[43]
|
https://en.wikipedia.org/wiki/Rhyming_slang
|
Incomputer science, asuffix tree(also calledPAT treeor, in an earlier form,position tree) is a compressedtriecontaining all thesuffixesof the given text as their keys and positions in the text as their values. Suffix trees allow particularly fast implementations of many important string operations.
The construction of such a tree for the stringS{\displaystyle S}takes time and space linear in the length ofS{\displaystyle S}. Once constructed, several operations can be performed quickly, such as locating asubstringinS{\displaystyle S}, locating a substring if a certain number of mistakes are allowed, and locating matches for aregular expressionpattern. Suffix trees also provided one of the first linear-time solutions for thelongest common substring problem.[2]These speedups come at a cost: storing a string's suffix tree typically requires significantly more space than storing the string itself.
The concept was first introduced byWeiner (1973).
Rather than the suffixS[i..n]{\displaystyle S[i..n]}, Weiner stored in his trie[3]theprefix identifierfor each position, that is, the shortest string starting ati{\displaystyle i}and occurring only once inS{\displaystyle S}. HisAlgorithm Dtakes an uncompressed[4]trie forS[k+1..n]{\displaystyle S[k+1..n]}and extends it into a trie forS[k..n]{\displaystyle S[k..n]}. This way, starting from the trivial trie forS[n..n]{\displaystyle S[n..n]}, a trie forS[1..n]{\displaystyle S[1..n]}can be built byn−1{\displaystyle n-1}successive calls to Algorithm D; however, the overall run time isO(n2){\displaystyle O(n^{2})}. Weiner'sAlgorithm Bmaintains several auxiliary data structures, to achieve an overall run time linear in the size of the constructed trie. The latter can still beO(n2){\displaystyle O(n^{2})}nodes, e.g. forS=anbnanbn$.{\displaystyle S=a^{n}b^{n}a^{n}b^{n}\$.}Weiner'sAlgorithm Cfinally uses compressed tries to achieve linear overall storage size and run time.[5]Donald Knuthsubsequently characterized the latter as "Algorithm of the Year 1973" according to his studentVaughan Pratt.[original research?][6]The text bookAho, Hopcroft & Ullman (1974, Sect.9.5) reproduced Weiner's results in a simplified and more elegant form, introducing the termposition tree.
McCreight (1976)was the first to build a (compressed) trie of all suffixes ofS{\displaystyle S}. Although the suffix starting ati{\displaystyle i}is usually longer than the prefix identifier, their path representations in a compressed trie do not differ in size. On the other hand, McCreight could dispense with most of Weiner's auxiliary data structures; only suffix links remained.
Ukkonen (1995)further simplified the construction.[6]He provided the first online-construction of suffix trees, now known asUkkonen's algorithm, with running time that matched the then fastest algorithms.
These algorithms are all linear-time for a constant-size alphabet, and have worst-case running time ofO(nlogn){\displaystyle O(n\log n)}in general.
Farach (1997)gave the first suffix tree construction algorithm that is optimal for all alphabets. In particular, this is the first linear-time algorithm for strings drawn from an alphabet of integers in a polynomial range. Farach's algorithm has become the basis for new algorithms for constructing both suffix trees andsuffix arrays, for example, in external memory, compressed, succinct, etc.
The suffix tree for the stringS{\displaystyle S}of lengthn{\displaystyle n}is defined as a tree such that:[7]the tree has exactlyn{\displaystyle n}leaves numbered from 1 ton{\displaystyle n}; except for the root, every internal node has at least two children; each edge is labelled with a non-empty substring ofS{\displaystyle S}; no two edges starting out of a node can have string labels beginning with the same character; and the string obtained by concatenating all the string labels found on the path from the root to leafi{\displaystyle i}spells out the suffixS[i..n]{\displaystyle S[i..n]}, fori{\displaystyle i}from 1 ton{\displaystyle n}.
If a suffix ofS{\displaystyle S}is also the prefix of another suffix, such a tree does not exist for the string. For example, in the stringabcbc, the suffixbcis also a prefix of the suffixbcbc. In such a case, the path spelling outbcwill not end in a leaf, violating the fifth rule. To fix this problem,S{\displaystyle S}is padded with a terminal symbol not seen in the string (usually denoted$). This ensures that no suffix is a prefix of another, and that there will ben{\displaystyle n}leaf nodes, one for each of then{\displaystyle n}suffixes ofS{\displaystyle S}.[8]Since all internal non-root nodes are branching, there can be at mostn−1{\displaystyle n-1}such nodes, andn+(n−1)+1=2n{\displaystyle n+(n-1)+1=2n}nodes in total (n{\displaystyle n}leaves,n−1{\displaystyle n-1}internal non-root nodes, 1 root).
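To make the role of the terminal symbol concrete, here is a minimal Python sketch (the class name SuffixTrie and its node layout are illustrative, not a standard API) that builds an uncompressed suffix trie of S + '$' by inserting every suffix one character at a time; this is the naive quadratic construction, not one of the compressed, linear-time constructions discussed above.

```python
class SuffixTrie:
    """Naive suffix trie: every suffix of S + '$' is inserted character by character.
    Illustrative only; real suffix trees compress unary paths into single edges."""

    def __init__(self, s, terminator="$"):
        assert terminator not in s          # the sentinel must not occur in the text
        self.text = s + terminator
        self.root = {}                      # each node is a dict: character -> child node
        for i in range(len(self.text)):     # insert suffix text[i:]
            node = self.root
            for ch in self.text[i:]:
                node = node.setdefault(ch, {})
            node["leaf"] = i                # thanks to '$', every suffix ends at its own leaf

    def find(self, pattern):
        """Return True if `pattern` occurs in the text (walk down from the root)."""
        node = self.root
        for ch in pattern:
            if ch not in node:
                return False
            node = node[ch]
        return True


trie = SuffixTrie("abcbc")
print(trie.find("bcb"), trie.find("cc"))    # True False
```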
Suffix linksare a key feature for older linear-time construction algorithms, although most newer algorithms, which are based onFarach's algorithm, dispense with suffix links. In a complete suffix tree, all internal non-root nodes have a suffix link to another internal node. If the path from the root to a node spells the stringχα{\displaystyle \chi \alpha }, whereχ{\displaystyle \chi }is a single character andα{\displaystyle \alpha }is a string (possibly empty), it has a suffix link to the internal node representingα{\displaystyle \alpha }. See for example the suffix link from the node forANAto the node forNAin the figure above. Suffix links are also used in some algorithms running on the tree.
Ageneralized suffix treeis a suffix tree made for a set of strings instead of a single string. It represents all suffixes from this set of strings. Each string must be terminated by a different termination symbol.
A suffix tree for a stringS{\displaystyle S}of lengthn{\displaystyle n}can be built inΘ(n){\displaystyle \Theta (n)}time, if the letters come from an alphabet of integers in a polynomial range (in particular, this is true for constant-sized alphabets).[9]For larger alphabets, the running time is dominated by firstsortingthe letters to bring them into a range of sizeO(n){\displaystyle O(n)}; in general, this takesO(nlogn){\displaystyle O(n\log n)}time.
The costs below are given under the assumption that the alphabet is constant.
Assume that a suffix tree has been built for the stringS{\displaystyle S}of lengthn{\displaystyle n}, or that ageneralised suffix treehas been built for the set of stringsD={S1,S2,…,SK}{\displaystyle D=\{S_{1},S_{2},\dots ,S_{K}\}}of total lengthn=n1+n2+⋯+nK{\displaystyle n=n_{1}+n_{2}+\cdots +n_{K}}.
A number of string queries can then be answered efficiently: for example, one can locate a pattern of lengthm{\displaystyle m}inO(m){\displaystyle O(m)}time, find and count all of its occurrences, and compute the longest repeated substring ofS{\displaystyle S}in linear time.
The suffix tree can be prepared for constant timelowest common ancestorretrieval between nodes inΘ(n){\displaystyle \Theta (n)}time.[17]One can then also, for example, obtain the longest common prefix of any two suffixes in constant time, since it is spelled by the path to the lowest common ancestor of the corresponding leaves.
Suffix trees can be used to solve a large number of string problems that occur in text-editing, free-text search,computational biologyand other application areas.[25]Primary applications include:[25]string search (locating a pattern, or each of a set of patterns, in a text), finding the longest repeated substring, finding the longest common substring of a set of strings, and finding the longest palindromic substring.
Suffix trees are often used inbioinformaticsapplications, searching for patterns inDNAorproteinsequences (which can be viewed as long strings of characters). The ability to search efficiently with mismatches might be considered their greatest strength. Suffix trees are also used indata compression; they can be used to find repeated data, and can be used for the sorting stage of theBurrows–Wheeler transform. Variants of theLZWcompression schemes use suffix trees (LZSS). A suffix tree is also used insuffix tree clustering, adata clusteringalgorithm used in some search engines.[26]
If each node and edge can be represented inΘ(1){\displaystyle \Theta (1)}space, the entire tree can be represented inΘ(n){\displaystyle \Theta (n)}space. The total length of all the strings on all of the edges in the tree isO(n2){\displaystyle O(n^{2})}, but each edge can be stored as the position and length of a substring ofS, giving a total space usage ofΘ(n){\displaystyle \Theta (n)}computer words. The worst-case space usage of a suffix tree is seen with aFibonacci word, giving the full2n{\displaystyle 2n}nodes.
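A hedged sketch of the bookkeeping behind this Θ(n)-word bound: instead of copying substrings onto edges, each edge stores only a pair of indices into the shared text, and the label is recovered on demand. The Node layout below is hypothetical, chosen only to illustrate the idea.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    start: int = 0            # first index of the edge label leading into this node
    end: int = 0              # one past the last index of that label
    children: dict = field(default_factory=dict)   # first character -> child Node

def edge_label(text, node):
    """Recover the substring spelled by the edge into `node` only when needed."""
    return text[node.start:node.end]

text = "banana$"
# An edge spelling "ana" is stored as two integers rather than a copy of the string:
child = Node(start=1, end=4)
print(edge_label(text, child))   # "ana"
```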
An important choice when making a suffix tree implementation is how to represent the parent-child relationships between nodes. The most common is usinglinked listscalledsibling lists. Each node has a pointer to its first child, and to the next node in the child list it is a part of. Other implementations with efficient running time properties usehash maps, sorted or unsortedarrays(witharray doubling), orbalanced search trees. We are interested in the cost of finding the child of a node on a given character, the cost of inserting a child, and the cost of enumerating all children of a node.
Letσbe the size of the alphabet. The cost of each of these operations depends on which representation is chosen; the insertion costs are amortised, and the costs for hashing are given for perfect hashing.[citation needed]
The large amount of information in each edge and node makes the suffix tree very expensive, consuming about 10 to 20 times the memory size of the source text in good implementations. Thesuffix arrayreduces this requirement to a factor of 8 (for an array includingLCPvalues, built within a 32-bit address space with 8-bit characters). This factor depends on the properties of the text and may reach 2 with the use of 4-byte wide characters (needed to contain any symbol in someUNIX-likesystems, seewchar_t) on 32-bit systems.[citation needed]Researchers have continued to find smaller indexing structures.
Various parallel algorithms to speed up suffix tree construction have been proposed.[27][28][29][30][31]Recently, a practical parallel algorithm for suffix tree construction withO(n){\displaystyle O(n)}work(sequential time) andO(log2n){\displaystyle O(\log ^{2}n)}spanhas been developed. The algorithm achieves good parallel scalability on shared-memory multicore machines and can index thehuman genome– approximately 3GB– in under 3 minutes using a 40-core machine.[32]
Though linear, the memory usage of a suffix tree is significantly higher than the actual size of the sequence collection. For a large text, construction may require external memory approaches.
There are theoretical results for constructing suffix trees in external memory. The algorithm byFarach-Colton, Ferragina & Muthukrishnan (2000)is theoretically optimal, with an I/O complexity equal to that of sorting. However, the overall intricacy of this algorithm has so far prevented its practical implementation.[33]
On the other hand, there have been practical works for constructing disk-based suffix trees which scale to (a few) GB/hours. The state-of-the-art methods are TDD,[34]TRELLIS,[35]DiGeST,[36]and B2ST.[37]
TDD and TRELLIS scale up to the entire human genome, resulting in a disk-based suffix tree of a size in the tens of gigabytes.[34][35]However, these methods cannot efficiently handle collections of sequences exceeding 3 GB.[36]DiGeST performs significantly better and is able to handle collections of sequences in the order of 6 GB in about 6 hours.[36]
All of these methods can efficiently build suffix trees for the case when the tree does not fit in main memory, but the input does. The most recent method, B2ST,[37]scales to handle inputs that do not fit in main memory. ERA is a recent parallel suffix tree construction method that is significantly faster. ERA can index the entire human genome in 19 minutes on an 8-core desktop computer with 16 GB RAM. On a simple Linux cluster with 16 nodes (4 GB RAM per node), ERA can index the entire human genome in less than 9 minutes.[38]
|
https://en.wikipedia.org/wiki/Suffix_tree
|
Inmathematics, afinite fieldorGalois field(so-named in honor ofÉvariste Galois) is afieldthat contains a finite number ofelements. As with any field, a finite field is aseton which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are theintegers modp{\displaystyle p}whenp{\displaystyle p}is aprime number.
Theorderof a finite field is its number of elements, which is either a prime number or aprime power. For every prime numberp{\displaystyle p}and every positive integerk{\displaystyle k}there are fields of orderpk{\displaystyle p^{k}}. All finite fields of a given order areisomorphic.
Finite fields are fundamental in a number of areas of mathematics andcomputer science, includingnumber theory,algebraic geometry,Galois theory,finite geometry,cryptographyandcoding theory.
A finite field is a finite set that is afield; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as thefield axioms.[1]
The number of elements of a finite field is called itsorderor, sometimes, itssize. A finite field of orderq{\displaystyle q}exists if and only ifq{\displaystyle q}is aprime powerpk{\displaystyle p^{k}}(wherep{\displaystyle p}is a prime number andk{\displaystyle k}is a positive integer). In a field of orderpk{\displaystyle p^{k}}, addingp{\displaystyle p}copies of any element always results in zero; that is, thecharacteristicof the field isp{\displaystyle p}.[1]
Forq=pk{\displaystyle q=p^{k}}, all fields of orderq{\displaystyle q}areisomorphic(see§ Existence and uniquenessbelow).[2]Moreover, a field cannot contain two different finitesubfieldswith the same order. One may therefore identify all finite fields with the same order, and they are unambiguously denotedFq{\displaystyle \mathbb {F} _{q}},Fq{\displaystyle \mathbf {F} _{q}}orGF(q){\displaystyle \mathrm {GF} (q)}, where the letters GF stand for "Galois field".[3]
In a finite field of orderq{\displaystyle q}, thepolynomialXq−X{\displaystyle X^{q}-X}has allq{\displaystyle q}elements of the finite field asroots.[citation needed]The non-zero elements of a finite field form amultiplicative group. This group iscyclic, so all non-zero elements can be expressed as powers of a single element called aprimitive elementof the field. (In general there will be several primitive elements for a given field.)[1]
The simplest examples of finite fields are the fields of prime order: for eachprime numberp{\displaystyle p}, theprime fieldof orderp{\displaystyle p}may be constructed as theintegers modulop{\displaystyle p},Z/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }.[1]
The elements of the prime field of orderp{\displaystyle p}may be represented by integers in the range0,…,p−1{\displaystyle 0,\ldots ,p-1}. The sum, the difference and the product are theremainder of the divisionbyp{\displaystyle p}of the result of the corresponding integer operation.[1]The multiplicative inverse of an element may be computed by using the extended Euclidean algorithm (seeExtended Euclidean algorithm § Modular integers).[citation needed]
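As a small illustration, and assuming nothing beyond the description above, the following Python sketch performs arithmetic in a prime field (p = 13 chosen arbitrarily) and computes a multiplicative inverse with the extended Euclidean algorithm.

```python
def egcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def inv_mod(a, p):
    """Multiplicative inverse of a in GF(p); a must be non-zero modulo the prime p."""
    g, x, _ = egcd(a % p, p)
    assert g == 1, "a is not invertible"
    return x % p

p = 13
a, b = 7, 11
print((a + b) % p, (a - b) % p, (a * b) % p)   # 5 9 12: sum, difference, product in GF(13)
print(inv_mod(a, p), (a * inv_mod(a, p)) % p)  # 2 1   (since 7 * 2 = 14 ≡ 1 mod 13)
```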
LetF{\displaystyle F}be a finite field. For any elementx{\displaystyle x}inF{\displaystyle F}and anyintegern{\displaystyle n}, denote byn⋅x{\displaystyle n\cdot x}the sum ofn{\displaystyle n}copies ofx{\displaystyle x}. The least positiven{\displaystyle n}such thatn⋅1=0{\displaystyle n\cdot 1=0}is the characteristicp{\displaystyle p}of the field.[1]This allows defining a multiplication(k,x)↦k⋅x{\displaystyle (k,x)\mapsto k\cdot x}of an elementk{\displaystyle k}ofGF(p){\displaystyle \mathrm {GF} (p)}by an elementx{\displaystyle x}ofF{\displaystyle F}by choosing an integer representative fork{\displaystyle k}. This multiplication makesF{\displaystyle F}into aGF(p){\displaystyle \mathrm {GF} (p)}-vector space.[1]It follows that the number of elements ofF{\displaystyle F}ispn{\displaystyle p^{n}}for some integern{\displaystyle n}.[1]
Theidentity(x+y)p=xp+yp{\displaystyle (x+y)^{p}=x^{p}+y^{p}}(sometimes called thefreshman's dream) is true in a field of characteristicp{\displaystyle p}. This follows from thebinomial theorem, as eachbinomial coefficientof the expansion of(x+y)p{\displaystyle (x+y)^{p}}, except the first and the last, is a multiple ofp{\displaystyle p}.[citation needed]
ByFermat's little theorem, ifp{\displaystyle p}is a prime number andx{\displaystyle x}is in the fieldGF(p){\displaystyle \mathrm {GF} (p)}thenxp=x{\displaystyle x^{p}=x}. This implies the equalityXp−X=∏a∈GF(p)(X−a){\displaystyle X^{p}-X=\prod _{a\in \mathrm {GF} (p)}(X-a)}for polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. More generally, every element inGF(pn){\displaystyle \mathrm {GF} (p^{n})}satisfies the polynomial equationxpn−x=0{\displaystyle x^{p^{n}}-x=0}.[citation needed]
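Both identities are easy to spot-check numerically; the following sketch does so for the arbitrarily chosen prime p = 7.

```python
p = 7  # any prime

# Fermat's little theorem: x^p ≡ x (mod p) for every x in GF(p)
print(all(pow(x, p, p) == x for x in range(p)))          # True

# Freshman's dream: (x + y)^p ≡ x^p + y^p (mod p) for all x, y in GF(p)
print(all(pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
          for x in range(p) for y in range(p)))          # True
```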
Any finitefield extensionof a finite field isseparableand simple. That is, ifE{\displaystyle E}is a finite field andF{\displaystyle F}is a subfield ofE{\displaystyle E}, thenE{\displaystyle E}is obtained fromF{\displaystyle F}by adjoining a single element whoseminimal polynomialisseparable. To use a piece of jargon, finite fields areperfect.[1]
A more generalalgebraic structurethat satisfies all the other axioms of a field, but whose multiplication is not required to becommutative, is called adivision ring(or sometimesskew field). ByWedderburn's little theorem, any finite division ring is commutative, and hence is a finite field.[1]
Letq=pn{\displaystyle q=p^{n}}be aprime power, andF{\displaystyle F}be thesplitting fieldof the polynomialP=Xq−X{\displaystyle P=X^{q}-X}over the prime fieldGF(p){\displaystyle \mathrm {GF} (p)}. This means thatF{\displaystyle F}is a finite field of lowest order, in whichP{\displaystyle P}hasq{\displaystyle q}distinct roots (theformal derivativeofP{\displaystyle P}isP′=−1{\displaystyle P'=-1}, implying thatgcd(P,P′)=1{\displaystyle \mathrm {gcd} (P,P')=1}, which in general implies that the splitting field is aseparable extensionof the original). Theabove identityshows that the sum and the product of two roots ofP{\displaystyle P}are roots ofP{\displaystyle P}, as well as the multiplicative inverse of a root ofP{\displaystyle P}. In other words, the roots ofP{\displaystyle P}form a field of orderq{\displaystyle q}, which is equal toF{\displaystyle F}by the minimality of the splitting field.
The uniqueness up to isomorphism of splitting fields implies thus that all fields of orderq{\displaystyle q}are isomorphic. Also, if a fieldF{\displaystyle F}has a field of orderq=pk{\displaystyle q=p^{k}}as a subfield, its elements are theq{\displaystyle q}roots ofXq−X{\displaystyle X^{q}-X}, andF{\displaystyle F}cannot contain another subfield of orderq{\displaystyle q}.
In summary, we have the following classification theorem first proved in 1893 byE. H. Moore:[2]
The order of a finite field is a prime power. For every prime powerq{\displaystyle q}there are fields of orderq{\displaystyle q}, and they are all isomorphic. In these fields, every element satisfiesxq=x,{\displaystyle x^{q}=x,}and the polynomialXq−X{\displaystyle X^{q}-X}factors asXq−X=∏a∈F(X−a).{\displaystyle X^{q}-X=\prod _{a\in F}(X-a).}
It follows thatGF(pn){\displaystyle \mathrm {GF} (p^{n})}contains a subfield isomorphic toGF(pm){\displaystyle \mathrm {GF} (p^{m})}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}; in that case, this subfield is unique. In fact, the polynomialXpm−X{\displaystyle X^{p^{m}}-X}dividesXpn−X{\displaystyle X^{p^{n}}-X}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}.
Given a prime powerq=pn{\displaystyle q=p^{n}}withp{\displaystyle p}prime andn>1{\displaystyle n>1}, the fieldGF(q){\displaystyle \mathrm {GF} (q)}may be explicitly constructed in the following way. One first chooses anirreducible polynomialP{\displaystyle P}inGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}of degreen{\displaystyle n}(such an irreducible polynomial always exists). Then thequotient ringGF(q)=GF(p)[X]/(P){\displaystyle \mathrm {GF} (q)=\mathrm {GF} (p)[X]/(P)}of the polynomial ringGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}by the ideal generated byP{\displaystyle P}is a field of orderq{\displaystyle q}.
More explicitly, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}are the polynomials overGF(p){\displaystyle \mathrm {GF} (p)}whose degree is strictly less thann{\displaystyle n}. The addition and the subtraction are those of polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. The product of two elements is the remainder of theEuclidean divisionbyP{\displaystyle P}of the product inGF(q)[X]{\displaystyle \mathrm {GF} (q)[X]}.
The multiplicative inverse of a non-zero element may be computed with the extended Euclidean algorithm; seeExtended Euclidean algorithm § Simple algebraic field extensions.
However, with this representation, elements ofGF(q){\displaystyle \mathrm {GF} (q)}may be difficult to distinguish from the corresponding polynomials. Therefore, it is common to give a name, commonlyα{\displaystyle \alpha }, to the element ofGF(q){\displaystyle \mathrm {GF} (q)}that corresponds to the polynomialX{\displaystyle X}. So, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}become polynomials inα{\displaystyle \alpha }, whereP(α)=0{\displaystyle P(\alpha )=0}, and, when one encounters a polynomial inα{\displaystyle \alpha }of degree greater than or equal ton{\displaystyle n}(for example after a multiplication), one knows that one has to use the relationP(α)=0{\displaystyle P(\alpha )=0}to reduce its degree (this is what the Euclidean division does).
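A minimal sketch of this quotient-ring arithmetic, assuming a monic irreducible polynomial P is supplied by the caller: elements are coefficient lists of degree less than n, and multiplication is followed by a reduction that repeatedly applies the relation P(α) = 0 (equivalently, takes the remainder of the Euclidean division by P). The example instantiates GF(8) with P = X^3 + X + 1, which is the polynomial X^3 − X − 1 of the GF(8)/GF(27) section below rewritten over GF(2).

```python
def poly_mul_mod(a, b, P, p):
    """Multiply two elements of GF(p)[X]/(P): multiply as polynomials over GF(p),
    then reduce by the monic polynomial P (coefficients listed lowest degree first)."""
    n = len(P) - 1                                   # degree of P
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    # while deg(prod) >= n, cancel the leading term using P (this is Euclidean division)
    for k in range(len(prod) - 1, n - 1, -1):
        c = prod[k]
        if c:
            for j in range(len(P)):
                prod[k - n + j] = (prod[k - n + j] - c * P[j]) % p
    return prod[:n]

# GF(8) = GF(2)[X]/(X^3 + X + 1): P encoded lowest degree first as [1, 1, 0, 1]
P, p = [1, 1, 0, 1], 2
alpha = [0, 1, 0]                                    # the class of X
print(poly_mul_mod(alpha, alpha, P, p))              # alpha^2             -> [0, 0, 1]
print(poly_mul_mod([0, 0, 1], alpha, P, p))          # alpha^3 = 1 + alpha -> [1, 1, 0]
```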
Except in the construction ofGF(4){\displaystyle \mathrm {GF} (4)}, there are several possible choices forP{\displaystyle P}, which produce isomorphic results. To simplify the Euclidean division, one commonly chooses forP{\displaystyle P}a polynomial of the formXn+aX+b,{\displaystyle X^{n}+aX+b,}which makes the needed Euclidean divisions very efficient. However, for some fields, typically in characteristic2{\displaystyle 2}, irreducible polynomials of the formXn+aX+b{\displaystyle X^{n}+aX+b}may not exist. In characteristic2{\displaystyle 2}, if the polynomialXn+X+1{\displaystyle X^{n}+X+1}is reducible, it is recommended to chooseXn+Xk+1{\displaystyle X^{n}+X^{k}+1}with the lowest possiblek{\displaystyle k}that makes the polynomial irreducible. If all thesetrinomialsare reducible, one chooses "pentanomials"Xn+Xa+Xb+Xc+1{\displaystyle X^{n}+X^{a}+X^{b}+X^{c}+1}, as polynomials of degree greater than1{\displaystyle 1}, with an even number of terms, are never irreducible in characteristic2{\displaystyle 2}, having1{\displaystyle 1}as a root.[4]
A possible choice for such a polynomial is given byConway polynomials. They ensure a certain compatibility between the representation of a field and the representations of its subfields.
In the next sections, we will show how the general construction method outlined above works for small finite fields.
The smallest non-prime field is the field with four elements, which is commonly denotedGF(4){\displaystyle \mathrm {GF} (4)}orF4.{\displaystyle \mathbb {F} _{4}.}It consists of the four elements0,1,α,1+α{\displaystyle 0,1,\alpha ,1+\alpha }such thatα2=1+α{\displaystyle \alpha ^{2}=1+\alpha },1⋅α=α⋅1=α{\displaystyle 1\cdot \alpha =\alpha \cdot 1=\alpha },x+x=0{\displaystyle x+x=0}, andx⋅0=0⋅x=0{\displaystyle x\cdot 0=0\cdot x=0}, for everyx∈GF(4){\displaystyle x\in \mathrm {GF} (4)}, the other operation results being easily deduced from thedistributive law. See below for the complete operation tables.
This may be deduced as follows from the results of the preceding section.
OverGF(2){\displaystyle \mathrm {GF} (2)}, there is only oneirreducible polynomialof degree2{\displaystyle 2}:X2+X+1{\displaystyle X^{2}+X+1}Therefore, forGF(4){\displaystyle \mathrm {GF} (4)}the construction of the preceding section must involve this polynomial, andGF(4)=GF(2)[X]/(X2+X+1).{\displaystyle \mathrm {GF} (4)=\mathrm {GF} (2)[X]/(X^{2}+X+1).}Letα{\displaystyle \alpha }denote a root of this polynomial inGF(4){\displaystyle \mathrm {GF} (4)}. This implies thatα2=1+α,{\displaystyle \alpha ^{2}=1+\alpha ,}and thatα{\displaystyle \alpha }and1+α{\displaystyle 1+\alpha }are the elements ofGF(4){\displaystyle \mathrm {GF} (4)}that are not inGF(2){\displaystyle \mathrm {GF} (2)}. The tables of the operations inGF(4){\displaystyle \mathrm {GF} (4)}result from this, and are as follows:
A table for subtraction is not given, because subtraction is identical to addition, as is the case for every field of characteristic 2.
In the third table, for the division ofx{\displaystyle x}byy{\displaystyle y}, the values ofx{\displaystyle x}must be read in the left column, and the values ofy{\displaystyle y}in the top row. (Because0⋅z=0{\displaystyle 0\cdot z=0}for everyz{\displaystyle z}in everyringthedivision by 0has to remain undefined.) From the tables, it can be seen that the additive structure ofGF(4){\displaystyle \mathrm {GF} (4)}is isomorphic to theKlein four-group, while the non-zero multiplicative structure is isomorphic to the groupZ3{\displaystyle Z_{3}}.
The mapφ:x↦x2{\displaystyle \varphi :x\mapsto x^{2}}is the non-trivial field automorphism, called theFrobenius automorphism, which sendsα{\displaystyle \alpha }into the second root1+α{\displaystyle 1+\alpha }of the above-mentioned irreducible polynomialX2+X+1{\displaystyle X^{2}+X+1}.
For applying theabove general constructionof finite fields in the case ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}, one has to find an irreducible polynomial of degree 2. Forp=2{\displaystyle p=2}, this has been done in the preceding section. Ifp{\displaystyle p}is an odd prime, there are always irreducible polynomials of the formX2−r{\displaystyle X^{2}-r}, withr{\displaystyle r}inGF(p){\displaystyle \mathrm {GF} (p)}.
More precisely, the polynomialX2−r{\displaystyle X^{2}-r}is irreducible overGF(p){\displaystyle \mathrm {GF} (p)}if and only ifr{\displaystyle r}is aquadratic non-residuemodulop{\displaystyle p}(this is almost the definition of a quadratic non-residue). There arep−12{\displaystyle {\frac {p-1}{2}}}quadratic non-residues modulop{\displaystyle p}. For example,2{\displaystyle 2}is a quadratic non-residue forp=3,5,11,13,…{\displaystyle p=3,5,11,13,\ldots }, and3{\displaystyle 3}is a quadratic non-residue forp=5,7,17,…{\displaystyle p=5,7,17,\ldots }. Ifp≡3mod4{\displaystyle p\equiv 3\mod 4}, that isp=3,7,11,19,…{\displaystyle p=3,7,11,19,\ldots }, one may choose−1≡p−1{\displaystyle -1\equiv p-1}as a quadratic non-residue, which allows us to have a very simple irreducible polynomialX2+1{\displaystyle X^{2}+1}.
Having chosen a quadratic non-residuer{\displaystyle r}, letα{\displaystyle \alpha }be a symbolic square root ofr{\displaystyle r}, that is, a symbol that has the propertyα2=r{\displaystyle \alpha ^{2}=r}, in the same way that the complex numberi{\displaystyle i}is a symbolic square root of−1{\displaystyle -1}. Then, the elements ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}are all the linear expressionsa+bα,{\displaystyle a+b\alpha ,}witha{\displaystyle a}andb{\displaystyle b}inGF(p){\displaystyle \mathrm {GF} (p)}. The operations onGF(p2){\displaystyle \mathrm {GF} (p^{2})}are defined as follows (the operations between elements ofGF(p){\displaystyle \mathrm {GF} (p)}represented by Latin letters are the operations inGF(p){\displaystyle \mathrm {GF} (p)}):−(a+bα)=−a+(−b)α(a+bα)+(c+dα)=(a+c)+(b+d)α(a+bα)(c+dα)=(ac+rbd)+(ad+bc)α(a+bα)−1=a(a2−rb2)−1+(−b)(a2−rb2)−1α{\displaystyle {\begin{aligned}-(a+b\alpha )&=-a+(-b)\alpha \\(a+b\alpha )+(c+d\alpha )&=(a+c)+(b+d)\alpha \\(a+b\alpha )(c+d\alpha )&=(ac+rbd)+(ad+bc)\alpha \\(a+b\alpha )^{-1}&=a(a^{2}-rb^{2})^{-1}+(-b)(a^{2}-rb^{2})^{-1}\alpha \end{aligned}}}
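These formulas translate directly into code. The sketch below picks p = 7 and the quadratic non-residue r = 3 (one of the values listed above) and checks that the inverse formula works; the helper names are mine, and pow(·, −1, p) is Python's built-in modular inverse (Python 3.8+).

```python
p, r = 7, 3          # 3 is a quadratic non-residue mod 7 (the squares mod 7 are 1, 2, 4)

def mul(x, y):
    """(a + b*alpha)(c + d*alpha) = (ac + r*bd) + (ad + bc)*alpha, with alpha^2 = r."""
    a, b = x
    c, d = y
    return ((a * c + r * b * d) % p, (a * d + b * c) % p)

def inv(x):
    """(a + b*alpha)^(-1) = a*t + (-b)*t*alpha, where t = (a^2 - r*b^2)^(-1) in GF(p)."""
    a, b = x
    t = pow((a * a - r * b * b) % p, -1, p)
    return ((a * t) % p, (-b * t) % p)

x = (2, 5)                                   # the element 2 + 5*alpha of GF(49)
print(mul(x, inv(x)))                        # (1, 0), i.e. the multiplicative identity
```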
The polynomialX3−X−1{\displaystyle X^{3}-X-1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}andGF(3){\displaystyle \mathrm {GF} (3)}, that is, it is irreduciblemodulo2{\displaystyle 2}and3{\displaystyle 3}(to show this, it suffices to show that it has no root inGF(2){\displaystyle \mathrm {GF} (2)}nor inGF(3){\displaystyle \mathrm {GF} (3)}). It follows that the elements ofGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may be represented byexpressionsa+bα+cα2,{\displaystyle a+b\alpha +c\alpha ^{2},}wherea,b,c{\displaystyle a,b,c}are elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}(respectively), andα{\displaystyle \alpha }is a symbol such thatα3=α+1.{\displaystyle \alpha ^{3}=\alpha +1.}
The addition, additive inverse and multiplication onGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may thus be defined as follows; in following formulas, the operations between elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}, represented by Latin letters, are the operations inGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}, respectively:−(a+bα+cα2)=−a+(−b)α+(−c)α2(forGF(8),this operation is the identity)(a+bα+cα2)+(d+eα+fα2)=(a+d)+(b+e)α+(c+f)α2(a+bα+cα2)(d+eα+fα2)=(ad+bf+ce)+(ae+bd+bf+ce+cf)α+(af+be+cd+cf)α2{\displaystyle {\begin{aligned}-(a+b\alpha +c\alpha ^{2})&=-a+(-b)\alpha +(-c)\alpha ^{2}\qquad {\text{(for }}\mathrm {GF} (8),{\text{this operation is the identity)}}\\(a+b\alpha +c\alpha ^{2})+(d+e\alpha +f\alpha ^{2})&=(a+d)+(b+e)\alpha +(c+f)\alpha ^{2}\\(a+b\alpha +c\alpha ^{2})(d+e\alpha +f\alpha ^{2})&=(ad+bf+ce)+(ae+bd+bf+ce+cf)\alpha +(af+be+cd+cf)\alpha ^{2}\end{aligned}}}
The polynomialX4+X+1{\displaystyle X^{4}+X+1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}, that is, it is irreducible modulo2{\displaystyle 2}. It follows that the elements ofGF(16){\displaystyle \mathrm {GF} (16)}may be represented byexpressionsa+bα+cα2+dα3,{\displaystyle a+b\alpha +c\alpha ^{2}+d\alpha ^{3},}wherea,b,c,d{\displaystyle a,b,c,d}are either0{\displaystyle 0}or1{\displaystyle 1}(elements ofGF(2){\displaystyle \mathrm {GF} (2)}), andα{\displaystyle \alpha }is a symbol such thatα4=α+1{\displaystyle \alpha ^{4}=\alpha +1}(that is,α{\displaystyle \alpha }is defined as a root of the given irreducible polynomial). As the characteristic ofGF(2){\displaystyle \mathrm {GF} (2)}is2{\displaystyle 2}, each element is its additive inverse inGF(16){\displaystyle \mathrm {GF} (16)}. The addition and multiplication onGF(16){\displaystyle \mathrm {GF} (16)}may be defined as follows; in following formulas, the operations between elements ofGF(2){\displaystyle \mathrm {GF} (2)}, represented by Latin letters are the operations inGF(2){\displaystyle \mathrm {GF} (2)}.(a+bα+cα2+dα3)+(e+fα+gα2+hα3)=(a+e)+(b+f)α+(c+g)α2+(d+h)α3(a+bα+cα2+dα3)(e+fα+gα2+hα3)=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)α+(ag+bf+ce+ch+dg+dh)α2+(ah+bg+cf+de+dh)α3{\displaystyle {\begin{aligned}(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})+(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(a+e)+(b+f)\alpha +(c+g)\alpha ^{2}+(d+h)\alpha ^{3}\\(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)\alpha \;+\\&\quad \;(ag+bf+ce+ch+dg+dh)\alpha ^{2}+(ah+bg+cf+de+dh)\alpha ^{3}\end{aligned}}}
The fieldGF(16){\displaystyle \mathrm {GF} (16)}has eightprimitive elements(the elements that have all nonzero elements ofGF(16){\displaystyle \mathrm {GF} (16)}as integer powers). These elements are the four roots ofX4+X+1{\displaystyle X^{4}+X+1}and theirmultiplicative inverses. In particular,α{\displaystyle \alpha }is a primitive element, and the primitive elements areαm{\displaystyle \alpha ^{m}}withm{\displaystyle m}less than andcoprimewith15{\displaystyle 15}(that is, 1, 2, 4, 7, 8, 11, 13, 14).
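For fields of characteristic 2 this arithmetic maps neatly onto bit operations: an element of GF(16) fits in four bits (bit i holding the coefficient of α^i), addition is XOR, and multiplication is a carry-less product reduced by X^4 + X + 1. The sketch below (function names are illustrative) also confirms that α generates all fifteen nonzero elements, i.e. that it is primitive.

```python
def gf16_mul(x, y):
    """Multiply two elements of GF(2)[X]/(X^4 + X + 1) represented as integers 0..15."""
    result = 0
    for _ in range(4):
        if y & 1:
            result ^= x            # addition in characteristic 2 is XOR
        y >>= 1
        x <<= 1
        if x & 0x10:               # degree reached 4: replace alpha^4 by alpha + 1
            x ^= 0x13              # 0b10011 encodes X^4 + X + 1
    return result

alpha = 0b0010
powers, x = [], 1
for _ in range(15):                # alpha^0, alpha^1, ..., alpha^14
    powers.append(x)
    x = gf16_mul(x, alpha)
print(sorted(powers) == list(range(1, 16)))   # True: alpha is a primitive element
```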
The set of non-zero elements inGF(q){\displaystyle \mathrm {GF} (q)}is anabelian groupunder the multiplication, of orderq−1{\displaystyle q-1}. ByLagrange's theorem, there exists a divisork{\displaystyle k}ofq−1{\displaystyle q-1}such thatxk=1{\displaystyle x^{k}=1}for every non-zerox{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. As the equationxk=1{\displaystyle x^{k}=1}has at mostk{\displaystyle k}solutions in any field,q−1{\displaystyle q-1}is the lowest possible value fork{\displaystyle k}.
Thestructure theorem of finite abelian groupsimplies that this multiplicative group iscyclic, that is, all non-zero elements are powers of a single element. In summary: the multiplicative group of the non-zero elements of a finite field of orderq{\displaystyle q}is cyclic of orderq−1{\displaystyle q-1}; that is, there is an elementa{\displaystyle a}such that every non-zero element is a power ofa{\displaystyle a}.
Such an elementa{\displaystyle a}is called aprimitive elementofGF(q){\displaystyle \mathrm {GF} (q)}. Unlessq=2,3{\displaystyle q=2,3}, the primitive element is not unique. The number of primitive elements isϕ(q−1){\displaystyle \phi (q-1)}whereϕ{\displaystyle \phi }isEuler's totient function.
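A quick numerical check of this count, for the arbitrarily chosen prime field GF(13): the elements of multiplicative order q − 1 are exactly the primitive elements, and there are φ(12) = 4 of them.

```python
from math import gcd

q = 13

def order(x):
    """Multiplicative order of x in GF(q)*."""
    k, y = 1, x
    while y != 1:
        y = (y * x) % q
        k += 1
    return k

primitive = [x for x in range(1, q) if order(x) == q - 1]
phi = sum(1 for k in range(1, q) if gcd(k, q - 1) == 1)   # Euler's phi of q - 1 = 12
print(primitive)                 # [2, 6, 7, 11]
print(len(primitive) == phi)     # True: phi(12) = 4 primitive elements
```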
The result above implies thatxq=x{\displaystyle x^{q}=x}for everyx{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. The particular case whereq{\displaystyle q}is prime isFermat's little theorem.
Ifa{\displaystyle a}is a primitive element inGF(q){\displaystyle \mathrm {GF} (q)}, then for any non-zero elementx{\displaystyle x}inF{\displaystyle F}, there is a unique integern{\displaystyle n}with0≤n≤q−2{\displaystyle 0\leq n\leq q-2}such thatx=an{\displaystyle x=a^{n}}.
This integern{\displaystyle n}is called thediscrete logarithmofx{\displaystyle x}to the basea{\displaystyle a}.
Whilean{\displaystyle a^{n}}can be computed very quickly, for example usingexponentiation by squaring, there is no known efficient algorithm for computing the inverse operation, the discrete logarithm. This has been used in variouscryptographic protocols, seeDiscrete logarithmfor details.
When the nonzero elements ofGF(q){\displaystyle \mathrm {GF} (q)}are represented by their discrete logarithms, multiplication and division are easy, as they reduce to addition and subtraction moduloq−1{\displaystyle q-1}. However, addition amounts to computing the discrete logarithm ofam+an{\displaystyle a^{m}+a^{n}}. The identityam+an=an(am−n+1){\displaystyle a^{m}+a^{n}=a^{n}\left(a^{m-n}+1\right)}allows one to solve this problem by constructing the table of the discrete logarithms ofan+1{\displaystyle a^{n}+1}, calledZech's logarithms, forn=0,…,q−2{\displaystyle n=0,\ldots ,q-2}(it is convenient to define the discrete logarithm of zero as being−∞{\displaystyle -\infty }).
Zech's logarithms are useful for large computations, such aslinear algebraover medium-sized fields, that is, fields that are sufficiently large for making natural algorithms inefficient, but not too large, as one has to pre-compute a table of the same size as the order of the field.
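The following sketch builds the discrete-logarithm and Zech-logarithm tables for the small prime field GF(11) with primitive element 2 (both choices are arbitrary examples), and uses the identity above to add two elements purely through the tables.

```python
q, a = 11, 2                         # GF(11) with primitive element 2

antilog = [pow(a, n, q) for n in range(q - 1)]     # antilog[n] = a^n
log = {x: n for n, x in enumerate(antilog)}        # log[x] = n such that a^n = x

# Zech's logarithm Z(n) satisfies a^Z(n) = a^n + 1; None stands for log(0) = -infinity
zech = {n: log.get((pow(a, n, q) + 1) % q) for n in range(q - 1)}

def add_via_logs(m, n):
    """Discrete log of a^m + a^n, using a^m + a^n = a^n * (a^(m-n) + 1)."""
    z = zech[(m - n) % (q - 1)]
    return None if z is None else (n + z) % (q - 1)

m, n = 3, 5
print(antilog[m], antilog[n])                      # 8 10
print((antilog[m] + antilog[n]) % q)               # 7, computed directly
print(antilog[add_via_logs(m, n)])                 # 7, computed through the tables
```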
Every nonzero element of a finite field is aroot of unity, asxq−1=1{\displaystyle x^{q-1}=1}for every nonzero element ofGF(q){\displaystyle \mathrm {GF} (q)}.
Ifn{\displaystyle n}is a positive integer, ann{\displaystyle n}thprimitive root of unityis a solution of the equationxn=1{\displaystyle x^{n}=1}that is not a solution of the equationxm=1{\displaystyle x^{m}=1}for any positive integerm<n{\displaystyle m<n}. Ifa{\displaystyle a}is an{\displaystyle n}th primitive root of unity in a fieldF{\displaystyle F}, thenF{\displaystyle F}contains all then{\displaystyle n}roots of unity, which are1,a,a2,…,an−1{\displaystyle 1,a,a^{2},\ldots ,a^{n-1}}.
The fieldGF(q){\displaystyle \mathrm {GF} (q)}contains an{\displaystyle n}th primitive root of unity if and only ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}; ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}, then the number of primitiven{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isϕ(n){\displaystyle \phi (n)}(Euler's totient function). The number ofn{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isgcd(n,q−1){\displaystyle \mathrm {gcd} (n,q-1)}.
In a field of characteristicp{\displaystyle p}, everynp{\displaystyle np}th root of unity is also an{\displaystyle n}th root of unity. It follows that primitivenp{\displaystyle np}th roots of unity never exist in a field of characteristicp{\displaystyle p}.
On the other hand, ifn{\displaystyle n}iscoprimetop{\displaystyle p}, the roots of then{\displaystyle n}thcyclotomic polynomialare distinct in every field of characteristicp{\displaystyle p}, as this polynomial is a divisor ofXn−1{\displaystyle X^{n}-1}, whosediscriminantnn{\displaystyle n^{n}}is nonzero modulop{\displaystyle p}. It follows that then{\displaystyle n}thcyclotomic polynomialfactors overGF(q){\displaystyle \mathrm {GF} (q)}into distinct irreducible polynomials that have all the same degree, sayd{\displaystyle d}, and thatGF(pd){\displaystyle \mathrm {GF} (p^{d})}is the smallest field of characteristicp{\displaystyle p}that contains then{\displaystyle n}th primitive roots of unity.
When computingBrauer characters, one uses the mapαk↦exp(2πik/(q−1)){\displaystyle \alpha ^{k}\mapsto \exp(2\pi ik/(q-1))}to map eigenvalues of a representation matrix to the complex numbers. Under this mapping, the base subfieldGF(p){\displaystyle \mathrm {GF} (p)}consists of evenly spaced points around the unit circle (omitting zero).
The fieldGF(64){\displaystyle \mathrm {GF} (64)}has several interesting properties that smaller fields do not share: it has two subfields such that neither is contained in the other; not all generators (elements withminimal polynomialof degree6{\displaystyle 6}overGF(2){\displaystyle \mathrm {GF} (2)}) are primitive elements; and the primitive elements are not all conjugate under theGalois group.
The order of this field being 2^6, and the divisors of 6 being 1, 2, 3, 6, the subfields ofGF(64)areGF(2),GF(2^2) = GF(4),GF(2^3) = GF(8), andGF(64)itself. As 2 and 3 arecoprime, the intersection ofGF(4)andGF(8)inGF(64)is the prime fieldGF(2).
The union ofGF(4)andGF(8)has thus10elements. The remaining54elements ofGF(64)generateGF(64)in the sense that no other subfield contains any of them. It follows that they are roots of irreducible polynomials of degree6overGF(2). This implies that, overGF(2), there are exactly9 = 54/6irreduciblemonic polynomialsof degree6. This may be verified by factoringX^64 −XoverGF(2).
The elements ofGF(64)are primitiven{\displaystyle n}th roots of unity for somen{\displaystyle n}dividing63{\displaystyle 63}. As the 3rd and the 7th roots of unity belong toGF(4)andGF(8), respectively, the54generators are primitiventh roots of unity for somenin{9, 21, 63}.Euler's totient functionshows that there are6primitive9th roots of unity,12{\displaystyle 12}primitive21{\displaystyle 21}st roots of unity, and36{\displaystyle 36}primitive63rd roots of unity. Summing these numbers, one finds again54{\displaystyle 54}elements.
By factoring thecyclotomic polynomialsoverGF(2){\displaystyle \mathrm {GF} (2)}, one finds that: the six primitive 9th roots of unity are the roots of the single irreducible polynomialX6+X3+1{\displaystyle X^{6}+X^{3}+1}; the twelve primitive 21st roots of unity are the roots of two irreducible polynomials of degree 6; and the 36 primitive 63rd roots of unity are the roots of six irreducible polynomials of degree 6, one of which isX6+X+1{\displaystyle X^{6}+X+1}.
This shows that the best choice to constructGF(64){\displaystyle \mathrm {GF} (64)}is to define it asGF(2)[X] / (X^6+X+ 1). Indeed, the image ofX{\displaystyle X}in this quotient is a primitive element, and this polynomial is the irreducible polynomial that produces the easiest Euclidean division.
In this section,p{\displaystyle p}is a prime number, andq=pn{\displaystyle q=p^{n}}is a power ofp{\displaystyle p}.
InGF(q){\displaystyle \mathrm {GF} (q)}, the identity(x+y)p=xp+ypimplies that the mapφ:x↦xp{\displaystyle \varphi :x\mapsto x^{p}}is aGF(p){\displaystyle \mathrm {GF} (p)}-linear endomorphismand afield automorphismofGF(q){\displaystyle \mathrm {GF} (q)}, which fixes every element of the subfieldGF(p){\displaystyle \mathrm {GF} (p)}. It is called theFrobenius automorphism, afterFerdinand Georg Frobenius.
Denoting byφkthecompositionofφwith itselfktimes, we haveφk:x↦xpk.{\displaystyle \varphi ^{k}:x\mapsto x^{p^{k}}.}It has been shown in the preceding section thatφnis the identity. For0 <k<n, the automorphismφkis not the identity, as, otherwise, the polynomialXpk−X{\displaystyle X^{p^{k}}-X}would have more thanpkroots.
There are no otherGF(p)-automorphisms ofGF(q). In other words,GF(pn)has exactlynGF(p)-automorphisms, which areId=φ0,φ,φ2,…,φn−1.{\displaystyle \mathrm {Id} =\varphi ^{0},\varphi ,\varphi ^{2},\ldots ,\varphi ^{n-1}.}
In terms ofGalois theory, this means thatGF(pn)is aGalois extensionofGF(p), which has acyclicGalois group.
The fact that the Frobenius map is surjective implies that every finite field isperfect.
IfFis a finite field, a non-constantmonic polynomialwith coefficients inFisirreducibleoverF, if it is not the product of two non-constant monic polynomials, with coefficients inF.
As everypolynomial ringover a field is aunique factorization domain, every monic polynomial over a finite field may be factored in a unique way (up to the order of the factors) into a product of irreducible monic polynomials.
There are efficient algorithms for testing polynomial irreducibility and factoring polynomials over finite fields. They are a key step for factoring polynomials over the integers or therational numbers. At least for this reason, everycomputer algebra systemhas functions for factoring polynomials over finite fields, or, at least, over finite prime fields.
The polynomialXq−X{\displaystyle X^{q}-X}factors into linear factors over a field of orderq. More precisely, this polynomial is the product of all monic polynomials of degree one over a field of orderq.
This implies that, ifq=pnthenXq−Xis the product of all monic irreducible polynomials overGF(p), whose degree dividesn. In fact, ifPis an irreducible factor overGF(p)ofXq−X, its degree dividesn, as itssplitting fieldis contained inGF(pn). Conversely, ifPis an irreducible monic polynomial overGF(p)of degreeddividingn, it defines a field extension of degreed, which is contained inGF(pn), and all roots ofPbelong toGF(pn), and are roots ofXq−X; thusPdividesXq−X. AsXq−Xdoes not have any multiple factor, it is thus the product of all the irreducible monic polynomials that divide it.
This property is used to compute the product of the irreducible factors of each degree of polynomials overGF(p); seeDistinct degree factorization.
The numberN(q,n)of monic irreducible polynomials of degreenoverGF(q)is given by[5]N(q,n)=1n∑d∣nμ(d)qn/d,{\displaystyle N(q,n)={\frac {1}{n}}\sum _{d\mid n}\mu (d)q^{n/d},}whereμis theMöbius function. This formula is an immediate consequence of the property ofXq−Xabove and theMöbius inversion formula.
By the above formula, the number of irreducible (not necessarily monic) polynomials of degreenoverGF(q)is(q− 1)N(q,n).
The exact formula implies the inequalityN(q,n)≥1n(qn−∑ℓ∣n,ℓprimeqn/ℓ);{\displaystyle N(q,n)\geq {\frac {1}{n}}\left(q^{n}-\sum _{\ell \mid n,\ \ell {\text{ prime}}}q^{n/\ell }\right);}this is sharp if and only ifnis a power of some prime.
For everyqand everyn, the right hand side is positive, so there is at least one irreducible polynomial of degreenoverGF(q).
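The formula is easy to evaluate directly; the sketch below (with a hand-rolled Möbius function, so it stays self-contained) reproduces, for example, the nine monic irreducible polynomials of degree 6 over GF(2) counted in the section on GF(64) above.

```python
def mobius(d):
    """Mobius function mu(d) via trial factorisation (adequate for small d)."""
    if d == 1:
        return 1
    result, p = 1, 2
    while p * p <= d:
        if d % p == 0:
            d //= p
            if d % p == 0:          # a squared prime factor makes mu vanish
                return 0
            result = -result
        p += 1
    return -result if d > 1 else result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def num_irreducible_monic(q, n):
    """N(q, n) = (1/n) * sum over d | n of mu(d) * q^(n/d)."""
    return sum(mobius(d) * q ** (n // d) for d in divisors(n)) // n

print(num_irreducible_monic(2, 6))                          # 9 monic irreducible sextics over GF(2)
print([num_irreducible_monic(2, n) for n in range(1, 5)])   # [2, 1, 2, 3]
```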
Incryptography, the difficulty of thediscrete logarithm problemin finite fields or inelliptic curvesis the basis of several widely used protocols, such as theDiffie–Hellmanprotocol. For example, in 2014, a secure internet connection to Wikipedia involved the elliptic curve Diffie–Hellman protocol (ECDHE) over a large finite field.[6]Incoding theory, many codes are constructed assubspacesofvector spacesover finite fields.
Finite fields are used by manyerror correction codes, such asReed–Solomon error correction codeorBCH code. The finite field almost always has characteristic of2, since computer data is stored in binary. For example, a byte of data can be interpreted as an element ofGF(28). One exception isPDF417bar code, which isGF(929). Some CPUs have special instructions that can be useful for finite fields of characteristic2, generally variations ofcarry-less product.
Finite fields are widely used innumber theory, as many problems over the integers may be solved by reducing themmoduloone or severalprime numbers. For example, the fastest known algorithms forpolynomial factorizationandlinear algebraover the field ofrational numbersproceed by reduction modulo one or several primes, and then reconstruction of the solution by usingChinese remainder theorem,Hensel liftingor theLLL algorithm.
Similarly many theoretical problems in number theory can be solved by considering their reductions modulo some or all prime numbers. See, for example,Hasse principle. Many recent developments ofalgebraic geometrywere motivated by the need to enlarge the power of these modular methods.Wiles' proof of Fermat's Last Theoremis an example of a deep result involving many mathematical tools, including finite fields.
TheWeil conjecturesconcern the number of points onalgebraic varietiesover finite fields and the theory has many applications includingexponentialandcharacter sumestimates.
Finite fields have widespread application incombinatorics, two well known examples being the definition ofPaley Graphsand the related construction forHadamard Matrices. Inarithmetic combinatoricsfinite fields[7]and finite field models[8][9]are used extensively, such as inSzemerédi's theoremon arithmetic progressions.
Adivision ringis a generalization of field. Division rings are not assumed to be commutative. There are no non-commutative finite division rings:Wedderburn's little theoremstates that all finitedivision ringsare commutative, and hence are finite fields. This result holds even if we relax theassociativityaxiom toalternativity, that is, all finitealternative division ringsare finite fields, by theArtin–Zorn theorem.[10]
A finite fieldF{\displaystyle F}is not algebraically closed: the polynomialf(T)=1+∏α∈F(T−α),{\displaystyle f(T)=1+\prod _{\alpha \in F}(T-\alpha ),}has no roots inF{\displaystyle F}, sincef(α) = 1for allα{\displaystyle \alpha }inF{\displaystyle F}.
Given a prime numberp, letF¯p{\displaystyle {\overline {\mathbb {F} }}_{p}}be an algebraic closure ofFp.{\displaystyle \mathbb {F} _{p}.}It is not only uniqueup toan isomorphism, as are all algebraic closures, but, contrary to the general case, all of its subfields are fixed by all of its automorphisms, and it is also the algebraic closure of all finite fields of the same characteristicp.
This property results mainly from the fact that the elements ofFpn{\displaystyle \mathbb {F} _{p^{n}}}are exactly the roots ofxpn−x,{\displaystyle x^{p^{n}}-x,}and this defines an inclusionFpn⊂Fpnm{\displaystyle \mathbb {\mathbb {F} } _{p^{n}}\subset \mathbb {F} _{p^{nm}}}form>1.{\displaystyle m>1.}These inclusions allow writing informallyF¯p=⋃n≥1Fpn.{\displaystyle {\overline {\mathbb {F} }}_{p}=\bigcup _{n\geq 1}\mathbb {F} _{p^{n}}.}The formal validation of this notation results from the fact that the above field inclusions form adirected setof fields; itsdirect limitisF¯p,{\displaystyle {\overline {\mathbb {F} }}_{p},}which may thus be considered as a "directed union".
Given aprimitive elementgmn{\displaystyle g_{mn}}ofFqmn,{\displaystyle \mathbb {F} _{q^{mn}},}thengmnm{\displaystyle g_{mn}^{m}}is a primitive element ofFqn.{\displaystyle \mathbb {F} _{q^{n}}.}
For explicit computations, it may be useful to have a coherent choice of the primitive elements for all finite fields; that is, to choose the primitive elementgn{\displaystyle g_{n}}ofFqn{\displaystyle \mathbb {F} _{q^{n}}}in order that, whenevern=mh,{\displaystyle n=mh,}one hasgm=gnh,{\displaystyle g_{m}=g_{n}^{h},}wheregm{\displaystyle g_{m}}is the primitive element already chosen forFqm.{\displaystyle \mathbb {F} _{q^{m}}.}
Such a construction may be obtained byConway polynomials.
Although finite fields are not algebraically closed, they arequasi-algebraically closed, which means that everyhomogeneous polynomialover a finite field has a non-trivial zero whose components are in the field if the number of its variables is more than its degree. This was a conjecture ofArtinandDicksonproved byChevalley(seeChevalley–Warning theorem).
|
https://en.wikipedia.org/wiki/Finite_field#Construction_of_finite_fields
|
Somemobile phonessupport use of twoSIM cards, described asdual SIMoperation. When a second SIM card is installed, the phone may allow users to switch between two separatemobile networkservices manually, have hardware support for keeping both connections in a "standby" state for automatic switching, or have twotransceiversto maintain both network connections at once.
Dual SIM phones are mainstream in many countries where phones are normally sold unlocked. Dual SIMs are popular for separating personal and business calls, in locations where lower prices apply to calls between clients of the same provider, where a single network may lack comprehensive coverage, and for travel across national and regional borders.[1][2]In countries where dual SIM phones are the norm, people who require only one SIM leave the second SIM slot empty. Dual SIM phones usually have two uniqueIMEInumbers, one for each SIM slot.
Devices that use more than two SIM cards have also been developed and released, notably the LG A290 triple SIM phone,[3]and even handsets that support four SIMs,[4][5]such as theCherry Mobile Quad Q70.[6]
The first phone to include dual SIM functionality was the Benefon Twin, released byBenefonin 2000.[7]More dual SIM phones were introduced in about 2007, most of them coming from small Chinese firms producing phones usingMediateksystems-on-a-chip. They started to attract mainstream attention.[8][9]
Such phones were initially eschewed by major manufacturers due to potential pressure from telecommunications companies,[10]but from about 2010Nokia,Samsung,Sonyand several others followed suit, with theNokia C2-00,Nokia C1-00andNokia C2-03and most notably theNokia X,[11][12][13]phones fromSamsung's Duos series,[14]and theSony Xperia Z3 Dual,Sony Xperia C[15]andtipo dual.[16][17]Appleadded dual SIM support in its 2018iPhone XSmodels, with models sold inChinacontaining two physical SIM slots, and models sold elsewhere supporting dual SIM by means of(Embedded) eSIMalongside a single physical SIM.[18][19]
For originating communications via the mobile phone network, the way to choose which SIM is used may vary on different phones. For example, one can be selected asprimaryor default for making calls, and one (which could be the same one) for data. Apple phones supporting dual SIMs can be set up to automatically use a specific SIM for each contact or the same one used for the last call to the contact, for iMessage, and for FaceTime.[20]Typically when dialling or sending a message an option to select a SIM is displayed.
Prior to the introduction of dual SIM phones, adapters were available that fitted in the SIM card slot and held two SIMs, with provision to switch between them when required.[10][21]
In dual SIMswitchphones, such as theNokia C1-00, only one SIM, selected by the user, is active at any time; it is not possible to receive or make calls on the inactive SIM.[22]
Dual SIMstandbyphones allow both SIMs to be accessed by usingtime multiplexing. When one SIM is in active use, for example on a call, the modem locks to it, leaving the other SIM unavailable. Older examples of dual-SIM standby phones include theSamsung Galaxy S Duos,[23]theSony Xperia M2Dual,[24]and theiPhone XS,XS MaxandiPhone XR.[25]
Dual SIM dualactive(DSDA) phones have two transceivers, and can receive calls on both SIM cards, at the cost of increased battery consumption and more complex hardware.[26][27]One example is theHTC Desire 600.[28]
Some telephones have a primary and a secondary SIM slot that support different generations of connectivity. For example,4Gand3Gprimary, and 3G and2Gsecondary,[29]or5Gand 5G, or 5G and 4G.[30]Selecting either of the SIMs as primary is usually possible without physically swapping the SIMs.
Some phone models utilize a "hybrid" SIM tray, which can hold either two SIM cards, or one SIM card and oneMicroSDmemory card.[31][32]TheHuawei Mate 20range introduced a proprietary memory card format calledNano Memory, exactly the size and shape of a nano SIM card.[33]
Some devices accept dual SIMs of different form factors. The Xiaomi Redmi Note 4 has a hybrid dual SIM tray that accepts one micro SIM card and one nano SIM card, the latter of which can be swapped for a MicroSD card.[29]
Dual SIM phones have become popular especially with business users[10][34]due to reduced costs by being able to use two different networks, with one possibly for personal use or based on signal strength or cost, without requiring several phones.
Some sub-contract Chinese companies supply inexpensive dual SIM handsets, mainly inAsiancountries. The phones, which also usually includetouch screeninterfaces and other modern features, typically retail for a much lower price than branded models. While some such phones are sold under generic names or arerebadgedby smaller companies under their own brand,[9]numerous manufacturers, especially in China, produce phones, including dual SIM models, undercounterfeittrademarks such as those of Nokia or Samsung,[35]either as cosmetically-identical clones of the originals, or in completely different designs, with the logo of a notable manufacturer present in order to take advantage of brand recognition or brand image.[8]
Dual SIM phones are common indeveloping countries, especially inChina,Southeast Asiaand theIndian subcontinent, with local firms likeKarbonn Mobiles,LYF,MicromaxandCherry Mobilereleasingfeature phonesandsmartphonesincorporating multiple SIM slots.[36][37]
The FrenchWikoMobile is also an example of rebadged Chinese dual-SIM phones, sold in a few European countries as well as in North-West Africa.
Dual SIM phones have been rare in countries where phones have been usually sold on contract, as the carriers selling those phones prevent SIMs from competing carriers from being used with the phones. However, dual SIMs have been popular in locations where people normally buy phones directly from manufacturers. In such places there is little lock-in to carrier networks, and the costs of having two phone numbers are much lower.
Dual SIM phones allow separate numbers for personal and business calls on the same handset. Access to multiple networks is useful for people living in places where a single network's coverage may prove inadequate or unreliable. They are also useful in places where lower prices apply to calls between clients of the same provider.[38]
Dual SIM phones allow users to keep separate contact lists on each SIM, and allow easier roaming by being able to access a foreign network while keeping the existing local card.[39]
Vendors of foreign SIMs for travel often promote dual-SIM operation, with a home country and local SIM in the same handset.
|
https://en.wikipedia.org/wiki/Dual_SIM
|
In computing,protected mode, also calledprotected virtual address mode,[1]is an operational mode ofx86-compatiblecentral processing units(CPUs). It allowssystem softwareto use features such assegmentation,virtual memory,pagingand safemulti-taskingdesigned to increase an operating system's control overapplication software.[2][3]
When a processor that supports x86 protected mode is powered on, it begins executing instructions inreal mode, in order to maintainbackward compatibilitywith earlier x86 processors.[4]Protected mode may only be entered after the system software sets up one descriptor table and enables the Protection Enable (PE)bitin thecontrol register0 (CR0).[5]
Protected mode was first added to thex86architecture in 1982,[6]with the release ofIntel's80286(286) processor, and later extended with the release of the80386(386) in 1985.[7]Due to the enhancements added by protected mode, it has become widely adopted and has become the foundation for all subsequent enhancements to the x86 (IA-32) architecture,[8]although many of those enhancements, such as added instructions and new registers, also brought benefits to the real mode.
The first x86 processor, theIntel 8086, had a 20-bitaddress busfor itsmemory, as did itsIntel 8088variant.[9]This allowed them to access 2^20 bytes of memory, equivalent to 1megabyte.[9]At the time, 1 megabyte was considered a relatively large amount of memory,[10]so the designers of theIBM Personal Computerreserved the first 640kilobytesfor use by applications and the operating system andthe remaining 384 kilobytesfor theBIOS(Basic Input/Output System) and memory foradd-on devices.[11]
As the cost of memory decreased and memory use increased, the 1 MB limitation became a significant problem.Intelintended to solve this limitation along with others with the release of the 286.[11]
The initial protected mode, released with the 286, was not widely used;[11]for example, it was used byCoherent(from 1982),[12]MicrosoftXenix(around 1984)[13]andMinix.[14]Several shortcomings such as the inability to make BIOS and DOS calls due to inability to switch back to real mode without resetting the processor prevented widespread usage.[15]Acceptance was additionally hampered by the fact that the 286 allowed memory access in 64kilobytesegments, addressed by its four segment registers, meaning that only4 × 64 KB, equivalent to 256 KB, could be accessed at a time.[11]Because changing a segment register in protected mode caused a 6-byte segment descriptor to be loaded into the CPU from memory, the segment register load instruction took many tens of processor cycles, making it much slower than on the 8086 and 8088; therefore, the strategy of computing segment addresses on-the-fly in order to access data structures larger than 128kilobytes(the combined size of the two data segments) became impractical, even for those few programmers who had mastered it on the 8086 and 8088.
The 286 maintained backward compatibility with the 8086 and 8088 by initially enteringreal modeon power up.[4]Real mode functioned virtually identically to the 8086 and 8088, allowing the vast majority of existingsoftwarefor those processors to run unmodified on the newer 286. Real mode also served as a more basic mode to set up andbootstrapinto protected mode. To access the extended functionality of the 286, the operating system would set up some tables in memory that controlled memory access in protected mode, set the addresses of those tables into some special registers of the processor, and then set the processor into protected mode. This enabled 24-bit addressing, which allowed the processor to access 2^24 bytes of memory, equivalent to 16megabytes.[9]
With the release of the 386 in 1985,[7] many of the issues preventing widespread adoption of the previous protected mode were addressed.[11] The 386 was released with an address bus size of 32 bits, which allows 2^32 bytes of memory to be addressed, equivalent to 4 gigabytes.[16] The segment sizes were also increased to 32 bits, meaning that the full address space of 4 gigabytes could be accessed without the need to switch between multiple segments.[16] In addition to the increased size of the address bus and segment registers, many other new features were added with the intention of increasing operational security and stability.[17] Protected mode is now used in virtually all modern operating systems which run on the x86 architecture, such as Microsoft Windows, Linux, and many others.[18]
Furthermore, learning from the failures of the 286 protected mode to satisfy the needs formultiuser DOS, Intel added a separatevirtual 8086 mode,[19]which allowed multiple virtualized 8086 processors to be emulated on the 386.Hardware x86 virtualizationrequired for virtualizing the protected mode itself, however, had to wait for another 20 years.[20]
With the release of the 386, protected mode gained several additional features, including paging, 32-bit segment offsets and physical addressing, virtual 8086 mode, and the ability to switch back to real mode without a reset, all of which are described in the sections below.[2]
Until the release of the 386, protected mode did not offer a direct method to switch back into real mode once protected mode was entered. IBM devised a workaround (implemented in the IBM AT) which involved resetting the CPU via the keyboard controller and saving the system registers, stack pointer and often the interrupt mask in the real-time clock chip's RAM. This allowed the BIOS to restore the CPU to a similar state and resume executing the code that had been running before the reset. Later, a triple fault was used to reset the 286 CPU, which was a lot faster and cleaner than the keyboard controller method.
To enter protected mode, theGlobal Descriptor Table(GDT) must first be created with a minimum of three entries: a null descriptor, a code segment descriptor and data segment descriptor. Then, the PE bit must be set in the CR0 register and a far jump must be made to clear theprefetch input queue.[22][23]Also, on an IBM-compatible machine, in order to enable the CPU to access all 16 MB of the address space (instead of only the 8 even megabytes), theA20 line(21st address line) must be enabled. (A20 is disabled at power-up, causing each odd megabyte of the address space to be aliased to the previous even megabyte, in order to guarantee compatibility with older software written for the Intel 8088-basedIBM PCandPC/XTmodels).[24]Enabling A20 is not strictly required to run in protected mode; the CPU will operate normally in protected mode with A20 disabled, only without the ability to access half of the memory addresses.
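As an illustration of the data structures involved, the following C sketch shows one common way to lay out an 8-byte GDT segment descriptor and a helper that fills it in; the struct, field, and function names are invented for this sketch and are not taken from any particular operating system.

#include <stdint.h>

/* One 8-byte segment descriptor as stored in the GDT.  The 32-bit base and
   the 20-bit limit are split across several fields; the packing attribute
   (GCC/Clang syntax) keeps the layout exactly 8 bytes. */
struct gdt_entry {
    uint16_t limit_low;        /* limit bits 0..15                       */
    uint16_t base_low;         /* base  bits 0..15                       */
    uint8_t  base_mid;         /* base  bits 16..23                      */
    uint8_t  access;           /* present bit, privilege level, type     */
    uint8_t  limit_hi_flags;   /* limit bits 16..19 plus G and D/B flags */
    uint8_t  base_high;        /* base  bits 24..31                      */
} __attribute__((packed));

/* The minimum table: a null descriptor, a code segment and a data segment. */
static struct gdt_entry gdt[3];

static void set_entry(struct gdt_entry *e, uint32_t base, uint32_t limit,
                      uint8_t access, uint8_t flags)
{
    e->limit_low      = limit & 0xFFFF;
    e->base_low       = base & 0xFFFF;
    e->base_mid       = (base >> 16) & 0xFF;
    e->access         = access;
    e->limit_hi_flags = ((limit >> 16) & 0x0F) | (flags & 0xF0);
    e->base_high      = (base >> 24) & 0xFF;
}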
With the release of the 386, protected mode could be exited by loading the segment registers with real mode values, disabling the A20 line and clearing the PE bit in the CR0 register, without the need to perform the initial setup steps required with the 286.[25]
Protected mode has a number of features designed to enhance an operating system's control over application software, in order to increase security and system stability.[3]These additions allow the operating system to function in a way that would be significantly more difficult or even impossible without proper hardware support.[26]
In protected mode, there are four privilege levels orrings, numbered from 0 to 3, with ring 0 being the most privileged and 3 being the least. The use of rings allows for system software to restrict tasks from accessing data,call gatesor executing privileged instructions.[27]In most environments, the operating system and somedevice driversrun in ring 0 and applications run in ring 3.[27]
According to theIntel 80286 Programmer's Reference Manual,[28]
the 80286 remains upwardly compatible with most 8086 and 80186 application programs. Most 8086 application programs can be re-compiled or re-assembled and executed on the 80286 in Protected Mode.
For the most part, the binary compatibility with real-mode code, the ability to access up to 16 MB of physical memory, and 1 GB of virtual memory were the most apparent changes to application programmers.[29] This was not without its limitations: an application that relied on techniques such as segment arithmetic, privileged instructions, direct hardware access, or self-modifying code would not run in protected mode.[30]
In reality, almost allDOSapplication programs violated these rules.[32]Due to these limitations,virtual 8086 modewas introduced with the 386. Despite such potential setbacks,Windows 3.0and its successors can take advantage of the binary compatibility with real mode to run many Windows 2.x (Windows 2.0andWindows 2.1x) applications in protected mode, which ran in real mode in Windows 2.x.[33]
With the release of the 386, protected mode offers what the Intel manuals callvirtual 8086 mode. Virtual 8086 mode is designed to allow code previously written for the 8086 to run unmodified and concurrently with other tasks, without compromising security or system stability.[34]
Virtual 8086 mode, however, is not completely backward compatible with all programs. Programs that require segment manipulation, privileged instructions, direct hardware access, or useself-modifying codewill generate anexceptionthat must be served by the operating system.[35]In addition, applications running in virtual 8086 mode generate atrapwith the use of instructions that involveinput/output(I/O), which can negatively impact performance.[36]
Due to these limitations, some programs originally designed to run on the 8086 cannot be run in virtual 8086 mode. As a result, system software is forced to either compromise system security or backward compatibility when dealing withlegacy software. An example of such a compromise can be seen with the release ofWindows NT, which dropped backward compatibility for "ill-behaved" DOS applications.[37]
In real mode each logical address points directly into a physical memory location. Every logical address consists of two 16-bit parts: the segment part of the logical address contains the base address of a segment with a granularity of 16 bytes, i.e. a segment may start at physical address 0, 16, 32, ..., 2^20 − 16. The offset part of the logical address contains an offset inside the segment, i.e. the physical address can be calculated as physical_address = segment_part × 16 + offset (if the address line A20 is enabled), or (segment_part × 16 + offset) mod 2^20 (if A20 is off). Every segment has a size of 2^16 bytes.
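A one-function C sketch of the real-mode address calculation just described (the function name is invented for illustration):

#include <stdint.h>

/* Real-mode physical address: segment * 16 + offset, wrapped at 1 MB
   when the A20 line is disabled. */
uint32_t real_mode_address(uint16_t segment, uint16_t offset, int a20_enabled)
{
    uint32_t linear = (uint32_t)segment * 16u + offset;
    return a20_enabled ? linear : linear % (1u << 20);
}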
In protected mode, thesegment_partis replaced by a 16-bitselector, in which the 13 upper bits (bit 3 to bit 15) contain the index of anentryinside adescriptor table. The next bit (bit 2) specifies whether the operation is used with the GDT or the LDT. The lowest two bits (bit 1 and bit 0) of the selector are combined to define the privilege of the request, where the values of 0 and 3 represent the highest and the lowest privilege, respectively. This means that the byte offset of descriptors in the descriptor table is the same as the 16-bit selector, provided the lower three bits are zeroed.
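The three fields of a selector can be extracted with a few bit operations, as in the following C sketch (names invented for illustration):

#include <stdint.h>

/* Split a 16-bit protected-mode selector into its fields. */
void decode_selector(uint16_t selector,
                     unsigned *index, unsigned *table, unsigned *rpl)
{
    *index = selector >> 3;        /* bits 3..15: index into the descriptor table */
    *table = (selector >> 2) & 1;  /* bit 2: 0 selects the GDT, 1 the LDT          */
    *rpl   = selector & 3;         /* bits 0..1: requested privilege level         */
}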
The descriptor table entry defines the reallinearaddress of the segment, a limit value for the segment size, and some attribute bits (flags).
The segment address inside the descriptor table entry has a length of 24 bits, so every byte of the physical memory can be defined as the bound of a segment. The limit value inside the descriptor table entry has a length of 16 bits, so the segment length can be between 1 byte and 2^16 bytes. The calculated linear address equals the physical memory address.
The segment address inside the descriptor table entry is expanded to 32 bits, so every byte of the physical memory can be defined as the bound of a segment. The limit value inside the descriptor table entry is expanded to 20 bits and completed with a granularity flag (G-bit, for short): if the G-bit is clear, the limit is interpreted in units of one byte (giving segments of up to 1 megabyte); if it is set, the limit is interpreted in units of 4-kilobyte pages (giving segments of up to 4 gigabytes).
The 386 processor also uses 32-bit values for the address offset.
For maintaining compatibility with 286 protected mode a new default flag (D-bit, for short) was added. If the D-bit of a code segment is off (0) all commands inside this segment will be interpreted as 16-bit commands by default; if it is on (1), they will be interpreted as 32-bit commands.
In addition to adding virtual 8086 mode, the 386 also added paging to protected mode.[39]Through paging, system software can restrict and control a task's access to pages, which are sections of memory. In many operating systems, paging is used to create an independent virtual address space for each task, preventing one task from manipulating the memory of another. Paging also allows for pages to be moved out ofprimary storageand onto a slower and largersecondary storage, such as ahard disk drive.[40]This allows for more memory to be used than physically available in primary storage.[40]
The x86 architecture allows control of pages through two arrays: page directories and page tables. Originally, a page directory was the size of one page, four kilobytes, and contained 1,024 page directory entries (PDE), although subsequent enhancements to the x86 architecture have added the ability to use larger page sizes. Each PDE contained a pointer to a page table. A page table was also originally four kilobytes in size and contained 1,024 page table entries (PTE). Each PTE contained a pointer to the actual page's physical address; PTEs are only used when four-kilobyte pages are in use. At any given time, only one page directory may be in active use.[41]
Through the use of the rings, privileged call gates, and the Task State Segment (TSS), introduced with the 286, preemptive multitasking was made possible on the x86 architecture. The TSS allows general-purpose registers, segment selector fields, and stacks to all be modified without affecting those of another task. The TSS also allows a task's privilege level and I/O port permissions to be independent of another task's.
In many operating systems, the full features of the TSS are not used.[42]This is commonly due to portability concerns or due to the performance issues created with hardware task switches.[42]As a result, many operating systems use both hardware and software to create a multitasking system.[43]
Operating systems like OS/2 1.x switch the processor between protected and real modes. This is both slow and unsafe, because a real mode program can easily crash a computer. OS/2 1.x defines restrictive programming rules allowing a Family API or bound program to run in either real or protected mode. Some early Unix operating systems, OS/2 1.x, and Windows used this mode.
Windows 3.0was able to run real mode programs in 16-bit protected mode; when switching to protected mode, it decided to preserve the single privilege level model that was used in real mode, which is why Windows applications and DLLs can hook interrupts and do direct hardware access. That lasted through theWindows 9xseries. If a Windows 1.x or 2.x program is written properly and avoids segment arithmetic, it will run the same way in both real and protected modes. Windows programs generally avoid segment arithmetic because Windows implements a software virtual memory scheme, moving program code and data in memory when programs are not running, so manipulating absolute addresses is dangerous; programs should only keephandlesto memory blocks when not running. Starting an old program while Windows 3.0 is running in protected mode triggers a warning dialog, suggesting to either run Windows in real mode or to obtain an updated version of the application. Updating well-behaved programs using the MARK utility with the MEMORY parameter avoids this dialog. It is not possible to have some GUI programs running in 16-bit protected mode and other GUI programs running in real mode. InWindows 3.1, real mode was no longer supported and could not be accessed.
In modern 32-bit operating systems,virtual 8086 modeis still used for running applications, e.g.DPMIcompatibleDOS extenderprograms (throughvirtual DOS machines) or Windows 3.x applications (through theWindows on Windowssubsystem) and certain classes ofdevice drivers(e.g. for changing the screen-resolution using BIOS functionality) inOS/22.0 (and later OS/2) and 32-bitWindows NT, all under control of a 32-bit kernel. However, 64-bit operating systems (which run inlong mode) no longer use this, since virtual 8086 mode has been removed from long mode.
|
https://en.wikipedia.org/wiki/Protected_mode
|
Inmathematics,finite field arithmeticisarithmeticin afinite field(afieldcontaining a finite number ofelements) contrary to arithmetic in a field with an infinite number of elements, like the field ofrational numbers.
There are infinitely many different finite fields. Their number of elements is necessarily of the form p^n, where p is a prime number and n is a positive integer, and two finite fields of the same size are isomorphic. The prime p is called the characteristic of the field, and the positive integer n is called the dimension of the field over its prime field.
Finite fields are used in a variety of applications, including in classicalcoding theoryinlinear block codessuch asBCH codesandReed–Solomon error correction, incryptographyalgorithms such as theRijndael(AES) encryption algorithm, in tournament scheduling, and in thedesign of experiments.
The finite field with p^n elements is denoted GF(p^n) and is also called the Galois field of order p^n, in honor of the founder of finite field theory, Évariste Galois. GF(p), where p is a prime number, is simply the ring of integers modulo p. That is, one can perform operations (addition, subtraction, multiplication) using the usual operations on integers, followed by reduction modulo p. For instance, in GF(5), 4 + 3 = 7 is reduced to 2 modulo 5. Division is multiplication by the inverse modulo p, which may be computed using the extended Euclidean algorithm.
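As a sketch of these operations (with p = 5 chosen only for the example and all names invented for illustration), GF(p) arithmetic and the extended Euclidean inverse might be coded in C as follows:

#include <stdio.h>

static const int P = 5;                   /* any prime modulus */

int gf_add(int a, int b) { return (a + b) % P; }
int gf_mul(int a, int b) { return (a * b) % P; }

/* Multiplicative inverse of a modulo P by the extended Euclidean algorithm;
   a must not be divisible by P. */
int gf_inv(int a)
{
    int t = 0, new_t = 1, r = P, new_r = a % P;
    while (new_r != 0) {
        int q = r / new_r, tmp;
        tmp = t - q * new_t; t = new_t; new_t = tmp;
        tmp = r - q * new_r; r = new_r; new_r = tmp;
    }
    return (t % P + P) % P;
}

int main(void)
{
    printf("4 + 3 = %d in GF(5)\n", gf_add(4, 3));  /* prints 2 */
    printf("1 / 3 = %d in GF(5)\n", gf_inv(3));     /* prints 2, since 3 * 2 = 6 = 1 (mod 5) */
    return 0;
}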
A particular case isGF(2), where addition isexclusive OR(XOR) and multiplication isAND. Since the only invertible element is 1, division is theidentity function.
Elements of GF(p^n) may be represented as polynomials of degree strictly less than n over GF(p). Operations are then performed modulo m(x), where m(x) is an irreducible polynomial of degree n over GF(p), for instance using polynomial long division. Addition is the usual addition of polynomials, but the coefficients are reduced modulo p. Multiplication is also the usual multiplication of polynomials, but with coefficients multiplied modulo p and polynomials multiplied modulo the polynomial m(x).[1] This representation in terms of polynomial coefficients is called a monomial basis (a.k.a. 'polynomial basis').
There are other representations of the elements of GF(p^n); some are isomorphic to the polynomial representation above and others look quite different (for instance, using matrices). Using a normal basis may have advantages in some contexts.
When the prime is 2, it is conventional to express elements of GF(p^n) as binary numbers, with the coefficient of each term in a polynomial represented by one bit in the corresponding element's binary expression. Braces ( "{" and "}" ) or similar delimiters are commonly added to binary numbers, or to their hexadecimal equivalents, to indicate that the value gives the coefficients of a basis of a field, thus representing an element of the field. For example, in a characteristic 2 finite field the polynomial x^6 + x^4 + x + 1, the binary number {01010011}, and the hexadecimal value {53} are all representations of the same element.
There are many irreducible polynomials (sometimes calledreducing polynomials) that can be used to generate a finite field, but they do not all give rise to the same representation of the field.
A monic irreducible polynomial of degree n having coefficients in the finite field GF(q), where q = p^t for some prime p and positive integer t, is called a primitive polynomial if all of its roots are primitive elements of GF(q^n).[2][3] In the polynomial representation of the finite field, this implies that x is a primitive element. There is at least one irreducible polynomial for which x is a primitive element.[4] In other words, for a primitive polynomial, the powers of x generate every nonzero value in the field.
In the following examples it is best not to use the polynomial representation, as the meaning of x changes between the examples. The monic irreducible polynomial x^8 + x^4 + x^3 + x + 1 over GF(2) is not primitive. Let λ be a root of this polynomial (in the polynomial representation this would be x), that is, λ^8 + λ^4 + λ^3 + λ + 1 = 0. Now λ^51 = 1, so λ is not a primitive element of GF(2^8) and generates a multiplicative subgroup of order 51.[5] The monic irreducible polynomial x^8 + x^4 + x^3 + x^2 + 1 over GF(2) is primitive, and all 8 of its roots are generators of GF(2^8).
GF(2^8) has a total of 128 generators (see Number of primitive elements), and for a primitive polynomial, 8 of them are roots of the reducing polynomial. Having x as a generator for a finite field is beneficial for many computational mathematical operations.
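The orders quoted above are easy to check by machine. The following C sketch (function names invented for illustration) computes the multiplicative order of the element x, i.e. {02}, modulo a given degree-8 reduction polynomial; it prints 51 for x^8 + x^4 + x^3 + x + 1 (0x11B) and 255 for x^8 + x^4 + x^3 + x^2 + 1 (0x11D):

#include <stdio.h>
#include <stdint.h>

/* Multiply two GF(2)[x] polynomials of degree < 8 (carry-less), then reduce
   the result modulo the degree-8 polynomial 'mod' (whose bit 8 is set). */
static uint8_t polymul_mod(uint8_t a, uint8_t b, uint16_t mod)
{
    uint16_t r = 0;
    for (int i = 0; i < 8; i++)
        if (b & (1u << i))
            r ^= (uint16_t)a << i;
    for (int i = 15; i >= 8; i--)     /* cancel terms of degree 8 and above */
        if (r & (1u << i))
            r ^= (uint16_t)(mod << (i - 8));
    return (uint8_t)r;
}

/* Smallest k > 0 with x^k = 1 modulo the reduction polynomial. */
static int order_of_x(uint16_t mod)
{
    uint8_t v = 0x02;                 /* the element x */
    int k = 1;
    while (v != 0x01) {
        v = polymul_mod(v, 0x02, mod);
        k++;
    }
    return k;
}

int main(void)
{
    printf("%d\n", order_of_x(0x11B));   /* x^8+x^4+x^3+x+1:   prints 51  */
    printf("%d\n", order_of_x(0x11D));   /* x^8+x^4+x^3+x^2+1: prints 255 */
    return 0;
}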
Addition and subtraction are performed by adding or subtracting two of these polynomials together, and reducing the result modulo the characteristic.
In a finite field with characteristic 2, addition modulo 2, subtraction modulo 2, and XOR are identical. Thus, (x^6 + x^4 + x + 1) + (x^7 + x^6 + x^3 + x) = x^7 + x^4 + x^3 + 1, or in binary notation {01010011} + {11001010} = {10011001}.
Under regular addition of polynomials, the sum would contain a term 2x^6. This term becomes 0·x^6 and is dropped when the answer is reduced modulo 2.
For comparison, the normal algebraic sum of the two polynomials above is x^7 + 2x^6 + x^4 + x^3 + 2x + 1, whereas their characteristic 2 finite field sum is x^7 + x^4 + x^3 + 1.
In computer science applications, the operations are simplified for finite fields of characteristic 2, also called GF(2^n) Galois fields, making these fields especially popular choices for applications.
Multiplication in a finite field is multiplicationmoduloanirreduciblereducing polynomial used to define the finite field. (I.e., it is multiplication followed by division using the reducing polynomial as the divisor—the remainder is the product.) The symbol "•" may be used to denote multiplication in a finite field.
Rijndael (standardised as AES) uses the characteristic 2 finite field with 256 elements, which can also be called the Galois field GF(2^8). It employs the reducing polynomial x^8 + x^4 + x^3 + x + 1 for multiplication.
For example, {53} • {CA} = {01} in Rijndael's field, because (x^6 + x^4 + x + 1)(x^7 + x^6 + x^3 + x) = x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + x, and this product reduced modulo x^8 + x^4 + x^3 + x + 1 equals 1.
The latter can be demonstrated through long division, most conveniently carried out in binary notation; note that exclusive OR is applied at each step rather than the arithmetic subtraction one might use in grade-school long division.
(The elements {53} and {CA} aremultiplicative inversesof one another since their product is1.)
Multiplication in this particular finite field can also be done using a modified version of the "peasant's algorithm". Each polynomial is represented using the same binary notation as above. Eight bits is sufficient because only degrees 0 to 7 are possible in the terms of each (reduced) polynomial.
This algorithm uses threevariables(in the computer programming sense), each holding an eight-bit representation.aandbare initialized with the multiplicands;paccumulates the product and must be initialized to 0.
At the start and end of the algorithm, and the start and end of each iteration, thisinvariantis true:ab+pis the product. This is obviously true when the algorithm starts. When the algorithm terminates,aorbwill be zero sopwill contain the product.
This algorithm generalizes easily to multiplication over other fields of characteristic 2, changing the lengths ofa,b, andpand the value0x1bappropriately.
The multiplicative inverse for an element a of a finite field can be calculated in a number of different ways, for example by using the extended Euclidean algorithm, by raising a to the power q − 2 (where q is the order of the field, since a^(q−1) = 1 for every nonzero a), or by means of logarithm tables as described below.
When developing algorithms for Galois field computation on small Galois fields, a common performance optimization approach is to find a generator g and use the identity a • b = g^(log_g(a) + log_g(b))
to implement multiplication as a sequence of table look ups for the logg(a) andgyfunctions and an integer addition operation. This exploits the property that every finite field contains generators. In the Rijndael field example, the polynomialx+ 1(or {03}) is one such generator. A necessary but not sufficient condition for a polynomial to be a generator is to beirreducible.
An implementation must test for the special case ofaorbbeing zero, as the product will also be zero.
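A C sketch of this table-driven multiplication for the Rijndael field, using {03} as the generator (the table and function names are invented for illustration):

#include <stdint.h>

static uint8_t gexp[256];   /* gexp[i] = {03}^i                         */
static uint8_t glog[256];   /* glog[a] = i such that {03}^i = a, a != 0 */

/* Build exponent and logarithm tables for GF(2^8) with the Rijndael
   reducing polynomial, using {03} = x + 1 as the generator. */
static void build_tables(void)
{
    uint8_t x = 1;
    for (int i = 0; i < 255; i++) {
        gexp[i] = x;
        glog[x] = (uint8_t)i;
        /* multiply x by {03}: (x times {02}, with reduction) xor x */
        uint8_t x2 = (uint8_t)((x << 1) ^ ((x & 0x80) ? 0x1B : 0x00));
        x = (uint8_t)(x2 ^ x);
    }
    gexp[255] = gexp[0];     /* {03}^255 = 1, convenient for wrap-around */
}

/* Table-based multiplication; zero operands are handled separately
   because the logarithm of zero is undefined. */
static uint8_t gmul_table(uint8_t a, uint8_t b)
{
    if (a == 0 || b == 0)
        return 0;
    return gexp[(glog[a] + glog[b]) % 255];
}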
This same strategy can be used to determine the multiplicative inverse with the identity a^(−1) = g^(|g| − log_g(a)).
Here, the order of the generator, |g|, is the number of non-zero elements of the field. In the case of GF(2^8) this is 2^8 − 1 = 255. That is to say, for the Rijndael example: (x + 1)^255 = 1. So this can be performed with two look up tables and an integer subtraction. Using this idea for exponentiation also derives benefit: a^n = g^((n · log_g(a)) mod |g|).
This requires two table look ups, an integer multiplication and an integer modulo operation. Again a test for the special case a = 0 must be performed.
However, in cryptographic implementations, one has to be careful with such implementations since thecache architectureof many microprocessors leads to variable timing for memory access. This can lead to implementations that are vulnerable to atiming attack.
For binary fields GF(2^n), field multiplication can be implemented using a carryless multiply such as the CLMUL instruction set, which is good for n ≤ 64. A multiplication uses one carryless multiply to produce a product (up to 2n − 1 bits), another carryless multiply of a pre-computed inverse of the field polynomial to produce a quotient = ⌊product / (field polynomial)⌋, a multiply of the quotient by the field polynomial, then an xor: result = product ⊕ ((field polynomial) ⌊product / (field polynomial)⌋). The last 3 steps (pclmulqdq, pclmulqdq, xor) are used in the Barrett reduction step for fast computation of CRC using the x86 pclmulqdq instruction.[8]
When k is a composite number, there will exist isomorphisms from a binary field GF(2^k) to an extension field of one of its subfields, that is, GF((2^m)^n) where k = mn. Utilizing one of these isomorphisms can simplify the mathematical considerations, as the degree of the extension is smaller, with the trade-off that the elements are now represented over a larger subfield.[9] To reduce gate count for hardware implementations, the process may involve multiple nesting, such as mapping from GF(2^8) to GF(((2^2)^2)^2).[10]
Here is some C code which will add and multiply numbers in the characteristic 2 finite field of order 2^8, used for example by the Rijndael algorithm or Reed–Solomon, using the Russian peasant multiplication algorithm:
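A minimal sketch of such a routine, following the algorithm described above (a sketch, not the exact listing from the original source), is:

#include <stdint.h>

/* Addition and subtraction in GF(2^8) are both just XOR. */
uint8_t gadd(uint8_t a, uint8_t b)
{
    return a ^ b;
}

/* Multiplication in GF(2^8) with the Rijndael reducing polynomial
   x^8 + x^4 + x^3 + x + 1, by the Russian peasant method.  At every step
   the invariant holds that a*b + p (in the field) equals the product. */
uint8_t gmul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    for (int i = 0; i < 8; i++) {
        if (b & 1)
            p ^= a;                /* add a into the accumulator             */
        int carry = a & 0x80;      /* will the next shift overflow degree 7? */
        a <<= 1;                   /* multiply a by x                        */
        if (carry)
            a ^= 0x1B;             /* reduce: low 8 bits of 0x11B            */
        b >>= 1;                   /* move on to the next bit of b           */
    }
    return p;                      /* e.g. gmul(0x53, 0xCA) == 0x01          */
}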
This example hascache, timing, and branch prediction side-channelleaks, and is not suitable for use in cryptography.
ThisDprogram will multiply numbers in Rijndael's finite field and generate aPGMimage:
This example does not use any branches or table lookups in order to avoid side channels and is therefore suitable for use in cryptography.
|
https://en.wikipedia.org/wiki/Finite_field_arithmetic
|
Aheterarchyis a system of organization where the elements of the organization are unranked (non-hierarchical) or where they possess the potential to be ranked a number of different ways.[1]Definitions of the term vary among the disciplines: in social and information sciences, heterarchies arenetworksof elements in which each element shares the same "horizontal" position of power and authority, each playing a theoretically equal role. In biological taxonomy, however, the requisite features of heterarchy involve, for example, a species sharing, with a species in a differentfamily, a common ancestor which it does not share with members of its own family. This is theoretically possible under principles of "horizontal gene transfer".
A heterarchy may be orthogonal to ahierarchy, subsumed to a hierarchy, or it may contain hierarchies; the two kinds of structure are not mutually exclusive. In fact, each level in a hierarchical system is composed of a potentially heterarchical group.
The concept of heterarchy was first employed in a modern context bycyberneticianWarren McCullochin 1945.[2]As Carole L. Crumley has summarised, "[h]e examined alternativecognitivestructure(s), the collective organization of which he termed heterarchy. He demonstrated that the human brain, while reasonably orderly was not organized hierarchically. This understanding revolutionized the neural study of the brain and solved major problems in the fields ofartificial intelligenceand computer design."[3]
In a group of related items, heterarchy is a state wherein any pair of items is likely to be related in two or more differing ways. Whereas hierarchies sort groups into progressively smaller categories and subcategories, heterarchies divide and unite groups variously, according to multiple concerns that emerge or recede from view according to perspective. Crucially, no one way of dividing a heterarchical system can ever be a totalizing or all-encompassing view of the system, each division is clearly partial, and in many cases, a partial division leads us, as perceivers, to a feeling of contradiction that invites a new way of dividing things. (But of course the next view is just as partial and temporary.) Heterarchy is a name for this state of affairs, and a description of a heterarchy usually requires ambivalent thought, a willingness to ambulate freely between unrelated perspectives.
However, because the requirements for a heterarchical system are not exactly stated, identifying a heterarchy through the use of archaeological materials can often prove to be difficult.[4]
In an attempt to operationalize heterarchies, Schoenherr and Dopko[5]use the concept of reward systems andRelational models theory.Relational modelsare defined by distinct expectations for exchanges between individuals in terms of authority ranking, equality matching, communality, and market pricing. They suggest that discrepancies in the kind of reward that is used to assign merit and differences in merit assigned to specific groups of individuals can be used as evidence for heterarchical structure. Their study demonstrates differences in the number of women assigned PhDs, the number of women receiving academic appointments in high status academic institutions, and scientific awards.
Examples of heterarchical conceptualizations include theGilles Deleuze/Félix Guattariconceptions ofdeterritorialization,rhizome, andbody without organs.
Numerous observers in the information sciences have argued that heterarchical structure processes more information more effectively than hierarchical design. An example of the potential effectiveness of heterarchy would be the rapid growth of the heterarchicalWikipediaproject in comparison with the failed growth of theNupediaproject.[6]Heterarchy increasingly trumps hierarchy as complexity and rate of change increase.
Informational heterarchy can be defined as an organizational form somewhere between hierarchy and network that provides horizontal links that permit different elements of an organization to cooperate whilst individually optimizing different success criteria. In an organizational context the value of heterarchy derives from the way in which it permits the legitimate valuation of multiple skills, types of knowledge or working styles without privileging one over the other. In information science, therefore, heterarchy,responsible autonomyandhierarchyare sometimes combined under the umbrella termTriarchy.
This concept has also been applied to the field ofarchaeology, where it has enabled researchers to better understand social complexity. For further reading see the works of Carole Crumley.
The term heterarchy is used in conjunction with the concepts ofholonsandholarchyto describe individualsystemsat each level of a holarchy.
A heterarchical network could be used to describeneuronconnections or democracy, although there are clearly hierarchical elements in both.[7]
AnthropologistDmitri Bondarenkofollows Carole Crumley in her definition of heterarchy as "the relation of elements to one another when they are unranked or when they possess the potential for being ranked in a number of different ways" and argues that it is therefore not strictly the opposite of hierarchy, but is rather the opposite ofhomoarchy,[8]itself definable as "the relation of elements to one another when they possess the potential for being ranked in one way only".[9]
David C. Stark (born 1950) has contributed to developing the concept of heterarchy in the sociology of organizations.
Politicalhierarchiesand heterarchies are systems in which multiple dynamic power-structures govern the actions of the system. They represent different types ofnetworkstructures that allow differing degrees of connectivity. In a (tree-structured)hierarchyeverynodeis connected to at most oneparent nodeand to zero or morechild nodes. In a heterarchy, however, a node can be connected to any of its surrounding nodes without needing to go through or to get permission from some other node.
Socially, a heterarchy distributesprivilegeand decision-making among participants, while a hierarchy assigns more power and privilege to the members "high" in the structure. In a systemic perspective, Gilbert Probst, Jean-Yves Mercier and others describe heterarchy as the flexibility of the formal relationships inside an organization.[10]Domination and subordination links can be reversed and privileges can be redistributed in each situation, following the needs of the system.[11]
Researchers have also framed higher-education staff as operating in a heterarchical structure. Examining sex-based discrimination in psychology, Schoenherr and Dopko[5] identify discrepancies between the number of women awarded PhDs, the number of professorships held by women, and the number of scientific awards granted to women in the behavioral sciences and by the American Psychological Association. They argue that this data supports different reward systems, representing heterarchies. They go on to connect the notion of heterarchy to contemporary models of relational structures in psychology (i.e., relational models theory). Schoenherr[12] has argued that this is also reflected in divisions within professional psychology, such as those between clinical psychologists and experimental psychologists. Using the history of professional psychology in Canada and the United States, he provides quotations from professional organizations to illustrate the disparate identities and reward systems. Rather than just reflecting a feature of psychological science, these case studies were presented as evidence of heterarchies in academia and in social organizations more generally.
|
https://en.wikipedia.org/wiki/Heterarchy
|
Incomputer networking, athin client,sometimes calledslim clientorlean client, is a simple (low-performance)computerthat has beenoptimizedforestablishing a remote connectionwith aserver-based computing environment. They are sometimes known asnetwork computers, or in their simplest form aszero clients. The server does most of the work, which can include launchingsoftwareprograms, performingcalculations, andstoring data. This contrasts with arich clientor a conventionalpersonal computer; the former is also intended for working in aclient–server modelbut has significant local processing power, while the latter aims to perform its function mostly locally.[1]
Thin clients occur as components of a broader computing infrastructure, where many clients share their computations with a server orserver farm. The server-side infrastructure usescloud computingsoftware such asapplication virtualization, hosted shared desktop (HSD) ordesktop virtualization(VDI). This combination forms what is known as a cloud-based system, where desktop resources are centralized at one or moredata centers. The benefits of centralization are hardware resource optimization, reducedsoftware maintenance, and improvedsecurity.
Thin client hardware generally supports commonperipherals, such as keyboards, mice,monitors,jacksfor sound peripherals, and openportsforUSBdevices (e.g., printer, flash drive, webcam). Some thin clients include (legacy)serialorparallel portsto support older devices, such as receipt printers, scales or time clocks. Thin client software typically consists of agraphical user interface(GUI), cloud access agents (e.g.,RDP,ICA,PCoIP), a localweb browser,terminal emulators(in some cases), and a basic set of localutilities.
In using cloud-based architecture, the server takes on the processing load of several client sessions, acting as a host for each endpoint device. The client software is narrowly purposed and lightweight; therefore, only the host server or server farm needs to be secured, rather than securing software installed on every endpoint device (although thin clients may still require basic security and strong authentication to prevent unauthorized access). One of the combined benefits of using cloud architecture with thin client desktops is that critical IT assets are centralized for better utilization of resources. Unused memory, bussing lanes, and processor cores within an individual user session, for example, can be leveraged for other active user sessions.
The simplicity of thin client hardware and software results in a very lowtotal cost of ownership, but some of these initial savings can be offset by the need for a more robust cloud infrastructure required on the server side.
An alternative to traditional server deployment which spreads out infrastructure costs over time is a cloud-based subscription model known asdesktop as a service, which allows IT organizations to outsource the cloud infrastructure to a third party.
Thin client computing is known to simplify the desktop endpoints by reducing the client-side software footprint. With a lightweight, read-onlyoperating system(OS), client-side setup and administration is greatly reduced. Cloud access is the primary role of a thin client which eliminates the need for a large suite of local user applications, data storage, and utilities. This architecture shifts most of the software execution burden from the endpoint to the data center. User assets are centralized for greater visibility. Data recovery and desktop repurposing tasks are also centralized for faster service and greater scalability.
While the server must be robust enough to handle several client sessions at once, thin client hardware requirements are minimal compared to that of a traditional PC laptop or desktop. Most thin clients have low-energy processors,flash storage, memory, and no moving parts. This reduces the cost, power consumption (heat, noise and vibrations), making them affordable to own and easy to replace or deploy. Numerous thin clients also useRaspberry Pis.[2]Since thin clients consist of fewer hardware components than a traditional desktop PC, they can operate in morehostile environments. And because they typically don't store critical data locally, risk of theft is minimized because there is little or no user data to be compromised.
Modern thin clients have come a long way to meet the demands of today's graphical computing needs. New generations of low-energy chipset and central processing unit (CPU) combinations improve processing power and graphical capabilities. To minimize the latency of high-resolution video sent across the network, some host software stacks leverage multimedia redirection (MMR) techniques to offload video rendering to the desktop device. Video codecs are often embedded on the thin client to support these various multimedia formats. Other host software stacks make use of the User Datagram Protocol (UDP) in order to accelerate the fast-changing pixel updates required by modern video content. Thin clients typically support local software agents capable of accepting and decoding UDP.
Some of the more graphically intense use cases remain a challenge for thin clients. These use cases might include applications like photo editors, 3D drawing programs, and animation tools. This can be addressed at the host server using dedicatedGPUcards, allocation ofvGPUs(virtual GPU), workstation cards, andhardware accelerationcards. These solutions allow IT administrators to provide power-user performance where it is needed to a relatively generic endpoint device such as a thin client.
To achieve such simplicity, thin clients sometimes lag behind desktop PCs in terms of extensibility. For example, if a local software utility or set of device drivers are needed in order to support a locally attached peripheral device (e.g. printer, scanner,biometric security device), the thin client operating system may lack the resources needed to fully integrate the required dependencies (although dependencies can sometimes be added if they can be identified). Modern thin clients address this limitation via port mapping or USB redirection software. However, these methods cannot address all scenarios. Therefore, it is good practice to perform validation tests of locally attached peripherals in advance to ensure compatibility. Further, in large distributed desktop environments, printers are often networked, negating the need for device drivers on every desktop.
While running local productivity applications goes beyond the normal scope of a thin client, it is sometimes needed in rare use cases. License restrictions that apply to thin clients can sometimes prevent them from supporting these applications. Local storage constraints may also limit the space required to install large applications or application suites.
It is also important to acknowledge that network bandwidth and performance is more critical in any type of cloud-based computing model. IT organizations must ensure that their network can accommodate the number of users that they need to serve. If demand for bandwidth exceeds network limits, it could result in a major loss of end user productivity.
A similar risk exists inside the data center. Servers must be sized correctly in order to deliver adequate performance to end users. In a cloud-based computing model, the servers can also represent a single point of failure risk. If a server fails, end users lose access to all of the resources supported by that server. This risk can be mitigated by building redundancies, fail-over processes, backups, andload balancingutilities into the system. Redundancy provides reliable host availability but it can add cost to smaller user populations that lack scale.
Popular providers of thin clients include Chip PC Technologies,Dell(acquiredWyseTechnology in 2012),HP,ClearCube,IGEL Technology,LG,NComputing, Stratodesk,Samsung Electronics, ThinClient Direct, and ZeeTim.
Thin clients have their roots in multi-user systems, traditionally mainframes accessed by some sort of computer terminal. As computer graphics matured, these terminals transitioned from providing a command-line interface to a full graphical user interface, as is common on modern advanced thin clients. The prototypical multi-user environment along these lines, Unix, began to support fully graphical X terminals, i.e., devices running display server software, from about 1984. X terminals remained relatively popular even after the arrival of other thin clients in the mid-to-late 1990s. Modern Unix derivatives like BSD and Linux continue the tradition of the multi-user, remote display/input session. Typically, X software is not made available on non-X-based thin clients, although there is no technical reason preventing it.
Windows NTbecame capable of multi-user operations primarily through the efforts ofCitrix Systems, which repackagedWindows NT 3.51as the multi-user operating systemWinFramein 1995, launched in coordination with Wyse Technology's Winterm thin client.Microsoftlicensed this technology back from Citrix and implemented it intoWindows NT 4.0Terminal Server Edition, under a project codenamed "Hydra". Windows NT then became the basis of Windows 2000 and Windows XP. As of 2011, Microsoft Windows systems support graphical terminals via theRemote Desktop Servicescomponent. The Wyse Winterm was the first Windows-display-focused thin client (AKA Windows Terminal) to access this environment.
The termthin clientwas coined in 1993[3]by Tim Negris, VP of Server Marketing atOracle Corporation, while working with company founderLarry Ellisonon the launch ofOracle 7. At the time, Oracle wished to differentiate their server-oriented software from Microsoft's desktop-oriented products. Ellison subsequently popularized Negris'buzzwordwith frequent use in his speeches and interviews about Oracle products. Ellison would go on to be a founding board member of thin client maker Network Computer, Inc (NCI), later renamed Liberate.[4]
The term stuck for several reasons. The earlier term "graphical terminal" had been chosen to distinguish such terminals from text-based terminals, and thus put the emphasis heavily ongraphics– which became obsolete as a distinguishing characteristic in the 1990s as text-only physical terminals themselves became obsolete, and text-only computer systems (a few of which existed in the 1980s) were no longer manufactured. The term "thin client" also conveys better what was then viewed as the fundamental difference: thin clients can be designed with less expensive hardware, because they have reduced computational workloads.
By the 2010s, thin clients were not the only desktop devices for general purpose computing that were "thin" – in the sense of having a small form factor and being relatively inexpensive. Thenettopform factor for desktop PCs was introduced, and nettops could run full feature Windows or Linux;tablets,tablet-laptop hybridshad also entered the market. However, while there was now little size difference, thin clients retained some key advantages over these competitors, such as not needing a local drive. However, "thin client" can be amisnomerfor slim form-factor computers usingflash memorysuch ascompactflash,SD card, or permanent flash memory as ahard disksubstitute. In 2013, a Citrix employee experimented with aRaspberry Pias a thin client.[5][6]Since then, several manufacturers have introduced their version of Raspberry Pi thin clients.[2]
|
https://en.wikipedia.org/wiki/Thin_client
|
High Speed Packet Access(HSPA)[1]is an amalgamation of twomobileprotocols—High Speed Downlink Packet Access (HSDPA) and High Speed Uplink Packet Access (HSUPA)—that extends and improves the performance of existing3Gmobile telecommunication networks using theWCDMAprotocols. A further-improved3GPPstandard calledEvolved High Speed Packet Access(also known as HSPA+) was released late in 2008, with subsequent worldwide adoption beginning in 2010. The newer standard allowsbit ratesto reach as high as 337 Mbit/s in the downlink and 34 Mbit/s in the uplink; however, these speeds are rarely achieved in practice.[2]
The first HSPA specifications supported increased peak data rates of up to 14 Mbit/s in the downlink and 5.76 Mbit/s in the uplink. They also reduced latency and provided up to five times more system capacity in the downlink and up to twice as much system capacity in the uplink compared with original WCDMA protocol.
High Speed Downlink Packet Access(HSDPA) is an enhanced3G(third-generation)mobilecommunications protocolin the High-Speed Packet Access (HSPA) family. HSDPA is also known as3.5Gand3G+. It allows networks based on theUniversal Mobile Telecommunications System(UMTS) to have higher data speeds and capacity. HSDPA also decreaseslatency, and therefore theround-trip timefor applications.
HSDPA was introduced in3GPPRelease 5. It was accompanied by an improvement to the uplink that provided a new bearer of 384 kbit/s (the previous maximum bearer was 128 kbit/s).Evolved High Speed Packet Access(HSPA+), introduced in 3GPP Release 7, further increased data rates by adding 64QAM modulation,MIMO, andDual-Carrier HSDPAoperation. Under 3GPP Release 11, even higher speeds of up to 337.5 Mbit/s were possible.[3]
The first phase of HSDPA was specified in 3GPP Release 5. This phase introduced new basic functions and was aimed to achieve peak data rates of 14.0 Mbit/s with significantly reduced latency. The improvement in speed and latency reduced the cost per bit and enhanced support for high-performance packet data applications. HSDPA is based on shared channel transmission, and its key features are shared channel and multi-code transmission,higher-order modulation, shortTransmission Time Interval(TTI), fast link adaptation and scheduling, and fasthybrid automatic repeat request(HARQ). Additional new features include the High Speed Downlink Shared Channels (HS-DSCH),quadrature phase-shift keying, 16-quadrature amplitude modulation, and the High Speed Medium Access protocol (MAC-hs) in base stations.
The upgrade to HSDPA is often just a software update for WCDMA networks. In HSDPA, voice calls are usually prioritized over data transfer.
The following table is derived from table 5.1a of the release 11 of 3GPP TS 25.306[4]and shows maximum data rates of different device classes and by what combination of features they are achieved. The per-cell, per-stream data rate is limited by the "maximum number of bits of an HS-DSCH transport block received within an HS-DSCH TTI" and the "minimum inter-TTI interval". The TTI is 2 milliseconds. So, for example, Cat 10 can decode 27,952 bits / 2 ms = 13.976 Mbit/s (and not 14.4 Mbit/s as often claimed incorrectly). Categories 1-4 and 11 have inter-TTI intervals of 2 or 3, which reduces the maximum data rate by that factor. Dual-Cell and MIMO 2x2 each multiply the maximum data rate by 2, because multiple independent transport blocks are transmitted over different carriers or spatial streams, respectively. The data rates given in the table are rounded to one decimal point.
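The calculation just described can be written as a one-line formula; the following C sketch (illustrative only, not part of the specification) reproduces the Category 10 figure:

#include <stdio.h>

/* Peak rate in Mbit/s from the transport block size (bits per TTI), the
   minimum inter-TTI interval, and the carrier/stream multipliers. */
double hsdpa_peak_mbps(int block_bits, int inter_tti, int carriers, int streams)
{
    const double tti_seconds = 0.002;            /* one TTI is 2 ms */
    return block_bits / (tti_seconds * inter_tti) * carriers * streams / 1e6;
}

int main(void)
{
    /* Category 10: 27,952 bits per 2 ms TTI, single carrier, single stream. */
    printf("%.3f Mbit/s\n", hsdpa_peak_mbps(27952, 1, 1, 1));   /* 13.976 */
    return 0;
}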
Further UE categories were defined from 3GPP Release 7 onwards as Evolved HSPA (HSPA+) and are listed in Evolved HSDPA UE Categories.
As of 28 August 2009, 250 HSDPA networks had commercially launchedmobile broadbandservices in 109 countries. 169 HSDPA networks supported 3.6 Mbit/s peak downlink data throughput, and a growing number delivered 21 Mbit/s peak data downlink.
CDMA2000-EVDOnetworks had the early lead on performance. In particular,Japaneseproviders were highly successful benchmarks for this network standard. However, this later changed in favor of HSDPA, as an increasing number of providers worldwide began adopting it.
In 2007, an increasing number of telcos worldwide began sellingHSDPA USB modemsto provide mobile broadband connections. In addition, the popularity of HSDPA landline replacement boxes grew—these provided HSDPA for data viaEthernetandWi-Fi, as well as ports for connecting traditional landline telephones. Some were marketed with connection speeds of "up to 7.2 Mbit/s"[5]under ideal conditions. However, these services could be slower, such as when in fringe coverage indoors.
High-Speed Uplink Packet Access(HSUPA) is a 3G mobile telephonyprotocolin the HSPA family. It is specified and standardized in 3GPP Release 6 to improve the uplink data rate to 5.76 Mbit/s, extend capacity, and reduce latency. Together with additional improvements, this allows for new features such asVoice over Internet Protocol(VoIP), uploading pictures, and sending large e-mail messages.
HSUPA was the second major step in the UMTS evolution process. It has since been superseded by newer technologies with higher transfer rates, such asLTE(150 Mbit/s for downlink and 50 Mbit/s for uplink) andLTE Advanced(maximum downlink rates of over 1 Gbit/s).
HSUPA adds a new transport channel to WCDMA, called the Enhanced Dedicated Channel (E-DCH). It also features several improvements similar to those of HSDPA, including multi-code transmission, shorter transmission time interval enabling fasterlink adaptation, fast scheduling, and fasthybrid automatic repeat request(HARQ) with incremental redundancy, makingretransmissionsmore effective. Similar to HSDPA, HSUPA uses a "packet scheduler", but it operates on a "request-grant" principle where theuser equipment(UE) requests permission to send data and the scheduler decides when and how many UEs will be allowed to do so. A request for transmission contains data about the state of the transmission buffer and the queue at the UE and its available power margin. However, unlike HSDPA, uplink transmissions are notorthogonalto each other.
In addition to this "scheduled" mode of transmission, the standards allow a self-initiated transmission mode from the UEs, denoted "non-scheduled". The non-scheduled mode can, for example, be used for VoIP services for which even the reduced TTI and theNode Bbased scheduler are unable to provide the necessary short delay time and constant bandwidth.
Each MAC-d flow (i.e., QoS flow) is configured to use either scheduled or non-scheduled modes. The UE adjusts the data rate for scheduled and non-scheduled flows independently. The maximum data rate of each non-scheduled flow is configured at call setup, and typically not frequently changed. The power used by the scheduled flows is controlled dynamically by the Node B through absolute grant (consisting of an actual value) and relative grant (consisting of a single up/down bit) messages.
At the physical layer, HSUPA introduces several new channels, including the E-DPDCH and E-DPCCH (which carry the uplink user data and associated control information), the E-AGCH and E-RGCH (which carry the absolute and relative scheduling grants), and the E-HICH (which carries the HARQ acknowledgement indicators).
The following table shows uplink speeds for the different categories of HSUPA:
Further UE categories were defined from 3GPP Release 7 onwards as Evolved HSPA (HSPA+) and are listed in Evolved HSUPA UE Categories.
Evolved HSPA(also known as HSPA Evolution, HSPA+) is a wireless broadband standard defined in3GPPrelease 7 of the WCDMA specification. It provides extensions to the existing HSPA definitions and is thereforebackward compatibleall the way to the original Release 99 WCDMA network releases. Evolved HSPA provides data rates between 42.2 and 56 Mbit/s in the downlink and 22 Mbit/s in the uplink (per 5 MHz carrier) with multiple input, multiple output (2x2 MIMO) technologies and higher order modulation (64 QAM). With Dual Cell technology, these can be doubled.
Since 2011, HSPA+ has been widely deployed among WCDMA operators, with nearly 200 commitments.[6]
|
https://en.wikipedia.org/wiki/3.5G
|
TheOpen Worldwide Application Security Project(formerly Open Web Application Security Project[7]) (OWASP) is an online community that produces freely available articles, methodologies, documentation, tools, and technologies in the fields ofIoT, system software andweb application security.[8][9][10]The OWASP provides free and open resources. It is led by a non-profit called The OWASP Foundation. The OWASP Top 10 2021 is the published result of recent research based on comprehensive data compiled from over 40 partner organizations.
Mark Curphey started OWASP on September 9, 2001.[2]Jeff Williams served as the volunteer Chair of OWASP from late 2003 until September 2011. As of 2015, Matt Konda chaired the Board.[11]
The OWASP Foundation, a 501(c)(3) non-profit organization in the US established in 2004, supports the OWASP infrastructure and projects. Since 2011, OWASP is also registered as a non-profit organization in Belgium under the name of OWASP Europe VZW.[12]
In February 2023, Bil Corry, an OWASP Foundation Global Board of Directors officer,[13] reported on Twitter[7] that the board had voted to rename the organization from the Open Web Application Security Project to its current name, replacing "Web" with "Worldwide".
They have several certification schemes to certify the knowledge of students in particular areas of security.
These include a baseline set of security standards applicable across technology stacks, teaching learners about the OWASP top ten vulnerabilities.[30]
The OWASP organization received the 2014Haymarket Media GroupSC MagazineEditor's Choice award.[9][41]
|
https://en.wikipedia.org/wiki/OWASP
|
Alanguage island(a calque of GermanSprachinsel; alsolanguage enclave,language pocket) is anenclaveof alanguagethat is surrounded by one or more different languages.[1]The term was introduced in 1847.[2]Many speakers of these languages also have their own distinctculture.
Language islands often form as a result of migration, colonization, imperialism, or trade without a common tongue. Language islands are common among indigenous peoples, especially in the Americas, where colonization has led them to become greatly isolated.
Patagonian Welsh is the dialect of theWelsh languagespoken inPatagonia, a region of southernArgentina. Despite it still being mutually intelligible with European Welsh, it has been heavily influenced bySpanish, the national language ofArgentina. Many Welsh Argentinians arebilingualor sometimestrilingualin Spanish, Welsh, andEnglish.
Reports by English education officials spread racist propaganda intended to raise suspicion of the Welsh people;[3] this led to laws prohibiting the Welsh language and parts of its culture.
Talian is a dialect of Wider Venetian spoken in several provinces of what is now Brazil.[4] It is the result of Italian settlers, most of them from Veneto, taking up permanent residence along the Brazilian coast, although some of them migrated further into the interior.
Patagonian Welsh and Talian, described above, are two examples of language islands.
|
https://en.wikipedia.org/wiki/Language_island
|
Instatisticsand in particular inregression analysis, adesign matrix, also known asmodel matrixorregressor matrixand often denoted byX, is amatrixof values ofexplanatory variablesof a set of objects. Each row represents an individual object, with the successive columns corresponding to the variables and their specific values for that object. The design matrix is used in certainstatistical models, e.g., thegeneral linear model.[1][2][3]It can containindicator variables(ones and zeros) that indicate group membership in anANOVA, or it can contain values ofcontinuous variables.
The design matrix contains data on theindependent variables(also called explanatory variables), in a statistical model that is intended to explain observed data on a response variable (often called adependent variable). The theory relating to such models uses the design matrix as input to somelinear algebra: see for examplelinear regression. A notable feature of the concept of a design matrix is that it is able to represent a number of differentexperimental designsand statistical models, e.g.,ANOVA,ANCOVA, and linear regression.
The design matrix is defined to be a matrixX{\displaystyle X}such thatXij{\displaystyle X_{ij}}(thejthcolumn of theithrow ofX{\displaystyle X}) represents the value of thejthvariable associated with theithobject.
A regression model may be represented via matrix multiplication as y = Xβ + e,
whereXis the design matrix,β{\displaystyle \beta }is a vector of the model's coefficients (one for each variable),e{\displaystyle e}is a vector of random errors with mean zero, andyis the vector of predicted outputs for each object.
The design matrix has dimensionn-by-p, wherenis the number of samples observed, andpis the number of variables (features) measured in all samples.[4][5]
In this representation different rows typically represent different repetitions of an experiment, while columns represent different types of data (say, the results from particular probes). For example, suppose an experiment is run where 10 people are pulled off the street and asked 4 questions. The data matrixMwould be a 10×4 matrix (meaning 10 rows and 4 columns). The datum in rowiand columnjof this matrix would be the answer of theithperson to thejthquestion.
The design matrix for anarithmetic meanis acolumnvector of ones.
This section gives an example ofsimple linear regression—that is, regression with only a single explanatory variable—with seven observations.
The seven data points are {yi, xi}, for i = 1, 2, …, 7. The simple linear regression model is yi = β0 + β1xi + εi,
where β0 is the y-intercept and β1 is the slope of the regression line. This model can be represented in matrix form as y = Xβ + ε, with y = (y1, …, y7)ᵀ, β = (β0, β1)ᵀ, ε = (ε1, …, ε7)ᵀ, and X the 7×2 matrix whose ith row is (1, xi),
where the first column of 1s in the design matrix allows estimation of they-intercept while the second column contains thex-values associated with the correspondingy-values. The matrix whose columns are 1's andx's in this example is the design matrix.
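As an illustration (not part of the original example; all names are invented), the following C sketch builds such an N-by-2 design matrix and solves the 2×2 normal equations XᵀXβ = Xᵀy for the intercept and slope:

#include <stdio.h>

#define N 7   /* number of observations */

/* Fit y = b0 + b1*x by building the N-by-2 design matrix [1, x] and
   solving the normal equations (X^T X) b = X^T y directly. */
void fit_simple_regression(const double x[N], const double y[N],
                           double *b0, double *b1)
{
    double X[N][2];
    double xtx[2][2] = {{0, 0}, {0, 0}};
    double xty[2] = {0, 0};

    for (int i = 0; i < N; i++) {      /* design matrix: a column of 1s, a column of x */
        X[i][0] = 1.0;
        X[i][1] = x[i];
    }
    for (int i = 0; i < N; i++)        /* accumulate X^T X and X^T y */
        for (int j = 0; j < 2; j++) {
            xty[j] += X[i][j] * y[i];
            for (int k = 0; k < 2; k++)
                xtx[j][k] += X[i][j] * X[i][k];
        }

    double det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0];
    *b0 = ( xtx[1][1] * xty[0] - xtx[0][1] * xty[1]) / det;   /* 2x2 inverse applied to X^T y */
    *b1 = (-xtx[1][0] * xty[0] + xtx[0][0] * xty[1]) / det;
}

int main(void)
{
    double x[N] = {1, 2, 3, 4, 5, 6, 7};
    double y[N] = {2, 4, 6, 8, 10, 12, 14};
    double b0, b1;
    fit_simple_regression(x, y, &b0, &b1);
    printf("intercept %.2f, slope %.2f\n", b0, b1);   /* 0.00 and 2.00 */
    return 0;
}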
This section contains an example ofmultiple regressionwith two covariates (explanatory variables):wandx.
Again suppose that the data consist of seven observations, and that for each observed value to be predicted (yi), values wi and xi of the two covariates are also observed. The model to be considered is yi = β0 + β1wi + β2xi + εi, for i = 1, …, 7.
This model can be written in matrix terms as y = Xβ + ε, where β = (β0, β1, β2)ᵀ, ε = (ε1, …, ε7)ᵀ, and the ith row of X is (1, wi, xi).
Here the 7×3 matrix X is the design matrix.
This section contains an example with a one-way analysis of variance (ANOVA) with three groups and seven observations. The given data set has the first three observations belonging to the first group, the following two observations belonging to the second group and the final two observations belonging to the third group.
If the model to be fit is just the mean of each group, then the model is
{\displaystyle y_{ij}=\mu _{i}+\varepsilon _{ij},}
which can be written
{\displaystyle {\begin{bmatrix}y_{1}\\y_{2}\\y_{3}\\y_{4}\\y_{5}\\y_{6}\\y_{7}\end{bmatrix}}={\begin{bmatrix}1&0&0\\1&0&0\\1&0&0\\0&1&0\\0&1&0\\0&0&1\\0&0&1\end{bmatrix}}{\begin{bmatrix}\mu _{1}\\\mu _{2}\\\mu _{3}\end{bmatrix}}+{\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\varepsilon _{3}\\\varepsilon _{4}\\\varepsilon _{5}\\\varepsilon _{6}\\\varepsilon _{7}\end{bmatrix}}}
In this modelμi{\displaystyle \mu _{i}}represents the mean of thei{\displaystyle i}th group.
The ANOVA model could be equivalently written as each group parameter {\displaystyle \tau _{i}} being an offset from some overall reference. Typically this reference point is taken to be one of the groups under consideration. This makes sense in the context of comparing multiple treatment groups to a control group, with the control group serving as the "reference". In this example, group 1 was chosen to be the reference group. As such the model to be fit is
{\displaystyle y_{ij}=\mu +\tau _{i}+\varepsilon _{ij},}
with the constraint thatτ1{\displaystyle \tau _{1}}is zero.
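Under the grouping described above (three observations in group 1, two in group 2, and two in group 3), one way to write the corresponding matrix form is
{\displaystyle {\begin{bmatrix}y_{1}\\y_{2}\\y_{3}\\y_{4}\\y_{5}\\y_{6}\\y_{7}\end{bmatrix}}={\begin{bmatrix}1&0&0\\1&0&0\\1&0&0\\1&1&0\\1&1&0\\1&0&1\\1&0&1\end{bmatrix}}{\begin{bmatrix}\mu \\\tau _{2}\\\tau _{3}\end{bmatrix}}+{\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\varepsilon _{3}\\\varepsilon _{4}\\\varepsilon _{5}\\\varepsilon _{6}\\\varepsilon _{7}\end{bmatrix}}}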
In this modelμ{\displaystyle \mu }is the mean of the reference group andτi{\displaystyle \tau _{i}}is the difference from groupi{\displaystyle i}to the reference group.τ1{\displaystyle \tau _{1}}is not included in the matrix because its difference from the reference group (itself) is necessarily zero.
|
https://en.wikipedia.org/wiki/Design_matrix#Simple_linear_regression
|
Incomputing, particularly in the context of theUnixoperating system andits workalikes,forkis an operation whereby aprocesscreates a copy of itself. It is an interface which is required for compliance with thePOSIXandSingle UNIX Specificationstandards. It is usually implemented as aC standard librarywrapperto the fork, clone, or othersystem callsof thekernel. Fork is the primary method of process creation on Unix-like operating systems.
In multitasking operating systems, processes (running programs) need a way to create new processes, e.g. to run other programs. Fork and its variants are typically the only way of doing so in Unix-like systems. For a process to start the execution of a different program, it first forks to create a copy of itself. Then, the copy, called the "child process", makes any environment changes the child will need and then calls theexecsystem call to overlay itself with the new program: it ceases execution of its former program in favor of the new. (Or, in rarer cases, the child forgoes theexecand continues executing, as a separate process, some other functionality of the original program.)
The fork operation creates a separateaddress spacefor the child. The child process has an exact copy of all the memory segments of the parent process. In modern UNIX variants that follow thevirtual memorymodel fromSunOS-4.0,copy-on-writesemantics are implemented and the physical memory need not be actually copied. Instead,virtual memory pagesin both processes may refer to the same pages ofphysical memoryuntil one of them writes to such a page: then it is copied. This optimization is important in the common case where fork is used in conjunction with exec to execute a new program: typically, the child process performs only a small set of actions before it ceases execution of its program in favour of the program to be started, and it requires very few, if any, of its parent'sdata structures.
When a process calls fork, it is deemed theparent processand the newly created process is its child. After the fork, both processes not only run the same program, but they resume execution as though both had called the system call. They can then inspect the call'sreturn valueto determine their status, child or parent, and act accordingly.
One of the earliest references to a fork concept appeared inA Multiprocessor System DesignbyMelvin Conway, published in 1962.[1]Conway's paper motivated the implementation byL. Peter Deutschof fork in theGENIE time-sharing system, where the concept was borrowed byKen Thompsonfor its earliest appearance[2]inResearch Unix.[3][4]Fork later became a standard interface inPOSIX.[5]
The child process starts off with a copy of its parent'sfile descriptors.[5]For interprocess communication, the parent process will often create one or severalpipes, and then after forking the processes will close the ends of the pipes that they do not need.[6]
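As a minimal sketch of this pattern in C (the single two-byte message and the buffer size are arbitrary choices), the parent creates a pipe, forks, and each process closes the pipe end it does not use before the data is passed from parent to child:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];                         /* fds[0]: read end, fds[1]: write end */
        char buf[32];

        if (pipe(fds) == -1)
            return 1;

        pid_t pid = fork();
        if (pid == -1)
            return 1;

        if (pid == 0) {                     /* child: will read, so close the write end */
            close(fds[1]);
            ssize_t n = read(fds[0], buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s\n", buf);
            }
            close(fds[0]);
            _exit(0);
        }

        close(fds[0]);                      /* parent: will write, so close the read end */
        write(fds[1], "hi", 2);
        close(fds[1]);
        waitpid(pid, NULL, 0);
        return 0;
    }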
Vfork is a variant of fork with the samecalling conventionand much the same semantics, but only to be used in restricted situations. It originated in the3BSDversion of Unix,[7][8][9]the first Unix to support virtual memory. It was standardized by POSIX, which permitted vfork to have exactly the same behavior as fork, but was marked obsolescent in the 2004 edition[10]and was replaced byposix_spawn() (which is typically implemented via vfork) in subsequent editions.
When a vfork system call is issued, the parent process will be suspended until the child process has either completed execution or been replaced with a new executable image via one of the "exec" family of system calls. The child borrows the memory management unit setup from the parent, and memory pages are shared among the parent and child process with no copying done, and in particular with no copy-on-write semantics;[10] hence, if the child process makes a modification in any of the shared pages, no new page will be created and the modified pages are visible to the parent process too. Since there is absolutely no page copying involved (consuming additional memory), this technique is an optimization over plain fork in full-copy environments when used with exec. In POSIX, using vfork for any purpose except as a prelude to an immediate call to a function from the exec family (and a select few other operations) gives rise to undefined behavior.[10] Because the child borrows the parent's data structures outright rather than copying them, vfork remains faster than a fork that uses copy-on-write semantics.
System Vdid not support this function call before System VR4 was introduced,[citation needed]because the memory sharing that it causes is error-prone:
Vforkdoes not copy page tables so it is faster than the System Vforkimplementation. But the child process executes in the same physical address space as the parent process (until anexecorexit) and can thus overwrite the parent's data and stack. A dangerous situation could arise if a programmer usesvforkincorrectly, so the onus for callingvforklies with the programmer. The difference between the System V approach and the BSD approach is philosophical: Should the kernel hide idiosyncrasies of its implementation from users, or should it allow sophisticated users the opportunity to take advantage of the implementation to do a logical function more efficiently?
Similarly, the Linuxman pagefor vfork strongly discourages its use:[7][failed verification–see discussion]
It is rather unfortunate that Linux revived this specter from the past. The BSD man page states: "This system call will be eliminated when proper system sharing mechanisms are implemented. Users should not depend on the memory sharing semantics of vfork() as it will, in that case, be made synonymous to fork(2)."
Other problems withvforkincludedeadlocksthat might occur inmultithreadedprograms due to interactions withdynamic linking.[12]As a replacement for thevforkinterface, POSIX introduced theposix_spawnfamily of functions that combine the actions of fork and exec. These functions may be implemented as library routines in terms offork, as is done in Linux,[12]or in terms ofvforkfor better performance, as is done in Solaris,[12][13]but the POSIX specification notes that they were "designed askernel operations", especially for operating systems running on constrained hardware andreal-time systems.[14]
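A minimal sketch of posix_spawn in C (the program path /bin/echo and its argument are arbitrary choices); the single call takes the place of the fork-then-exec sequence, and the parent then waits for the spawned child:

    #include <spawn.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void)
    {
        pid_t pid;
        char *argv[] = { "echo", "hello from a spawned process", NULL };

        /* posix_spawn returns an error number directly rather than setting errno */
        int err = posix_spawn(&pid, "/bin/echo", NULL, NULL, argv, environ);
        if (err != 0) {
            fprintf(stderr, "posix_spawn: %s\n", strerror(err));
            return 1;
        }
        waitpid(pid, NULL, 0);              /* wait for the spawned child to finish */
        return 0;
    }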
While the 4.4BSD implementation got rid of the vfork implementation, causing vfork to have the same behavior as fork, it was later reinstated in theNetBSDoperating system for performance reasons.[8]
Some embedded operating systems such asuClinuxomit fork and only implement vfork, because they need to operate on devices where copy-on-write is impossible to implement due to lack of a memory management unit.
ThePlan 9operating system, created by the designers of Unix, includes fork but also a variant called "rfork" that permits fine-grained sharing of resources between parent and child processes, including the address space (except for astacksegment, which is unique to each process),environment variablesand the filesystem namespace;[15]this makes it a unified interface for the creation of both processes andthreadswithin them.[16]BothFreeBSD[17]andIRIXadopted the rfork system call from Plan 9, the latter renaming it "sproc".[18]
cloneis a system call in theLinux kernelthat creates a child process that may share parts of its executioncontextwith the parent. Like FreeBSD's rfork and IRIX's sproc, Linux's clone was inspired by Plan 9's rfork and can be used to implement threads (though application programmers will typically use a higher-level interface such aspthreads, implemented on top of clone). The "separate stacks" feature from Plan 9 and IRIX has been omitted because (according toLinus Torvalds) it causes too much overhead.[18]
In the original design of theVMSoperating system (1977), a copy operation with subsequent mutation of the content of a few specific addresses for the new process as in forking was considered risky.[citation needed]Errors in the current process state may be copied to a child process. Here, the metaphor of process spawning is used: each component of the memory layout of the new process is newly constructed from scratch. Thespawnmetaphor was later adopted in Microsoft operating systems (1993).
The POSIX-compatibility component ofVM/CMS(OpenExtensions) provides a very limited implementation of fork, in which the parent is suspended while the child executes, and the child and the parent share the same address space.[19]This is essentially avforklabelled as afork. (This applies to the CMS guest operating system only; other VM guest operating systems, such as Linux, provide standard fork functionality.)
The following variant of the"Hello, World!" programdemonstrates the mechanics of theforksystem call in theCprogramming language. The program forks into two processes, each deciding what functionality they perform based on the return value of the fork system call.Boilerplate codesuch asheader inclusionshas been omitted.
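A minimal sketch of such a program, consistent with the dissection that follows (the header inclusions are shown so that the sketch compiles as-is, and the exact wording of the messages is illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* split execution into two processes */

        if (pid == -1) {
            perror("fork");                 /* fork failed: no child was created */
            return EXIT_FAILURE;
        }

        if (pid == 0) {
            printf("Hello, World! (from the child)\n");
            _exit(EXIT_SUCCESS);            /* POSIX _exit, not the C standard exit */
        }

        waitpid(pid, NULL, 0);              /* parent: wait until the child has exited */
        return EXIT_SUCCESS;
    }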
What follows is a dissection of this program.
The first statement inmaincalls theforksystem call to split execution into two processes. The return value offorkis recorded in a variable of typepid_t, which is the POSIX type for process identifiers (PIDs).
Minus one indicates an error infork: no new process was created, so an error message is printed.
Ifforkwas successful, then there are now two processes, both executing themainfunction from the point whereforkhas returned. To make the processes perform different tasks, the program mustbranchon the return value offorkto determine whether it is executing as thechildprocess or theparentprocess.
In the child process, the return value appears as zero (which is an invalidprocess identifier). The child process prints the desired greeting message, then exits. (For technical reasons, the POSIX_exitfunction must be used here instead of the C standardexitfunction.)
The other process, the parent, receives fromforkthe process identifier of the child, which is always a positive number. The parent process passes this identifier to thewaitpidsystem call to suspend execution until the child has exited. When this has happened, the parent resumes execution and exits by means of thereturnstatement.
|
https://en.wikipedia.org/wiki/Fork_(system_call)
|
Incomputer science,control flow(orflow of control) is the order in which individualstatements,instructionsorfunction callsof animperativeprogramareexecutedor evaluated. The emphasis on explicit control flow distinguishes animperative programminglanguage from adeclarative programminglanguage.
Within an imperativeprogramming language, acontrol flow statementis a statement that results in a choice being made as to which of two or more paths to follow. Fornon-strictfunctional languages, functions andlanguage constructsexist to achieve the same result, but they are usually not termed control flow statements.
A set of statements is in turn generally structured as ablock, which in addition to grouping, also defines alexical scope.
Interruptsandsignalsare low-level mechanisms that can alter the flow of control in a way similar to asubroutine, but usually occur as a response to some external stimulus or event (that can occurasynchronously), rather than execution of anin-linecontrol flow statement.
At the level ofmachine languageorassembly language, control flow instructions usually work by altering theprogram counter. For somecentral processing units(CPUs), the only control flow instructions available are conditional or unconditionalbranchinstructions, also termed jumps.
The kinds of control flow statements supported by different languages vary, but can be categorized by their effect:
Alabelis an explicit name or number assigned to a fixed position within thesource code, and which may be referenced by control flow statements appearing elsewhere in the source code. A label marks a position within source code and has no other effect.
Line numbers are an alternative to a named label used in some languages (such as BASIC). They are whole numbers placed at the start of each line of text in the source code. Languages which use these often impose the constraint that the line numbers must increase in value in each following line, but may not require that they be consecutive.
In other languages such asCandAda, a label is anidentifier, usually appearing at the start of a line and immediately followed by a colon. For example, in C:
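A minimal sketch (the label name retry and the loop it implements are chosen purely for illustration):

    #include <stdio.h>

    int main(void)
    {
        int attempts = 0;
    retry:                                  /* a label: an identifier followed by a colon */
        attempts++;
        if (attempts < 3)
            goto retry;                     /* a control flow statement referencing the label */
        printf("attempts = %d\n", attempts);
        return 0;
    }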
The languageALGOL 60allowed both whole numbers and identifiers as labels (both linked by colons to the following statement), but few if any otherALGOLvariants allowed whole numbers. EarlyFortrancompilers only allowed whole numbers as labels. Beginning with Fortran-90, alphanumeric labels have also been allowed.
Thegotostatement (a combination of the English wordsgoandto, and pronounced accordingly) is the most basic form of unconditional transfer of control.
Although the keyword may either be in upper or lower case depending on the language, it is usually written as:
    goto label
The effect of a goto statement is to cause the next statement to be executed to be the statement appearing at (or immediately after) the indicated label.
Goto statements have beenconsidered harmfulby many computer scientists, notablyDijkstra.
The terminology forsubroutinesvaries; they may alternatively be known as routines, procedures, functions (especially if they return results) or methods (especially if they belong toclassesortype classes).
In the 1950s, computer memories were very small by current standards so subroutines were used mainly to reduce program size. A piece of code was written once and then used many times from various other places in a program.
Today, subroutines are more often used to help make a program more structured, e.g., by isolating some algorithm or hiding some data access method. If many programmers are working on one program, subroutines are one kind ofmodularitythat can help divide the work.
In structured programming, the ordered sequencing of successive commands is considered one of the basic control structures, which is used as a building block for programs alongside iteration, recursion and choice.
In May 1966, Böhm and Jacopini published an article[1]inCommunications of the ACMwhich showed that any program withgotos could be transformed into a goto-free form involving only choice (IF THEN ELSE) and loops (WHILE condition DO xxx), possibly with duplicated code and/or the addition of Boolean variables (true/false flags). Later authors showed that choice can be replaced by loops (and yet more Boolean variables).
That such minimalism is possible does not mean that it is necessarily desirable; computers theoretically need onlyone machine instruction(subtract one number from another and branch if the result is negative), but practical computers have dozens or even hundreds of machine instructions.
Other research showed that control structures with one entry and one exit were much easier to understand than any other form,[citation needed]mainly because they could be used anywhere as a statement without disrupting the control flow. In other words, they werecomposable. (Later developments, such asnon-strict programming languages– and more recently, composablesoftware transactions– have continued this strategy, making components of programs even more freely composable.)
Some academics took a purist approach to the Böhm–Jacopini result and argued that even instructions likebreakandreturnfrom the middle of loops are bad practice as they are not needed in the Böhm–Jacopini proof, and thus they advocated that all loops should have a single exit point. This purist approach is embodied in the languagePascal(designed in 1968–1969), which up to the mid-1990s was the preferred tool for teaching introductory programming in academia.[2]The direct application of the Böhm–Jacopini theorem may result in additional local variables being introduced in the structured chart, and may also result in somecode duplication.[3]Pascal is affected by both of these problems and according to empirical studies cited byEric S. Roberts, student programmers had difficulty formulating correct solutions in Pascal for several simple problems, including writing a function for searching an element in an array. A 1980 study by Henry Shapiro cited by Roberts found that using only the Pascal-provided control structures, the correct solution was given by only 20% of the subjects, while no subject wrote incorrect code for this problem if allowed to write a return from the middle of a loop.[2]
Most programming languages with control structures have an initial keyword which indicates the type of control structure involved.[clarification needed]Languages then divide as to whether or not control structures have a final keyword.
Conditional expressions and conditional constructs are features of aprogramming languagethat perform different computations or actions depending on whether a programmer-specifiedBooleanconditionevaluates to true or false.
Less common variations include:
Switch statements (or case statements, or multiway branches) compare a given value with specified constants and take action according to the first constant to match. There is usually a provision for a default action ("else", "otherwise") to be taken if no match succeeds. Switch statements can allow compiler optimizations, such as lookup tables. In dynamic languages, the cases may not be limited to constant expressions, and might extend to pattern matching, as in shell scripts, where a *) pattern implements the default case as a glob matching any string. Case logic can also be implemented in functional form, as in SQL's decode statement.
A loop is a sequence of statements which is specified once but which may be carried out several times in succession. The code "inside" the loop (thebodyof the loop, shown below asxxx) is obeyed a specified number of times, or once for each of a collection of items, or until some condition is met, orindefinitely. When one of those items is itself also a loop, it is called a "nested loop".[4][5][6]
Infunctional programminglanguages, such asHaskellandScheme, bothrecursiveanditerativeprocesses are expressed withtail recursiveprocedures instead of looping constructs that are syntactic.
Most programming languages have constructions for repeating a loop a certain number of times.
In most cases counting can go downwards instead of upwards and step sizes other than 1 can be used.
In such count-controlled loops, if N < 1 then the body of the loop may execute once (with I having value 1) or not at all, depending on the programming language.
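A minimal sketch of a count-controlled loop in C (n = 5 is an arbitrary choice); because the condition is tested before each iteration, the body executes zero times when n < 1:

    #include <stdio.h>

    int main(void)
    {
        int n = 5;
        for (int i = 1; i <= n; i++)        /* counting down, or a step other than 1,
                                               only changes these three expressions */
            printf("i = %d\n", i);
        return 0;
    }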
In many programming languages, only integers can be reliably used in a count-controlled loop. Floating-point numbers are represented imprecisely due to hardware constraints, so a loop that steps a variable X from 0.1 to 1.0 in increments of 0.1 might be repeated 9 or 10 times, depending on rounding errors and/or the hardware and/or the compiler version. Furthermore, if the increment of X occurs by repeated addition, accumulated rounding errors may mean that the value of X in each iteration can differ quite significantly from the expected sequence 0.1, 0.2, 0.3, ..., 1.0.
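A sketch of such a loop in C; on hardware with binary floating point, whether it runs 9 or 10 times depends on how the rounding errors in the repeated addition of 0.1 accumulate:

    #include <stdio.h>

    int main(void)
    {
        int iterations = 0;
        for (double x = 0.1; x <= 1.0; x += 0.1) {
            printf("x = %.17g\n", x);       /* x drifts from the exact 0.1, 0.2, ..., 1.0 */
            iterations++;
        }
        printf("iterations: %d\n", iterations);
        return 0;
    }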
Most programming languages have constructions for repeating a loop until some condition changes. Some variations test the condition at the start of the loop; others test it at the end. If the test is at the start, the body may be skipped completely; if it is at the end, the body is always executed at least once.
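A sketch contrasting the two placements of the test in C, with n starting at 0 so that the difference is visible: the pre-tested while body never runs, while the post-tested do-while body runs once:

    #include <stdio.h>

    int main(void)
    {
        int n = 0;

        while (n > 0) {                     /* test at the start: may execute zero times */
            printf("while body\n");
            n--;
        }

        n = 0;
        do {                                /* test at the end: executes at least once */
            printf("do-while body\n");
            n--;
        } while (n > 0);

        return 0;
    }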
Acontrol breakis a value change detection method used within ordinary loops to trigger processing for groups of values. Values are monitored within the loop and a change diverts program flow to the handling of the group event associated with them.
Several programming languages (e.g.,Ada,D,C++11,Smalltalk,PHP,Perl,Object Pascal,Java,C#,MATLAB,Visual Basic,Ruby,Python,JavaScript,Fortran 95and later) have special constructs which allow implicit looping through all elements of an array, or all members of a set or collection.
Scalahasfor-expressions, which generalise collection-controlled loops, and also support other uses, such asasynchronous programming.Haskellhas do-expressions and comprehensions, which together provide similar function to for-expressions in Scala.
General iteration constructs such as C'sforstatement andCommon Lisp'sdoform can be used to express any of the above sorts of loops, and others, such as looping over some number of collections in parallel. Where a more specific looping construct can be used, it is usually preferred over the general iteration construct, since it often makes the purpose of the expression clearer.
Infinite loopsare used to assure a program segment loops forever or until an exceptional condition arises, such as an error. For instance, an event-driven program (such as aserver) should loop forever, handling events as they occur, only stopping when the process is terminated by an operator.
Infinite loops can be implemented using other control flow constructs. Most commonly, in unstructured programming this is jump back up (goto), while in structured programming this is an indefinite loop (while loop) set to never end, either by omitting the condition or explicitly setting it to true, aswhile (true) .... Some languages have special constructs for infinite loops, typically by omitting the condition from an indefinite loop. Examples include Ada (loop ... end loop),[7]Fortran (DO ... END DO), Go (for { ... }), and Ruby (loop do ... end).
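A minimal sketch of an idiomatic infinite loop in C (the body is only a stand-in for real event handling):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {                          /* no condition: loops forever; while (1) is equivalent */
            puts("waiting for the next event...");
            sleep(1);                       /* placeholder for blocking on an event source */
        }
    }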
Often, an infinite loop is unintentionally created by a programming error in a condition-controlled loop, wherein the loop condition uses variables that never change within the loop.
Sometimes within the body of a loop there is a desire to skip the remainder of the loop body and continue with the next iteration of the loop. Some languages provide a statement such ascontinue(most languages),skip,[8]cycle(Fortran), ornext(Perl and Ruby), which will do this. The effect is to prematurely terminate the innermost loop body and then resume as normal with the next iteration. If the iteration is the last one in the loop, the effect is to terminate the entire loop early.
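A minimal sketch of continue in C (printing only the odd numbers is an arbitrary example):

    #include <stdio.h>

    int main(void)
    {
        for (int i = 1; i <= 10; i++) {
            if (i % 2 == 0)
                continue;                   /* skip the rest of this iteration */
            printf("%d\n", i);
        }
        return 0;
    }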
Some languages, like Perl[9]and Ruby,[10]have aredostatement that restarts the current iteration from the start.
Ruby has aretrystatement that restarts the entire loop from the initial iteration.[11]
When using a count-controlled loop to search through a table, it might be desirable to stop searching as soon as the required item is found. Some programming languages provide a statement such as break (most languages), Exit (Visual Basic), or last (Perl), whose effect is to terminate the current loop immediately and transfer control to the statement immediately after that loop. Another term for early-exit loops is loop-and-a-half.
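A minimal sketch of such an early exit in C (the table contents and the value searched for are arbitrary):

    #include <stdio.h>

    int main(void)
    {
        int table[] = { 4, 8, 15, 16, 23, 42 };
        int n = sizeof table / sizeof table[0];
        int wanted = 16;
        int i;

        for (i = 0; i < n; i++)
            if (table[i] == wanted)
                break;                      /* stop searching; control moves past the loop */

        if (i < n)
            printf("found at index %d\n", i);
        else
            printf("not found\n");
        return 0;
    }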
Ada supports both early exit from loops and loops with test in the middle. The two features are very similar, and the difference lies in how they are written: an early exit must be combined with an if statement, while a condition in the middle is a self-contained construct.
Python supports conditional execution of code depending on whether a loop was exited early (with a break statement) or not, by using an else-clause with the loop.
Such an else clause is linked to the for statement itself, and not to any if statement inside the loop body. Both Python's for and while loops support this else clause, which is executed only if early exit of the loop has not occurred.
Some languages support breaking out of nested loops; in theory circles, these are called multi-level breaks. One common use example is searching a multi-dimensional table. This can be done either via multilevel breaks (break out ofNlevels), as in bash[12]and PHP,[13]or via labeled breaks (break out and continue at given label), as in Go, Java and Perl.[14]Alternatives to multilevel breaks include single breaks, together with a state variable which is tested to break out another level; exceptions, which are caught at the level being broken out to; placing the nested loops in a function and using return to effect termination of the entire nested loop; or using a label and a goto statement. C does not include a multilevel break, and the usual alternative is to use a goto to implement a labeled break.[15]Python does not have a multilevel break or continue – this was proposed inPEP 3136, and rejected on the basis that the added complexity was not worth the rare legitimate use.[16]
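A minimal sketch of the usual C alternative, a goto acting as a labelled break out of both loops of a two-dimensional search (the table contents and the value searched for are arbitrary):

    #include <stdio.h>

    int main(void)
    {
        int table[3][4] = { {1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12} };
        int wanted = 7;

        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 4; j++)
                if (table[i][j] == wanted) {
                    printf("found at row %d, column %d\n", i, j);
                    goto done;              /* leave both loops at once */
                }
        printf("not found\n");
    done:
        return 0;
    }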
The notion of multi-level breaks is of some interest intheoretical computer science, because it gives rise to what is today called theKosaraju hierarchy.[17]In 1973S. Rao Kosarajurefined thestructured program theoremby proving that it is possible to avoid adding additional variables in structured programming, as long as arbitrary-depth, multi-level breaks from loops are allowed.[18]Furthermore, Kosaraju proved that a strict hierarchy of programs exists: for every integern, there exists a program containing a multi-level break of depthnthat cannot be rewritten as a program with multi-level breaks of depth less thannwithout introducing added variables.[17]
One can alsoreturnout of a subroutine executing the looped statements, breaking out of both the nested loop and the subroutine. There are otherproposed control structuresfor multiple breaks, but these are generally implemented as exceptions instead.
In his 2004 textbook,David Wattuses Tennent's notion ofsequencerto explain the similarity between multi-level breaks and return statements. Watt notes that a class of sequencers known asescape sequencers, defined as "sequencer that terminates execution of a textually enclosing command or procedure", encompasses both breaks from loops (including multi-level breaks) and return statements. As commonly implemented, however, return sequencers may also carry a (return) value, whereas the break sequencer as implemented in contemporary languages usually cannot.[19]
Loop variantsandloop invariantsare used to express correctness of loops.[20]
In practical terms, a loop variant is an integer expression which has an initial non-negative value. The variant's value must decrease during each loop iteration but must never become negative during the correct execution of the loop. Loop variants are used to guarantee that loops will terminate.
A loop invariant is an assertion which must be true before the first loop iteration and remain true after each iteration. This implies that when a loop terminates correctly, both the exit condition and the loop invariant are satisfied. Loop invariants are used to monitor specific properties of a loop during successive iterations.
Some programming languages, such asEiffelcontain native support for loop variants and invariants. In other cases, support is an add-on, such as theJava Modeling Language's specification forloop statementsinJava.
SomeLispdialects provide an extensive sublanguage for describing Loops. An early example can be found in Conversional Lisp ofInterlisp.Common Lisp[21]provides a Loop macro which implements such a sublanguage.
Many programming languages, especially those favoring more dynamic styles of programming, offer constructs fornon-local control flow. These cause the flow of execution to jump out of a given context and resume at somepredeclaredpoint.Conditions,exceptionsandcontinuationsare three common sorts of non-local control constructs; more exotic ones also exist, such asgenerators,coroutinesand theasynckeyword.
The earliestFortrancompilers had statements for testing exceptional conditions. These included theIF ACCUMULATOR OVERFLOW,IF QUOTIENT OVERFLOW, andIF DIVIDE CHECKstatements. In the interest of machine independence, they were not included in FORTRAN IV and the Fortran 66 Standard. However since Fortran 2003 it is possible to test for numerical issues via calls to functions in theIEEE_EXCEPTIONSmodule.
PL/I has some 22 standard conditions (e.g., ZERODIVIDE, SUBSCRIPTRANGE, ENDFILE) which can be raised and which can be intercepted by an ON condition action statement; programmers can also define and use their own named conditions.
Like theunstructured if, only one statement can be specified so in many cases a GOTO is needed to decide where flow of control should resume.
Unfortunately, some implementations had a substantial overhead in both space and time (especially SUBSCRIPTRANGE), so many programmers tried to avoid using conditions.
Common Syntax examples:
Modern languages have a specialized structured construct for exception handling which does not rely on the use of GOTO or (multi-level) breaks or returns. In C++, for example, a block of code is wrapped in a try block and handlers are attached to it with one or more catch clauses.
Any number and variety of catch clauses can be used. If there is no catch matching a particular throw, control percolates back through subroutine calls and/or nested blocks until a matching catch is found or until the end of the main program is reached, at which point the program is forcibly stopped with a suitable error message.
Via C++'s influence, catch is the keyword reserved for declaring a pattern-matching exception handler in other languages popular today, like Java or C#. Some other languages like Ada use the keyword exception to introduce an exception handler and then may even employ a different keyword (when in Ada) for the pattern matching. A few languages like AppleScript incorporate placeholders in the exception handler syntax to automatically extract several pieces of information when the exception occurs; this approach is exemplified by the on error construct from AppleScript.
David Watt's 2004 textbook also analyzes exception handling in the framework of sequencers (introduced in this article in the section on early exits from loops). Watt notes that an abnormal situation, generally exemplified with arithmetic overflows or input/output failures like file not found, is a kind of error that "is detected in some low-level program unit, but [for which] a handler is more naturally located in a high-level program unit". For example, a program might contain several calls to read files, but the action to perform when a file is not found depends on the meaning (purpose) of the file in question to the program and thus a handling routine for this abnormal situation cannot be located in low-level system code. Watt further notes that introducing status flags testing in the caller, as single-exit structured programming or even (multi-exit) return sequencers would entail, results in a situation where "the application code tends to get cluttered by tests of status flags" and that "the programmer might forgetfully or lazily omit to test a status flag. In fact, abnormal situations represented by status flags are by default ignored!" Watt notes that in contrast to status flags testing, exceptions have the opposite default behavior, causing the program to terminate unless the program deals with the exception explicitly in some way, possibly by adding explicit code to ignore it. Based on these arguments, Watt concludes that jump sequencers or escape sequencers are less suitable than a dedicated exception sequencer with the semantics discussed above.[24]
In Object Pascal, D, Java, C#, and Python a finally clause can be added to the try construct. No matter how control leaves the try, the code inside the finally clause is guaranteed to execute. This is useful when writing code that must relinquish an expensive resource (such as an opened file or a database connection) when finished processing.
Since this pattern is fairly common, C# has a special syntax for it, the using statement.
Upon leaving theusing-block, the compiler guarantees that thestmobject is released, effectivelybindingthe variable to the file stream while abstracting from the side effects of initializing and releasing the file. Python'swithstatement and Ruby's block argument toFile.openare used to similar effect.
All the languages mentioned above define standard exceptions and the circumstances under which they are thrown. Users can throw exceptions of their own; C++ allows users to throw and catch almost any type, including basic types likeint, whereas other languages like Java are less permissive.
C# 5.0 introduced the async keyword for supportingasynchronous I/Oin a "direct style".
Generators, also known as semicoroutines, allow control to be yielded to a consumer method temporarily, typically using a yield keyword. Like the async keyword, this supports programming in a "direct style".
Coroutinesare functions that can yield control to each other - a form ofco-operative multitaskingwithout threads.
Coroutines can be implemented as a library if the programming language provides either continuations or generators - so the distinction between coroutines and generators in practice is a technical detail.
In a spoofDatamationarticle[31]in 1973, R. Lawrence Clark suggested that the GOTO statement could be replaced by theCOMEFROMstatement, and provides some entertaining examples. COMEFROM was implemented in oneesoteric programming languagenamedINTERCAL.
Donald Knuth's 1974 article "Structured Programming with go to Statements",[32]identifies two situations which were not covered by the control structures listed above, and gave examples of control structures which could handle these situations. Despite their utility, these constructs have not yet found their way into mainstream programming languages.
The following loop construct was proposed by Dahl in 1972:[33]
    loop
        xxx1
    while test:
        xxx2
    repeat
Ifxxx1is omitted, we get a loop with the test at the top (a traditionalwhileloop). Ifxxx2is omitted, we get a loop with the test at the bottom, equivalent to ado whileloop in many languages. Ifwhileis omitted, we get an infinite loop. The construction here can be thought of as adoloop with the while check in the middle. Hence this single construction can replace several constructions in most programming languages.
Languages lacking this construct generally emulate it using an equivalent infinite-loop-with-break idiom:
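A minimal sketch of that idiom in C, where echoing lines from standard input stands in for xxx1 and xxx2; the first half runs, the test sits in the middle of an otherwise infinite loop, and the second half runs only when the test succeeds:

    #include <stdio.h>

    int main(void)
    {
        char line[256];
        for (;;) {
            printf("> ");                   /* xxx1: always executed */
            if (fgets(line, sizeof line, stdin) == NULL)
                break;                      /* the test in the middle: leave at end of input */
            printf("read: %s", line);       /* xxx2: executed only if the test passed */
        }
        return 0;
    }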
A possible variant is to allow more than one while test within the loop, but the use of exitwhen (see next section) appears to cover this case better.
InAda, the above loop construct (loop-while-repeat) can be represented using a standard infinite loop (loop-end loop) that has anexit whenclause in the middle (not to be confused with theexitwhenstatement in the following section).
Naming a loop (for example, Read_Data) is optional but permits leaving the outer loop of several nested loops.
This construct was proposed by Zahn in 1974.[34] A modified version is described here.
exitwhen is used to specify the events which may occur within xxx; their occurrence is indicated by using the name of the event as a statement. When some event does occur, the relevant action is carried out, and then control passes just after endexit. This construction provides a very clear separation between determining that some situation applies, and the action to be taken for that situation.
exitwhenis conceptually similar toexception handling, and exceptions or similar constructs are used for this purpose in many languages.
A typical simple example involves searching a two-dimensional table for a particular item.
One way to attack a piece of software is to redirect the flow of execution of a program. A variety ofcontrol-flow integritytechniques, includingstack canaries,buffer overflow protection, shadow stacks, andvtablepointer verification, are used to defend against these attacks.[35][36][37]
|
https://en.wikipedia.org/wiki/Control_flow
|
Windows Speech Recognition(WSR) isspeech recognitiondeveloped byMicrosoftforWindows Vistathat enablesvoice commandsto control thedesktopuser interface,dictatetext inelectronic documentsandemail, navigatewebsites, performkeyboard shortcuts, and operate themouse cursor. It supports custommacrosto perform additional or supplementary tasks.
WSR is a locally processed speech recognition platform; it does not rely oncloud computingfor accuracy, dictation, or recognition, but adapts based on contexts, grammars, speech samples, training sessions, and vocabularies. It provides a personal dictionary that allows users to include or exclude words or expressions from dictation and to record pronunciations to increase recognition accuracy. Custom language models are also supported.
With Windows Vista, WSR was developed to be part of Windows, as speech recognition was previously exclusive to applications such asWindows Media Player. It is present inWindows 7,Windows 8,Windows 8.1,Windows RT,Windows 10, andWindows 11.
Microsoft was involved in speech recognition andspeech synthesisresearch for many years before WSR. In 1993, Microsoft hiredXuedong HuangfromCarnegie Mellon Universityto lead its speech development efforts; the company's research led to the development of theSpeech API(SAPI) introduced in 1994.[1]Speech recognition had also been used in previous Microsoft products.Office XPandOffice 2003provided speech recognition capabilities amongInternet ExplorerandMicrosoft Officeapplications;[2]it also enabled limited speech functionality inWindows 98,Windows Me,Windows NT 4.0, andWindows 2000.[3]Windows XPTablet PC Edition2002 included speech recognition capabilities with the Tablet PC Input Panel,[4][5]andMicrosoft Plus! for Windows XPenabled voice commands for Windows Media Player.[6]However, these all required installation of speech recognition as a separate component; before Windows Vista, Windows did not include integrated or extensive speech recognition.[5]Office 2007and later versions rely on WSR for speech recognition services.[7]
AtWinHEC 2002Microsoft announced that Windows Vista (codenamed "Longhorn") would include advances in speech recognition and in features such asmicrophone arraysupport[8]as part of an effort to "provide a consistent quality audio infrastructure for natural (continuous) speech recognition and (discrete) command and control."[9]Bill Gatesstated duringPDC 2003that Microsoft would "build speech capabilities into the system — a big advance for that in 'Longhorn,' in both recognition and synthesis, real-time";[10][11]and pre-release builds during thedevelopment of Windows Vistaincluded a speech engine with training features.[12]A PDC 2003 developer presentation stated Windows Vista would also include a user interface for microphone feedback and control, and user configuration and training features.[13]Microsoft clarified the extent to which speech recognition would be integrated when it stated in a pre-releasesoftware development kitthat "the common speech scenarios, like speech-enabling menus and buttons, will be enabled system-wide."[14]
During WinHEC 2004 Microsoft included WSR as part of a strategy to improve productivity on mobile PCs.[15][16]Microsoft later emphasizedaccessibility, new mobility scenarios, support for additional languages, and improvements to the speech user experience at WinHEC 2005. Unlike the speech support included in Windows XP, which was integrated with the Tablet PC Input Panel and required switching between separate Commanding and Dictation modes, Windows Vista would introduce a dedicated interface for speech input on the desktop and would unify the separate speech modes;[17]users previously could not speak a command after dictating or vice versa without first switching between these two modes.[18]Windows Vista Beta 1 included integrated speech recognition.[19]To incentivize company employees to analyze WSR for softwareglitchesand to provide feedback, Microsoft offered an opportunity for its testers to win a Premium model of theXbox 360.[20]
During a demonstration by Microsoft on July 27, 2006—before Windows Vista'srelease to manufacturing(RTM)—a notable incident involving WSR occurred that resulted in an unintended output of "Dear aunt, let's set so double the killer delete select all" when several attempts to dictate led to consecutive output errors;[21][22]the incident was a subject of significant derision among analysts and journalists in the audience,[23][24]despite another demonstration for application management and navigation being successful.[21]Microsoft revealed these issues were due to an audiogainglitch that caused the recognizer to distort commands and dictations; the glitch was fixed before Windows Vista's release.[25]
Reports from early 2007 indicated that WSR is vulnerable to attackers using speech recognition for malicious operations by playing certain audio commands through a target's speakers;[26][27]it was the first vulnerability discovered after Windows Vista'sgeneral availability.[28]Microsoft stated that although such an attack is theoretically possible, a number of mitigating factors and prerequisites would limit its effectiveness or prevent it altogether: a target would need the recognizer to be active and configured to properly interpret such commands; microphones and speakers would both need to be enabled and at sufficient volume levels; and an attack would require the computer to perform visible operations and produce audible feedback without users noticing.User Account Controlwould also prohibit the occurrence of privileged operations.[29]
WSR was updated to use Microsoft UI Automation and its engine now uses the WASAPI audio stack, substantially enhancing its performance and enabling support for echo cancellation, respectively. The document harvester, which can analyze and collect text in email and documents to contextualize user terms, has improved performance, and now runs periodically in the background instead of only after recognizer startup. Sleep mode has also seen performance improvements and, to address security issues, the recognizer is turned off by default after users speak "stop listening" instead of being suspended. Windows 7 also introduces an option to submit speech training data to Microsoft to improve future recognizer versions.[30]
A new dictation scratchpad interface functions as a temporary document into which users can dictate or type text for insertion into applications that are not compatible with theText Services Framework.[30]Windows Vista previously provided an "enable dictation everywhere option" for such applications.[31]
WSR can be used to control theMetrouser interface in Windows 8, Windows 8.1, and Windows RT with commands to open theCharms bar("Press Windows C"); to dictate or display commands inMetro-style apps("Press Windows Z"); to perform tasks in apps (e.g., "Change to Celsius" inMSN Weather); and to display all installed apps listed by theStart screen("Apps").[32][33]
WSR is featured in theSettingsapplication starting with the Windows 10 April 2018 Update (Version 1803); the change first appeared inInsiderPreview Build 17083.[34]The April 2018 Update also introduces a new⊞ Win+Ctrl+Skeyboard shortcut to activate WSR.[35]
In Windows 11 version 22H2, a second Microsoft app, Voice Access, was added in addition to WSR.[36][37]In December 2023 Microsoft announced that WSR is deprecated in favor of Voice Access and may be removed in a future build or release of Windows.[38]
WSR allows a user to control applications and the Windowsdesktopuser interfacethrough voice commands.[39]Users can dictate text within documents, email, and forms; control the operating system user interface; performkeyboard shortcuts; and move themouse cursor.[40]The majority of integrated applications in Windows Vista can be controlled;[39]third-party applications must support the Text Services Framework for dictation.[1]English (U.S.),English (U.K.),French,German,Japanese,Mandarin Chinese, andSpanishare supported languages.[41]
When started for the first time, WSR presents a microphone setup wizard and an optional interactive step-by-step tutorial that users can commence to learn basic commands while adapting the recognizer to their specific voice characteristics;[39]the tutorial is estimated to require approximately 10 minutes to complete.[42]The accuracy of the recognizer increases through regular use, which adapts it to contexts, grammars, patterns, and vocabularies.[41][43]Custom language models for the specific contexts, phonetics, and terminologies of users in particular occupational fields such as legal or medical are also supported.[44]WithWindows Search,[45]the recognizer also can optionally harvest text in documents, email, as well as handwrittentablet PCinput to contextualize and disambiguate terms to improve accuracy; no information is sent to Microsoft.[43]
WSR is a locally processed speech recognition platform; it does not rely on cloud computing for accuracy, dictation, or recognition.[46]Speech profiles that store information about users are retained locally.[43]Backups and transfers of profiles can be performed viaWindows Easy Transfer.[47]
The WSR interface consists of a status area that displays instructions, information about commands (e.g., if a command is not heard by the recognizer), and the status of the recognizer; a voice meter displays visual feedback about volume levels. The status area represents the current state of WSR in one of three modes: listening, sleeping, or turned off.
Colors of the recognizer listening mode button denote its various modes of operation: blue when listening; blue-gray when sleeping; gray when turned off; and yellow when the user switches context (e.g., from the desktop to the taskbar) or when a voice command is misinterpreted. The status area can also display custom user information as part ofWindows Speech Recognition Macros.[48][49]
An alternates panel disambiguation interface lists items interpreted as being relevant to a user's spoken word(s); if the word or phrase that a user desired to insert into an application is listed among the results, the user can speak the corresponding number of the word or phrase and confirm the choice by speaking "OK" to insert it within the application.[50] The alternates panel also appears when launching applications or speaking commands that refer to more than one item (e.g., speaking "Start Internet Explorer" may list both the web browser and a separate version with add-ons disabled). An ExactMatchOverPartialMatch entry in the Windows Registry can limit commands to items with exact names if there is more than one instance included in results.[51]
WSR provides a set of common voice commands in which a placeholder word can be substituted for the desired item (e.g., direction in "scroll direction" can be substituted with the word "down").[40] A "start typing" command enables WSR to interpret all dictation commands as keyboard shortcuts.[50]
MouseGridenables users to control the mouse cursor by overlaying numbers across nine regions on the screen; these regions gradually narrow as a user speaks the number(s) of the region on which to focus until the desired interface element is reached. Users can then issue commands including "Clicknumber of region," which moves the mouse cursor to the desired region and then clicks it; and "Marknumber of region", which allows an item (such as acomputer icon) in a region to be selected, which can then be clicked with the previousclickcommand. Users also can interact with multiple regions at once.[40]
Applications and interface elements that do not present identifiable commands can still be controlled by asking the system to overlay numbers on top of them through aShow Numberscommand. Once active, speaking the overlaid number selects that item so a user can open it or perform other operations.[40]Show Numberswas designed so that users could interact with items that are not readily identifiable.[53]
WSR enables dictation of text in applications and Windows. If a dictation mistake occurs it can be corrected by speaking "Correctword" or "Correct that" and the alternates panel will appear and provide suggestions for correction; these suggestions can be selected by speaking the number corresponding to the number of the suggestion and by speaking "OK." If the desired item is not listed among suggestions, a user can speak it so that it might appear. Alternatively, users can speak "Spell it" or "I'll spell it myself" to speak the desired word on letter-by-letter basis; users can use their personal alphabet or theNATO phonetic alphabet(e.g., "N as in November") when spelling.[44]
Multiple words in a sentence can be corrected simultaneously (for example, if a user speaks "dictating" but the recognizer interprets this word as "the thing," a user can state "correct the thing" to correct both words at once). In the English language over 100,000 words are recognized by default.[44]
A personal dictionary allows users to include or exclude certain words or expressions from dictation.[44]When a user adds a word beginning with a capital letter to the dictionary, a user can specify whether it should always be capitalized or if capitalization depends on the context in which the word is spoken. Users can also record pronunciations for words added to the dictionary to increase recognition accuracy; words written via astyluson a tablet PC for the Windowshandwriting recognitionfeature are also stored. Information stored within a dictionary is included as part of a user's speech profile.[43]Users can open the speech dictionary by speaking the "show speech dictionary" command.
WSR supports custom macros through a supplementary application by Microsoft that enables additionalnatural languagecommands.[54][55]As an example of this functionality, an email macro released by Microsoft enables a natural language command where a user can speak "send email tocontactaboutsubject," which opensMicrosoft Outlookto compose a new message with the designated contact and subject automatically inserted.[56]Microsoft has also released sample macros for the speech dictionary,[57]for Windows Media Player,[58]forMicrosoft PowerPoint,[59]forspeech synthesis,[60]to switch between multiple microphones,[61]to customize various aspects of audio device configuration such as volume levels,[62]and for general natural language queries such as "What is the weather forecast?"[63]"What time is it?"[60]and "What's the date?"[60]Responses to these user inquiries are spoken back to the user in the activeMicrosoft text-to-speech voiceinstalled on the machine.
Users and developers can create their own macros based on text transcription and substitution; application execution (with support forcommand-line arguments); keyboard shortcuts; emulation of existing voice commands; or a combination of these items.XML,JScriptandVBScriptare supported.[50]Macros can be limited to specific applications[64]and rules for macros can be defined programmatically.[56]For a macro to load, it must be stored in aSpeech Macrosfolder within the active user'sDocumentsdirectory. All macros aredigitally signedby default if auser certificateis available to ensure that stored commands are not altered or loaded by third-parties; if a certificate is not available, an administrator can create one.[65]Configurable security levels can prohibit unsigned macros from being loaded; to prompt users to sign macros after creation; and to load unsigned macros.[64]
As of 2017, WSR uses Microsoft Speech Recognizer 8.0, the version introduced in Windows Vista. For dictation it was found to be 93.6% accurate without training by Mark Hachman, a Senior Editor of PC World, a rate that is not as accurate as competing software. According to Microsoft, the rate of accuracy when trained is 99%. Hachman opined that Microsoft does not publicly discuss the feature because of the 2006 incident during the development of Windows Vista, with the result being that few users knew that documents could be dictated within Windows before the introduction of Cortana.[42]
|
https://en.wikipedia.org/wiki/Windows_Speech_Recognition
|
Semantic spaces[note 1][1]in the natural language domain aim to create representations of natural language that are capable of capturing meaning. The original motivation for semantic spaces stems from two core challenges of natural language:Vocabulary mismatch(the fact that the same meaning can be expressed in many ways) andambiguityof natural language (the fact that the same term can have several meanings).
The application of semantic spaces innatural language processing(NLP) aims at overcoming limitations ofrule-basedor model-based approaches operating on thekeywordlevel. The main drawback with these approaches is their brittleness, and the large manual effort required to create either rule-based NLP systems or training corpora for model learning.[2][3]Rule-based andmachine learningbased models are fixed on the keyword level and break down if the vocabulary differs from that defined in the rules or from the training material used for the statistical models.
Research in semantic spaces dates back more than 20 years. In 1996, two papers were published that attracted considerable attention around the general idea of creating semantic spaces: latent semantic analysis[4] and Hyperspace Analogue to Language.[5] However, their adoption was limited by the large computational effort required to construct and use those semantic spaces. A breakthrough with regard to the accuracy of modelling associative relations between words (e.g. "spider-web", "lighter-cigarette", as opposed to synonymous relations such as "whale-dolphin", "astronaut-driver") was achieved by explicit semantic analysis (ESA)[6] in 2007. ESA was a novel approach, not based on machine learning, that represented words in the form of vectors with 100,000 dimensions (where each dimension represents an article in Wikipedia). However, practical applications of the approach are limited due to the large number of dimensions required in the vectors.
More recently, advances inneural networktechniques in combination with other new approaches (tensors) led to a host of new recent developments:Word2vec[7]fromGoogle,GloVe[8]fromStanford University, andfastText[9]fromFacebookAI Research (FAIR) labs.
|
https://en.wikipedia.org/wiki/Semantic_space
|
APetri net, also known as aplace/transition net(PT net), is one of severalmathematicalmodeling languagesfor the description ofdistributed systems. It is a class ofdiscrete event dynamic system. A Petri net is a directedbipartite graphthat has two types of elements: places and transitions. Place elements are depicted as white circles and transition elements are depicted as rectangles.
A place can contain any number of tokens, depicted as black circles. A transition is enabled if all places connected to it as inputs contain at least one token. Some sources[1]state that Petri nets were invented in August 1939 byCarl Adam Petri— at the age of 13 — for the purpose of describing chemical processes.
Like industry standards such asUMLactivity diagrams,Business Process Model and Notation, andevent-driven process chains, Petri nets offer agraphical notationfor stepwise processes that include choice,iteration, andconcurrent execution. Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis[citation needed].
The German computer scientistCarl Adam Petri, after whom such structures are named, analyzed Petri nets extensively in his 1962 dissertationKommunikation mit Automaten.
A Petri net consists ofplaces,transitions, andarcs. Arcs run from a place to a transition or vice versa, never between places or between transitions. The places from which an arc runs to a transition are called theinput placesof the transition; the places to which arcs run from a transition are called theoutput placesof the transition.
Graphically, places in a Petri net may contain a discrete number of marks calledtokens. Any distribution of tokens over the places will represent a configuration of the net called amarking. In an abstract sense relating to a Petri net diagram, a transition of a Petri net mayfireif it isenabled, i.e. there are sufficient tokens in all of its input places; when the transition fires, it consumes the required input tokens, and creates tokens in its output places. A firing is atomic, i.e. a single non-interruptible step.
Unless anexecution policy(e.g. a strict ordering of transitions, describing precedence) is defined, the execution of Petri nets isnondeterministic: when multiple transitions are enabled at the same time, they will fire in any order.
Since firing is nondeterministic, and multiple tokens may be present anywhere in the net (even in the same place), Petri nets are well suited for modeling theconcurrentbehavior of distributed systems.
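Because the token game is fully determined by the arc weights, it can be sketched in a few lines of Python; a minimal sketch, in which the net, the place names, and the transition names are invented for illustration:

```python
import random

# A net given by per-transition arc weights:
# weights[t] = (inputs, outputs), each a dict mapping place -> multiplicity.
weights = {
    "t1": ({"p1": 1}, {"p2": 1}),
    "t2": ({"p2": 1}, {"p1": 1}),
}
marking = {"p1": 1, "p2": 0}  # initial marking M0

def enabled(t, m):
    """A transition is enabled if every input place holds enough tokens."""
    ins, _ = weights[t]
    return all(m.get(p, 0) >= w for p, w in ins.items())

def fire(t, m):
    """Atomically consume input tokens and produce output tokens."""
    ins, outs = weights[t]
    m = dict(m)
    for p, w in ins.items():
        m[p] -= w
    for p, w in outs.items():
        m[p] = m.get(p, 0) + w
    return m

# Nondeterministic execution: repeatedly fire a randomly chosen enabled transition.
for _ in range(5):
    choices = [t for t in weights if enabled(t, marking)]
    if not choices:
        break  # deadlock: no transition is enabled
    marking = fire(random.choice(choices), marking)
    print(marking)
```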
Petri nets arestate-transition systemsthat extend a class of nets called elementary nets.[2]
Definition 1. A net is a tuple N = (P, T, F) where P and T are disjoint finite sets of places and transitions, respectively, and F ⊆ (P × T) ∪ (T × P) is a set of directed arcs (the flow relation).
Definition 2.Given a netN= (P,T,F), aconfigurationis a setCso thatC⊆P.
Definition 3. An elementary net is a net of the form EN = (N, C) where N = (P, T, F) is a net and C ⊆ P is a configuration of N.
Definition 4. A Petri net is a net of the form PN = (N, M, W), which extends the elementary net so that N = (P, T, F) is a net, M : P → Z is a place multiset (the marking), and W : F → Z is an arc multiset (the arc weights), where Z is a countable set.
If a Petri net is equivalent to an elementary net, thenZcan be the countable set {0,1} and those elements inPthat map to 1 underMform a configuration. Similarly, if a Petri net is not an elementary net, then themultisetMcan be interpreted as representing a non-singleton set of configurations. In this respect,Mextends the concept of configuration for elementary nets to Petri nets.
In the diagram of a Petri net (see top figure right), places are conventionally depicted with circles, transitions with long narrow rectangles and arcs as one-way arrows that show connections of places to transitions or transitions to places. If the diagram were of an elementary net, then those places in a configuration would be conventionally depicted as circles, where each circle encompasses a single dot called atoken. In the given diagram of a Petri net (see right), the place circles may encompass more than one token to show the number of times a place appears in a configuration. The configuration of tokens distributed over an entire Petri net diagram is called amarking.
In the top figure (see right), the place p1 is an input place of transition t, whereas the place p2 is an output place of the same transition. Let PN0 (top figure) be a Petri net with a marking configured M0, and PN1 (bottom figure) be a Petri net with a marking configured M1. The configuration of PN0 enables transition t through the property that all input places have a number of tokens (shown in the figures as dots) "equal to or greater" than the multiplicities on their respective arcs to t. A transition may fire only when it is enabled. In this example, the firing of transition t generates a map that has the marking configured M1 in the image of M0 and results in Petri net PN1, seen in the bottom figure. In the diagram, the firing rule for a transition can be characterised by subtracting a number of tokens from its input places equal to the multiplicity of the respective input arcs, and accumulating a new number of tokens at the output places equal to the multiplicity of the respective output arcs.
Remark 1.The precise meaning of "equal to or greater" will depend on the precise algebraic properties of addition being applied onZin the firing rule, where subtle variations on the algebraic properties can lead to other classes of Petri nets; for example, algebraic Petri nets.[3]
The following formal definition is loosely based on (Peterson 1981). Many alternative definitions exist.
A Petri net graph (called Petri net by some, but see below) is a 3-tuple (S,T,W){\displaystyle (S,T,W)}, where S is a finite set of places, T is a finite set of transitions disjoint from S, and W : (S × T) ∪ (T × S) → ℕ is a multiset of arcs, assigning to each arc a non-negative integer arc multiplicity (or weight); note that no arc may connect two places or two transitions.
Theflow relationis the set of arcs:F={(x,y)∣W(x,y)>0}{\displaystyle F=\{(x,y)\mid W(x,y)>0\}}. In many textbooks, arcs can only have multiplicity 1. These texts often define Petri nets usingFinstead ofW. When using this convention, a Petri net graph is abipartitedirected graph(S∪T,F){\displaystyle (S\cup T,F)}with node partitionsSandT.
Thepresetof a transitiontis the set of itsinput places:∙t={s∈S∣W(s,t)>0}{\displaystyle {}^{\bullet }t=\{s\in S\mid W(s,t)>0\}};
itspostsetis the set of itsoutput places:t∙={s∈S∣W(t,s)>0}{\displaystyle t^{\bullet }=\{s\in S\mid W(t,s)>0\}}. Definitions of pre- and postsets of places are analogous.
Amarkingof a Petri net (graph) is a multiset of its places, i.e., a mappingM:S→N{\displaystyle M:S\to \mathbb {N} }. We say the marking assigns to each place a number oftokens.
A Petri net (called marked Petri net by some, see above) is a 4-tuple (S,T,W,M0){\displaystyle (S,T,W,M_{0})}, where (S, T, W) is a Petri net graph and M0 is the initial marking, a marking of the Petri net graph.
In words, firing a transition t in a marking M consumes W(s, t) tokens from each of its input places s and produces W(t, s) tokens in each of its output places s; a transition is enabled (it may fire) in M if there are enough tokens in its input places for these consumptions to be possible, i.e. if and only if M(s) ≥ W(s, t) for all places s.
We are generally interested in what may happen when transitions continually fire in arbitrary order.
We say that a markingM'is reachable froma markingMin one stepifM⟶GM′{\displaystyle M{\underset {G}{\longrightarrow }}M'}; we say that itis reachable fromMifM⟶G∗M′{\displaystyle M{\overset {*}{\underset {G}{\longrightarrow }}}M'}, where⟶G∗{\displaystyle {\overset {*}{\underset {G}{\longrightarrow }}}}is thereflexive transitive closureof⟶G{\displaystyle {\underset {G}{\longrightarrow }}}; that is, if it is reachable in 0 or more steps.
For a (marked) Petri netN=(S,T,W,M0){\displaystyle N=(S,T,W,M_{0})}, we are interested in the firings that can be performed starting with the initial markingM0{\displaystyle M_{0}}. Its set ofreachable markingsis the setR(N)=D{M′|M0→(S,T,W)∗M′}{\displaystyle R(N)\ {\stackrel {D}{=}}\ \left\{M'{\Bigg |}M_{0}{\xrightarrow[{(S,T,W)}]{*}}M'\right\}}
Thereachability graphofNis the transition relation⟶G{\displaystyle {\underset {G}{\longrightarrow }}}restricted to its reachable markingsR(N){\displaystyle R(N)}. It is thestate spaceof the net.
Afiring sequencefor a Petri net with graphGand initial markingM0{\displaystyle M_{0}}is a sequence of transitionsσ→=⟨t1⋯tn⟩{\displaystyle {\vec {\sigma }}=\langle t_{1}\cdots t_{n}\rangle }such thatM0⟶G,t1M1∧⋯∧Mn−1⟶G,tnMn{\displaystyle M_{0}{\underset {G,t_{1}}{\longrightarrow }}M_{1}\wedge \cdots \wedge M_{n-1}{\underset {G,t_{n}}{\longrightarrow }}M_{n}}. The set of firing sequences is denoted asL(N){\displaystyle L(N)}.
A common variation is to disallow arc multiplicities and replace thebagof arcsWwith a simple set, called theflow relation,F⊆(S×T)∪(T×S){\displaystyle F\subseteq (S\times T)\cup (T\times S)}.
This does not limitexpressive poweras both can represent each other.
Another common variation, e.g. in Desel and Juhás (2001),[4]is to allowcapacitiesto be defined on places. This is discussed underextensionsbelow.
The markings of a Petri net(S,T,W,M0){\displaystyle (S,T,W,M_{0})}can be regarded asvectorsof non-negative integers of length|S|{\displaystyle |S|}.
Its transition relation can be described as a pair of |S| by |T| matrices: W−, defined by W−[s, t] = W(s, t), which records the tokens that firing t consumes, and W+, defined by W+[s, t] = W(t, s), which records the tokens that firing t produces. Then their difference
WT = W+ − W−
can be used to describe the reachable markings in terms of matrix multiplication, as follows. For any sequence of transitions w, write o(w) for the vector that maps every transition to its number of occurrences in w. Then, we have
M′ = M0 + WT · o(w).
It must be required thatwis a firing sequence; allowing arbitrary sequences of transitions will generally produce a larger set.
W−=[∗t1t2p110p201p301p400],W+=[∗t1t2p101p210p310p401],WT=[∗t1t2p1−11p21−1p31−1p401]{\displaystyle W^{-}={\begin{bmatrix}*&t1&t2\\p1&1&0\\p2&0&1\\p3&0&1\\p4&0&0\end{bmatrix}},\ W^{+}={\begin{bmatrix}*&t1&t2\\p1&0&1\\p2&1&0\\p3&1&0\\p4&0&1\end{bmatrix}},\ W^{T}={\begin{bmatrix}*&t1&t2\\p1&-1&1\\p2&1&-1\\p3&1&-1\\p4&0&1\end{bmatrix}}}
M0=[1021]{\displaystyle M_{0}={\begin{bmatrix}1&0&2&1\end{bmatrix}}}
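With the example matrices above, the relation M′ = M0 + WT · o(w) can be checked numerically; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Example matrices from the text: 4 places (p1..p4), 2 transitions (t1, t2).
W_minus = np.array([[1, 0], [0, 1], [0, 1], [0, 0]])  # tokens consumed
W_plus  = np.array([[0, 1], [1, 0], [1, 0], [0, 1]])  # tokens produced
W_T = W_plus - W_minus                                # incidence matrix
M0 = np.array([1, 0, 2, 1])                           # initial marking

# Firing sequence w = <t1>: o(w) counts occurrences of each transition.
o_w = np.array([1, 0])

# t1 is enabled in M0 because M0 >= W_minus[:, 0] componentwise.
assert np.all(M0 >= W_minus[:, 0])

M1 = M0 + W_T @ o_w
print(M1)  # [0 1 3 1]
```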
Meseguer and Montanari considered a kind of symmetric monoidal categories known as Petri categories.[5]
One thing that makes Petri nets interesting is that they provide a balance between modeling power and analyzability: many things one would like to know about concurrent systems can be automatically determined for Petri nets, although some of those things are very expensive to determine in the general case. Several subclasses of Petri nets have been studied that can still model interesting classes of concurrent systems, while these determinations become easier.
An overview of suchdecision problems, with decidability andcomplexityresults for Petri nets and some subclasses, can be found in Esparza and Nielsen (1995).[6]
Thereachability problemfor Petri nets is to decide, given a Petri netNand a markingM, whetherM∈R(N){\displaystyle M\in R(N)}.
Deciding reachability amounts to walking the reachability graph defined above until either the requested marking is reached or it becomes clear that it cannot be reached. This is harder than it may seem at first: the reachability graph is generally infinite, and it is not easy to determine when it is safe to stop.
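A breadth-first walk with an explicit cut-off makes this concrete; a sketch in which the net and the exploration limit are invented for illustration, and hitting the limit only means "unknown":

```python
from collections import deque

# Arc weights per transition, as (inputs, outputs) dicts over place indices.
weights = [({0: 1}, {1: 1}),   # t0: moves a token from place 0 to place 1
           ({1: 2}, {0: 1})]   # t1: consumes two tokens from place 1

def successors(m):
    """All markings reachable in one step from marking m (a tuple)."""
    for ins, outs in weights:
        if all(m[p] >= w for p, w in ins.items()):
            new = list(m)
            for p, w in ins.items():
                new[p] -= w
            for p, w in outs.items():
                new[p] += w
            yield tuple(new)

def reachable(m0, target, limit=10_000):
    """Breadth-first walk of the reachability graph.

    The graph may be infinite, so the search stops after `limit`
    markings and returns None ("unknown") instead of looping forever.
    """
    seen, queue = {m0}, deque([m0])
    while queue and len(seen) < limit:
        m = queue.popleft()
        if m == target:
            return True
        for n in successors(m):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return False if not queue else None

print(reachable((2, 0), (0, 2)))  # True: fire t0 twice
```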
In fact, this problem was shown to beEXPSPACE-hard[7]years before it was shown to be decidable at all (Mayr, 1981). Papers continue to be published on how to do it efficiently.[8]In 2018, Czerwiński et al. improved the lower bound and showed that the problem is notELEMENTARY.[9]In 2021, this problem was shown to beAckermann-complete(thus notprimitive recursive), independently by Jerome Leroux[10]and by Wojciech Czerwiński and Łukasz Orlikowski.[11]These results thus close the long-standing complexity gap.
While reachability seems to be a good tool to find erroneous states, for practical problems the constructed graph usually has far too many states to calculate. To alleviate this problem,linear temporal logicis usually used in conjunction with thetableau methodto prove that such states cannot be reached. Linear temporal logic uses thesemi-decision techniqueto find if indeed a state can be reached, by finding a set of necessary conditions for the state to be reached then proving that those conditions cannot be satisfied.
Petri nets can be described as having different degrees of liveness L1−L4. A Petri net (N, M0) is called Lk-live if and only if all of its transitions are Lk-live, where a transition is: dead, if it can never fire, i.e. it occurs in no firing sequence; L1-live (potentially fireable), if and only if it may fire, i.e. it occurs in some firing sequence; L2-live, if and only if it can fire arbitrarily often, i.e. for every positive integer k it occurs at least k times in some firing sequence; L3-live, if and only if it can fire infinitely often, i.e. it occurs infinitely often in some infinite firing sequence; L4-live (live), if and only if it may always fire, i.e. it is L1-live in every reachable marking.
Note that these are increasingly stringent requirements:Lj+1{\displaystyle L_{j+1}}-liveness impliesLj{\displaystyle L_{j}}-liveness, forj∈1,2,3{\textstyle \textstyle {j\in {1,2,3}}}.
These definitions are in accordance with Murata's overview,[12]which additionally usesL0{\displaystyle L_{0}}-liveas a term fordead.
A place in a Petri net is called k-bounded if it does not contain more than k tokens in any reachable marking, including the initial marking; it is said to be safe if it is 1-bounded; it is bounded if it is k-bounded for some k.
A (marked) Petri net is calledk-bounded,safe, orboundedwhen all of its places are.
A Petri net (graph) is called(structurally) boundedif it is bounded for every possible initial marking.
A Petri net is bounded if and only if its reachability graph is finite.
Boundedness is decidable by looking at covering, by constructing the Karp–Miller tree.
It can be useful to explicitly impose a bound on places in a given net.
This can be used to model limited system resources.
Some definitions of Petri nets explicitly allow this as a syntactic feature.[13]Formally,Petri nets with place capacitiescan be defined as tuples(S,T,W,C,M0){\displaystyle (S,T,W,C,M_{0})}, where(S,T,W,M0){\displaystyle (S,T,W,M_{0})}is a Petri net,C:P→∣N{\displaystyle C:P\rightarrow \!\!\!\shortmid \mathbb {N} }an assignment of capacities to (some or all) places, and the transition relation is the usual one restricted to the markings in which each place with a capacity has at most that many tokens.
For example, if in the netN, both places are assigned capacity 2, we obtain a Petri net with place capacities, sayN2; its reachability graph is displayed on the right.
Alternatively, places can be made bounded by extending the net. To be exact,
a place can be madek-bounded by adding a "counter-place" with flow opposite to that of the place, and adding tokens to make the total in both placesk.
As well as for discrete events, there are Petri nets for continuous and hybrid discrete-continuous processes[14]that are useful in discrete, continuous and hybridcontrol theory,[15]and related to discrete, continuous and hybridautomata.
There are many extensions to Petri nets. Some of them are completely backwards-compatible (e.g.coloured Petri nets) with the original Petri net, some add properties that cannot be modelled in the original Petri net formalism (e.g. timed Petri nets). Although backwards-compatible models do not extend the computational power of Petri nets, they may have more succinct representations and may be more convenient for modeling.[16]Extensions that cannot be transformed into Petri nets are sometimes very powerful, but usually lack the full range of mathematical tools available to analyse ordinary Petri nets.
The termhigh-level Petri netis used for many Petri net formalisms that extend the basic P/T net formalism; this includes coloured Petri nets, hierarchical Petri nets such asNets within Nets, and all other extensions sketched in this section. The term is also used specifically for the type of coloured nets supported byCPN Tools.
A short list of possible extensions follows:
There are many more extensions to Petri nets. However, it is important to keep in mind that as the complexity of the net increases in terms of extended properties, it becomes harder to use standard tools to evaluate certain properties of the net. For this reason, it is a good idea to use the simplest net type possible for a given modelling task.
Instead of extending the Petri net formalism, we can also look at restricting it, and look at particular types of Petri nets, obtained by restricting the syntax in a particular way. Ordinary Petri nets are the nets where all arc weights are 1. Restricting further, the following types of ordinary Petri nets are commonly used and studied:
Workflow nets (WF-nets) are a subclass of Petri nets intended to model the workflow of process activities.[24] The WF-net transitions are assigned to tasks or activities, and places are assigned to the pre/post conditions.
The WF-nets have additional structural and operational requirements, mainly the addition of a single input (source) place with no previous transitions, and output place (sink) with no following transitions. Accordingly, start and termination markings can be defined that represent the process status.
A WF-net has the soundness property[24] if a process with a start marking of k tokens in its source place can reach the termination marking with k tokens in its sink place (such a net is called a k-sound WF-net) and, additionally, every transition in the process can fire (i.e., for each transition there is a reachable state in which the transition is enabled).
A general sound (G-sound) WF-net is defined as beingk-sound for everyk> 0.[25]
A directedpathin the Petri net is defined as the sequence of nodes (places and transitions) linked by the directed arcs. Anelementary pathincludes every node in the sequence only once.
Awell-handledPetri net is a net in which there are no fully distinct elementary paths between a place and a transition (or transition and a place), i.e., if there are two paths between the pair of nodes then these paths share a node.
An acyclic well-handled WF-net is sound (G-sound).[26]
Extended WF-net is a Petri net that is composed of a WF-net with additional transition t (feedback transition). The sink place is connected as the input place of transition t and the source place as its output place. Firing of the transition causes iteration of the process (Note, the extended WF-net is not a WF-net).[24]
WRI (Well-handled with Regular Iteration) WF-net, is an extended acyclic well-handled WF-net.
WRI-WF-net can be built as composition of nets, i.e., replacing a transition within a WRI-WF-net with a subnet which is a WRI-WF-net. The result is also WRI-WF-net. WRI-WF-nets are G-sound,[26]therefore by using only WRI-WF-net building blocks, one can get WF-nets that are G-sound by construction.
Thedesign structure matrix(DSM) can model process relations, and be utilized for process planning. TheDSM-netsare realization of DSM-based plans into workflow processes by Petri nets, and are equivalent to WRI-WF-nets. The DSM-net construction process ensures the soundness property of the resulting net.
Other ways of modelling concurrent computation have been proposed, includingvector addition systems,communicating finite-state machines,Kahn process networks,process algebra, theactor model, andtrace theory.[27]Different models provide tradeoffs of concepts such ascompositionality,modularity, and locality.
An approach to relating some of these models of concurrency is proposed in the chapter by Winskel and Nielsen.[28]
|
https://en.wikipedia.org/wiki/Petri_net
|
Many notable artificial intelligence artists have created a wide variety of artificial intelligence art from the 1960s to today. These include:
|
https://en.wikipedia.org/wiki/List_of_artificial_intelligence_artists
|
In mathematical logic, the lambda calculus (also written as λ-calculus) is a formal system for expressing computation based on function abstraction and application using variable binding and substitution. Untyped lambda calculus, the topic of this article, is a universal machine, a model of computation that can be used to simulate any Turing machine (and vice versa). It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics. In 1936, Church found a formulation which was logically consistent, and documented it in 1940.
Lambda calculus consists of constructinglambda termsand performingreductionoperations on them. A term is defined as any valid lambda calculus expression. In the simplest form of lambda calculus, terms are built using only the following rules:[a]
The reduction operations include: α-conversion, (λx.M[x]) → (λy.M[y]), renaming the bound variables in an expression, used to avoid name collisions; and β-reduction, ((λx.M) N) → (M[x := N]), replacing the bound variable with the argument expression in the body of the abstraction.
IfDe Bruijn indexingis used, then α-conversion is no longer required as there will be no name collisions. Ifrepeated applicationof the reduction steps eventually terminates, then by theChurch–Rosser theoremit will produce aβ-normal form.
Variable names are not needed if using a universal lambda function, such asIota and Jot, which can create any function behavior by calling it on itself in various combinations.
Lambda calculus isTuring complete, that is, it is a universalmodel of computationthat can be used to simulate anyTuring machine.[3]Its namesake, the Greek letter lambda (λ), is used in lambda expressions and lambda terms to denotebindinga variable in afunction.
Lambda calculus may beuntypedortyped. In typed lambda calculus, functions can be applied only if they are capable of accepting the given input's "type" of data. Typed lambda calculi are strictlyweakerthan the untyped lambda calculus, which is the primary subject of this article, in the sense thattyped lambda calculi can express lessthan the untyped calculus can. On the other hand, typed lambda calculi allow more things to be proven. For example, insimply typed lambda calculus, it is a theorem that every evaluation strategy terminates for every simply typed lambda-term, whereas evaluation of untyped lambda-terms need not terminate (seebelow). One reason there are many different typed lambda calculi has been the desire to do more (of what the untyped calculus can do) without giving up on being able to prove strong theorems about the calculus.
Lambda calculus has applications in many different areas inmathematics,philosophy,[4]linguistics,[5][6]andcomputer science.[7][8]Lambda calculus has played an important role in the development of thetheoryofprogramming languages.Functional programminglanguages implement lambda calculus. Lambda calculus is also a current research topic incategory theory.[9]
Lambda calculus was introduced by mathematicianAlonzo Churchin the 1930s as part of an investigation into thefoundations of mathematics.[10][c]The original system was shown to belogically inconsistentin 1935 whenStephen KleeneandJ. B. Rosserdeveloped theKleene–Rosser paradox.[11][12]
Subsequently, in 1936 Church isolated and published just the portion relevant to computation, what is now called the untyped lambda calculus.[13]In 1940, he also introduced a computationally weaker, but logically consistent system, known as thesimply typed lambda calculus.[14]
Until the 1960s when its relation to programming languages was clarified, the lambda calculus was only a formalism. Thanks toRichard Montagueand other linguists' applications in the semantics of natural language, the lambda calculus has begun to enjoy a respectable place in both linguistics[15]and computer science.[16]
There is some uncertainty over the reason for Church's use of the Greek letterlambda(λ) as the notation for function-abstraction in the lambda calculus, perhaps in part due to conflicting explanations by Church himself. According to Cardone and Hindley (2006):
By the way, why did Church choose the notation "λ"? In [an unpublished 1964 letter to Harald Dickson] he stated clearly that it came from the notation "x^{\displaystyle {\hat {x}}}" used for class-abstraction byWhitehead and Russell, by first modifying "x^{\displaystyle {\hat {x}}}" to "∧x{\displaystyle \land x}" to distinguish function-abstraction from class-abstraction, and then changing "∧{\displaystyle \land }" to "λ" for ease of printing.
This origin was also reported in [Rosser, 1984, p.338]. On the other hand, in his later years Church told two enquirers that the choice was more accidental: a symbol was needed and λ just happened to be chosen.
Dana Scotthas also addressed this question in various public lectures.[17]Scott recounts that he once posed a question about the origin of the lambda symbol to Church's former student and son-in-law John W. Addison Jr., who then wrote his father-in-law a postcard:
Dear Professor Church,
Russell had theiota operator, Hilbert had theepsilon operator. Why did you choose lambda for your operator?
According to Scott, Church's entire response consisted of returning the postcard with the following annotation: "eeny, meeny, miny, moe".
Computable functions are a fundamental concept within computer science and mathematics. The lambda calculus provides simple semantics for computation which are useful for formally studying properties of computation. The lambda calculus incorporates two simplifications that make its semantics simple. The first simplification is that the lambda calculus treats functions "anonymously"; it does not give them explicit names. For example, the function
square_sum(x, y) = x² + y²
can be rewritten inanonymous formas
(which is read as "a tuple of x and y is mapped to x² + y²").[d] Similarly, the function
id(x) = x
can be rewritten in anonymous form as
x ↦ x,
where the input is simply mapped to itself.[d]
The second simplification is that the lambda calculus only uses functions of a single input. An ordinary function that requires two inputs, for instance the square_sum function, can be reworked into an equivalent function that accepts a single input, and as output returns another function, that in turn accepts a single input. For example,
(x, y) ↦ x² + y²
can be reworked into
x ↦ (y ↦ x² + y²)
This method, known as currying, transforms a function that takes multiple arguments into a chain of functions each with a single argument.
Function application of the square_sum function to the arguments (5, 2) yields at once
((x, y) ↦ x² + y²)(5, 2) = 5² + 2² = 29,
whereas evaluation of the curried version requires one more step:
(x ↦ (y ↦ x² + y²))(5)(2) = (y ↦ 5² + y²)(2) = 5² + 2² = 29
to arrive at the same result.
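The two forms can be compared directly in Python; a minimal sketch, with names chosen to mirror the text:

```python
def square_sum(x, y):
    """Ordinary two-argument function."""
    return x**2 + y**2

def square_sum_curried(x):
    """Curried form: takes x, returns a function awaiting y."""
    return lambda y: x**2 + y**2

assert square_sum(5, 2) == 29          # applied to both arguments at once
assert square_sum_curried(5)(2) == 29  # one extra step: apply to 5, then to 2
```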
The lambda calculus consists of a language oflambda terms, that are defined by a certain formal syntax, and a set of transformation rules for manipulating the lambda terms. These transformation rules can be viewed as anequational theoryor as anoperational definition.
As described above, having no names, all functions in the lambda calculus are anonymous functions. They only accept one input variable, socurryingis used to implement functions of several variables.
The syntax of the lambda calculus defines some expressions as valid lambda calculus expressions and some as invalid, just as some strings of characters are valid computer programs and some are not. A valid lambda calculus expression is called a "lambda term".
The following three rules give an inductive definition that can be applied to build all syntactically valid lambda terms:[e] a variable x is itself a valid lambda term; if t is a lambda term and x is a variable, then (λx.t) is a lambda term (called an abstraction); if t and s are lambda terms, then (ts) is a lambda term (called an application).
Nothing else is a lambda term. That is, a lambda term is valid if and only if it can be obtained by repeated application of these three rules. For convenience, some parentheses can be omitted when writing a lambda term. For example, the outermost parentheses are usually not written. See§ Notation, below, for an explicit description of which parentheses are optional. It is also common to extend the syntax presented here with additional operations, which allows making sense of terms such asλx.x2.{\displaystyle \lambda x.x^{2}.}The focus of this article is the pure lambda calculus without extensions, but lambda terms extended with arithmetic operations are used for explanatory purposes.
Anabstractionλx.t{\displaystyle \lambda x.t}denotes an§ anonymous function[g]that takes a single inputxand returnst. For example,λx.(x2+2){\displaystyle \lambda x.(x^{2}+2)}is an abstraction representing the functionf{\displaystyle f}defined byf(x)=x2+2,{\displaystyle f(x)=x^{2}+2,}using the termx2+2{\displaystyle x^{2}+2}fort. The namef{\displaystyle f}is superfluous when using abstraction. The syntax(λx.t){\displaystyle (\lambda x.t)}bindsthe variablexin the termt. The definition of a function with an abstraction merely "sets up" the function but does not invoke it.
Anapplicationts{\displaystyle ts}represents the application of a functiontto an inputs, that is, it represents the act of calling functionton inputsto producet(s){\displaystyle t(s)}.
A lambda term may refer to a variable that has not been bound, such as the termλx.(x+y){\displaystyle \lambda x.(x+y)}(which represents the function definitionf(x)=x+y{\displaystyle f(x)=x+y}). In this term, the variableyhas not been defined and is considered an unknown. The abstractionλx.(x+y){\displaystyle \lambda x.(x+y)}is a syntactically valid term and represents a function that adds its input to the yet-unknowny.
Parentheses may be used and might be needed to disambiguate terms. For example,
1. λx.((λx.x) x)
2. (λx.(λx.x)) x
The examples 1 and 2 denote different terms, differing only in where the parentheses are placed. They have different meanings: example 1 is a function definition, while example 2 is a function application. The lambda variablexis a placeholder in both examples.
Here,example 1definesa functionλx.B{\displaystyle \lambda x.B}, whereB{\displaystyle B}is(λx.x)x{\displaystyle (\lambda x.x)x}, an anonymous function(λx.x){\displaystyle (\lambda x.x)}, with inputx{\displaystyle x}; while example 2,M{\displaystyle M}N{\displaystyle N}, is M applied to N, whereM{\displaystyle M}is the lambda term(λx.(λx.x)){\displaystyle (\lambda x.(\lambda x.x))}being applied to the inputN{\displaystyle N}which isx{\displaystyle x}. Both examples 1 and 2 would evaluate to theidentity functionλx.x{\displaystyle \lambda x.x}.
In lambda calculus, functions are taken to be 'first class values', so functions may be used as the inputs, or be returned as outputs from other functions.
For example, the lambda termλx.x{\displaystyle \lambda x.x}represents theidentity function,x↦x{\displaystyle x\mapsto x}. Further,λx.y{\displaystyle \lambda x.y}represents theconstant functionx↦y{\displaystyle x\mapsto y}, the function that always returnsy{\displaystyle y}, no matter the input. As an example of a function operating on functions, thefunction compositioncan be defined asλf.λg.λx.(f(gx)){\displaystyle \lambda f.\lambda g.\lambda x.(f(gx))}.
There are several notions of "equivalence" and "reduction" that allow lambda terms to be "reduced" to "equivalent" lambda terms.
A basic form of equivalence, definable on lambda terms, isalpha equivalence. It captures the intuition that the particular choice of a bound variable, in an abstraction, does not (usually) matter.
For instance,λx.x{\displaystyle \lambda x.x}andλy.y{\displaystyle \lambda y.y}are alpha-equivalent lambda terms, and they both represent the same function (the identity function).
The termsx{\displaystyle x}andy{\displaystyle y}are not alpha-equivalent, because they are not bound in an abstraction.
In many presentations, it is usual to identify alpha-equivalent lambda terms.
The following definitions are necessary in order to be able to define β-reduction:
The free variables[h] of a term are those variables not bound by an abstraction. The set of free variables of an expression is defined inductively: the free variables of x are just x, for a variable x; the set of free variables of λx.t is the set of free variables of t with x removed; and the set of free variables of ts is the union of the sets of free variables of t and of s.
For example, the lambda term representing the identityλx.x{\displaystyle \lambda x.x}has no free variables, but the functionλx.yx{\displaystyle \lambda x.yx}has a single free variable,y{\displaystyle y}.
Supposet{\displaystyle t},s{\displaystyle s}andr{\displaystyle r}are lambda terms, andx{\displaystyle x}andy{\displaystyle y}are variables.
The notation t[x:=r]{\displaystyle t[x:=r]} indicates substitution of r for x in t in a capture-avoiding manner. This is defined so that:
x[x := r] = r
y[x := r] = y, if x ≠ y
(ts)[x := r] = (t[x := r])(s[x := r])
(λx.t)[x := r] = λx.t
(λy.t)[x := r] = λy.(t[x := r]), if x ≠ y and y does not appear among the free variables of r ("y is fresh for r")
For example,(λx.x)[y:=y]=λx.(x[y:=y])=λx.x{\displaystyle (\lambda x.x)[y:=y]=\lambda x.(x[y:=y])=\lambda x.x}, and((λx.y)x)[x:=y]=((λx.y)[x:=y])(x[x:=y])=(λx.y)y{\displaystyle ((\lambda x.y)x)[x:=y]=((\lambda x.y)[x:=y])(x[x:=y])=(\lambda x.y)y}.
The freshness condition (requiring thaty{\displaystyle y}is not in thefree variablesofr{\displaystyle r}) is crucial in order to ensure that substitution does not change the meaning of functions.
For example, a substitution that ignores the freshness condition could lead to errors:(λx.y)[y:=x]=λx.(y[y:=x])=λx.x{\displaystyle (\lambda x.y)[y:=x]=\lambda x.(y[y:=x])=\lambda x.x}. This erroneous substitution would turn the constant functionλx.y{\displaystyle \lambda x.y}into the identityλx.x{\displaystyle \lambda x.x}.
In general, failure to meet the freshness condition can be remedied by alpha-renaming first, with a suitable fresh variable.
For example, switching back to our correct notion of substitution, in(λx.y)[y:=x]{\displaystyle (\lambda x.y)[y:=x]}the abstraction can be renamed with a fresh variablez{\displaystyle z}, to obtain(λz.y)[y:=x]=λz.(y[y:=x])=λz.x{\displaystyle (\lambda z.y)[y:=x]=\lambda z.(y[y:=x])=\lambda z.x}, and the meaning of the function is preserved by substitution.
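The substitution rules, including the alpha-renaming step used when the freshness condition fails, can be written out directly; a minimal Python sketch, with terms encoded as tuples (an encoding chosen here purely for illustration):

```python
import itertools

# Terms: ('var', x) | ('lam', x, body) | ('app', fun, arg)
fresh = (f"v{i}" for i in itertools.count())

def free_vars(t):
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, r):
    """Compute t[x := r], renaming bound variables to avoid capture."""
    tag = t[0]
    if tag == 'var':
        return r if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, r), subst(t[2], x, r))
    y, body = t[1], t[2]
    if y == x:                  # x is shadowed: nothing to substitute
        return t
    if y in free_vars(r):       # freshness condition fails:
        z = next(fresh)         # alpha-rename the binder first
        body = subst(body, y, ('var', z))
        y = z
    return ('lam', y, subst(body, x, r))

# (lambda x. y)[y := x] alpha-renames the binder instead of capturing x:
print(subst(('lam', 'x', ('var', 'y')), 'y', ('var', 'x')))
# ('lam', 'v0', ('var', 'x'))  -- i.e. the term lambda v0. x
```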
The β-reduction rule[b]states that an application of the form(λx.t)s{\displaystyle (\lambda x.t)s}reduces to the termt[x:=s]{\displaystyle t[x:=s]}. The notation(λx.t)s→t[x:=s]{\displaystyle (\lambda x.t)s\to t[x:=s]}is used to indicate that(λx.t)s{\displaystyle (\lambda x.t)s}β-reduces tot[x:=s]{\displaystyle t[x:=s]}.
For example, for everys{\displaystyle s},(λx.x)s→x[x:=s]=s{\displaystyle (\lambda x.x)s\to x[x:=s]=s}. This demonstrates thatλx.x{\displaystyle \lambda x.x}really is the identity.
Similarly,(λx.y)s→y[x:=s]=y{\displaystyle (\lambda x.y)s\to y[x:=s]=y}, which demonstrates thatλx.y{\displaystyle \lambda x.y}is a constant function.
The lambda calculus may be seen as an idealized version of a functional programming language, likeHaskellorStandard ML. Under this view,β-reduction corresponds to a computational step. This step can be repeated by additional β-reductions until there are no more applications left to reduce. In the untyped lambda calculus, as presented here, this reduction process may not terminate. For instance, consider the termΩ=(λx.xx)(λx.xx){\displaystyle \Omega =(\lambda x.xx)(\lambda x.xx)}.
Here(λx.xx)(λx.xx)→(xx)[x:=λx.xx]=(x[x:=λx.xx])(x[x:=λx.xx])=(λx.xx)(λx.xx){\displaystyle (\lambda x.xx)(\lambda x.xx)\to (xx)[x:=\lambda x.xx]=(x[x:=\lambda x.xx])(x[x:=\lambda x.xx])=(\lambda x.xx)(\lambda x.xx)}.
That is, the term reduces to itself in a single β-reduction, and therefore the reduction process will never terminate.
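The same divergence appears when Ω is transcribed into an eagerly evaluated host language; a minimal Python sketch, where the unbounded reduction surfaces as a RecursionError:

```python
import sys
sys.setrecursionlimit(1000)

omega = lambda x: x(x)      # the term lambda x. x x
try:
    omega(omega)            # Omega = (lambda x. x x)(lambda x. x x)
except RecursionError:
    print("no normal form: the reduction never terminates")
```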
Another aspect of the untyped lambda calculus is that it does not distinguish between different kinds of data. For instance, it may be desirable to write a function that only operates on numbers. However, in the untyped lambda calculus, there is no way to prevent a function from being applied totruth values, strings, or other non-number objects.
Lambda expressions are composed of: variables v1, v2, ...; the abstraction symbols λ (lambda) and . (dot); and parentheses ().
The set of lambda expressions, Λ, can be defined inductively:
1. If x is a variable, then x ∈ Λ.
2. If x is a variable and M ∈ Λ, then (λx.M) ∈ Λ.
3. If M, N ∈ Λ, then (M N) ∈ Λ.
Instances of rule 2 are known as abstractions and instances of rule 3 are known as applications.[18] See § Reducible expression.
This set of rules may be written in Backus–Naur form as:
⟨expression⟩ ::= ⟨variable⟩ | ⟨abstraction⟩ | ⟨application⟩
⟨abstraction⟩ ::= λ⟨variable⟩.⟨expression⟩
⟨application⟩ ::= (⟨expression⟩ ⟨expression⟩)
To keep the notation of lambda expressions uncluttered, the following conventions are usually applied: outermost parentheses are dropped (M N instead of (M N)); applications are assumed to be left-associative (M N P means (M N) P); the body of an abstraction extends as far right as possible (λx.M N means λx.(M N), not (λx.M) N); and a sequence of abstractions is contracted (λx.λy.λz.N is abbreviated as λxyz.N).
The abstraction operator, λ, is said to bind its variable wherever it occurs in the body of the abstraction. Variables that fall within the scope of an abstraction are said to bebound. In an expression λx.M, the part λxis often calledbinder, as a hint that the variablexis getting bound by prepending λxtoM. All other variables are calledfree. For example, in the expression λy.x x y,yis a bound variable andxis a free variable. Also a variable is bound by its nearest abstraction. In the following example the single occurrence ofxin the expression is bound by the second lambda: λx.y(λx.z x).
The set of free variables of a lambda expression, M, is denoted as FV(M) and is defined by recursion on the structure of the terms, as follows:
FV(x) = {x}, where x is a variable
FV(λx.M) = FV(M) \ {x}
FV(M N) = FV(M) ∪ FV(N)
An expression that contains no free variables is said to beclosed. Closed lambda expressions are also known ascombinatorsand are equivalent to terms incombinatory logic.
The meaning of lambda expressions is defined by how expressions can be reduced.[22]
There are three kinds of reduction: α-conversion, changing bound variables; β-reduction, applying functions to their arguments; and η-reduction, which captures a notion of extensionality.
We also speak of the resulting equivalences: two expressions areα-equivalent, if they can be α-converted into the same expression. β-equivalence and η-equivalence are defined similarly.
The termredex, short forreducible expression, refers to subterms that can be reduced by one of the reduction rules. For example, (λx.M)Nis a β-redex in expressing the substitution ofNforxinM. The expression to which a redex reduces is called itsreduct; the reduct of (λx.M)NisM[x:=N].[b]
Ifxis not free inM, λx.M xis also an η-redex, with a reduct ofM.
α-conversion(alpha-conversion), sometimes known as α-renaming,[23]allows bound variable names to be changed. For example, α-conversion of λx.xmight yield λy.y. Terms that differ only by α-conversion are calledα-equivalent. Frequently, in uses of lambda calculus, α-equivalent terms are considered to be equivalent.
The precise rules for α-conversion are not completely trivial. First, when α-converting an abstraction, the only variable occurrences that are renamed are those that are bound to the same abstraction. For example, an α-conversion of λx.λx.xcould result in λy.λx.x, but it couldnotresult in λy.λx.y. The latter has a different meaning from the original. This is analogous to the programming notion ofvariable shadowing.
Second, α-conversion is not possible if it would result in a variable getting captured by a different abstraction. For example, if we replacexwithyin λx.λy.x, we get λy.λy.y, which is not at all the same.
In programming languages with static scope, α-conversion can be used to makename resolutionsimpler by ensuring that no variable namemasksa name in a containingscope(seeα-renaming to make name resolution trivial).
In theDe Bruijn indexnotation, any two α-equivalent terms are syntactically identical.
Substitution, written M[x := N], is the process of replacing all free occurrences of the variable x in the expression M with expression N. Substitution on terms of the lambda calculus is defined by recursion on the structure of terms, as follows (note: x and y are only variables while M and N are any lambda expression):
x[x := N] = N
y[x := N] = y, if x ≠ y
(M1 M2)[x := N] = (M1[x := N]) (M2[x := N])
(λx.M)[x := N] = λx.M
(λy.M)[x := N] = λy.(M[x := N]), if x ≠ y and y ∉ FV(N)
To substitute into an abstraction, it is sometimes necessary to α-convert the expression. For example, it is not correct for (λx.y)[y:=x] to result in λx.x, because the substitutedxwas supposed to be free but ended up being bound. The correct substitution in this case is λz.x,up toα-equivalence. Substitution is defined uniquely up to α-equivalence.See Capture-avoiding substitutionsabove.
β-reduction(betareduction) captures the idea of function application. β-reduction is defined in terms of substitution: the β-reduction of (λx.M)NisM[x:=N].[b]
For example, assuming some encoding of 2, 7, ×, we have the following β-reduction: (λn.n× 2) 7 → 7 × 2.
β-reduction can be seen to be the same as the concept oflocal reducibilityinnatural deduction, via theCurry–Howard isomorphism.
η-conversion(etaconversion) expresses the idea ofextensionality,[24]which in this context is that two functions are the sameif and only ifthey give the same result for all arguments. η-conversion converts between λx.fxandfwheneverxdoes not appear free inf.
η-reduction changes λx.fxtof, and η-expansion changesfto λx.fx, under the same requirement thatxdoes not appear free inf.
η-conversion can be seen to be the same as the concept oflocal completenessinnatural deduction, via theCurry–Howard isomorphism.
For the untyped lambda calculus, β-reduction as arewriting ruleis neitherstrongly normalisingnorweakly normalising.
However, it can be shown that β-reduction isconfluentwhen working up to α-conversion (i.e. we consider two normal forms to be equal if it is possible to α-convert one into the other).
Therefore, both strongly normalising terms and weakly normalising terms have a unique normal form. For strongly normalising terms, any reduction strategy is guaranteed to yield the normal form, whereas for weakly normalising terms, some reduction strategies may fail to find it.
The basic lambda calculus may be used to modelarithmetic, Booleans, data structures, and recursion, as illustrated in the following sub-sectionsi,ii,iii, and§ iv.
There are several possible ways to define the natural numbers in lambda calculus, but by far the most common are the Church numerals, which can be defined as follows:
0 := λf.λx.x
1 := λf.λx.f x
2 := λf.λx.f (f x)
3 := λf.λx.f (f (f x))
and so on. Or using the alternative syntax presented above in Notation:
0 := λfx.x
1 := λfx.f x
2 := λfx.f (f x)
3 := λfx.f (f (f x))
A Church numeral is ahigher-order function—it takes a single-argument functionf, and returns another single-argument function. The Church numeralnis a function that takes a functionfas argument and returns then-th composition off, i.e. the functionfcomposed with itselfntimes. This is denotedf(n)and is in fact then-th power off(considered as an operator);f(0)is defined to be the identity function. Such repeated compositions (of a single functionf) obey thelaws of exponents, which is why these numerals can be used for arithmetic. (In Church's original lambda calculus, the formal parameter of a lambda expression was required to occur at least once in the function body, which made the above definition of0impossible.)
One way of thinking about the Church numeral n, which is often useful when analysing programs, is as an instruction 'repeat n times'. For example, using the PAIR and NIL functions defined below, one can define a function that constructs a (linked) list of n elements all equal to x by repeating 'prepend another x element' n times, starting from an empty list. The lambda term is
λn.λx.n (PAIR x) NIL
By varying what is being repeated, and varying what argument that function being repeated is applied to, a great many different effects can be achieved.
We can define a successor function, which takes a Church numeral n and returns n + 1 by adding another application of f, where '(mf)x' means the function 'f' is applied 'm' times on 'x':
SUCC := λn.λf.λx.f (n f x)
Because the m-th composition of f composed with the n-th composition of f gives the (m + n)-th composition of f, addition can be defined as follows:
PLUS := λm.λn.λf.λx.m f (n f x)
PLUS can be thought of as a function taking two natural numbers as arguments and returning a natural number; it can be verified that
PLUS 2 3
and
5
are β-equivalent lambda expressions. Since adding m to a number n can be accomplished by adding 1 m times, an alternative definition is:
PLUS := λm.λn.m SUCC n
Similarly, multiplication can be defined as
MULT := λm.λn.λf.m (n f)
Alternatively
MULT := λm.λn.m (PLUS n) 0
since multiplying m and n is the same as repeating the add-n function m times and then applying it to zero.
Exponentiation has a rather simple rendering in Church numerals, namely
POW := λb.λe.e b
The predecessor function defined by PRED n = n − 1 for a positive integer n and PRED 0 = 0 is considerably more difficult. The formula
PRED := λn.λf.λx.n (λg.λh.h (g f)) (λu.x) (λu.u)
can be validated by showing inductively that if T denotes (λg.λh.h (g f)), then T(n)(λu.x) = (λh.h(f(n−1)(x))) for n > 0. Two other definitions of PRED are given below, one using conditionals and the other using pairs. With the predecessor function, subtraction is straightforward. Defining
SUB := λm.λn.n PRED m,
SUB m n yields m − n when m > n and 0 otherwise.
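Because Python closures can stand in for lambda terms, the arithmetic above can be tested directly; a minimal sketch, in which the to_int decoder is a helper added for illustration:

```python
# Church numerals as Python lambdas: the numeral n takes f and
# returns the n-fold composition of f.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MULT = lambda m: lambda n: lambda f: m(n(f))
POW  = lambda b: lambda e: e(b)
PRED = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)
SUB  = lambda m: lambda n: n(PRED)(m)

def to_int(n):
    """Decode a Church numeral by counting how often f is applied."""
    return n(lambda k: k + 1)(0)

TWO, THREE = SUCC(SUCC(ZERO)), SUCC(SUCC(SUCC(ZERO)))
print(to_int(PLUS(TWO)(THREE)))   # 5
print(to_int(MULT(TWO)(THREE)))   # 6
print(to_int(POW(TWO)(THREE)))    # 8, i.e. 2^3
print(to_int(SUB(THREE)(TWO)))    # 1
```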
By convention, the following two definitions (known as Church Booleans) are used for the Boolean values TRUE and FALSE:
TRUE := λx.λy.x
FALSE := λx.λy.y
Then, with these two lambda terms, we can define some logic operators (these are just possible formulations; other expressions could be equally correct):
AND := λp.λq.p q p
OR := λp.λq.p p q
NOT := λp.p FALSE TRUE
IFTHENELSE := λp.λa.λb.p a b
We are now able to compute some logic functions, for example:
AND TRUE FALSE ≡ (λp.λq.p q p) TRUE FALSE →β TRUE FALSE TRUE ≡ (λx.λy.x) FALSE TRUE →β FALSE
and we see that AND TRUE FALSE is equivalent to FALSE.
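The Boolean encodings can be exercised the same way; a minimal Python sketch, where to_bool is a helper added for illustration:

```python
TRUE  = lambda x: lambda y: x    # selects its first argument
FALSE = lambda x: lambda y: y    # selects its second argument

AND = lambda p: lambda q: p(q)(p)
OR  = lambda p: lambda q: p(p)(q)
NOT = lambda p: p(FALSE)(TRUE)

def to_bool(b):
    """Decode a Church Boolean by applying it to Python's True/False."""
    return b(True)(False)

print(to_bool(AND(TRUE)(FALSE)))   # False
print(to_bool(OR(TRUE)(FALSE)))    # True
print(to_bool(NOT(FALSE)))         # True
```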
A predicate is a function that returns a Boolean value. The most fundamental predicate is ISZERO, which returns TRUE if its argument is the Church numeral 0, but FALSE if its argument were any other Church numeral:
ISZERO := λn.n (λx.FALSE) TRUE
The following predicate tests whether the first argument is less-than-or-equal-to the second:
LEQ := λm.λn.ISZERO (SUB m n),
and since m = n if LEQ m n and LEQ n m, it is straightforward to build a predicate for numerical equality.
The availability of predicates and the above definition of TRUE and FALSE make it convenient to write "if-then-else" expressions in lambda calculus. For example, the predecessor function can be defined as:
PRED := λn.n (λg.λk.ISZERO (g 1) k (PLUS (g k) 1)) (λv.0)
which can be verified by showing inductively thatn(λg.λk.ISZERO (g1)k(PLUS (gk) 1)) (λv.0)is the addn− 1 function forn> 0.
A pair (2-tuple) can be defined in terms of TRUE and FALSE, by using the Church encoding for pairs. For example, PAIR encapsulates the pair (x, y), FIRST returns the first element of the pair, and SECOND returns the second:
PAIR := λx.λy.λf.f x y
FIRST := λp.p TRUE
SECOND := λp.p FALSE
A linked list can be defined as either NIL for the empty list, or thePAIRof an element and a smaller list. The predicateNULLtests for the valueNIL. (Alternatively, withNIL := FALSE, the constructl(λh.λt.λz.deal_with_head_h_and_tail_t) (deal_with_nil)obviates the need for an explicit NULL test).
As an example of the use of pairs, the shift-and-increment function that maps (m, n) to (n, n + 1) can be defined as
Φ := λx.PAIR (SECOND x) (SUCC (SECOND x))
which allows us to give perhaps the most transparent version of the predecessor function:
PRED := λn.FIRST (n Φ (PAIR 0 0))
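A minimal Python sketch of the pair encoding and this pair-based predecessor (the helpers repeat the encodings above):

```python
TRUE  = lambda x: lambda y: x
FALSE = lambda x: lambda y: y

PAIR   = lambda x: lambda y: lambda f: f(x)(y)
FIRST  = lambda p: p(TRUE)
SECOND = lambda p: p(FALSE)

ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))

# Shift-and-increment: (m, n) -> (n, n + 1)
PHI = lambda p: PAIR(SECOND(p))(SUCC(SECOND(p)))

# PRED n: apply PHI n times to (0, 0), then take the first component.
PRED = lambda n: FIRST(n(PHI)(PAIR(ZERO)(ZERO)))

to_int = lambda n: n(lambda k: k + 1)(0)
THREE = SUCC(SUCC(SUCC(ZERO)))
print(to_int(PRED(THREE)))   # 2
```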
There is a considerable body ofprogramming idiomsfor lambda calculus. Many of these were originally developed in the context of using lambda calculus as a foundation forprogramming language semantics, effectively using lambda calculus as alow-level programming language. Because several programming languages include the lambda calculus (or something very similar) as a fragment, these techniques also see use in practical programming, but may then be perceived as obscure or foreign.
In lambda calculus, a library would take the form of a collection of previously defined functions, which as lambda-terms are merely particular constants. The pure lambda calculus does not have a concept of named constants since all atomic lambda-terms are variables, but one can emulate having named constants by setting aside a variable as the name of the constant, using abstraction to bind that variable in the main body, and applying that abstraction to the intended definition. Thus to use f to mean N (some explicit lambda-term) in M (another lambda-term, the "main program"), one can say
(λf.M) N
Authors often introduce syntactic sugar, such as let,[k] to permit writing the above in the more intuitive order
let f = N in M
By chaining such definitions, one can write a lambda calculus "program" as zero or more function definitions, followed by one lambda-term using those functions that constitutes the main body of the program.
A notable restriction of thisletis that the namefmay not be referenced inN, forNis outside the scope of the abstraction bindingf, which isM; this means a recursive function definition cannot be written withlet. Theletrec[l]construction would allow writing recursive function definitions, where the scope of the abstraction bindingfincludesNas well asM. Or self-application a-la that which leads toYcombinator could be used.
Recursionis when a function invokes itself. What would a value be which were to represent such a function? It has to refer to itself somehow inside itself, just as the definition refers to itself inside itself. If this value were to contain itself by value, it would have to be of infinite size, which is impossible. Other notations, which support recursion natively, overcome this by referring to the functionby nameinside its definition. Lambda calculus cannot express this, since in it there simply are no names for terms to begin with, only arguments' names, i.e. parameters in abstractions. Thus, a lambda expression can receive itself as its argument and refer to (a copy of) itself via the corresponding parameter's name. This will work fine in case it was indeed called with itself as an argument. For example,(λx.xx)E= (E E)will express recursion whenEis an abstraction which is applying its parameter to itself inside its body to express a recursive call. Since this parameter receivesEas its value, its self-application will be the same(E E)again.
As a concrete example, consider the factorial function F(n), recursively defined by
F(n) = 1, if n = 0; else n × F(n − 1).
In the lambda expression which is to represent this function, a parameter (typically the first one) will be assumed to receive the lambda expression itself as its value, so that calling it with itself as its first argument will amount to the recursive call. Thus to achieve recursion, the intended-as-self-referencing argument (called s here, reminiscent of "self", or "self-applying") must always be passed to itself within the function body at a recursive call point:
E := λs.λn.(1, if n = 0; else n × (s s (n − 1)))
and we have
F := E E
Heres sbecomesthe same(E E)inside the result of the application(E E), and using the same function for a call is the definition of what recursion is. The self-application achieves replication here, passing the function's lambda expression on to the next invocation as an argument value, making it available to be referenced there by the parameter namesto be called via the self-applicationss, again and again as needed, each timere-creatingthe lambda-termF = E E.
The application is an additional step just as the name lookup would be. It has the same delaying effect. Instead of havingFinside itself as a wholeup-front, delaying its re-creation until the next call makes its existence possible by having twofinitelambda-termsEinside it re-create it on the flylateras needed.
This self-applicational approach solves it, but requires re-writing each recursive call as a self-application. We would like to have a generic solution, without the need for any re-writes:
G := λr.λn.(1, if n = 0; else n × (r (n − 1)))
F := FIX G
Given a lambda term with first argument representing recursive call (e.g.Ghere), thefixed-pointcombinatorFIXwill return a self-replicating lambda expression representing the recursive function (here,F). The function does not need to be explicitly passed to itself at any point, for the self-replication is arranged in advance, when it is created, to be done each time it is called. Thus the original lambda expression(FIX G)is re-created inside itself, at call-point, achievingself-reference.
In fact, there are many possible definitions for this FIX operator, the simplest of them being:
Y := λg.(λx.g (x x)) (λx.g (x x))
In the lambda calculus, Y g is a fixed-point of g, as it expands to:
Y g
(λx.g (x x)) (λx.g (x x))
g ((λx.g (x x)) (λx.g (x x)))
g (Y g)
Now, to perform the recursive call to the factorial function for an argument n, we would simply call (Y G) n. Given n = 4, for example, this gives:
(Y G) 4 → G (Y G) 4 → 4 × ((Y G) 3) → ... → 4 × 3 × 2 × 1 × 1 = 24.
Every recursively defined function can be seen as a fixed point of some suitably defined higher order function (also known as functional) closing over the recursive call with an extra argument. Therefore, usingY, every recursive function can be expressed as a lambda expression. In particular, we can now cleanly define the subtraction, multiplication, and comparison predicates of natural numbers, using recursion.
When the Y combinator is coded directly in a strict programming language, the applicative order of evaluation used in such languages will cause an attempt to fully expand the internal self-application (x x) prematurely, causing stack overflow or, in the case of tail call optimization, indefinite looping.[27] A delayed variant of Y, the Z combinator, can be used in such languages. It has the internal self-application hidden behind an extra abstraction through eta-expansion, as (λv.x x v), thus preventing its premature expansion:[28]
Z := λg.(λx.g (λv.x x v)) (λx.g (λv.x x v))
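Python is such a strict language, and the Z combinator indeed runs there where the plain Y would overflow; a minimal sketch, using native arithmetic and conditionals in the body for readability:

```python
# Z = lambda g. (lambda x. g(lambda v. x x v))(lambda x. g(lambda v. x x v))
Z = lambda g: (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))

# G takes "the recursive call" r as its first argument.
G = lambda r: lambda n: 1 if n == 0 else n * r(n - 1)

factorial = Z(G)
print(factorial(4))   # 24
```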
Certain terms have commonly accepted names:[29][30][31]
I := λx.x
S := λx.λy.λz.x z (y z)
K := λx.λy.x
B := λx.λy.λz.x (y z)
C := λx.λy.λz.x z y
W := λx.λy.x y y
U := λx.x x
Ω := U U
Iis the identity function.SKandBCKWform completecombinator calculussystems that can express any lambda term - seethe next section.ΩisUU, the smallest term that has no normal form.YIis another such term.Yis standard and definedabove, and can also be defined asY=BU(CBU), so thatYg=g(Yg).TRUEandFALSEdefinedaboveare commonly abbreviated asTandF.
If N is a lambda-term without abstraction, but possibly containing named constants (combinators), then there exists a lambda-term T(x, N) which is equivalent to λx.N but lacks abstraction (except as part of the named constants, if these are considered non-atomic). This can also be viewed as anonymising variables, as T(x, N) removes all occurrences of x from N, while still allowing argument values to be substituted into the positions where N contains an x. The conversion function T can be defined by:
T(x, x) := I
T(x, N) := K N, if x does not occur free in N
T(x, M N) := S T(x, M) T(x, N)
In either case, a term of the formT(x,N)Pcan reduce by having the initial combinatorI,K, orSgrab the argumentP, just like β-reduction of(λx.N)Pwould do.Ireturns that argument.Kthrows the argument away, just like(λx.N)would do ifxhas no free occurrence inN.Spasses the argument on to both subterms of the application, and then applies the result of the first to the result of the second.
The combinatorsBandCare similar toS, but pass the argument on to only one subterm of an application (Bto the "argument" subterm andCto the "function" subterm), thus saving a subsequentKif there is no occurrence ofxin one subterm. In comparison toBandC, theScombinator actually conflates two functionalities: rearranging arguments, and duplicating an argument so that it may be used in two places. TheWcombinator does only the latter, yielding theB, C, K, W systemas an alternative toSKI combinator calculus.
Atyped lambda calculusis a typedformalismthat uses the lambda-symbol (λ{\displaystyle \lambda }) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (seeKinds of typed lambda calculi). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus but from another point of view, they can also be considered the more fundamental theory anduntyped lambda calculusa special case with only one type.[32]
Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typedimperative programminglanguages. Typed lambda calculi play an important role in the design oftype systemsfor programming languages; here typability usually captures desirable properties of the program, e.g., the program will not cause a memory access violation.
Typed lambda calculi are closely related tomathematical logicandproof theoryvia theCurry–Howard isomorphismand they can be considered as theinternal languageof classes ofcategories, e.g., the simply typed lambda calculus is the language of aCartesian closed category(CCC).
Whether a term is normalising or not, and how much work needs to be done in normalising it if it is, depends to a large extent on the reduction strategy used. Common lambda calculus reduction strategies include:[33][34][35]
Normal order: the leftmost, outermost redex is always reduced first; in particular, arguments are substituted into the body of an abstraction before the arguments themselves are reduced.
Applicative order: the leftmost, innermost redex is always reduced first; intuitively, the arguments of a function are always reduced before the function itself.
Weak reduction strategies do not reduce under lambda abstractions:
Call by name: as normal order, but no reductions are performed inside abstractions.
Call by value: only the outermost redexes are reduced, and a redex is reduced only when its right-hand side has reduced to a value (a variable or an abstraction).
Strategies with sharing reduce computations that are "the same" in parallel:
Call by need: as call by name, but function applications that would duplicate terms instead name the argument, which is then reduced only when it is needed.
Optimal reduction: as normal order, but computations that have the same label are reduced simultaneously.
There is no algorithm that takes as input any two lambda expressions and outputsTRUEorFALSEdepending on whether one expression reduces to the other.[13]More precisely, nocomputable functioncandecidethe question. This was historically the first problem for which undecidability could be proven. As usual for such a proof,computablemeans computable by anymodel of computationthat isTuring complete. In fact computability can itself be defined via the lambda calculus: a functionF:N→Nof natural numbers is a computable function if and only if there exists a lambda expressionfsuch that for every pair ofx,yinN,F(x)=yif and only iffx=βy, wherexandyare theChurch numeralscorresponding toxandy, respectively and =βmeaning equivalence with β-reduction. See theChurch–Turing thesisfor other approaches to defining computability and their equivalence.
Church's proof of uncomputability first reduces the problem to determining whether a given lambda expression has anormal form. Then he assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene and constructing aGödel numberingfor lambda expressions, he constructs a lambda expressionethat closely follows the proof ofGödel's first incompleteness theorem. Ifeis applied to its own Gödel number, a contradiction results.
The notion ofcomputational complexityfor the lambda calculus is a bit tricky, because the cost of a β-reduction may vary depending on how it is implemented.[36]To be precise, one must somehow find the location of all of the occurrences of the bound variableVin the expressionE, implying a time cost, or one must keep track of the locations of free variables in some way, implying a space cost. A naïve search for the locations ofVinEisO(n)in the lengthnofE.Director stringswere an early approach that traded this time cost for a quadratic space usage.[37]More generally this has led to the study of systems that useexplicit substitution.
In 2014, it was shown that the number of β-reduction steps taken by normal order reduction to reduce a term is areasonabletime cost model, that is, the reduction can be simulated on a Turing machine in time polynomially proportional to the number of steps.[38]This was a long-standing open problem, due tosize explosion, the existence of lambda terms which grow exponentially in size for each β-reduction. The result gets around this by working with a compact shared representation. The result makes clear that the amount of space needed to evaluate a lambda term is not proportional to the size of the term during reduction. It is not currently known what a good measure of space complexity would be.[39]
An unreasonable model does not necessarily mean inefficient.Optimal reductionreduces all computations with the same label in one step, avoiding duplicated work, but the number of parallel β-reduction steps to reduce a given term to normal form is approximately linear in the size of the term. This is far too small to be a reasonable cost measure, as any Turing machine may be encoded in the lambda calculus in size linearly proportional to the size of the Turing machine. The true cost of reducing lambda terms is not due to β-reduction per se but rather the handling of the duplication of redexes during β-reduction.[40]It is not known if optimal reduction implementations are reasonable when measured with respect to a reasonable cost model such as the number of leftmost-outermost steps to normal form, but it has been shown for fragments of the lambda calculus that the optimal reduction algorithm is efficient and has at most a quadratic overhead compared to leftmost-outermost.[39]In addition the BOHM prototype implementation of optimal reduction outperformed bothCamlLight and Haskell on pure lambda terms.[40]
As pointed out byPeter Landin's 1965 paper "A Correspondence betweenALGOL 60and Church's Lambda-notation",[41]sequentialprocedural programminglanguages can be understood in terms of the lambda calculus, which provides the basic mechanisms for procedural abstraction and procedure (subprogram) application.
For example, in Python the "square" function can be expressed as a lambda expression as follows:
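A minimal form consistent with the description that follows:

```python
lambda x: x**2          # an anonymous function: parameter x, body x**2
(lambda x: x**2)(5)     # applying it to 5 evaluates to 25
```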
The above example is an expression that evaluates to a first-class function. The symbol lambda creates an anonymous function, given a list of parameter names (x, just a single argument in this case) and an expression that is evaluated as the body of the function, x**2. Anonymous functions are sometimes called lambda expressions.
For example,Pascaland many other imperative languages have long supported passingsubprogramsasargumentsto other subprograms through the mechanism offunction pointers. However, function pointers are an insufficient condition for functions to befirst classdatatypes, because a function is a first class datatype if and only if new instances of the function can be created atruntime. Such runtime creation of functions is supported inSmalltalk,JavaScript,Wolfram Language, and more recently inScala,Eiffel(as agents),C#(as delegates) andC++11, among others.
TheChurch–Rosserproperty of the lambda calculus means that evaluation (β-reduction) can be carried out inany order, even in parallel. This means that variousnondeterministicevaluation strategiesare relevant. However, the lambda calculus does not offer any explicit constructs forparallelism. One can add constructs such asfuturesto the lambda calculus. Otherprocess calculihave been developed for describing communication and concurrency.
The fact that lambda calculus terms act as functions on other lambda calculus terms, and even on themselves, led to questions about the semantics of the lambda calculus. Could a sensible meaning be assigned to lambda calculus terms? The natural semantics was to find a set D isomorphic to the function space D → D, of functions on itself. However, no nontrivial such D can exist, by cardinality constraints, because the set of all functions from D to D has greater cardinality than D, unless D is a singleton set.

In the 1970s, Dana Scott showed that if only continuous functions were considered, a set or domain D with the required property could be found, thus providing a model for the lambda calculus.[42]

This work also formed the basis for the denotational semantics of programming languages.
Extensions and variants of the lambda calculus include the formal systems of the lambda cube, formal systems that extend the lambda calculus but lie outside the lambda cube, variations of the lambda calculus, and related formal systems.
Some parts of this article are based on material from FOLDOC, used with permission.
|
https://en.wikipedia.org/wiki/Lambda_calculus
|
The nearest neighbour algorithm was one of the first algorithms used to solve the travelling salesman problem approximately. In that problem, the salesman starts at a random city and repeatedly visits the nearest city until all have been visited. The algorithm quickly yields a short tour, but usually not the optimal one.

These are the steps of the algorithm:

1. Initialize all vertices as unvisited.
2. Select an arbitrary vertex, set it as the current vertex u, and mark it as visited.
3. Find the shortest edge connecting the current vertex u to an unvisited vertex v.
4. Set v as the current vertex u and mark v as visited.
5. If all the vertices are visited, terminate; otherwise, go to step 3.
The sequence of the visited vertices is the output of the algorithm.
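A minimal Python sketch of these steps, assuming the instance is given as a symmetric distance matrix (the function name and data layout are illustrative, not from the source):

def nearest_neighbour_tour(dist, start=0):
    """Greedy nearest-neighbour tour over a symmetric distance matrix."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    current = start
    while unvisited:
        # Step 3: choose the closest unvisited vertex.
        nearest = min(unvisited, key=lambda v: dist[current][v])
        unvisited.remove(nearest)
        tour.append(nearest)
        current = nearest
    return tour  # the sequence of visited vertices

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(nearest_neighbour_tour(dist))  # [0, 1, 3, 2]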
The nearest neighbour algorithm is easy to implement and executes quickly, but it can sometimes miss shorter routes which are easily noticed with human insight, due to its "greedy" nature. As a general guide, if the last few stages of the tour are comparable in length to the first stages, then the tour is reasonable; if they are much greater, then it is likely that much better tours exist. Another check is to use an algorithm such as the lower bound algorithm to estimate if this tour is good enough.

In the worst case, the algorithm results in a tour that is much longer than the optimal tour. To be precise, for every constant r there is an instance of the traveling salesman problem such that the length of the tour computed by the nearest neighbour algorithm is greater than r times the length of the optimal tour. Moreover, for each number of cities there is an assignment of distances between the cities for which the nearest neighbour heuristic produces the unique worst possible tour. (If the algorithm is applied on every vertex as the starting vertex, the best path found will be better than at least N/2 − 1 other tours, where N is the number of vertices.)[1]
The nearest neighbour algorithm may not find a feasible tour at all, even when one exists.
|
https://en.wikipedia.org/wiki/Nearest_neighbour_algorithm
|
Keno /kiːnoʊ/ is a lottery-like gambling game often played at modern casinos, and also offered as a game in some lotteries.

Players wager by choosing numbers ranging from 1 through (usually) 80. After all players make their wagers, 20 numbers (some variants draw fewer numbers) are drawn at random, either with a ball machine similar to ones used for lotteries and bingo, or with a random number generator.
Each casino sets its own series of payouts, called "paytables". The player is paid based on how many numbers were chosen (either player selection, or the terminal picking the numbers), the number of matches out of those chosen, and the wager.
There are a wide variety of keno paytables depending on the casino, usually with a larger "house edge" than other games, ranging from less than 4 percent[1] to over 35 percent[2] in online play, and 20–40 percent in in-person casinos.[3] By way of comparison, the typical house edge for non-slot casino games is under 5%.[4]
The word "keno" hasFrenchorLatinroots (Fr.quine"five winning numbers", L.quini"five each"), but by all accounts the game originated in China. Legend has it thatZhang Lianginvented the game during theChu-Han Contentionto raise money to defend an ancient city, and its widespread popularity later helped raise funds to build theGreat Wall of China. In modern China, the idea of usinglotteriesto fund a public institution was not accepted before the late 19th century.[5]
Chinese lottery is not documented before 1847, when the Portuguese government ofMacaodecided to grant a licence to lottery operators. According to some, results of keno games in great cities were sent to outlying villages and hamlets bycarrier pigeons, resulting in its Chinese name 白鸽票báigē piào, with the literal reading "white dove tickets" in Mandarin, but in Southern varieties of Chinese spoken inGuangdongsimply meaning "pigeon tickets",[6]and pronouncedbaak6-gaap3-piu3inCantonese(on which the Western spelling 'pak-ah-pu' / 'pakapoo' was based).
The Chinese played the game using sheets printed withChinese characters, often the first 80 characters of theThousand Character Classic, from which the winning characters were selected.[7][8]Eventually, Chinese immigrants introduced keno to the West when they sailed across the Pacific Ocean to work on construction of theFirst transcontinental railroadin the 19th century,[9]where the name was Westernized intoboc hop bu[8]andpuck-apu.[7]There were also other, earlier games called Keno, but these were played in the same way as the game now known as "Bingo", not the modern game of Keno.[citation needed]
Keno payouts are based on how many numbers the player chooses and how many of those numbers are "hit", multiplied by the proportion of the player's original wager to the "base rate" of the paytable. Typically, the more numbers a player chooses and the more numbers hit, the greater the payout, although some paytables pay for hitting a lesser number of spots. For example, it is not uncommon to see casinos paying $500 or even $1,000 for a “catch” of 0 out of 20 on a 20 spot ticket with a $5.00 wager. Payouts vary widely by casino. Most casinos allow paytable wagers of 1 through 20 numbers, but some limit the choice to only 1 through 10, 12 and 15 numbers, or "spots" as keno aficionados call the numbers selected.[10]
The probability of a player hitting all 20 numbers on a 20 spot ticket is approximately 1 in 3.5 quintillion (1 in 3,535,316,142,212,174,320).[11]

Even though it is highly improbable to hit all 20 numbers on a 20 spot ticket, the same player would typically also get paid for hitting "catches" 0, 1, 2, 3, and 7 through 19 out of 20, often with the 17 through 19 catches paying the same as the solid 20 hit. Some of the other paying "catches" on a 20 spot ticket, or on any other ticket with long "solid catch" odds, are in reality quite achievable.
Probabilities change significantly based on the number of spots and numbers that are picked on each ticket.
Keno probabilities come from a hypergeometric distribution.[12][13] For Keno, one calculates the probability of hitting exactly r spots on an n-spot ticket (with 20 numbers drawn from a pool of 80) by the formula:

P(X = r) = \frac{\binom{n}{r} \binom{80 - n}{20 - r}}{\binom{80}{20}}

To calculate the probability of hitting 4 spots on a 6-spot ticket, the formula is:

P(X = 4) = \frac{\binom{6}{4} \binom{74}{16}}{\binom{80}{20}} \approx 0.0285,

where \binom{n}{r} is calculated as \frac{n!}{r!\,(n - r)!}, where X! is notation for X factorial. Spreadsheets have the function COMBIN(n, r) to calculate \binom{n}{r}.
To calculate "odds-to-1", divide the probability into 1.0 and subtract 1 from the result.
|
https://en.wikipedia.org/wiki/Keno
|
Minimum evolution is a distance method employed in phylogenetics modeling. It shares with maximum parsimony the aspect of searching for the phylogeny that has the shortest total sum of branch lengths.[1][2]

The theoretical foundations of the minimum evolution (ME) criterion lay in the seminal works of both Kidd and Sgaramella-Zonta (1971)[3] and Rzhetsky and Nei (1993).[4] In these frameworks, the molecular sequences from taxa are replaced by a set of measures of their dissimilarity (i.e., the so-called "evolutionary distances"), and a fundamental result states that if such distances were unbiased estimates of the true evolutionary distances from taxa (i.e., the distances that one would obtain if all the molecular data from taxa were available), then the true phylogeny of taxa would have an expected length shorter than any other possible phylogeny T compatible with those distances.

It is worth noting here a subtle difference between the maximum-parsimony criterion and the ME criterion: while maximum-parsimony is based on an abductive heuristic, i.e., the plausibility of the simplest evolutionary hypothesis of taxa with respect to the more complex ones, the ME criterion is based on Kidd and Sgaramella-Zonta's conjectures, which were proven true 22 years later by Rzhetsky and Nei.[4] These mathematical results set the ME criterion free from the Occam's razor principle and confer on it a solid theoretical and quantitative basis.

Similarly to ME, maximum parsimony becomes an NP-hard problem when trying to find the optimal tree[5] (that is, the one with the least total character-state changes). This is why heuristics are often utilized in order to select a tree, though this does not guarantee the tree will be an optimal selection for the input dataset. This method is often used when very similar sequences are analyzed, as part of the process is locating informative sites in the sequences where a notable number of substitutions can be found.[6]

The maximum-parsimony criterion, which uses Hamming distance branch lengths, was shown to be statistically inconsistent in 1978. This led to an interest in statistically consistent alternatives such as ME.[7]
Neighbor joining may be viewed as a greedy heuristic for the balanced minimum evolution (BME) criterion. Saitou and Nei's 1987 NJ algorithm far predates the BME criterion of 2000. For two decades, researchers used NJ without a firm theoretical basis for why it works.[8]

While neighbor joining shares the same underlying principle of prioritizing minimal evolutionary steps, it differs in that it is a distance method, as opposed to maximum parsimony, which is a character-based method. Distance methods like neighbor joining are often simpler to implement and more efficient, which has led to its popularity for analyzing especially large datasets where computational speed is critical. Neighbor joining is a relatively fast phylogenetic tree-building method, though its worst-case time complexity can still be O(N³) without utilizing heuristic implementations to improve on this.[9] It also considers varying rates of evolution across branches, which many other methods do not account for.
Neighbor joining also is a rather consistent method in that an input distance matrix with little to no errors will often provide an output tree with minimal inaccuracy. However, using simple distance values rather than full sequence information like in maximum parsimony does lend itself to a loss of information due to the simplification of the problem.[10]
Maximum likelihood contrasts with minimum evolution in that it searches for the tree most likely to have produced the data. Due to the nature of the mathematics involved, it is less accurate with smaller datasets but becomes far less biased as the sample size increases, because the error rate falls off as 1/log(n). Minimum evolution is similar, but it is less accurate with very large datasets. It is similarly powerful but overall much more complicated compared to UPGMA and other options.[11]

UPGMA is a clustering method. It builds a collection of clusters that are then further clustered until the maximum potential cluster is obtained, and the relations of the groups are then read back from this hierarchy. It specifically uses an arithmetic mean, enabling a more stable clustering. Overall, while it is less powerful than any of the other listed comparisons, it is far simpler and less complex to apply. Minimum evolution is overall more powerful but also more complicated to set up, and is also NP-hard.[12]
The ME criterion is known to be statistically consistent whenever the branch lengths are estimated via Ordinary Least-Squares (OLS) or via linear programming.[4][13][14] However, as observed in Rzhetsky & Nei's article, the phylogeny having the minimum length under the OLS branch length estimation model may be characterized, in some circumstances, by negative branch lengths, which unfortunately are empty of biological meaning.[4] To solve this drawback, Pauplin[15] proposed to replace OLS with a new particular branch length estimation model, known as balanced minimum evolution (BME). Richard Desper and Olivier Gascuel[16] showed that the BME branch length estimation model ensures the general statistical consistency of the minimum length phylogeny as well as the non-negativity of its branch lengths, whenever the estimated evolutionary distances from taxa satisfy the triangle inequality.

Le Sy Vinh and Arndt von Haeseler[17] have shown, by means of massive and systematic simulation experiments, that the accuracy of the ME criterion under the BME branch length estimation model is by far the highest in distance methods and not inferior to those of alternative criteria based e.g., on Maximum Likelihood or Bayesian Inference. Moreover, as shown by Daniele Catanzaro, Martin Frohn and Raffaele Pesenti,[18] the minimum length phylogeny under the BME branch length estimation model can be interpreted as the (Pareto optimal) consensus tree between concurrent minimum entropy processes encoded by a forest of n phylogenies rooted on the n analyzed taxa. This particular information theory-based interpretation is conjectured to be shared by all distance methods in phylogenetics.

François Denis and Olivier Gascuel[19] proved that the Minimum Evolution principle is not consistent in weighted least squares (WLS) and generalized least squares (GLS). They showed that there was an algorithm, called EDGE_LENGTHS, that could be used in OLS models where all weights are equal. In this algorithm the lengths of two edges, 1u and 2u, can be computed without using the distances δ_ij (i, j ≠ 1, 2). This property does not hold in WLS or GLS models. Without this property, the ME principle is not consistent in the WLS and GLS models.
The "minimum evolution problem" (MEP), in which a minimum-summed-length phylogeny is derived from a set of sequences under the ME criterion, is said to beNP-hard.[20][21]The "balanced minimum evolution problem" (BMEP), which uses the newer BME criterion, isAPX-hard.[20]
A number of exact algorithms solving BMEP have been described.[22][23][24][25]The best known exact algorithm[26]remains impractical for more than a dozen taxa, even with multiprocessing.[20]There is only one approximation algorithm with proven error bounds, published in 2012.
In practical use, BMEP is overwhelmingly implemented byheuristic search. The basic, aforementionedneighbor-joiningalgorithm implements a greedy version of BME.[27]
FastME, the "state-of-the-art",[20]starts with a rough tree then improves it using a set of topological moves such as Nearest Neighbor Interchanges (NNI). Compared to NJ, it is about as fast and more accurate.[28]
FastME operates on the Balanced Minimum Evolution principle, which calculates tree length using a weighted linear function of all pairwise distances. The BME score for a given topology T is expressed as:

L(T) = \sum_{i < j} w_{ij}\, d_{ij}

where d_ij represents the evolutionary distance between taxa i and j, and w_ij is a topology-dependent weight that balances each pair's contribution (in Pauplin's formulation, w_ij = 2^{1 - t_ij}, where t_ij is the number of edges on the tree path between i and j). This approach enables more accurate reconstructions than greedy algorithms like NJ.
The algorithm improves tree topology through local rearrangements, primarily Subtree Prune and Regraft (SPR) and NNI operations. At each step, it checks if a rearranged tree has a lower BME score. If so, the change is retained. This iterative refinement enables FastME to converge toward near-optimal solutions efficiently, even for large datasets.
Simplified pseudocode of FastME:
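A Python-style sketch of the refinement loop described above (all helper names are illustrative, not FastME's actual API):

def fastme_sketch(D):
    # Start from a rough topology, e.g. greedy NJ/BME taxon addition.
    tree = initial_tree(D)                     # hypothetical helper
    improved = True
    while improved:
        improved = False
        for move in nni_moves(tree):           # candidate NNI rearrangements
            candidate = apply_move(tree, move) # hypothetical helper
            # Keep a rearrangement only if it lowers the BME score.
            if bme_score(candidate, D) < bme_score(tree, D):
                tree = candidate
                improved = True
                break                          # rescan from the improved tree
    return tree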
Simulations reported by Desper and Gascuel demonstrate that FastME consistently outperforms NJ in terms of topological accuracy, particularly when evolutionary rates vary or distances deviate from strict additivity. It has also been successfully used on datasets with over 1,000 taxa.[29]
Like most distance-based methods, BME assumes that the input distances are additive. When this assumption does not hold (due to noise, unequal rates, or other violations), the resulting trees may still be close to optimal, but accuracy can be affected. In addition to FastME, metaheuristic methods such as genetic algorithms and simulated annealing have also been used to explore tree topologies under the minimum evolution criterion, particularly for very large datasets where traditional heuristics may struggle.[30]
|
https://en.wikipedia.org/wiki/Minimum_Evolution
|
The phase-space formulation is a formulation of quantum mechanics that places the position and momentum variables on equal footing in phase space. The two key features of the phase-space formulation are that the quantum state is described by a quasiprobability distribution (instead of a wave function, state vector, or density matrix) and operator multiplication is replaced by a star product.

The theory was fully developed by Hilbrand Groenewold in 1946 in his PhD thesis,[1] and independently by Joe Moyal,[2] each building on earlier ideas by Hermann Weyl[3] and Eugene Wigner.[4]

In contrast to the phase-space formulation, the Schrödinger picture uses the position or momentum representations (see also position and momentum space).
The chief advantage of the phase-space formulation is that it makes quantum mechanics appear as similar to Hamiltonian mechanics as possible by avoiding the operator formalism, thereby "'freeing' the quantization of the 'burden' of the Hilbert space".[5] This formulation is statistical in nature and offers logical connections between quantum mechanics and classical statistical mechanics, enabling a natural comparison between the two (see classical limit). Quantum mechanics in phase space is often favored in certain quantum optics applications (see optical phase space), or in the study of decoherence and a range of specialized technical problems, though otherwise the formalism is less commonly employed in practical situations.[6]

The conceptual ideas underlying the development of quantum mechanics in phase space have branched into mathematical offshoots such as Kontsevich's deformation quantization (see Kontsevich quantization formula) and noncommutative geometry.
The phase-space distribution f(x, p) of a quantum state is a quasiprobability distribution. In the phase-space formulation, the phase-space distribution may be treated as the fundamental, primitive description of the quantum system, without any reference to wave functions or density matrices.[7]

There are several different ways to represent the distribution, all interrelated.[8][9] The most noteworthy is the Wigner representation, W(x, p), discovered first.[4] Other representations (in approximately descending order of prevalence in the literature) include the Glauber–Sudarshan P,[10][11] Husimi Q,[12] Kirkwood–Rihaczek, Mehta, Rivier, and Born–Jordan representations.[13][14] These alternatives are most useful when the Hamiltonian takes a particular form, such as normal order for the Glauber–Sudarshan P representation. Since the Wigner representation is the most common, this article will usually stick to it, unless otherwise specified.
The phase-space distribution possesses properties akin to the probability density in a 2n-dimensional phase space. For example, it is real-valued, unlike the generally complex-valued wave function. We can understand the probability of lying within a position interval, for example, by integrating the Wigner function over all momenta and over the position interval:

P[a \le x \le b] = \int_a^b \int_{-\infty}^{\infty} W(x, p)\, dp\, dx.

If Â(x, p) is an operator representing an observable, it may be mapped to phase space as A(x, p) through the Wigner transform. Conversely, this operator may be recovered by the Weyl transform.

The expectation value of the observable with respect to the phase-space distribution is[2][15]

\langle \hat{A} \rangle = \int A(x, p)\, W(x, p)\, dx\, dp.
A point of caution, however: despite the similarity in appearance, W(x, p) is not a genuine joint probability distribution, because regions under it do not represent mutually exclusive states, as required in the third axiom of probability theory. Moreover, it can, in general, take negative values even for pure states, with the unique exception of (optionally squeezed) coherent states, in violation of the first axiom.

Regions of such negative value are provable to be "small": they cannot extend to compact regions larger than a few ħ, and hence disappear in the classical limit. They are shielded by the uncertainty principle, which does not allow precise localization within phase-space regions smaller than ħ, and thus renders such "negative probabilities" less paradoxical. If the left side of the equation is to be interpreted as an expectation value in the Hilbert space with respect to an operator, then in the context of quantum optics this equation is known as the optical equivalence theorem. (For details on the properties and interpretation of the Wigner function, see its main article.)

An alternative phase-space approach to quantum mechanics seeks to define a wave function (not just a quasiprobability density) on phase space, typically by means of the Segal–Bargmann transform. To be compatible with the uncertainty principle, the phase-space wave function cannot be an arbitrary function, or else it could be localized into an arbitrarily small region of phase space. Rather, the Segal–Bargmann transform is a holomorphic function of x + ip. There is a quasiprobability density associated to the phase-space wave function; it is the Husimi Q representation of the position wave function.
The fundamental noncommutative binary operator in the phase-space formulation that replaces the standard operator multiplication is the star product, represented by the symbol ★.[1] Each representation of the phase-space distribution has a different characteristic star product. For concreteness, we restrict this discussion to the star product relevant to the Wigner–Weyl representation.

For notational convenience, we introduce the notion of left and right derivatives. For a pair of functions f and g, the left and right derivatives are defined as

f \overleftarrow{\partial}_x g = \frac{\partial f}{\partial x} \cdot g \qquad \text{and} \qquad f \overrightarrow{\partial}_x g = f \cdot \frac{\partial g}{\partial x}.

The differential definition of the star product is

f \star g = f\, \exp\!\left(\frac{i\hbar}{2}\left(\overleftarrow{\partial}_x \overrightarrow{\partial}_p - \overleftarrow{\partial}_p \overrightarrow{\partial}_x\right)\right) g,

where the argument of the exponential function can be interpreted as a power series.

Additional differential relations allow this to be written in terms of a change in the arguments of f and g (the "Bopp shifts"):

(f \star g)(x, p) = f\!\left(x + \frac{i\hbar}{2}\overrightarrow{\partial}_p,\; p - \frac{i\hbar}{2}\overrightarrow{\partial}_x\right) g(x, p).
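For polynomials the exponential series terminates, so the differential definition can be checked directly; a sketch with sympy (the truncation order and helper names are ours):

import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def d(f, var, k):
    # k-fold partial derivative (k may be zero)
    for _ in range(k):
        f = sp.diff(f, var)
    return f

def star(f, g, order=8):
    # Moyal star product as a truncated series; exact for polynomials.
    total = sp.Integer(0)
    for n in range(order + 1):
        term = sp.Integer(0)
        for k in range(n + 1):
            term += (sp.binomial(n, k) * (-1)**k
                     * d(d(f, x, n - k), p, k)    # left factor derivatives
                     * d(d(g, p, n - k), x, k))   # right factor derivatives
        total += (sp.I * hbar / 2)**n / sp.factorial(n) * term
    return sp.expand(total)

# The star commutator reproduces the canonical commutator:
print(star(x, p) - star(p, x))   # I*hbar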
It is also possible to define the ★-product in a convolution integral form,[16] essentially through the Fourier transform:

(f \star g)(x, p) = \frac{1}{\pi^2 \hbar^2} \iint f(x + x', p + p')\, g(x + x'', p + p'')\, \exp\!\left(\frac{2i}{\hbar}\left(x' p'' - x'' p'\right)\right) dx'\, dp'\, dx''\, dp''.

(Thus, e.g.,[7] Gaussians compose hyperbolically:

e^{-\frac{a}{\hbar}(x^2 + p^2)} \star e^{-\frac{b}{\hbar}(x^2 + p^2)} = \frac{1}{1 + ab}\, e^{-\frac{a + b}{1 + ab}\frac{1}{\hbar}(x^2 + p^2)},

and so on.)
The energy eigenstate distributions are known as stargenstates, ★-genstates, stargenfunctions, or ★-genfunctions, and the associated energies are known as stargenvalues or ★-genvalues. These are solved, analogously to the time-independent Schrödinger equation, by the ★-genvalue equation,[17][18]

H \star W = E \cdot W,

where H is the Hamiltonian, a plain phase-space function, most often identical to the classical Hamiltonian.
The time evolution of the phase space distribution is given by a quantum modification of Liouville flow.[2][9][19] This formula results from applying the Wigner transformation to the density matrix version of the quantum Liouville equation, the von Neumann equation,

i\hbar \frac{\partial \hat{\rho}}{\partial t} = [\hat{H}, \hat{\rho}].

In any representation of the phase space distribution with its associated star product, this is

\frac{\partial f(x, p, t)}{\partial t} = \frac{1}{i\hbar}\left(H \star f - f \star H\right),

or, for the Wigner function in particular,

\frac{\partial W(x, p, t)}{\partial t} = \{\{H, W\}\} = \{H, W\} + O(\hbar^2),

where {{ , }} is the Moyal bracket, the Wigner transform of the quantum commutator, while { , } is the classical Poisson bracket.[2]
This yields a concise illustration of the correspondence principle: this equation manifestly reduces to the classical Liouville equation in the limit ħ → 0. In the quantum extension of the flow, however, the density of points in phase space is not conserved; the probability fluid appears "diffusive" and compressible.[2] The concept of quantum trajectory is therefore a delicate issue here.[20] (The nonlocality of quantum phase flow is apparent, for example, for the Morse potential discussed below.)

N.B. Given the restrictions placed by the uncertainty principle on localization, Niels Bohr vigorously denied the physical existence of such trajectories on the microscopic scale. By means of formal phase-space trajectories, the time evolution problem of the Wigner function can be rigorously solved using the path-integral method[21] and the method of quantum characteristics,[22] although there are severe practical obstacles in both cases.
The Hamiltonian for the simple harmonic oscillator in one spatial dimension in the Wigner–Weyl representation is

H = \frac{p^2}{2m} + \frac{m\omega^2}{2} x^2.

The ★-genvalue equation for the static Wigner function then reads

\left(\frac{m\omega^2}{2}\left(x + \frac{i\hbar}{2}\overrightarrow{\partial}_p\right)^2 + \frac{1}{2m}\left(p - \frac{i\hbar}{2}\overrightarrow{\partial}_x\right)^2\right) W(x, p) = E\, W(x, p).

Consider, first, the imaginary part of the ★-genvalue equation,

\left(m\omega^2 x\, \partial_p - \frac{p}{m}\, \partial_x\right) W(x, p) = 0.

This implies that one may write the ★-genstates as functions of a single argument:

W(x, p) = F(u), \qquad u \equiv \frac{4H}{\hbar\omega} = \frac{2}{\hbar}\left(m\omega\, x^2 + \frac{p^2}{m\omega}\right).

With this change of variables, it is possible to write the real part of the ★-genvalue equation in the form of a modified Laguerre equation (not Hermite's equation!), the solution of which involves the Laguerre polynomials as[18]

F_n(u) = \frac{(-1)^n}{\pi\hbar}\, e^{-u/2}\, L_n(u),

introduced by Groenewold,[1] with associated ★-genvalues

E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \ldots
For the harmonic oscillator, the time evolution of an arbitrary Wigner distribution is simple. An initial W(x, p; t = 0) = F(u) evolves by the above evolution equation driven by the oscillator Hamiltonian given, by simply rigidly rotating in phase space,[1]

W(x, p; t) = W\!\left(x \cos \omega t - \frac{p}{m\omega} \sin \omega t,\; p \cos \omega t + m\omega\, x \sin \omega t;\; 0\right).

Typically, a "bump" (or coherent state) of energy E ≫ ħω can represent a macroscopic quantity and appear like a classical object rotating uniformly in phase space, a plain mechanical oscillator. Integrating over all phases (starting positions at t = 0) of such objects, a continuous "palisade", yields a time-independent configuration similar to the above static ★-genstates F(u), an intuitive visualization of the classical limit for large-action systems.[6]

The eigenfunctions can also be characterized by being rotationally symmetric (thus time-invariant) pure states. That is, they are functions of the form W(x, p) = f(\sqrt{x^2 + p^2}) that satisfy W \star W = (2\pi\hbar)^{-1} W.
Suppose a particle is initially in a minimally uncertain Gaussian state, with the expectation values of position and momentum both centered at the origin in phase space. For a free particle the Moyal evolution reduces to the classical Liouville flow, so the Wigner function for such a state propagating freely is (in one common parameterization, consistent with the definitions of α and τ below)

W(x, p; t) = \frac{1}{\pi\hbar}\, \exp\!\left(-\alpha^2 \left(x - \frac{pt}{m}\right)^2 - \frac{p^2}{\alpha^2 \hbar^2}\right),

where α is a parameter describing the initial width of the Gaussian, and τ = m/(α²ħ).

Initially, the position and momenta are uncorrelated. Thus, in 3 dimensions, we expect the position and momentum vectors to be twice as likely to be perpendicular to each other as parallel.

However, the position and momentum become increasingly correlated as the state evolves, because portions of the distribution farther from the origin in position require a larger momentum to be reached: asymptotically, the first term of the exponent can be rewritten as −(α²t²/m²)(p − mx/t)², so that for t ≫ τ the distribution concentrates along the classical correlation p ≈ mx/t.

(This relative "squeezing" reflects the spreading of the free wave packet in coordinate space.)

Indeed, it is possible to show that the kinetic energy of the particle becomes asymptotically radial only, in agreement with the standard quantum-mechanical notion of the ground-state nonzero angular momentum specifying orientation independence.[24]
The Morse potential is used to approximate the vibrational structure of a diatomic molecule.

Tunneling is a hallmark quantum effect in which a quantum particle that lacks sufficient energy to pass over a barrier nevertheless passes through it. This effect does not exist in classical mechanics.
|
https://en.wikipedia.org/wiki/Phase_space_formulation
|
A code name, codename, call sign, or cryptonym is a code word or name used, sometimes clandestinely, to refer to another name, word, project, or person. Code names are often used for military purposes, or in espionage. They may also be used in industrial counter-espionage to protect secret projects and the like from business rivals, or to give names to projects whose marketing name has not yet been determined. Another reason for the use of names and phrases in the military is that they transmit with a lower level of cumulative errors over a walkie-talkie or radio link than actual names.
The Achaemenid Empire under Darius I employed a network of spies called the King’s Eye or the King’s Ear.[1][2] These agents operated under anonymity, and “King’s Eye” was not a specific person but rather a code name for the intelligence network that reported directly to the king.[2]

The Carthaginian general Hannibal Barca reportedly used coded references for his agents and informants in Rome and among allied territories.[3] Some sources suggest that key figures in his intelligence operations were identified using nicknames instead of real names to avoid detection by Roman counterintelligence.[3]

Julius Caesar used ciphers to encode messages and likely employed code names for key operatives.[4] His famous Caesar cipher (simple letter-shifting encryption) was used to disguise military commands.[4] He also referred to Marc Antony and other generals with shortened or altered names in correspondence to prevent interception from revealing strategic plans.[4]
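A minimal sketch of that letter-shifting scheme in Python (the traditional shift of three is assumed):

def caesar(text, shift=3):
    # Shift each letter forward through the 26-letter alphabet.
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)
    return ''.join(out)

print(caesar("ATTACK AT DAWN"))  # DWWDFN DW GDZQ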
During the Jewish revolts against Rome, leaders and messengers used symbolic or misleading names in communications.[5][6] The Dead Sea Scrolls reference figures such as the “Teacher of Righteousness” and the “Wicked Priest,” which may have functioned as code names to obscure real identities.[5][6]

The Byzantine Empire’s intelligence agents, particularly under Emperor Justinian I, operated under codenames or titles rather than real identities.[7] Procopius suggests that spies within the Persian and Gothic courts were assigned allegorical names to protect them from discovery.[7]
During World War II, names common to the Allies referring to nations, cities, geographical features, military units, military operations, diplomatic meetings, places, and individual persons were agreed upon, adapting pre-war naming procedures in use by the governments concerned. In the British case names were administered and controlled by the Inter Services Security Board (ISSB) staffed by the War Office.[8] This procedure was coordinated with the United States when it entered the war. Random lists of names were issued to users in alphabetical blocks of ten words and were selected as required. Words became available for re-use after six months and unused allocations could be reassigned at discretion and according to need. Judicious selection from the available allocation could result in clever meanings and result in an aptronym or backronym, although policy was to select words that had no obviously deducible connection with what they were supposed to be concealing. Those for the major conference meetings had a partial naming sequence referring to devices or instruments which had a number as part of their meaning, e.g., the third meeting was "TRIDENT". Joseph Stalin, whose last name means "man of steel", was given the name "GLYPTIC", meaning "an image carved out of stone".
Ewen Montagu, a British Naval intelligence officer, discloses in Beyond Top Secret Ultra that during World War II, Nazi Germany habitually used ad hoc code names as nicknames which often openly revealed or strongly hinted at their content or function.
Some German code names:
Conversely, Operation Wacht am Rhein (Watch on the Rhine) was deliberately named to suggest the opposite of its purpose – a defensive "watch" as opposed to a massive blitzkrieg operation, just as was Operation Weserübung (Weser-exercise), which signified the plans to invade Norway and Denmark in April 1940.

Britain and the United States developed the security policy of assigning code names intended to give no such clues to the uninitiated. For example, the British counter measures against the V-2 were called Operation Crossbow. The atomic bomb project centered in New Mexico was called the Manhattan Project, derived from the Manhattan Engineer District which managed the program. The code name for the American A-12/SR-71 spy plane project, producing the fastest, highest-flying aircraft in the world, was Oxcart. The American group that planned that country's first ICBM was called the Teapot Committee.

Although the word could stand for a menace to shipping (in this case, that of Japan), the American code name for the attack on the subtropical island of Okinawa in World War II was Operation Iceberg. The Soviet Union's project to base missiles in Cuba was named Operation Anadyr after their closest bomber base to the US (just across the Bering Strait from Nome, Alaska). The names of colors are generally avoided in American practice to avoid confusion with meteorological reporting practices. Britain, in contrast, made deliberately non-meaningful use of them, through the system of rainbow codes.
Although German and Italian aircraft were not given code names by their Allied opponents, in 1942, Captain Frank T. McCoy, an intelligence officer of the USAAF, invented a system for the identification of Japanese military aircraft. Initially using short, "hillbilly" boys' names such as "Pete", "Jake", and "Rufe", the system was later extended to include girls' names and names of trees and birds, and became widely used by the Allies throughout the Pacific theater of war. This type of naming scheme differs from the other use of code names in that it does not have to be kept secret, but is a means of identification where the official nomenclature is unknown or uncertain.

The policy of recognition reporting names was continued into the Cold War for Soviet, other Warsaw Pact, and Communist Chinese aircraft. Although this was started by the Air Standards Co-ordinating Committee (ASCC) formed by the United States, United Kingdom, Canada, Australia, and New Zealand, it was extended throughout NATO as the NATO reporting name for aircraft, rockets and missiles. These names were considered by the Soviets as being like a nickname given to one's unit by the opponents in a battle. The Soviets did not like the Sukhoi Su-25 getting the code name "Frogfoot".[citation needed] However, some names were appropriate, such as "Condor" for the Antonov An-124, or, most famously, "Fulcrum" for the Mikoyan MiG-29, which had a "pivotal" role in Soviet air-strategy.

Code names were adopted by the following process. Aerial or space reconnaissance would note a new aircraft at a Warsaw Pact airbase. The intelligence units would then assign it a code name consisting of the official abbreviation of the base, then a letter, for example, "Ram-A", signifying an aircraft sighted at Ramenskoye Airport. Missiles were given designations like "TT-5", for the fifth rocket seen at Tyura-Tam. When more information resulted in knowing a bit about what a missile was used for, it would be given a designation like "SS-6", for the sixth surface-to-surface missile design reported. Finally, when either an aircraft or a missile was able to be photographed with a hand-held camera, instead of a reconnaissance aircraft, it was given a name like "Flanker" or "Scud" – always an English word, as international pilots worldwide are required to learn English. The Soviet manufacturer or designation – which may be mistakenly inferred by NATO – has nothing to do with it.

Jet-powered aircraft received two-syllable names like Foxbat, while propeller aircraft were designated with short names like Bull. Fighter names began with an "F", bombers with a "B", cargo aircraft with a "C". Training aircraft and reconnaissance aircraft were grouped under the word "miscellaneous", and received "M". The same convention applies to missiles, with air-launched ground attack missiles beginning with the letter "K" and surface-to-surface missiles (ranging from intercontinental ballistic missiles to antitank rockets) with the letter "S", air-to-air missiles "A", and surface-to-air missiles "G".
Throughout the Second World War, the British allocation practice favored one-word code names (Jubilee, Frankton). That of the Americans favored longer compound words, although the name Overlord was personally chosen by Winston Churchill himself. Many examples of both types can be cited, as can exceptions.

Winston Churchill was particular about the quality of code names. He insisted that code words, especially for dangerous operations, be neither overly grand, nor petty, nor common. One emotional goal he mentions is to never have to report to anyone that their son "was killed in an operation called 'Bunnyhug' or 'Ballyhoo'."[12]

Presently, British forces tend to use one-word names, presumably in keeping with their post-World War II policy of reserving single words for operations and two-word names for exercises. British operation code names are usually randomly generated by a computer and rarely reveal their components or any political implications, unlike the American names (e.g., the 2003 invasion of Iraq was called "Operation Telic" compared to the Americans' "Operation Iraqi Freedom", obviously chosen for propaganda rather than secrecy). Americans prefer two-word names, whereas the Canadians and Australians use either. The French military currently prefer names drawn from nature (such as colors or the names of animals), for instance Opération Daguet ("brocket deer") or Opération Baliste ("triggerfish"). The CIA uses alphabetical prefixes to designate the part of the agency supporting an operation.
In many cases with the United States, the first word of the name has to do with the intent of the program. Programs with "have" as the first word, such as Have Blue for the stealth fighter development, are developmental programs, not meant to produce a production aircraft. Programs that start with Senior, such as Senior Trend for the F-117, are for aircraft in testing meant to enter production.[citation needed]

In the United States, code names are commonly set entirely in upper case.[13] This is not done in other countries; in British documents the code name is set in upper case while "operation" is shortened to "Op.", e.g., "Op. TELIC".

This presents an opportunity for a bit of public relations (Operation Just Cause), or for controversy over the naming choice (Operation Infinite Justice, renamed Operation Enduring Freedom). Computers are now used to aid in the selection. And further, there is a distinction between the secret names during former wars and the published names of recent ones.
A project code name is a code name (usually a single word, short phrase or acronym) which is given to a project being developed by industry, academia, government, and other concerns.

Project code names are typically used for several reasons, ranging from secrecy to convenience of reference before an official name exists.

Different organizations have different policies regarding the use and publication of project code names. Some companies take great pains to never discuss or disclose project code names outside of the company (other than with outside entities who have a need to know, and typically are bound with a non-disclosure agreement). Other companies never use them in official or formal communications, but widely disseminate project code names through informal channels (often in an attempt to create a marketing buzz for the project). Still others (such as Microsoft) discuss code names publicly, and routinely use project code names on beta releases and such, but remove them from final product(s). In the case of Windows 95, the code name "CHICAGO" was left embedded in the INF file structure and remained required through Windows Me. At the other end of the spectrum, Apple includes the project code names for Mac OS X as part of the official name of the final product, a practice that was started in 2002 with Mac OS X v10.2 "Jaguar". Google and the AOSP also used this for their Android operating system until 2013, where the code name was different from the release name.
|
https://en.wikipedia.org/wiki/Code_name
|
In statistics, a zero-inflated model is a statistical model based on a zero-inflated probability distribution, i.e. a distribution that allows for frequent zero-valued observations.

Zero-inflated models are commonly used in the analysis of count data, such as the number of visits a patient makes to the emergency room in one year, or the number of fish caught in one day in one lake.[1] Count data can take values of 0, 1, 2, … (non-negative integer values).[2] Other examples of count data are the number of hits recorded by a Geiger counter in one minute, patient days in the hospital, goals scored in a soccer game,[3] and the number of episodes of hypoglycemia per year for a patient with diabetes.[4]

For statistical analysis, the distribution of the counts is often represented using a Poisson distribution or a negative binomial distribution. Hilbe[3] notes that "Poisson regression is traditionally conceived of as the basic count model upon which a variety of other count models are based." In a Poisson model, "… the random variable y is the count response and parameter λ (lambda) is the mean. Often, λ is also called the rate or intensity parameter… In statistical literature, λ is also expressed as μ (mu) when referring to Poisson and traditional negative binomial models."

In some data, the number of zeros is greater than would be expected using a Poisson distribution or a negative binomial distribution. Data with such an excess of zero counts are described as zero-inflated.[4]

Example histograms of zero-inflated Poisson distributions with mean μ of 5 or 10 and proportion of zero inflation π of 0.2 or 0.5 can be produced with the R program ZeroInflPoiDistPlots.R from Bilder and Laughlin.[1]
As the examples above show, zero-inflated data can arise as a mixture of two distributions. The first distribution generates zeros. The second distribution, which may be a Poisson distribution, a negative binomial distribution or other count distribution, generates counts, some of which may be zeros.[7]

In the statistical literature, different authors may use different names to distinguish zeros from the two distributions. Some authors describe zeros generated by the first (binary) distribution as "structural" and zeros generated by the second (count) distribution as "random".[7] Other authors use the terminology "immune" and "susceptible" for the binary and count zeros, respectively.[1]

One well-known zero-inflated model is Diane Lambert's zero-inflated Poisson model, which concerns a random event containing excess zero-count data in unit time.[8] For example, the number of insurance claims within a population for a certain type of risk would be zero-inflated by those people who have not taken out insurance against the risk and thus are unable to claim. The zero-inflated Poisson (ZIP) model mixes two zero generating processes. The first process generates zeros. The second process is governed by a Poisson distribution that generates counts, some of which may be zero. The mixture distribution is described as follows:

\Pr(Y_i = 0) = \pi + (1 - \pi)\, e^{-\lambda}
\Pr(Y_i = y_i) = (1 - \pi)\, \frac{\lambda^{y_i} e^{-\lambda}}{y_i!}, \qquad y_i = 1, 2, 3, \ldots

where the outcome variable y_i has any non-negative integer value, λ is the expected Poisson count for the i-th individual, and π is the probability of extra zeros.

The mean is (1 − π)λ and the variance is λ(1 − π)(1 + πλ).
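A small numpy simulation of this mixture, checking the two moment formulas above (the parameter values are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
lam, pi, n = 5.0, 0.2, 200_000

# A zero with probability pi, otherwise a Poisson(lam) count
# (which may itself be zero).
structural_zero = rng.random(n) < pi
y = np.where(structural_zero, 0, rng.poisson(lam, n))

print(y.mean(), (1 - pi) * lam)                    # both about 4.0
print(y.var(), lam * (1 - pi) * (1 + pi * lam))    # both about 8.0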
The method of moments estimators are given by[9]

\hat{\lambda} = \frac{s^2 + m^2 - m}{m}, \qquad \hat{\pi} = \frac{s^2 - m}{s^2 + m^2 - m},

where m is the sample mean and s² is the sample variance.
The maximum likelihood estimator[10] can be found by solving the following equation:

\frac{\hat{\lambda}}{1 - e^{-\hat{\lambda}}} = \frac{m}{1 - n_0/n},

where n₀/n is the observed proportion of zeros.

A closed form solution of this equation is given by[11]

\hat{\lambda} = \beta + W_0\!\left(-\beta e^{-\beta}\right),

with W₀ being the main branch of Lambert's W-function[12] and

\beta = \frac{m}{1 - n_0/n}.
Alternatively, the equation can be solved by iteration.[13]
The maximum likelihood estimator for π is given by

\hat{\pi} = 1 - \frac{m}{\hat{\lambda}}.
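A sketch of these estimators in Python, using scipy's lambertw for the closed form (the function name is ours; the sample must actually exhibit zero inflation, i.e. β > 1):

import numpy as np
from scipy.special import lambertw

def zip_mle(y):
    # ML estimates for a zero-inflated Poisson sample via Lambert's W.
    y = np.asarray(y)
    m = y.mean()
    zero_frac = np.mean(y == 0)          # n0 / n
    beta = m / (1.0 - zero_frac)
    lam = beta + lambertw(-beta * np.exp(-beta), k=0).real
    pi = 1.0 - m / lam
    return lam, pi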
In 1994, Greene considered the zero-inflated negative binomial (ZINB) model.[14] Daniel B. Hall adapted Lambert's methodology to an upper-bounded count situation, thereby obtaining a zero-inflated binomial (ZIB) model.[15]

If the count data Y is such that the probability of zero is larger than the probability of nonzero, namely

\Pr(Y = 0) > 0.5,

then the discrete data Y obey a discrete pseudo compound Poisson distribution.[16]
In fact, let G(z) = \sum_{n=0}^{\infty} P(Y = n)\, z^n be the probability generating function of Y. If p_0 = \Pr(Y = 0) > 0.5, then |G(z)| \ge p_0 - \sum_{i=1}^{\infty} p_i = 2 p_0 - 1 > 0, and, from the Wiener–Lévy theorem,[17] G(z) is the probability generating function of a discrete pseudo compound Poisson distribution.

We say that the discrete random variable Y satisfying the probability generating function characterization

G(z) = \sum_{n=0}^{\infty} P(Y = n)\, z^n = \exp\!\left(\sum_{k=1}^{\infty} \alpha_k \lambda \left(z^k - 1\right)\right), \qquad |z| \le 1,

has a discrete pseudo compound Poisson distribution with parameters

(\alpha_1 \lambda,\, \alpha_2 \lambda,\, \ldots), \qquad \lambda > 0, \quad \sum_{k=1}^{\infty} \alpha_k = 1, \quad \alpha_k \in \mathbb{R}.

When all the α_k are non-negative, it is the discrete compound Poisson distribution (non-Poisson case) with the overdispersion property.
|
https://en.wikipedia.org/wiki/Zero-inflated_model
|