In mathematics, a root of unity is any complex number that yields 1 when raised to some positive integer power n. Roots of unity are used in many branches of mathematics, and are especially important in number theory, the theory of group characters, and the discrete Fourier transform. A root of unity is occasionally called a de Moivre number, after the French mathematician Abraham de Moivre.
Roots of unity can be defined in any field. If the characteristic of the field is zero, the roots are complex numbers that are also algebraic integers. For fields with a positive characteristic, the roots belong to a finite field, and, conversely, every nonzero element of a finite field is a root of unity. Any algebraically closed field contains exactly n nth roots of unity, except when n is a multiple of the (positive) characteristic of the field.
An nth root of unity, where n is a positive integer, is a number z satisfying the equation[1][2]

z^n = 1.

Unless otherwise specified, the roots of unity may be taken to be complex numbers (including the number 1, and the number −1 if n is even, which are complex with a zero imaginary part), and in this case, the nth roots of unity are[3]

exp(2kπi/n) = cos(2kπ/n) + i sin(2kπ/n),  k = 0, 1, …, n − 1.
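The formula above is easy to check numerically. The following sketch, using only Python's standard library, generates the nth roots of unity and verifies that each one yields 1 when raised to the nth power:

```python
import cmath

def roots_of_unity(n):
    """Return the n-th roots of unity exp(2*pi*i*k/n) for k = 0, ..., n-1."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# Each root raised to the n-th power gives 1 (up to floating-point error).
roots = roots_of_unity(6)
assert len(roots) == 6
assert all(abs(z**6 - 1) < 1e-9 for z in roots)
```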
However, the defining equation of roots of unity is meaningful over any field (and even over any ring) F, and this allows considering roots of unity in F. Whatever the field F, the roots of unity in F are either complex numbers, if the characteristic of F is 0, or otherwise belong to a finite field. Conversely, every nonzero element in a finite field is a root of unity in that field. See Root of unity modulo n and Finite field for further details.
An nth root of unity is said to be primitive if it is not an mth root of unity for some smaller m, that is, if[4][5]

z^m ≠ 1  for m = 1, 2, 3, …, n − 1.
If n is a prime number, then all nth roots of unity, except 1, are primitive.[6]
In the above formula in terms of exponential and trigonometric functions, the primitive nth roots of unity are those for which k and n are coprime integers.
Subsequent sections of this article deal with complex roots of unity. For the case of roots of unity in fields of nonzero characteristic, see Finite field § Roots of unity. For the case of roots of unity in rings of modular integers, see Root of unity modulo n.
Every nth root of unity z is a primitive ath root of unity for some a ≤ n, namely the smallest positive integer such that z^a = 1.
Any integer power of an nth root of unity is also an nth root of unity,[7] as

(z^k)^n = z^(kn) = (z^n)^k = 1^k = 1.
This is also true for negative exponents. In particular, the reciprocal of an nth root of unity is its complex conjugate, and is also an nth root of unity:[8]

1/z = z^(−1) = z^(n−1) = z̄.
If z is an nth root of unity and a ≡ b (mod n), then z^a = z^b. Indeed, by the definition of congruence modulo n, a = b + kn for some integer k, and hence

z^a = z^(b+kn) = z^b (z^n)^k = z^b.
Therefore, given a power z^a of z, one has z^a = z^r, where 0 ≤ r < n is the remainder of the Euclidean division of a by n.
Let z be a primitive nth root of unity. Then the powers z, z², …, z^(n−1), z^n = z^0 = 1 are nth roots of unity and are all distinct. (If z^a = z^b where 1 ≤ a < b ≤ n, then z^(b−a) = 1, which would imply that z is not primitive.) This implies that z, z², …, z^(n−1), z^n = z^0 = 1 are all of the nth roots of unity, since an nth-degree polynomial equation over a field (in this case the field of complex numbers) has at most n solutions.
From the preceding, it follows that, if z is a primitive nth root of unity, then z^a = z^b if and only if a ≡ b (mod n). If z is not primitive, then a ≡ b (mod n) implies z^a = z^b, but the converse may be false, as the following example shows. If n = 4, a non-primitive nth root of unity is z = −1, and one has z² = z⁴ = 1, although 2 ≢ 4 (mod 4).
Let z be a primitive nth root of unity. A power w = z^k of z is a primitive ath root of unity for

a = n / gcd(k, n),

where gcd(k, n) is the greatest common divisor of n and k. This results from the fact that ka is the smallest multiple of k that is also a multiple of n. In other words, ka is the least common multiple of k and n. Thus

a = lcm(k, n)/k = kn/(k gcd(k, n)) = n/gcd(k, n).
Thus, if k and n are coprime, z^k is also a primitive nth root of unity, and therefore there are φ(n) distinct primitive nth roots of unity (where φ is Euler's totient function). This implies that if n is a prime number, all the roots except +1 are primitive.
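This count can be checked directly: selecting the exponents k coprime to n yields exactly φ(n) primitive roots, and each one only returns to 1 at the nth power. A small standard-library sketch:

```python
import cmath
from math import gcd

def primitive_roots_of_unity(n):
    """Primitive n-th roots of unity: exp(2*pi*i*k/n) with gcd(k, n) = 1."""
    return [cmath.exp(2j * cmath.pi * k / n)
            for k in range(1, n + 1) if gcd(k, n) == 1]

def totient(n):
    """Euler's totient function, by direct counting."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

assert len(primitive_roots_of_unity(12)) == totient(12) == 4
# For a primitive root, the smallest m with z**m = 1 is m = n itself.
z = primitive_roots_of_unity(12)[0]
assert min(m for m in range(1, 13) if abs(z**m - 1) < 1e-9) == 12
```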
In other words, if R(n) is the set of all nth roots of unity and P(n) is the set of primitive ones, R(n) is a disjoint union of the P(d):

R(n) = ⋃_{d | n} P(d),

where the notation means that d goes through all the positive divisors of n, including 1 and n.
Since the cardinality of R(n) is n, and that of P(n) is φ(n), this demonstrates the classical formula

∑_{d | n} φ(d) = n.
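The classical formula is easy to verify computationally; the following sketch checks it for a few values of n using direct counting:

```python
from math import gcd

def totient(n):
    """Euler's totient function, by direct counting."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def divisors(n):
    """All positive divisors of n, including 1 and n."""
    return [d for d in range(1, n + 1) if n % d == 0]

# R(n) partitions into the P(d) over divisors d of n, so n = sum of phi(d).
for n in (1, 6, 12, 100):
    assert sum(totient(d) for d in divisors(n)) == n
```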
The product and the multiplicative inverse of two roots of unity are also roots of unity. In fact, if x^m = 1 and y^n = 1, then (x^(−1))^m = 1, and (xy)^k = 1, where k is the least common multiple of m and n.
Therefore, the roots of unity form an abelian group under multiplication. This group is the torsion subgroup of the circle group.
For an integer n, the product and the multiplicative inverse of two nth roots of unity are also nth roots of unity. Therefore, the nth roots of unity form an abelian group under multiplication.
Given a primitive nth root of unity ω, the other nth roots are powers of ω. This means that the group of the nth roots of unity is a cyclic group. It is worth remarking that the term cyclic group originated from the fact that this group is a subgroup of the circle group.
Let Q(ω) be the field extension of the rational numbers generated over Q by a primitive nth root of unity ω. As every nth root of unity is a power of ω, the field Q(ω) contains all nth roots of unity, and Q(ω) is a Galois extension of Q.
If k is an integer, ω^k is a primitive nth root of unity if and only if k and n are coprime. In this case, the map

ω ↦ ω^k

induces an automorphism of Q(ω), which maps every nth root of unity to its kth power. Every automorphism of Q(ω) is obtained in this way, and these automorphisms form the Galois group of Q(ω) over the field of the rationals.
The rules of exponentiation imply that the composition of two such automorphisms is obtained by multiplying the exponents. It follows that the map

k ↦ (ω ↦ ω^k)

defines a group isomorphism between the units of the ring of integers modulo n and the Galois group of Q(ω).
This shows that this Galois group is abelian, and thus implies that the primitive roots of unity may be expressed in terms of radicals.
The real parts of the primitive roots of unity are related to one another as roots of the minimal polynomial of 2cos(2π/n). The roots of the minimal polynomial are just twice the real parts; this minimal polynomial has a cyclic Galois group.
De Moivre's formula, which is valid for all real x and integers n, is

(cos x + i sin x)^n = cos nx + i sin nx.

Setting x = 2π/n gives a primitive nth root of unity: one gets

(cos(2π/n) + i sin(2π/n))^n = cos 2π + i sin 2π = 1,

but

(cos(2π/n) + i sin(2π/n))^k = cos(2kπ/n) + i sin(2kπ/n) ≠ 1

for k = 1, 2, …, n − 1. In other words,

cos(2π/n) + i sin(2π/n)

is a primitive nth root of unity.
This formula shows that in the complex plane the nth roots of unity are at the vertices of a regular n-sided polygon inscribed in the unit circle, with one vertex at 1 (see the plot for n = 3). This geometric fact accounts for the term "cyclotomic" in such phrases as cyclotomic field and cyclotomic polynomial; it is from the Greek roots "cyclo" (circle) plus "tomos" (cut, divide).
Euler's formula

e^(ix) = cos x + i sin x,

which is valid for all real x, can be used to put the formula for the nth roots of unity into the form

e^(2πik/n),  0 ≤ k < n.
It follows from the discussion in the previous section that this is a primitive nth root if and only if the fraction k/n is in lowest terms; that is, that k and n are coprime. An irrational number that can be expressed as the real part of a root of unity, that is, as cos(2πk/n), is called a trigonometric number.
The nth roots of unity are, by definition, the roots of the polynomial x^n − 1, and are thus algebraic numbers. As this polynomial is not irreducible (except for n = 1), the primitive nth roots of unity are roots of an irreducible polynomial (over the integers) of lower degree, called the nth cyclotomic polynomial, and often denoted Φ_n. The degree of Φ_n is given by Euler's totient function, which counts (among other things) the number of primitive nth roots of unity.[9] The roots of Φ_n are exactly the primitive nth roots of unity.
Galois theory can be used to show that the cyclotomic polynomials may be conveniently solved in terms of radicals. (The trivial form ⁿ√1 is not convenient, because it contains non-primitive roots, such as 1, which are not roots of the cyclotomic polynomial, and because it does not give the real and imaginary parts separately.) This means that, for each positive integer n, there exists an expression built from integers by root extractions, additions, subtractions, multiplications, and divisions (and nothing else), such that the primitive nth roots of unity are exactly the set of values that can be obtained by choosing values for the root extractions (k possible values for a kth root). (For more details see § Cyclotomic fields, below.)
Gauss proved that a primitive nth root of unity can be expressed using only square roots, addition, subtraction, multiplication and division if and only if it is possible to construct with compass and straightedge the regular n-gon. This is the case if and only if n is either a power of two or the product of a power of two and Fermat primes that are all different.
If z is a primitive nth root of unity, the same is true for 1/z, and r = z + 1/z is twice the real part of z. In other words, Φ_n is a reciprocal polynomial; the polynomial R_n that has r as a root may be deduced from Φ_n by the standard manipulation on reciprocal polynomials, and the primitive nth roots of unity may be deduced from the roots of R_n by solving the quadratic equation z² − rz + 1 = 0. That is, the real part of the primitive root is r/2, and its imaginary part is ±i√(1 − (r/2)²).
The polynomial R_n is an irreducible polynomial whose roots are all real. Its degree is a power of two if and only if n is a product of a power of two and a product (possibly empty) of distinct Fermat primes, in which case the regular n-gon is constructible with compass and straightedge. Otherwise, it is solvable in radicals, but one is in the casus irreducibilis; that is, every expression of the roots in terms of radicals involves nonreal radicals.
If z is a primitive nth root of unity, then the sequence of powers

…, z^(−1), z^0, z^1, …

is n-periodic (because z^(j+n) = z^j z^n = z^j for all values of j), and the n sequences of powers

s_k: …, z^(k·(−1)), z^(k·0), z^(k·1), …

for k = 1, …, n are all n-periodic (because z^(k·(j+n)) = z^(k·j)). Furthermore, the set {s_1, …, s_n} of these sequences is a basis of the linear space of all n-periodic sequences. This means that any n-periodic sequence of complex numbers

…, x_(−1), x_0, x_1, …

can be expressed as a linear combination of powers of a primitive nth root of unity:

x_j = ∑_k X_k z^(k·j) = X_1 z^(1·j) + ⋯ + X_n z^(n·j)

for some complex numbers X_1, …, X_n and every integer j.
This is a form of Fourier analysis. If j is a (discrete) time variable, then k is a frequency and X_k is a complex amplitude.
Choosing for the primitive nth root of unity

z = e^(2πi/n) = cos(2π/n) + i sin(2π/n)

allows x_j to be expressed as a linear combination of cos and sin:

x_j = ∑_k A_k cos(2πjk/n) + ∑_k B_k sin(2πjk/n).

This is a discrete Fourier transform.
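As a concrete illustration of expanding an n-periodic sequence in powers of a primitive root of unity, the sketch below computes the amplitudes X_k and reconstructs the original values. (Here k runs over 0, …, n − 1, which gives the same set of sequences since z^0 = z^n.)

```python
import cmath

def dft_coefficients(x):
    """Amplitudes X_k with x_j = sum_k X_k * z**(k*j), z = exp(2*pi*i/n)."""
    n = len(x)
    z = cmath.exp(2j * cmath.pi / n)
    return [sum(x[j] * z**(-k * j) for j in range(n)) / n for k in range(n)]

def reconstruct(X, j):
    """Evaluate the linear combination sum_k X_k * z**(k*j) at time j."""
    n = len(X)
    z = cmath.exp(2j * cmath.pi / n)
    return sum(X[k] * z**(k * j) for k in range(n))

x = [1, 2, 0, -1]
X = dft_coefficients(x)
# Reconstruction recovers the sequence (periodically, for every integer j).
assert all(abs(reconstruct(X, j) - x[j]) < 1e-9 for j in range(4))
assert abs(reconstruct(X, 4) - x[0]) < 1e-9   # 4-periodicity
```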
Let SR(n) be the sum of all the nth roots of unity, primitive or not. Then

SR(n) = 1 if n = 1, and SR(n) = 0 if n > 1.

This is an immediate consequence of Vieta's formulas. In fact, the nth roots of unity being the roots of the polynomial X^n − 1, their sum is the negation of the coefficient of degree n − 1, which is 0 when n > 1; for n = 1 the only root is 1.

Alternatively, for n = 1 there is nothing to prove, and for n > 1 there exists a root z ≠ 1. Since the set S of all the nth roots of unity is a group, zS = S, so the sum satisfies z SR(n) = SR(n), whence SR(n) = 0.
Let SP(n) be the sum of all the primitive nth roots of unity. Then

SP(n) = μ(n),

where μ(n) is the Möbius function.
In the section Elementary properties, it was shown that if R(n) is the set of all nth roots of unity and P(n) is the set of primitive ones, R(n) is a disjoint union of the P(d):

R(n) = ⋃_{d | n} P(d).

This implies

SR(n) = ∑_{d | n} SP(d).

Applying the Möbius inversion formula gives

SP(n) = ∑_{d | n} μ(d) SR(n/d).

In this formula, if d < n, then SR(n/d) = 0, and for d = n, SR(n/d) = 1. Therefore, SP(n) = μ(n).
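The identity SP(n) = μ(n) can be checked numerically. The sketch below sums the primitive roots directly and compares against a trial-division implementation of the Möbius function:

```python
import cmath
from math import gcd

def sum_primitive_roots(n):
    """SP(n): sum of the primitive n-th roots of unity."""
    return sum(cmath.exp(2j * cmath.pi * k / n)
               for k in range(1, n + 1) if gcd(k, n) == 1)

def mobius(n):
    """Moebius function mu(n), by trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result          # one remaining prime factor
    return result

for n in (1, 2, 6, 8, 12, 30):
    assert abs(sum_primitive_roots(n) - mobius(n)) < 1e-9
```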
This is the special case c_n(1) of Ramanujan's sum c_n(s),[10] defined as the sum of the sth powers of the primitive nth roots of unity:

c_n(s) = ∑_{a = 1, gcd(a, n) = 1}^{n} e^(2πi(a/n)s).
From the summation formula follows an orthogonality relationship: for j = 1, …, n and j′ = 1, …, n,

∑_{k=1}^{n} z̄^(j·k) · z^(j′·k) = n δ_{j,j′},

where δ is the Kronecker delta and z is any primitive nth root of unity.
The n × n matrix U whose (j, k)th entry is

U_{j,k} = n^(−1/2) z^(j·k)

defines a discrete Fourier transform. Computing the inverse transformation using Gaussian elimination requires O(n³) operations. However, it follows from the orthogonality that U is unitary. That is,

∑_{k=1}^{n} U_{j,k} Ū_{k,j′} = δ_{j,j′},
and thus the inverse of U is simply the complex conjugate. (This fact was first noted by Gauss when solving the problem of trigonometric interpolation.) The straightforward application of U or its inverse to a given vector requires O(n²) operations. The fast Fourier transform algorithms reduce the number of operations further to O(n log n).
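Unitarity of the DFT matrix can be demonstrated with plain lists, with no numerical library assumed: multiplying U by its conjugate transpose yields the identity matrix.

```python
import cmath

def dft_matrix(n):
    """U with entries z**(j*k) / sqrt(n), z a primitive n-th root of unity."""
    z = cmath.exp(2j * cmath.pi / n)
    s = n ** 0.5
    return [[z**(j * k) / s for k in range(n)] for j in range(n)]

def conj_transpose(U):
    n = len(U)
    return [[U[k][j].conjugate() for k in range(n)] for j in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

U = dft_matrix(5)
I = matmul(U, conj_transpose(U))
# U times its conjugate transpose is the identity, so U is unitary.
assert all(abs(I[i][j] - (1 if i == j else 0)) < 1e-9
           for i in range(5) for j in range(5))
```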
The zeros of the polynomial

p(z) = z^n − 1

are precisely the nth roots of unity, each with multiplicity 1. The nth cyclotomic polynomial is defined by the fact that its zeros are precisely the primitive nth roots of unity, each with multiplicity 1:

Φ_n(z) = ∏_{k=1}^{φ(n)} (z − z_k),

where z_1, z_2, z_3, …, z_{φ(n)} are the primitive nth roots of unity, and φ(n) is Euler's totient function. The polynomial Φ_n(z) has integer coefficients and is an irreducible polynomial over the rational numbers (that is, it cannot be written as the product of two positive-degree polynomials with rational coefficients).[9] The case of prime n, which is easier than the general assertion, follows by applying Eisenstein's criterion to the polynomial

((z + 1)^n − 1) / ((z + 1) − 1),

and expanding via the binomial theorem.
Every nth root of unity is a primitive dth root of unity for exactly one positive divisor d of n. This implies that[9]

z^n − 1 = ∏_{d | n} Φ_d(z).

This formula represents the factorization of the polynomial z^n − 1 into irreducible factors:

z^1 − 1 = z − 1
z^2 − 1 = (z − 1)(z + 1)
z^3 − 1 = (z − 1)(z^2 + z + 1)
z^4 − 1 = (z − 1)(z + 1)(z^2 + 1)

Applying Möbius inversion to the formula gives

Φ_n(z) = ∏_{d | n} (z^(n/d) − 1)^(μ(d)) = ∏_{d | n} (z^d − 1)^(μ(n/d)),

where μ is the Möbius function. So the first few cyclotomic polynomials are

Φ_1(z) = z − 1
Φ_2(z) = (z^2 − 1)/(z − 1) = z + 1
Φ_3(z) = (z^3 − 1)/(z − 1) = z^2 + z + 1
Φ_4(z) = (z^4 − 1)/(z^2 − 1) = z^2 + 1
Φ_5(z) = (z^5 − 1)/(z − 1) = z^4 + z^3 + z^2 + z + 1
Φ_6(z) = (z^6 − 1)(z − 1)/((z^3 − 1)(z^2 − 1)) = z^2 − z + 1
If p is a prime number, then all the pth roots of unity except 1 are primitive pth roots. Therefore,[6]

Φ_p(z) = (z^p − 1)/(z − 1) = ∑_{k=0}^{p−1} z^k.

Substituting any positive integer ≥ 2 for z, this sum becomes a base-z repunit. Thus a necessary (but not sufficient) condition for a repunit to be prime is that its length be prime.
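The repunit connection is immediate to check: evaluating Φ_p at an integer base gives the digit string 11…1 of length p in that base. A minimal sketch:

```python
def phi_p(p, z):
    """Phi_p(z) = 1 + z + ... + z**(p-1) for prime p, at an integer z."""
    return sum(z**k for k in range(p))

# Evaluating at z = 10 gives decimal repunits: Phi_7(10) = 1111111.
assert phi_p(7, 10) == 1111111
# A base-2 repunit of length 5 is the Mersenne number 0b11111 = 31.
assert phi_p(5, 2) == 0b11111 == 31
```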
Note that, contrary to first appearances, not all coefficients of all cyclotomic polynomials are 0, 1, or −1. The first exception is Φ_105. It is not a surprise that it takes this long to get an example, because the behavior of the coefficients depends not so much on n as on how many odd prime factors appear in n. More precisely, it can be shown that if n has 1 or 2 odd prime factors (for example, n = 150) then the nth cyclotomic polynomial only has coefficients 0, 1 or −1. Thus the first conceivable n for which there could be a coefficient besides 0, 1, or −1 is a product of the three smallest odd primes, and that is 3 ⋅ 5 ⋅ 7 = 105. This by itself doesn't prove the 105th polynomial has another coefficient, but does show it is the first one which even has a chance of working (and then a computation of the coefficients shows it does). A theorem of Schur says that there are cyclotomic polynomials with coefficients arbitrarily large in absolute value. In particular, if n = p_1 p_2 ⋯ p_t, where p_1 < p_2 < ⋯ < p_t are odd primes, p_1 + p_2 > p_t, and t is odd, then 1 − t occurs as a coefficient in the nth cyclotomic polynomial.[11]
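The computation mentioned above is short enough to carry out with exact integer polynomial arithmetic, using the factorization z^n − 1 = ∏ Φ_d(z) to divide out the cyclotomic polynomials of the proper divisors. The sketch below (standard library only) confirms that n = 105 is the first n with a coefficient outside {−1, 0, 1}:

```python
from functools import lru_cache

def polydiv_exact(num, den):
    """Exact division of integer polynomials (coefficients, lowest degree first)."""
    num = list(num)
    q = [0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        coef = num[i + len(den) - 1] // den[-1]   # den is monic here, so exact
        q[i] = coef
        for j, d in enumerate(den):
            num[i + j] -= coef * d
    assert all(c == 0 for c in num)               # remainder must vanish
    return q

@lru_cache(maxsize=None)
def cyclotomic(n):
    """Coefficients of the n-th cyclotomic polynomial, lowest degree first."""
    poly = [-1] + [0] * (n - 1) + [1]             # z**n - 1
    for d in range(1, n):
        if n % d == 0:
            poly = polydiv_exact(poly, cyclotomic(d))
    return tuple(poly)

assert cyclotomic(6) == (1, -1, 1)                # z**2 - z + 1
assert all(set(cyclotomic(n)) <= {-1, 0, 1} for n in range(1, 105))
assert -2 in cyclotomic(105)                      # first exceptional coefficient
```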
Many restrictions are known about the values that cyclotomic polynomials can assume at integer values. For example, if p is prime, then d ∣ Φ_p(d) if and only if d ≡ 1 (mod p).
Cyclotomic polynomials are solvable in radicals, as roots of unity are themselves radicals. Moreover, there exist more informative radical expressions for nth roots of unity with the additional property[12] that every value of the expression obtained by choosing values of the radicals (for example, signs of square roots) is a primitive nth root of unity. This was already shown by Gauss in 1797.[13] Efficient algorithms exist for calculating such expressions.[14]
The nth roots of unity form under multiplication a cyclic group of order n, and in fact these groups comprise all of the finite subgroups of the multiplicative group of the complex number field. A generator for this cyclic group is a primitive nth root of unity.
The nth roots of unity form an irreducible representation of any cyclic group of order n. The orthogonality relationship also follows from group-theoretic principles as described in Character group.
The roots of unity appear as entries of the eigenvectors of any circulant matrix; that is, matrices that are invariant under cyclic shifts, a fact that also follows from group representation theory as a variant of Bloch's theorem.[15] In particular, if a circulant Hermitian matrix is considered (for example, a discretized one-dimensional Laplacian with periodic boundaries[16]), the orthogonality property immediately follows from the usual orthogonality of eigenvectors of Hermitian matrices.
By adjoining a primitive nth root of unity to Q, one obtains the nth cyclotomic field Q(exp(2πi/n)). This field contains all nth roots of unity and is the splitting field of the nth cyclotomic polynomial over Q. The field extension Q(exp(2πi/n))/Q has degree φ(n) and its Galois group is naturally isomorphic to the multiplicative group of units of the ring Z/nZ.
As the Galois group of Q(exp(2πi/n))/Q is abelian, this is an abelian extension. Every subfield of a cyclotomic field is an abelian extension of the rationals. It follows that every nth root of unity may be expressed in terms of kth roots, with various k not exceeding φ(n). In these cases Galois theory can be written out explicitly in terms of Gaussian periods: this theory from the Disquisitiones Arithmeticae of Gauss was published many years before Galois.[17]
Conversely, every abelian extension of the rationals is such a subfield of a cyclotomic field; this is the content of a theorem of Kronecker, usually called the Kronecker–Weber theorem on the grounds that Weber completed the proof.
For n = 1, 2, both roots of unity 1 and −1 are integers.
For three values of n (namely n = 3, 4, and 6), the roots of unity are quadratic integers:
For four other values of n, the primitive roots of unity are not quadratic integers, but the sum of any root of unity with its complex conjugate (also an nth root of unity) is a quadratic integer.
For n = 5, 10, none of the non-real roots of unity (which satisfy a quartic equation) is a quadratic integer, but the sum z + z̄ = 2 Re z of each root with its complex conjugate (also a 5th root of unity) is an element of the ring Z[(1 + √5)/2] (D = 5). For two pairs of non-real 5th roots of unity these sums are the inverse golden ratio and minus the golden ratio.

For n = 8, for any root of unity, z + z̄ equals either 0, ±2, or ±√2 (D = 2).

For n = 12, for any root of unity, z + z̄ equals either 0, ±1, ±2, or ±√3 (D = 3).
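The golden-ratio values for n = 5 can be confirmed numerically, since z + z̄ = 2cos(2πk/5) for the non-real 5th roots of unity:

```python
from math import cos, pi, sqrt

golden = (1 + sqrt(5)) / 2
# Sums z + conj(z) = 2*cos(2*pi*k/5) for the non-real 5th roots (k = 1..4).
sums = sorted({round(2 * cos(2 * pi * k / 5), 12) for k in range(1, 5)})
assert len(sums) == 2                          # two conjugate pairs
assert abs(sums[0] - (-golden)) < 1e-9         # minus the golden ratio
assert abs(sums[1] - (golden - 1)) < 1e-9      # inverse golden ratio: 1/phi = phi - 1
```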
Source: https://en.wikipedia.org/wiki/Root_of_unity
In mathematics, a quadratic equation is a polynomial equation of the second degree. The general form is

ax² + bx + c = 0,

where a ≠ 0.
The quadratic equation on a number x can be solved using the well-known quadratic formula, which can be derived by completing the square. That formula always gives the roots of the quadratic equation, but the solutions are expressed in a form that often involves a quadratic irrational number, which is an algebraic fraction that can be evaluated as a decimal fraction only by applying an additional root extraction algorithm.
If the roots are real, there is an alternative technique that obtains a rational approximation to one of the roots by manipulating the equation directly. The method works in many cases, and long ago it stimulated further development of the analytical theory of continued fractions.
Here is a simple example to illustrate the solution of a quadratic equation using continued fractions. We begin with the equation

x² = 2

and manipulate it directly. Subtracting one from both sides we obtain

x² − 1 = 1.

This is easily factored into

(x + 1)(x − 1) = 1,

from which we obtain

x − 1 = 1/(1 + x),

and finally

x = 1 + 1/(1 + x).

Now comes the crucial step. We substitute this expression for x back into itself, recursively, to obtain

x = 1 + 1/(1 + (1 + 1/(1 + x))) = 1 + 1/(2 + 1/(1 + x)).

But now we can make the same recursive substitution again, and again, and again, pushing the unknown quantity x as far down and to the right as we please, and obtaining in the limit the infinite simple continued fraction

x = 1 + 1/(2 + 1/(2 + 1/(2 + ⋯))) = √2.
By applying the fundamental recurrence formulas we may easily compute the successive convergents of this continued fraction to be 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, …, where each successive convergent is formed by taking the numerator plus the denominator of the preceding term as the denominator of the next term, then adding in the preceding denominator to form the new numerator. This sequence of denominators is a particular Lucas sequence known as the Pell numbers.
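The recurrence described above (new denominator = old numerator + old denominator; new numerator = new denominator + old denominator, i.e. p, q → p + 2q, p + q) can be sketched with exact rational arithmetic:

```python
from fractions import Fraction

def sqrt2_convergents(count):
    """Successive convergents of 1 + 1/(2 + 1/(2 + ...)), approaching sqrt(2)."""
    p, q = 1, 1
    out = [Fraction(p, q)]
    for _ in range(count - 1):
        p, q = p + 2 * q, p + q      # Pell-number recurrence
        out.append(Fraction(p, q))
    return out

conv = sqrt2_convergents(7)
assert conv == [Fraction(1), Fraction(3, 2), Fraction(7, 5), Fraction(17, 12),
                Fraction(41, 29), Fraction(99, 70), Fraction(239, 169)]
# The convergents approach sqrt(2).
assert abs(float(conv[-1]) - 2 ** 0.5) < 1e-4
```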
We can gain further insight into this simple example by considering the successive powers of

ω = √2 − 1.

That sequence of successive powers is given by

ω² = 3 − 2√2,  ω³ = 5√2 − 7,  ω⁴ = 17 − 12√2,  ω⁵ = 29√2 − 41,

and so forth. Notice how the fractions derived as successive approximants to √2 appear in this geometric progression.
Since 0 < ω < 1, the sequence {ω^n} clearly tends toward zero, by well-known properties of the positive real numbers. This fact can be used to prove, rigorously, that the convergents discussed in the simple example above do in fact converge to √2, in the limit.
We can also find these numerators and denominators appearing in the successive powers of

ω^(−1) = √2 + 1.

The sequence of successive powers {ω^(−n)} does not approach zero; it grows without limit instead. But it can still be used to obtain the convergents in our simple example.
Notice also that the set obtained by forming all the combinations a + b√2, where a and b are integers, is an example of an object known in abstract algebra as a ring, and more specifically as an integral domain. The number ω is a unit in that integral domain. See also algebraic number field.
Continued fractions are most conveniently applied to solve the general quadratic equation expressed in the form of a monic polynomial

x² + bx + c = 0,

which can always be obtained by dividing the original equation by its leading coefficient. Starting from this monic equation we see that

x² + bx = −c
x + b = −c/x
x = −b − c/x.

But now we can apply the last equation to itself recursively to obtain

x = −b − c/(−b − c/(−b − c/(−b − ⋯))).
If this infinite continued fraction converges at all, it must converge to one of the roots of the monic polynomial x² + bx + c = 0. Unfortunately, this particular continued fraction does not converge to a finite number in every case. We can easily see that this is so by considering the quadratic formula and a monic polynomial with real coefficients. If the discriminant of such a polynomial is negative, then both roots of the quadratic equation have imaginary parts. In particular, if b and c are real numbers and b² − 4c < 0, all the convergents of this continued fraction "solution" will be real numbers, and they cannot possibly converge to a root of the form u + iv (where v ≠ 0), which does not lie on the real number line.
By applying a result obtained by Euler in 1748 it can be shown that the continued fraction solution to the general monic quadratic equation with real coefficients

x² + bx + c = 0,

given by

x = −b − c/(−b − c/(−b − ⋯)),

either converges or diverges depending on both the coefficient b and the value of the discriminant b² − 4c.

If b = 0 the general continued fraction solution is totally divergent; the convergents alternate between 0 and ∞. If b ≠ 0 we distinguish three cases:

1. If the discriminant is negative, the fraction diverges by oscillation.
2. If the discriminant is zero, the fraction converges to the single root of multiplicity two.
3. If the discriminant is positive, the equation has two real roots, and the continued fraction converges to the larger (in absolute value) of them.
When the monic quadratic equation with real coefficients is of the form x² = c, the general solution described above is useless because division by zero is not well defined. As long as c is positive, though, it is always possible to transform the equation by subtracting a perfect square from both sides and proceeding along the lines illustrated with √2 above. In symbols, if

x² = c  (c > 0),

just choose some positive real number p such that

p² < c.

Then by direct manipulation we obtain

x² − p² = c − p²
(x − p)(x + p) = c − p²
x = p + (c − p²)/(p + x) = p + (c − p²)/(2p + (c − p²)/(2p + ⋯)),

and this transformed continued fraction must converge because all the partial numerators and partial denominators are positive real numbers.
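This transformation is easy to try out numerically. The sketch below iterates the fixed-point relation x = p + (c − p²)/(p + x), which is equivalent to truncating the continued fraction at increasing depth (the choice p = ⌊√c⌋ is one convenient assumption; any positive p with p² < c works):

```python
def sqrt_via_continued_fraction(c, iterations=30):
    """Approximate sqrt(c), c > 0, by iterating x = p + (c - p**2) / (p + x)."""
    p = int(c ** 0.5)            # largest integer with p*p <= c (assumed choice)
    if p * p == c:
        return float(p)          # c is a perfect square; nothing to iterate
    x = float(p)                 # initial guess
    for _ in range(iterations):
        x = p + (c - p * p) / (p + x)
    return x

assert abs(sqrt_via_continued_fraction(2) - 2 ** 0.5) < 1e-12
assert abs(sqrt_via_continued_fraction(7) - 7 ** 0.5) < 1e-12
```

Convergence is geometric with ratio (√c − p)/(√c + p), so choosing p close to √c converges fastest.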
By the fundamental theorem of algebra, if the monic polynomial equation x² + bx + c = 0 has complex coefficients, it must have two (not necessarily distinct) complex roots. Unfortunately, the discriminant b² − 4c is not as useful in this situation, because it may be a complex number. Still, a modified version of the general theorem can be proved.
The continued fraction solution to the general monic quadratic equation with complex coefficients

x² + bx + c = 0  (b ≠ 0),

given by

x = −b − c/(−b − c/(−b − ⋯)),

converges or not depending on the value of the discriminant b² − 4c and on the relative magnitude of its two roots.

Denoting the two roots by r₁ and r₂, we distinguish three cases:

1. If the discriminant is zero, the fraction converges to the single root of multiplicity two.
2. If the two roots have different absolute values, the fraction converges to the root of greater absolute value.
3. If the two roots are distinct but have the same absolute value, the fraction diverges by oscillation.
In case 2, the rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges.
This general solution of monic quadratic equations with complex coefficients is usually not very useful for obtaining rational approximations to the roots, because the criteria are circular (that is, the relative magnitudes of the two roots must be known before we can conclude that the fraction converges, in most cases). But this solution does find useful applications in the further analysis of the convergence problem for continued fractions with complex elements.
Source: https://en.wikipedia.org/wiki/Solving_quadratic_equations_with_continued_fractions
The square-root sum problem (SRS) is a computational decision problem from the field of numerical analysis, with applications to computational geometry.
SRS is defined as follows:[1]

Given positive integers a_1, …, a_k and an integer t, decide whether ∑_{i=1}^{k} √a_i ≤ t.
An alternative definition is:

Given positive integers a_1, …, a_k and b_1, …, b_k, decide whether ∑_{i=1}^{k} √a_i ≤ ∑_{i=1}^{k} √b_i.
The problem was posed in 1981,[2] and likely earlier.

SRS can be solved in polynomial time in the Real RAM model.[3] However, its run-time complexity in the Turing machine model is open, as of 1997.[1] The main difficulty is that, in order to solve the problem, the square roots should be computed to high accuracy, which may require a large number of bits. The problem is mentioned in the Open Problems Garden.[4]
Blömer[5] presents a polynomial-time Monte Carlo algorithm for deciding whether a sum of square roots equals zero. The algorithm applies more generally, to any sum of radicals.
Allender, Bürgisser, Pedersen and Miltersen[6] prove that SRS lies in the counting hierarchy (which is contained in PSPACE).
One way to solve SRS is to prove a lower bound on the absolute difference |t − ∑_{i=1}^{k} √a_i| or |∑_{i=1}^{k} √a_i − ∑_{i=1}^{k} √b_i|. Such a lower bound is called a "separation bound", since it separates the difference from 0. For example, if the absolute difference is at least 2^(−d), then we can round all numbers to d bits of accuracy and solve SRS in time polynomial in d.
This leads to the mathematical problem of proving bounds on this difference. Define r(n, k) as the smallest positive value of the difference ∑_{i=1}^{k} √a_i − ∑_{i=1}^{k} √b_i, where the a_i and b_i are integers between 1 and n; define R(n, k) as −log r(n, k), which is the number of accuracy digits required to solve SRS. Computing r(n, k) is open problem 33 in The Open Problems Project.[7]
In particular, it is interesting whether R(n, k) is in O(poly(k, log n)). A positive answer would imply that SRS can be solved in polynomial time in the Turing machine model. Several bounds on R(n, k) are currently known.
SRS is important in computational geometry, as Euclidean distances are given by square roots, and many geometric problems (e.g. minimum spanning tree in the plane and the Euclidean traveling salesman problem) require computing sums of distances.
Etessami and Yannakakis[13] show a reduction from SRS to the problem of termination of recursive concurrent stochastic games.
SRS also has theoretical importance, as it is a simple special case of a semidefinite programming feasibility problem. Consider the 2 × 2 matrix [[1, x], [x, a]]. This matrix is positive semidefinite iff a − x² ≥ 0, iff |x| ≤ √a. Therefore, to solve SRS, we can construct a feasibility problem with k constraints of the form [[1, x_i], [x_i, a_i]] ⪰ 0, and additional linear constraints x_i ≥ 0 and ∑_i x_i ≥ t. The resulting SDP is feasible if and only if the answer to SRS is negative. As the run-time complexity of SRS in the Turing machine model is open, the same is true for SDP feasibility (as of 1997).
Kayal and Saha[14] extend the problem from integers to polynomials. Their results imply a solution to SRS for a special class of integers.
Source: https://en.wikipedia.org/wiki/Square-root_sum_problem
The Penrose method (or square-root method) is a method devised in 1946 by Professor Lionel Penrose[1] for allocating the voting weights of delegations (possibly a single representative) in decision-making bodies proportional to the square root of the population represented by the delegation. This is justified by the fact that, due to the square root law of Penrose, the a priori voting power (as defined by the Penrose–Banzhaf index) of a member of a voting body is inversely proportional to the square root of its size. Under certain conditions, this allocation achieves equal voting powers for all people represented, independent of the size of their constituency. Proportional allocation would result in excessive voting powers for the electorates of larger constituencies.
A precondition for the appropriateness of the method is en bloc voting of the delegations in the decision-making body: a delegation cannot split its votes; rather, each delegation has just a single vote to which weights are applied proportional to the square root of the population it represents. Another precondition is that the opinions of the people represented are statistically independent. The representativity of each delegation results from statistical fluctuations within the country, and then, according to Penrose, "small electorates are likely to obtain more representative governments than large electorates." A mathematical formulation of this idea results in the square root rule.
The Penrose method is not currently being used for any notable decision-making body, but it has been proposed for apportioning representation in a United Nations Parliamentary Assembly,[1][2] and for voting in the Council of the European Union.[3][4]
The Penrose method was revitalised within the European Union when it was proposed by Sweden in 2003 amid negotiations on the Amsterdam Treaty and by Poland in June 2007 during the summit on the Treaty of Lisbon. In this context, the method was proposed to compute voting weights of member states in the Council of the European Union.
Currently, the voting in the Council of the EU does not follow the Penrose method. Instead, the rules of the Nice Treaty are effective between 2004 and 2014, under certain conditions until 2017. The associated voting weights are compared in the adjacent table along with the population data of the member states.
Besides the voting weight, the voting power (i.e., the Penrose–Banzhaf index) of a member state also depends on the threshold percentage needed to make a decision. Smaller percentages work in favor of larger states. For example, if one state has 30% of the total voting weights while the threshold for decision making is at 29%, this state will have 100% voting power (i.e., an index of 1). For the EU-27, an optimal threshold, at which the voting powers of all citizens in any member state are almost equal, has been computed at about 61.6%.[3] This system is referred to as the "Jagiellonian Compromise", after the university of the paper's authors. The optimal threshold decreases with the number M of member states as 1/2 + 1/√(πM).[6]
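The threshold formula above is simple to evaluate; the following sketch (an illustration of the published formula, not an official computation) shows that for 27 member states it lands close to the cited optimum:

```python
import math

def optimal_threshold(m):
    """Approximate optimal qualified-majority threshold for m member
    states under the Jagiellonian Compromise: 1/2 + 1/sqrt(pi * m)."""
    return 0.5 + 1.0 / math.sqrt(math.pi * m)

# For the EU-27 the asymptotic formula gives roughly 60.9%, close to
# the exactly computed optimum of about 61.6% cited above.
print(f"{optimal_threshold(27):.1%}")

# The larger the body, the closer the optimal threshold gets to 50%.
assert optimal_threshold(100) < optimal_threshold(27)
```

Note that the closed-form expression is an asymptotic approximation, which is why it differs slightly from the exactly optimized 61.6% figure.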
According to INFUSA, "The square-root method is more than a pragmatic compromise between the extreme methods of world representation unrelated to population size and allocation of national quotas in direct proportion to population size; Penrose showed that in terms of statistical theory the square-root method gives to each voter in the world an equal influence on decision-making in a world assembly".[2]
Under the Penrose method, the relative voting weights of the most populous countries are lower than their proportion of the world population. In the table below, the countries' voting weights are computed as the square root of their year-2005 population in millions. This procedure was originally published by Penrose in 1946 based on pre-World War II population figures.[1]
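The weighting rule itself can be sketched in a few lines (the populations below are hypothetical placeholders, not Penrose's table):

```python
import math

# Hypothetical populations in millions (illustrative only).
populations = {"A": 100.0, "B": 25.0, "C": 1.0}

# Penrose weight: square root of the population represented.
weights = {c: math.sqrt(p) for c, p in populations.items()}

# Normalize so the weights sum to 1, for comparison with raw
# population shares.
total = sum(weights.values())
shares = {c: w / total for c, w in weights.items()}

# A 100x larger population yields only a 10x larger voting weight,
# which is why large countries' weights trail their population share.
assert math.isclose(weights["A"], 10 * weights["C"])
```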
It has been claimed that the Penrose square root law is limited to votes for which public opinion is equally divided for and against.[7][8][9] A study of various elections has shown that this equally-divided scenario is not typical; these elections suggested that voting weights should be distributed according to the 0.9 power of the number of voters represented (in contrast to the 0.5 power used in the Penrose method).[8]
In practice, the theoretical possibility of the decisiveness of a single vote is questionable. Election results that come close to a tie are likely to be legally challenged, as was the case in the US presidential election in Florida in 2000, which suggests that no single vote is pivotal.[8]
In addition, a minor technical issue is that the theoretical argument for allocation of voting weight is based on the possibility that an individual has a deciding vote in each representative's area. This scenario is only possible when each representative has an odd number of voters in their area.[9]
In quantum computing and specifically the quantum circuit model of computation, a quantum logic gate (or simply quantum gate) is a basic quantum circuit operating on a small number of qubits. Quantum logic gates are the building blocks of quantum circuits, like classical logic gates are for conventional digital circuits.
Unlike many classical logic gates, quantum logic gates arereversible. It is possible to perform classical computing using only reversible gates. For example, the reversibleToffoli gatecan implement allBoolean functions, often at the cost of having to useancilla bits. The Toffoli gate has a direct quantum equivalent, showing that quantum circuits can perform all operations performed by classical circuits.
Quantum gates are unitary operators, and are described as unitary matrices relative to some orthonormal basis. Usually the computational basis is used, which, unless stated otherwise, means that for a d-level quantum system (such as a qubit, a quantum register, or qutrits and qudits)[1]: 22–23 the orthonormal basis vectors are labeled |0⟩, |1⟩, …, |d−1⟩, or written in binary notation.
The current notation for quantum gates was developed by many of the founders ofquantum information scienceincluding Adriano Barenco,Charles Bennett,Richard Cleve,David P. DiVincenzo,Norman Margolus,Peter Shor, Tycho Sleator,John A. Smolin, and Harald Weinfurter,[2]building on notation introduced byRichard Feynmanin 1986.[3]
Quantum logic gates are represented byunitary matrices. A gate that acts onn{\displaystyle n}qubits(aregister) is represented by a2n×2n{\displaystyle 2^{n}\times 2^{n}}unitary matrix, and thesetof all such gates with the group operation ofmatrix multiplication[a]is theunitary groupU(2n).[2]Thequantum statesthat the gates act upon areunit vectorsin2n{\displaystyle 2^{n}}complexdimensions, with thecomplex Euclidean norm(the2-norm).[4]: 66[5]: 56, 65Thebasis vectors(sometimes calledeigenstates) are the possible outcomes if the state of the qubits ismeasured, and a quantum state is alinear combinationof these outcomes. The most common quantum gates operate onvector spacesof one or two qubits, just like the commonclassical logic gatesoperate on one or twobits.
Even though the quantum logic gates belong tocontinuous symmetry groups, realhardwareis inexact and thus limited in precision. The application of gates typically introduces errors, and thequantum states' fidelitiesdecrease over time. Iferror correctionis used, the usable gates are further restricted to a finite set.[4]: ch. 10[1]: ch. 14Later in this article, this is ignored as the focus is on the ideal quantum gates' properties.
Quantum states are typically represented by "kets", from a notation known asbra–ket.
The vector representation of a single qubit is |a⟩ = v₀|0⟩ + v₁|1⟩. Here, v₀ and v₁ are the complex probability amplitudes of the qubit. These values determine the probability of measuring a 0 or a 1 when measuring the state of the qubit. See measurement below for details.
The value zero is represented by the ket|0⟩=[10]{\displaystyle |0\rangle ={\begin{bmatrix}1\\0\end{bmatrix}}},and the value one is represented by the ket|1⟩=[01]{\displaystyle |1\rangle ={\begin{bmatrix}0\\1\end{bmatrix}}}.
Thetensor product(orKronecker product) is used to combine quantum states. The combined state for aqubit registeris the tensor product of the constituent qubits. The tensor product is denoted by the symbol⊗{\displaystyle \otimes }.
The vector representation of two qubits is:[6] |ab⟩ = |a⟩ ⊗ |b⟩ = v₀₀|00⟩ + v₀₁|01⟩ + v₁₀|10⟩ + v₁₁|11⟩, where v₀₀, v₀₁, v₁₀, v₁₁ are the complex probability amplitudes of the combined state.
The action of the gate on a specific quantum state is found by multiplying the vector |ψ₁⟩, which represents the state, by the matrix U representing the gate. The result is a new quantum state |ψ₂⟩: U|ψ₁⟩ = |ψ₂⟩.
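Concretely, applying a gate is a matrix-vector product. The following NumPy sketch (illustrative, not part of the article's formalism) applies the Pauli-X gate, defined later in the article, to the basis ket |0⟩:

```python
import numpy as np

# Computational basis kets for one qubit.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Pauli-X (the quantum NOT gate) as a 2x2 unitary matrix.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)

# Applying a gate is matrix-vector multiplication: U|psi1> = |psi2>.
psi2 = X @ ket0
assert np.allclose(psi2, ket1)  # X flips |0> to |1>
```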
TheSchrödinger equationdescribes how quantum systems that are notobservedevolve over time, and isiℏddt|Ψ⟩=H^|Ψ⟩.{\displaystyle i\hbar {\frac {d}{dt}}|\Psi \rangle ={\hat {H}}|\Psi \rangle .}When the system is in a stable environment, so it has a constantHamiltonian, the solution to this equation isU(t)=e−iH^t/ℏ.{\displaystyle U(t)=e^{-i{\hat {H}}t/\hbar }.}[1]: 24–25If the timet{\displaystyle t}is always the same it may be omitted for simplicity, and the way quantum states evolve can be described asU|ψ1⟩=|ψ2⟩,{\displaystyle U|\psi _{1}\rangle =|\psi _{2}\rangle ,}just as in the above section.
That is, a quantum gate is how a quantum system that is not observed evolves over some specific time, or equivalently, a gate is the unitarytime evolutionoperatorU{\displaystyle U}acting on a quantum state for a specific duration.
There exists anuncountably infinitenumber of gates. Some of them have been named by various authors,[1][2][4][5][7][8][9]and below follow some of those most often used in the literature.
The identity gate is the identity matrix, usually written as I, and is defined for a single qubit as I = [[1, 0], [0, 1]], where I is basis independent and does not modify the quantum state. The identity gate is most useful when describing mathematically the result of various gate operations, or when discussing multi-qubit circuits.
The Pauli gates(X,Y,Z){\displaystyle (X,Y,Z)}are the threePauli matrices(σx,σy,σz){\displaystyle (\sigma _{x},\sigma _{y},\sigma _{z})}and act on a single qubit. The PauliX,YandZequate, respectively, to a rotation around thex,yandzaxes of theBloch spherebyπ{\displaystyle \pi }radians.[b]
The Pauli-Xgate is the quantum equivalent of theNOT gatefor classical computers with respect to the standard basis|0⟩{\displaystyle |0\rangle },|1⟩{\displaystyle |1\rangle },which distinguishes thezaxis on theBloch sphere. It is sometimes called a bit-flip as it maps|0⟩{\displaystyle |0\rangle }to|1⟩{\displaystyle |1\rangle }and|1⟩{\displaystyle |1\rangle }to|0⟩{\displaystyle |0\rangle }. Similarly, the Pauli-Ymaps|0⟩{\displaystyle |0\rangle }toi|1⟩{\displaystyle i|1\rangle }and|1⟩{\displaystyle |1\rangle }to−i|0⟩{\displaystyle -i|0\rangle }. PauliZleaves the basis state|0⟩{\displaystyle |0\rangle }unchanged and maps|1⟩{\displaystyle |1\rangle }to−|1⟩{\displaystyle -|1\rangle }.Due to this nature, PauliZis sometimes called phase-flip.
These matrices are usually represented as X = [[0, 1], [1, 0]], Y = [[0, −i], [i, 0]], and Z = [[1, 0], [0, −1]].
The Pauli matrices areinvolutory, meaning that the square of a Pauli matrix is theidentity matrix.
The Pauli matrices alsoanti-commute, for exampleZX=iY=−XZ.{\displaystyle ZX=iY=-XZ.}
Thematrix exponentialof a Pauli matrixσj{\displaystyle \sigma _{j}}is arotation operator, often written ase−iσjθ/2.{\displaystyle e^{-i\sigma _{j}\theta /2}.}
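These algebraic properties are easy to verify numerically. The sketch below (illustrative NumPy, not part of the article) checks involution and anti-commutation, and uses the involution property to expand the rotation operator exp(−iσθ/2) as cos(θ/2)I − i sin(θ/2)σ:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Involutory: the square of each Pauli matrix is the identity.
for P in (X, Y, Z):
    assert np.allclose(P @ P, I)

# Anti-commutation: ZX = iY = -XZ.
assert np.allclose(Z @ X, 1j * Y)
assert np.allclose(Z @ X, -(X @ Z))

# Because the Paulis are involutory, exp(-i*sigma*theta/2) expands to
# cos(theta/2) I - i sin(theta/2) sigma; a full 2*pi rotation gives -I.
def rotation(sigma, theta):
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * sigma

assert np.allclose(rotation(X, 2 * np.pi), -I)
```

The −I result for a 2π rotation is the well-known sign flip of spin-1/2 systems; only a 4π rotation returns to +I.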
Controlled gates act on 2 or more qubits, where one or more qubits act as a control for some operation.[2] For example, the controlled NOT gate (or CNOT or CX) acts on 2 qubits, and performs the NOT operation on the second qubit only when the first qubit is |1⟩, and otherwise leaves it unchanged. With respect to the basis |00⟩, |01⟩, |10⟩, |11⟩, it is represented by the Hermitian unitary matrix: CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]].
The CNOT (or controlled Pauli-X) gate can be described as the gate that maps the basis states|a,b⟩↦|a,a⊕b⟩{\displaystyle |a,b\rangle \mapsto |a,a\oplus b\rangle }, where⊕{\displaystyle \oplus }isXOR.
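The |a,b⟩ ↦ |a, a⊕b⟩ action can be checked against the matrix directly; the following NumPy sketch (an illustration, not part of the article) enumerates all four basis states:

```python
import numpy as np

# CNOT in the basis |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def basis(a, b):
    """Two-qubit computational basis ket |a,b> as a length-4 vector."""
    v = np.zeros(4, dtype=complex)
    v[2 * a + b] = 1
    return v

# CNOT maps |a,b> to |a, a XOR b>: the target flips iff the control is 1.
for a in (0, 1):
    for b in (0, 1):
        assert np.allclose(CNOT @ basis(a, b), basis(a, a ^ b))
```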
The CNOT can be expressed in the Pauli basis as: CNOT = e^{i(π/4)(I−Z₁)(I−X₂)}.
Being a Hermitian unitary operator, CNOThas the propertythateiθU=(cosθ)I+(isinθ)U{\displaystyle e^{i\theta U}=(\cos \theta )I+(i\sin \theta )U}andU=eiπ2(I−U)=e−iπ2(I−U){\displaystyle U=e^{i{\frac {\pi }{2}}(I-U)}=e^{-i{\frac {\pi }{2}}(I-U)}}, and isinvolutory.
More generally, if U is a gate that operates on a single qubit with matrix representation U = [[u₀₀, u₀₁], [u₁₀, u₁₁]],
then the controlled-U gate is a gate that operates on two qubits in such a way that the first qubit serves as a control. It maps the basis states as follows: |00⟩ ↦ |00⟩, |01⟩ ↦ |01⟩, |10⟩ ↦ |1⟩ ⊗ U|0⟩, |11⟩ ↦ |1⟩ ⊗ U|1⟩.
The matrix representing the controlled U is CU = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, u₀₀, u₀₁], [0, 0, u₁₀, u₁₁]].
WhenUis one of the Pauli operators,X,Y,Z, the respective terms "controlled-X", "controlled-Y", or "controlled-Z" are sometimes used.[4]: 177–185Sometimes this is shortened to just CX, CYand CZ.
In general, any single qubitunitary gatecan be expressed asU=eiH{\displaystyle U=e^{iH}}, whereHis aHermitian matrix, and then the controlledUisCU=ei12(I−Z1)H2.{\displaystyle CU=e^{i{\frac {1}{2}}(I-Z_{1})H_{2}}.}
Control can be extended to gates with arbitrary number of qubits[2]and functions in programming languages.[10]Functions can be conditioned on superposition states.[11][12]
Gates can also be controlled by classical logic. A quantum computer is controlled by aclassical computer, and behaves like acoprocessorthat receives instructions from the classical computer about what gates to execute on which qubits.[13]: 42–43[14]Classical control is simply the inclusion, or omission, of gates in the instruction sequence for the quantum computer.[4]: 26–28[1]: 87–88
The phase shift is a family of single-qubit gates that map the basis states |0⟩ ↦ |0⟩ and |1⟩ ↦ e^{iφ}|1⟩. The probability of measuring a |0⟩ or |1⟩ is unchanged after applying this gate; however, it modifies the phase of the quantum state. This is equivalent to tracing a horizontal circle (a line of constant latitude), or a rotation about the z-axis on the Bloch sphere by φ radians. The phase shift gate is represented by the matrix: P(φ) = [[1, 0], [0, e^{iφ}]],
whereφ{\displaystyle \varphi }is thephase shiftwith theperiod2π. Some common examples are theTgate whereφ=π4{\textstyle \varphi ={\frac {\pi }{4}}}(historically known as theπ/8{\displaystyle \pi /8}gate), the phase gate (also known as the S gate, written asS, thoughSis sometimes used for SWAP gates) whereφ=π2{\textstyle \varphi ={\frac {\pi }{2}}}and thePauli-Zgatewhereφ=π.{\displaystyle \varphi =\pi .}
The phase shift gates are related to each other as follows: T² = S, S² = Z, and hence T⁴ = Z.
Note that the phase gate P(φ) is not Hermitian (except when φ = nπ, n ∈ ℤ). These gates are different from their Hermitian conjugates: P†(φ) = P(−φ). The two adjoint (or conjugate transpose) gates S† and T† are sometimes included in instruction sets.[15][16]
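The relations between the phase shift gates, and the fact that the adjoint of P(φ) is P(−φ), can be verified numerically. The NumPy sketch below is illustrative, not part of the article:

```python
import numpy as np

def P(phi):
    """Phase shift gate: |0> -> |0>, |1> -> e^{i phi}|1>."""
    return np.array([[1, 0], [0, np.exp(1j * phi)]])

T = P(np.pi / 4)   # the T (pi/8) gate
S = P(np.pi / 2)   # the S (phase) gate
Z = P(np.pi)       # the Pauli-Z gate

# T^2 = S and S^2 = Z, so T^4 = Z.
assert np.allclose(T @ T, S)
assert np.allclose(S @ S, Z)

# P(phi) is not Hermitian in general; its conjugate transpose is P(-phi).
phi = 0.7
assert np.allclose(P(phi).conj().T, P(-phi))
```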
The Hadamard or Walsh-Hadamard gate, named after Jacques Hadamard (French: [adamaʁ]) and Joseph L. Walsh, acts on a single qubit. It maps the basis states |0⟩ ↦ (|0⟩ + |1⟩)/√2 and |1⟩ ↦ (|0⟩ − |1⟩)/√2 (it creates an equal superposition state if given a computational basis state). The two states (|0⟩ + |1⟩)/√2 and (|0⟩ − |1⟩)/√2 are sometimes written |+⟩ and |−⟩ respectively. The Hadamard gate performs a rotation of π about the axis (x̂ + ẑ)/√2 on the Bloch sphere, and is therefore involutory. It is represented by the Hadamard matrix: H = (1/√2)[[1, 1], [1, −1]].
If theHermitian(soH†=H−1=H{\displaystyle H^{\dagger }=H^{-1}=H}) Hadamard gate is used to perform achange of basis, it flipsx^{\displaystyle {\hat {x}}}andz^{\displaystyle {\hat {z}}}. For example,HZH=X{\displaystyle HZH=X}andHXH=Z=S.{\displaystyle H{\sqrt {X}}\;H={\sqrt {Z}}=S.}
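The change-of-basis identities HZH = X and HXH = Z follow directly from the matrices and can be checked numerically; the NumPy sketch below is illustrative:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Involutory: H is its own inverse (and Hermitian).
assert np.allclose(H @ H, np.eye(2))

# Conjugating by H swaps the x and z axes: HZH = X and HXH = Z.
assert np.allclose(H @ Z @ H, X)
assert np.allclose(H @ X @ H, Z)
```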
The swap gate swaps two qubits. With respect to the basis |00⟩, |01⟩, |10⟩, |11⟩, it is represented by the matrix: SWAP = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]].
The swap gate can be decomposed into summation form: SWAP = (I ⊗ I + X ⊗ X + Y ⊗ Y + Z ⊗ Z)/2.
The Toffoli gate, named afterTommaso Toffoliand also called the CCNOT gate orDeutsch gateD(π/2){\displaystyle D(\pi /2)}, is a 3-bit gate that isuniversalfor classical computation but not for quantum computation. The quantum Toffoli gate is the same gate, defined for 3 qubits. If we limit ourselves to only accepting input qubits that are|0⟩{\displaystyle |0\rangle }and|1⟩{\displaystyle |1\rangle },then if the first two bits are in the state|1⟩{\displaystyle |1\rangle }it applies a Pauli-X(or NOT) on the third bit, else it does nothing. It is an example of a CC-U (controlled-controlled Unitary) gate. Since it is the quantum analog of a classical gate, it is completely specified by its truth table. The Toffoli gate is universal when combined with the single qubit Hadamard gate.[17]
[1 0 0 0 0 0 0 0]
[0 1 0 0 0 0 0 0]
[0 0 1 0 0 0 0 0]
[0 0 0 1 0 0 0 0]
[0 0 0 0 1 0 0 0]
[0 0 0 0 0 1 0 0]
[0 0 0 0 0 0 0 1]
[0 0 0 0 0 0 1 0]
The Toffoli gate is related to the classicalAND(∧{\displaystyle \land }) andXOR(⊕{\displaystyle \oplus }) operations as it performs the mapping|a,b,c⟩↦|a,b,c⊕(a∧b)⟩{\displaystyle |a,b,c\rangle \mapsto |a,b,c\oplus (a\land b)\rangle }on states in the computational basis.
The Toffoli gate can be expressed using Pauli matrices as Toff = e^{i(π/8)(I−Z₁)(I−Z₂)(I−X₃)}.
A set ofuniversal quantum gatesis any set of gates to which any operation possible on a quantum computer can be reduced, that is, any other unitary operation can be expressed as a finite sequence of gates from the set. Technically, this is impossible with anything less than anuncountableset of gates since the number of possible quantum gates is uncountable, whereas the number of finite sequences from a finite set iscountable. To solve this problem, we only require that any quantum operation can be approximated by a sequence of gates from this finite set. Moreover, forunitarieson a constant number of qubits, theSolovay–Kitaev theoremguarantees that this can be done efficiently. Checking if a set of quantum gates is universal can be done usinggroup theorymethods[18]and/or relation to (approximate)unitary t-designs[19]
Some universal quantum gate sets include:
A single-gate set of universal quantum gates can also be formulated using the parametrized three-qubit Deutsch gate D(θ),[21] named after physicist David Deutsch. It is a general case of CC-U, or controlled-controlled-unitary gate, and is defined as the gate that leaves the first two (control) qubits unchanged and, when both controls are |1⟩, maps the target qubit |c⟩ to i cos(θ)|c⟩ + sin(θ)|1−c⟩; otherwise it acts as the identity.
Unfortunately, a working Deutsch gate has remained out of reach, due to the lack of a protocol for realizing it. There are, however, some proposals to realize a Deutsch gate with dipole–dipole interaction in neutral atoms.[22]
A universal logic gate for reversible classical computing, the Toffoli gate, is reducible to the Deutsch gateD(π/2){\displaystyle D(\pi /2)}, thus showing that all reversible classical logic operations can be performed on a universal quantum computer.
There also exist single two-qubit gates sufficient for universality. In 1996, Adriano Barenco showed that the Deutsch gate can be decomposed using only a single two-qubit gate (Barenco gate), but it is hard to realize experimentally.[1]: 93This feature is exclusive to quantum circuits, as there is no classical two-bit gate that is both reversible and universal.[1]: 93Universal two-qubit gates could be implemented to improve classical reversible circuits in fast low-power microprocessors.[1]: 93
Assume that we have two gates A and B that both act on n qubits. When B is put after A in a series circuit, the effect of the two gates can be described as a single gate C: C = B ⋅ A,
where⋅{\displaystyle \cdot }ismatrix multiplication. The resulting gateCwill have the same dimensions asAandB. The order in which the gates would appear in a circuit diagram is reversed when multiplying them together.[4]: 17–18,22–23,62–64[5]: 147–169
For example, putting the Pauli X gate after the Pauli Y gate, both of which act on a single qubit, can be described as a single combined gate C: C = X ⋅ Y = iZ.
The product symbol (⋅{\displaystyle \cdot }) is often omitted.
Allrealexponents ofunitary matricesare also unitary matrices, and all quantum gates are unitary matrices.
Positive integer exponents are equivalent to sequences of serially wired gates (e.g. X³ = X ⋅ X ⋅ X), and real exponents are a generalization of the series circuit. For example, X^π and √X = X^{1/2} are both valid quantum gates.
U0=I{\displaystyle U^{0}=I}for any unitary matrixU{\displaystyle U}. Theidentity matrix(I{\displaystyle I}) behaves like aNOP[23][24]and can be represented as bare wire in quantum circuits, or not shown at all.
All gates are unitary matrices, so thatU†U=UU†=I{\displaystyle U^{\dagger }U=UU^{\dagger }=I}andU†=U−1{\displaystyle U^{\dagger }=U^{-1}},where†{\displaystyle \dagger }is theconjugate transpose. This means that negative exponents of gates areunitary inversesof their positively exponentiated counterparts:U−n=(Un)†{\displaystyle U^{-n}=(U^{n})^{\dagger }}.For example, some negative exponents of thephase shift gatesareT−1=T†{\displaystyle T^{-1}=T^{\dagger }}andT−2=(T2)†=S†{\displaystyle T^{-2}=(T^{2})^{\dagger }=S^{\dagger }}.
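The inverse relations, and the square root of X mentioned earlier, can be verified numerically. The NumPy sketch below is illustrative (the closed-form √X matrix is the standard principal square root):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
S = np.array([[1, 0], [0, 1j]])

# The principal square root of X is itself a valid quantum gate.
sqrtX = 0.5 * np.array([[1 + 1j, 1 - 1j],
                        [1 - 1j, 1 + 1j]])
assert np.allclose(sqrtX @ sqrtX, X)

# Negative exponents are unitary inverses: T^-1 = T† and T^-2 = S†.
assert np.allclose(np.linalg.inv(T), T.conj().T)
assert np.allclose(T.conj().T @ T.conj().T, S.conj().T)
```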
Note that for aHermitian matrixH†=H,{\displaystyle H^{\dagger }=H,}and because of unitarity,HH†=I,{\displaystyle HH^{\dagger }=I,}soH2=I{\displaystyle H^{2}=I}for all Hermitian gates. They areinvolutory. Examples of Hermitian gates are thePauli gates,Hadamard,CNOT,SWAPandToffoli. Each Hermitian unitary matrixH{\displaystyle H}has the propertythateiθH=(cosθ)I+(isinθ)H{\displaystyle e^{i\theta H}=(\cos \theta )I+(i\sin \theta )H}whereH=eiπ2(I−H)=e−iπ2(I−H).{\displaystyle H=e^{i{\frac {\pi }{2}}(I-H)}=e^{-i{\frac {\pi }{2}}(I-H)}.}
The exponent of a gate is a multiple of the duration of time that thetime evolution operatoris applied to a quantum state. E.g. in aspin qubit quantum computertheSWAP{\displaystyle {\sqrt {\mathrm {SWAP} }}}gate could be realized viaexchange interactionon thespinof twoelectronsfor half the duration of a full exchange interaction.[25]
Thetensor product(orKronecker product) of two quantum gates is the gate that is equal to the two gates in parallel.[4]: 71–75[5]: 148
If we, as in the picture, combine the Pauli-Ygate with the Pauli-Xgate in parallel, then this can be written as:
Both the Pauli-X and the Pauli-Y gate act on a single qubit. The resulting gate C acts on two qubits.
Sometimes the tensor product symbol is omitted, and indexes are used for the operators instead.[25]
The gate H₂ = H ⊗ H is the Hadamard gate (H) applied in parallel on 2 qubits. It can be written as: H₂ = (1/2)[[1, 1, 1, 1], [1, −1, 1, −1], [1, 1, −1, −1], [1, −1, −1, 1]].
This "two-qubit parallel Hadamard gate" will, when applied to, for example, the two-qubit zero-vector(|00⟩{\displaystyle |00\rangle }),create a quantum state that has equal probability of being observed in any of its four possible outcomes;|00⟩{\displaystyle |00\rangle },|01⟩{\displaystyle |01\rangle },|10⟩{\displaystyle |10\rangle },and|11⟩{\displaystyle |11\rangle }.We can write this operation as:
Here the amplitude for each measurable state is 1⁄2. The probability of observing any given state is the square of the absolute value of that state's amplitude, which in the above example means that there is a one-in-four chance of observing each of the four outcomes. See measurement for details.
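Parallel gates correspond to the Kronecker product of their matrices, so the two-qubit parallel Hadamard and its action on |00⟩ can be sketched as follows (illustrative NumPy, not part of the article):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H, H)          # two-qubit parallel Hadamard, H (x) H

ket00 = np.array([1, 0, 0, 0], dtype=complex)
state = H2 @ ket00

# Every amplitude is 1/2, so each of the four outcomes has
# probability |1/2|^2 = 1/4.
assert np.allclose(state, [0.5, 0.5, 0.5, 0.5])
probs = np.abs(state) ** 2
assert np.allclose(probs, [0.25, 0.25, 0.25, 0.25])
```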
H2{\displaystyle H_{2}}performs theHadamard transformon two qubits. Similarly the gateH⊗H⊗⋯⊗H⏟ntimes=⨂i=0n−1H=H⊗n=Hn{\displaystyle \underbrace {H\otimes H\otimes \dots \otimes H} _{n{\text{ times}}}=\bigotimes _{i=0}^{n-1}H=H^{\otimes n}=H_{n}}performs a Hadamard transform on aregisterofn{\displaystyle n}qubits.
When applied to a register of n qubits all initialized to |0⟩, the Hadamard transform puts the quantum register into a superposition with equal probability of being measured in any of its 2ⁿ possible states: H^⊗n |0⟩^⊗n = (1/√(2ⁿ)) Σ_{x=0}^{2ⁿ−1} |x⟩.
This state is auniform superpositionand it is generated as the first step in some search algorithms, for example inamplitude amplificationandphase estimation.
Measuringthis state results in arandom numberbetween|0⟩{\displaystyle |0\rangle }and|2n−1⟩{\displaystyle |2^{n}-1\rangle }.[e]How random the number is depends on thefidelityof the logic gates. If not measured, it is a quantum state with equalprobability amplitude12n{\displaystyle {\frac {1}{\sqrt {2^{n}}}}}for each of its possible states.
The Hadamard transform acts on a register|ψ⟩{\displaystyle |\psi \rangle }withn{\displaystyle n}qubits such that|ψ⟩=⨂i=0n−1|ψi⟩{\textstyle |\psi \rangle =\bigotimes _{i=0}^{n-1}|\psi _{i}\rangle }as follows:
If two or more qubits are viewed as a single quantum state, this combined state is equal to the tensor product of the constituent qubits. Any state that can be written as a tensor product of states of the constituent subsystems is called a separable state. On the other hand, an entangled state is any state that cannot be tensor-factorized; in other words, an entangled state cannot be written as a tensor product of its constituent qubits' states. Special care must be taken when applying gates to constituent qubits that make up entangled states.
If we have a set of N qubits that are entangled and wish to apply a quantum gate on M < N qubits in the set, we will have to extend the gate to take N qubits. This can be done by combining the gate with an identity matrix such that their tensor product becomes a gate that acts on N qubits. The identity matrix (I) is a representation of the gate that maps every state to itself (i.e., does nothing at all). In a circuit diagram the identity gate or matrix will often appear as just a bare wire.
For example, the Hadamard gate (H) acts on a single qubit, but if we feed it the first of the two qubits that constitute the entangled Bell state (|00⟩ + |11⟩)/√2, we cannot write that operation easily. We need to extend the Hadamard gate H with the identity gate I so that we can act on quantum states that span two qubits: K = H ⊗ I.
The gate K can now be applied to any two-qubit state, entangled or otherwise. The gate K will leave the second qubit untouched and apply the Hadamard transform to the first qubit. If applied to the Bell state in our example, we may write that as: K (|00⟩ + |11⟩)/√2 = (|00⟩ + |01⟩ + |10⟩ − |11⟩)/2.
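This gate extension is again just a Kronecker product, here of H with the identity, and its action on the Bell state can be checked numerically (illustrative NumPy sketch):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

# Extend the single-qubit Hadamard to act on the FIRST of two qubits.
K = np.kron(H, I)

# The entangled Bell state (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# K applies H to qubit one and leaves qubit two untouched; the
# result is (|00> + |01> + |10> - |11>)/2.
assert np.allclose(K @ bell, np.array([1, 1, 1, -1]) / 2)
```

Note the qubit ordering convention: `np.kron(H, I)` acts with H on the first qubit; `np.kron(I, H)` would act on the second.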
Thetime complexity for multiplyingtwon×n{\displaystyle n\times n}-matrices is at leastΩ(n2logn){\displaystyle \Omega (n^{2}\log n)},[26]if using a classical machine. Because the size of a gate that operates onq{\displaystyle q}qubits is2q×2q{\displaystyle 2^{q}\times 2^{q}}it means that the time for simulating a step in a quantum circuit (by means of multiplying the gates) that operates on generic entangled states isΩ(2q2log(2q)){\displaystyle \Omega ({2^{q}}^{2}\log({2^{q}}))}.For this reason it is believed to beintractableto simulate large entangled quantum systems using classical computers. Subsets of the gates, such as theClifford gates, or the trivial case of circuits that only implement classical Boolean functions (e.g. combinations ofX,CNOT,Toffoli), can however be efficiently simulated on classical computers.
The state vector of aquantum registerwithn{\displaystyle n}qubits is2n{\displaystyle 2^{n}}complex entries. Storing theprobability amplitudesas a list offloating pointvalues is not tractable for largen{\displaystyle n}.
Because all quantum logical gates arereversible, any composition of multiple gates is also reversible. All products and tensor products (i.e.seriesandparallelcombinations) ofunitary matricesare also unitary matrices. This means that it is possible to construct an inverse of all algorithms and functions, as long as they contain only gates.
Initialization, measurement,I/Oand spontaneousdecoherenceareside effectsin quantum computers. Gates however arepurely functionalandbijective.
IfU{\displaystyle U}is aunitary matrix, thenU†U=UU†=I{\displaystyle U^{\dagger }U=UU^{\dagger }=I}andU†=U−1{\displaystyle U^{\dagger }=U^{-1}}.The dagger (†{\displaystyle \dagger }) denotes theconjugate transpose. It is also called theHermitian adjoint.
If a functionF{\displaystyle F}is a product ofm{\displaystyle m}gates,F=A1⋅A2⋅⋯⋅Am{\displaystyle F=A_{1}\cdot A_{2}\cdot \dots \cdot A_{m}},the unitary inverse of the functionF†{\displaystyle F^{\dagger }}can be constructed:
Because (UV)† = V†U†, we have, after repeated application on itself, F† = (A₁ ⋅ A₂ ⋅ ⋯ ⋅ A_m)† = A_m† ⋅ ⋯ ⋅ A₂† ⋅ A₁†.
Similarly if the functionG{\displaystyle G}consists of two gatesA{\displaystyle A}andB{\displaystyle B}in parallel, thenG=A⊗B{\displaystyle G=A\otimes B}andG†=(A⊗B)†=A†⊗B†{\displaystyle G^{\dagger }=(A\otimes B)^{\dagger }=A^{\dagger }\otimes B^{\dagger }}.
Gates that are their own unitary inverses are calledHermitianorself-adjoint operators. Some elementary gates such as theHadamard(H) and thePauli gates(I,X,Y,Z) are Hermitian operators, while others like thephase shift(S,T,P,CPhase) gates generally are not.
For example, an algorithm for addition can be used for subtraction, if it is being "run in reverse", as its unitary inverse. Theinverse quantum Fourier transformis the unitary inverse. Unitary inverses can also be used foruncomputation. Programming languages for quantum computers, such asMicrosoft'sQ#,[10]Bernhard Ömer'sQCL,[13]: 61andIBM'sQiskit,[27]contain function inversion as programming concepts.
Measurement (sometimes calledobservation) is irreversible and therefore not a quantum gate, because it assigns the observed quantum state to a single value. Measurement takes a quantum state and projects it to one of thebasis vectors, with a likelihood equal to the square of the vector's length (in the2-norm[4]: 66[5]: 56, 65) along that basis vector.[1]: 15–17[28][29][30]This is known as theBorn ruleand appears[e]as astochasticnon-reversible operation as it probabilistically sets the quantum state equal to the basis vector that represents the measured state. At the instant of measurement, the state is said to "collapse" to the definite single value that was measured. Why and how, or even if[31][32]the quantum state collapses at measurement, is called themeasurement problem.
The probability of measuring a value withprobability amplitudeϕ{\displaystyle \phi }is1≥|ϕ|2≥0{\displaystyle 1\geq |\phi |^{2}\geq 0},where|⋅|{\displaystyle |\cdot |}is themodulus.
Measuring a single qubit, whose quantum state is represented by the vectora|0⟩+b|1⟩=[ab]{\displaystyle a|0\rangle +b|1\rangle ={\begin{bmatrix}a\\b\end{bmatrix}}},will result in|0⟩{\displaystyle |0\rangle }with probability|a|2{\displaystyle |a|^{2}},and in|1⟩{\displaystyle |1\rangle }with probability|b|2{\displaystyle |b|^{2}}.
For example, measuring a qubit with the quantum state|0⟩−i|1⟩2=12[1−i]{\displaystyle {\frac {|0\rangle -i|1\rangle }{\sqrt {2}}}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\-i\end{bmatrix}}}will yield with equal probability either|0⟩{\displaystyle |0\rangle }or|1⟩{\displaystyle |1\rangle }.
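The Born rule for this example can be sketched numerically, including a simulation of repeated measurements as sampling from the outcome distribution (illustrative NumPy, not part of the article):

```python
import numpy as np

# Qubit state (|0> - i|1>)/sqrt(2).
psi = np.array([1, -1j]) / np.sqrt(2)

# Born rule: outcome probabilities are squared moduli of the amplitudes.
probs = np.abs(psi) ** 2
assert np.allclose(probs, [0.5, 0.5])
assert np.isclose(probs.sum(), 1.0)

# Repeated measurement behaves like sampling from this distribution.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=10_000, p=probs)
```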
A quantum state|Ψ⟩{\displaystyle |\Psi \rangle }that spansnqubits can be written as a vector in2n{\displaystyle 2^{n}}complexdimensions:|Ψ⟩∈C2n{\displaystyle |\Psi \rangle \in \mathbb {C} ^{2^{n}}}.This is because the tensor product ofnqubits is a vector in2n{\displaystyle 2^{n}}dimensions. This way, aregisterofnqubits can be measured to2n{\displaystyle 2^{n}}distinct states, similar to how a register ofnclassicalbitscan hold2n{\displaystyle 2^{n}}distinct states. Unlike with the bits of classical computers, quantum states can have non-zero probability amplitudes in multiple measurable values simultaneously. This is calledsuperposition.
The sum of all probabilities for all outcomes must always be equal to1.[f]Another way to say this is that thePythagorean theoremgeneralized toC2n{\displaystyle \mathbb {C} ^{2^{n}}}has that all quantum states|Ψ⟩{\displaystyle |\Psi \rangle }withnqubits must satisfy1=∑x=02n−1|ax|2,{\textstyle 1=\sum _{x=0}^{2^{n}-1}|a_{x}|^{2},}[g]whereax{\displaystyle a_{x}}is the probability amplitude for measurable state|x⟩{\displaystyle |x\rangle }.A geometric interpretation of this is that the possiblevalue-spaceof a quantum state|Ψ⟩{\displaystyle |\Psi \rangle }withnqubits is the surface of theunit sphereinC2n{\displaystyle \mathbb {C} ^{2^{n}}}and that theunitary transforms(i.e. quantum logic gates) applied to it are rotations on the sphere. The rotations that the gates perform form thesymmetry groupU(2n). Measurement is then a probabilistic projection of the points at the surface of thiscomplexsphere onto thebasis vectorsthat span the space (and labels the outcomes).
In many cases the space is represented as aHilbert spaceH{\displaystyle {\mathcal {H}}}rather than some specific2n{\displaystyle 2^{n}}-dimensionalcomplex space. The number of dimensions (defined by the basis vectors, and thus also the possible outcomes from measurement) is then often implied by the operands, for example as the requiredstate spacefor solving aproblem. InGrover's algorithm,Grovernamed this generic basis vector set"the database".
The selection of basis vectors against which to measure a quantum state will influence the outcome of the measurement.[1]: 30–35[4]: 22, 84–85, 185–188[33]Seechange of basisandVon Neumann entropyfor details. In this article, we always use thecomputationalbasis, which means that we have labeled the2n{\displaystyle 2^{n}}basis vectors of ann-qubitregister|0⟩,|1⟩,|2⟩,⋯,|2n−1⟩{\displaystyle |0\rangle ,|1\rangle ,|2\rangle ,\cdots ,|2^{n}-1\rangle },or use thebinary representation|010⟩=|0…002⟩,|110⟩=|0…012⟩,|210⟩=|0…102⟩,⋯,|2n−1⟩=|111…12⟩{\displaystyle |0_{10}\rangle =|0\dots 00_{2}\rangle ,|1_{10}\rangle =|0\dots 01_{2}\rangle ,|2_{10}\rangle =|0\dots 10_{2}\rangle ,\cdots ,|2^{n}-1\rangle =|111\dots 1_{2}\rangle }.
Inquantum mechanics, the basis vectors constitute anorthonormal basis.
An example of usage of an alternative measurement basis is in theBB84cipher.
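As a small numeric illustration (hand-picked example amplitudes, not taken from the article), the normalization condition and the Born-rule probabilities in the computational basis can be checked directly:

```python
import math

# Hypothetical 2-qubit state: amplitudes a_x indexed by the computational
# basis states |0>, |1>, |2>, |3> (i.e. |00>, |01>, |10>, |11>).
state = [1 / 2, 1j / 2, -1 / 2, -1j / 2]

# Born rule: the probability of measuring |x> is |a_x|^2.
probs = [abs(a) ** 2 for a in state]

# A valid state lies on the unit sphere: the probabilities sum to 1.
assert math.isclose(sum(probs), 1.0)
```

Here each outcome is equally likely, but any amplitude vector of unit norm would pass the same check.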
If twoquantum states(i.e.qubits, orregisters) areentangled(meaning that their combined state cannot be expressed as atensor product), measurement of one register affects or reveals the state of the other register by partially or entirely collapsing its state too. This effect can be used for computation, and is used in many algorithms.
The Hadamard-CNOT combination acts on the zero-state as follows:

CNOT ⋅ (H ⊗ I)|00⟩ = CNOT ⋅ (|00⟩ + |10⟩)/√2 = (|00⟩ + |11⟩)/√2.
This resulting state is the Bell state |00⟩+|11⟩2=12[1001]{\displaystyle {\frac {|00\rangle +|11\rangle }{\sqrt {2}}}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\0\\0\\1\end{bmatrix}}}. It cannot be described as a tensor product of two qubits. There is no solution for

(x|0⟩ + y|1⟩) ⊗ (w|0⟩ + z|1⟩) = xw|00⟩ + xz|01⟩ + yw|10⟩ + yz|11⟩ = (|00⟩ + |11⟩)/√2,

because for example w needs to be both non-zero and zero in the case of xw and yw.
The quantum state spans the two qubits. This is called entanglement. Measuring one of the two qubits that make up this Bell state will result in the other qubit logically having the same value: either the pair will be found in the state |00⟩{\displaystyle |00\rangle }, or in the state |11⟩{\displaystyle |11\rangle }. If we measure one of the qubits to be, for example, |1⟩{\displaystyle |1\rangle }, then the other qubit must also be |1⟩{\displaystyle |1\rangle }, because their combined state became |11⟩{\displaystyle |11\rangle }. Measurement of one of the qubits collapses the entire quantum state, which spans the two qubits.
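The Hadamard-then-CNOT construction of the Bell state can be traced with a four-amplitude state vector (a minimal Python sketch, applying each gate as an explicit amplitude update rather than a matrix product):

```python
import math

s = 1 / math.sqrt(2)

# Start in |00>: amplitudes over |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]

# Hadamard on the first qubit: mixes the |0x> and |1x> amplitude pairs.
state = [s * (state[0] + state[2]), s * (state[1] + state[3]),
         s * (state[0] - state[2]), s * (state[1] - state[3])]

# CNOT with the first qubit as control: swaps the |10> and |11> amplitudes.
state[2], state[3] = state[3], state[2]

# The result is the Bell state (|00> + |11>)/sqrt(2).
assert state == [s, 0.0, 0.0, s]
```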
TheGHZ stateis a similar entangled quantum state that spans three or more qubits.
This type of value-assignment occurs instantaneously over any distance and this has, as of 2018, been experimentally verified by QUESS for distances of up to 1200 kilometers.[34][35][36] That the phenomenon appears to happen instantaneously, as opposed to taking the time it would take to traverse the distance separating the qubits at the speed of light, is called the EPR paradox, and it is an open question in physics how to resolve this. Originally it was resolved by giving up the assumption of local realism, but other interpretations have also emerged. For more information see the Bell test experiments. The no-communication theorem proves that this phenomenon cannot be used for faster-than-light communication of classical information.
Take a register A with n qubits, all initialized to |0⟩{\displaystyle |0\rangle }, and feed it through a parallel Hadamard gate H⊗n{\textstyle H^{\otimes n}}. Register A will then enter the state 12n∑k=02n−1|k⟩{\textstyle {\frac {1}{\sqrt {2^{n}}}}\sum _{k=0}^{2^{n}-1}|k\rangle }, which, when measured, has equal probability of being found in any of its 2n{\displaystyle 2^{n}} possible states, |0⟩{\displaystyle |0\rangle } to |2n−1⟩{\displaystyle |2^{n}-1\rangle }. Take a second register B, also with n qubits, initialized to |0⟩{\displaystyle |0\rangle }, and pairwise CNOT its qubits with the qubits in register A, such that for each p the qubits Ap{\displaystyle A_{p}} and Bp{\displaystyle B_{p}} form the state |ApBp⟩=|00⟩+|11⟩2{\displaystyle |A_{p}B_{p}\rangle ={\frac {|00\rangle +|11\rangle }{\sqrt {2}}}}.
If we now measure the qubits in register A, then register B will be found to contain the same value as A. If, however, we instead apply a quantum logic gate F on A and then measure, then |A⟩=F|B⟩⟺F†|A⟩=|B⟩{\displaystyle |A\rangle =F|B\rangle \iff F^{\dagger }|A\rangle =|B\rangle }, where F†{\displaystyle F^{\dagger }} is the unitary inverse of F.
Because of howunitary inverses of gatesact,F†|A⟩=F−1(|A⟩)=|B⟩{\displaystyle F^{\dagger }|A\rangle =F^{-1}(|A\rangle )=|B\rangle }.For example, sayF(x)=x+3(mod2n){\displaystyle F(x)=x+3{\pmod {2^{n}}}}, then|B⟩=|A−3(mod2n)⟩{\displaystyle |B\rangle =|A-3{\pmod {2^{n}}}\rangle }.
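This can be illustrated classically by tracking only the basis labels of the entangled pairs (a Python sketch, not a full amplitude simulation; the register width n = 4 and the gate F(x) = x + 3 mod 2^n are the example values from the text):

```python
n = 4  # register width in qubits (illustrative)

# After H^(tensor n) on A and the pairwise CNOTs, the joint state is a
# uniform superposition over the pairs (k, k): A and B always agree.
pairs = {(k, k) for k in range(2 ** n)}

# Apply F(x) = x + 3 (mod 2^n) to register A only.
F = lambda x: (x + 3) % 2 ** n
pairs = {(F(a), b) for a, b in pairs}

# Measuring A = a now forces B = F^{-1}(a) = a - 3 (mod 2^n).
for a, b in pairs:
    assert b == (a - 3) % 2 ** n
```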
The equality will hold no matter in which order measurement is performed (on register A or B), assuming that F has run to completion. Measurement can even be randomly and concurrently interleaved qubit by qubit, since the measurement of one qubit will limit the possible value-space of the other entangled qubits.
Even though the equalities hold, the probabilities for measuring the possible outcomes may change as a result of applying F, as may be the intent in a quantum search algorithm.
This effect of value-sharing via entanglement is used inShor's algorithm,phase estimationand inquantum counting. Using theFourier transformto amplify the probability amplitudes of the solution states for someproblemis a generic method known as "Fourier fishing".[37]
Functions and routines that only use gates can themselves be described as matrices, just like the smaller gates. The matrix that represents a quantum function acting onq{\displaystyle q}qubits has size2q×2q{\displaystyle 2^{q}\times 2^{q}}.For example, a function that acts on a "qubyte" (aregisterof 8 qubits) would be represented by a matrix with28×28=256×256{\displaystyle 2^{8}\times 2^{8}=256\times 256}elements.
Unitary transformations that are not in the set of gates natively available at the quantum computer (the primitive gates) can be synthesised, or approximated, by combining the available primitive gates in a circuit. One way to do this is to factor the matrix that encodes the unitary transformation into a product of tensor products (i.e. series and parallel circuits) of the available primitive gates. The group U(2q) is the symmetry group for the gates that act on q{\displaystyle q} qubits.[2] Factorization is then the problem of finding a path in U(2q) from the generating set of primitive gates. The Solovay–Kitaev theorem shows that, given a sufficient set of primitive gates, there exists an efficient approximation of any gate. For the general case with a large number of qubits, this direct approach to circuit synthesis is intractable.[38][39] This puts a limit on how large functions can be brute-force factorized into primitive quantum gates. Typically, quantum programs are instead built using relatively small and simple quantum functions, similar to normal classical programming.
Because of the gates' unitary nature, all functions must be reversible and always be bijective mappings of input to output. There must always exist a function F−1{\displaystyle F^{-1}} such that F−1(F(|ψ⟩))=|ψ⟩{\displaystyle F^{-1}(F(|\psi \rangle ))=|\psi \rangle }. Functions that are not invertible can be made invertible by adding ancilla qubits to the input or the output, or both. After the function has run to completion, the ancilla qubits can then either be uncomputed or left untouched. Measuring or otherwise collapsing the quantum state of an ancilla qubit (e.g. by re-initializing its value, or by its spontaneous decoherence) that has not been uncomputed may result in errors,[40][41] as its state may be entangled with the qubits that are still being used in computations.
Logically irreversible operations, for example addition modulo2n{\displaystyle 2^{n}}of twon{\displaystyle n}-qubit registersaandb,F(a,b)=a+b(mod2n){\displaystyle F(a,b)=a+b{\pmod {2^{n}}}},[h]can be made logically reversible by adding information to the output, so that the input can be computed from the output (i.e. there exists a functionF−1{\displaystyle F^{-1}}).In our example, this can be done by passing on one of the input registers to the output:F(|a⟩⊗|b⟩)=|a+b(mod2n)⟩⊗|a⟩{\displaystyle F(|a\rangle \otimes |b\rangle )=|a+b{\pmod {2^{n}}}\rangle \otimes |a\rangle }.The output can then be used to compute the input (i.e. given the outputa+b{\displaystyle a+b}anda{\displaystyle a},we can easily find the input;a{\displaystyle a}is given and(a+b)−a=b{\displaystyle (a+b)-a=b})and the function is made bijective.
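The bijectivity of this construction can be checked classically over the basis labels (a Python sketch for an illustrative 3-qubit register width):

```python
n = 3  # illustrative register width

# F(a, b) = (a + b mod 2^n, a): addition made reversible by
# passing one input register through to the output.
def F(a, b):
    return ((a + b) % 2 ** n, a)

# The inverse recovers b as (a + b) - a.
def F_inv(s, a):
    return (a, (s - a) % 2 ** n)

# Bijectivity check: every input pair is recovered from its output.
for a in range(2 ** n):
    for b in range(2 ** n):
        assert F_inv(*F(a, b)) == (a, b)
```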
AllBoolean algebraicexpressions can be encoded as unitary transforms (quantum logic gates), for example by using combinations of thePauli-X,CNOTandToffoligates. These gates arefunctionally completein the Boolean logic domain.
There are many unitary transforms available in the libraries of Q#, QCL, Qiskit, and other quantum programming languages. Many also appear in the literature.[42][43]
For example, inc(|x⟩)=|x+1(mod2xlength)⟩{\displaystyle \mathrm {inc} (|x\rangle )=|x+1{\pmod {2^{x_{\text{length}}}}}\rangle }, where xlength{\displaystyle x_{\text{length}}} is the number of qubits that constitute the register x{\displaystyle x}, is implemented as the following in QCL:[44][13][12]
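As a language-agnostic illustration of what inc computes (a Python sketch with an illustrative 3-qubit register, not the QCL listing itself): on basis states, inc is the cyclic-shift permutation |x⟩ ↦ |x + 1 mod 2^n⟩, and its matrix is unitary because every permutation matrix is.

```python
n = 3  # qubits in the register x (illustrative)
N = 2 ** n

# inc maps basis state |x> to |x + 1 mod 2^n>: a permutation matrix.
P = [[1 if row == (col + 1) % N else 0 for col in range(N)] for row in range(N)]

# A permutation matrix is unitary: P times its transpose is the identity.
I = [[sum(P[i][k] * P[j][k] for k in range(N)) for j in range(N)] for i in range(N)]
assert I == [[1 if i == j else 0 for j in range(N)] for i in range(N)]

# Applying inc to the basis state |5> yields |6>.
col = 5
image = [P[row][col] for row in range(N)]
assert image.index(1) == 6
```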
In QCL, decrement is done by "undoing" increment. The prefix ! is used to instead run the unitary inverse of the function. !inc(x) is the inverse of inc(x) and instead performs the operation inc†|x⟩=inc−1(|x⟩)=|x−1(mod2xlength)⟩{\displaystyle \mathrm {inc} ^{\dagger }|x\rangle =\mathrm {inc} ^{-1}(|x\rangle )=|x-1{\pmod {2^{x_{\text{length}}}}}\rangle }. The cond keyword means that the function can be conditional.[11]
In themodel of computationused in this article (thequantum circuitmodel), a classic computer generates the gate composition for the quantum computer, and the quantum computer behaves as acoprocessorthat receives instructions from the classical computer about which primitive gates to apply to which qubits.[13]: 36–43[14]Measurement of quantum registers results in binary values that the classical computer can use in its computations.Quantum algorithmsoften contain both a classical and a quantum part. UnmeasuredI/O(sending qubits to remote computers without collapsing their quantum states) can be used to createnetworks of quantum computers.Entanglement swappingcan then be used to realizedistributed algorithmswith quantum computers that are not directly connected. Examples of distributed algorithms that only require the use of a handful of quantum logic gates aresuperdense coding, thequantum Byzantine agreementand theBB84cipherkey exchange protocol.
https://en.wikipedia.org/wiki/Quantum_gate#Square_root_of_NOT_gate_(√NOT)
In modular arithmetic computation, Montgomery modular multiplication, more commonly referred to as Montgomery multiplication, is a method for performing fast modular multiplication. It was introduced in 1985 by the American mathematician Peter L. Montgomery.[1][2]
Montgomery modular multiplication relies on a special representation of numbers called Montgomery form. The algorithm uses the Montgomery forms ofaandbto efficiently compute the Montgomery form ofabmodN. The efficiency comes from avoiding expensive division operations. Classical modular multiplication reduces the double-width productabusing division byNand keeping only the remainder. This division requires quotient digit estimation and correction. The Montgomery form, in contrast, depends on a constantR > Nwhich iscoprimetoN, and the only division necessary in Montgomery multiplication is division byR. The constantRcan be chosen so that division byRis easy, significantly improving the speed of the algorithm. In practice,Ris always a power of two, since division by powers of two can be implemented bybit shifting.
The need to convertaandbinto Montgomery form and their product out of Montgomery form means that computing a single product by Montgomery multiplication is slower than the conventional orBarrett reductionalgorithms. However, when performing many multiplications in a row, as inmodular exponentiation, intermediate results can be left in Montgomery form. Then the initial and final conversions become a negligible fraction of the overall computation. Many important cryptosystems such asRSAandDiffie–Hellman key exchangeare based on arithmetic operations modulo a large odd number, and for these cryptosystems, computations using Montgomery multiplication withRa power of two are faster than the available alternatives.[3]
Let N denote a positive integer modulus. The quotient ring Z/NZ consists of residue classes modulo N, that is, its elements are sets of the form

{ a + kN : k ∈ Z },
where a ranges across the integers. Each residue class is a set of integers such that the difference of any two integers in the set is divisible by N (and the residue class is maximal with respect to that property; integers aren't left out of the residue class unless they would violate the divisibility condition). The residue class corresponding to a is denoted a. Equality of residue classes is called congruence and is denoted a = b.
Storing an entire residue class on a computer is impossible because the residue class has infinitely many elements. Instead, residue classes are stored as representatives. Conventionally, these representatives are the integersafor which0 ≤a≤N− 1. Ifais an integer, then the representative ofais writtenamodN. When writing congruences, it is common to identify an integer with the residue class it represents. With this convention, the above equality is writtena≡bmodN.
Arithmetic on residue classes is done by first performing integer arithmetic on their representatives. The output of the integer operation determines a residue class, and the output of the modular operation is determined by computing the residue class's representative. For example, ifN= 17, then the sum of the residue classes7and15is computed by finding the integer sum7 + 15 = 22, then determining22 mod 17, the integer between 0 and 16 whose difference with 22 is a multiple of 17. In this case, that integer is 5, so7+15≡5mod 17.
If a and b are integers in the range [0, N − 1], then their sum is in the range [0, 2N − 2] and their difference is in the range [−N + 1, N − 1], so determining the representative in [0, N − 1] requires at most one subtraction or addition (respectively) of N. However, the product ab is in the range [0, N2 − 2N + 1]. Storing the intermediate integer product ab requires twice as many bits as either a or b, and efficiently determining the representative in [0, N − 1] requires division. Mathematically, the integer between 0 and N − 1 that is congruent to ab can be expressed by applying the Euclidean division theorem:

ab = qN + r,
whereqis the quotient⌊ab/N⌋{\displaystyle \lfloor ab/N\rfloor }andr, the remainder, is in the interval[0,N− 1]. The remainderrisabmodN. Determiningrcan be done by computingq, then subtractingqNfromab. For example, again withN=17{\displaystyle N=17}, the product7⋅15is determined by computing7⋅15=105{\displaystyle 7\cdot 15=105}, dividing⌊105/17⌋=6{\displaystyle \lfloor 105/17\rfloor =6}, and subtracting105−6⋅17=105−102=3{\displaystyle 105-6\cdot 17=105-102=3}.
Because the computation ofqrequires division, it is undesirably expensive on most computer hardware. Montgomery form is a different way of expressing the elements of the ring in which modular products can be computed without expensive divisions. While divisions are still necessary, they can be done with respect to a different divisorR. This divisor can be chosen to be a power of two, for which division can be replaced by shifting, or a whole number of machine words, for which division can be replaced by omitting words. These divisions are fast, so most of the cost of computing modular products using Montgomery form is the cost of computing ordinary products.
The auxiliary modulusRmust be a positive integer such thatgcd(R,N) = 1. For computational purposes it is also necessary that division and reduction moduloRare inexpensive, and the modulus is not useful for modular multiplication unlessR>N. TheMontgomery formof the residue classawith respect toRisaRmodN, that is, it is the representative of the residue classaR. For example, suppose thatN= 17and thatR= 100. The Montgomery forms of 3, 5, 7, and 15 are300 mod 17 = 11,500 mod 17 = 7,700 mod 17 = 3, and1500 mod 17 = 4.
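The conversion into Montgomery form is a single modular multiplication by R; a one-line Python sketch reproduces the example values above:

```python
N, R = 17, 100  # the article's running example

def to_montgomery(a):
    # Montgomery form of the residue class of a: aR mod N.
    return a * R % N

# Matches the values given in the text for 3, 5, 7, and 15.
assert [to_montgomery(a) for a in (3, 5, 7, 15)] == [11, 7, 3, 4]
```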
Addition and subtraction in Montgomery form are the same as ordinary modular addition and subtraction because of the distributive law:

aR + bR = (a + b)R,
aR − bR = (a − b)R.
Note that doing the operation in Montgomery form does not lose information compared to doing it in the quotient ringZ/NZ. This is a consequence of the fact that, becausegcd(R,N) = 1, multiplication byRis anisomorphismon the additive groupZ/NZ. For example,(7 + 15) mod 17 = 5, which in Montgomery form becomes(3 + 4) mod 17 = 7.
Multiplication in Montgomery form, however, is seemingly more complicated. The usual product of aR and bR does not represent the product of a and b because it has an extra factor of R:

(aR mod N)(bR mod N) mod N = (ab)R⋅R mod N.
Computing products in Montgomery form requires removing the extra factor ofR. While division byRis cheap, the intermediate product(aRmodN)(bRmodN)is not divisible byRbecause the modulo operation has destroyed that property. So for instance, the product of the Montgomery forms of 7 and 15 modulo 17, withR= 100, is the product of 3 and 4, which is 12. Since 12 is not divisible by 100, additional effort is required to remove the extra factor ofR.
Removing the extra factor of R can be done by multiplying by an integer R′ such that RR′ ≡ 1 (mod N), that is, by an R′ whose residue class is the modular inverse of R mod N. Then, working modulo N,

(aR mod N)(bR mod N)R′ ≡ (aR)(bR)R−1 ≡ (ab)R (mod N).
The integer R′ exists because of the assumption that R and N are coprime. It can be constructed using the extended Euclidean algorithm. The extended Euclidean algorithm efficiently determines integers R′ and N′ that satisfy Bézout's identity: 0 < R′ < N, 0 < N′ < R, and:

RR′ − NN′ = 1.
This shows that it is possible to do multiplication in Montgomery form. A straightforward algorithm to multiply numbers in Montgomery form is therefore to multiplyaRmodN,bRmodN, andR′as integers and reduce moduloN.
For example, to multiply 7 and 15 modulo 17 in Montgomery form, again withR= 100, compute the product of 3 and 4 to get 12 as above. The extended Euclidean algorithm implies that8⋅100 − 47⋅17 = 1, soR′ = 8. Multiply 12 by 8 to get 96 and reduce modulo 17 to get 11. This is the Montgomery form of 3, as expected.
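The straightforward algorithm just described can be sketched in a few lines of Python, reproducing the worked example (R′ = 8 is the value from the extended Euclidean computation above):

```python
N, R = 17, 100
Rp = 8  # R' with R*R' ≡ 1 (mod N): 100 * 8 = 800 ≡ 1 (mod 17)

def mont_mul_naive(aR, bR):
    # Multiply two Montgomery-form values and strip the extra factor of R
    # by multiplying with R' and reducing mod N.
    return aR * bR * Rp % N

# Montgomery forms of 7 and 15 are 3 and 4; their Montgomery product
# is the Montgomery form of 7*15 mod 17 = 3, namely 300 mod 17 = 11.
assert mont_mul_naive(3, 4) == 11
```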
While the above algorithm is correct, it is slower than multiplication in the standard representation because of the need to multiply by R′ and divide by N. Montgomery reduction, also known as REDC, is an algorithm that simultaneously computes the product by R′ and reduces modulo N more quickly than the naïve method. Unlike conventional modular reduction, which focuses on making the number smaller than N, Montgomery reduction focuses on making the number more divisible by R. It does this by adding a small multiple of N chosen to cancel the residue modulo R. Dividing the result by R yields a much smaller number. This number is so much smaller that it is nearly the reduction modulo N, and computing the reduction modulo N requires only a final conditional subtraction. Because all computations are done using only reductions and divisions with respect to R, not N, the algorithm runs faster than a straightforward modular reduction by division.
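The standard three-step form of REDC can be sketched in Python as follows (assuming, as in the examples that follow, that N′ satisfies NN′ ≡ −1 (mod R)):

```python
N, R = 17, 100
Np = 47  # N' with N*N' ≡ -1 (mod R): 17 * 47 = 799 ≡ -1 (mod 100)

def redc(T):
    # Choose m so that T + mN is divisible by R ...
    m = (T % R) * Np % R
    # ... divide out R exactly ...
    t = (T + m * N) // R
    # ... and one conditional subtraction lands in [0, N).
    return t - N if t >= N else t

# The worked example: T = 12 (Montgomery product of 7 and 15) -> 11.
assert redc(12) == 11
```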
To see that this algorithm is correct, first observe that m is chosen precisely so that T + mN is divisible by R. A number is divisible by R if and only if it is congruent to zero mod R, and we have:

T + mN ≡ T + (T mod R)N′N ≡ T − T ≡ 0 (mod R),

because NN′ ≡ −1 (mod R).
Therefore, t is an integer. Second, the output is either t or t − N, both of which are congruent to t mod N, so to prove that the output is congruent to TR−1 mod N, it suffices to prove that t is congruent to TR−1 mod N. t satisfies:

t = (T + mN)R−1 ≡ TR−1 + (mR−1)N ≡ TR−1 (mod N).
Therefore, the output has the correct residue class. Third,mis in[0,R− 1], and thereforeT+mNis between 0 and(RN− 1) + (R− 1)N< 2RN. Hencetis less than2N, and because it's an integer, this putstin the range[0, 2N− 1]. Therefore, reducingtinto the desired range requires at most a single subtraction, so the algorithm's output lies in the correct range.
To use REDC to compute the product of 7 and 15 modulo 17, first convert to Montgomery form and multiply as integers to get 12 as above. Then apply REDC withR= 100,N= 17,N′ = 47, andT= 12. The first step setsmto12 ⋅ 47 mod 100 = 64. The second step setstto(12 + 64 ⋅ 17) / 100. Notice that12 + 64 ⋅ 17is 1100, a multiple of 100 as expected.tis set to 11, which is less than 17, so the final result is 11, which agrees with the computation of the previous section.
As another example, consider the product7 ⋅ 15 mod 17but withR= 10. Using the extended Euclidean algorithm, compute−5 ⋅ 10 + 3 ⋅ 17 = 1, soN′will be−3 mod 10 = 7. The Montgomery forms of 7 and 15 are70 mod 17 = 2and150 mod 17 = 14, respectively. Their product 28 is the inputTto REDC, and since28 <RN= 170, the assumptions of REDC are satisfied. To run REDC, setmto(28 mod 10) ⋅ 7 mod 10 = 196 mod 10 = 6. Then28 + 6 ⋅ 17 = 130, sot= 13. Because30 mod 17 = 13, this is the Montgomery form of3 = 7 ⋅ 15 mod 17.
Given the modulusNand the Montgomery radixRused in a Montgomery reduction, consider theresidue ring
Z/(NR)Z≅Z/NZ×Z/RZ,{\displaystyle \mathbb {Z} /(NR)\mathbb {Z} \;\cong \;\mathbb {Z} /N\mathbb {Z} \;\times \;\mathbb {Z} /R\mathbb {Z} ,}
anisomorphismthat follows from theChinese Remainder Theorem (CRT).
For an integerT{\displaystyle T}with0≤T<NR{\displaystyle 0\leq T<NR}(as is typical whenT{\displaystyle T}arises from multiplying two residues), take its reductions
TN=TmodN,TR=TmodR.{\displaystyle T_{N}=T{\bmod {N}},\qquad T_{R}=T{\bmod {R}}.}
TheCRTgives the explicit reconstruction formula
T≡TN(R−1modN)R+TR(N−1modR)N(modNR).{\displaystyle T\equiv T_{N}{\bigl (}R^{-1}{\bmod {N}}{\bigr )}\,R\;+T_{R}{\bigl (}N^{-1}{\bmod {R}}{\bigr )}\,N{\pmod {NR}}.}
Because the right-hand side is already taken moduloNR{\displaystyle NR}, this may also be written as
T≡(TNR−1modN)R+(TRN−1modR)N(modNR).{\displaystyle T\equiv {\bigl (}T_{N}R^{-1}{\bmod {N}}{\bigr )}R\;+{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N{\pmod {NR}}.}
Both summands lie in the half‑open interval[0,NR){\displaystyle [0,NR)}:
0≤(TNR−1modN)R<NR,0≤(TRN−1modR)N<NR.{\displaystyle 0\leq {\bigl (}T_{N}R^{-1}{\bmod {N}}{\bigr )}R<NR,\qquad 0\leq {\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N<NR.}
Hence, asintegerequations (not merely congruences) we have
T=(TNR−1modN)R+(TRN−1modR)N,{\displaystyle T={\bigl (}T_{N}R^{-1}{\bmod {N}}{\bigr )}R\;+{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N,}
or,
T+NR=(TNR−1modN)R+(TRN−1modR)N.{\displaystyle T+NR={\bigl (}T_{N}R^{-1}{\bmod {N}}{\bigr )}R\;+{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N.}
To solve forTN{\displaystyle T_{N}}, isolate the first summand:
(TNR−1modN)R={T−(TRN−1modR)N,T+NR−(TRN−1modR)N.{\displaystyle {\bigl (}T_{N}R^{-1}{\bmod {N}}{\bigr )}R={\begin{cases}T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N,\\T+NR-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N.\end{cases}}}
Every quantity above is an integer, and the left‑hand side is a multiple ofR{\displaystyle R}; therefore each right‑hand side is divisible byR{\displaystyle R}. Dividing byR{\displaystyle R}yields
TNR−1modN={T−(TRN−1modR)NR,T+NR−(TRN−1modR)NR=T−(TRN−1modR)NR+N.{\displaystyle T_{N}R^{-1}{\bmod {N}}={\begin{cases}{\dfrac {T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}},\\[8pt]{\dfrac {T+NR-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}}\,=\,{\dfrac {T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}}+N.\end{cases}}}
Consequently,
T−(TRN−1modR)NR={TNR−1modN,TNR−1modN+N.{\displaystyle {\dfrac {T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}}={\begin{cases}T_{N}R^{-1}{\bmod {N}},\\T_{N}R^{-1}{\bmod {N}}\;+\;N.\end{cases}}}
This gives two key facts:
T−(TRN−1modR)NR≡TNR−1(modN).{\displaystyle {\frac {T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}}\;\equiv \;T_{N}R^{-1}{\pmod {N}}.}
0≤T−(TRN−1modR)NR<2N.{\displaystyle 0\;\leq \;{\frac {T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}}\;<\;2N.}
Therefore, by reducing
T−(TRN−1modR)NR{\displaystyle {\frac {T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}}}
once more moduloN, one obtains the non‑negative residue representingTNR−1modN{\displaystyle T_{N}R^{-1}{\bmod {N}}}.
Many operations of interest moduloNcan be expressed equally well in Montgomery form. Addition, subtraction, negation, comparison for equality, multiplication by an integer not in Montgomery form, and greatest common divisors withNmay all be done with the standard algorithms. TheJacobi symbolcan be calculated as(aN)=(aRN)/(RN){\displaystyle {\big (}{\tfrac {a}{N}}{\big )}={\big (}{\tfrac {aR}{N}}{\big )}/{\big (}{\tfrac {R}{N}}{\big )}}as long as(RN){\displaystyle {\big (}{\tfrac {R}{N}}{\big )}}is stored.
WhenR>N, most other arithmetic operations can be expressed in terms of REDC. This assumption implies that the product of two representatives modNis less thanRN, the exact hypothesis necessary for REDC to generate correct output. In particular, the product ofaRmodNandbRmodNisREDC((aRmodN)(bRmodN)). The combined operation of multiplication and REDC is often calledMontgomery multiplication.
Conversion into Montgomery form is done by computingREDC((amodN)(R2modN)). Conversion out of Montgomery form is done by computingREDC(aRmodN). The modular inverse ofaRmodNisREDC((aRmodN)−1(R3modN)). Modular exponentiation can be done usingexponentiation by squaringby initializing the initial product to the Montgomery representation of 1, that is, toRmodN, and by replacing the multiply and square steps by Montgomery multiplies.
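These conversions and the Montgomery-form square-and-multiply loop can be sketched together in Python (a minimal sketch using the article's running example N = 17, R = 100; a real implementation would use a power-of-two R so that the divisions become shifts):

```python
N, R = 17, 100
Np = 47            # N*N' ≡ -1 (mod R)
R2 = R * R % N     # R^2 mod N, the only constant needing a direct mod-N reduction

def redc(T):
    m = (T % R) * Np % R
    t = (T + m * N) // R
    return t - N if t >= N else t

def to_mont(a):       return redc(a % N * R2)   # a    -> aR mod N
def from_mont(aR):    return redc(aR)           # aR   -> a  mod N
def mont_mul(aR, bR): return redc(aR * bR)      # Montgomery multiplication

def mont_pow(a, e):
    # Exponentiation by squaring, keeping intermediates in Montgomery form.
    result = to_mont(1)   # R mod N, the Montgomery representation of 1
    base = to_mont(a)
    while e:
        if e & 1:
            result = mont_mul(result, base)
        base = mont_mul(base, base)
        e >>= 1
    return from_mont(result)

assert mont_pow(7, 5) == pow(7, 5, 17)
```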
Performing these operations requires knowing at leastN′andR2modN. WhenRis a power of a small positive integerb,N′can be computed byHensel's lemma: The inverse ofNmodulobis computed by a naïve algorithm (for instance, ifb= 2then the inverse is 1), and Hensel's lemma is used repeatedly to find the inverse modulo higher and higher powers ofb, stopping when the inverse moduloRis known;N′is the negation of this inverse. The constantsRmodNandR3modNcan be generated asREDC(R2modN)and asREDC((R2modN)(R2modN)). The fundamental operation is to compute REDC of a product. When standalone REDC is needed, it can be computed as REDC of a product with1 modN. The only place where a direct reduction moduloNis necessary is in the precomputation ofR2modN.
Most cryptographic applications require numbers that are hundreds or even thousands of bits long. Such numbers are too large to be stored in a single machine word. Typically, the hardware performs multiplication mod some baseB, so performing larger multiplications requires combining several small multiplications. The baseBis typically 2 for microelectronic applications, 28for 8-bit firmware,[5]or 232or 264for software applications.
The REDC algorithm requires products moduloR, and typicallyR>Nso that REDC can be used to compute products. However, whenRis a power ofB, there is a variant of REDC which requires products only of machine word sized integers. Suppose that positive multi-precision integers are storedlittle endian, that is,xis stored as an arrayx[0], ...,x[ℓ - 1]such that0 ≤x[i] <Bfor alliandx= ∑x[i]Bi. The algorithm begins with a multiprecision integerTand reduces it one word at a time. First an appropriate multiple ofNis added to makeTdivisible byB. Then a multiple ofNis added to makeTdivisible byB2, and so on. EventuallyTis divisible byR, and after division byRthe algorithm is in the same place as REDC was after the computation oft.
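The word-at-a-time reduction described above can be sketched in Python (big integers stand in for the digit arrays, with each base-B digit extracted arithmetically; B = 10 matches the worked example later in the text):

```python
B = 10    # word base (illustrative; real implementations use 2^32 or 2^64)
N = 997
r = 3     # R = B^r = 1000
Np = 7    # -N^{-1} mod B

def multiprecision_redc(T):
    # Reduce T one base-B digit at a time: each pass adds the multiple of N
    # that zeroes the current lowest unprocessed digit.
    for i in range(r):
        digit = T // B ** i % B
        m = digit * Np % B
        T += m * N * B ** i
    T //= B ** r              # T is now exactly divisible by R
    return T - N if T >= N else T

# The worked example: 942 * 813 = 765846 reduces to 50,
# the Montgomery form of 314 * 271 mod 997.
assert multiprecision_redc(765846) == 50
```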
The final comparison and subtraction is done by the standard algorithms.
The above algorithm is correct for essentially the same reasons that REDC is correct. Each time through theiloop,mis chosen so thatT[i] +mN[0]is divisible byB. ThenmNBiis added toT. Because this quantity is zero modN, adding it does not affect the value ofTmodN. Ifmidenotes the value ofmcomputed in theith iteration of the loop, then the algorithm setsStoT+ (∑miBi)N. Because MultiPrecisionREDC and REDC produce the same output, this sum is the same as the choice ofmthat the REDC algorithm would make.
The last word of T, T[r+p] (and consequently S[p]), is used only to hold a carry, as the initial reduction result is bounded by 0 ≤ S < 2N. It follows that this extra carry word can be avoided completely if it is known in advance that R ≥ 2N. On a typical binary implementation, this is equivalent to saying that this carry word can be avoided if the number of bits of N is smaller than the number of bits of R. Otherwise, the carry will be either zero or one. Depending upon the processor, it may be possible to store this word as a carry flag instead of a full-sized word.
It is possible to combine multiprecision multiplication and REDC into a single algorithm. This combined algorithm is usually called Montgomery multiplication. Several different implementations are described by Koç, Acar, and Kaliski.[6]The algorithm may use as little asp+ 2words of storage (plus a carry bit).
As an example, letB= 10,N= 997, andR= 1000. Suppose thata= 314andb= 271. The Montgomery representations ofaandbare314000 mod 997 = 942and271000 mod 997 = 813. Compute942 ⋅ 813 = 765846. The initial inputTto MultiPrecisionREDC will be [6, 4, 8, 5, 6, 7]. The numberNwill be represented as [7, 9, 9]. The extended Euclidean algorithm says that−299 ⋅ 10 + 3 ⋅ 997 = 1, soN′will be 7.
Therefore, before the final comparison and subtraction,S= 1047. The final subtraction yields the number 50. Since the Montgomery representation of314 ⋅ 271 mod 997 = 349is349000 mod 997 = 50, this is the expected result.
When working in base 2, determining the correctmat each stage is particularly easy: If the current working bit is even, thenmis zero and if it's odd, thenmis one. Furthermore, because each step of MultiPrecisionREDC requires knowing only the lowest bit, Montgomery multiplication can be easily combined with acarry-save adder.
Because Montgomery reduction avoids the correction steps required in conventional division when quotient digit estimates are inaccurate, it is mostly free of the conditional branches which are the primary targets of timing and powerside-channel attacks; the sequence of instructions executed is independent of the input operand values. The only exception is the final conditional subtraction of the modulus, but it is easily modified (to always subtract something, either the modulus or zero) to make it resistant.[5]It is of course necessary to ensure that the exponentiation algorithm built around the multiplication primitive is also resistant.[5][7]
https://en.wikipedia.org/wiki/Montgomery_reduction
Kochanski multiplication[1] is an algorithm that allows modular arithmetic (multiplication or operations based on it, such as exponentiation) to be performed efficiently when the modulus is large (typically several hundred bits). This has particular application in number theory and in cryptography: for example, in the RSA cryptosystem and Diffie–Hellman key exchange.
The most common way of implementing large-integer multiplication in hardware is to express the multiplier in binary and enumerate its bits, one bit at a time, starting with the most significant bit, performing the following operations on an accumulator: shift the accumulator left one place, then, if the enumerated bit is 1, add the multiplicand to it.
For ann-bit multiplier, this will takenclock cycles (where each cycle does either a shift or a shift-and-add).
To convert this into an algorithm for modular multiplication, with a modulus r, it is necessary to subtract r conditionally at each stage: after each shift and after each addition, if the accumulator's value exceeds r, subtract r from it.
This algorithm works. However, it is critically dependent on the speed of addition.
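A software sketch of this conditional-subtraction algorithm (Python; the hardware version performs the same steps in registers, one bit per clock cycle):

```python
def modmul_msb_first(a, b, r):
    # Multiply a*b mod r by scanning the multiplier a from its most
    # significant bit, with a conditional subtraction after each step.
    # Assumes 0 <= b < r so the accumulator stays below 2r.
    acc = 0
    for i in reversed(range(a.bit_length())):
        acc <<= 1                 # shift the accumulator left
        if acc >= r:
            acc -= r
        if (a >> i) & 1:          # add the multiplicand for a 1 bit
            acc += b
            if acc >= r:
                acc -= r
    return acc

assert modmul_msb_first(314, 271, 997) == 314 * 271 % 997
```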
Addition of long integers suffers from the problem that carries have to be propagated from right to left and the final result is not known until this process has been completed. Carry propagation can be sped up with carry look-ahead logic, but this still makes addition very much slower than it needs to be (for 512-bit addition, addition with carry look-ahead is 32 times slower than addition without carries at all[citation needed]).
Non-modular multiplication can make use ofcarry-save adders, which save time by storing the carries from each digit position and using them later: for example, by computing 111111111111+000000000010 as 111111111121 instead of waiting for the carry to propagate through the whole number to yield the true binary value 1000000000001. That final propagation still has to be done to yield a binary result but this only needs to be done once at the very end of the multiplication.
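The carry-save idea can be demonstrated in a few lines (Python sketch): three addends are reduced to a sum word and a carry word with each bit position handled independently, and only one final propagating addition recovers the ordinary binary value.

```python
def carry_save_add(a, b, c):
    # Reduce three addends to two words with no carry propagation:
    # per-bit sum via XOR, per-bit carry via majority, shifted left.
    s = a ^ b ^ c
    carry = ((a & b) | (a & c) | (b & c)) << 1
    return s, carry

a, b, c = 0b111111111111, 0b000000000010, 0
s, carry = carry_save_add(a, b, c)
# The pair (s, carry) is a redundant representation of the sum;
# one final propagating addition yields the true binary value.
assert s + carry == a + b + c
```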
Unfortunately, the modular multiplication method outlined above needs to know the magnitude of the accumulated value at every step, in order to decide whether to subtractr: for example, if it needs to know whether the value in the accumulator is greater than 1000000000000, the carry-save representation 111111111121 is useless and needs to be converted to its true binary value for the comparison to be made.
It therefore seems that one can haveeitherthe speed of carry-saveormodular multiplication, but not both.
The principle of the Kochanski algorithm is to guess whether or not r should be subtracted, based on the most significant few bits of the carry-save value in the accumulator. Such a guess will be wrong some of the time, since there is no way of knowing whether latent carries in the less significant digits (which have not been examined) might invalidate the result of the comparison.
What is happening is essentially a race between the errors that result from wrong guesses, which double with every shift left, and the corrections made by adding or subtracting multiples of r based on a guess of what the errors may be.
It turns out[2] that examining the most significant 4 bits of the accumulator is sufficient to keep the errors within bounds, and that the only values that need to be added to the accumulator are −2r, −r, 0, +r, and +2r, all of which can be generated instantaneously by simple shifts and negations.
At the end of a complete modular multiplication, the true binary result of the operation has to be evaluated, and it is possible that an additional addition or subtraction of r will be needed as a result of the carries that are then discovered; but the cost of that extra step is small when amortized over the hundreds of shift-and-add steps that dominate the overall cost of the multiplication.
Brickell[3] has published a similar algorithm that requires greater complexity in the electronics for each digit of the accumulator.
Montgomery multiplication is an alternative algorithm which processes the multiplier "backwards" (least significant digit first) and uses the least significant digit of the accumulator to control whether or not the modulus should be added, avoiding the need for carries to propagate. However, the algorithm is impractical for single modular multiplications, since two or three additional Montgomery steps have to be performed to convert the operands into a special form before processing and to convert the result back into conventional binary at the end.
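For contrast, the Montgomery reduction step (REDC) that this paragraph describes digit-serially can be sketched word-at-once. Variable names are illustrative; the point is that the added multiple of the modulus clears the low bits, so the division is an exact shift and no upward carry propagation is needed:

```python
def montgomery_redc(t: int, n: int, r: int, n_neg_inv: int) -> int:
    """REDC: given 0 <= t < n*r, return t * r^{-1} mod n without dividing by n.

    r is a power of 2 with gcd(r, n) = 1, and n_neg_inv = -n^{-1} mod r.
    """
    m = (t * n_neg_inv) % r        # choose m so that t + m*n ≡ 0 (mod r)
    u = (t + m * n) // r           # exact division: just a right shift
    return u - n if u >= n else u  # single conditional subtraction
```

The conversions into and out of this r-scaled ("Montgomery") domain are the extra steps that make the method uneconomical for a single multiplication.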
https://en.wikipedia.org/wiki/Kochanski_multiplication
In modular arithmetic, Barrett reduction is an algorithm designed to optimize the calculation of a mod n[1] without needing a fast division algorithm. It replaces divisions with multiplications, and can be used when n is constant and a < n². It was introduced in 1986 by P. D. Barrett.[2]
Historically, for values a, b < n, one computed ab mod n by applying Barrett reduction to the full product ab.
In 2021, Becker et al. showed that the full product is unnecessary if we can perform precomputation on one of the operands.[3]
We call a function [·] : ℝ → ℤ an integer approximation if |[z] − z| ≤ 1.
For a modulus n and an integer approximation [·], we define mod^[·] n : ℤ → ℤ/nℤ as

    a mod^[·] n = a − [a/n]·n.

Common choices of [·] are floor, ceiling, and rounding functions.
Generally, Barrett multiplication starts by specifying two integer approximations [·]₀, [·]₁ and computes a reasonably close approximation of ab mod n as

    ab − [a·[bR/n]₀ / R]₁ · n,

where R is a fixed constant, typically a power of 2, chosen so that multiplication and division by R can be performed efficiently.
The case b = 1 was introduced by P. D. Barrett[2] for the floor-function case [·]₀ = [·]₁ = ⌊·⌋.
The general case for b can be found in NTL.[4] The integer approximation view and the correspondence between Montgomery multiplication and Barrett multiplication were discovered by Hanno Becker, Vincent Hwang, Matthias J. Kannwischer, Bo-Yin Yang, and Shang-Yi Yang.[3]
Barrett initially considered an integer version of the above algorithm when the values fit into machine words.
We illustrate the idea for the floor-function case with b = 1 and R = 2^k.
When calculating a mod n for unsigned integers, the obvious analog would be to use division by n: compute q = ⌊a/n⌋ and return a − q·n.
However, division can be expensive and, in cryptographic settings, might not be a constant-time instruction on some CPUs, subjecting the operation to a timing attack. Thus Barrett reduction approximates 1/n with a value m/2^k, because division by 2^k is just a right-shift, and so it is cheap.
In order to calculate the best value for m given 2^k, consider m/2^k ≈ 1/n, which gives m ≈ 2^k/n.
For m to be an integer, we need to round 2^k/n somehow.
Rounding to the nearest integer will give the best approximation but can result in m/2^k being larger than 1/n, which can cause underflows. Thus m = ⌊2^k/n⌋ is used for unsigned arithmetic.
Thus we can approximate the function above by computing q = (a·m) >> k and returning a − q·n.
However, since m/2^k ≤ 1/n, the value of q can end up being one too small, and thus the result is only guaranteed to be within [0, 2n) rather than [0, n) as is generally required. A conditional subtraction will correct this:
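The whole single-word scheme, with m = ⌊2^k/n⌋ and the final conditional subtraction, can be sketched as follows (assuming 2^k ≥ a, e.g. k = 2 · bitlen(n) when a < n²; names are illustrative):

```python
def barrett_reduce(a: int, n: int, k: int) -> int:
    """Reduce a mod n for 0 <= a < 2**k, with no division by n at run time."""
    m = (1 << k) // n          # precomputed once per modulus
    q = (a * m) >> k           # q is floor(a/n) or floor(a/n) - 1
    r = a - q * n              # hence r lies in [0, 2n)
    return r - n if r >= n else r
```

Only the precomputation of m performs a true division; each reduction afterwards costs two multiplications, a shift, and a conditional subtraction.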
Suppose b is known.
This allows us to precompute ⌊bR/n⌋ before we receive a.
Barrett multiplication computes ab, approximates the high part of ab with ⌊a·⌊bR/n⌋/R⌋·n, and subtracts the approximation.
Since ⌊a·⌊bR/n⌋/R⌋·n is a multiple of n, the resulting value ab − ⌊a·⌊bR/n⌋/R⌋·n is a representative of ab mod n.
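A sketch of this precomputed-quotient form, for the floor-function case, with one trailing conditional subtraction (valid for 0 ≤ a < R; the sample constants are illustrative):

```python
def barrett_mul(a: int, b: int, n: int, bq: int, R: int) -> int:
    """Compute a*b mod n, given the precomputed quotient bq = floor(b*R / n).

    For 0 <= a < R, the raw value a*b - floor(a*bq / R)*n lies in [0, 2n),
    so one conditional subtraction finishes the reduction.
    """
    t = a * b - ((a * bq) // R) * n
    return t - n if t >= n else t

# Example setup: fixed operand b is known ahead of time.
n, R = 101, 1 << 8
b = 77
bq = (b * R) // n   # precomputed once per fixed b
```

Note that the full double-width product ab is still formed, but no approximation of 1/n applied to ab itself is needed; the quotient is estimated from a alone.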
Recall that unsigned Montgomery multiplication computes a representative of ab mod n as
In fact, this value is equal to ab − ⌊a·⌊bR/n⌋/R⌋·n.
We prove the claim as follows.
Generally, for integer approximations [·]₀, [·]₁, we have
We bound the output with
Similar bounds hold for other kinds of integer approximation functions. For example, if we choose [·]₀ = [·]₁ = ⌊·⌉, the rounding half up function, then we have
It is common to select R such that a/R < 1 (or |a|/R < 1 in the [·]₀ = [·]₁ = ⌊·⌉ case) so that the output remains within 0 and 2n (−n and n, respectively), and therefore only one check is performed to obtain the final result between 0 and n. Furthermore, one can skip the check and perform it once at the end of an algorithm, at the expense of larger inputs to the field arithmetic operations.
The Barrett multiplication previously described requires a constant operand b in order to precompute [bR/n]₀ ahead of time; otherwise, the operation is not efficient. It is common to use Montgomery multiplication when both operands are non-constant, as it has better performance. However, Montgomery multiplication requires a conversion to and from the Montgomery domain, which makes it expensive when only a few modular multiplications are needed.
To perform Barrett multiplication with non-constant operands, one can set a as the product of the operands and set b to 1. This leads to

    a − [a·[R/n]₀/R]₁·n = (a(R mod^[·]₀ n) + (a(−R mod^[·]₀ n)·n⁻¹ mod^[·]₁ R)·n) / R.

A quick check on the bounds yields the following in the [·]₀ = [·]₁ = ⌊·⌋ case
and the following in the [·]₀ = [·]₁ = ⌊·⌉ case
Setting R > |a| will always yield one check on the output. However, a tighter constraint on R might be possible, since R mod^[·]₀ n is a constant that is sometimes significantly smaller than n.
A small issue arises with performing the product a·[R/n]₀, since a is already a product of two operands. Assuming n fits in w bits, a would fit in 2w bits and [R/n]₀ would fit in w bits. Their product would require a 2w×w multiplication, which might require fragmenting in systems that cannot perform the product in one operation.
An alternative approach is to perform the following Barrett reduction:
where R₀ = 2^(k−β), R₁ = 2^(α+β), R = R₀·R₁ = 2^(k+α), and k is the bit-length of n.
A bound check in the case [·]₀ = [·]₁ = [·]₂ = ⌊·⌋ yields the following
and in the case [·]₀ = [·]₁ = [·]₂ = ⌊·⌉ yields the following
For any modulus, and assuming |a| < 2^(k+γ), the bound inside the parentheses in both cases is less than or equal to
where ε = −1/R₁ in the ⌊·⌋ case and ε = 1/(2R₁) in the ⌊·⌉ case.
Setting β = 2 and α = γ + 1 (or α = γ + 2 in the ⌊·⌉ case) will always yield one check. In some cases, testing the bounds might yield lower α and/or β values.
It is possible to perform a Barrett reduction with one less multiplication as follows:
Every modulus can be written in the form n = 2^k − c = R − c for some integer c.
Therefore, reducing any a < R²/c − R for [·]₁ = ⌊·⌋, or any |a| < (R²/c − R)/2 for [·]₁ = ⌊·⌉, yields one check.
From the analysis of the constraint, it can be observed that the bound on a is larger when c is smaller; in other words, the bound is larger when n is closer to R.
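The structural fact n = 2^k − c already enables a cheap reduction on its own, since 2^k ≡ c (mod n) lets the high part of a be folded down. The sketch below illustrates this standard special-modulus folding; it is an assumption-laden illustration of the idea, not necessarily the exact one-less-multiplication variant above:

```python
def reduce_special(a: int, k: int, c: int) -> int:
    """Reduce a mod n where n = 2**k - c, assuming 0 < c < 2**(k-1).

    Uses 2**k ≡ c (mod n) to fold the high bits down repeatedly.
    """
    n = (1 << k) - c
    mask = (1 << k) - 1
    while a >= 1 << k:
        hi, lo = a >> k, a & mask
        a = hi * c + lo            # a ≡ hi*2^k + lo ≡ hi*c + lo (mod n)
    return a - n if a >= n else a  # now a < 2^k < 2n: one check suffices
```

This is why moduli close to a power of two (small c) are attractive: each folding step shrinks a quickly, and the final value needs only one conditional subtraction.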
Barrett reduction can be used to compute floor, round, or ceiling division [a/n] without performing expensive long division. Furthermore, it can be used to compute [ab/n]. After precomputing the constants, the steps are as follows:
If the constraints for the Barrett reduction are chosen such that there is one check, then the absolute value of e in step 3 cannot be more than 1. Using [·]₀ = [·]₁ = ⌊·⌉ and appropriate constraints, the error e can be obtained from the sign of r̃.
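The quotient-with-correction idea can be illustrated for floor division in the floor-function case: compute a candidate quotient from the precomputed constant, then recover the off-by-one error e from the remainder (a sketch; variable names are assumptions):

```python
def barrett_floor_div(a: int, n: int, k: int) -> int:
    """floor(a / n) for 0 <= a < 2**k, via Barrett's precomputed reciprocal."""
    m = (1 << k) // n                # precomputed approximation of 2^k / n
    q = (a * m) >> k                 # candidate: floor(a/n) or one less
    e = 1 if a - q * n >= n else 0   # error recovered from the remainder
    return q + e
```

The same candidate quotient and remainder also yield a mod n, so reduction and division come out of one computation.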
Barrett's primary motivation for considering reduction was the implementation of RSA, where the values in question will almost certainly exceed the size of a machine word. In this situation, Barrett provided an algorithm that approximates the single-word version above but for multi-word values. For details see section 14.3.3 of the Handbook of Applied Cryptography.[5]
It is also possible to use the Barrett algorithm for polynomial division, by reversing polynomials and using X-adic arithmetic.[6]
https://en.wikipedia.org/wiki/Barrett_reduction
A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. Symbolically, this process can be expressed by the following differential equation, where N is the quantity and λ (lambda) is a positive rate called the exponential decay constant, disintegration constant,[1] rate constant,[2] or transformation constant:[3]

    dN/dt = −λN.

The solution to this equation (see derivation below) is:

    N(t) = N₀ e^(−λt),

where N(t) is the quantity at time t, and N₀ = N(0) is the initial quantity, that is, the quantity at time t = 0.
If the decaying quantity, N(t), is the number of discrete elements in a certain set, it is possible to compute the average length of time that an element remains in the set. This is called the mean lifetime (or simply the lifetime), where the exponential time constant, τ, relates to the decay rate constant, λ, in the following way:

    τ = 1/λ.

The mean lifetime can be looked at as a "scaling time", because the exponential decay equation can be written in terms of the mean lifetime, τ, instead of the decay constant, λ:

    N(t) = N₀ e^(−t/τ),

and τ is the time at which the population of the assembly is reduced to 1/e ≈ 0.367879441 times its initial value. This is equivalent to log₂ e ≈ 1.442695 half-lives.
For example, if the initial population of the assembly, N(0), is 1000, then the population at time τ, N(τ), is 368.
A very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2, rather than e. In that case the scaling time is the "half-life".
A more intuitive characteristic of exponential decay for many people is the time required for the decaying quantity to fall to one half of its initial value. (If N(t) is discrete, then this is the median life-time rather than the mean life-time.) This time is called the half-life, and is often denoted by the symbol t1/2. The half-life can be written in terms of the decay constant, or the mean lifetime, as:

    t1/2 = ln(2)/λ = τ ln 2.

When this expression is inserted for τ in the exponential equation above, and ln 2 is absorbed into the base, this equation becomes:

    N(t) = N₀ 2^(−t/t1/2).

Thus, the amount of material left is 2⁻¹ = 1/2 raised to the (whole or fractional) number of half-lives that have passed. For example, after 3 half-lives there will be 1/2³ = 1/8 of the original material left.
Therefore, the mean lifetime τ is equal to the half-life divided by the natural log of 2, or:

    τ = t1/2 / ln 2.

For example, polonium-210 has a half-life of 138 days, and a mean lifetime of 200 days.
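These relationships are easy to check numerically; a short sketch using the polonium-210 figures above:

```python
import math

half_life = 138.0                 # polonium-210 half-life, in days
lam = math.log(2) / half_life     # decay constant: λ = ln(2) / t_half
tau = 1.0 / lam                   # mean lifetime: τ = 1/λ ≈ 199 days

def remaining(n0: float, t: float) -> float:
    """N(t) = N0 * exp(-λ t)"""
    return n0 * math.exp(-lam * t)
```

Evaluating remaining(1000, tau) gives about 368, matching the worked example, and remaining(1000, half_life) gives exactly half the initial quantity.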
The equation that describes exponential decay is

    dN/dt = −λN,

or, by rearranging (applying the technique called separation of variables),

    dN/N = −λ dt.

Integrating, we have

    ln N = −λt + C,

where C is the constant of integration, and hence

    N(t) = e^C e^(−λt) = N₀ e^(−λt),

where the final substitution, N₀ = e^C, is obtained by evaluating the equation at t = 0, as N₀ is defined as being the quantity at t = 0.
This is the form of the equation that is most commonly used to describe exponential decay. Any one of decay constant, mean lifetime, or half-life is sufficient to characterise the decay. The notation λ for the decay constant is a remnant of the usual notation for an eigenvalue. In this case, λ is the eigenvalue of the negative of the differential operator with N(t) as the corresponding eigenfunction.
Given an assembly of elements, the number of which decreases ultimately to zero, the mean lifetime, τ (also called simply the lifetime), is the expected value of the amount of time before an object is removed from the assembly. Specifically, if the individual lifetime of an element of the assembly is the time elapsed between some reference time and the removal of that element from the assembly, the mean lifetime is the arithmetic mean of the individual lifetimes.
Starting from the population formula

    N(t) = N₀ e^(−λt),

first let c be the normalizing factor to convert to a probability density function:

    1 = c ∫₀^∞ N₀ e^(−λt) dt = c N₀/λ,

or, on rearranging, c = λ/N₀, so that the density is λ e^(−λt).
Exponential decay is a scalar multiple of the exponential distribution (i.e. the individual lifetime of each object is exponentially distributed), which has a well-known expected value. We can compute it here using integration by parts:

    τ = ∫₀^∞ t λ e^(−λt) dt = 1/λ.
A quantity may decay via two or more different processes simultaneously. In general, these processes (often called "decay modes", "decay channels", "decay routes", etc.) have different probabilities of occurring, and thus occur at different rates with different half-lives, in parallel. The total decay rate of the quantity N is given by the sum of the decay routes; thus, in the case of two processes:

    −dN/dt = λ₁N + λ₂N = (λ₁ + λ₂)N.

The solution to this equation is given in the previous section, where the sum λ₁ + λ₂ is treated as a new total decay constant λ_c.
The partial mean life associated with an individual process is by definition the multiplicative inverse of the corresponding partial decay constant: τ = 1/λ. A combined τ_c can be given in terms of the λs:

    τ_c = 1/(λ₁ + λ₂) = τ₁τ₂/(τ₁ + τ₂).

Since half-lives differ from the mean life τ by a constant factor, the same equation holds in terms of the two corresponding half-lives:

    1/T1/2 = 1/t₁ + 1/t₂,

where T1/2 is the combined or total half-life for the process, and t₁ and t₂ are the so-named partial half-lives of the corresponding processes. The terms "partial half-life" and "partial mean life" denote quantities derived from a decay constant as if the given decay mode were the only decay mode for the quantity. The term "partial half-life" is misleading, because it cannot be measured as a time interval for which a certain quantity is halved.
In terms of separate decay constants, the total half-life T1/2 can be shown to be

    T1/2 = ln 2/(λ₁ + λ₂) = t₁t₂/(t₁ + t₂).

For a decay by three simultaneous exponential processes the total half-life can be computed as above:

    1/T1/2 = 1/t₁ + 1/t₂ + 1/t₃.
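The reciprocal-sum rule for any number of partial half-lives can be sketched as:

```python
def combined_half_life(*partials: float) -> float:
    """Total half-life T satisfying 1/T = 1/t1 + 1/t2 + ... ."""
    return 1.0 / sum(1.0 / t for t in partials)
```

For instance, two modes with equal half-lives of 2 combine to a total half-life of 1, since each mode alone would halve the quantity in the same time.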
In nuclear science and pharmacokinetics, the agent of interest might be situated in a decay chain, where the accumulation is governed by exponential decay of a source agent, while the agent of interest itself decays by means of an exponential process.
These systems are solved using the Bateman equation.
In the pharmacology setting, some ingested substances might be absorbed into the body by a process reasonably modeled as exponential decay, or might be deliberately formulated to have such a release profile.
Exponential decay occurs in a wide variety of situations. Most of these fall into the domain of the natural sciences.
Many decay processes that are often treated as exponential are really only exponential so long as the sample is large and the law of large numbers holds. For small samples, a more general analysis is necessary, accounting for a Poisson process.
https://en.wikipedia.org/wiki/Exponential_decay
A fraction (from Latin: fractus, "broken") represents a part of a whole or, more generally, any number of equal parts. When spoken in everyday English, a fraction describes how many parts of a certain size there are, for example, one-half, eight-fifths, three-quarters. A common, vulgar,[n 1] or simple fraction (examples: 1/2 and 17/3) consists of an integer numerator, displayed above a line (or before a slash like 1⁄2), and a non-zero integer denominator, displayed below (or after) that line. If these integers are positive, then the numerator represents a number of equal parts, and the denominator indicates how many of those parts make up a unit or a whole. For example, in the fraction 3/4, the numerator 3 indicates that the fraction represents 3 equal parts, and the denominator 4 indicates that 4 parts make up a whole. The picture to the right illustrates 3/4 of a cake.
Fractions can be used to represent ratios and division.[1] Thus the fraction 3/4 can be used to represent the ratio 3:4 (the ratio of the part to the whole), and the division 3 ÷ 4 (three divided by four).
We can also write negative fractions, which represent the opposite of a positive fraction. For example, if 1/2 represents a half-dollar profit, then −1/2 represents a half-dollar loss. Because of the rules of division of signed numbers (which state in part that negative divided by positive is negative), −1/2, −1/2 and 1/−2 all represent the same fraction – negative one-half. And because a negative divided by a negative produces a positive, −1/−2 represents positive one-half.
In mathematics, a rational number is a number that can be represented by a fraction of the form a/b, where a and b are integers and b is not zero; the set of all rational numbers is commonly represented by the symbol ℚ or Q, which stands for quotient. The term fraction and the notation a/b can also be used for mathematical expressions that do not represent a rational number (for example √2/2), and even do not represent any number (for example the rational fraction 1/x).
In a fraction, the number of equal parts being described is the numerator (from Latin: numerātor, "counter" or "numberer"), and the type or variety of the parts is the denominator (from Latin: dēnōminātor, "thing that names or designates").[2][3] As an example, the fraction 8/5 amounts to eight parts, each of which is of the type named fifth. In terms of division, the numerator corresponds to the dividend, and the denominator corresponds to the divisor.
Informally, the numerator and denominator may be distinguished by placement alone, but in formal contexts they are usually separated by a fraction bar. The fraction bar may be horizontal (as in 1/3), oblique (as in 2/5), or diagonal (as in 4⁄9).[4] These marks are respectively known as the horizontal bar; the virgule, slash (US), or stroke (UK); and the fraction bar, solidus,[5] or fraction slash.[n 2] In typography, fractions stacked vertically are also known as en or nut fractions, and diagonal ones as em or mutton fractions, based on whether a fraction with a single-digit numerator and denominator occupies the proportion of a narrow en square, or a wider em square.[4] In traditional typefounding, a piece of type bearing a complete fraction (e.g. 1/2) was known as a case fraction, while those representing only parts of fractions were called piece fractions.
The denominators of English fractions are generally expressed as ordinal numbers, in the plural if the numerator is not 1. (For example, 2/5 and 3/5 are both read as a number of fifths.) Exceptions include the denominator 2, which is always read half or halves; the denominator 4, which may be alternatively expressed as quarter/quarters or as fourth/fourths; and the denominator 100, which may be alternatively expressed as hundredth/hundredths or percent.
When the denominator is 1, it may be expressed in terms of wholes but is more commonly ignored, with the numerator read out as a whole number. For example, 3/1 may be described as three wholes, or simply as three. When the numerator is 1, it may be omitted (as in a tenth or each quarter).
The entire fraction may be expressed as a single composition, in which case it is hyphenated, or as a number of fractions with a numerator of one, in which case they are not. (For example, two-fifths is the fraction 2/5 and two fifths is the same fraction understood as 2 instances of 1/5.) Fractions should always be hyphenated when used as adjectives. Alternatively, a fraction may be described by reading it out as the numerator over the denominator, with the denominator expressed as a cardinal number. (For example, 3/1 may also be expressed as three over one.) The term over is used even in the case of solidus fractions, where the numbers are placed left and right of a slash mark. (For example, 1/2 may be read one-half, one half, or one over two.) Fractions with large denominators that are not powers of ten are often rendered in this fashion (e.g., 1/117 as one over one hundred seventeen), while those with denominators divisible by ten are typically read in the normal ordinal fashion (e.g., 6/1000000 as six-millionths, six millionths, or six one-millionths).
A simple fraction (also known as a common fraction or vulgar fraction)[n 1] is a rational number written as a/b, where a and b are both integers.[9] As with other fractions, the denominator (b) cannot be zero. Examples include 1/2, −8/5, −8/5, and 8/−5. The term was originally used to distinguish this type of fraction from the sexagesimal fraction used in astronomy.[10]
Common fractions can be positive or negative, and they can be proper or improper (see below). Compound fractions, complex fractions, mixed numerals, and decimal expressions (see below) are not common fractions, though, unless irrational, they can be evaluated to a common fraction.
In Unicode, precomposed fraction characters are in the Number Forms block.
Common fractions can be classified as either proper or improper. When the numerator and the denominator are both positive, the fraction is called proper if the numerator is less than the denominator, and improper otherwise.[11] The concept of an improper fraction is a late development, with the terminology deriving from the fact that fraction means "piece", so a proper fraction must be less than 1.[10] This was explained in the 17th century textbook The Ground of Arts.[12][13]
In general, a common fraction is said to be a proper fraction if the absolute value of the fraction is strictly less than one—that is, if the fraction is greater than −1 and less than 1.[14][15] It is said to be an improper fraction, or sometimes top-heavy fraction,[16] if the absolute value of the fraction is greater than or equal to 1. Examples of proper fractions are 2/3, −3/4, and 4/9, whereas examples of improper fractions are 9/4, −4/3, and 3/3. As described below, any improper fraction can be converted to a mixed number (integer plus proper fraction), and vice versa.
The reciprocal of a fraction is another fraction with the numerator and denominator exchanged. The reciprocal of 3/7, for instance, is 7/3. The product of a non-zero fraction and its reciprocal is 1, hence the reciprocal is the multiplicative inverse of a fraction. The reciprocal of a proper fraction is improper, and the reciprocal of an improper fraction not equal to 1 (that is, numerator and denominator are not equal) is a proper fraction.
When the numerator and denominator of a fraction are equal (for example, 7/7), its value is 1, and the fraction therefore is improper. Its reciprocal is identical and hence also equal to 1 and improper.
Any integer can be written as a fraction with the number one as denominator. For example, 17 can be written as 17/1, where 1 is sometimes referred to as the invisible denominator.[17] Therefore, every fraction and every integer, except for zero, has a reciprocal. For example, the reciprocal of 17 is 1/17.
A ratio is a relationship between two or more numbers that can be sometimes expressed as a fraction. Typically, a number of items are grouped and compared in a ratio, specifying numerically the relationship between each group. Ratios are expressed as "group 1 to group 2 ... to group n". For example, if a car lot had 12 vehicles, of which 6 are red, 2 are white, and 4 are yellow,
then the ratio of red to white to yellow cars is 6 to 2 to 4. The ratio of yellow cars to white cars is 4 to 2 and may be expressed as 4:2 or 2:1.
A ratio is often converted to a fraction when it is expressed as a ratio to the whole. In the above example, the ratio of yellow cars to all the cars on the lot is 4:12 or 1:3. We can convert these ratios to a fraction, and say that 4/12 of the cars or 1/3 of the cars in the lot are yellow. Therefore, if a person randomly chose one car on the lot, then there is a one in three chance or probability that it would be yellow.
A decimal fraction is a fraction whose denominator is an integer power of ten, commonly expressed using decimal notation, in which the denominator is not given explicitly but is implied by the number of digits to the right of a decimal separator. The separator can be a period ⟨.⟩, interpunct ⟨·⟩, or comma ⟨,⟩, depending on locale. (For examples, see Decimal separator.) Thus, for 0.75 the numerator is 75 and the implied denominator is 10 to the second power, namely, 100, because there are two digits to the right of the decimal separator. In decimal numbers greater than 1 (such as 3.75), the fractional part of the number is expressed by the digits to the right of the separator (with a value of 0.75 in this case). 3.75 can be written either as an improper fraction, 375/100, or as a mixed number, 3 + 75/100.
Decimal fractions can also be expressed using scientific notation with negative exponents, such as 6.023×10⁻⁷, a convenient alternative to the unwieldy 0.0000006023. The 10⁻⁷ represents a denominator of 10⁷. Dividing by 10⁷ moves the decimal point seven places to the left.
A decimal fraction with infinitely many digits to the right of the decimal separator represents an infinite series. For example, 1/3 = 0.333... represents the infinite series 3/10 + 3/100 + 3/1000 + ....
Another kind of fraction is the percentage (from Latin: per centum, meaning "per hundred", represented by the symbol %), in which the implied denominator is always 100. Thus, 51% means 51⁄100. Percentages greater than 100 or less than zero are treated in the same way, e.g. 311% means 311⁄100 and −27% means −27⁄100.
The related concept of permille, or parts per thousand (ppt), means a denominator of 1000, and this parts-per notation is commonly used with larger denominators, such as million and billion, e.g. 75 parts per million (ppm) means that the proportion is 75/1000000.
The choice between fraction and decimal notation is often a matter of taste and context. Fractions are used most often when the denominator is relatively small. By mental calculation, it is easier to multiply 16 by 3⁄16 than to do the same calculation using the fraction's decimal equivalent (0.1875). And it is more precise (exact, in fact) to multiply 15 by 1⁄3, for example, than it is to multiply 15 by any decimal approximation of one third. Monetary values are commonly expressed as decimal fractions with denominator 100, i.e., with two digits after the decimal separator, for example $3.75. However, as noted above, in pre-decimal British currency, shillings and pence were often given the form (but not the meaning) of a fraction, as, for example, "3/6", commonly read three and six, means three shillings and sixpence and has no relationship to the fraction three sixths.
A mixed number (also called a mixed fraction or mixed numeral) is the sum of a non-zero integer and a proper fraction, conventionally written by juxtaposition (or concatenation) of the two parts, without the use of an intermediate plus (+) or minus (−) sign. When the fraction is written horizontally, a space is added between the integer and fraction to separate them.
As a basic example, two entire cakes and three quarters of another cake might be written as 2 3/4 cakes, with the numeral 2 representing the whole cakes and the fraction 3/4 representing the additional partial cake juxtaposed; this is more concise than the more explicit notation 2 + 3/4 cakes. The mixed number 2 3/4 is spoken two and three quarters or two and three fourths, with the integer and fraction portions connected by the word and.[18] Subtraction or negation is applied to the entire mixed numeral, so −2 3/4 means −(2 + 3/4).
Any mixed number can be converted to an improper fraction by applying the rules of adding unlike quantities. For example, 2 + 3/4 = 8/4 + 3/4 = 11/4. Conversely, an improper fraction can be converted to a mixed number using division with remainder, with the proper fraction consisting of the remainder divided by the divisor. For example, since 4 goes into 11 twice, with 3 left over, 11/4 = 2 + 3/4.
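The division-with-remainder conversion can be sketched with Python's fractions module (the helper name is an assumption):

```python
from fractions import Fraction

def to_mixed(f: Fraction) -> tuple[int, Fraction]:
    """Split a fraction into its whole part and a proper fractional part."""
    whole, rem = divmod(f.numerator, f.denominator)
    return whole, Fraction(rem, f.denominator)
```

For instance, to_mixed(Fraction(11, 4)) returns (2, Fraction(3, 4)), matching the worked example of 11/4 = 2 + 3/4.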
In primary school, teachers often insist that every fractional result should be expressed as a mixed number.[19]Outside school, mixed numbers are commonly used for describing measurements, for instance2+1/2hours or 5 3/16inches, and remain widespread in daily life and in trades, especially in regions that do not use the decimalizedmetric system. However, scientific measurements typically use the metric system, which is based on decimal fractions, and starting from the secondary school level, mathematics pedagogy treats every fraction uniformly as arational number, the quotientp/qof integers, leaving behind the concepts ofimproper fractionandmixed number.[20]College students with years of mathematical training are sometimes confused when re-encountering mixed numbers because they are used to the convention that juxtaposition inalgebraic expressionsmeans multiplication.[21]
AnEgyptian fractionis the sum of distinct positive unit fractions, for example12+13{\displaystyle {\tfrac {1}{2}}+{\tfrac {1}{3}}}. This definition derives from the fact that theancient Egyptiansexpressed all fractions except12{\displaystyle {\tfrac {1}{2}}},23{\displaystyle {\tfrac {2}{3}}}and34{\displaystyle {\tfrac {3}{4}}}in this manner. Every positive rational number can be expanded as an Egyptian fraction. For example,57{\displaystyle {\tfrac {5}{7}}}can be written as12+16+121.{\displaystyle {\tfrac {1}{2}}+{\tfrac {1}{6}}+{\tfrac {1}{21}}.}Any positive rational number can be written as a sum of unit fractions in infinitely many ways. Two ways to write1317{\displaystyle {\tfrac {13}{17}}}are12+14+168{\displaystyle {\tfrac {1}{2}}+{\tfrac {1}{4}}+{\tfrac {1}{68}}}and13+14+16+168{\displaystyle {\tfrac {1}{3}}+{\tfrac {1}{4}}+{\tfrac {1}{6}}+{\tfrac {1}{68}}}.
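The greedy (Fibonacci–Sylvester) algorithm is one standard way — not the Egyptians' own method — to expand any proper fraction into distinct unit fractions:

```python
from fractions import Fraction
import math

def egyptian(frac):
    """Greedy expansion of a fraction in (0, 1) into distinct unit
    fractions: repeatedly subtract the largest unit fraction that
    does not exceed the remainder."""
    terms = []
    while frac > 0:
        d = math.ceil(1 / frac)      # smallest d with 1/d <= frac
        terms.append(Fraction(1, d))
        frac -= Fraction(1, d)
    return terms

print(egyptian(Fraction(5, 7)))  # [Fraction(1, 2), Fraction(1, 5), Fraction(1, 70)]
```

Note that the greedy expansion of 5/7 differs from the one given above; as the text says, every positive rational has infinitely many such expansions.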
In acomplex fraction, either the numerator, or the denominator, or both, is a fraction or a mixed number,[22][23]corresponding to division of fractions. For example,1/21/3{\displaystyle {\tfrac {1/2}{1/3}}}and(1234)/26{\displaystyle {\bigl (}12{\tfrac {3}{4}}{\bigr )}{\big /}26}are complex fractions. To interpret nested fractions writtenstackedwith horizontal fraction bars, treat shorter bars as nested inside longer bars. Complex fractions can be simplified using multiplication by the reciprocal, as described below at§ Division. For example:1/21/3=12×31=32{\displaystyle {\tfrac {1/2}{1/3}}={\tfrac {1}{2}}\times {\tfrac {3}{1}}={\tfrac {3}{2}}}
A complex fraction should never be written without an obvious marker showing which fraction is nested inside the other, as such expressions are ambiguous. For example, the expression5/10/20{\displaystyle 5/10/20}could be plausibly interpreted as either510/20=140{\displaystyle {\tfrac {5}{10}}{\big /}20={\tfrac {1}{40}}}or as5/1020=10.{\displaystyle 5{\big /}{\tfrac {10}{20}}=10.}The meaning can be made explicit by writing the fractions using distinct separators or by adding explicit parentheses, in this instance(5/10)/20{\displaystyle (5/10){\big /}20}or5/(10/20).{\displaystyle 5{\big /}(10/20).}
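The two readings can be checked with Python's `Fraction` type, which forces the grouping to be explicit:

```python
from fractions import Fraction

# The two possible readings of the ambiguous "5/10/20":
left_nested  = Fraction(5, 10) / 20   # (5/10)/20
right_nested = 5 / Fraction(10, 20)   # 5/(10/20)
print(left_nested)    # 1/40
print(right_nested)   # 10
```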
Acompound fractionis a fraction of a fraction, or any number of fractions connected with the wordof,[22][23]corresponding to multiplication of fractions. To reduce a compound fraction to a simple fraction, just carry out the multiplication (see§ Multiplication). For example,34{\displaystyle {\tfrac {3}{4}}}of57{\displaystyle {\tfrac {5}{7}}}is a compound fraction, corresponding to34×57=1528{\displaystyle {\tfrac {3}{4}}\times {\tfrac {5}{7}}={\tfrac {15}{28}}}. The terms compound fraction and complex fraction are closely related and sometimes one is used as a synonym for the other. (For example, the compound fraction34×57{\displaystyle {\tfrac {3}{4}}\times {\tfrac {5}{7}}}is equivalent to the complex fraction3/47/5{\displaystyle {\tfrac {3/4}{7/5}}}.)
Nevertheless,complex fractionandcompound fractionmay both be considered outdated[24]and are now used in no well-defined manner, partly even taken as synonymous with each other[25]or withmixed numerals.[26]They have lost their meaning as technical terms, and the attributescomplexandcompoundtend to be used in their everyday meaning ofconsisting of parts.
Like whole numbers, fractions obey thecommutative,associative, anddistributivelaws, and the rule againstdivision by zero.
Mixed-number arithmetic can be performed either by converting each mixed number to an improper fraction, or by treating each as a sum of integer and fractional parts.
Multiplying the numerator and denominator of a fraction by the same (non-zero) number results in a fraction that is equivalent to the original fraction. This is true because for any non-zero numbern{\displaystyle n}, the fractionnn{\displaystyle {\tfrac {n}{n}}}equals 1. Therefore, multiplying bynn{\displaystyle {\tfrac {n}{n}}}is the same as multiplying by one, and any number multiplied by one has the same value as the original number. By way of an example, start with the fraction12{\displaystyle {\tfrac {1}{2}}}. When the numerator and denominator are both multiplied by 2, the result is2/4, which has the same value (0.5) as1/2. To picture this visually, imagine cutting a cake into four pieces; two of the pieces together (2/4) make up half the cake (1/2).
Dividing the numerator and denominator of a fraction by the same non-zero number yields an equivalent fraction: if the numerator and the denominator of a fraction are both divisible by a number (called a factor) greater than 1, then the fraction can be reduced to an equivalent fraction with a smaller numerator and a smaller denominator. For example, if both the numerator and the denominator of the fractionab{\displaystyle {\tfrac {a}{b}}}are divisible byc{\displaystyle c}, then they can be written asa=cd{\displaystyle a=cd},b=ce{\displaystyle b=ce}, and the fraction becomescd/ce, which can be reduced by dividing both the numerator and denominator bycto give the reduced fractiond/e.
If one takes forcthegreatest common divisorof the numerator and the denominator, one gets the equivalent fraction whose numerator and denominator have the lowestabsolute values. One says that the fraction has been reduced to itslowest terms.
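Reduction to lowest terms is a one-liner with `math.gcd`; a minimal sketch:

```python
from math import gcd

def lowest_terms(a, b):
    """Divide numerator and denominator by their greatest common divisor."""
    g = gcd(a, b)
    return a // g, b // g

print(lowest_terms(63, 462))  # (3, 22)
print(lowest_terms(3, 8))     # (3, 8) -- already irreducible
```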
If the numerator and the denominator do not share any factor greater than 1, the fraction is already reduced to its lowest terms, and it is said to beirreducible,reduced, orin simplest terms. For example,39{\displaystyle {\tfrac {3}{9}}}is not in lowest terms because both 3 and 9 can be exactly divided by 3. In contrast,38{\displaystyle {\tfrac {3}{8}}}isin lowest terms—the only positive integer that goes into both 3 and 8 evenly is 1.
Using these rules, we can show that5/10=1/2=10/20=50/100, for example.
As another example, since the greatest common divisor of 63 and 462 is 21, the fraction63/462can be reduced to lowest terms by dividing the numerator and denominator by 21:63462=63÷21462÷21=322{\displaystyle {\tfrac {63}{462}}={\tfrac {63\div 21}{462\div 21}}={\tfrac {3}{22}}}
TheEuclidean algorithmgives a method for finding the greatest common divisor of any two integers.
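The algorithm itself is a short loop: replace the pair (a, b) with (b, a mod b) until the remainder is zero.

```python
def euclid_gcd(a, b):
    """Euclidean algorithm for the greatest common divisor of two integers."""
    while b:
        a, b = b, a % b
    return abs(a)

print(euclid_gcd(63, 462))  # 21
```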
Comparing fractions with the same positive denominator yields the same result as comparing the numerators:ac>bc if a>b and c>0{\displaystyle {\tfrac {a}{c}}>{\tfrac {b}{c}}{\text{ if }}a>b{\text{ and }}c>0}
If the equal denominators are negative, then the opposite result of comparing the numerators holds for the fractions:ac>bc if a<b and c<0{\displaystyle {\tfrac {a}{c}}>{\tfrac {b}{c}}{\text{ if }}a<b{\text{ and }}c<0}
If two positive fractions have the same numerator, then the fraction with the smaller denominator is the larger number. When a whole is divided into equal pieces, if fewer equal pieces are needed to make up the whole, then each piece must be larger. When two positive fractions have the same numerator, they represent the same number of parts, but in the fraction with the smaller denominator, the parts are larger.
One way to compare fractions with different numerators and denominators is to find a common denominator. To compareab{\displaystyle {\tfrac {a}{b}}}andcd{\displaystyle {\tfrac {c}{d}}}, these are converted toa⋅db⋅d{\displaystyle {\tfrac {a\cdot d}{b\cdot d}}}andb⋅cb⋅d{\displaystyle {\tfrac {b\cdot c}{b\cdot d}}}(where the dot signifies multiplication and is an alternative symbol to ×). Thenbdis a common denominator and the numeratorsadandbccan be compared. It is not necessary to determine the value of the common denominator to compare fractions – one can just compareadandbc, without evaluatingbd, e.g., comparing23{\displaystyle {\tfrac {2}{3}}}?12{\displaystyle {\tfrac {1}{2}}}gives46>36{\displaystyle {\tfrac {4}{6}}>{\tfrac {3}{6}}}.
For the more laborious question518{\displaystyle {\tfrac {5}{18}}}?417,{\displaystyle {\tfrac {4}{17}},}multiply top and bottom of each fraction by the denominator of the other fraction, to get a common denominator, yielding5×1718×17{\displaystyle {\tfrac {5\times 17}{18\times 17}}}?18×418×17{\displaystyle {\tfrac {18\times 4}{18\times 17}}}. It is not necessary to calculate18×17{\displaystyle 18\times 17}– only the numerators need to be compared. Since 5×17 (= 85) is greater than 4×18 (= 72), the result of comparing is518>417{\displaystyle {\tfrac {5}{18}}>{\tfrac {4}{17}}}.
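The cross-multiplication test can be sketched as follows (assuming both denominators are positive):

```python
def compare(a, b, c, d):
    """Compare a/b with c/d (b, d > 0) without forming the common
    denominator: a/b > c/d exactly when a*d > c*b."""
    lhs, rhs = a * d, c * b
    return (lhs > rhs) - (lhs < rhs)   # 1 if a/b > c/d, -1 if <, 0 if equal

print(compare(5, 18, 4, 17))   # 1, since 5*17 = 85 > 4*18 = 72
print(compare(1, 2, 2, 4))     # 0, the fractions are equivalent
```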
Because every negative number, including negative fractions, is less than zero, and every positive number, including positive fractions, is greater than zero, any negative fraction is less than any positive fraction. Together with the above rules, this allows all possible fractions to be compared.
The first rule of addition is that only like quantities can be added; for example, various quantities of quarters. Unlike quantities, such as adding thirds to quarters, must first be converted to like quantities as described below: Imagine a pocket containing two quarters, and another pocket containing three quarters; in total, there are five quarters. Since four quarters is equivalent to one (dollar), this can be represented as follows:24+34=54=114{\displaystyle {\tfrac {2}{4}}+{\tfrac {3}{4}}={\tfrac {5}{4}}=1{\tfrac {1}{4}}}
To add fractions containing unlike quantities (e.g. quarters and thirds), it is necessary to convert all amounts to like quantities. The type of fraction to convert to is easy to work out: simply multiply together the two denominators (bottom numbers) of the fractions. In the case of an integer, apply theinvisible denominator1.
For adding quarters to thirds, both types of fraction are converted to twelfths, thus:14+13=312+412=712{\displaystyle {\tfrac {1}{4}}+{\tfrac {1}{3}}={\tfrac {3}{12}}+{\tfrac {4}{12}}={\tfrac {7}{12}}}
Consider adding the following two quantities:35+23{\displaystyle {\tfrac {3}{5}}+{\tfrac {2}{3}}}
First, convert35{\displaystyle {\tfrac {3}{5}}}into fifteenths by multiplying both the numerator and denominator by three:35×33=915{\displaystyle {\tfrac {3}{5}}\times {\tfrac {3}{3}}={\tfrac {9}{15}}}. Since3/3equals 1, multiplication by3/3does not change the value of the fraction.
Second, convert2/3into fifteenths by multiplying both the numerator and denominator by five:23×55=1015{\displaystyle {\tfrac {2}{3}}\times {\tfrac {5}{5}}={\tfrac {10}{15}}}.
Now it can be seen that35+23{\displaystyle {\tfrac {3}{5}}+{\tfrac {2}{3}}}is equivalent to915+1015=1915=1415{\displaystyle {\tfrac {9}{15}}+{\tfrac {10}{15}}={\tfrac {19}{15}}=1{\tfrac {4}{15}}}
This method can be expressed algebraically:ab+cd=ad+cbbd{\displaystyle {\tfrac {a}{b}}+{\tfrac {c}{d}}={\tfrac {ad+cb}{bd}}}
This algebraic method always works, thereby guaranteeing that the sum of simple fractions is always again a simple fraction. However, if the single denominators contain a common factor, a smaller denominator than the product of these can be used. For example, when adding34{\displaystyle {\tfrac {3}{4}}}and56{\displaystyle {\tfrac {5}{6}}}the single denominators have a common factor 2, and therefore, instead of the denominator 24 (4 × 6), the halved denominator 12 may be used, not only reducing the denominator in the result, but also the factors in the numerator.
The smallest possible denominator is given by theleast common multipleof the single denominators, which results from dividing the plain product of the single denominators by all their common factors. This is called the least common denominator.
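Addition over the least common denominator can be sketched as:

```python
from math import gcd

def add_fractions(a, b, c, d):
    """Add a/b + c/d over the least common denominator, then reduce."""
    lcm = b * d // gcd(b, d)               # least common multiple of b and d
    num = a * (lcm // b) + c * (lcm // d)  # numerators scaled to the LCD
    g = gcd(num, lcm)
    return num // g, lcm // g

print(add_fractions(3, 4, 5, 6))  # (19, 12): 9/12 + 10/12 = 19/12
```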
The process for subtracting fractions is, in essence, the same as that of adding them: find a common denominator, and change each fraction to an equivalent fraction with the chosen common denominator. The resulting fraction will have that denominator, and its numerator will be the result of subtracting the numerators of the original fractions. For instance,
To subtract a mixed number, an extra one can be borrowed from the minuend, for instance
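The borrowing step can be sketched as follows (an illustrative helper, assuming the minuend is the larger mixed number):

```python
from fractions import Fraction

def subtract_mixed(w1, f1, w2, f2):
    """(w1 + f1) - (w2 + f2) for mixed numbers, borrowing one whole
    from the minuend when its fractional part is too small."""
    if f1 < f2:
        w1 -= 1        # borrow an extra one from the whole part...
        f1 += 1        # ...and add it to the fractional part
    return w1 - w2, f1 - f2

# 3 1/4 minus 1 3/4: borrow to get 2 5/4, then subtract.
print(subtract_mixed(3, Fraction(1, 4), 1, Fraction(3, 4)))  # (1, Fraction(1, 2))
```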
To multiply fractions, multiply the numerators and multiply the denominators. Thus:
To explain the process, consider one third of one quarter. Using the example of a cake, if three small slices of equal size make up a quarter, and four quarters make up a whole, twelve of these small, equal slices make up a whole. Therefore, a third of a quarter is a twelfth. Now consider the numerators. The first fraction, two thirds, is twice as large as one third. Since one third of a quarter is one twelfth, two thirds of a quarter is two twelfths. The second fraction, three quarters, is three times as large as one quarter, so two thirds of three quarters is three times as large as two thirds of one quarter. Thus two thirds times three quarters is six twelfths.
A shortcut for multiplying fractions is calledcancellation. Effectively the answer is reduced to lowest terms during multiplication. For example:23×34=11×12=12{\displaystyle {\tfrac {2}{3}}\times {\tfrac {3}{4}}={\tfrac {1}{1}}\times {\tfrac {1}{2}}={\tfrac {1}{2}}}
A two is a commonfactorin both the numerator of the left fraction and the denominator of the right and is divided out of both. Three is a common factor of the left denominator and right numerator and is divided out of both.
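Cancellation can be sketched by dividing out the cross-wise common factors before multiplying:

```python
from math import gcd

def multiply_with_cancellation(a, b, c, d):
    """Multiply a/b by c/d, cancelling common factors across the two
    fractions first so the intermediate products stay small."""
    g1 = gcd(a, d)   # common factor of left numerator and right denominator
    g2 = gcd(c, b)   # common factor of right numerator and left denominator
    return (a // g1) * (c // g2), (b // g2) * (d // g1)

print(multiply_with_cancellation(2, 3, 3, 4))  # (1, 2), i.e. 2/3 x 3/4 = 1/2
```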
Since a whole number can be rewritten as itself divided by 1, normal fraction multiplication rules can still apply. For example,6×34=61×34=184=412{\displaystyle 6\times {\tfrac {3}{4}}={\tfrac {6}{1}}\times {\tfrac {3}{4}}={\tfrac {18}{4}}=4{\tfrac {1}{2}}}
This method works because the fraction 6/1 means six equal parts, each one of which is a whole.
The product of mixed numbers can be computed by converting each to an improper fraction.[27]For example:
Alternately, mixed numbers can be treated as sums, andmultiplied as binomials. In this example,
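Both routes can be checked with `Fraction` arithmetic; the mixed numbers here (2 3/4 and 1 1/2) are chosen purely for illustration:

```python
from fractions import Fraction

a_whole, a_frac = 2, Fraction(3, 4)   # 2 3/4
b_whole, b_frac = 1, Fraction(1, 2)   # 1 1/2

# Route 1: convert each mixed number to an improper fraction first.
improper = (a_whole + a_frac) * (b_whole + b_frac)   # 11/4 * 3/2

# Route 2: treat each as a sum and expand as binomials.
binomial = (a_whole * b_whole + a_whole * b_frac
            + a_frac * b_whole + a_frac * b_frac)

print(improper, binomial)  # 33/8 33/8
```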
To divide a fraction by a whole number, you may either divide the numerator by the number, if it goes evenly into the numerator, or multiply the denominator by the number. For example,103÷5{\displaystyle {\tfrac {10}{3}}\div 5}equals23{\displaystyle {\tfrac {2}{3}}}and also equals103⋅5=1015{\displaystyle {\tfrac {10}{3\cdot 5}}={\tfrac {10}{15}}}, which reduces to23{\displaystyle {\tfrac {2}{3}}}. To divide a number by a fraction, multiply that number by thereciprocalof that fraction. Thus,12÷34=12×43=1⋅42⋅3=23{\displaystyle {\tfrac {1}{2}}\div {\tfrac {3}{4}}={\tfrac {1}{2}}\times {\tfrac {4}{3}}={\tfrac {1\cdot 4}{2\cdot 3}}={\tfrac {2}{3}}}.
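Both rules can be confirmed with `Fraction` arithmetic:

```python
from fractions import Fraction

# Dividing by a fraction equals multiplying by its reciprocal:
print(Fraction(1, 2) / Fraction(3, 4))   # 2/3
print(Fraction(1, 2) * Fraction(4, 3))   # 2/3, the same result

# Dividing 10/3 by 5 multiplies the denominator (then auto-reduces):
print(Fraction(10, 3) / 5)               # 2/3
```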
To change a common fraction to decimal notation, do a long division of the numerator by the denominator (this is idiomatically also phrased as "divide the denominator into the numerator"), and round the result to the desired precision. For example, to change1/4to a decimal expression, divide1by4("4into1"), to obtain exactly0.25. To change1/3to a decimal expression, divide1.000...by3("3into1.000..."), and stop when the desired precision is obtained, e.g., at four places after thedecimal separator(ten-thousandths) as0.3333. The fraction1/4is expressed exactly with only two digits after the decimal separator, while the fraction1/3cannot be written exactly as a decimal with a finite number of digits. A decimal expression can be converted to a fraction by removing the decimal separator, using the result as the numerator, and using1followed by the same number of zeroes as there are digits to the right of the decimal separator as the denominator. Thus,1.23=123100.{\displaystyle 1.23={\tfrac {123}{100}}.}
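These conversions can be checked with Python's `fractions.Fraction`, which accepts decimal strings directly:

```python
from fractions import Fraction

# A fraction with a terminating decimal expansion converts exactly:
print(float(Fraction(1, 4)))             # 0.25
# 1/3 has no finite decimal expansion; round to four places instead:
print(round(float(Fraction(1, 3)), 4))   # 0.3333
# A decimal string converts back to digits over a power of ten:
print(Fraction("1.23"))                  # 123/100
```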
Decimal numbers, while arguably more useful to work with when performing calculations, sometimes lack the precision that common fractions have. Sometimes an infiniterepeating decimalis required to reach the same precision. Thus, it is often useful to convert repeating digits into fractions.
A conventional way to indicate a repeating decimal is to place a bar (known as avinculum) over the digits that repeat, for example0.789¯=0.789789789...{\displaystyle 0.{\overline {789}}=0.789789789\ldots }For repeating patterns that begin immediately after the decimal point, the result of the conversion is the fraction with the pattern as a numerator, and the same number of nines as a denominator. For example:0.789¯=789999{\displaystyle 0.{\overline {789}}={\tfrac {789}{999}}}
Ifleading zerosprecede the pattern, the nines are suffixed by the same number oftrailing zeros:0.0789¯=7899990{\displaystyle 0.0{\overline {789}}={\tfrac {789}{9990}}}
If a non-repeating set of digits precedes the pattern (such as 0.1523987, with 987 repeating), one may write the number as the sum of the non-repeating and repeating parts, respectively:0.1523+0.0000987¯{\displaystyle 0.1523+0.0000{\overline {987}}}Then, convert both parts to fractions, and add them using the methods described above:152310000+9879990000=15224649990000{\displaystyle {\tfrac {1523}{10000}}+{\tfrac {987}{9990000}}={\tfrac {1522464}{9990000}}}
Alternatively, algebra can be used, such as below:
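The algebraic method generalizes to a small function: for a decimal of the form 0.&lt;prefix&gt;&lt;pattern repeating&gt;, the denominator is as many nines as pattern digits followed by as many zeros as prefix digits (a sketch; the function name is illustrative):

```python
from fractions import Fraction

def repeating_to_fraction(prefix, pattern):
    """Convert 0.<prefix><pattern pattern ...> to an exact fraction,
    e.g. prefix='1523', pattern='987' for 0.1523987987987..."""
    nines = 10 ** len(pattern) - 1    # e.g. 999 for a 3-digit pattern
    shift = 10 ** len(prefix)         # shifts past the non-repeating part
    return Fraction(int(prefix or "0") * nines + int(pattern), nines * shift)

print(repeating_to_fraction("", "789"))      # 263/333 (= 789/999 reduced)
print(repeating_to_fraction("1523", "987"))  # 0.1523987987... as a fraction
```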
In addition to being of great practical importance, fractions are also studied by mathematicians, who check that the rules for fractions given above areconsistent and reliable. Mathematicians define a fraction as an ordered pair(a,b){\displaystyle (a,b)}ofintegersa{\displaystyle a}andb≠0,{\displaystyle b\neq 0,}for which the operationsaddition,subtraction,multiplication, anddivisionare defined as follows:[28](a,b)+(c,d)=(ad+bc,bd),(a,b)−(c,d)=(ad−bc,bd),(a,b)⋅(c,d)=(ac,bd),(a,b)÷(c,d)=(ad,bc)(c≠0){\displaystyle (a,b)+(c,d)=(ad+bc,bd),\quad (a,b)-(c,d)=(ad-bc,bd),\quad (a,b)\cdot (c,d)=(ac,bd),\quad (a,b)\div (c,d)=(ad,bc)\ (c\neq 0)}
These definitions agree in every case with the definitions given above; only the notation is different. Alternatively, instead of defining subtraction and division as operations, theinversefractions with respect to addition and multiplication might be defined as:−(a,b)=(−a,b),(a,b)−1=(b,a)(a≠0){\displaystyle -(a,b)=(-a,b),\quad (a,b)^{-1}=(b,a)\ (a\neq 0)}
Furthermore, therelation, specified as(a,b)∼(c,d)⟺ad=bc,{\displaystyle (a,b)\sim (c,d)\iff ad=bc,}
is anequivalence relationof fractions. Each fraction from one equivalence class may be considered as arepresentativefor the whole class, and each whole class may be considered as one abstract fraction. This equivalence is preserved by the above defined operations, i.e., the results of operating on fractions are independent of the selection of representatives from their equivalence class. Formally, for addition of fractions(a,b)∼(a′,b′) and (c,d)∼(c′,d′)⟹(a,b)+(c,d)∼(a′,b′)+(c′,d′){\displaystyle (a,b)\sim (a',b'){\text{ and }}(c,d)\sim (c',d')\implies (a,b)+(c,d)\sim (a',b')+(c',d')}
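These definitions can be realized directly in code, with equality implementing the cross-multiplication equivalence relation (a didactic sketch, not a replacement for `fractions.Fraction`):

```python
class Frac:
    """A fraction as an ordered pair (a, b) of integers with b != 0;
    equality is the cross-multiplication equivalence relation."""
    def __init__(self, a, b):
        assert b != 0
        self.a, self.b = a, b

    def __add__(self, o):
        return Frac(self.a * o.b + o.a * self.b, self.b * o.b)

    def __mul__(self, o):
        return Frac(self.a * o.a, self.b * o.b)

    def __eq__(self, o):
        return self.a * o.b == o.a * self.b

# Distinct pairs, but one and the same abstract fraction:
print(Frac(1, 2) == Frac(2, 4))                            # True
# The operations respect the equivalence (representative-independent):
print(Frac(1, 2) + Frac(1, 3) == Frac(2, 4) + Frac(2, 6))  # True
```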
and similarly for the other operations.
In the case of fractions of integers, the fractionsa/bwithaandbcoprimeandb> 0are often taken as uniquely determined representatives for theirequivalentfractions, which are considered to be thesamerational number. This way the fractions of integers make up the field of the rational numbers.
More generally,aandbmay be elements of anyintegral domainR, in which case a fraction is an element of thefield of fractionsofR. For example,polynomialsin one indeterminate, with coefficients from some integral domainD, are themselves an integral domain, call itP. So foraandbelements ofP, the generatedfield of fractionsis the field ofrational fractions(also known as the field ofrational functions).
An algebraic fraction is the indicatedquotientof twoalgebraic expressions. As with fractions of integers, the denominator of an algebraic fraction cannot be zero. Two examples of algebraic fractions are3xx2+2x−3{\displaystyle {\frac {3x}{x^{2}+2x-3}}}andx+2x2−3{\displaystyle {\frac {\sqrt {x+2}}{x^{2}-3}}}. Algebraic fractions are subject to the samefieldproperties as arithmetic fractions.
If the numerator and the denominator arepolynomials, as in3xx2+2x−3{\displaystyle {\frac {3x}{x^{2}+2x-3}}}, the algebraic fraction is called arational fraction(orrational expression). Anirrational fractionis one that is not rational, as, for example, one that contains the variable under a fractional exponent or root, as inx+2x2−3{\displaystyle {\frac {\sqrt {x+2}}{x^{2}-3}}}.
The terminology used to describe algebraic fractions is similar to that used for ordinary fractions. For example, an algebraic fraction is in lowest terms if the only factors common to the numerator and the denominator are 1 and −1. An algebraic fraction whose numerator or denominator, or both, contain a fraction, such as1+1x1−1x{\displaystyle {\frac {1+{\tfrac {1}{x}}}{1-{\tfrac {1}{x}}}}}, is called acomplex fraction.
The field of rational numbers is thefield of fractionsof the integers, while the integers themselves are not a field but rather anintegral domain. Similarly, therational fractionswith coefficients in afieldform the field of fractions of polynomials with coefficients in that field. Considering the rational fractions with real coefficients,radical expressionsrepresenting numbers, such as2/2{\displaystyle \textstyle {\sqrt {2}}/2}, are also rational fractions, as is a transcendental number such asπ/2,{\textstyle \pi /2,}since all of2,π,{\displaystyle {\sqrt {2}},\pi ,}and2{\displaystyle 2}arereal numbers, and thus considered as coefficients. These same numbers, however, are not rational fractions withintegercoefficients.
The termpartial fractionis used when decomposing rational fractions into sums of simpler fractions. For example, the rational fraction2xx2−1{\displaystyle {\frac {2x}{x^{2}-1}}}can be decomposed as the sum of two fractions:1x+1+1x−1{\displaystyle {\frac {1}{x+1}}+{\frac {1}{x-1}}}. This is useful for the computation ofantiderivativesofrational functions(seepartial fraction decompositionfor more).
A fraction may also containradicalsin the numerator or the denominator. If the denominator contains radicals, it can be helpful torationalizeit (compareSimplified form of a radical expression), especially if further operations, such as adding or comparing that fraction to another, are to be carried out. It is also more convenient if division is to be done manually. When the denominator is amonomialsquare root, it can be rationalized by multiplying both the top and the bottom of the fraction by the denominator:ab=ab⋅bb=abb{\displaystyle {\tfrac {a}{\sqrt {b}}}={\tfrac {a}{\sqrt {b}}}\cdot {\tfrac {\sqrt {b}}{\sqrt {b}}}={\tfrac {a{\sqrt {b}}}{b}}}
The process of rationalization ofbinomialdenominators involves multiplying the top and the bottom of a fraction by theconjugateof the denominator so that the denominator becomes a rational number. For example:
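A standard worked instance of the conjugate technique (an illustrative example, not necessarily the one originally displayed here):

```latex
\frac{1}{1+\sqrt{2}}
  = \frac{1}{1+\sqrt{2}}\cdot\frac{1-\sqrt{2}}{1-\sqrt{2}}
  = \frac{1-\sqrt{2}}{1-2}
  = \sqrt{2}-1 .
```

The denominator (1+√2)(1−√2) = 1 − 2 = −1 is rational, as intended.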
Even if this process results in the numerator being irrational, like in the examples above, the process may still facilitate subsequent manipulations by reducing the number of irrationals one has to work with in the denominator.
In computer displays andtypography, simple fractions are sometimes printed as a single character, e.g. ½ (one half). See the article onNumber Formsfor information on doing this inUnicode.
Scientific publishing distinguishes four ways to set fractions, together with guidelines on use:[29]
The earliest fractions werereciprocalsofintegers: ancient symbols representing one part of two, one part of three, one part of four, and so on.[32]The Egyptians usedEgyptian fractionsc.1000BC. About 4000 years ago, Egyptians divided with fractions using slightly different methods. They used least common multiples withunit fractions. Their methods gave the same answer as modern methods.[33]The Egyptians also had a different notation fordyadic fractions, used for certain systems of weights and measures.[34]
TheGreeksused unit fractions and (later)simple continued fractions.Followersof theGreekphilosopherPythagoras(c.530BC) discovered that thesquare root of twocannot be expressed as a fraction of integers. (This is commonly though probably erroneously ascribed toHippasusofMetapontum, who is said to have been executed for revealing this fact.) In150 BCJainmathematicians inIndiawrote theSthananga Sutra, which contains work on the theory of numbers, arithmetical operations, and operations with fractions.
A modern expression of fractions known asbhinnarasiseems to have originated in India in the work ofAryabhatta(c.AD 500),[citation needed]Brahmagupta(c.628), andBhaskara(c.1150).[35]Their works form fractions by placing the numerators (Sanskrit:amsa) over the denominators (cheda), but without a bar between them.[35]InSanskrit literature, fractions were always expressed as an addition to or subtraction from an integer.[citation needed]The integer was written on one line and the fraction in its two parts on the next line. If the fraction was marked by a small circle⟨०⟩or cross⟨+⟩, it is subtracted from the integer; if no such sign appears, it is understood to be added. For example,Bhaskara Iwrites:[36]
which is the equivalent of
and would be written in modern notation as 61/4, 11/5, and 2 −1/9(i.e., 17/9).
The horizontal fraction bar is first attested in the work ofAl-Hassār(fl.1200),[35]aMuslim mathematicianfromFez,Morocco, who specialized inIslamic inheritance jurisprudence. In his discussion he writes: "for example, if you are told to write three-fifths and a third of a fifth, write thus,3153{\displaystyle {\frac {3\quad 1}{5\quad 3}}}".[37]The same fractional notation—with the fraction given before the integer[35]—appears soon after in the work ofLeonardo Fibonacciin the 13th century.[38]
In discussing the origins ofdecimal fractions,Dirk Jan Struikstates:[39]
The introduction of decimal fractions as a common computational practice can be dated back to theFlemishpamphletDe Thiende, published atLeydenin 1585, together with a French translation,La Disme, by the Flemish mathematicianSimon Stevin(1548–1620), then settled in the NorthernNetherlands. It is true that decimal fractions were used by theChinesemany centuries before Stevin and that the Persian astronomerAl-Kāshīused both decimal andsexagesimalfractions with great ease in hisKey to arithmetic(Samarkand, early fifteenth century).[40]
While thePersianmathematicianJamshīd al-Kāshīclaimed to have discovered decimal fractions himself in the 15th century, J. Lennart Berggren notes that he was mistaken, as decimal fractions were first used five centuries before him by theBaghdadimathematicianAbu'l-Hasan al-Uqlidisias early as the 10th century.[41][n 3]
Inprimary schools, fractions have been demonstrated throughCuisenaire rods, Fraction Bars, fraction strips, fraction circles, paper (for folding or cutting),pattern blocks, pie-shaped pieces, plastic rectangles, grid paper,dot paper,geoboards, counters and computer software.
Several states in the United States have adopted learning trajectories from theCommon Core State Standards Initiative's guidelines for mathematics education. Aside from sequencing the learning of fractions and operations with fractions, the document provides the following definition of a fraction: "A number expressible in the forma{\displaystyle a}⁄b{\displaystyle b}wherea{\displaystyle a}is a whole number andb{\displaystyle b}is a positive whole number. (The wordfractionin these standards always refers to a non-negative number.)"[43]The document itself also refers to negative fractions.
Weisstein, Eric W. (2003).CRC Concise Encyclopedia of Mathematics(2nd ed.). Chapman & Hall/CRC. p. 1925.ISBN1-58488-347-2.
https://en.wikipedia.org/wiki/Fraction
Inmathematics, ahyperbolais a type ofsmoothcurve lying in a plane, defined by its geometric properties or byequationsfor which it is the solution set. A hyperbola has two pieces, calledconnected componentsor branches, that are mirror images of each other and resemble two infinitebows. The hyperbola is one of the three kinds ofconic section, formed by the intersection of aplaneand a doublecone. (The other conic sections are theparabolaand theellipse. Acircleis a special case of an ellipse.) If the plane intersects both halves of the double cone but does not pass through the apex of the cones, then the conic is a hyperbola.
Besides being a conic section, a hyperbola can arise as thelocusof points whose difference of distances to two fixedfociis constant, as a curve for each point of which the rays to two fixed foci arereflectionsacross thetangent lineat that point, or as the solution of certain bivariatequadratic equationssuch as thereciprocalrelationshipxy=1.{\displaystyle xy=1.}[1]In practical applications, a hyperbola can arise as the path followed by the shadow of the tip of asundial'sgnomon, the shape of anopen orbitsuch as that of a celestial object exceeding theescape velocityof the nearest gravitational body, or thescattering trajectoryof asubatomic particle, among others.
Eachbranchof the hyperbola has two arms which become straighter (lower curvature) further out from the center of the hyperbola. Diagonally opposite arms, one from each branch, tend in the limit to a common line, called theasymptoteof those two arms. So there are two asymptotes, whose intersection is at the center ofsymmetryof the hyperbola, which can be thought of as the mirror point about which each branch reflects to form the other branch. In the case of the curvey(x)=1/x{\displaystyle y(x)=1/x}the asymptotes are the twocoordinate axes.[1]
Hyperbolas share many of the ellipses' analytical properties such aseccentricity,focus, anddirectrix. Typically the correspondence can be made with nothing more than a change of sign in some term. Many othermathematical objectshave their origin in the hyperbola, such ashyperbolic paraboloids(saddle surfaces),hyperboloids("wastebaskets"),hyperbolic geometry(Lobachevsky's celebratednon-Euclidean geometry),hyperbolic functions(sinh, cosh, tanh, etc.), andgyrovector spaces(a geometry proposed for use in bothrelativityandquantum mechanicswhich is notEuclidean).
The word "hyperbola" derives from theGreekὑπερβολή, meaning "over-thrown" or "excessive", from which the English termhyperbolealso derives. Hyperbolae were discovered byMenaechmusin his investigations of the problem ofdoubling the cube, but were then called sections of obtuse cones.[2]The term hyperbola is believed to have been coined byApollonius of Perga(c.262– c.190 BC) in his definitive work on theconic sections, theConics.[3]The names of the other two general conic sections, theellipseand theparabola, derive from the corresponding Greek words for "deficient" and "applied"; all three names are borrowed from earlier Pythagorean terminology which referred to a comparison of the side of rectangles of fixed area with a given line segment. The rectangle could be "applied" to the segment (meaning, have an equal length), be shorter than the segment or exceed the segment.[4]
A hyperbola can be defined geometrically as asetof points (locus of points) in the Euclidean plane:
The midpointM{\displaystyle M}of the line segment joining the foci is called thecenterof the hyperbola.[6]The line through the foci is called themajor axis. It contains theverticesV1,V2{\displaystyle V_{1},V_{2}}, which have distancea{\displaystyle a}to the center. The distancec{\displaystyle c}of the foci to the center is called thefocal distanceorlinear eccentricity. The quotientca{\displaystyle {\tfrac {c}{a}}}is theeccentricitye{\displaystyle e}.
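For the hyperbola in standard positionx2/a2−y2/b2=1{\displaystyle x^{2}/a^{2}-y^{2}/b^{2}=1}, the linear eccentricity satisfies c² = a² + b² (a standard relation, introduced only later in most treatments), so both quantities follow from the semi-axes; a minimal Python sketch:

```python
import math

def hyperbola_parameters(a, b):
    """Linear eccentricity c = sqrt(a^2 + b^2) and eccentricity
    e = c/a > 1 for the hyperbola x^2/a^2 - y^2/b^2 = 1."""
    c = math.hypot(a, b)
    return c, c / a

c, e = hyperbola_parameters(3.0, 4.0)
print(c, e)  # 5.0 1.6666666666666667
```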
The equation||PF2|−|PF1||=2a{\displaystyle \left|\left|PF_{2}\right|-\left|PF_{1}\right|\right|=2a}can be viewed in a different way (see diagram):Ifc2{\displaystyle c_{2}}is the circle with midpointF2{\displaystyle F_{2}}and radius2a{\displaystyle 2a}, then the distance of a pointP{\displaystyle P}of the right branch to the circlec2{\displaystyle c_{2}}equals the distance to the focusF1{\displaystyle F_{1}}:|PF1|=|Pc2|.{\displaystyle |PF_{1}|=|Pc_{2}|.}c2{\displaystyle c_{2}}is called thecircular directrix(related to focusF2{\displaystyle F_{2}}) of the hyperbola.[7][8]In order to get the left branch of the hyperbola, one has to use the circular directrix related toF1{\displaystyle F_{1}}. This property should not be confused with the definition of a hyperbola with help of a directrix (line) below.
If thexy-coordinate system isrotatedabout the origin by the angle+45∘{\displaystyle +45^{\circ }}and new coordinatesξ,η{\displaystyle \xi ,\eta }are assigned, thenx=ξ+η2,y=−ξ+η2{\displaystyle x={\tfrac {\xi +\eta }{\sqrt {2}}},\;y={\tfrac {-\xi +\eta }{\sqrt {2}}}}.The rectangular hyperbolax2−y2a2=1{\displaystyle {\tfrac {x^{2}-y^{2}}{a^{2}}}=1}(whosesemi-axesare equal) has the new equation2ξηa2=1{\displaystyle {\tfrac {2\xi \eta }{a^{2}}}=1}.
Solving forη{\displaystyle \eta }yieldsη=a2/2ξ.{\displaystyle \eta ={\tfrac {a^{2}/2}{\xi }}\ .}
Thus, in anxy-coordinate system the graph of a functionf:x↦Ax,A>0,{\displaystyle f:x\mapsto {\tfrac {A}{x}},\;A>0\;,}with equationy=Ax,A>0,{\displaystyle y={\frac {A}{x}}\;,A>0\;,}is arectangular hyperbolaentirely in the first and thirdquadrantswith
A rotation of the original hyperbola by−45∘{\displaystyle -45^{\circ }}results in a rectangular hyperbola entirely in the second and fourth quadrants, with the same asymptotes, center, semi-latus rectum, radius of curvature at the vertices, linear eccentricity, and eccentricity as for the case of+45∘{\displaystyle +45^{\circ }}rotation, with equationy=−Ax,A>0,{\displaystyle y=-{\frac {A}{x}}\;,~~A>0\;,}
Shifting the hyperbola with equationy=Ax,A≠0,{\displaystyle y={\frac {A}{x}},\ A\neq 0\ ,}so that the new center is(c0,d0){\displaystyle (c_{0},d_{0})},yields the new equationy=Ax−c0+d0,{\displaystyle y={\frac {A}{x-c_{0}}}+d_{0}\;,}and the new asymptotes arex=c0{\displaystyle x=c_{0}}andy=d0{\displaystyle y=d_{0}}. The shape parametersa,b,p,c,e{\displaystyle a,b,p,c,e}remain unchanged.
The two lines at distanced=a2c{\textstyle d={\frac {a^{2}}{c}}}from the center and parallel to the minor axis are calleddirectricesof the hyperbola (see diagram).
For an arbitrary pointP{\displaystyle P}of the hyperbola the quotient of the distance to one focus and to the corresponding directrix (see diagram) is equal to the eccentricity:|PF1||Pl1|=|PF2||Pl2|=e=ca.{\displaystyle {\frac {|PF_{1}|}{|Pl_{1}|}}={\frac {|PF_{2}|}{|Pl_{2}|}}=e={\frac {c}{a}}\,.}The proof for the pairF1,l1{\displaystyle F_{1},l_{1}}follows from the fact that|PF1|2=(x−c)2+y2,|Pl1|2=(x−a2c)2{\displaystyle |PF_{1}|^{2}=(x-c)^{2}+y^{2},\ |Pl_{1}|^{2}=\left(x-{\tfrac {a^{2}}{c}}\right)^{2}}andy2=b2a2x2−b2{\displaystyle y^{2}={\tfrac {b^{2}}{a^{2}}}x^{2}-b^{2}}satisfy the equation|PF1|2−c2a2|Pl1|2=0.{\displaystyle |PF_{1}|^{2}-{\frac {c^{2}}{a^{2}}}|Pl_{1}|^{2}=0\ .}The second case is proven analogously.
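The focus–directrix ratio can be verified numerically. The following Python sketch (the sample values a = 3, b = 2 and the parameter t are arbitrary choices) checks that |PF1|/|Pl1| equals the eccentricity for a point of the right branch:

```python
import math

# Canonical hyperbola x^2/a^2 - y^2/b^2 = 1 with arbitrary sample semi-axes.
a, b = 3.0, 2.0
c = math.hypot(a, b)          # linear eccentricity, c^2 = a^2 + b^2
e = c / a                     # eccentricity

# A point on the right branch via the cosh/sinh parametrization.
t = 0.7
x, y = a * math.cosh(t), b * math.sinh(t)

# Focus F1 = (c, 0) and its directrix l1: x = a^2/c.
r1 = math.hypot(x - c, y)     # distance |PF1|
d1 = abs(x - a**2 / c)        # distance |Pl1| to the directrix

ratio = r1 / d1
print(ratio, e)               # both ~1.2019
```

The same check with F2 = (−c, 0) and the directrix x = −a²/c gives the same ratio, as the theorem states.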
Theinverse statementis also true and can be used to define a hyperbola (in a manner similar to the definition of a parabola):
For any pointF{\displaystyle F}(focus), any linel{\displaystyle l}(directrix) not throughF{\displaystyle F}and anyreal numbere{\displaystyle e}withe>1{\displaystyle e>1}the set of points (locus of points), for which the quotient of the distances to the point and to the line ise{\displaystyle e}H={P||PF||Pl|=e}{\displaystyle H=\left\{P\,{\Biggr |}\,{\frac {|PF|}{|Pl|}}=e\right\}}is a hyperbola.
(The choicee=1{\displaystyle e=1}yields aparabolaand ife<1{\displaystyle e<1}anellipse.)
LetF=(f,0),e>0{\displaystyle F=(f,0),\ e>0}and assume(0,0){\displaystyle (0,0)}is a point on the curve.
The directrixl{\displaystyle l}has equationx=−fe{\displaystyle x=-{\tfrac {f}{e}}}. WithP=(x,y){\displaystyle P=(x,y)}, the relation|PF|2=e2|Pl|2{\displaystyle |PF|^{2}=e^{2}|Pl|^{2}}produces the equation(x−f)2+y2=(ex+f)2{\displaystyle (x-f)^{2}+y^{2}=(ex+f)^{2}}, that is,x2(e2−1)+2xf(1+e)−y2=0.{\displaystyle x^{2}(e^{2}-1)+2xf(1+e)-y^{2}=0.}
The substitutionp=f(1+e){\displaystyle p=f(1+e)}yieldsx2(e2−1)+2px−y2=0.{\displaystyle x^{2}(e^{2}-1)+2px-y^{2}=0.}This is the equation of anellipse(e<1{\displaystyle e<1}) or aparabola(e=1{\displaystyle e=1}) or ahyperbola(e>1{\displaystyle e>1}). All of these non-degenerate conics have the origin as a vertex in common (see diagram).
Ife>1{\displaystyle e>1}, introduce new parametersa,b{\displaystyle a,b}so thate2−1=b2a2,andp=b2a{\displaystyle e^{2}-1={\tfrac {b^{2}}{a^{2}}},{\text{ and }}\ p={\tfrac {b^{2}}{a}}}, and then the equation above becomes(x+a)2a2−y2b2=1,{\displaystyle {\frac {(x+a)^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1\,,}which is the equation of a hyperbola with center(−a,0){\displaystyle (-a,0)}, thex-axis as major axis, and semi-major and semi-minor axesa,b{\displaystyle a,b}.
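The equivalence of the vertex-form equation and the shifted canonical form can be checked numerically. In this sketch the values of e > 1 and p are arbitrary samples; a and b are recovered from them as above:

```python
import math

# Check that x^2 (e^2 - 1) + 2 p x - y^2 = 0 (vertex at the origin) matches
# (x + a)^2/a^2 - y^2/b^2 = 1 with e^2 - 1 = b^2/a^2 and p = b^2/a.
e, p = 1.5, 2.0               # arbitrary sample values, e > 1
a = p / (e**2 - 1)            # from p = b^2/a and b^2 = a^2 (e^2 - 1)
b = math.sqrt(p * a)

lhs_values = []
for x in (0.5, 1.0, 4.0):
    y = math.sqrt(x**2 * (e**2 - 1) + 2 * p * x)   # solve the first form for y
    lhs_values.append((x + a)**2 / a**2 - y**2 / b**2)
print(lhs_values)             # each ~1.0
```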
Because ofc⋅a2c=a2{\displaystyle c\cdot {\tfrac {a^{2}}{c}}=a^{2}}pointL1{\displaystyle L_{1}}of directrixl1{\displaystyle l_{1}}(see diagram) and focusF1{\displaystyle F_{1}}are inverse with respect to thecircle inversionat circlex2+y2=a2{\displaystyle x^{2}+y^{2}=a^{2}}(in diagram green). Hence pointE1{\displaystyle E_{1}}can be constructed using thetheorem of Thales(not shown in the diagram). The directrixl1{\displaystyle l_{1}}is the perpendicular to lineF1F2¯{\displaystyle {\overline {F_{1}F_{2}}}}through pointE1{\displaystyle E_{1}}.
Alternative construction ofE1{\displaystyle E_{1}}: Calculation shows that pointE1{\displaystyle E_{1}}is the intersection of the asymptote with its perpendicular throughF1{\displaystyle F_{1}}(see diagram).
The intersection of an upright double cone by a plane not through the vertex with slope greater than the slope of the lines on the cone is a hyperbola (see diagram: red curve). In order to prove the defining property of a hyperbola (see above) one uses twoDandelin spheresd1,d2{\displaystyle d_{1},d_{2}}, which are spheres that touch the cone along circlesc1{\displaystyle c_{1}},c2{\displaystyle c_{2}}and the intersecting (hyperbola) plane at pointsF1{\displaystyle F_{1}}andF2{\displaystyle F_{2}}.It turns out:F1,F2{\displaystyle F_{1},F_{2}}are thefociof the hyperbola.
The definition of a hyperbola by its foci and its circular directrices (see above) can be used for drawing an arc of it with help of pins, a string and a ruler:[9]
The following method to construct single points of a hyperbola relies on theSteiner generation of a non-degenerate conic section:
For the generation of points of the hyperbolax2a2−y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1}one uses the pencils at the verticesV1,V2{\displaystyle V_{1},V_{2}}. LetP=(x0,y0){\displaystyle P=(x_{0},y_{0})}be a point of the hyperbola andA=(a,y0),B=(x0,0){\displaystyle A=(a,y_{0}),B=(x_{0},0)}. The line segmentBP¯{\displaystyle {\overline {BP}}}is divided into n equally-spaced segments, and this division is projected parallel to the diagonalAB{\displaystyle AB}onto the line segmentAP¯{\displaystyle {\overline {AP}}}(see diagram). The parallel projection is part of the required projective mapping between the pencils atV1{\displaystyle V_{1}}andV2{\displaystyle V_{2}}. The intersection points of any two related linesV1Ai{\displaystyle V_{1}A_{i}}andV2Bi{\displaystyle V_{2}B_{i}}are points of the uniquely defined hyperbola.
Remarks:
A hyperbola with equationy=ax−b+c,a≠0{\displaystyle y={\tfrac {a}{x-b}}+c,\ a\neq 0}is uniquely determined by three points(x1,y1),(x2,y2),(x3,y3){\displaystyle (x_{1},y_{1}),\;(x_{2},y_{2}),\;(x_{3},y_{3})}with differentx- andy-coordinates. A simple way to determine the shape parametersa,b,c{\displaystyle a,b,c}uses theinscribed angle theoremfor hyperbolas:
Analogous to theinscribed angletheorem for circles one gets the
Inscribed angle theorem for hyperbolas[10][11]—For four pointsPi=(xi,yi),i=1,2,3,4,xi≠xk,yi≠yk,i≠k{\displaystyle P_{i}=(x_{i},y_{i}),\ i=1,2,3,4,\ x_{i}\neq x_{k},y_{i}\neq y_{k},i\neq k}(see diagram) the following statement is true:
The four points are on a hyperbola with equationy=ax−b+c{\displaystyle y={\tfrac {a}{x-b}}+c}if and only if the angles atP3{\displaystyle P_{3}}andP4{\displaystyle P_{4}}are equal in the sense of the measurement above. That means if(y4−y1)(x4−x1)(x4−x2)(y4−y2)=(y3−y1)(x3−x1)(x3−x2)(y3−y2){\displaystyle {\frac {(y_{4}-y_{1})}{(x_{4}-x_{1})}}{\frac {(x_{4}-x_{2})}{(y_{4}-y_{2})}}={\frac {(y_{3}-y_{1})}{(x_{3}-x_{1})}}{\frac {(x_{3}-x_{2})}{(y_{3}-y_{2})}}}
The proof can be derived by straightforward calculation. If the points are on a hyperbola, one can assume the hyperbola's equation isy=a/x{\displaystyle y=a/x}.
A consequence of the inscribed angle theorem for hyperbolas is the
3-point-form of a hyperbola's equation—The equation of the hyperbola determined by 3 pointsPi=(xi,yi),i=1,2,3,xi≠xk,yi≠yk,i≠k{\displaystyle P_{i}=(x_{i},y_{i}),\ i=1,2,3,\ x_{i}\neq x_{k},y_{i}\neq y_{k},i\neq k}is the solution of the equation(y−y1)(x−x1)(x−x2)(y−y2)=(y3−y1)(x3−x1)(x3−x2)(y3−y2){\displaystyle {\frac {({\color {red}y}-y_{1})}{({\color {green}x}-x_{1})}}{\frac {({\color {green}x}-x_{2})}{({\color {red}y}-y_{2})}}={\frac {(y_{3}-y_{1})}{(x_{3}-x_{1})}}{\frac {(x_{3}-x_{2})}{(y_{3}-y_{2})}}}fory{\displaystyle {\color {red}y}}.
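Since the 3-point form is linear in y, it can be solved explicitly. The following sketch takes three points from an arbitrarily chosen hyperbola y = 2/(x − 1) + 3 and uses the 3-point form to recover y at a fresh x:

```python
# Three sample points of y = a/(x - b) + c with a = 2, b = 1, c = 3.
def hyperbola_y(x, a=2.0, b=1.0, c=3.0):
    return a / (x - b) + c

(x1, y1), (x2, y2), (x3, y3) = [(x, hyperbola_y(x)) for x in (2.0, 3.0, 5.0)]

# Right-hand side of the 3-point form (a constant for fixed P1, P2, P3).
K = (y3 - y1) / (x3 - x1) * (x3 - x2) / (y3 - y2)

def three_point_y(x):
    # Solve (y - y1)/(x - x1) * (x - x2)/(y - y2) = K for y (linear in y).
    return (y1 * (x - x2) - K * (x - x1) * y2) / ((x - x2) - K * (x - x1))

print(three_point_y(4.0), hyperbola_y(4.0))   # both ~3.6667
```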
Another definition of a hyperbola usesaffine transformations:
An affine transformation of the Euclidean plane has the formx→→f→0+Ax→{\displaystyle {\vec {x}}\to {\vec {f}}_{0}+A{\vec {x}}}, whereA{\displaystyle A}is a regularmatrix(itsdeterminantis not 0) andf→0{\displaystyle {\vec {f}}_{0}}is an arbitrary vector. Iff→1,f→2{\displaystyle {\vec {f}}_{1},{\vec {f}}_{2}}are the column vectors of the matrixA{\displaystyle A}, the unit hyperbola(±cosh(t),sinh(t)),t∈R,{\displaystyle (\pm \cosh(t),\sinh(t)),t\in \mathbb {R} ,}is mapped onto the hyperbola
x→=p→(t)=f→0±f→1cosht+f→2sinht.{\displaystyle {\vec {x}}={\vec {p}}(t)={\vec {f}}_{0}\pm {\vec {f}}_{1}\cosh t+{\vec {f}}_{2}\sinh t\ .}
f→0{\displaystyle {\vec {f}}_{0}}is the center,f→0+f→1{\displaystyle {\vec {f}}_{0}+{\vec {f}}_{1}}a point of the hyperbola andf→2{\displaystyle {\vec {f}}_{2}}a tangent vector at this point.
In general the vectorsf→1,f→2{\displaystyle {\vec {f}}_{1},{\vec {f}}_{2}}are not perpendicular. That means, in generalf→0±f→1{\displaystyle {\vec {f}}_{0}\pm {\vec {f}}_{1}}arenotthe vertices of the hyperbola. Butf→1±f→2{\displaystyle {\vec {f}}_{1}\pm {\vec {f}}_{2}}point into the directions of the asymptotes. The tangent vector at pointp→(t){\displaystyle {\vec {p}}(t)}isp→′(t)=f→1sinht+f→2cosht.{\displaystyle {\vec {p}}'(t)={\vec {f}}_{1}\sinh t+{\vec {f}}_{2}\cosh t\ .}Because at a vertex the tangent is perpendicular to the major axis of the hyperbola one gets the parametert0{\displaystyle t_{0}}of a vertex from the equationp→′(t)⋅(p→(t)−f→0)=(f→1sinht+f→2cosht)⋅(f→1cosht+f→2sinht)=0{\displaystyle {\vec {p}}'(t)\cdot \left({\vec {p}}(t)-{\vec {f}}_{0}\right)=\left({\vec {f}}_{1}\sinh t+{\vec {f}}_{2}\cosh t\right)\cdot \left({\vec {f}}_{1}\cosh t+{\vec {f}}_{2}\sinh t\right)=0}and hence fromcoth(2t0)=−f→12+f→222f→1⋅f→2,{\displaystyle \coth(2t_{0})=-{\tfrac {{\vec {f}}_{1}^{\,2}+{\vec {f}}_{2}^{\,2}}{2{\vec {f}}_{1}\cdot {\vec {f}}_{2}}}\ ,}which yields
t0=14ln(f→1−f→2)2(f→1+f→2)2.{\displaystyle t_{0}={\tfrac {1}{4}}\ln {\tfrac {\left({\vec {f}}_{1}-{\vec {f}}_{2}\right)^{2}}{\left({\vec {f}}_{1}+{\vec {f}}_{2}\right)^{2}}}.}
The formulaecosh2x+sinh2x=cosh2x{\displaystyle \cosh ^{2}x+\sinh ^{2}x=\cosh 2x},2sinhxcoshx=sinh2x{\displaystyle 2\sinh x\cosh x=\sinh 2x},andarcothx=12lnx+1x−1{\displaystyle \operatorname {arcoth} x={\tfrac {1}{2}}\ln {\tfrac {x+1}{x-1}}}were used.
The twoverticesof the hyperbola aref→0±(f→1cosht0+f→2sinht0).{\displaystyle {\vec {f}}_{0}\pm \left({\vec {f}}_{1}\cosh t_{0}+{\vec {f}}_{2}\sinh t_{0}\right).}
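The vertex formula can be checked numerically: at the parameter t0 the tangent vector must be perpendicular to the line joining the vertex and the center. The vectors f1, f2 below are arbitrary samples:

```python
import math

# Vertex parameter t0 = (1/4) ln((f1-f2)^2/(f1+f2)^2) for the affine image
# p(t) = f0 + f1 cosh t + f2 sinh t; positions are taken relative to f0.
f1 = (2.0, 0.5)               # arbitrary sample vectors,
f2 = (0.5, 1.5)               # not perpendicular to each other

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

diff = (f1[0] - f2[0], f1[1] - f2[1])
summ = (f1[0] + f2[0], f1[1] + f2[1])
t0 = 0.25 * math.log(dot(diff, diff) / dot(summ, summ))

# At a vertex the tangent p'(t0) is perpendicular to p(t0) - f0.
rel = tuple(f1[i] * math.cosh(t0) + f2[i] * math.sinh(t0) for i in range(2))
tan = tuple(f1[i] * math.sinh(t0) + f2[i] * math.cosh(t0) for i in range(2))
print(dot(tan, rel))          # ~0
```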
Solving the parametric representation forcosht,sinht{\displaystyle \cosh t,\sinh t}byCramer's ruleand usingcosh2t−sinh2t−1=0{\displaystyle \;\cosh ^{2}t-\sinh ^{2}t-1=0\;}, one gets the implicit representationdet(x→−f→0,f→2)2−det(f→1,x→−f→0)2−det(f→1,f→2)2=0.{\displaystyle \det \left({\vec {x}}\!-\!{\vec {f}}\!_{0},{\vec {f}}\!_{2}\right)^{2}-\det \left({\vec {f}}\!_{1},{\vec {x}}\!-\!{\vec {f}}\!_{0}\right)^{2}-\det \left({\vec {f}}\!_{1},{\vec {f}}\!_{2}\right)^{2}=0.}
The definition of a hyperbola in this section gives a parametric representation of an arbitrary hyperbola, even in space, if one allowsf→0,f→1,f→2{\displaystyle {\vec {f}}\!_{0},{\vec {f}}\!_{1},{\vec {f}}\!_{2}}to be vectors in space.
Because the unit hyperbolax2−y2=1{\displaystyle x^{2}-y^{2}=1}is affinely equivalent to the hyperbolay=1/x{\displaystyle y=1/x}, an arbitrary hyperbola can be considered as the affine image (see previous section) of the hyperbolay=1/x{\displaystyle y=1/x\,}:
x→=p→(t)=f→0+f→1t+f→21t,t≠0.{\displaystyle {\vec {x}}={\vec {p}}(t)={\vec {f}}_{0}+{\vec {f}}_{1}t+{\vec {f}}_{2}{\tfrac {1}{t}},\quad t\neq 0\,.}
M:f→0{\displaystyle M:{\vec {f}}_{0}}is the center of the hyperbola, the vectorsf→1,f→2{\displaystyle {\vec {f}}_{1},{\vec {f}}_{2}}have the directions of the asymptotes andf→1+f→2{\displaystyle {\vec {f}}_{1}+{\vec {f}}_{2}}is a point of the hyperbola. The tangent vector isp→′(t)=f→1−f→21t2.{\displaystyle {\vec {p}}'(t)={\vec {f}}_{1}-{\vec {f}}_{2}{\tfrac {1}{t^{2}}}.}At a vertex the tangent is perpendicular to the major axis. Hencep→′(t)⋅(p→(t)−f→0)=(f→1−f→21t2)⋅(f→1t+f→21t)=f→12t−f→221t3=0{\displaystyle {\vec {p}}'(t)\cdot \left({\vec {p}}(t)-{\vec {f}}_{0}\right)=\left({\vec {f}}_{1}-{\vec {f}}_{2}{\tfrac {1}{t^{2}}}\right)\cdot \left({\vec {f}}_{1}t+{\vec {f}}_{2}{\tfrac {1}{t}}\right)={\vec {f}}_{1}^{2}t-{\vec {f}}_{2}^{2}{\tfrac {1}{t^{3}}}=0}and the parameter of a vertex is
t0=±f→22f→124.{\displaystyle t_{0}=\pm {\sqrt[{4}]{\frac {{\vec {f}}_{2}^{2}}{{\vec {f}}_{1}^{2}}}}.}
|f→1|=|f→2|{\displaystyle \left|{\vec {f}}\!_{1}\right|=\left|{\vec {f}}\!_{2}\right|}is equivalent tot0=±1{\displaystyle t_{0}=\pm 1}andf→0±(f→1+f→2){\displaystyle {\vec {f}}_{0}\pm ({\vec {f}}_{1}+{\vec {f}}_{2})}are the vertices of the hyperbola.
The following properties of a hyperbola are easily proven using the representation of a hyperbola introduced in this section.
The tangent vector can be rewritten by factorization:p→′(t)=1t(f→1t−f→21t).{\displaystyle {\vec {p}}'(t)={\tfrac {1}{t}}\left({\vec {f}}_{1}t-{\vec {f}}_{2}{\tfrac {1}{t}}\right)\ .}This means that
This property provides a way to construct the tangent at a point on the hyperbola.
This property of a hyperbola is an affine version of the 3-point-degeneration ofPascal's theorem.[12]
The area of the grey parallelogramMAPB{\displaystyle MAPB}in the above diagram isArea=|det(tf→1,1tf→2)|=|det(f→1,f→2)|=⋯=ab2{\displaystyle {\text{Area}}=\left|\det \left(t{\vec {f}}_{1},{\tfrac {1}{t}}{\vec {f}}_{2}\right)\right|=\left|\det \left({\vec {f}}_{1},{\vec {f}}_{2}\right)\right|=\cdots ={\frac {ab}{2}}}and hence independent of pointP{\displaystyle P}. The last equation follows from a calculation for the case whereP{\displaystyle P}is a vertex and the hyperbola is in its canonical formx2a2−y2b2=1.{\displaystyle {\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1\,.}
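The t-independence of the parallelogram area can be checked numerically. For the canonical hyperbola one may take f1 = (a/2, b/2) and f2 = (a/2, −b/2) (then p(t) = f1·t + f2/t has its vertex at t = 1), so the constant works out to a·b/2; the sample values below are arbitrary:

```python
# The parallelogram spanned by t*f1 and (1/t)*f2 has t-independent area
# |det(f1, f2)| = a*b/2 for the canonical hyperbola x^2/a^2 - y^2/b^2 = 1.
a, b = 3.0, 2.0               # arbitrary sample semi-axes
f1 = (a / 2,  b / 2)          # asymptote direction vectors
f2 = (a / 2, -b / 2)

def det(u, v):
    return u[0] * v[1] - u[1] * v[0]

areas = [abs(det((t * f1[0], t * f1[1]), (f2[0] / t, f2[1] / t)))
         for t in (0.5, 1.0, 2.0, 3.0)]
print(areas)                  # all 3.0 = a*b/2
```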
For a hyperbola with parametric representationx→=p→(t)=f→1t+f→21t{\displaystyle {\vec {x}}={\vec {p}}(t)={\vec {f}}_{1}t+{\vec {f}}_{2}{\tfrac {1}{t}}}(for simplicity the center is the origin) the following is true:
A:a→=f→1t1+f→21t2,B:b→=f→1t2+f→21t1{\displaystyle A:\ {\vec {a}}={\vec {f}}_{1}t_{1}+{\vec {f}}_{2}{\tfrac {1}{t_{2}}},\ B:\ {\vec {b}}={\vec {f}}_{1}t_{2}+{\vec {f}}_{2}{\tfrac {1}{t_{1}}}}
The simple proof is a consequence of the equation1t1a→=1t2b→{\displaystyle {\tfrac {1}{t_{1}}}{\vec {a}}={\tfrac {1}{t_{2}}}{\vec {b}}}.
This property provides a way to construct points of a hyperbola if the asymptotes and one point are given.
This property of a hyperbola is an affine version of the 4-point-degeneration ofPascal's theorem.[13]
For simplicity the center of the hyperbola may be the origin and the vectorsf→1,f→2{\displaystyle {\vec {f}}_{1},{\vec {f}}_{2}}have equal length. If the last assumption is not fulfilled one can first apply a parameter transformation (see above) in order to make the assumption true. Hence±(f→1+f→2){\displaystyle \pm ({\vec {f}}_{1}+{\vec {f}}_{2})}are the vertices,±(f→1−f→2){\displaystyle \pm ({\vec {f}}_{1}-{\vec {f}}_{2})}span the minor axis and one gets|f→1+f→2|=a{\displaystyle |{\vec {f}}_{1}+{\vec {f}}_{2}|=a}and|f→1−f→2|=b{\displaystyle |{\vec {f}}_{1}-{\vec {f}}_{2}|=b}.
For the intersection points of the tangent at pointp→(t0)=f→1t0+f→21t0{\displaystyle {\vec {p}}(t_{0})={\vec {f}}_{1}t_{0}+{\vec {f}}_{2}{\tfrac {1}{t_{0}}}}with the asymptotes one gets the pointsC=2t0f→1,D=2t0f→2.{\displaystyle C=2t_{0}{\vec {f}}_{1},\ D={\tfrac {2}{t_{0}}}{\vec {f}}_{2}.}Theareaof the triangleM,C,D{\displaystyle M,C,D}can be calculated by a 2 × 2 determinant:A=12|det(2t0f→1,2t0f→2)|=2|det(f→1,f→2)|{\displaystyle A={\tfrac {1}{2}}{\Big |}\det \left(2t_{0}{\vec {f}}_{1},{\tfrac {2}{t_{0}}}{\vec {f}}_{2}\right){\Big |}=2{\Big |}\det \left({\vec {f}}_{1},{\vec {f}}_{2}\right){\Big |}}(see rules fordeterminants).|det(f→1,f→2)|{\displaystyle \left|\det({\vec {f}}_{1},{\vec {f}}_{2})\right|}is the area of the rhombus generated byf→1,f→2{\displaystyle {\vec {f}}_{1},{\vec {f}}_{2}}. The area of a rhombus is equal to one half of the product of its diagonals. The diagonals are the semi-axesa,b{\displaystyle a,b}of the hyperbola. Hence:
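The constant area of the tangent–asymptote triangle can be checked numerically with the same parametrization; the sketch also verifies that the tangent point is the midpoint of the segment CD (a companion fact that follows from (C + D)/2 = f1·t0 + f2/t0 = p(t0)). Sample values are arbitrary:

```python
import math

# Canonical hyperbola x^2/a^2 - y^2/b^2 = 1 as p(t) = f1*t + f2/t with
# f1 = (a/2, b/2), f2 = (a/2, -b/2); the tangent at p(t0) meets the
# asymptotes in C = 2*t0*f1 and D = (2/t0)*f2.
a, b = 3.0, 2.0               # arbitrary sample semi-axes
f1 = (a / 2,  b / 2)
f2 = (a / 2, -b / 2)

def det(u, v):
    return u[0] * v[1] - u[1] * v[0]

areas, gaps = [], []
for t0 in (0.5, 1.0, 2.5):
    C = (2 * t0 * f1[0], 2 * t0 * f1[1])
    D = (2 / t0 * f2[0], 2 / t0 * f2[1])
    areas.append(0.5 * abs(det(C, D)))
    # The tangent point bisects the segment CD.
    P = (f1[0] * t0 + f2[0] / t0, f1[1] * t0 + f2[1] / t0)
    mid = ((C[0] + D[0]) / 2, (C[1] + D[1]) / 2)
    gaps.append(math.hypot(mid[0] - P[0], mid[1] - P[1]))
print(areas, gaps)            # areas all 6.0 = a*b, gaps ~0
```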
Thereciprocationof acircleBin a circleCalways yields a conic section such as a hyperbola. The process of "reciprocation in a circleC" consists of replacing every line and point in a geometrical figure with their correspondingpole and polar, respectively. Thepoleof a line is theinversionof its closest point to the circleC, whereas the polar of a point is the converse, namely, a line whose closest point toCis the inversion of the point.
The eccentricity of the conic section obtained by reciprocation is the ratio of the distance between the two circles' centers to the radiusrof reciprocation circleC. IfBandCrepresent the points at the centers of the corresponding circles, then
e=BC¯r.{\displaystyle e={\frac {\overline {BC}}{r}}.}
Since the eccentricity of a hyperbola is always greater than one, the centerBmust lie outside of the reciprocating circleC.
This definition implies that the hyperbola is both thelocusof the poles of the tangent lines to the circleBand theenvelopeof the polar lines of the points onB. Conversely, the circleBis the envelope of polars of points on the hyperbola, and the locus of poles of tangent lines to the hyperbola. Two tangent lines toBhave no (finite) poles because they pass through the centerCof the reciprocation circle; the polars of the corresponding tangent points onBare the asymptotes of the hyperbola. The two branches of the hyperbola correspond to the two parts of the circleBthat are separated by these tangent points.
A hyperbola can also be defined as a second-degree equation in the Cartesian coordinates(x,y){\displaystyle (x,y)}in theplane,
Axxx2+2Axyxy+Ayyy2+2Bxx+2Byy+C=0,{\displaystyle A_{xx}x^{2}+2A_{xy}xy+A_{yy}y^{2}+2B_{x}x+2B_{y}y+C=0,}
provided that the constantsAxx,{\displaystyle A_{xx},}Axy,{\displaystyle A_{xy},}Ayy,{\displaystyle A_{yy},}Bx,{\displaystyle B_{x},}By,{\displaystyle B_{y},}andC{\displaystyle C}satisfy the determinant condition
D:=|AxxAxyAxyAyy|<0.{\displaystyle D:={\begin{vmatrix}A_{xx}&A_{xy}\\A_{xy}&A_{yy}\end{vmatrix}}<0.}
This determinant is conventionally called thediscriminantof the conic section.[14]
A special case of a hyperbola—thedegenerate hyperbolaconsisting of two intersecting lines—occurs when another determinant is zero:
Δ:=|AxxAxyBxAxyAyyByBxByC|=0.{\displaystyle \Delta :={\begin{vmatrix}A_{xx}&A_{xy}&B_{x}\\A_{xy}&A_{yy}&B_{y}\\B_{x}&B_{y}&C\end{vmatrix}}=0.}
This determinantΔ{\displaystyle \Delta }is sometimes called the discriminant of the conic section.[15]
The general equation's coefficients can be obtained from known semi-major axisa,{\displaystyle a,}semi-minor axisb,{\displaystyle b,}center coordinates(x∘,y∘){\displaystyle (x_{\circ },y_{\circ })}, and rotation angleθ{\displaystyle \theta }(the angle from the positive horizontal axis to the hyperbola's major axis) using the formulae:
Axx=−a2sin2θ+b2cos2θ,Bx=−Axxx∘−Axyy∘,Ayy=−a2cos2θ+b2sin2θ,By=−Axyx∘−Ayyy∘,Axy=(a2+b2)sinθcosθ,C=Axxx∘2+2Axyx∘y∘+Ayyy∘2−a2b2.{\displaystyle {\begin{aligned}A_{xx}&=-a^{2}\sin ^{2}\theta +b^{2}\cos ^{2}\theta ,&B_{x}&=-A_{xx}x_{\circ }-A_{xy}y_{\circ },\\[1ex]A_{yy}&=-a^{2}\cos ^{2}\theta +b^{2}\sin ^{2}\theta ,&B_{y}&=-A_{xy}x_{\circ }-A_{yy}y_{\circ },\\[1ex]A_{xy}&=\left(a^{2}+b^{2}\right)\sin \theta \cos \theta ,&C&=A_{xx}x_{\circ }^{2}+2A_{xy}x_{\circ }y_{\circ }+A_{yy}y_{\circ }^{2}-a^{2}b^{2}.\end{aligned}}}
These expressions can be derived from the canonical equation
X2a2−Y2b2=1{\displaystyle {\frac {X^{2}}{a^{2}}}-{\frac {Y^{2}}{b^{2}}}=1}
by atranslation and rotationof the coordinates(x,y){\displaystyle (x,y)}:
X=+(x−x∘)cosθ+(y−y∘)sinθ,Y=−(x−x∘)sinθ+(y−y∘)cosθ.{\displaystyle {\begin{alignedat}{2}X&={\phantom {+}}\left(x-x_{\circ }\right)\cos \theta &&+\left(y-y_{\circ }\right)\sin \theta ,\\Y&=-\left(x-x_{\circ }\right)\sin \theta &&+\left(y-y_{\circ }\right)\cos \theta .\end{alignedat}}}
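The coefficient formulas can be checked numerically: build the general coefficients from sample shape parameters, take a point of the rotated and shifted hyperbola, and confirm it satisfies the general equation. All numeric values below are arbitrary samples:

```python
import math

# Build the general-equation coefficients from a, b, center and rotation
# angle, then check a point against
# Axx x^2 + 2 Axy x y + Ayy y^2 + 2 Bx x + 2 By y + C = 0.
a, b = 3.0, 2.0               # arbitrary sample semi-axes
x0, y0 = 1.0, -2.0            # sample center
th = 0.4                      # sample rotation angle

Axx = -a**2 * math.sin(th)**2 + b**2 * math.cos(th)**2
Ayy = -a**2 * math.cos(th)**2 + b**2 * math.sin(th)**2
Axy = (a**2 + b**2) * math.sin(th) * math.cos(th)
Bx  = -Axx * x0 - Axy * y0
By  = -Axy * x0 - Ayy * y0
C   = Axx * x0**2 + 2 * Axy * x0 * y0 + Ayy * y0**2 - a**2 * b**2

# A point of the hyperbola: start from (X, Y) on the canonical curve and
# invert the translation/rotation (x - x0 = X cos(th) - Y sin(th), etc.).
t = 0.8
X, Y = a * math.cosh(t), b * math.sinh(t)
x = x0 + X * math.cos(th) - Y * math.sin(th)
y = y0 + X * math.sin(th) + Y * math.cos(th)

residual = Axx*x**2 + 2*Axy*x*y + Ayy*y**2 + 2*Bx*x + 2*By*y + C
D = Axx * Ayy - Axy**2        # the 2x2 discriminant, negative for a hyperbola
print(residual, D < 0)        # ~0, True
```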
Given the above general parametrization of the hyperbola in Cartesian coordinates, the eccentricity can be found using the formula inConic section#Eccentricity in terms of coefficients.
The center(xc,yc){\displaystyle (x_{c},y_{c})}of the hyperbola may be determined from the formulae
xc=−1D|BxAxyByAyy|,yc=−1D|AxxBxAxyBy|.{\displaystyle {\begin{aligned}x_{c}&=-{\frac {1}{D}}\,{\begin{vmatrix}B_{x}&A_{xy}\\B_{y}&A_{yy}\end{vmatrix}}\,,\\[1ex]y_{c}&=-{\frac {1}{D}}\,{\begin{vmatrix}A_{xx}&B_{x}\\A_{xy}&B_{y}\end{vmatrix}}\,.\end{aligned}}}
In terms of new coordinates,ξ=x−xc{\displaystyle \xi =x-x_{c}}andη=y−yc,{\displaystyle \eta =y-y_{c},}the defining equation of the hyperbola can be written
Axxξ2+2Axyξη+Ayyη2+ΔD=0.{\displaystyle A_{xx}\xi ^{2}+2A_{xy}\xi \eta +A_{yy}\eta ^{2}+{\frac {\Delta }{D}}=0.}
The principal axes of the hyperbola make an angleφ{\displaystyle \varphi }with the positivex{\displaystyle x}-axis that is given by
tan(2φ)=2AxyAxx−Ayy.{\displaystyle \tan(2\varphi )={\frac {2A_{xy}}{A_{xx}-A_{yy}}}.}
Rotating the coordinate axes so that thex{\displaystyle x}-axis is aligned with the transverse axis brings the equation into itscanonical form
x2a2−y2b2=1.{\displaystyle {\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1.}
The major and minor semiaxesa{\displaystyle a}andb{\displaystyle b}are defined by the equations
a2=−Δλ1D=−Δλ12λ2,b2=−Δλ2D=−Δλ1λ22,{\displaystyle {\begin{aligned}a^{2}&=-{\frac {\Delta }{\lambda _{1}D}}=-{\frac {\Delta }{\lambda _{1}^{2}\lambda _{2}}},\\[1ex]b^{2}&=-{\frac {\Delta }{\lambda _{2}D}}=-{\frac {\Delta }{\lambda _{1}\lambda _{2}^{2}}},\end{aligned}}}
whereλ1{\displaystyle \lambda _{1}}andλ2{\displaystyle \lambda _{2}}are therootsof thequadratic equation
λ2−(Axx+Ayy)λ+D=0.{\displaystyle \lambda ^{2}-\left(A_{xx}+A_{yy}\right)\lambda +D=0.}
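The inversion can be checked numerically: build the coefficients for a hyperbola with known shape parameters (arbitrary samples below) and recover the center and semiaxes from them. Note that, for this sample, −Δ/(λD) comes out positive for one root of the quadratic and negative for the other; the absolute values recover a² and b²:

```python
import math

# Coefficients for a sample hyperbola (a = 3, b = 2, center (1, -2),
# rotation 0.4), built with the formulas from the previous subsection.
a, b = 3.0, 2.0
x0, y0, th = 1.0, -2.0, 0.4

Axx = -a**2 * math.sin(th)**2 + b**2 * math.cos(th)**2
Ayy = -a**2 * math.cos(th)**2 + b**2 * math.sin(th)**2
Axy = (a**2 + b**2) * math.sin(th) * math.cos(th)
Bx  = -Axx * x0 - Axy * y0
By  = -Axy * x0 - Ayy * y0
C   = Axx * x0**2 + 2 * Axy * x0 * y0 + Ayy * y0**2 - a**2 * b**2

D  = Axx * Ayy - Axy**2
Dl = (Axx * (Ayy * C - By**2) - Axy * (Axy * C - By * Bx)
      + Bx * (Axy * By - Ayy * Bx))     # Delta, the 3x3 determinant

# Center from the determinant formulas.
xc = -(Bx * Ayy - By * Axy) / D
yc = -(Axx * By - Bx * Axy) / D

# Roots of lambda^2 - (Axx + Ayy) lambda + D = 0.
s = Axx + Ayy
l1 = (s + math.sqrt(s**2 - 4 * D)) / 2
l2 = (s - math.sqrt(s**2 - 4 * D)) / 2
# |Delta/(lambda*D)| recovers a^2 and b^2 (one signed value is negative).
vals = sorted(abs(-Dl / (l * D)) for l in (l1, l2))
print((xc, yc), vals)         # ~(1.0, -2.0), ~[4.0, 9.0]
```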
For comparison, the corresponding equation for a degenerate hyperbola (consisting of two intersecting lines) is
x2a2−y2b2=0.{\displaystyle {\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=0.}
The tangent line to a given point(x0,y0){\displaystyle (x_{0},y_{0})}on the hyperbola is defined by the equation
Ex+Fy+G=0{\displaystyle Ex+Fy+G=0}
whereE,{\displaystyle E,}F,{\displaystyle F,}andG{\displaystyle G}are defined by
E=Axxx0+Axyy0+Bx,F=Axyx0+Ayyy0+By,G=Bxx0+Byy0+C.{\displaystyle {\begin{aligned}E&=A_{xx}x_{0}+A_{xy}y_{0}+B_{x},\\[1ex]F&=A_{xy}x_{0}+A_{yy}y_{0}+B_{y},\\[1ex]G&=B_{x}x_{0}+B_{y}y_{0}+C.\end{aligned}}}
Thenormal lineto the hyperbola at the same point is given by the equation
F(x−x0)−E(y−y0)=0.{\displaystyle F(x-x_{0})-E(y-y_{0})=0.}
The normal line is perpendicular to the tangent line, and both pass through the same point(x0,y0).{\displaystyle (x_{0},y_{0}).}
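The tangent formulas can be checked on a hyperbola in canonical orientation, where the general form is known explicitly; the sketch confirms that the point lies on the line E x + F y + G = 0 and that the line agrees with the canonical tangent x0·x/a² − y0·y/b² = 1. Sample values are arbitrary:

```python
import math

# Hyperbola x^2/9 - y^2/4 = 1 in general form 4x^2 - 9y^2 - 36 = 0,
# i.e. Axx = 4, Ayy = -9, C = -36 and the remaining coefficients zero.
Axx, Axy, Ayy, Bx, By, C = 4.0, 0.0, -9.0, 0.0, 0.0, -36.0

t = 0.6
x0, y0 = 3 * math.cosh(t), 2 * math.sinh(t)   # a point of the hyperbola

E = Axx * x0 + Axy * y0 + Bx
F = Axy * x0 + Ayy * y0 + By
G = Bx * x0 + By * y0 + C

on_tangent = E * x0 + F * y0 + G              # the point lies on the line
# Same line as the canonical tangent x0*x/a^2 - y0*y/b^2 = 1 (a^2 = 9, b^2 = 4):
coeff_x = E / (-G) - x0 / 9
coeff_y = F / (-G) + y0 / 4
print(on_tangent, coeff_x, coeff_y)           # all ~0
```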
From the equation
x2a2−y2b2=1,0<b≤a,{\displaystyle {\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1,\qquad 0<b\leq a,}
the left focus is(−ae,0){\displaystyle (-ae,0)}and the right focus is(ae,0),{\displaystyle (ae,0),}wheree{\displaystyle e}is the eccentricity. Denote the distances from a point(x,y){\displaystyle (x,y)}to the left and right foci asr1{\displaystyle r_{1}}andr2.{\displaystyle r_{2}.}For a point on the right branch,
r1−r2=2a,{\displaystyle r_{1}-r_{2}=2a,}
and for a point on the left branch,
r2−r1=2a.{\displaystyle r_{2}-r_{1}=2a.}
This can be proved as follows:
If(x,y){\displaystyle (x,y)}is a point on the hyperbola, the distance to the left focal point is
r12=(x+ae)2+y2=x2+2xae+a2e2+(x2−a2)(e2−1)=(ex+a)2.{\displaystyle r_{1}^{2}=(x+ae)^{2}+y^{2}=x^{2}+2xae+a^{2}e^{2}+\left(x^{2}-a^{2}\right)\left(e^{2}-1\right)=(ex+a)^{2}.}
To the right focal point the distance is
r22=(x−ae)2+y2=x2−2xae+a2e2+(x2−a2)(e2−1)=(ex−a)2.{\displaystyle r_{2}^{2}=(x-ae)^{2}+y^{2}=x^{2}-2xae+a^{2}e^{2}+\left(x^{2}-a^{2}\right)\left(e^{2}-1\right)=(ex-a)^{2}.}
If(x,y){\displaystyle (x,y)}is a point on the right branch of the hyperbola thenex>a{\displaystyle ex>a}and
r1=ex+a,r2=ex−a.{\displaystyle {\begin{aligned}r_{1}&=ex+a,\\r_{2}&=ex-a.\end{aligned}}}
Subtracting these equations one gets
r1−r2=2a.{\displaystyle r_{1}-r_{2}=2a.}
If(x,y){\displaystyle (x,y)}is a point on the left branch of the hyperbola thenex<−a{\displaystyle ex<-a}and
r1=−ex−a,r2=−ex+a.{\displaystyle {\begin{aligned}r_{1}&=-ex-a,\\r_{2}&=-ex+a.\end{aligned}}}
Subtracting these equations one gets
r2−r1=2a.{\displaystyle r_{2}-r_{1}=2a.}
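The focal-distance formulas can be checked numerically for points of the right branch (sample values are arbitrary):

```python
import math

# Check r1 = e*x + a, r2 = e*x - a and the constant difference r1 - r2 = 2a
# on the right branch of x^2/a^2 - y^2/b^2 = 1.
a, b = 3.0, 2.0               # arbitrary sample semi-axes
c = math.hypot(a, b)          # c = a*e
e = c / a

errs = []
for t in (0.3, 1.0, 2.0):
    x, y = a * math.cosh(t), b * math.sinh(t)   # right branch
    r1 = math.hypot(x + c, y)  # distance to the left focus (-ae, 0)
    r2 = math.hypot(x - c, y)  # distance to the right focus (ae, 0)
    errs += [r1 - (e * x + a), r2 - (e * x - a), (r1 - r2) - 2 * a]
print(errs)                   # all ~0
```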
If Cartesian coordinates are introduced such that the origin is the center of the hyperbola and thex-axis is the major axis, then the hyperbola is calledeast-west-openingand
For an arbitrary point(x,y){\displaystyle (x,y)}the distance to the focus(c,0){\displaystyle (c,0)}is(x−c)2+y2{\textstyle {\sqrt {(x-c)^{2}+y^{2}}}}and to the second focus(x+c)2+y2{\textstyle {\sqrt {(x+c)^{2}+y^{2}}}}. Hence the point(x,y){\displaystyle (x,y)}is on the hyperbola if the following condition is fulfilled(x−c)2+y2−(x+c)2+y2=±2a.{\displaystyle {\sqrt {(x-c)^{2}+y^{2}}}-{\sqrt {(x+c)^{2}+y^{2}}}=\pm 2a\ .}Remove the square roots by suitable squarings and use the relationb2=c2−a2{\displaystyle b^{2}=c^{2}-a^{2}}to obtain the equation of the hyperbola:
x2a2−y2b2=1.{\displaystyle {\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1\ .}
This equation is called thecanonical formof a hyperbola, because any hyperbola, regardless of its orientation relative to the Cartesian axes and regardless of the location of its center, can be transformed to this form by a change of variables, giving a hyperbola that iscongruentto the original (seebelow).
The axes ofsymmetryorprincipal axesare thetransverse axis(containing the segment of length 2awith endpoints at the vertices) and theconjugate axis(containing the segment of length 2bperpendicular to the transverse axis and with midpoint at the hyperbola's center).[6]As opposed to an ellipse, a hyperbola has only two vertices:(a,0),(−a,0){\displaystyle (a,0),\;(-a,0)}. The two points(0,b),(0,−b){\displaystyle (0,b),\;(0,-b)}on the conjugate axis arenoton the hyperbola.
It follows from the equation that the hyperbola issymmetricwith respect to both of the coordinate axes and hence symmetric with respect to the origin.
For a hyperbola in the above canonical form, theeccentricityis given by
e=1+b2a2.{\displaystyle e={\sqrt {1+{\frac {b^{2}}{a^{2}}}}}.}
Two hyperbolas aregeometrically similarto each other – meaning that they have the same shape, so that one can be transformed into the other by translations, rotations, reflections, and scaling (magnification) – if and only if they have the same eccentricity.
Solving the equation (above) of the hyperbola fory{\displaystyle y}yieldsy=±bax2−a2.{\displaystyle y=\pm {\frac {b}{a}}{\sqrt {x^{2}-a^{2}}}.}It follows from this that the hyperbola approaches the two linesy=±bax{\displaystyle y=\pm {\frac {b}{a}}x}for large values of|x|{\displaystyle |x|}. These two lines intersect at the center (origin) and are calledasymptotesof the hyperbolax2a2−y2b2=1.{\displaystyle {\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1\ .}[16]
With the help of the second figure one can see that
From theHesse normal formbx±aya2+b2=0{\displaystyle {\tfrac {bx\pm ay}{\sqrt {a^{2}+b^{2}}}}=0}of the asymptotes and the equation of the hyperbola one gets:[17]
From the equationy=±bax2−a2{\displaystyle y=\pm {\frac {b}{a}}{\sqrt {x^{2}-a^{2}}}}of the hyperbola (above) one can derive:
In addition, from (2) above it can be shown that[17]
The length of the chord through one of the foci, perpendicular to the major axis of the hyperbola, is called thelatus rectum. One half of it is thesemi-latus rectump{\displaystyle p}. A calculation showsp=b2a.{\displaystyle p={\frac {b^{2}}{a}}.}The semi-latus rectump{\displaystyle p}may also be viewed as theradius of curvatureat the vertices.
The simplest way to determine the equation of the tangent at a point(x0,y0){\displaystyle (x_{0},y_{0})}is toimplicitly differentiatethe equationx2a2−y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1}of the hyperbola. Denotingdy/dxasy′, this produces2xa2−2yy′b2=0⇒y′=xyb2a2⇒y=x0y0b2a2(x−x0)+y0.{\displaystyle {\frac {2x}{a^{2}}}-{\frac {2yy'}{b^{2}}}=0\ \Rightarrow \ y'={\frac {x}{y}}{\frac {b^{2}}{a^{2}}}\ \Rightarrow \ y={\frac {x_{0}}{y_{0}}}{\frac {b^{2}}{a^{2}}}(x-x_{0})+y_{0}.}With respect tox02a2−y02b2=1{\displaystyle {\tfrac {x_{0}^{2}}{a^{2}}}-{\tfrac {y_{0}^{2}}{b^{2}}}=1}, the equation of the tangent at point(x0,y0){\displaystyle (x_{0},y_{0})}isx0a2x−y0b2y=1.{\displaystyle {\frac {x_{0}}{a^{2}}}x-{\frac {y_{0}}{b^{2}}}y=1.}
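The tangent equation can be verified numerically: it passes through the point of tangency, and its slope equals the implicit derivative b²x0/(a²y0). Sample values are arbitrary:

```python
import math

# Check the tangent x0*x/a^2 - y0*y/b^2 = 1 at a sample point of
# x^2/a^2 - y^2/b^2 = 1.
a, b = 3.0, 2.0               # arbitrary sample semi-axes
t = 0.9
x0, y0 = a * math.cosh(t), b * math.sinh(t)

def tangent_y(x):
    # Solve the tangent equation for y.
    return (x0 * x / a**2 - 1) * b**2 / y0

through_point = tangent_y(x0) - y0            # ~0: passes through (x0, y0)
slope = tangent_y(x0 + 1) - tangent_y(x0)     # a line, so rise over run = 1
print(through_point, slope - b**2 * x0 / (a**2 * y0))   # both ~0
```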
A particular tangent line distinguishes the hyperbola from the other conic sections.[18]Letfbe the distance from the vertexV(on both the hyperbola and its axis through the two foci) to the nearer focus. Then the distance, along a line perpendicular to that axis, from that focus to a point P on the hyperbola is greater than 2f. The tangent to the hyperbola at P intersects that axis at point Q at an angle ∠PQV of greater than 45°.
In the casea=b{\displaystyle a=b}the hyperbola is calledrectangular(orequilateral), because its asymptotes intersect at right angles. For this case, the linear eccentricity isc=2a{\displaystyle c={\sqrt {2}}a}, the eccentricitye=2{\displaystyle e={\sqrt {2}}}and the semi-latus rectump=a{\displaystyle p=a}. The graph of the equationy=1/x{\displaystyle y=1/x}is a rectangular hyperbola.
Using thehyperbolic sine and cosine functionscosh,sinh{\displaystyle \cosh ,\sinh }, a parametric representation of the hyperbolax2a2−y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1}can be obtained, which is similar to the parametric representation of an ellipse:(±acosht,bsinht),t∈R,{\displaystyle (\pm a\cosh t,b\sinh t),\,t\in \mathbb {R} \ ,}which satisfies the Cartesian equation becausecosh2t−sinh2t=1.{\displaystyle \cosh ^{2}t-\sinh ^{2}t=1.}
Further parametric representations are given in the sectionParametric equationsbelow.
For the hyperbolax2a2−y2b2=1{\displaystyle {\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1}, change the sign on the right to obtain the equation of theconjugate hyperbola:x2a2−y2b2=−1,{\displaystyle {\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=-1\,,}which can also be writteny2b2−x2a2=1.{\displaystyle {\frac {y^{2}}{b^{2}}}-{\frac {x^{2}}{a^{2}}}=1\,.}
A hyperbola and its conjugate may havediameters which are conjugate. In the theory ofspecial relativity, such diameters may represent axes of time and space, where one hyperbola representseventsat a given spatial distance from thecenter, and the other represents events at a corresponding temporal distance from the center.
The polar coordinates used most commonly for the hyperbola are defined relative to the Cartesian coordinate system that has itsorigin in a focusand its x-axis pointing toward the origin of the "canonical coordinate system" as illustrated in the first diagram.
In this case the angleφ{\displaystyle \varphi }is calledtrue anomaly.
Relative to this coordinate system one has that
r=p1∓ecosφ,p=b2a{\displaystyle r={\frac {p}{1\mp e\cos \varphi }},\quad p={\frac {b^{2}}{a}}}
and
−arccos(−1e)<φ<arccos(−1e).{\displaystyle -\arccos \left(-{\frac {1}{e}}\right)<\varphi <\arccos \left(-{\frac {1}{e}}\right).}
With polar coordinates relative to the "canonical coordinate system" (see second diagram)
one has that
r=be2cos2φ−1.{\displaystyle r={\frac {b}{\sqrt {e^{2}\cos ^{2}\varphi -1}}}.\,}
For the right branch of the hyperbola the range ofφ{\displaystyle \varphi }is−arccos(1e)<φ<arccos(1e).{\displaystyle -\arccos \left({\frac {1}{e}}\right)<\varphi <\arccos \left({\frac {1}{e}}\right).}
When using polar coordinates, the eccentricity of the hyperbola can be expressed assecφmax{\displaystyle \sec \varphi _{\text{max}}}whereφmax{\displaystyle \varphi _{\text{max}}}is the limit of the angular coordinate. Asφ{\displaystyle \varphi }approaches this limit,rapproaches infinity and the denominator in either of the equations noted above approaches zero, hence:[19]: 219
e2cos2φmax−1=0{\displaystyle e^{2}\cos ^{2}\varphi _{\text{max}}-1=0}
1±ecosφmax=0{\displaystyle 1\pm e\cos \varphi _{\text{max}}=0}
⟹e=secφmax{\displaystyle \implies e=\sec \varphi _{\text{max}}}
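Both polar representations can be checked against the canonical equation. The sketch below uses arbitrary sample values; for the focus-centered form it places the pole at the left focus (−c, 0), where the "+" sign traces the left branch with the angle measured from the direction toward the center:

```python
import math

# Check both polar forms against x^2/a^2 - y^2/b^2 = 1.
a, b = 3.0, 2.0               # arbitrary sample semi-axes
c = math.hypot(a, b)
e, p = c / a, b**2 / a

errs = []

# Pole at the center: r = b / sqrt(e^2 cos^2(phi) - 1) (right branch).
for phi in (0.0, 0.3, -0.5):
    r = b / math.sqrt(e**2 * math.cos(phi)**2 - 1)
    x, y = r * math.cos(phi), r * math.sin(phi)
    errs.append(x**2 / a**2 - y**2 / b**2 - 1)

# Pole at the left focus (-c, 0): r = p / (1 + e cos(phi)) traces the
# left branch; x = -c + r cos(phi) converts back to canonical coordinates.
for phi in (0.0, 0.5, 1.5):
    r = p / (1 + e * math.cos(phi))
    x, y = -c + r * math.cos(phi), r * math.sin(phi)
    errs.append(x**2 / a**2 - y**2 / b**2 - 1)
print(errs)                   # all ~0
```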
A hyperbola with equationx2a2−y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1}can be described by several parametric equations:
Just as thetrigonometric functionsare defined in terms of theunit circle, so also thehyperbolic functionsare defined in terms of theunit hyperbola, as shown in this diagram. In a unit circle, the angle (in radians) is equal to twice the area of thecircular sectorwhich that angle subtends. The analogoushyperbolic angleis likewise defined as twice the area of ahyperbolic sector.
Leta{\displaystyle a}be twice the area between thex{\displaystyle x}axis and a ray through the origin intersecting the unit hyperbola, and define(x,y)=(cosha,sinha)=(x,x2−1){\textstyle (x,y)=(\cosh a,\sinh a)=(x,{\sqrt {x^{2}-1}})}as the coordinates of the intersection point.
Then the area of the hyperbolic sector is the area of the triangle minus the curved region past the vertex at(1,0){\displaystyle (1,0)}:a2=xy2−∫1xt2−1dt=12(xx2−1)−12(xx2−1−ln(x+x2−1)),{\displaystyle {\begin{aligned}{\frac {a}{2}}&={\frac {xy}{2}}-\int _{1}^{x}{\sqrt {t^{2}-1}}\,dt\\[1ex]&={\frac {1}{2}}\left(x{\sqrt {x^{2}-1}}\right)-{\frac {1}{2}}\left(x{\sqrt {x^{2}-1}}-\ln \left(x+{\sqrt {x^{2}-1}}\right)\right),\end{aligned}}}which simplifies to thearea hyperbolic cosinea=arcoshx=ln(x+x2−1).{\displaystyle a=\operatorname {arcosh} x=\ln \left(x+{\sqrt {x^{2}-1}}\right).}Solving forx{\displaystyle x}yields the exponential form of the hyperbolic cosine:x=cosha=ea+e−a2.{\displaystyle x=\cosh a={\frac {e^{a}+e^{-a}}{2}}.}Fromx2−y2=1{\displaystyle x^{2}-y^{2}=1}one getsy=sinha=cosh2a−1=ea−e−a2,{\displaystyle y=\sinh a={\sqrt {\cosh ^{2}a-1}}={\frac {e^{a}-e^{-a}}{2}},}and its inverse thearea hyperbolic sine:a=arsinhy=ln(y+y2+1).{\displaystyle a=\operatorname {arsinh} y=\ln \left(y+{\sqrt {y^{2}+1}}\right).}Other hyperbolic functions are defined according to the hyperbolic cosine and hyperbolic sine, so for exampletanha=sinhacosha=e2a−1e2a+1.{\displaystyle \operatorname {tanh} a={\frac {\sinh a}{\cosh a}}={\frac {e^{2a}-1}{e^{2a}+1}}.}
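The sector-area computation can be checked numerically: approximating the integral with a midpoint rule, twice the sector area should agree with arcosh x. The sample abscissa x = 2.5 is arbitrary:

```python
import math

# Numeric check that the hyperbolic angle a = arcosh(x) is twice the
# sector area: area = x*y/2 - integral_1^x sqrt(t^2 - 1) dt.
x = 2.5                       # arbitrary sample, x > 1
y = math.sqrt(x**2 - 1)

# Midpoint-rule approximation of the integral.
n = 200000
h = (x - 1) / n
integral = sum(math.sqrt((1 + (k + 0.5) * h)**2 - 1) for k in range(n)) * h

sector_area = x * y / 2 - integral
a = math.log(x + math.sqrt(x**2 - 1))          # arcosh(x)
print(2 * sector_area, a)                      # both ~1.5668
```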
The tangent at a pointP{\displaystyle P}bisects the angle between the linesPF1¯,PF2¯.{\displaystyle {\overline {PF_{1}}},{\overline {PF_{2}}}.}This is called theoptical propertyorreflection propertyof a hyperbola.[20]
LetL{\displaystyle L}be the point on the linePF2¯{\displaystyle {\overline {PF_{2}}}}with the distance2a{\displaystyle 2a}to the focusF2{\displaystyle F_{2}}(see diagram,a{\displaystyle a}is the semi major axis of the hyperbola). Linew{\displaystyle w}is the bisector of the angle between the linesPF1¯,PF2¯{\displaystyle {\overline {PF_{1}}},{\overline {PF_{2}}}}. In order to prove thatw{\displaystyle w}is the tangent line at pointP{\displaystyle P}, one checks that any pointQ{\displaystyle Q}on linew{\displaystyle w}which is different fromP{\displaystyle P}cannot be on the hyperbola. Hencew{\displaystyle w}has only pointP{\displaystyle P}in common with the hyperbola and is, therefore, the tangent at pointP{\displaystyle P}.From the diagram and thetriangle inequalityone recognizes that|QF2|<|LF2|+|QL|=2a+|QF1|{\displaystyle |QF_{2}|<|LF_{2}|+|QL|=2a+|QF_{1}|}holds, which means:|QF2|−|QF1|<2a{\displaystyle |QF_{2}|-|QF_{1}|<2a}. But ifQ{\displaystyle Q}is a point of the hyperbola, the difference should be2a{\displaystyle 2a}.
The midpoints of parallel chords of a hyperbola lie on a line through the center (see diagram).
The points of any chord may lie on different branches of the hyperbola.
The proof of the property on midpoints is best done for the hyperbolay=1/x{\displaystyle y=1/x}. Because any hyperbola is an affine image of the hyperbolay=1/x{\displaystyle y=1/x}(see section below) and an affine transformation preserves parallelism and midpoints of line segments, the property is true for all hyperbolas:For two pointsP=(x1,1x1),Q=(x2,1x2){\displaystyle P=\left(x_{1},{\tfrac {1}{x_{1}}}\right),\ Q=\left(x_{2},{\tfrac {1}{x_{2}}}\right)}of the hyperbolay=1/x{\displaystyle y=1/x}
For parallel chords the slope is constant and the midpoints of the parallel chords lie on the liney=1x1x2x.{\displaystyle y={\tfrac {1}{x_{1}x_{2}}}\;x\ .}
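Since a chord of y = 1/x through x₁ and x₂ has slope −1/(x₁x₂), parallel chords share the product x₁x₂, and the midpoint claim can be checked directly (a minimal sketch):

```python
# Midpoints of parallel chords of y = 1/x lie on the line y = x/(x1*x2):
# chords with the same slope -1/(x1*x2) share the product x1*x2.
def midpoint_of_chord(x1: float, x2: float):
    mx = (x1 + x2) / 2
    my = (1 / x1 + 1 / x2) / 2
    return mx, my

prod = 6.0                      # fix x1*x2, i.e. fix the chord slope -1/6
for x1 in (1.0, 1.5, 2.0):      # three parallel chords
    x2 = prod / x1
    mx, my = midpoint_of_chord(x1, x2)
    assert abs(my - mx / prod) < 1e-12   # midpoint is on y = x/(x1*x2)
```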
Consequence: for any pair of pointsP,Q{\displaystyle P,Q}of a chord there exists askew reflectionwith an axis (set of fixed points) passing through the center of the hyperbola, which exchanges the pointsP,Q{\displaystyle P,Q}and leaves the hyperbola (as a whole) fixed. A skew reflection is a generalization of an ordinary reflection across a linem{\displaystyle m}, where all point-image pairs are on a line perpendicular tom{\displaystyle m}.
Because a skew reflection leaves the hyperbola fixed, the pair of asymptotes is fixed, too. Hence the midpointM{\displaystyle M}of a chordPQ{\displaystyle PQ}divides the related line segmentP¯Q¯{\displaystyle {\overline {P}}\,{\overline {Q}}}between the asymptotes into halves, too. This means that|PP¯|=|QQ¯|{\displaystyle |P{\overline {P}}|=|Q{\overline {Q}}|}. This property can be used for the construction of further pointsQ{\displaystyle Q}of the hyperbola if a pointP{\displaystyle P}and the asymptotes are given.
If the chord degenerates into atangent, then the touching point divides the line segment between the asymptotes in two halves.
For a hyperbolax2a2−y2b2=1,a>b{\textstyle {\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1,\,a>b}the intersection points oforthogonaltangents lie on the circlex2+y2=a2−b2{\displaystyle x^{2}+y^{2}=a^{2}-b^{2}}.This circle is called theorthopticof the given hyperbola.
The tangents may belong to points on different branches of the hyperbola.
In case ofa≤b{\displaystyle a\leq b}there are no pairs of orthogonal tangents.
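The orthoptic can be verified numerically using the tangency condition c² = a²m² − b² for a line y = mx + c (a sketch; the slopes chosen are arbitrary):

```python
import math

a, b = 2.0, 1.0                 # a > b, so orthogonal tangent pairs exist
for m in (0.8, 1.0, 1.7):       # slope of the first tangent of each pair
    c1 = math.sqrt(a * a * m * m - b * b)   # tangency: c^2 = a^2 m^2 - b^2
    m2 = -1.0 / m                           # perpendicular slope
    c2 = math.sqrt(a * a * m2 * m2 - b * b)
    x = (c2 - c1) / (m - m2)                # intersection of the two tangents
    y = m * x + c1
    # the intersection lies on the orthoptic circle x^2 + y^2 = a^2 - b^2
    assert abs(x * x + y * y - (a * a - b * b)) < 1e-9
```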
Any hyperbola can be described in a suitable coordinate system by an equationx2a2−y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1}. The equation of the tangent at a pointP0=(x0,y0){\displaystyle P_{0}=(x_{0},y_{0})}of the hyperbola isx0xa2−y0yb2=1.{\displaystyle {\tfrac {x_{0}x}{a^{2}}}-{\tfrac {y_{0}y}{b^{2}}}=1.}If one allows pointP0=(x0,y0){\displaystyle P_{0}=(x_{0},y_{0})}to be an arbitrary point different from the origin, then
This relation between points and lines is abijection.
Theinverse functionmaps
Such a relation between points and lines generated by a conic is calledpole-polar relationor justpolarity. The pole is the point, the polar the line. SeePole and polar.
By calculation one checks the following properties of the pole-polar relation of the hyperbola:
Remarks:
Pole-polar relations exist for ellipses and parabolas, too.
The arc length of a hyperbola does not have anelementary expression. The upper half of a hyperbola can be parameterized as
y=bx2a2−1.{\displaystyle y=b{\sqrt {{\frac {x^{2}}{a^{2}}}-1}}.}
Then the integral giving the arc lengths{\displaystyle s}fromx1{\displaystyle x_{1}}tox2{\displaystyle x_{2}}can be computed as:
s=b∫arcoshx1aarcoshx2a1+(1+a2b2)sinh2vdv.{\displaystyle s=b\int _{\operatorname {arcosh} {\frac {x_{1}}{a}}}^{\operatorname {arcosh} {\frac {x_{2}}{a}}}{\sqrt {1+\left(1+{\frac {a^{2}}{b^{2}}}\right)\sinh ^{2}v}}\,\mathrm {d} v.}
After using the substitutionz=iv{\displaystyle z=iv}, this can also be represented using theincomplete elliptic integral of the second kindE{\displaystyle E}with parameterm=k2{\displaystyle m=k^{2}}:
s=ib[E(iv|1+a2b2)]arcoshx2aarcoshx1a.{\displaystyle s=ib{\Biggr [}E\left(iv\,{\Biggr |}\,1+{\frac {a^{2}}{b^{2}}}\right){\Biggr ]}_{\operatorname {arcosh} {\frac {x_{2}}{a}}}^{\operatorname {arcosh} {\frac {x_{1}}{a}}}.}
Using only real numbers, this becomes[23]
s=b[F(gdv|−a2b2)−E(gdv|−a2b2)+1+a2b2tanh2vsinhv]arcoshx1aarcoshx2a{\displaystyle s=b\left[F\left(\operatorname {gd} v\,{\Biggr |}-{\frac {a^{2}}{b^{2}}}\right)-E\left(\operatorname {gd} v\,{\Biggr |}-{\frac {a^{2}}{b^{2}}}\right)+{\sqrt {1+{\frac {a^{2}}{b^{2}}}\tanh ^{2}v}}\,\sinh v\right]_{\operatorname {arcosh} {\tfrac {x_{1}}{a}}}^{\operatorname {arcosh} {\tfrac {x_{2}}{a}}}}
whereF{\displaystyle F}is theincomplete elliptic integral of the first kindwith parameterm=k2{\displaystyle m=k^{2}}andgdv=arctansinhv{\displaystyle \operatorname {gd} v=\arctan \sinh v}is theGudermannian function.
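As a numerical sanity check of the substitution x = a cosh v, the direct arc-length integral and the hyperbolic form above agree (a sketch in plain Python; the `simpson` helper is an ad hoc composite Simpson rule, not a library routine):

```python
import math

def simpson(f, lo, hi, n=2000):          # composite Simpson's rule, n even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

a, b = 2.0, 1.0
x1, x2 = 3.0, 5.0                        # arc on the right branch, x > a

# direct arc length of y = b*sqrt(x^2/a^2 - 1)
def integrand_x(x):
    dydx = b * x / (a * a * math.sqrt(x * x / (a * a) - 1))
    return math.sqrt(1 + dydx * dydx)

# the same arc after substituting x = a*cosh(v), y = b*sinh(v)
def integrand_v(v):
    return b * math.sqrt(1 + (1 + a * a / (b * b)) * math.sinh(v) ** 2)

v1, v2 = math.acosh(x1 / a), math.acosh(x2 / a)
s_direct = simpson(integrand_x, x1, x2)
s_subst = simpson(integrand_v, v1, v2)
assert abs(s_direct - s_subst) < 1e-6
```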
Several other curves can be derived from the hyperbola byinversion, the so-calledinverse curvesof the hyperbola. If the center of inversion is chosen as the hyperbola's own center, the inverse curve is thelemniscate of Bernoulli; the lemniscate is also the envelope of circles centered on a rectangular hyperbola and passing through the origin. If the center of inversion is chosen at a focus or a vertex of the hyperbola, the resulting inverse curves are alimaçonor astrophoid, respectively.
A family of confocal hyperbolas is the basis of the system ofelliptic coordinatesin two dimensions. These hyperbolas are described by the equation
(xccosθ)2−(ycsinθ)2=1{\displaystyle \left({\frac {x}{c\cos \theta }}\right)^{2}-\left({\frac {y}{c\sin \theta }}\right)^{2}=1}
where the foci are located at a distancecfrom the origin on thex-axis, and where θ is the angle of the asymptotes with thex-axis. Every hyperbola in this family is orthogonal to every ellipse that shares the same foci. This orthogonality may be shown by aconformal mapof the Cartesian coordinate systemw=z+ 1/z, wherez=x+iyare the original Cartesian coordinates, andw=u+ivare those after the transformation.
Other orthogonal two-dimensional coordinate systems involving hyperbolas may be obtained by other conformal mappings. For example, the mappingw=z2transforms the Cartesian coordinate system into two families of orthogonal hyperbolas.
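For w = z², the orthogonality of the two hyperbola families u = x² − y² = const and v = 2xy = const follows from their gradients being perpendicular, which a few sample points confirm (a minimal sketch):

```python
# The mapping w = z^2 sends the Cartesian grid to the hyperbola families
# x^2 - y^2 = const and 2xy = const; their gradients are orthogonal everywhere.
for (x, y) in [(1.0, 2.0), (-0.5, 3.0), (2.5, -1.5)]:
    grad_u = (2 * x, -2 * y)    # gradient of Re(z^2) = x^2 - y^2
    grad_v = (2 * y, 2 * x)     # gradient of Im(z^2) = 2xy
    dot = grad_u[0] * grad_v[0] + grad_u[1] * grad_v[1]
    assert dot == 0.0           # the families cross at right angles
```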
Besides providing a uniform description of circles, ellipses, parabolas, and hyperbolas, conic sections can also be understood as a natural model of the geometry of perspective in the case where the scene being viewed consists of circles, or more generally an ellipse. The viewer is typically a camera or the human eye and the image of the scene acentral projectiononto an image plane, that is, all projection rays pass a fixed pointO, the center. Thelens planeis a plane parallel to the image plane at the lensO.
The image of a circle c is
(Special positions where the circle plane contains pointOare omitted.)
These results can be understood if one recognizes that the projection process can be seen in two steps: 1) circle c and pointOgenerate a cone which is 2) cut by the image plane, in order to generate the image.
One sees a hyperbola whenever catching sight of a portion of a circle cut by one's lens plane. The inability to see very much of the arms of the visible branch, combined with the complete absence of the second branch, makes it virtually impossible for the human visual system to recognize the connection with hyperbolas.
Hyperbolas may be seen in manysundials. On any given day, the sun revolves in a circle on thecelestial sphere, and its rays striking the point on a sundial trace out a cone of light. The intersection of this cone with the horizontal plane of the ground forms a conic section. At most populated latitudes and at most times of the year, this conic section is a hyperbola. In practical terms, the shadow of the tip of a pole traces out a hyperbola on the ground over the course of a day (this path is called thedeclination line). The shape of this hyperbola varies with the geographical latitude and with the time of the year, since those factors affect the cone of the sun's rays relative to the horizon. The collection of such hyperbolas for a whole year at a given location was called apelekinonby the Greeks, since it resembles a double-bladed axe.
A hyperbola is the basis for solvingmultilaterationproblems, the task of locating a point from the differences in its distances to given points — or, equivalently, the difference in arrival times of synchronized signals between the point and the given points. Such problems are important in navigation, particularly on water; a ship can locate its position from the difference in arrival times of signals fromLORANorGPStransmitters. Conversely, a homing beacon or any transmitter can be located by comparing the arrival times of its signals at two separate receiving stations; such techniques may be used to track objects and people. In particular, the set of possible positions of a point that has a distance difference of 2afrom two given points is a hyperbola of vertex separation 2awhose foci are the two given points.
The path followed by any particle in the classicalKepler problemis aconic section. In particular, if the total energyEof the particle is greater than zero (that is, if the particle is unbound), the path of such a particle is a hyperbola. This property is useful in studying atomic and sub-atomic forces by scattering high-energy particles; for example, theRutherford experimentdemonstrated the existence of anatomic nucleusby examining the scattering ofalpha particlesfromgoldatoms. If the short-range nuclear interactions are ignored, the atomic nucleus and the alpha particle interact only by a repulsiveCoulomb force, which satisfies theinverse square lawrequirement for a Kepler problem.[24]
The hyperbolic secant functionsechx{\displaystyle \operatorname {sech} \,x}appears in one solution of theKorteweg–de Vries equation, which describes the motion of a soliton wave in a canal.
As shown first byApollonius of Perga, a hyperbola can be used totrisect any angle, a well studied problem of geometry. Given an angle, first draw a circle centered at its vertexO, which intersects the sides of the angle at pointsAandB. Next draw the line segment with endpointsAandBand its perpendicular bisectorℓ{\displaystyle \ell }. Construct a hyperbola ofeccentricitye=2 withℓ{\displaystyle \ell }asdirectrixandBas a focus. LetPbe the intersection (upper) of the hyperbola with the circle. AnglePOBtrisects angleAOB.
To prove this, reflect the line segmentOPabout the lineℓ{\displaystyle \ell }obtaining the pointP'as the image ofP. SegmentAP'has the same length as segmentBPdue to the reflection, while segmentPP'has the same length as segmentBPdue to the eccentricity of the hyperbola.[25]AsOA,OP',OPandOBare all radii of the same circle (and so, have the same length), the trianglesOAP',OPP'andOPBare all congruent. Therefore, the angle has been trisected, since 3×POB=AOB.[26]
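Numerically, the construction reduces to the focus–directrix relation |PB| = 2·dist(P, ℓ) for a point P at one third of the angle; the sketch below checks this with the unit circle at O and the standard point-to-bisector distance formula:

```python
import math

# Check Apollonius' trisection: O at the origin, unit circle meeting the
# angle's sides at A and B, P at one third of angle AOB. P should satisfy
# d(P,B) = 2 * dist(P, l), where l is the perpendicular bisector of AB
# (eccentricity e = 2 with directrix l and focus B).
theta = math.radians(20)                           # angle POB
B = (1.0, 0.0)
A = (math.cos(3 * theta), math.sin(3 * theta))     # angle AOB = 3*theta
P = (math.cos(theta), math.sin(theta))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# distance from P to the perpendicular bisector of AB:
# |d(P,A)^2 - d(P,B)^2| / (2 |AB|)
d_line = abs(dist(P, A) ** 2 - dist(P, B) ** 2) / (2 * dist(A, B))
assert abs(dist(P, B) - 2 * d_line) < 1e-12
```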
Inportfolio theory, the locus ofmean-variance efficientportfolios (called the efficient frontier) is the upper half of the east-opening branch of a hyperbola drawn with the portfolio return's standard deviation plotted horizontally and its expected value plotted vertically; according to this theory, all rational investors would choose a portfolio characterized by some point on this locus.
Inbiochemistryandpharmacology, theHill equationandHill-Langmuir equationrespectively describe biologicalresponsesand the formation ofprotein–ligand complexesas functions of ligand concentration. They are both rectangular hyperbolae.
Hyperbolas appear as plane sections of the followingquadrics:
Brozinsky, Michael K. (1984),"Reflection Property of the Ellipse and the Hyperbola",College Mathematics Journal,15(2):140–42,doi:10.1080/00494925.1984.11972763,JSTOR2686519
|
https://en.wikipedia.org/wiki/Hyperbola
|
Inprobability theoryandstatistics, aninverse distributionis the distribution of thereciprocalof a random variable. Inverse distributions arise in particular in theBayesiancontext ofprior distributionsandposterior distributionsforscale parameters. In thealgebra of random variables, inverse distributions are special cases of the class ofratio distributions, in which the numerator random variable has adegenerate distribution.
In general, given theprobability distributionof a random variableXwith strictly positive support, it is possible to find the distribution of the reciprocal,Y= 1 /X. If the distribution ofXiscontinuouswithdensity functionf(x) andcumulative distribution functionF(x), then the cumulative distribution function,G(y), of the reciprocal is found by noting thatG(y)=Pr(Y≤y)=Pr(X≥1y)=1−F(1y).{\displaystyle G(y)=\Pr(Y\leq y)=\Pr \left(X\geq {\frac {1}{y}}\right)=1-F\left({\frac {1}{y}}\right).}
Then the density function ofYis found as the derivative of the cumulative distribution function:g(y)=1y2f(1y).{\displaystyle g(y)={\frac {1}{y^{2}}}f\left({\frac {1}{y}}\right).}
Thereciprocal distributionhas a density function of the form[1]f(x)∝x−1,{\displaystyle f(x)\propto x^{-1},}
where∝{\displaystyle \propto \!\,}means"is proportional to".
It follows that the inverse distribution in this case is of the formg(y)∝y−1,{\displaystyle g(y)\propto y^{-1},}
which is again a reciprocal distribution.
If the original random variableXisuniformly distributedon the interval (a,b), wherea>0, then the reciprocal variableY= 1 /Xhas the reciprocal distribution which takes values in the range (b−1,a−1), and the probability density function in this range isg(y)=y−2(b−a)−1,{\displaystyle g(y)=y^{-2}(b-a)^{-1},}
and is zero elsewhere.
The cumulative distribution function of the reciprocal, within the same range, isG(y)=b−y−1b−a.{\displaystyle G(y)={\frac {b-y^{-1}}{b-a}}.}
For example, ifXis uniformly distributed on the interval (0,1), thenY= 1 /Xhas densityg(y)=y−2{\displaystyle g(y)=y^{-2}}and cumulative distribution functionG(y)=1−y−1{\displaystyle G(y)={1-y^{-1}}}wheny>1.{\displaystyle y>1.}
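This cumulative distribution function can be checked by simulation (a sketch; the sample size and tolerance are arbitrary choices):

```python
import random

random.seed(1)
n = 200_000
# X uniform on (0,1): Y = 1/X should have CDF G(y) = 1 - 1/y for y > 1
ys = [1 / random.random() for _ in range(n)]
for y0 in (1.5, 2.0, 5.0):
    empirical = sum(1 for y in ys if y <= y0) / n
    assert abs(empirical - (1 - 1 / y0)) < 0.01
```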
LetXbe atdistributedrandom variate withkdegrees of freedom. Then its density function isf(x)=Γ(k+12)kπΓ(k2)(1+x2k)−k+12.{\displaystyle f(x)={\frac {\Gamma \left({\frac {k+1}{2}}\right)}{{\sqrt {k\pi }}\,\Gamma \left({\frac {k}{2}}\right)}}\left(1+{\frac {x^{2}}{k}}\right)^{-{\frac {k+1}{2}}}.}
The density ofY= 1 /Xisg(y)=Γ(k+12)kπΓ(k2)1y2(1+1ky2)−k+12.{\displaystyle g(y)={\frac {\Gamma \left({\frac {k+1}{2}}\right)}{{\sqrt {k\pi }}\,\Gamma \left({\frac {k}{2}}\right)}}\,{\frac {1}{y^{2}}}\left(1+{\frac {1}{ky^{2}}}\right)^{-{\frac {k+1}{2}}}.}
Withk= 1, the distributions ofXand 1 /Xare identical (Xis thenCauchy distributed(0,1)). Ifk> 1 then the distribution of 1 /Xisbimodal.[citation needed]
If variableX{\displaystyle X}follows anormal distributionN(μ,σ2){\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}, then the inverse or reciprocalY=1X{\displaystyle Y={\frac {1}{X}}}follows a reciprocal normal distribution:[2]g(y)=1σ2πy2exp⁡(−(1/y−μ)22σ2).{\displaystyle g(y)={\frac {1}{\sigma {\sqrt {2\pi }}\,y^{2}}}\exp \left(-{\frac {(1/y-\mu )^{2}}{2\sigma ^{2}}}\right).}
If variableXfollows astandard normal distributionN(0,1){\displaystyle {\mathcal {N}}(0,1)}, thenY= 1/Xfollows areciprocal standard normal distribution,heavy-tailedandbimodal,[2]with modes at±12{\displaystyle \pm {\tfrac {1}{\sqrt {2}}}}and densityg(y)=12πy2e−12y2,{\displaystyle g(y)={\frac {1}{{\sqrt {2\pi }}\,y^{2}}}\,e^{-{\frac {1}{2y^{2}}}},}
and the first and higher-order moments do not exist.[2]For such inverse distributions and forratio distributions, there can still be defined probabilities for intervals, which can be computed either byMonte Carlo simulationor, in some cases, by using the Geary–Hinkley transformation.[3]
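The claimed modes at ±1/√2 can be recovered by a grid search on the change-of-variables density g(y) = f(1/y)/y², which for a standard normal f is proportional to y⁻²e^(−1/(2y²)) (that proportional form is an assumption of this sketch, derived by substitution rather than quoted from the text):

```python
import math

# Unnormalised density of Y = 1/X for X standard normal (y != 0)
def g(y):
    return y ** -2 * math.exp(-1 / (2 * y * y))

ys = [i / 10000 for i in range(1, 30001)]        # grid on (0, 3]
y_max = max(ys, key=g)                           # positive mode
assert abs(y_max - 1 / math.sqrt(2)) < 1e-3      # mode near 0.7071
```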
However, in the more general case of a shifted reciprocal function1/(p−B){\displaystyle 1/(p-B)}, forB=N(μ,σ){\displaystyle B=N(\mu ,\sigma )}following a general normal distribution, then mean and variance statistics do exist in aprincipal valuesense, if the difference between the polep{\displaystyle p}and the meanμ{\displaystyle \mu }is real-valued. The mean of this transformed random variable (reciprocal shifted normal distribution) is then indeed the scaledDawson's function:[4]
In contrast, if the shiftp−μ{\displaystyle p-\mu }is purely complex, the mean exists and is a scaledFaddeeva function, whose exact expression depends on the sign of the imaginary part,Im(p−μ){\displaystyle \operatorname {Im} (p-\mu )}.
In both cases, the variance is a simple function of the mean.[5]Therefore, the variance has to be considered in a principal value sense ifp−μ{\displaystyle p-\mu }is real, while it exists if the imaginary part ofp−μ{\displaystyle p-\mu }is non-zero. Note that these means and variances are exact, as they do not recur to linearisation of the ratio. The exact covariance of two ratios with a pair of different polesp1{\displaystyle p_{1}}andp2{\displaystyle p_{2}}is similarly available.[6]The case of the inverse of acomplex normal variableB{\displaystyle B}, shifted or not, exhibits different characteristics.[4]
IfX{\displaystyle X}is an exponentially distributed random variable with rate parameterλ{\displaystyle \lambda }, thenY=1/X{\displaystyle Y=1/X}has the following cumulative distribution function:FY(y)=e−λ/y{\displaystyle F_{Y}(y)=e^{-\lambda /y}}fory>0{\displaystyle y>0}. Note that the expected value of this random variable does not exist. The reciprocal exponential distribution finds use in the analysis of fading wireless communication systems.
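The stated CDF can be verified by simulation with `random.expovariate`, which samples an Exp(λ) variate (sample size and tolerance are arbitrary):

```python
import math
import random

random.seed(2)
lam = 1.5
n = 200_000
# Y = 1/X for X ~ Exp(lam) should have CDF F_Y(y) = exp(-lam / y), y > 0
ys = [1 / random.expovariate(lam) for _ in range(n)]
for y0 in (0.5, 1.0, 3.0):
    empirical = sum(1 for y in ys if y <= y0) / n
    assert abs(empirical - math.exp(-lam / y0)) < 0.01
```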
IfXis aCauchy distributed(μ,σ) random variable, then 1 / X is a Cauchy (μ/C,σ/C) random variable whereC=μ2+σ2.
IfXis anF(ν1,ν2) distributedrandom variable then 1 /Xis anF(ν2,ν1) random variable.
IfX{\displaystyle X}is distributed according to a Binomial distribution withn{\displaystyle n}number of trials and a probability of successp{\displaystyle p}then no closed form for the reciprocal distribution is known. However, we can calculate the mean of this distribution.
E[1(1+X)]=1p(n+1)(1−(1−p)n+1){\displaystyle E\left[{\frac {1}{(1+X)}}\right]={\frac {1}{p(n+1)}}\left(1-(1-p)^{n+1}\right)}
An asymptotic approximation for the non-central moments of the reciprocal distribution is known.[7]
E[(1+X)a]=O((np)−a)+o(n−a){\displaystyle E[(1+X)^{a}]=O((np)^{-a})+o(n^{-a})}
where O() and o() are the big-O and little-oorder functionsanda{\displaystyle a}is a real number.
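The closed form for the mean can be confirmed by summing directly over the binomial probability mass function (a sketch; the helper name is illustrative):

```python
from math import comb

def mean_reciprocal_shifted(n: int, p: float) -> float:
    """E[1/(1+X)] for X ~ Binomial(n, p), by direct summation."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) / (1 + k)
               for k in range(n + 1))

for n, p in [(5, 0.3), (12, 0.5), (30, 0.07)]:
    closed = (1 - (1 - p) ** (n + 1)) / (p * (n + 1))
    assert abs(mean_reciprocal_shifted(n, p) - closed) < 1e-12
```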
For atriangular distributionwith lower limita, upper limitband modec, wherea<banda≤c≤b, the mean of the reciprocal is given by
μ=2(aln(ac)a−c+bln(cb)b−c)a−b{\displaystyle \mu ={\frac {2\left({\frac {a\,\mathrm {ln} \left({\frac {a}{c}}\right)}{a-c}}+{\frac {b\,\mathrm {ln} \left({\frac {c}{b}}\right)}{b-c}}\right)}{a-b}}}
and the variance by
σ2=2(ln(ca)a−c+ln(bc)b−c)a−b−μ2{\displaystyle \sigma ^{2}={\frac {2\left({\frac {\mathrm {ln} \left({\frac {c}{a}}\right)}{a-c}}+{\frac {\mathrm {ln} \left({\frac {b}{c}}\right)}{b-c}}\right)}{a-b}}-\mu ^{2}}.
Both moments of the reciprocal are only defined when the triangle does not cross zero, i.e. whena,b, andcare either all positive or all negative.
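Both formulas can be checked against numerical integration of the triangular density (a sketch with an ad hoc Simpson rule, split at the mode so each piece is smooth):

```python
import math

a, c, b = 1.0, 2.0, 4.0                  # all positive, a < c < b

def pdf(x):                              # triangular density with mode c
    if x <= c:
        return 2 * (x - a) / ((b - a) * (c - a))
    return 2 * (b - x) / ((b - a) * (b - c))

def simpson(f, lo, hi, n=4000):          # composite Simpson's rule
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# E[1/X] and E[1/X^2] by quadrature, split at the mode
mean = simpson(lambda x: pdf(x) / x, a, c) + simpson(lambda x: pdf(x) / x, c, b)
second = simpson(lambda x: pdf(x) / x**2, a, c) + simpson(lambda x: pdf(x) / x**2, c, b)

mu = 2 * (a * math.log(a / c) / (a - c) + b * math.log(c / b) / (b - c)) / (a - b)
var = 2 * (math.log(c / a) / (a - c) + math.log(b / c) / (b - c)) / (a - b) - mu**2
assert abs(mean - mu) < 1e-8
assert abs((second - mean * mean) - var) < 1e-8
```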
Other inverse distributions include
Inverse distributions are widely used as prior distributions in Bayesian inference for scale parameters.
|
https://en.wikipedia.org/wiki/Inverse_distribution
|
Inmathematicsand especiallynumber theory, thesum of reciprocals(orsum of inverses) generally is computed for thereciprocalsof some or all of thepositiveintegers(counting numbers)—that is, it is generally the sum ofunit fractions. If infinitely many numbers have their reciprocals summed, generally the terms are given in a certain sequence and the firstnof them are summed, then one more is included to give the sum of the firstn+1 of them, etc.
If only finitely many numbers are included, the key issue is usually to find a simple expression for the value of the sum, or to require the sum to be less than a certain value, or to determine whether the sum is ever an integer.
For aninfinite seriesof reciprocals, the issues are twofold: First, does the sequence of sumsdiverge—that is, does it eventually exceed any given number—or does itconverge, meaning there is some number that it gets arbitrarily close to without ever exceeding it? (A set of positive integers is said to belargeif the sum of its reciprocals diverges, and small if it converges.) Second, if it converges, what is a simple expression for the value it converges to, is that valuerationalorirrational, and is that valuealgebraicortranscendental?[1]
Sums of inverses can be extended to sums of inversepowers:
|
https://en.wikipedia.org/wiki/List_of_sums_of_reciprocals
|
Arepeating decimalorrecurring decimalis adecimal representationof a number whosedigitsare eventuallyperiodic(that is, after some place, the same sequence of digits is repeated forever); if this sequence consists only of zeros (that is if there is only a finite number of nonzero digits), the decimal is said to beterminating, and is not considered as repeating.
It can be shown that a number isrationalif and only if its decimal representation is repeating or terminating. For example, the decimal representation of1/3becomes periodic just after thedecimal point, repeating the single digit "3" forever, i.e. 0.333.... A more complicated example is3227/555, whose decimal becomes periodic at theseconddigit following the decimal point and then repeats the sequence "144" forever, i.e. 5.8144144144.... Another example of this is593/53, which becomes periodic after the decimal point, repeating the 13-digit pattern "1886792452830" forever, i.e. 11.18867924528301886792452830....
The infinitely repeated digit sequence is called therepetendorreptend. If the repetend is a zero, this decimal representation is called aterminating decimalrather than a repeating decimal, since the zeros can be omitted and the decimal terminates before these zeros.[1]Every terminating decimal representation can be written as adecimal fraction, a fraction whose denominator is apowerof 10 (e.g.1.585 =1585/1000); it may also be written as aratioof the formk/2n·5m(e.g.1.585 =317/23·52). However,everynumber with a terminating decimal representation also trivially has a second, alternative representation as a repeating decimal whose repetend is the digit "9". This is obtained by decreasing the final (rightmost) non-zero digit by one and appending a repetend of 9. Two examples of this are1.000... = 0.999...and1.585000... = 1.584999.... (This type of repeating decimal can be obtained by long division if one uses a modified form of the usualdivision algorithm.[2])
Any number that cannot be expressed as aratioof twointegersis said to beirrational. Their decimal representation neither terminates nor infinitely repeats, but extends forever without repetition (see§ Every rational number is either a terminating or repeating decimal). Examples of such irrational numbers are√2andπ.[3]
There are several notational conventions for representing repeating decimals. None of them are accepted universally.
In English, there are various ways to read repeating decimals aloud. For example, 1.234may be read "one point two repeating three four", "one point two repeated three four", "one point two recurring three four", "one point two repetend three four" or "one point two into infinity three four". Likewise, 11.1886792452830may be read "eleven point repeating one double eight six seven nine two four five two eight three zero", "eleven point repeated one double eight six seven nine two four five two eight three zero", "eleven point recurring one double eight six seven nine two four five two eight three zero" "eleven point repetend one double eight six seven nine two four five two eight three zero" or "eleven point into infinity one double eight six seven nine two four five two eight three zero".
In order to convert arational numberrepresented as a fraction into decimal form, one may uselong division. For example, consider the rational number5/74:
etc. Observe that at each step we have a remainder; the successive remainders displayed above are 56, 42, 50. When we arrive at 50 as the remainder, and bring down the "0", we find ourselves dividing 500 by 74, which is the same problem we began with. Therefore, the decimal repeats:0.0675675675....
For any integer fractionA/B, the remainder at step k, for any positive integerk, isA× 10k(moduloB).
For any given divisor, only finitely many different remainders can occur. In the example above, the 74 possible remainders are 0, 1, 2, ..., 73. If at any point in the division the remainder is 0, the expansion terminates at that point. Then the length of the repetend, also called "period", is defined to be 0.
If 0 never occurs as a remainder, then the division process continues forever, and eventually, a remainder must occur that has occurred before. The next step in the division will yield the same new digit in the quotient, and the same new remainder, as the previous time the remainder was the same. Therefore, the following division will repeat the same results. The repeating sequence of digits is called "repetend" which has a certain length greater than 0, also called "period".[5]
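The remainder-tracking argument translates directly into code: record each remainder's position, and when a remainder recurs, the digits since its first occurrence form the repetend (a sketch; the function name is illustrative):

```python
def decimal_expansion(num: int, den: int):
    """Long division of num/den (0 < num < den), tracking remainders to
    detect the repetend. Returns (non-repeating digits, repetend)."""
    digits, seen, r = [], {}, num
    while r != 0 and r not in seen:
        seen[r] = len(digits)        # position where this remainder appeared
        r *= 10
        digits.append(str(r // den))
        r %= den
    if r == 0:                       # expansion terminates
        return "".join(digits), ""
    start = seen[r]                  # repetition begins here
    return "".join(digits[:start]), "".join(digits[start:])

assert decimal_expansion(5, 74) == ("0", "675")   # 5/74 = 0.0675675675...
assert decimal_expansion(1, 4) == ("25", "")      # terminating
assert decimal_expansion(1, 7) == ("", "142857")
```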
In base 10, a fraction has a repeating decimal if and only ifin lowest terms, its denominator has any prime factors besides 2 or 5, or in other words, cannot be expressed as 2m5n, wheremandnare non-negative integers.
Each repeating decimal number satisfies alinear equationwith integer coefficients, and its unique solution is a rational number. In the example above,α= 5.8144144144...satisfies the equation
The process of how to find these integer coefficients is describedbelow.
Given a repeating decimalx=a.bc¯{\displaystyle x=a.b{\overline {c}}}wherea{\displaystyle a},b{\displaystyle b}, andc{\displaystyle c}are groups of digits, letn{\displaystyle n}be the number of digits ofb{\displaystyle b}. Multiplying by10n{\displaystyle 10^{n}}separates the repeating and terminating groups:
10nx=ab.c¯.{\displaystyle 10^{n}x=ab.{\bar {c}}.}
If the decimals terminate (c=0{\displaystyle c=0}), the proof is complete.[6]Forc≠0{\displaystyle c\neq 0}withk∈N{\displaystyle k\in \mathbb {N} }digits, letx=y.c¯{\displaystyle x=y.{\bar {c}}}wherey∈Z{\displaystyle y\in \mathbb {Z} }is a terminating group of digits. Then,
c=d1d2...dk{\displaystyle c=d_{1}d_{2}\,...d_{k}}
wheredi{\displaystyle d_{i}}denotes thei-thdigit, and
x=y+∑n=1∞c(10k)n=y+(c∑n=0∞1(10k)n)−c.{\displaystyle x=y+\sum _{n=1}^{\infty }{\frac {c}{{(10^{k})}^{n}}}=y+\left(c\sum _{n=0}^{\infty }{\frac {1}{{(10^{k})}^{n}}}\right)-c.}
Since∑n=0∞1(10k)n=11−10−k{\displaystyle \textstyle \sum _{n=0}^{\infty }{\frac {1}{{(10^{k})}^{n}}}={\frac {1}{1-10^{-k}}}},[7]
x=y−c+10kc10k−1.{\displaystyle x=y-c+{\frac {10^{k}c}{10^{k}-1}}.}
Sincex{\displaystyle x}is the sum of an integer (y−c{\displaystyle y-c}) and a rational number (10kc10k−1{\textstyle {\frac {10^{k}c}{10^{k}-1}}}),x{\displaystyle x}is also rational.[8]
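The argument above amounts to the familiar subtraction trick, which Python's exact `fractions` module makes easy to test (the helper name is illustrative, not standard):

```python
from fractions import Fraction

def repeating_to_fraction(integer: str, terminating: str, repetend: str) -> Fraction:
    """Exact value of the decimal integer.terminating(repetend repeated),
    via (full digits - truncated digits) / (10^n * (10^k - 1))."""
    full = int(integer + terminating + repetend)
    trunc = int(integer + terminating)
    return Fraction(full - trunc,
                    10 ** len(terminating) * (10 ** len(repetend) - 1))

assert repeating_to_fraction("5", "8", "144") == Fraction(3227, 555)  # 5.8(144)
assert repeating_to_fraction("0", "", "3") == Fraction(1, 3)          # 0.(3)
assert repeating_to_fraction("11", "", "1886792452830") == Fraction(593, 53)
```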
Here,fractionis theunit fraction1/nandℓ10is the length of the (decimal) repetend.
The lengthsℓ10(n) of the decimal repetends of1/n,n= 1, 2, 3, ..., are:
For comparison, the lengthsℓ2(n) of thebinaryrepetends of the fractions1/n,n= 1, 2, 3, ..., are:
The decimal repetends of1/n,n= 1, 2, 3, ..., are:
The decimal repetend lengths of1/p,p= 2, 3, 5, ... (nth prime), are:
The least primespfor which1/phas decimal repetend lengthn,n= 1, 2, 3, ..., are:
The least primespfor whichk/phasndifferent cycles (1 ≤k≤p−1),n= 1, 2, 3, ..., are:
A fractionin lowest termswith aprimedenominator other than 2 or 5 (i.e.coprimeto 10) always produces a repeating decimal. The length of the repetend (period of the repeating decimal segment) of1/pis equal to theorderof 10 modulop. If 10 is aprimitive rootmodulop, then the repetend length is equal top− 1; if not, then the repetend length is a factor ofp− 1. This result can be deduced fromFermat's little theorem, which states that10p−1≡ 1 (modp).
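The period as the order of 10 modulo p can be computed by repeated multiplication (a sketch; the function name is illustrative):

```python
def repetend_length(p: int) -> int:
    """Multiplicative order of 10 modulo p (p coprime to 10):
    the period of the decimal expansion of 1/p."""
    k, r = 1, 10 % p
    while r != 1:
        r = r * 10 % p
        k += 1
    return k

assert repetend_length(7) == 6    # 1/7 = 0.(142857): 10 is a primitive root mod 7
assert repetend_length(11) == 2   # 1/11 = 0.(09): the order divides 11 - 1
assert repetend_length(13) == 6   # 6 is a proper factor of 13 - 1 = 12
assert (10 ** 6 - 1) % 7 == 0     # Fermat: 10^(p-1) = 1 (mod p)
```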
The base-10digital rootof the repetend of the reciprocal of any prime number greater than 5 is 9.[9]
If the repetend length of1/pfor primepis equal top− 1 then the repetend, expressed as an integer, is called acyclic number.
Examples of fractions belonging to this group are:
The list can go on to include the fractions1/109,1/113,1/131,1/149,1/167,1/179,1/181,1/193,1/223,1/229, etc. (sequenceA001913in theOEIS).
Everypropermultiple of a cyclic number (that is, a multiple having the same number of digits) is a rotation:
The reason for the cyclic behavior is apparent from an arithmetic exercise of long division of1/7: the sequential remainders are the cyclic sequence{1, 3, 2, 6, 4, 5}. See also the article142,857for more properties of this cyclic number.
A fraction which is cyclic thus has a recurring decimal of even length that divides into two sequences innines' complementform. For example1/7starts '142' and is followed by '857' while6/7(by rotation) starts '857' followed byitsnines' complement '142'.
The rotation of the repetend of a cyclic number always happens in such a way that each successive repetend is a bigger number than the previous one. In the succession above, for instance, we see that 0.142857... < 0.285714... < 0.428571... < 0.571428... < 0.714285... < 0.857142.... This, for cyclic fractions with long repetends, allows us to easily predict what the result of multiplying the fraction by any natural number n will be, as long as the repetend is known.
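For 1/7 these rotation and ordering properties are easy to confirm (a minimal sketch):

```python
cyclic = "142857"                                   # repetend of 1/7
rotations = {cyclic[i:] + cyclic[:i] for i in range(len(cyclic))}
multiples = [str(142857 * k) for k in range(1, 7)]  # proper multiples 1x..6x
for m in multiples:
    assert len(m) == 6 and m in rotations           # each multiple is a rotation
# successive repetends increase: 142857 < 285714 < ... < 857142
assert sorted(int(m) for m in multiples) == [int(m) for m in multiples]
```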
Aproper primeis a primepwhich ends in the digit 1 in base 10 and whose reciprocal in base 10 has a repetend with lengthp− 1. In such primes, each digit 0, 1,..., 9 appears in the repeating sequence the same number of times as does each other digit (namely,p− 1/10times). They are:[10]: 166
A prime is a proper prime if and only if it is afull reptend primeandcongruentto 1 mod 10.
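Taking 61 (a known proper prime) as an example, long division confirms both the full-length repetend and the equal digit counts (a sketch; the helper is illustrative):

```python
from collections import Counter

def repetend_digits(p: int) -> str:
    """Digits of the repetend of 1/p by long division (p coprime to 10)."""
    digits, r = [], 1
    while True:
        r *= 10
        digits.append(str(r // p))
        r %= p
        if r == 1:                   # remainder 1 recurs: repetend complete
            return "".join(digits)

rep = repetend_digits(61)
assert len(rep) == 60                               # full reptend: period = p - 1
counts = Counter(rep)
assert all(counts[d] == 6 for d in "0123456789")    # each digit (p-1)/10 times
```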
If a primepis bothfull reptend primeandsafe prime, then1/pwill produce a stream ofp− 1pseudo-random digits. Those primes are
Some reciprocals of primes that do not generate cyclic numbers are:
(sequenceA006559in theOEIS)
The reason is that 3 is a divisor of 9, 11 is a divisor of 99, 41 is a divisor of 99999, etc.
To find the period of1/p, we can check whether the primepdivides some number 999...999 in which the number of digits dividesp− 1. Since the period is never greater thanp− 1, we can obtain this by calculating10p−1− 1/p. For example, for 11 we get
and then by inspection find the repetend 09 and period of 2.
Those reciprocals of primes can be associated with several sequences of repeating decimals. For example, the multiples of1/13can be divided into two sets, with different repetends. The first set is:
where the repetend of each fraction is a cyclic re-arrangement of 076923. The second set is:
where the repetend of each fraction is a cyclic re-arrangement of 153846.
In general, the set of proper multiples of reciprocals of a primepconsists ofnsubsets, each with repetend lengthk, wherenk=p− 1.
For an arbitrary integern, the lengthL(n) of the decimal repetend of1/ndividesφ(n), whereφis thetotient function. The length is equal toφ(n)if and only if 10 is aprimitive root modulon.[11]
In particular, it follows thatL(p) =p− 1if and only ifpis a prime and 10 is a primitive root modulop. Then, the decimal expansions ofn/pforn= 1, 2, ...,p− 1, all have periodp− 1 and differ only by a cyclic permutation. Such numberspare calledfull repetend primes.
Ifpis a prime other than 2 or 5, the decimal representation of the fraction1/p2repeats:
The period (repetend length)L(49) must be a factor ofλ(49) = 42, whereλ(n) is known as theCarmichael function. This follows fromCarmichael's theoremwhich states that ifnis a positive integer thenλ(n) is the smallest integermsuch that
for every integerathat iscoprimeton.
The period of1/p2is usuallypTp, whereTpis the period of1/p. There are three known primes for which this is not true, and for those the period of1/p2is the same as the period of1/pbecausep2divides 10p−1−1. These three primes are 3, 487, and 56598313 (sequenceA045616in theOEIS).[12]
Similarly, the period of 1/p^k is usually p^(k−1) T_p.
If p and q are distinct primes other than 2 or 5, the decimal representation of the fraction 1/pq repeats; an example is 1/119.
The period T of 1/pq is a factor of λ(pq) = LCM(p − 1, q − 1), where LCM denotes the least common multiple; in this case λ(119) = LCM(6, 16) = 48, and T happens to equal 48 as well.
The period T of 1/pq is LCM(T_p, T_q), where T_p is the period of 1/p and T_q is the period of 1/q.
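This LCM relation can be verified numerically; the sketch below (helper name is illustrative) computes periods as multiplicative orders of 10 and checks the 1/119 = 1/(7 × 17) example:

```python
from math import lcm

def mult_order_10(n: int) -> int:
    """Multiplicative order of 10 modulo n (n coprime to 10) = period of 1/n."""
    k, r = 1, 10 % n
    while r != 1:
        r = (r * 10) % n
        k += 1
    return k

Tp, Tq = mult_order_10(7), mult_order_10(17)
print(Tp, Tq, lcm(Tp, Tq))   # 6 16 48
print(mult_order_10(119))    # 48, the period of 1/119, as claimed
```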
If p, q, r, etc. are primes other than 2 or 5, and k, ℓ, m, etc. are positive integers, then 1/(p^k q^ℓ r^m ⋯)
is a repeating decimal with a period of LCM(T_{p^k}, T_{q^ℓ}, T_{r^m}, ...),
where T_{p^k}, T_{q^ℓ}, T_{r^m}, ... are respectively the periods of the repeating decimals 1/p^k, 1/q^ℓ, 1/r^m, ... as defined above.
An integer that is not coprime to 10 but that also has a prime factor other than 2 or 5 has a reciprocal that is eventually periodic, with a non-repeating sequence of digits preceding the repeating part. The reciprocal can be expressed as:
whereaandbare not both zero.
This fraction can also be expressed as:
ifa>b, or as
ifb>a, or as
ifa=b.
The decimal has:
For example, 1/28 = 0.03571428571428..., with non-repeating prefix 03 and repetend 571428:
Given a repeating decimal, it is possible to calculate the fraction that produces it. For example:
Another example:
The procedure below can be applied in particular if the repetend has n digits, all of which are 0 except the final one which is 1. For instance for n = 7:
So this particular repeating decimal corresponds to the fraction 1/(10^n − 1), where the denominator is the number written as n 9s. Knowing just that, a general repeating decimal can be expressed as a fraction without having to solve an equation. For example, one could reason:
or
It is possible to get a general formula expressing a repeating decimal with ann-digit period (repetend length), beginning right after the decimal point, as a fraction:
More explicitly, one gets the following cases:
If the repeating decimal is between 0 and 1, and the repeating block is n digits long, first occurring right after the decimal point, then the fraction (not necessarily reduced) will be the integer number represented by the n-digit block divided by the one represented by n 9s. For example,
If the repeating decimal is as above, except that there are k (extra) digits 0 between the decimal point and the repeating n-digit block, then one can simply add k digits 0 after the n digits 9 of the denominator (and, as before, the fraction may subsequently be simplified). For example,
Any repeating decimal not of the form described above can be written as a sum of a terminating decimal and a repeating decimal of one of the two above types (actually the first type suffices, but that could require the terminating decimal to be negative). For example,
An even faster method is to ignore the decimal point completely and go like this
It follows that any repeating decimal with period n, and k digits after the decimal point that do not belong to the repeating part, can be written as a (not necessarily reduced) fraction whose denominator is (10^n − 1)10^k.
Conversely, the period of the repeating decimal of a fraction c/d will be (at most) the smallest number n such that 10^n − 1 is divisible by d.
For example, the fraction 2/7 has d = 7, and the smallest n that makes 10^n − 1 divisible by 7 is n = 6, because 999999 = 7 × 142857. The period of the fraction 2/7 is therefore 6.
The above shortcut can be compressed into a single scheme.
Here I{\displaystyle \mathbf {I} } represents the digits of the integer part of the decimal number (to the left of the decimal point), A{\displaystyle \mathbf {A} } is the string of digits of the preperiod and #A{\displaystyle \#\mathbf {A} } its length, and P{\displaystyle \mathbf {P} } is the string of repeated digits (the period), with nonzero length #P{\displaystyle \#\mathbf {P} }.
In the generated fraction, the digit9{\displaystyle 9}will be repeated#P{\displaystyle \#\mathbf {P} }times, and the digit0{\displaystyle 0}will be repeated#A{\displaystyle \#\mathbf {A} }times.
Note that in the absence of an integer part in the decimal, I{\displaystyle \mathbf {I} } will be represented by zero, which, being to the left of the other digits, will not affect the final result, and may be omitted in the calculation of the generated fraction.
Examples:
3.254444…=3.254¯={I=3A=25P=4#A=2#P=1}=3254−325900=29299000.512512…=0.512¯={I=0A=∅P=512#A=0#P=3}=512−0999=5129991.09191…=1.091¯={I=1A=0P=91#A=1#P=2}=1091−10990=10819901.333…=1.3¯={I=1A=∅P=3#A=0#P=1}=13−19=129=430.3789789…=0.3789¯={I=0A=3P=789#A=1#P=3}=3789−39990=37869990=6311665{\displaystyle {\begin{array}{lllll}3.254444\ldots &=3.25{\overline {4}}&={\begin{Bmatrix}\mathbf {I} =3&\mathbf {A} =25&\mathbf {P} =4\\&\#\mathbf {A} =2&\#\mathbf {P} =1\end{Bmatrix}}&={\dfrac {3254-325}{900}}&={\dfrac {2929}{900}}\\\\0.512512\ldots &=0.{\overline {512}}&={\begin{Bmatrix}\mathbf {I} =0&\mathbf {A} =\emptyset &\mathbf {P} =512\\&\#\mathbf {A} =0&\#\mathbf {P} =3\end{Bmatrix}}&={\dfrac {512-0}{999}}&={\dfrac {512}{999}}\\\\1.09191\ldots &=1.0{\overline {91}}&={\begin{Bmatrix}\mathbf {I} =1&\mathbf {A} =0&\mathbf {P} =91\\&\#\mathbf {A} =1&\#\mathbf {P} =2\end{Bmatrix}}&={\dfrac {1091-10}{990}}&={\dfrac {1081}{990}}\\\\1.333\ldots &=1.{\overline {3}}&={\begin{Bmatrix}\mathbf {I} =1&\mathbf {A} =\emptyset &\mathbf {P} =3\\&\#\mathbf {A} =0&\#\mathbf {P} =1\end{Bmatrix}}&={\dfrac {13-1}{9}}&={\dfrac {12}{9}}&={\dfrac {4}{3}}\\\\0.3789789\ldots &=0.3{\overline {789}}&={\begin{Bmatrix}\mathbf {I} =0&\mathbf {A} =3&\mathbf {P} =789\\&\#\mathbf {A} =1&\#\mathbf {P} =3\end{Bmatrix}}&={\dfrac {3789-3}{9990}}&={\dfrac {3786}{9990}}&={\dfrac {631}{1665}}\end{array}}}
The symbol∅{\displaystyle \emptyset }in the examples above denotes the absence of digits of partA{\displaystyle \mathbf {A} }in the decimal, and therefore#A=0{\displaystyle \#\mathbf {A} =0}and a corresponding absence in the generated fraction.
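The scheme above translates directly into code; the following sketch (function name illustrative) takes the strings I, A, and P and builds the generated fraction exactly as in the examples:

```python
from fractions import Fraction

# Sketch of the scheme above: numerator = digits "I A P" minus digits "I A",
# denominator = one 9 per period digit followed by one 0 per preperiod digit.
def to_fraction(I: str, A: str, P: str) -> Fraction:
    num = int(I + A + P) - int((I + A) or "0")
    den = int("9" * len(P) + "0" * len(A))
    return Fraction(num, den)  # Fraction reduces automatically

print(to_fraction("3", "25", "4"))   # 2929/900  for 3.2544...
print(to_fraction("0", "", "512"))   # 512/999   for 0.512512...
print(to_fraction("1", "0", "91"))   # 1081/990  for 1.09191...
```

Because `Fraction` reduces automatically, the 0.3789789... example yields 631/1665 rather than the unreduced 3786/9990.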
A repeating decimal can also be expressed as aninfinite series. That is, a repeating decimal can be regarded as the sum of an infinite number of rational numbers. To take the simplest example,
The above series is a geometric series with first term 1/10 and common ratio 1/10. Because the absolute value of the common ratio is less than 1, we can say that the geometric series converges and find the exact value in the form of a fraction by using the following formula, where a is the first term of the series and r is the common ratio.
Similarly,
The cyclic behavior of repeating decimals in multiplication also leads to the construction of integers which are cyclically permuted when multiplied by certain numbers. For example, 102564 × 4 = 410256. 102564 is the repetend of 4/39 and 410256 the repetend of 16/39.
Various properties of repetend lengths (periods) are given by Mitchell[13]and Dickson.[14]
For some other properties of repetends, see also.[15]
Various features of repeating decimals extend to the representation of numbers in all other integer bases, not just base 10:
For example, in duodecimal, 1/2 = 0.6, 1/3 = 0.4, 1/4 = 0.3 and 1/6 = 0.2 all terminate; 1/5 = 0.2497 repeats with period length 4, in contrast with the equivalent decimal expansion of 0.2; 1/7 = 0.186A35 has period 6 in duodecimal, just as it does in decimal.
Ifbis an integer base andkis an integer, then
For example 1/7 in duodecimal:17=(1101+5102+21103+A5104+441105+1985106+⋯)base 12{\displaystyle {\frac {1}{7}}=\left({\frac {1}{10^{\phantom {1}}}}+{\frac {5}{10^{2}}}+{\frac {21}{10^{3}}}+{\frac {A5}{10^{4}}}+{\frac {441}{10^{5}}}+{\frac {1985}{10^{6}}}+\cdots \right)_{\text{base 12}}}
which is 0.186A35 in base 12. Here 10 (base 12) is 12 (base 10), 10^2 (base 12) is 144 (base 10), 21 (base 12) is 25 (base 10), and A5 (base 12) is 125 (base 10).
For a rational 0 < p/q < 1 (and base b ∈ ℕ with b > 1) there is the following algorithm producing the repetend together with its length:
At each step, the algorithm computes the digit z = ⌊b·p/q⌋ and the new remainder p′ = b·p mod q of the division modulo the denominator q. As a consequence of the floor function, we have
b·p = z·q + p′ with 0 ≤ p′ < q,
thus p′ = b·p − z·q
and 0 ≤ z < b.
Because all these remainders p are non-negative integers less than q, there can be only a finite number of them, with the consequence that they must recur in the while loop. Such a recurrence is detected by the associative array occurs. The next digit z is computed from p, which is the only value that changes from one iteration to the next. The length L of the repetend equals the number of the remainders (see also section Every rational number is either a terminating or repeating decimal).
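The algorithm described above can be sketched as follows (the function name is illustrative); the dict `occurs` records where each remainder first appeared, so the digits before that position form the preperiod and the rest form the repetend:

```python
# Sketch of the long-division algorithm for 0 < p/q < 1 in base b:
# repeatedly multiply the remainder by b and divide by q, recording each
# remainder's first position in `occurs` until a remainder recurs.
# (A terminating expansion shows up as the trivial repetend [0].)
def repetend(p: int, q: int, b: int = 10):
    """Return (preperiod_digits, repetend_digits) of p/q in base b."""
    occurs = {}   # remainder -> index of the digit it produced
    digits = []
    while p not in occurs:
        occurs[p] = len(digits)
        z, p = divmod(b * p, q)   # digit z = floor(b*p/q), new remainder
        digits.append(z)
    start = occurs[p]
    return digits[:start], digits[start:]

print(repetend(1, 7))      # ([], [1, 4, 2, 8, 5, 7])
print(repetend(1, 28))     # ([0, 3], [5, 7, 1, 4, 2, 8])
print(repetend(1, 7, 12))  # ([], [1, 8, 6, 10, 3, 5]), i.e. 0.186A35... in base 12
```

The length of the returned repetend is the period L, and the number of iterations is bounded by q since no remainder can repeat before the loop stops.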
Repeating decimals (also called decimal sequences) have found cryptographic and error-correction coding applications.[16] In these applications, repeating fractions in base 2 are generally used, which gives rise to binary sequences. The maximum-length binary sequence for 1/p (when 2 is a primitive root of p) is given by:[17]
|
https://en.wikipedia.org/wiki/Repeating_decimal
|
In mathematics, 6-sphere coordinates are a coordinate system for three-dimensional space obtained by inverting the 3D Cartesian coordinates across the unit 2-sphere x2+y2+z2=1{\displaystyle x^{2}+y^{2}+z^{2}=1}. They are so named because the loci where one coordinate is constant form spheres tangent to the origin from one of six sides (depending on which coordinate is held constant and whether its value is positive or negative). This coordinate system exists independently from, and has no relation to, the 6-sphere.
The three coordinates are u = x/(x^2 + y^2 + z^2), v = y/(x^2 + y^2 + z^2), w = z/(x^2 + y^2 + z^2).
Since inversion is an involution, the equations for x, y, and z in terms of u, v, and w are similar: x = u/(u^2 + v^2 + w^2), y = v/(u^2 + v^2 + w^2), z = w/(u^2 + v^2 + w^2).
This coordinate system isR{\displaystyle R}-separablefor the 3-variable Laplace equation.
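A minimal sketch, assuming the standard inversion formulas (u, v, w) = (x, y, z)/(x^2 + y^2 + z^2), checks numerically that applying the map twice returns the original point, i.e. that the inversion is an involution (exact rationals are used to avoid rounding):

```python
from fractions import Fraction

# Sketch: sphere inversion across the unit 2-sphere; applying it twice
# must return the original point, since inversion is an involution.
def invert(x, y, z):
    r2 = x * x + y * y + z * z
    return x / r2, y / r2, z / r2

p = (Fraction(1), Fraction(2), Fraction(2))   # here r^2 = 9
print(invert(*p))            # each coordinate divided by 9
print(invert(*invert(*p)))   # back to (1, 2, 2)
```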
|
https://en.wikipedia.org/wiki/6-sphere_coordinates
|
A unit fraction is a positive fraction with one as its numerator, 1/n. It is the multiplicative inverse (reciprocal) of the denominator of the fraction, which must be a positive natural number. Examples are 1/1, 1/2, 1/3, 1/4, 1/5, etc. When an object is divided into equal parts, each part is a unit fraction of the whole.
Multiplying two unit fractions produces another unit fraction, but other arithmetic operations do not preserve unit fractions. In modular arithmetic, unit fractions can be converted into equivalent whole numbers, allowing modular division to be transformed into multiplication. Every rational number can be represented as a sum of distinct unit fractions; these representations are called Egyptian fractions based on their use in ancient Egyptian mathematics. Many infinite sums of unit fractions are meaningful mathematically.
In geometry, unit fractions can be used to characterize the curvature of triangle groups and the tangencies of Ford circles. Unit fractions are commonly used in fair division, and this familiar application is used in mathematics education as an early step toward the understanding of other fractions. Unit fractions are common in probability theory due to the principle of indifference. They also have applications in combinatorial optimization and in analyzing the pattern of frequencies in the hydrogen spectral series.
The unit fractions are therational numbersthat can be written in the form1n,{\displaystyle {\frac {1}{n}},}wheren{\displaystyle n}can be any positivenatural number. They are thus themultiplicative inversesof the positive integers. When something is divided inton{\displaystyle n}equal parts, each part is a1/n{\displaystyle 1/n}fraction of the whole.[1]
Multiplyingany two unit fractions results in a product that is another unit fraction:[2]1x×1y=1xy.{\displaystyle {\frac {1}{x}}\times {\frac {1}{y}}={\frac {1}{xy}}.}However,adding,[3]subtracting,[3]ordividingtwo unit fractions produces a result that is generally not a unit fraction:1x+1y=x+yxy{\displaystyle {\frac {1}{x}}+{\frac {1}{y}}={\frac {x+y}{xy}}}
1x−1y=y−xxy{\displaystyle {\frac {1}{x}}-{\frac {1}{y}}={\frac {y-x}{xy}}}
1x÷1y=yx.{\displaystyle {\frac {1}{x}}\div {\frac {1}{y}}={\frac {y}{x}}.}
As the last of these formulas shows, every fraction can be expressed as a quotient of two unit fractions.[4]
Inmodular arithmetic, any unit fraction can be converted into an equivalent whole number using theextended Euclidean algorithm.[5][6]This conversion can be used to perform modular division: dividing by a numberx{\displaystyle x}, moduloy{\displaystyle y}, can be performed by converting the unit fraction1/x{\displaystyle 1/x}into an equivalent whole number moduloy{\displaystyle y}, and then multiplying by that number.[7]
In more detail, suppose thatx{\displaystyle x}isrelatively primetoy{\displaystyle y}(otherwise, division byx{\displaystyle x}is not defined moduloy{\displaystyle y}). The extended Euclidean algorithm for thegreatest common divisorcan be used to find integersa{\displaystyle a}andb{\displaystyle b}such thatBézout's identityis satisfied:ax+by=gcd(x,y)=1.{\displaystyle \displaystyle ax+by=\gcd(x,y)=1.}In modulo-y{\displaystyle y}arithmetic, the termby{\displaystyle by}can be eliminated as it is zero moduloy{\displaystyle y}. This leavesax≡1(mody).{\displaystyle \displaystyle ax\equiv 1{\pmod {y}}.}That is,a{\displaystyle a}is the modular inverse ofx{\displaystyle x}, the number that when multiplied byx{\displaystyle x}produces one. Equivalently,[5][6]a≡1x(mody).{\displaystyle a\equiv {\frac {1}{x}}{\pmod {y}}.}Thus division byx{\displaystyle x}(moduloy{\displaystyle y}) can instead be performed by multiplying by the integera{\displaystyle a}.[7]
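The derivation above can be sketched directly in code (function names are illustrative): the extended Euclidean algorithm produces the Bézout coefficient a, which serves as the whole-number equivalent of 1/x modulo y:

```python
# Sketch: convert the unit fraction 1/x into a whole number modulo y via
# the extended Euclidean algorithm, then divide by multiplying.
def extended_gcd(a: int, b: int):
    """Return (g, s, t) with a*s + b*t == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def mod_inverse(x: int, y: int) -> int:
    g, a, _ = extended_gcd(x, y)
    if g != 1:
        raise ValueError("x must be coprime to y")
    return a % y  # the whole-number equivalent of 1/x modulo y

print(mod_inverse(3, 7))            # 5, since 3 * 5 = 15 ≡ 1 (mod 7)
print((6 * mod_inverse(3, 7)) % 7)  # 2, i.e. 6 / 3 ≡ 2 (mod 7)
```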
Several constructions in mathematics involve combining multiple unit fractions together, often by adding them.
Any positive rational number can be written as the sum of distinct unit fractions, in multiple ways. For example,
These sums are calledEgyptian fractions, because the ancient Egyptian civilisations used them as notation for more generalrational numbers. There is still interest today in analyzing the methods used by the ancients to choose among the possible representations for a fractional number, and to calculate with such representations.[8]The topic of Egyptian fractions has also seen interest in modernnumber theory; for instance, theErdős–Graham problem[9]and theErdős–Straus conjecture[10]concern sums of unit fractions, as does the definition ofOre's harmonic numbers.[11]
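One classical way to produce such a representation is the greedy (Fibonacci–Sylvester) method, sketched below; it is one of many possible expansion strategies, not the only one, and the function name is illustrative:

```python
from fractions import Fraction
from math import ceil

# Sketch of the greedy (Fibonacci-Sylvester) method: repeatedly subtract
# the largest unit fraction not exceeding what remains.
def egyptian(r: Fraction) -> list[int]:
    """Return denominators of distinct unit fractions summing to r (0 < r < 1)."""
    denoms = []
    while r > 0:
        d = ceil(1 / r)          # smallest d with 1/d <= r
        denoms.append(d)
        r -= Fraction(1, d)
    return denoms

print(egyptian(Fraction(5, 6)))   # [2, 3]  since 5/6 = 1/2 + 1/3
print(egyptian(Fraction(2, 3)))   # [2, 6]  since 2/3 = 1/2 + 1/6
```

The greedy choice strictly decreases the numerator of the remainder, which guarantees termination, though it can produce needlessly large denominators compared to other methods.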
Ingeometric group theory,triangle groupsare classified into Euclidean, spherical, and hyperbolic cases according to whether an associated sum of unit fractions is equal to one, greater than one, or less than one respectively.[12]
Many well-knowninfinite serieshave terms that are unit fractions. These include:
AHilbert matrixis asquare matrixin which the elements on thei{\displaystyle i}thantidiagonalall equal the unit fraction1/i{\displaystyle 1/i}. That is, it has elementsBi,j=1i+j−1.{\displaystyle B_{i,j}={\frac {1}{i+j-1}}.}For example, the matrix[11213121314131415]{\displaystyle {\begin{bmatrix}1&{\frac {1}{2}}&{\frac {1}{3}}\\{\frac {1}{2}}&{\frac {1}{3}}&{\frac {1}{4}}\\{\frac {1}{3}}&{\frac {1}{4}}&{\frac {1}{5}}\end{bmatrix}}}is a Hilbert matrix. It has the unusual property that all elements in itsinverse matrixare integers.[19]Similarly,Richardson (2001)defined a matrix whose elements are unit fractions whose denominators areFibonacci numbers:Ci,j=1Fi+j−1,{\displaystyle C_{i,j}={\frac {1}{F_{i+j-1}}},}whereFi{\displaystyle F_{i}}denotes thei{\displaystyle i}thFibonacci number. He calls this matrix the Filbert matrix and it has the same property of having an integer inverse.[20]
Two fractionsa/b{\displaystyle a/b}andc/d{\displaystyle c/d}(in lowest terms) are calledadjacentifad−bc=±1,{\displaystyle ad-bc=\pm 1,}which implies that they differ from each other by a unit fraction:|1a−1b|=|ad−bc|bd=1bd.{\displaystyle \left|{\frac {1}{a}}-{\frac {1}{b}}\right|={\frac {|ad-bc|}{bd}}={\frac {1}{bd}}.}For instance,12{\displaystyle {\tfrac {1}{2}}}and35{\displaystyle {\tfrac {3}{5}}}are adjacent:1⋅5−2⋅3=−1{\displaystyle 1\cdot 5-2\cdot 3=-1}and35−12=110{\displaystyle {\tfrac {3}{5}}-{\tfrac {1}{2}}={\tfrac {1}{10}}}. However, some pairs of fractions whose difference is a unit fraction are not adjacent in this sense: for instance,13{\displaystyle {\tfrac {1}{3}}}and23{\displaystyle {\tfrac {2}{3}}}differ by a unit fraction, but are not adjacent, because for themad−bc=3{\displaystyle ad-bc=3}.[21]
This terminology comes from the study ofFord circles. These are a system of circles that are tangent to thenumber lineat a given fraction and have the squared denominator of the fraction as their diameter. Fractionsa/b{\displaystyle a/b}andc/d{\displaystyle c/d}are adjacent if and only if their Ford circles aretangent circles.[21]
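The tangency criterion can be verified exactly: the squared distance between the Ford circle centers equals the squared sum of their radii precisely when (ad − bc)^2 = 1. A sketch (function name illustrative):

```python
from fractions import Fraction

# Sketch: the Ford circle of a/b (in lowest terms) has center (a/b, 1/(2b^2))
# and radius 1/(2b^2); two Ford circles are tangent exactly when |ad - bc| = 1.
def ford_tangent(a: int, b: int, c: int, d: int) -> bool:
    """Exact tangency test: squared center distance == squared sum of radii."""
    dx = Fraction(a, b) - Fraction(c, d)
    dy = Fraction(1, 2 * b * b) - Fraction(1, 2 * d * d)
    rr = Fraction(1, 2 * b * b) + Fraction(1, 2 * d * d)
    return dx * dx + dy * dy == rr * rr

print(ford_tangent(1, 2, 3, 5))  # True:  1*5 - 2*3 = -1, adjacent fractions
print(ford_tangent(1, 3, 2, 3))  # False: 1*3 - 3*2 = -3, not adjacent
```

Algebraically, rr^2 − dy^2 = 1/(b^2 d^2) while dx^2 = (ad − bc)^2/(b^2 d^2), so the equality holds exactly for adjacent fractions.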
Inmathematics education, unit fractions are often introduced earlier than other kinds of fractions, because of the ease of explaining them visually as equal parts of a whole.[22][23]A common practical use of unit fractions is to divide food equally among a number of people, and exercises in performing this sort offair divisionare a standard classroom example in teaching students to work with unit fractions.[24]
In auniform distribution on a discrete space, all probabilities are equal unit fractions. Due to theprinciple of indifference, probabilities of this form arise frequently in statistical calculations.[25]
Unequal probabilities related to unit fractions arise inZipf's law. This states that, for many observed phenomena involving the selection of items from an ordered sequence, the probability that then{\displaystyle n}thitem is selected is proportional to the unit fraction1/n{\displaystyle 1/n}.[26]
In the study ofcombinatorial optimizationproblems,bin packingproblems involve an input sequence of items with fractional sizes, which must be placed into bins whose capacity (the total size of items placed into each bin) is one. Research into these problems has included the study of restricted bin packing problems where the item sizes are unit fractions.[27][28]
One motivation for this is as a test case for more general bin packing methods. Another involves a form ofpinwheel scheduling, in which a collection of messages of equal length must each be repeatedly broadcast on a limited number of communication channels, with each message having a maximum delay between the start times of its repeated broadcasts. An item whose delay isk{\displaystyle k}times the length of a message must occupy a fraction of at least1/k{\displaystyle 1/k}of the time slots on the channel it is assigned to, so a solution to the scheduling problem can only come from a solution to the unit fraction bin packing problem with the channels as bins and the fractions1/k{\displaystyle 1/k}as item sizes.[27]
Even for bin packing problems with arbitrary item sizes, it can be helpful to round each item size up to the next larger unit fraction, and then apply a bin packing algorithm specialized for unit fraction sizes. In particular, theharmonic bin packingmethod does exactly this, and then packs each bin using items of only a single rounded unit fraction size.[28]
The energy levels ofphotonsthat can be absorbed or emitted by a hydrogen atom are, according to theRydberg formula, proportional to the differences of two unit fractions. An explanation for this phenomenon is provided by theBohr model, according to which the energy levels ofelectron orbitalsin ahydrogen atomare inversely proportional to square unit fractions, and the energy of a photon isquantizedto the difference between two levels.[29]
Arthur Eddingtonargued that thefine-structure constantwas a unit fraction. He initially thought it to be 1/136 and later changed his theory to 1/137. This contention has been falsified, given that current estimates of the fine structure constant are (to 6 significant digits) 1/137.036.[30]
|
https://en.wikipedia.org/wiki/Unit_fraction
|
In complex analysis (a branch of mathematics), a pole is a certain type of singularity of a complex-valued function of a complex variable. It is the simplest type of non-removable singularity of such a function (see essential singularity). Technically, a point z0 is a pole of a function f if it is a zero of the function 1/f and 1/f is holomorphic (i.e. complex differentiable) in some neighbourhood of z0.
A functionfismeromorphicin anopen setUif for every pointzofUthere is a neighborhood ofzin which at least one offand1/fis holomorphic.
Iffis meromorphic inU, then a zero offis a pole of1/f, and a pole offis a zero of1/f. This induces a duality betweenzerosandpoles, that is fundamental for the study of meromorphic functions. For example, if a function is meromorphic on the wholecomplex planeplus thepoint at infinity, then the sum of themultiplicitiesof its poles equals the sum of the multiplicities of its zeros.
Afunction of a complex variablezisholomorphicin anopen domainUif it isdifferentiablewith respect tozat every point ofU. Equivalently, it is holomorphic if it isanalytic, that is, if itsTaylor seriesexists at every point ofU, and converges to the function in someneighbourhoodof the point. A function ismeromorphicinUif every point ofUhas a neighbourhood such that at least one offand1/fis holomorphic in it.
Azeroof a meromorphic functionfis a complex numberzsuch thatf(z) = 0. Apoleoffis a zero of1/f.
Iffis a function that is meromorphic in a neighbourhood of a pointz0{\displaystyle z_{0}}of thecomplex plane, then there exists an integernsuch that
is holomorphic and nonzero in a neighbourhood ofz0{\displaystyle z_{0}}(this is a consequence of the analytic property).
If n > 0, then z0{\displaystyle z_{0}} is a pole of order (or multiplicity) n of f. If n < 0, then z0{\displaystyle z_{0}} is a zero of order |n|{\displaystyle |n|} of f. Simple zero and simple pole are terms used for zeroes and poles of order |n|=1.{\displaystyle |n|=1.} Degree is sometimes used synonymously to order.
This characterization of zeros and poles implies that zeros and poles areisolated, that is, every zero or pole has a neighbourhood that does not contain any other zero and pole.
Although the order of zeros and poles is defined as a non-negative number n, the symmetry between them makes it often useful to consider a pole of order n as a zero of order −n and a zero of order n as a pole of order −n. In this case a point that is neither a pole nor a zero is viewed as a pole (or zero) of order 0.
A meromorphic function may have infinitely many zeros and poles. This is the case for thegamma function(see the image in the infobox), which is meromorphic in the whole complex plane, and has a simple pole at every non-positive integer. TheRiemann zeta functionis also meromorphic in the whole complex plane, with a single pole of order 1 atz= 1. Its zeros in the left halfplane are all the negative even integers, and theRiemann hypothesisis the conjecture that all other zeros are alongRe(z) = 1/2.
In a neighbourhood of a pointz0,{\displaystyle z_{0},}a nonzero meromorphic functionfis the sum of aLaurent serieswith at most finiteprincipal part(the terms with negative index values):
wherenis an integer, anda−n≠0.{\displaystyle a_{-n}\neq 0.}Again, ifn> 0(the sum starts witha−|n|(z−z0)−|n|{\displaystyle a_{-|n|}(z-z_{0})^{-|n|}}, the principal part hasnterms), one has a pole of ordern, and ifn≤ 0(the sum starts witha|n|(z−z0)|n|{\displaystyle a_{|n|}(z-z_{0})^{|n|}}, there is no principal part), one has a zero of order|n|{\displaystyle |n|}.
A functionz↦f(z){\displaystyle z\mapsto f(z)}ismeromorphic at infinityif it is meromorphic in some neighbourhood of infinity (that is outside somedisk), and there is an integernsuch that
exists and is a nonzero complex number.
In this case, thepoint at infinityis a pole of ordernifn> 0, and a zero of order|n|{\displaystyle |n|}ifn< 0.
For example, apolynomialof degreenhas a pole of degreenat infinity.
Thecomplex planeextended by a point at infinity is called theRiemann sphere.
Iffis a function that is meromorphic on the whole Riemann sphere, then it has a finite number of zeros and poles, and the sum of the orders of its poles equals the sum of the orders of its zeros.
Everyrational functionis meromorphic on the whole Riemann sphere, and, in this case, the sum of orders of the zeros or of the poles is the maximum of the degrees of the numerator and the denominator.
All above examples except for the third arerational functions. For a general discussion of zeros and poles of such functions, seePole–zero plot § Continuous-time systems.
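As a worked illustration of this count (an example chosen here, not taken from the article), consider the rational function

```latex
f(z) = \frac{z-1}{z^{2}(z+1)}
```

It has a simple zero at z = 1 and a zero of order 2 at infinity (since f(z) ∼ z^{−2} as z → ∞), together with a pole of order 2 at z = 0 and a simple pole at z = −1. The sum of the orders of the zeros and the sum of the orders of the poles both equal 3 = max(1, 3), the maximum of the degrees of the numerator and the denominator.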
The concept of zeros and poles extends naturally to functions on a complex curve, that is, a complex analytic manifold of dimension one (over the complex numbers). The simplest examples of such curves are the complex plane and the Riemann sphere. This extension is done by transferring structures and properties through charts, which are analytic isomorphisms.
More precisely, letfbe a function from a complex curveMto the complex numbers. This function is holomorphic (resp. meromorphic) in a neighbourhood of a pointzofMif there is a chartϕ{\displaystyle \phi }such thatf∘ϕ−1{\displaystyle f\circ \phi ^{-1}}is holomorphic (resp. meromorphic) in a neighbourhood ofϕ(z).{\displaystyle \phi (z).}Then,zis a pole or a zero of ordernif the same is true forϕ(z).{\displaystyle \phi (z).}
If the curve iscompact, and the functionfis meromorphic on the whole curve, then the number of zeros and poles is finite, and the sum of the orders of the poles equals the sum of the orders of the zeros. This is one of the basic facts that are involved inRiemann–Roch theorem.
|
https://en.wikipedia.org/wiki/Zeros_and_poles
|
In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae.
Commensurable physical quantities are of the same kind and have the same dimension, and can be directly compared to each other, even if they are expressed in differing units of measurement; e.g., metres and feet, grams and pounds, seconds and years. Incommensurable physical quantities are of different kinds and have different dimensions, and can not be directly compared to each other, no matter what units they are expressed in, e.g. metres and grams, seconds and grams, metres and seconds. For example, asking whether a gram is larger than an hour is meaningless.
Any physically meaningfulequation, orinequality,musthave the same dimensions on its left and right sides, a property known asdimensional homogeneity. Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check onderivedequations andcomputations. It also serves as a guide and constraint in deriving equations that may describe a physicalsystemin the absence of a more rigorous derivation.
The concept ofphysical dimensionorquantity dimension, and of dimensional analysis, was introduced byJoseph Fourierin 1822.[1]: 42
TheBuckingham π theoremdescribes how every physically meaningful equation involvingnvariables can be equivalently rewritten as an equation ofn−mdimensionless parameters, wheremis therankof the dimensionalmatrix. Furthermore, and most importantly, it provides a method for computing these dimensionless parameters from the given variables.
A dimensional equation can have the dimensions reduced or eliminated throughnondimensionalization, which begins with dimensional analysis, and involves scaling quantities bycharacteristic unitsof a system orphysical constantsof nature.[1]: 43This may give insight into the fundamental properties of the system, as illustrated in the examples below.
The dimension of a physical quantity can be expressed as a product of the base physical dimensions such as length, mass and time, each raised to an integer (and occasionally rational) power. The dimension of a physical quantity is more fundamental than some scale or unit used to express the amount of that physical quantity. For example, mass is a dimension, while the kilogram is a particular reference quantity chosen to express a quantity of mass. The choice of unit is arbitrary, and its choice is often based on historical precedent. Natural units, being based on only universal constants, may be thought of as being "less arbitrary".
There are many possible choices of base physical dimensions. TheSI standardselects the following dimensions and correspondingdimension symbols:
The symbols are by convention usually written in roman sans-serif typeface.[2] Mathematically, the dimension of the quantity Q is given by dim Q = T^a L^b M^c I^d Θ^e N^f J^g,
wherea,b,c,d,e,f,gare the dimensional exponents. Other physical quantities could be defined as the base quantities, as long as they form abasis– for instance, one could replace the dimension (I) ofelectric currentof the SI basis with a dimension (Q) ofelectric charge, sinceQ = TI.
A quantity whose only nonzero exponent is b (all other exponents zero) is known as a geometric quantity. A quantity whose nonzero exponents are limited to a and b is known as a kinematic quantity. A quantity whose nonzero exponents are limited to a, b, and c is known as a dynamic quantity.[3] A quantity that has all exponents null is said to have dimension one.[2]
The unit chosen to express a physical quantity and its dimension are related, but not identical concepts. The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of metres, feet, inches, miles or micrometres; but any length always has a dimension of L, no matter what units of length are chosen to express it. Two different units of the same physical quantity haveconversion factorsthat relate them. For example,1 in = 2.54 cm; in this case 2.54 cm/in is the conversion factor, which is itself dimensionless. Therefore, multiplying by that conversion factor does not change the dimensions of a physical quantity.
There are also physicists who have cast doubt on the very existence of incompatible fundamental dimensions of physical quantity,[4]although this does not invalidate the usefulness of dimensional analysis.
As examples, the dimension of the physical quantity speed v is dim v = length/time = L/T.
The dimension of the physical quantity acceleration a is dim a = speed/time = L/T^2.
The dimension of the physical quantity force F is dim F = mass × acceleration = ML/T^2.
The dimension of the physical quantity pressure P is dim P = force/area = M L^−1 T^−2.
The dimension of the physical quantity energy E is dim E = force × length = M L^2 T^−2.
The dimension of the physical quantity power P is dim P = energy/time = M L^2 T^−3.
The dimension of the physical quantity electric charge Q is dim Q = current × time = TI.
The dimension of the physical quantity voltage V is dim V = power/current = M L^2 T^−3 I^−1.
The dimension of the physical quantity capacitance C is dim C = charge/voltage = M^−1 L^−2 T^4 I^2.
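Dimension tracking of this kind is easy to mechanize: a sketch (helper names illustrative) models a dimension as a mapping from base-dimension symbols to integer exponents, so that multiplying quantities adds exponents and dividing subtracts them:

```python
# Sketch: a dimension as a dict of base-symbol -> integer exponent;
# multiplying quantities adds exponents, dividing subtracts them.
def dmul(d1, d2):
    out = dict(d1)
    for k, v in d2.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}  # drop zero exponents

def dinv(d):
    return {k: -v for k, v in d.items()}

L, M, T = {"L": 1}, {"M": 1}, {"T": 1}
speed = dmul(L, dinv(T))                     # L T^-1
force = dmul(M, dmul(L, dinv(dmul(T, T))))   # M L T^-2
energy = dmul(force, L)                      # M L^2 T^-2
print(speed)
print(energy)
```

A homogeneity check then reduces to comparing such dicts: two quantities may be added or compared only if their dimension dicts are equal.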
In dimensional analysis,Rayleigh's methodis a conceptual tool used inphysics,chemistry, andengineering. It expresses afunctional relationshipof somevariablesin the form of anexponential equation. It was named afterLord Rayleigh.
The method involves the following steps:
As a drawback, Rayleigh's method does not provide any information regarding number of dimensionless groups to be obtained as a result of dimensional analysis.
Many parameters and measurements in the physical sciences and engineering are expressed as aconcrete number—a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time, e.g. 60 kilometres per hour or 1.4 kilometres per second. Compound relations with "per" are expressed withdivision, e.g. 60 km/h. Other relations can involvemultiplication(often shown with acentered dotorjuxtaposition), powers (like m2for square metres), or combinations thereof.
A set ofbase unitsfor asystem of measurementis a conventionally chosen set of units, none of which can be expressed as a combination of the others and in terms of which all the remaining units of the system can be expressed.[5]For example, units forlengthand time are normally chosen as base units. Units forvolume, however, can be factored into the base units of length (m3), thus they are considered derived or compound units.
Sometimes the names of units obscure the fact that they are derived units. For example, anewton(N) is a unit offorce, which may be expressed as the product of mass (with unit kg) and acceleration (with unit m⋅s−2). The newton is defined as1 N = 1 kg⋅m⋅s−2.
Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions. In other words, the % sign can be read as "hundredths", since1% = 1/100.
Taking a derivative with respect to a quantity divides the dimension by the dimension of the variable that is differentiated with respect to. Thus:
Likewise, taking an integral adds the dimension of the variable one is integrating with respect to, but in the numerator.
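As a sketch of these two rules, dimensions can be represented as tuples of integer exponents over (T, L, M): differentiating with respect to a variable subtracts that variable's exponents, and integrating adds them back. The tuple encoding and helper names here are illustrative, not a standard library:

```python
# Dimensions as exponent tuples over (T, L, M); a toy sketch, not a units library.
def div_dim(a, b):
    # dimension of a quantity divided by another: subtract exponent tuples
    return tuple(x - y for x, y in zip(a, b))

def mul_dim(a, b):
    # dimension of a product: add exponent tuples
    return tuple(x + y for x, y in zip(a, b))

TIME   = (1, 0, 0)
LENGTH = (0, 1, 0)

position = LENGTH
velocity = div_dim(position, TIME)      # d(position)/dt -> (-1, 1, 0), i.e. T^-1 L
accel    = div_dim(velocity, TIME)      # (-2, 1, 0), i.e. T^-2 L
displacement = mul_dim(velocity, TIME)  # integrating velocity dt restores (0, 1, 0)
print(velocity, accel, displacement)
```

Differentiating twice with respect to time turns a length into an acceleration; integrating once undoes one division, exactly as the two rules state.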
In economics, one distinguishes between stocks and flows: a stock has a unit (say, widgets or dollars), while a flow is a derivative of a stock, and has a unit of the form of this unit divided by one of time (say, dollars/year).
In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions. For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding (dimension of currency) divided by annual GDP (dimension of currency)—but one may argue that, in comparing a stock to a flow, annual GDP should have dimensions of currency/time (dollars/year, for instance), and thus debt-to-GDP should have the unit year, which indicates that debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged.
The most basic rule of dimensional analysis is that of dimensional homogeneity.[6]
However, the dimensions form an abelian group under multiplication, so:
For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometre, as these have different dimensions, nor to add 1 hour to 1 kilometre. However, it makes sense to ask whether 1 mile is more, the same, or less than 1 kilometre, being the same dimension of physical quantity even though the units are different. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h.
The rule implies that in a physically meaningful expression only quantities of the same dimension can be added, subtracted, or compared. For example, if m_man, m_rat and L_man denote, respectively, the mass of some man, the mass of a rat and the length of that man, the dimensionally homogeneous expression m_man + m_rat is meaningful, but the heterogeneous expression m_man + L_man is meaningless. However, m_man/L_man2 is fine. Thus, dimensional analysis may be used as a sanity check of physical equations: the two sides of any equation must be commensurable or have the same dimensions.
Even when two physical quantities have identical dimensions, it may nevertheless be meaningless to compare or add them. For example, although torque and energy share the dimension T−2L2M, they are fundamentally different physical quantities.
To compare, add, or subtract quantities with the same dimensions but expressed in different units, the standard procedure is first to convert them all to the same unit. For example, to compare 32 metres with 35 yards, use1 yard = 0.9144 mto convert 35 yards to 32.004 m.
A related principle is that any physical law that accurately describes the real world must be independent of the units used to measure the physical variables.[7] For example, Newton's laws of motion must hold true whether distance is measured in miles or kilometres. This principle gives rise to the form that a conversion factor between two units that measure the same dimension must take: multiplication by a simple constant. It also ensures equivalence; for example, if two buildings are the same height in feet, then they must be the same height in metres.
In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a conversion factor. For example, kPa and bar are both units of pressure, and 100 kPa = 1 bar. The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to 100 kPa / 1 bar = 1. Since any quantity can be multiplied by 1 without changing it, the expression "100 kPa / 1 bar" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including the unit. For example, 5 bar × 100 kPa / 1 bar = 500 kPa because 5 × 100 / 1 = 500, and bar/bar cancels out, so 5 bar = 500 kPa.
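The cancellation above amounts to ordinary multiplication once the conversion factor is written as a number whose units are tracked by hand. A minimal sketch (the function name is made up for illustration):

```python
# A conversion factor is a dimensionless ratio equal to 1:
# 100 kPa = 1 bar, so multiplying by (100 kPa / 1 bar) changes
# only the unit, never the quantity.
KPA_PER_BAR = 100.0  # numeric part of the factor 100 kPa / 1 bar

def bar_to_kpa(p_bar):
    # bar * (kPa/bar) -> kPa; the "bar" cancels, as in 5 bar = 500 kPa
    return p_bar * KPA_PER_BAR

print(bar_to_kpa(5.0))  # 500.0
```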
Dimensional analysis is most often used in physics and chemistry – and in the mathematics thereof – but finds some applications outside of those fields as well.
A simple application of dimensional analysis to mathematics is in computing the form of the volume of an n-ball (the solid ball in n dimensions), or the area of its surface, the n-sphere: being an n-dimensional figure, the volume scales as xn, while the surface area, being (n − 1)-dimensional, scales as xn−1. Thus the volume of the n-ball in terms of the radius is Cnrn, for some constant Cn. Determining the constant takes more involved mathematics, but the form can be deduced and checked by dimensional analysis alone.
In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios.
In fluid mechanics, dimensional analysis is performed to obtain dimensionless pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable pi terms or groups, it is possible to develop a similar set of pi terms for a model that has the same dimensional relationships.[8] In other words, pi terms provide a shortcut to developing a model representing a certain prototype. Common dimensionless groups in fluid mechanics include:
The origins of dimensional analysis have been disputed by historians.[9][10] The first written application of dimensional analysis has been credited to François Daviet, a student of Joseph-Louis Lagrange, in a 1799 article at the Turin Academy of Science.[10]
This led to the conclusion that meaningful laws must be homogeneous equations in their various units of measurement, a result which was eventually formalized in the Buckingham π theorem. Siméon Poisson also treated the same problem of the parallelogram law as Daviet, in his treatises of 1811 and 1833 (vol. I, p. 39).[11] In the second edition of 1833, Poisson explicitly introduces the term dimension instead of Daviet's homogeneity.
In 1822, the important Napoleonic scientist Joseph Fourier made the first credited important contributions[12] based on the idea that physical laws like F = ma should be independent of the units employed to measure the physical variables.
James Clerk Maxwell played a major role in establishing modern use of dimensional analysis by distinguishing mass, length, and time as fundamental units, while referring to other units as derived.[13] Although Maxwell defined length, time and mass to be "the three fundamental units", he also noted that gravitational mass can be derived from length and time by assuming a form of Newton's law of universal gravitation in which the gravitational constant G is taken as unity, thereby defining M = T−2L3.[14] By assuming a form of Coulomb's law in which the Coulomb constant ke is taken as unity, Maxwell then determined that the dimensions of an electrostatic unit of charge were Q = T−1L3/2M1/2,[15] which, after substituting his M = T−2L3 equation for mass, results in charge having the same dimensions as mass, viz. Q = T−2L3.
Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue.[16] Rayleigh first published the technique in his 1877 book The Theory of Sound.[17]
The original meaning of the word dimension, in Fourier's Théorie de la Chaleur, was the numerical value of the exponents of the base units. For example, acceleration was considered to have the dimension 1 with respect to the unit of length, and the dimension −2 with respect to the unit of time.[18] This was slightly changed by Maxwell, who said the dimensions of acceleration are T−2L, instead of just the exponents.[19]
What is the period of oscillation T of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? That period is the solution for T of some dimensionless equation in the variables T, m, k, and g.
The four quantities have the following dimensions: T [T]; m [M]; k [M/T2]; and g [L/T2]. From these we can form only one dimensionless product of powers of our chosen variables, G1 = T2k/m [T2 · M/T2 / M = 1], and putting G1 = C for some dimensionless constant C gives the dimensionless equation sought. The dimensionless product of powers of variables is sometimes referred to as a dimensionless group of variables; here the term "group" means "collection" rather than mathematical group. They are often called dimensionless numbers as well.
The variable g does not occur in the group. It is easy to see that it is impossible to form a dimensionless product of powers that combines g with k, m, and T, because g is the only quantity that involves the dimension L. This implies that in this problem g is irrelevant. Dimensional analysis can sometimes yield strong statements about the irrelevance of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of g: it is the same on the earth or the moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way: T=κmk{\displaystyle T=\kappa {\sqrt {\tfrac {m}{k}}}}, for some dimensionless constant κ (equal to C{\displaystyle {\sqrt {C}}} from the original dimensionless equation).
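The claim that no dimensionless group can involve g can be checked mechanically by searching small integer exponents for products of powers of T, m, k and g whose dimensions cancel. A brute-force sketch (the exponent range is an arbitrary choice for illustration):

```python
from itertools import product

# Dimensions as (time, mass, length) exponent tuples
dims = {"T": (1, 0, 0), "m": (0, 1, 0), "k": (-2, 1, 0), "g": (-2, 0, 1)}
names = ["T", "m", "k", "g"]

def combined(exps):
    # total dimension of the product T^a m^b k^c g^d
    return tuple(sum(e * dims[n][i] for e, n in zip(exps, names))
                 for i in range(3))

# search exponents in {-2, ..., 2} for nonzero dimensionless combinations
groups = [e for e in product(range(-2, 3), repeat=4)
          if any(e) and combined(e) == (0, 0, 0)]

# g is the only variable carrying L, so its exponent is zero in every group,
# and (2, -1, 1, 0), i.e. T^2 k / m, is among the solutions:
print(all(e[3] == 0 for e in groups))
```

Every dimensionless combination found is a power of G1 = T2k/m, confirming both the single group and the irrelevance of g.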
When faced with a case where dimensional analysis rejects a variable (g, here) that one intuitively expects to belong in a physical description of the situation, another possibility is that the rejected variable is in fact relevant, but that some other relevant variable has been omitted, which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here.
When dimensional analysis yields only one dimensionless group, as here, there are no unknown functions, and the solution is said to be "complete" – although it still may involve unknown dimensionless constants, such as κ.
Consider the case of a vibrating wire of length ℓ (L) vibrating with an amplitude A (L). The wire has a linear density ρ (M/L) and is under tension s (LM/T2), and we want to know the energy E (L2M/T2) in the wire. Let π1 and π2 be two dimensionless products of powers of the variables chosen, given by
The linear density of the wire is not involved. The two groups found can be combined into an equivalent form as an equation
where F is some unknown function, or, equivalently as
where f is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analytical analysis, we might proceed to experiments to discover the form of the unknown function f. But our experiments are simpler than in the absence of dimensional analysis. None would be needed to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to ℓ, and so infer that E = ℓs. The power of dimensional analysis as an aid to experiment and forming hypotheses becomes evident.
The power of dimensional analysis really becomes apparent when it is applied to situations, unlike those given above, that are more complicated: where the set of variables involved is not apparent and the underlying equations are hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually raise the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood. In such cases, the answer may depend on a dimensionless number such as the Reynolds number, which may be interpreted by dimensional analysis.
Consider the case of a thin, solid, parallel-sided rotating disc of axial thickness t (L) and radius R (L). The disc has a density ρ (M/L3), rotates at an angular velocity ω (T−1), and this leads to a stress S (T−2L−1M) in the material. There is a theoretical linear elastic solution, given by Lamé, to this problem when the disc is thin relative to its radius, the faces of the disc are free to move axially, and the plane stress constitutive relations can be assumed to be valid. As the disc becomes thicker relative to the radius, the plane stress solution breaks down. If the disc is restrained axially on its free faces then a state of plane strain will occur. However, if this is not the case then the state of stress may only be determined through consideration of three-dimensional elasticity, and there is no known theoretical solution for this case. An engineer might, therefore, be interested in establishing a relationship between the five variables. Dimensional analysis for this case leads to the following (5 − 3 = 2) non-dimensional groups:
Through the use of numerical experiments using, for example, the finite element method, the nature of the relationship between the two non-dimensional groups can be obtained as shown in the figure. As this problem only involves two non-dimensional groups, the complete picture is provided in a single plot and this can be used as a design/assessment chart for rotating discs.[20]
The dimensions that can be formed from a given collection of basic physical dimensions, such as T, L, and M, form an abelian group: The identity is written as 1;[citation needed] L0 = 1, and the inverse of L is 1/L or L−1. L raised to any integer power p is a member of the group, having an inverse of L−p or 1/Lp. The operation of the group is multiplication, having the usual rules for handling exponents (Ln × Lm = Ln+m). Physically, 1/L can be interpreted as reciprocal length, and 1/T as reciprocal time (see reciprocal second).
An abelian group is equivalent to a module over the integers, with the dimensional symbol TiLjMk corresponding to the tuple (i, j, k). When physical measured quantities (be they like-dimensioned or unlike-dimensioned) are multiplied or divided by one another, their dimensional units are likewise multiplied or divided; this corresponds to addition or subtraction in the module. When measurable quantities are raised to an integer power, the same is done to the dimensional symbols attached to those quantities; this corresponds to scalar multiplication in the module.
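The module structure can be sketched directly: a dimensional symbol TiLjMk becomes the tuple (i, j, k), multiplying quantities adds tuples, and raising to an integer power scales them. A toy illustration (helper names are made up):

```python
# T^i L^j M^k as the tuple (i, j, k): a module over the integers.
def dmul(a, b):
    # multiplying quantities adds their exponent tuples
    return tuple(x + y for x, y in zip(a, b))

def dpow(a, p):
    # raising to an integer power is scalar multiplication
    return tuple(p * x for x in a)

T, L, M = (1, 0, 0), (0, 1, 0), (0, 0, 1)
IDENT = (0, 0, 0)  # the group identity: dimensionless

force  = dmul(M, dmul(L, dpow(T, -2)))  # T^-2 L M
energy = dmul(force, L)                 # T^-2 L^2 M

# every element has an inverse, as required of an abelian group
assert dmul(energy, dpow(energy, -1)) == IDENT
print(force, energy)
```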
A basis for such a module of dimensional symbols is called a set of base quantities, and all other vectors are called derived units. As in any module, one may choose different bases, which yields different systems of units (e.g., choosing whether the unit for charge is derived from the unit for current, or vice versa).
The group identity, the dimension of dimensionless quantities, corresponds to the origin in this module,(0, 0, 0).
In certain cases, one can define fractional dimensions, specifically by formally defining fractional powers of one-dimensional vector spaces, like VL1/2.[21] However, it is not possible to take arbitrary fractional powers of units, due to representation-theoretic obstructions.[22]
One can work with vector spaces with given dimensions without needing to use units (corresponding to coordinate systems of the vector spaces). For example, given dimensions M and L, one has the vector spaces VM and VL, and can define VML := VM ⊗ VL as the tensor product. Similarly, the dual space can be interpreted as having "negative" dimensions.[23] This corresponds to the fact that under the natural pairing between a vector space and its dual, the dimensions cancel, leaving a dimensionless scalar.
The set of units of the physical quantities involved in a problem corresponds to a set of vectors (or a matrix). The nullity describes some number (e.g., m) of ways in which these vectors can be combined to produce a zero vector. These correspond to producing (from the measurements) a number of dimensionless quantities, {π1, ..., πm}. (In fact these ways completely span the null subspace of another different space, of powers of the measurements.) Every possible way of multiplying (and exponentiating) together the measured quantities to produce something with the same unit as some derived quantity X can be expressed in the general form
Consequently, every possible commensurate equation for the physics of the system can be rewritten in the form
Knowing this restriction can be a powerful tool for obtaining new insight into the system.
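The nullity computation can be illustrated with the vibrating-wire example above: writing the dimensions of E, ℓ, A, ρ and s as columns of exponents of T, L and M gives a matrix of rank 3, so there are 5 − 3 = 2 independent dimensionless groups. A self-contained sketch using exact rational row reduction:

```python
from fractions import Fraction

# Rank of a dimensional matrix by Gaussian elimination; the nullity
# (columns minus rank) counts the independent dimensionless pi groups.
def rank(rows):
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Columns: E (T^-2 L^2 M), l (L), A (L), rho (L^-1 M), s (T^-2 L M)
dim_matrix = [
    [-2, 0, 0,  0, -2],   # exponents of T
    [ 2, 1, 1, -1,  1],   # exponents of L
    [ 1, 0, 0,  1,  1],   # exponents of M
]
n_vars = 5
print(n_vars - rank(dim_matrix))  # 2 independent dimensionless groups
```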
The dimension of physical quantities of interest in mechanics can be expressed in terms of base dimensions T, L, and M – these form a 3-dimensional vector space. This is not the only valid choice of base dimensions, but it is the one most commonly used. For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M; this corresponds to a different basis, and one may convert between these representations by a change of basis. The choice of the base set of dimensions is thus a convention, with the benefit of increased utility and familiarity. The choice of base dimensions is not entirely arbitrary, because they must form a basis: they must span the space, and be linearly independent.
For example, F, L, M form a set of fundamental dimensions because they form a basis that is equivalent to T, L, M: the former can be expressed as [F = LM/T2], L, M, while the latter can be expressed as [T = (LM/F)1/2], L, M.
On the other hand, length, velocity and time (T, L, V) do not form a set of base dimensions for mechanics, for two reasons:
Depending on the field of physics, it may be advantageous to choose one or another extended set of dimensional symbols. In electromagnetism, for example, it may be useful to use dimensions of T, L, M and Q, where Q represents the dimension of electric charge. In thermodynamics, the base set of dimensions is often extended to include a dimension for temperature, Θ. In chemistry, the amount of substance (the number of molecules divided by the Avogadro constant, ≈ 6.02×1023 mol−1) is also defined as a base dimension, N.
In the interaction of relativistic plasma with strong laser pulses, a dimensionless relativistic similarity parameter, connected with the symmetry properties of the collisionless Vlasov equation, is constructed from the plasma-, electron- and critical-densities in addition to the electromagnetic vector potential. The choice of the dimensions or even the number of dimensions to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communication are common and necessary features.
Bridgman's theorem restricts the type of function that can be used to define a physical quantity from general (dimensionally compounded) quantities to only products of powers of the quantities, unless some of the independent quantities are algebraically combined to yield dimensionless groups, whose functions are grouped together in the dimensionless numeric multiplying factor.[24][25]This excludes polynomials of more than one term or transcendental functions not of that form.
Scalar arguments to transcendental functions such as exponential, trigonometric and logarithmic functions, or to inhomogeneous polynomials, must be dimensionless quantities. (Note: this requirement is somewhat relaxed in Siano's orientational analysis described below, in which the squares of certain dimensioned quantities are dimensionless.)
While most mathematical identities about dimensionless numbers translate in a straightforward manner to dimensional quantities, care must be taken with logarithms of ratios: the identity log(a/b) = log a − log b, where the logarithm is taken in any base, holds for dimensionless numbers a and b, but it does not hold if a and b are dimensional, because in this case the left-hand side is well-defined but the right-hand side is not.[26]
Similarly, while one can evaluate monomials (xn) of dimensional quantities, one cannot evaluate polynomials of mixed degree with dimensionless coefficients on dimensional quantities: for x2, the expression (3 m)2 = 9 m2 makes sense (as an area), while for x2 + x, the expression (3 m)2 + 3 m = 9 m2 + 3 m does not make sense.
However, polynomials of mixed degree can make sense if the coefficients are suitably chosen physical quantities that are not dimensionless. For example,
This is the height to which an object rises in time t if the acceleration of gravity is 9.8 metres per second per second and the initial upward speed is 500 metres per second. It is not necessary for t to be in seconds. For example, suppose t = 0.01 minutes. Then the first term would be
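Taking the polynomial described above to be y = v0t − (g/2)t2 with the stated values (this reconstruction of the formula is an assumption from the description, not quoted from the text), the evaluation with t given in minutes can be sketched as:

```python
# Evaluating y = v0*t - (g/2)*t^2 with dimensional coefficients:
# each term has dimension L, so t may be supplied in minutes
# as long as it is converted to seconds first.
g  = 9.8      # m/s^2
v0 = 500.0    # m/s
SECONDS_PER_MINUTE = 60.0

def height(t_seconds):
    # (m/s)(s) + (m/s^2)(s^2): both terms come out in metres
    return v0 * t_seconds - 0.5 * g * t_seconds**2

t = 0.01 * SECONDS_PER_MINUTE   # 0.01 minutes = 0.6 s
print(height(t))                # metres
```

With t = 0.6 s this gives 500 × 0.6 − 4.9 × 0.36 = 298.236 m; feeding in the raw number 0.01 instead would silently mix units and give a wrong answer.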
The value of a dimensional physical quantity Z is written as the product of a unit [Z] within the dimension and a dimensionless numerical value or numerical factor, n.[27]
When like-dimensioned quantities are added or subtracted or compared, it is convenient to express them in the same unit so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 metre added to 1 foot is a length, but one cannot derive that length by simply adding 1 and 1. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed:
The factor 0.3048 m/ft is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing. Then when adding two quantities of like dimension, but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to the same unit so that their numerical values can be added or subtracted.
Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units.
A quantity equation, also sometimes called a complete equation, is an equation that remains valid independently of the unit of measurement used when expressing the physical quantities.[28]
In contrast, in a numerical-value equation, just the numerical values of the quantities occur, without units. Therefore, it is only valid when each numerical value is referenced to a specific unit.
For example, a quantity equation for displacement d as speed s multiplied by time difference t would be:
for s = 5 m/s, where t and d may be expressed in any units, converted if necessary.
In contrast, a corresponding numerical-value equation would be:
where T is the numeric value of t when expressed in seconds and D is the numeric value of d when expressed in metres.
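The contrast can be sketched in code: the quantity equation accepts t in any unit once converted, while the numerical-value equation presupposes seconds and metres. The helper names and unit table here are illustrative:

```python
# Quantity equation d = s * t versus numerical-value equation D = 5 T
# (for s = 5 m/s): the first holds in any units after conversion,
# the second only when T is in seconds and D in metres.
S_M_PER_S = 5.0

def d_metres(t, unit="s"):
    # quantity equation: convert t to seconds, then d = s * t
    t_s = t * {"s": 1.0, "min": 60.0, "h": 3600.0}[unit]
    return S_M_PER_S * t_s

def D_numeric(T):
    # numerical-value equation: T must already be a number of seconds
    return 5.0 * T

print(d_metres(2, "min"), D_numeric(120))  # both give 600.0 (metres)
```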
Generally, the use of numerical-value equations is discouraged.[28]
The dimensionless constants that arise in the results obtained, such as the C in the Poiseuille's Law problem and the κ in the spring problems discussed above, come from a more detailed analysis of the underlying physics and often arise from integrating some differential equation. Dimensional analysis itself has little to say about these constants, but it is useful to know that they very often have a magnitude of order unity. This observation can allow one to sometimes make "back of the envelope" calculations about the phenomenon of interest, and therefore be able to more efficiently design experiments to measure it, or to judge whether it is important, etc.
Paradoxically, dimensional analysis can be a useful tool even if all the parameters in the underlying theory are dimensionless, e.g., lattice models such as the Ising model can be used to study phase transitions and critical phenomena. Such models can be formulated in a purely dimensionless way. As we approach the critical point closer and closer, the distance over which the variables in the lattice model are correlated (the so-called correlation length, χ) becomes larger and larger. Now, the correlation length is the relevant length scale related to critical phenomena, so one can, e.g., surmise on "dimensional grounds" that the non-analytical part of the free energy per lattice site should be ~ 1/χd, where d is the dimension of the lattice.
It has been argued by some physicists, e.g., Michael J. Duff,[4][29] that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to Length, Time and Mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics, there was no way to relate mass, length, and time to each other. The three independent dimensionful constants c, ħ, and G, in the fundamental equations of physics must then be seen as mere conversion factors to convert Mass, Time and Length into each other.
Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants ħ, c, and G (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit c → ∞, ħ → 0 and G → 0. In problems involving a gravitational field the latter limit should be taken such that the field stays finite.
Following are tables of commonly occurring expressions in physics, related to the dimensions of energy, momentum, and force.[30][31][32]
Energy: T−2L2M
Momentum: T−1LM
Force: T−2LM
Dimensional correctness as part of type checking has been studied since 1977.[33] Implementations for Ada[34] and C++[35] were described in 1985 and 1988.
Kennedy's 1996 thesis describes an implementation in Standard ML,[36] and later in F#.[37] There are implementations for Haskell,[38] OCaml,[39] Rust,[40] and Python,[41] and a code checker for Fortran.[42][43] Griffioen's 2019 thesis extended Kennedy's Hindley–Milner type system to support Hart's matrices.[44][45] McBride and Nordvall-Forsberg show how to use dependent types to extend type systems for units of measure.[46]
Mathematica 13.2 has a function for transformations with quantities named NondimensionalizationTransform that applies a nondimensionalization transform to an equation.[47] Mathematica also has a function to find the dimensions of a unit such as 1 J named UnitDimensions.[48] Mathematica also has a function that will find dimensionally equivalent combinations of a subset of physical quantities named DimensionalCombinations.[49] Mathematica can also factor out certain dimensions with UnitDimensions by specifying an argument to the function UnityDimensions.[50] For example, you can use UnityDimensions to factor out angles.[50] In addition to UnitDimensions, Mathematica can find the dimensions of a QuantityVariable with the function QuantityVariableDimensions.[51]
Some discussions of dimensional analysis implicitly describe all quantities as mathematical vectors. In mathematics scalars are considered a special case of vectors;[citation needed] vectors can be added to or subtracted from other vectors, and, inter alia, multiplied or divided by scalars. If a vector is used to define a position, this assumes an implicit point of reference: an origin. While this is useful and often perfectly adequate, allowing many important errors to be caught, it can fail to model certain aspects of physics. A more rigorous approach requires distinguishing between position and displacement (or moment in time versus duration, or absolute temperature versus temperature change).
Consider points on a line, each with a position with respect to a given origin, and distances among them. Positions and displacements all have units of length, but their meaning is not interchangeable:
This illustrates the subtle distinction between affine quantities (ones modeled by an affine space, such as position) and vector quantities (ones modeled by a vector space, such as displacement).
Properly then, positions have dimension of affine length, while displacements have dimension of vector length. To assign a number to an affine unit, one must not only choose a unit of measurement, but also a point of reference, while to assign a number to a vector unit only requires a unit of measurement.
Thus some physical quantities are better modeled by vectorial quantities while others tend to require affine representation, and the distinction is reflected in their dimensional analysis.
This distinction is particularly important in the case of temperature, for which the numeric value of absolute zero is not the origin 0 in some scales. For absolute zero,
where the symbol ≘ means corresponds to, since although these values on the respective temperature scales correspond, they represent distinct quantities in the same way that the distances from distinct starting points to the same end point are distinct quantities, and cannot in general be equated.
For temperature differences,
(Here °R refers to the Rankine scale, not the Réaumur scale).
Unit conversion for temperature differences is simply a matter of multiplying by, e.g., 1 °F / 1 K (although the ratio is not unity). But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to −272.15 °C, or the temperature difference equal to 1 °C.
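The affine/vector distinction shows up directly in conversion code: converting an absolute temperature needs an offset as well as a scale factor, while converting a temperature difference needs only the factor. A sketch for Fahrenheit and Celsius (function names are illustrative):

```python
# Affine scales: absolute temperatures carry a point of reference,
# so conversion needs an offset; differences need only the scale factor.
def f_to_c_absolute(temp_f):
    # shift to the common reference point, then rescale
    return (temp_f - 32.0) * 5.0 / 9.0

def f_to_c_difference(delta_f):
    # a difference has no reference point: rescale only
    return delta_f * 5.0 / 9.0

print(f_to_c_absolute(212.0))    # 100.0: boiling point of water
print(f_to_c_difference(9.0))    # 5.0: a 9 F change is a 5 C change
```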
Similar to the issue of a point of reference is the issue of orientation: a displacement in 2 or 3 dimensions is not just a length, but is a length together with a direction. (In 1 dimension, this issue is equivalent to the distinction between positive and negative.) Thus, to compare or combine two dimensional quantities in multi-dimensional Euclidean space, one also needs a bearing: they need to be compared to a frame of reference.
This leads to the extensions discussed below, namely Huntley's directed dimensions and Siano's orientational analysis.
Huntley has pointed out that a dimensional analysis can become more powerful by discovering new independent dimensions in the quantities under consideration, thus increasing the rankm{\displaystyle m}of the dimensional matrix.[52]
He introduced two approaches:
As an example of the usefulness of the first approach, suppose we wish to calculate the distance a cannonball travels when fired with a vertical velocity component vy{\displaystyle v_{\text{y}}} and a horizontal velocity component vx{\displaystyle v_{\text{x}}}, assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then R, the distance travelled, with dimension L; vx{\displaystyle v_{\text{x}}} and vy{\displaystyle v_{\text{y}}}, both dimensioned as T−1L; and g, the downward acceleration of gravity, with dimension T−2L.
With these four quantities, we may conclude that the equation for the range R may be written:
Or dimensionally
from which we may deduce thata+b+c=1{\displaystyle a+b+c=1}anda+b+2c=0{\displaystyle a+b+2c=0}, which leaves one exponent undetermined. This is to be expected since we have two fundamental dimensions T and L, and four parameters, with one equation.
However, if we use directed length dimensions, then vx{\displaystyle v_{\mathrm {x} }} will be dimensioned as T−1Lx, vy{\displaystyle v_{\mathrm {y} }} as T−1Ly, R as Lx and g as T−2Ly. The dimensional equation becomes:
and we may solve completely as a = 1, b = 1 and c = −1. The increase in deductive power gained by the use of directed length dimensions is apparent.
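With directed lengths the exponent bookkeeping becomes a fully determined 3 × 3 linear system: one equation each for Lx, Ly and T. A sketch solving it with exact arithmetic:

```python
from fractions import Fraction

# Directed-length exponent equations for R = vx^a vy^b g^c:
#   Lx: a = 1;   Ly: b + c = 0;   T: -a - b - 2c = 0
A   = [[1, 0, 0], [0, 1, 1], [-1, -1, -2]]
rhs = [1, 0, 0]

def solve(A, b):
    # Gauss-Jordan elimination over exact fractions
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        for i in range(n):
            if i != c and M[i][c] != 0:
                f = M[i][c] / M[c][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

a, b, c = solve(A, rhs)
print(a, b, c)  # 1 1 -1, i.e. R is proportional to vx * vy / g
```

The undirected analysis left one exponent free; distinguishing Lx from Ly adds the third equation that pins down a = 1, b = 1, c = −1.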
Huntley's concept of directed length dimensions, however, has some serious limitations:
It also is often quite difficult to assign the L, Lx, Ly, Lz symbols to the physical variables involved in the problem of interest. He invokes a procedure that involves the "symmetry" of the physical problem. This is often very difficult to apply reliably: it is unclear to which parts of the problem the notion of "symmetry" is being applied. Is it the symmetry of the physical body that forces are acting upon, or of the points, lines or areas at which forces are being applied? What if more than one body is involved with different symmetries?
Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems.
In Huntley's second approach, he holds that it is sometimes useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia (inertial mass), and mass as a measure of the quantity of matter.Quantity of matteris defined by Huntley as a quantity onlyproportionalto inertial mass, while not implicating inertial properties. No further restrictions are added to its definition.
For example, consider the derivation ofPoiseuille's Law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe. Without drawing distinctions between inertial and substantial mass, we may choose as the relevant variables:
There are three fundamental dimensions, so the above five quantities will yield two independent dimensionless variables:
If we distinguish between inertial mass with dimensionMi{\displaystyle M_{\text{i}}}and quantity of matter with dimensionMm{\displaystyle M_{\text{m}}}, then mass flow rate and density will use quantity of matter as the mass parameter, while the pressure gradient and coefficient of viscosity will use inertial mass. We now have four fundamental parameters, and one dimensionless constant, so that the dimensional equation may be written:
where now onlyCis an undetermined constant (found to be equal toπ/8{\displaystyle \pi /8}by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yieldPoiseuille's law.
Huntley's recognition of quantity of matter as an independent quantity dimension is evidently successful in the problems where it is applicable, but his definition of quantity of matter is open to interpretation, as it lacks specificity beyond the two requirements he postulated for it. For a given substance, the SI dimensionamount of substance, with unitmole, does satisfy Huntley's two requirements as a measure of quantity of matter, and could be used as a quantity of matter in any problem of dimensional analysis where Huntley's concept is applicable.
Anglesare, by convention, considered to be dimensionless quantities (although the wisdom of this is contested[53]). As an example, consider again the projectile problem in which a point mass is launched from the origin(x,y) = (0, 0)at a speedvand angleθabove thex-axis, with the force of gravity directed along the negativey-axis. It is desired to find the rangeR, at which point the mass returns to thex-axis. Conventional analysis will yield the dimensionless variableπ=Rg/v2, but offers no insight into the relationship betweenRandθ.
Siano has suggested that the directed dimensions of Huntley be replaced by usingorientational symbols1x1y1zto denote vector directions, and an orientationless symbol 10.[54]Thus, Huntley's Lxbecomes L1xwith L specifying the dimension of length, and1xspecifying the orientation. Siano further shows that the orientational symbols have an algebra of their own. Along with the requirement that1i−1= 1i, the following multiplication table for the orientation symbols results:
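That multiplication table can be encoded compactly: mapping 1_0, 1_x, 1_y, 1_z to the numbers 0 to 3 turns the orientational product into bitwise XOR (a standard encoding of the Klein four-group; the numeric labels are an illustrative choice, not Siano's notation):

```python
# Siano's orientational symbols 1_0, 1_x, 1_y, 1_z form the Klein four-group;
# encoding them as 0..3 makes their product a bitwise XOR.
SYMBOLS = {0: "1_0", 1: "1_x", 2: "1_y", 3: "1_z"}

def omul(a, b):
    """Product of two orientational symbols under the Klein four-group."""
    return a ^ b

# Each symbol is its own inverse: 1_i * 1_i = 1_0.
assert all(omul(s, s) == 0 for s in SYMBOLS)
# The product of two distinct axis symbols is the third: 1_x * 1_y = 1_z.
assert omul(1, 2) == 3
# An angle in the xy-plane has orientation 1_y / 1_x = 1_y * 1_x = 1_z.
assert omul(2, 1) == 3
```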
The orientational symbols form a group (theKlein four-groupor "Viergruppe"). In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem". Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation of1z. For angles, consider an angleθthat lies in the z-plane. Form a right triangle in the z-plane withθbeing one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation1xand the side opposite has an orientation1y. Since (using~to indicate orientational equivalence)tan(θ) =θ+ ... ~ 1y/1xwe conclude that an angle in the xy-plane must have an orientation1y/1x= 1z, which is not unreasonable. Analogous reasoning forces the conclusion thatsin(θ)has orientation1zwhilecos(θ)has orientation 10. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the formacos(θ) +bsin(θ), whereaandbare real scalars. An expression such assin(θ+π/2)=cos(θ){\displaystyle \sin(\theta +\pi /2)=\cos(\theta )}is not dimensionally inconsistent since it is a special case of the sum of angles formula and should properly be written:
which fora=θ{\displaystyle a=\theta }andb=π/2{\displaystyle b=\pi /2}yieldssin(θ1z+[π/2]1z)=1zcos(θ1z){\displaystyle \sin(\theta \,1_{\text{z}}+[\pi /2]\,1_{\text{z}})=1_{\text{z}}\cos(\theta \,1_{\text{z}})}. Siano distinguishes between geometric angles, which have an orientation in 3-dimensional space, and phase angles associated with time-based oscillations, which have no spatial orientation, i.e. the orientation of a phase angle is10{\displaystyle 1_{0}}.
The assignment of orientational symbols to physical quantities and the requirement that physical equations be orientationally homogeneous can actually be used in a way that is similar to dimensional analysis to derive more information about acceptable solutions of physical problems. In this approach, one solves the dimensional equation as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution are raised to a power such that all exponents are integral, putting it intonormal form. The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols. The solution is then more complete than the one that dimensional analysis alone gives. Often, the added information is that one of the powers of a certain variable is even or odd.
As an example, for the projectile problem, using orientational symbols,θ, being in the xy-plane will thus have dimension1zand the range of the projectileRwill be of the form:
Dimensional homogeneity will now correctly yielda= −1andb= 2, and orientational homogeneity requires that1x/(1ya1zc)=1zc+1=1{\displaystyle 1_{x}/(1_{y}^{a}1_{z}^{c})=1_{z}^{c+1}=1}. In other words,cmust be an odd integer. In fact, the required function ofθwill besin(θ)cos(θ), which is a series consisting of odd powers ofθ.
It is seen that the Taylor series ofsin(θ)andcos(θ)are orientationally homogeneous using the above multiplication table, while expressions likecos(θ) + sin(θ)andexp(θ)are not, and are (correctly) deemed unphysical.
Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, theradianmay still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis.
|
https://en.wikipedia.org/wiki/Dimensional_analysis
|
Amultiplication algorithmis analgorithm(or method) tomultiplytwo numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Numerous algorithms are known and there has been much research into the topic.
The oldest and simplest method, known sinceantiquityaslong multiplicationorgrade-school multiplication, consists of multiplying every digit in the first number by every digit in the second and adding the results. This has atime complexityofO(n2){\displaystyle O(n^{2})}, wherenis the number of digits. When done by hand, this may also be reframed asgrid method multiplicationorlattice multiplication. In software, this may be called "shift and add" due tobitshiftsand addition being the only two operations needed.
In 1960,Anatoly KaratsubadiscoveredKaratsuba multiplication, unleashing a flood of research into fast multiplication algorithms. This method uses three multiplications rather than four to multiply two two-digit numbers. (A variant of this can also be used to multiplycomplex numbersquickly.) Donerecursively, this has a time complexity ofO(nlog23){\displaystyle O(n^{\log _{2}3})}. Splitting numbers into more than two parts results inToom-Cook multiplication; for example, using three parts results in theToom-3algorithm. Using many parts can set the exponent arbitrarily close to 1, but the constant factor also grows, making it impractical.
In 1968, theSchönhage-Strassen algorithm, which makes use of aFourier transformover amodulus, was discovered. It has a time complexity ofO(nlognloglogn){\displaystyle O(n\log n\log \log n)}. In 2007,Martin Fürerproposed an algorithm with complexityO(nlogn2Θ(log∗n)){\displaystyle O(n\log n2^{\Theta (\log ^{*}n)})}. In 2014, Harvey,Joris van der Hoeven, and Lecerf proposed one with complexityO(nlogn23log∗n){\displaystyle O(n\log n2^{3\log ^{*}n})}, thus making theimplicit constantexplicit; this was improved toO(nlogn22log∗n){\displaystyle O(n\log n2^{2\log ^{*}n})}in 2018. Lastly, in 2019, Harvey and van der Hoeven came up with agalactic algorithmwith complexityO(nlogn){\displaystyle O(n\log n)}. This matches a guess by Schönhage and Strassen that this would be the optimal bound, although this remains aconjecturetoday.
Integer multiplication algorithms can also be used to multiply polynomials by means of the method ofKronecker substitution.
If a positional numeral system is used, a natural way of multiplying numbers is taught in schools as long multiplication, sometimes called grade-school multiplication, sometimes called the Standard Algorithm: multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted results. It requires memorization of the multiplication table for single digits.
This is the usual algorithm for multiplying larger numbers by hand in base 10. A person doing long multiplication on paper will write down all the products and then add them together; anabacus-user will sum the products as soon as each one is computed.
This example useslong multiplicationto multiply 23,958,233 (multiplicand) by 5,830 (multiplier) and arrives at 139,676,498,390 for the result (product).
In some countries such asGermany, the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:[1]
The pseudocode below describes the process of the above multiplication. It keeps only one row to maintain the sum, which finally becomes the result. Note that the '+=' operator is used to denote sum to existing value and store operation (akin to languages such as Java and C) for compactness.
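A minimal executable version of that process (the digit-list representation and helper names here are illustrative, not from the original pseudocode):

```python
def long_multiply(a, b, base=10):
    """Long multiplication on digit lists (least-significant digit first),
    accumulating into a single running result row."""
    product = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            total = product[i + j] + carry + da * db
            carry, product[i + j] = divmod(total, base)
        product[i + len(b)] += carry
    return product

def to_digits(n):          # base-10 digits, least-significant first
    return [int(d) for d in str(n)[::-1]]

def from_digits(ds):       # inverse of to_digits (base 10)
    return sum(d * 10**i for i, d in enumerate(ds))

# The article's running example: 23,958,233 x 5,830 = 139,676,498,390.
assert from_digits(long_multiply(to_digits(23958233), to_digits(5830))) == 139676498390
```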
Somechipsimplement long multiplication, inhardwareor inmicrocode, for various integer and floating-point word sizes. Inarbitrary-precision arithmetic, it is common to use long multiplication with the base set to 2w, wherewis the number of bits in a word, for multiplying relatively small numbers. To multiply two numbers withndigits using this method, one needs aboutn2operations. More formally, multiplying twon-digit numbers using long multiplication requiresΘ(n2) single-digit operations (additions and multiplications).
When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. A typical solution is to represent the number in a small base,b, such that, for example, 8bis a representable machine integer. Several additions can then be performed before an overflow occurs. When the number becomes too large, we add part of it to the result, or we carry and map the remaining part back to a number that is less thanb. This process is callednormalization. Richard Brent used this approach in his Fortran package, MP.[2]
Computers initially used a very similar algorithm to long multiplication in base 2, but modern processors have optimized circuitry for fast multiplications using more efficient algorithms, at the price of a more complex hardware realization.[citation needed]In base two, long multiplication is sometimes called"shift and add", because the algorithm simplifies and just consists of shifting left (multiplying by powers of two) and adding. Most currently available microprocessors implement this or other similar algorithms (such asBooth encoding) for various integer and floating-point sizes inhardware multipliersor inmicrocode.[citation needed]
On currently available processors, a bit-wise shift instruction is usually (but not always) faster than a multiply instruction and can be used to multiply (shift left) and divide (shift right) by powers of two. Multiplication by a constant anddivision by a constantcan be implemented using a sequence of shifts and adds or subtracts. For example, there are several ways to multiply by 10 using only bit-shift and addition.
In some cases such sequences of shifts and adds or subtracts will outperform hardware multipliers and especially dividers. A division by a number of the form2n{\displaystyle 2^{n}}or2n±1{\displaystyle 2^{n}\pm 1}often can be converted to such a short sequence.
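For instance, two of the several ways to multiply by 10 using only bit-shifts and addition, sketched in Python:

```python
def times10_a(x):
    # 10*x = 8*x + 2*x
    return (x << 3) + (x << 1)

def times10_b(x):
    # 10*x = 2*(4*x + x)
    return ((x << 2) + x) << 1

assert all(times10_a(n) == 10 * n == times10_b(n) for n in range(1000))
```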
In addition to the standard long multiplication, there are several other methods used to perform multiplication by hand. Such algorithms may be devised for speed, ease of calculation, or educational value, particularly when computers ormultiplication tablesare unavailable.
Thegrid method(or box method) is an introductory method for multiple-digit multiplication that is often taught to pupils atprimary schoolorelementary school. It has been a standard part of the national primary school mathematics curriculum in England and Wales since the late 1990s.[3]
Both factors are broken up ("partitioned") into their hundreds, tens and units parts, and the products of the parts are then calculated explicitly in a relatively simple multiplication-only stage, before these contributions are then totalled to give the final answer in a separate addition stage.
The calculation 34 × 13, for example, could be computed using the grid:
followed by addition to obtain 442, either in a single sum (see right), or through forming the row-by-row totals
This calculation approach (though not necessarily with the explicit grid arrangement) is also known as thepartial products algorithm. Its essence is the calculation of the simple multiplications separately, with all addition being left to the final gathering-up stage.
The grid method can in principle be applied to factors of any size, although the number of sub-products becomes cumbersome as the number of digits increases. Nevertheless, it is seen as a usefully explicit method to introduce the idea of multiple-digit multiplications; and, in an age when most multiplication calculations are done using a calculator or a spreadsheet, it may in practice be the only multiplication algorithm that some students will ever need.
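The grid method's separate multiply-then-total stages can be sketched as follows (the function name is illustrative):

```python
def grid_multiply(x, y):
    """Grid (partial products) multiplication: split each factor into its
    place-value parts, multiply every pair of parts, and total at the end."""
    def parts(n):
        return [int(d) * 10**i for i, d in enumerate(str(n)[::-1]) if d != "0"]
    return sum(px * py for px in parts(x) for py in parts(y))

# The article's example: 34 x 13 = 30*10 + 30*3 + 4*10 + 4*3 = 442.
assert grid_multiply(34, 13) == 442
```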
Lattice, or sieve, multiplication is algorithmically equivalent to long multiplication. It requires the preparation of a lattice (a grid drawn on paper) which guides the calculation and separates all the multiplications from theadditions. It was introduced to Europe in 1202 inFibonacci'sLiber Abaci. Fibonacci described the operation as mental, using his right and left hands to carry the intermediate calculations.Matrakçı Nasuhpresented 6 different variants of this method in his 16th-century book, Umdet-ul Hisab. It was widely used inEnderunschools across the Ottoman Empire.[4]Napier's bones, orNapier's rods, also used this method, as published by Napier in 1617, the year of his death.
As shown in the example, the multiplicand and multiplier are written above and to the right of a lattice, or a sieve. It is found inMuhammad ibn Musa al-Khwarizmi's "Arithmetic", one of Leonardo's sources mentioned by Sigler, author of "Fibonacci's Liber Abaci", 2002.[citation needed]
The pictures on the right show how to calculate 345 × 12 using lattice multiplication. As a more complicated example, consider the picture below displaying the computation of 23,958,233 multiplied by 5,830 (multiplier); the result is 139,676,498,390. Notice 23,958,233 is along the top of the lattice and 5,830 is along the right side. The products fill the lattice and the sum of those products (on the diagonal) are along the left and bottom sides. Then those sums are totaled as shown.
The binary method is also known as peasant multiplication, because it has been widely used by people who are classified as peasants and thus have not memorized themultiplication tablesrequired for long multiplication.[5][failed verification]The algorithm was in use in ancient Egypt.[6]Its main advantages are that it can be taught quickly, requires no memorization, and can be performed using tokens, such aspoker chips, if paper and pencil aren't available. The disadvantage is that it takes more steps than long multiplication, so it can be unwieldy for large numbers.
On paper, write down in one column the numbers you get when you repeatedly halve the multiplier, ignoring the remainder; in a column beside it repeatedly double the multiplicand. Cross out each row in which the last digit of the first number is even, and add the remaining numbers in the second column to obtain the product.
This example uses peasant multiplication to multiply 11 by 3 to arrive at a result of 33.
Describing the steps explicitly:
The method works because multiplication isdistributive, so:
A more complicated example, using the figures from the earlier examples (23,958,233 and 5,830):
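The halve/double/add procedure can be sketched directly (variable names illustrative):

```python
def peasant_multiply(a, b):
    """Russian peasant (binary) multiplication: repeatedly halve one column
    and double the other, adding the doublings where the halved value is odd
    (i.e. the rows that are not crossed out)."""
    total = 0
    while a > 0:
        if a % 2 == 1:      # row kept: halved value is odd
            total += b
        a //= 2             # halve, ignoring the remainder
        b *= 2              # double
    return total

assert peasant_multiply(11, 3) == 33
assert peasant_multiply(23958233, 5830) == 139676498390
```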
This formula can in some cases be used to make multiplication tasks easier to complete:
In the case wherex{\displaystyle x}andy{\displaystyle y}are integers, we have that
becausex+y{\displaystyle x+y}andx−y{\displaystyle x-y}are either both even or both odd. This means that
and it is sufficient to (pre-)compute the integral part of squares divided by 4, as in the following example.
Below is a lookup table of quarter squares with the remainder discarded for the digits 0 through 18; this allows for the multiplication of numbers up to9×9.
If, for example, you wanted to multiply 9 by 3, you observe that the sum and difference are 12 and 6 respectively. Looking both those values up on the table yields 36 and 9, the difference of which is 27, which is the product of 9 and 3.
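A sketch of the table-lookup method, with the table built rather than copied from the article (valid for x ≥ y ≥ 0):

```python
# Quarter-square table floor(n^2 / 4) for n = 0..18, enough to
# multiply any two single digits.
QUARTER_SQUARES = [n * n // 4 for n in range(19)]

def qs_multiply(x, y):
    """x*y = floor((x+y)^2/4) - floor((x-y)^2/4), assuming x >= y >= 0;
    the floors cancel because x+y and x-y have the same parity."""
    return QUARTER_SQUARES[x + y] - QUARTER_SQUARES[x - y]

# The article's example: sum 12 and difference 6 give 36 - 9 = 27.
assert QUARTER_SQUARES[12] == 36 and QUARTER_SQUARES[6] == 9
assert qs_multiply(9, 3) == 27
```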
Early quarter-square multiplication involving thefloor functionis attributed by some sources[7][8]toBabylonian mathematics(2000–1600 BC).
Antoine Voisin published a table of quarter squares from 1 to 1000 in 1817 as an aid in multiplication. A larger table of quarter squares from 1 to 100000 was published by Samuel Laundy in 1856,[9]and a table from 1 to 200000 by Joseph Blater in 1888.[10]
Quarter square multipliers were used inanalog computersto form ananalog signalthat was the product of two analog input signals. In this application, the sum and difference of two inputvoltagesare formed usingoperational amplifiers. The square of each of these is approximated usingpiecewise linearcircuits. Finally the difference of the two squares is formed and scaled by a factor of one fourth using yet another operational amplifier.
In 1980, Everett L. Johnson proposed using the quarter square method in adigitalmultiplier.[11]To form the product of two 8-bit integers, for example, the digital device forms the sum and difference, looks both quantities up in a table of squares, takes the difference of the results, and divides by four by shifting two bits to the right. For 8-bit integers the table of quarter squares will have 2^9 − 1 = 511 entries (one entry for the full range 0..510 of possible sums, the differences using only the first 256 entries in range 0..255), or 2^9 − 1 = 511 entries (using for negative differences the technique of two's complement and 9-bit masking, which avoids testing the sign of differences), each entry being 16 bits wide (the entry values range from 0²/4 = 0 to 510²/4 = 65025).
The quarter square multiplier technique has benefited 8-bit systems that do not have any support for a hardware multiplier. Charles Putney implemented this for the6502.[12]
A line of research intheoretical computer scienceis about the number of single-bit arithmetic operations necessary to multiply twon{\displaystyle n}-bit integers. This is known as thecomputational complexityof multiplication. Usual algorithms done by hand have asymptotic complexity ofO(n2){\displaystyle O(n^{2})}, but in 1960Anatoly Karatsubadiscovered that better complexity was possible (with theKaratsuba algorithm).[13]
Currently, the algorithm with the best computational complexity is a 2019 algorithm ofDavid HarveyandJoris van der Hoeven, which uses the strategies of usingnumber-theoretic transformsintroduced with theSchönhage–Strassen algorithmto multiply integers using onlyO(nlogn){\displaystyle O(n\log n)}operations.[14]This is conjectured to be the best possible algorithm, but lower bounds ofΩ(nlogn){\displaystyle \Omega (n\log n)}are not known.
Karatsuba multiplication is an O(nlog23) ≈ O(n1.585) divide-and-conquer algorithm that uses recursion to combine sub-calculations. By rewriting the product formula, the multiplication is split into smaller sub-products, which can then be computed recursively and combined quickly.
Letx{\displaystyle x}andy{\displaystyle y}be represented asn{\displaystyle n}-digit strings in some baseB{\displaystyle B}. For any positive integerm{\displaystyle m}less thann{\displaystyle n}, one can write the two given numbers as
wherex0{\displaystyle x_{0}}andy0{\displaystyle y_{0}}are less thanBm{\displaystyle B^{m}}. The product is then
xy=(x1Bm+x0)(y1Bm+y0)=x1y1B2m+(x1y0+x0y1)Bm+x0y0=z2B2m+z1Bm+z0,{\displaystyle {\begin{aligned}xy&=(x_{1}B^{m}+x_{0})(y_{1}B^{m}+y_{0})\\&=x_{1}y_{1}B^{2m}+(x_{1}y_{0}+x_{0}y_{1})B^{m}+x_{0}y_{0}\\&=z_{2}B^{2m}+z_{1}B^{m}+z_{0},\\\end{aligned}}}
where
These formulae require four multiplications and were known toCharles Babbage.[15]Karatsuba observed thatxy{\displaystyle xy}can be computed in only three multiplications, at the cost of a few extra additions. Withz0{\displaystyle z_{0}}andz2{\displaystyle z_{2}}as before one can observe that
Because of the overhead of recursion, Karatsuba's multiplication is slower than long multiplication for small values ofn; typical implementations therefore switch to long multiplication for small values ofn.
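A minimal recursive sketch (base-10 splitting, with a deliberately tiny cutoff for clarity; real implementations use a much larger cutoff and binary limbs):

```python
def karatsuba(x, y):
    """Karatsuba multiplication: three recursive products instead of four."""
    if x < 10 or y < 10:          # cutoff: fall back to built-in multiply
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    B = 10 ** m
    x1, x0 = divmod(x, B)          # x = x1*B + x0
    y1, y0 = divmod(y, B)          # y = y1*B + y0
    z0 = karatsuba(x0, y0)
    z2 = karatsuba(x1, y1)
    # z1 = x1*y0 + x0*y1, recovered from a single extra product:
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0
    return z2 * B * B + z1 * B + z0

assert karatsuba(12345678, 87654321) == 12345678 * 87654321
```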
Expanding the product of more than two split factors reveals the following pattern:
(x1Bm+x0)(y1Bm+y0)(z1Bm+z0)(a1Bm+a0)=a1x1y1z1B4m+a1x1y1z0B3m+a1x1y0z1B3m+a1x0y1z1B3m+a0x1y1z1B3m+a1x1y0z0B2m+a1x0y1z0B2m+a0x1y1z0B2m+a1x0y0z1B2m+a0x1y0z1B2m+a0x0y1z1B2m+a1x0y0z0Bm1+a0x1y0z0Bm1+a0x0y1z0Bm1+a0x0y0z1Bm1+a0x0y0z0B1m{\displaystyle {\begin{alignedat}{5}(x_{1}B^{m}+x_{0})(y_{1}B^{m}+y_{0})(z_{1}B^{m}+z_{0})(a_{1}B^{m}+a_{0})&=a_{1}x_{1}y_{1}z_{1}B^{4m}&+a_{1}x_{1}y_{1}z_{0}B^{3m}&+a_{1}x_{1}y_{0}z_{1}B^{3m}&+a_{1}x_{0}y_{1}z_{1}B^{3m}\\&+a_{0}x_{1}y_{1}z_{1}B^{3m}&+a_{1}x_{1}y_{0}z_{0}B^{2m}&+a_{1}x_{0}y_{1}z_{0}B^{2m}&+a_{0}x_{1}y_{1}z_{0}B^{2m}\\&+a_{1}x_{0}y_{0}z_{1}B^{2m}&+a_{0}x_{1}y_{0}z_{1}B^{2m}&+a_{0}x_{0}y_{1}z_{1}B^{2m}&+a_{1}x_{0}y_{0}z_{0}B^{m{\phantom {1}}}\\&+a_{0}x_{1}y_{0}z_{0}B^{m{\phantom {1}}}&+a_{0}x_{0}y_{1}z_{0}B^{m{\phantom {1}}}&+a_{0}x_{0}y_{0}z_{1}B^{m{\phantom {1}}}&+a_{0}x_{0}y_{0}z_{0}{\phantom {B^{1m}}}\end{alignedat}}}
Each summand is associated with a unique binary number from 0 to2N−1{\displaystyle 2^{N}-1}, for examplea1x1y1z1⟷1111,a1x0y1z0⟷1010{\displaystyle a_{1}x_{1}y_{1}z_{1}\longleftrightarrow 1111,\ a_{1}x_{0}y_{1}z_{0}\longleftrightarrow 1010}etc. Furthermore, B is raised to the power of m times the number of 1s in this binary string.
If we express this in fewer terms, we get:
∏j=1N(xj,1Bm+xj,0)=∑i=02N−1∏j=1Nxj,c(i,j)Bm∑j=1Nc(i,j)=∑j=0NzjBjm{\displaystyle \prod _{j=1}^{N}(x_{j,1}B^{m}+x_{j,0})=\sum _{i=0}^{2^{N}-1}\prod _{j=1}^{N}x_{j,c(i,j)}B^{m\sum _{j=1}^{N}c(i,j)}=\sum _{j=0}^{N}z_{j}B^{jm}}, wherec(i,j){\displaystyle c(i,j)}denotes the j-th binary digit of the number i. Notice thatc(i,j)∈{0,1}{\displaystyle c(i,j)\in \{0,1\}}.
z0=∏j=1Nxj,0zN=∏j=1Nxj,1zN−1=∏j=1N(xj,0+xj,1)−∑i≠N−1Nzi{\displaystyle {\begin{aligned}z_{0}&=\prod _{j=1}^{N}x_{j,0}\\z_{N}&=\prod _{j=1}^{N}x_{j,1}\\z_{N-1}&=\prod _{j=1}^{N}(x_{j,0}+x_{j,1})-\sum _{i\neq N-1}^{N}z_{i}\end{aligned}}}
Karatsuba's algorithm was the first known algorithm for multiplication that is asymptotically faster than long multiplication,[16]and can thus be viewed as the starting point for the theory of fast multiplications.
Another method of multiplication is called Toom–Cook or Toom-3. The Toom–Cook method splits each number to be multiplied into multiple parts. The Toom–Cook method is one of the generalizations of the Karatsuba method. A three-way Toom–Cook can do a size-3Nmultiplication for the cost of five size-Nmultiplications. This accelerates the operation by a factor of 9/5, while the Karatsuba method accelerates it by 4/3.
Although using more and more parts can reduce the time spent on recursive multiplications further, the overhead from additions and digit management also grows. For this reason, the method of Fourier transforms is typically faster for numbers with several thousand digits, and asymptotically faster for even larger numbers.
Every number in base B can be written as a polynomial:
X=∑i=0NxiBi{\displaystyle X=\sum _{i=0}^{N}{x_{i}B^{i}}}
Furthermore, multiplication of two numbers could be thought of as a product of two polynomials:
XY=(∑i=0NxiBi)(∑j=0NyjBj){\displaystyle XY=(\sum _{i=0}^{N}{x_{i}B^{i}})(\sum _{j=0}^{N}{y_{j}B^{j}})}
Because, for each powerBk{\displaystyle B^{k}}, the coefficient of the product isck=∑(i,j):i+j=kaibj=∑i=0kaibk−i{\displaystyle c_{k}=\sum _{(i,j):i+j=k}{a_{i}b_{j}}=\sum _{i=0}^{k}{a_{i}b_{k-i}}}(writingaiandbjfor the digit sequences of X and Y), we have a convolution.
By using the FFT (fast Fourier transform) together with the convolution theorem, we get
f^(a∗b)=f^(∑i=0kaibk−i)=f^(a)∙f^(b){\displaystyle {\hat {f}}(a*b)={\hat {f}}(\sum _{i=0}^{k}{a_{i}b_{k-i}})={\hat {f}}(a)\bullet {\hat {f}}(b)}. That is,Ck=ak∙bk{\displaystyle C_{k}=a_{k}\bullet b_{k}}, whereCk{\displaystyle C_{k}}is the corresponding coefficient in Fourier space. This can also be written as:fft(a∗b)=fft(a)∙fft(b){\displaystyle \mathrm {fft} (a*b)=\mathrm {fft} (a)\bullet \mathrm {fft} (b)}.
The coefficients match because the Fourier transform is linear and because these polynomials consist of only one unique term per coefficient:
f^(xn)=(i2π)nδ(n){\displaystyle {\hat {f}}(x^{n})=\left({\frac {i}{2\pi }}\right)^{n}\delta ^{(n)}}andf^(aX(ξ)+bY(ξ))=aX^(ξ)+bY^(ξ){\displaystyle {\hat {f}}(a\,X(\xi )+b\,Y(\xi ))=a\,{\hat {X}}(\xi )+b\,{\hat {Y}}(\xi )}
Through the FFT, we have thus reduced the convolution problem to a pointwise product problem. By computing the inverse FFT (polynomial interpolation) for eachck{\displaystyle c_{k}}, one obtains the desired coefficients.
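The whole pipeline (digits → FFT → pointwise product → inverse FFT → coefficients) can be sketched with a small radix-2 FFT. This floating-point version is illustrative only; production implementations such as Schönhage–Strassen use number-theoretic transforms to avoid rounding error:

```python
import cmath

def fft(a, invert=False):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def fft_multiply(x, y, base=10):
    """Multiply integers via FFT convolution of their digit sequences;
    reliable only for modest sizes because of floating-point rounding."""
    a = [int(d) for d in str(x)[::-1]]
    b = [int(d) for d in str(y)[::-1]]
    n = 1
    while n < len(a) + len(b):
        n *= 2
    fa = fft([complex(v) for v in a + [0] * (n - len(a))])
    fb = fft([complex(v) for v in b + [0] * (n - len(b))])
    # Pointwise product in Fourier space = convolution of the digit sequences.
    fc = fft([p * q for p, q in zip(fa, fb)], invert=True)
    coeffs = [round(v.real / n) for v in fc]   # inverse FFT scaling by 1/n
    return sum(c * base**i for i, c in enumerate(coeffs))

assert fft_multiply(23958233, 5830) == 139676498390
```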
The algorithm uses a divide-and-conquer strategy to split the problem into subproblems.
It has a time complexity of O(nlog(n) log(log(n))).
The algorithm was invented byStrassen(1968). It was made practical and theoretical guarantees were provided in 1971 bySchönhageand Strassen resulting in theSchönhage–Strassen algorithm.[17]
In 2007 theasymptotic complexityof integer multiplication was improved by the Swiss mathematicianMartin Fürerof Pennsylvania State University toO(nlogn⋅2Θ(log∗(n))){\textstyle O(n\log n\cdot {2}^{\Theta (\log ^{*}(n))})}using Fourier transforms overcomplex numbers,[18]where log*denotes theiterated logarithm. Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi gave a similar algorithm usingmodular arithmeticin 2008 achieving the same running time.[19]In context of the above material, what these latter authors have achieved is to findNmuch less than 23k+ 1, so thatZ/NZhas a (2m)th root of unity. This speeds up computation and reduces the time complexity. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs.
In 2014, Harvey,Joris van der Hoevenand Lecerf[20]gave a new algorithm that achieves a running time ofO(nlogn⋅23log∗n){\displaystyle O(n\log n\cdot 2^{3\log ^{*}n})}, making explicit the implied constant in theO(log∗n){\displaystyle O(\log ^{*}n)}exponent. They also proposed a variant of their algorithm which achievesO(nlogn⋅22log∗n){\displaystyle O(n\log n\cdot 2^{2\log ^{*}n})}but whose validity relies on standard conjectures about the distribution ofMersenne primes. In 2016, Covanov and Thomé proposed an integer multiplication algorithm based on a generalization ofFermat primesthat conjecturally achieves a complexity bound ofO(nlogn⋅22log∗n){\displaystyle O(n\log n\cdot 2^{2\log ^{*}n})}. This matches the 2015 conditional result of Harvey, van der Hoeven, and Lecerf but uses a different algorithm and relies on a different conjecture.[21]In 2018, Harvey and van der Hoeven used an approach based on the existence of short lattice vectors guaranteed byMinkowski's theoremto prove an unconditional complexity bound ofO(nlogn⋅22log∗n){\displaystyle O(n\log n\cdot 2^{2\log ^{*}n})}.[22]
In March 2019,David HarveyandJoris van der Hoevenannounced their discovery of anO(nlogn)multiplication algorithm.[23]It was published in theAnnals of Mathematicsin 2021.[24]Because Schönhage and Strassen predicted thatnlog(n) is the "best possible" result, Harvey said: "...our work is expected to be the end of the road for this problem, although we don't know yet how to prove this rigorously."[25]
There is a trivial lower bound ofΩ(n) for multiplying twon-bit numbers on a single processor; no matching algorithm (on conventional machines, that is on Turing equivalent machines) nor any sharper lower bound is known. Multiplication lies outside ofAC0[p]for any primep, meaning there is no family of constant-depth, polynomial (or even subexponential) size circuits using AND, OR, NOT, and MODpgates that can compute a product. This follows from a constant-depth reduction of MODqto multiplication.[26]Lower bounds for multiplication are also known for some classes ofbranching programs.[27]
Complex multiplication normally involves four multiplications and two additions.
Or
As observed by Peter Ungar in 1963, one can reduce the number of multiplications to three, using essentially the same computation asKaratsuba's algorithm.[28]The product (a+bi) · (c+di) can be calculated in the following way.
This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two. If a multiply is more expensive than three adds or subtracts, as when calculating by hand, then there is a gain in speed. On modern computers a multiply and an add can take about the same time so there may be no speed gain. There is a trade-off in that there may be some loss of precision when using floating point.
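A sketch of the three-multiplication scheme, grouped so that the factors (d − c) and (c + d) mentioned below can be precomputed (the names k1, k2, k3 are illustrative):

```python
def complex_mul3(a, b, c, d):
    """(a + b*i) * (c + d*i) using three real multiplications
    and five additions/subtractions."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return k1 - k3, k1 + k2   # (real part, imaginary part)

# (3 + 4i)(5 + 6i) = -9 + 38i
assert complex_mul3(3, 4, 5, 6) == (-9, 38)
```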
Forfast Fourier transforms(FFTs) (or anylinear transformation) the complex multiplies are by constant coefficientsc+di(calledtwiddle factorsin FFTs), in which case two of the additions (d−candc+d) can be precomputed. Hence, only three multiplies and three adds are required.[29]However, trading off a multiplication for an addition in this way may no longer be beneficial with modernfloating-point units.[30]
All the above multiplication algorithms can also be expanded to multiplypolynomials. Alternatively theKronecker substitutiontechnique may be used to convert the problem of multiplying polynomials into a single binary multiplication.[31]
Long multiplication methods can be generalised to allow the multiplication of algebraic formulae:
As a further example of column based multiplication, consider multiplying 23 long tons (t), 12 hundredweight (cwt) and 2 quarters (qtr) by 47. This example usesavoirdupoismeasures: 1 t = 20 cwt, 1 cwt = 4 qtr.
First multiply the quarters by 47, the result 94 is written into the first workspace. Next, multiply cwt 12*47 = (2 + 10)*47 but don't add up the partial results (94, 470) yet. Likewise multiply 23 by 47 yielding (141, 940). The quarters column is totaled and the result placed in the second workspace (a trivial move in this case). 94 quarters is 23 cwt and 2 qtr, so place the 2 in the answer and put the 23 in the next column left. Now add up the three entries in the cwt column giving 587. This is 29 t 7 cwt, so write the 7 into the answer and the 29 in the column to the left. Now add up the tons column. There is no adjustment to make, so the result is just copied down.
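The column-by-column carrying in the worked example can be checked with a short script (the helper name is hypothetical, and the unit ratios are those stated above: 1 t = 20 cwt, 1 cwt = 4 qtr):

```python
def multiply_tcq(t, cwt, qtr, k):
    """Multiply a (tons, cwt, qtr) measure by an integer, column by column,
    carrying qtr -> cwt -> tons as in the worked example."""
    q = qtr * k
    carry_cwt, q = divmod(q, 4)     # 4 qtr = 1 cwt
    c = cwt * k + carry_cwt
    carry_t, c = divmod(c, 20)      # 20 cwt = 1 t
    t = t * k + carry_t
    return t, c, q

# 23 t 12 cwt 2 qtr multiplied by 47 gives 1110 t 7 cwt 2 qtr.
assert multiply_tcq(23, 12, 2, 47) == (1110, 7, 2)
```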
The same layout and methods can be used for any traditional measurements and non-decimal currencies such as the old British£sdsystem.
|
https://en.wikipedia.org/wiki/Multiplication_algorithm
|
TheKaratsuba algorithmis a fastmultiplication algorithmforintegers. It was discovered byAnatoly Karatsubain 1960 and published in 1962.[1][2][3]It is adivide-and-conquer algorithmthat reduces the multiplication of twon-digit numbers to three multiplications ofn/2-digit numbers and, by repeating this reduction, to at mostnlog23≈n1.58{\displaystyle n^{\log _{2}3}\approx n^{1.58}}single-digit multiplications. It is thereforeasymptotically fasterthan thetraditionalalgorithm, which performsn2{\displaystyle n^{2}}single-digit products.
The Karatsuba algorithm was the first multiplication algorithm asymptotically faster than the quadratic "grade school" algorithm.
TheToom–Cook algorithm(1963) is a faster generalization of Karatsuba's method, and theSchönhage–Strassen algorithm(1971) is even faster, for sufficiently largen.
The standard procedure for multiplication of twon-digit numbers requires a number of elementary operations proportional ton2{\displaystyle n^{2}\,\!}, orO(n2){\displaystyle O(n^{2})\,\!}inbig-O notation.Andrey Kolmogorovconjectured that the traditional algorithm wasasymptotically optimal,meaning that any algorithm for that task would requireΩ(n2){\displaystyle \Omega (n^{2})\,\!}elementary operations.
In 1960, Kolmogorov organized a seminar on mathematical problems incyberneticsat theMoscow State University, where he stated theΩ(n2){\displaystyle \Omega (n^{2})\,\!}conjecture and other problems in thecomplexity of computation. Within a week, Karatsuba, then a 23-year-old student, found an algorithm that multiplies twon-digit numbers inO(nlog23){\displaystyle O(n^{\log _{2}3})}elementary steps, thus disproving the conjecture. Kolmogorov was very excited about the discovery; he communicated it at the next meeting of the seminar, which was then terminated. Kolmogorov gave some lectures on the Karatsuba result at conferences all over the world (see, for example, "Proceedings of the International Congress of Mathematicians 1962", pp. 351–356, and also "6 Lectures delivered at the International Congress of Mathematicians in Stockholm, 1962") and published the method in 1962, in theProceedings of the USSR Academy of Sciences. The article had been written by Kolmogorov and contained two results on multiplication, Karatsuba's algorithm and a separate result byYuri Ofman; it listed "A. Karatsuba and Yu. Ofman" as the authors. Karatsuba only became aware of the paper when he received the reprints from the publisher.[2]
The basic principle of Karatsuba's algorithm isdivide-and-conquer, using a formula that allows one to compute the product of two large numbersx{\displaystyle x}andy{\displaystyle y}using three multiplications of smaller numbers, each with about half as many digits asx{\displaystyle x}ory{\displaystyle y}, plus some additions and digit shifts. This basic step is, in fact, a generalization ofa similar complex multiplication algorithm, where theimaginary unitiis replaced by a power of thebase.
Letx{\displaystyle x}andy{\displaystyle y}be represented asn{\displaystyle n}-digit strings in some baseB{\displaystyle B}. For any positive integerm{\displaystyle m}less thann{\displaystyle n}, one can write the two given numbers as
wherex0{\displaystyle x_{0}}andy0{\displaystyle y_{0}}are less thanBm{\displaystyle B^{m}}. The product is then
xy=(x1Bm+x0)(y1Bm+y0)=x1y1B2m+(x1y0+x0y1)Bm+x0y0=z2B2m+z1Bm+z0,{\displaystyle {\begin{aligned}xy&=(x_{1}B^{m}+x_{0})(y_{1}B^{m}+y_{0})\\&=x_{1}y_{1}B^{2m}+(x_{1}y_{0}+x_{0}y_{1})B^{m}+x_{0}y_{0}\\&=z_{2}B^{2m}+z_{1}B^{m}+z_{0},\\\end{aligned}}}
where
These formulae require four multiplications and were known toCharles Babbage.[4]Karatsuba observed thatxy{\displaystyle xy}can be computed in only three multiplications, at the cost of a few extra additions. Withz0{\displaystyle z_{0}}andz2{\displaystyle z_{2}}as before andz3=(x1+x0)(y1+y0),{\displaystyle z_{3}=(x_{1}+x_{0})(y_{1}+y_{0}),}one can observe that
Thus only three multiplications are required for computingz0,z1{\displaystyle z_{0},z_{1}}andz2.{\displaystyle z_{2}.}
To compute the product of 12345 and 6789, whereB= 10, choosem= 3. We usemright shifts for decomposing the input operands using the resulting base (Bm=1000), as:
Only three multiplications, which operate on smaller integers, are used to compute three partial results:
We get the result by just adding these three partial results, shifted accordingly (and then taking carries into account by decomposing these three inputs in base1000as for the input operands):
Note that the intermediate third multiplication operates on an input domain which is less than two times larger than for the two first multiplications, its output domain is less than four times larger, and base-1000carries computed from the first two multiplications must be taken into account when computing these two subtractions.
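The single Karatsuba step for this example can be checked numerically (a sketch; variable names follow the formulas above):

```python
B, m = 10, 3
x, y = 12345, 6789
x1, x0 = divmod(x, B**m)     # 12, 345
y1, y0 = divmod(y, B**m)     # 6, 789
z0 = x0 * y0                 # 272205
z2 = x1 * y1                 # 72
z3 = (x1 + x0) * (y1 + y0)   # 357 * 795 = 283815
z1 = z3 - z2 - z0            # 11538
result = z2 * B**(2*m) + z1 * B**m + z0
print(result)                # 83810205
```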
Ifnis four or more, the three multiplications in Karatsuba's basic step involve operands with fewer thanndigits. Therefore, those products can be computed byrecursivecalls of the Karatsuba algorithm. The recursion can be applied until the numbers are so small that they can (or must) be computed directly.
In a computer with a full 32-bit by 32-bitmultiplier, for example, one could chooseB= 231and store each digit as a separate 32-bit binary word. Then the sumsx1+x0andy1+y0will not need an extra binary word for storing the carry-over digit (as incarry-save adder), and the Karatsuba recursion can be applied until the numbers to multiply are only one digit long.
Karatsuba's basic step works for any baseBand anym, but the recursive algorithm is most efficient whenmis equal ton/2, rounded up. In particular, ifnis 2k, for some integerk, and the recursion stops only whennis 1, then the number of single-digit multiplications is 3k, which isncwherec= log23.
Since one can extend any inputs with zero digits until their length is a power of two, it follows that the number of elementary multiplications, for anyn, is at most3⌈log2n⌉≤3nlog23{\displaystyle 3^{\lceil \log _{2}n\rceil }\leq 3n^{\log _{2}3}\,\!}.
Since the additions, subtractions, and digit shifts (multiplications by powers ofB) in Karatsuba's basic step take time proportional ton, their cost becomes negligible asnincreases. More precisely, ifT(n) denotes the total number of elementary operations that the algorithm performs when multiplying twon-digit numbers, then
for some constantscandd. For thisrecurrence relation, themaster theorem for divide-and-conquer recurrencesgives theasymptoticboundT(n)=Θ(nlog23){\displaystyle T(n)=\Theta (n^{\log _{2}3})\,\!}.
It follows that, for sufficiently largen, Karatsuba's algorithm will perform fewer shifts and single-digit additions than longhand multiplication, even though its basic step uses more additions and shifts than the straightforward formula. For small values ofn, however, the extra shift and add operations may make it run slower than the longhand method.
Here is the pseudocode for this algorithm, using numbers represented in base ten. For the binary representation of integers, it suffices to replace everywhere 10 by 2.[5]
The second argument of the split_at function specifies the number of digits to extract from theright: for example, split_at("12345", 3) will extract the 3 final digits, giving: high="12", low="345".
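The pseudocode referred to above is not reproduced here; a minimal Python rendering of the same recursion (using integer `divmod` in place of the string-based `split_at`) might look like:

```python
def karatsuba(x, y):
    """Karatsuba multiplication in base 10 (a sketch of the omitted pseudocode)."""
    if x < 10 or y < 10:                 # base case: a single-digit factor
        return x * y
    n = max(len(str(x)), len(str(y)))
    m = n // 2                           # split at about half the digits
    high1, low1 = divmod(x, 10 ** m)     # equivalent to split_at(x, m)
    high2, low2 = divmod(y, 10 ** m)
    z0 = karatsuba(low1, low2)
    z2 = karatsuba(high1, high2)
    z3 = karatsuba(low1 + high1, low2 + high2)
    z1 = z3 - z2 - z0                    # recovers x1*y0 + x0*y1
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(12345, 6789))  # 83810205
```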
An issue that arises in implementation is that the above computation of(x1+x0){\displaystyle (x_{1}+x_{0})}and(y1+y0){\displaystyle (y_{1}+y_{0})}forz1{\displaystyle z_{1}}may overflow (producing a result in the rangeBm≤result<2Bm{\displaystyle B^{m}\leq {\text{result}}<2B^{m}}), which requires a multiplier having one extra bit. This can be avoided by noting that
This computation of(x0−x1){\displaystyle (x_{0}-x_{1})}and(y1−y0){\displaystyle (y_{1}-y_{0})}will produce a result in the range−Bm<result<Bm{\displaystyle -B^{m}<{\text{result}}<B^{m}}. This method may produce negative numbers, which require one extra bit to encode the sign, and would still require one extra bit for the multiplier. However, one way to avoid this is to record the sign and then use the absolute values of(x0−x1){\displaystyle (x_{0}-x_{1})}and(y1−y0){\displaystyle (y_{1}-y_{0})}to perform an unsigned multiplication, after which the result is negated if the signs originally differed. Another advantage is that even though(x0−x1)(y1−y0){\displaystyle (x_{0}-x_{1})(y_{1}-y_{0})}may be negative, the final computation ofz1{\displaystyle z_{1}}only involves additions.
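The sign-tracked subtractive variant can be sketched as (the helper name is ours):

```python
def z1_subtractive(x1, x0, y1, y0):
    """Compute z1 = x1*y0 + x0*y1 via z0 + z2 + (x0 - x1)(y1 - y0),
    using an unsigned multiply plus a recorded sign."""
    sign = 1
    dx, dy = x0 - x1, y1 - y0
    if dx < 0:
        sign, dx = -sign, -dx
    if dy < 0:
        sign, dy = -sign, -dy
    # dx * dy is now an unsigned product strictly below B^(2m)
    return x0 * y0 + x1 * y1 + sign * dx * dy

# Matches x1*y0 + x0*y1 for the running example's halves:
print(z1_subtractive(12, 345, 6, 789))  # 12*789 + 345*6 = 11538
```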
|
https://en.wikipedia.org/wiki/Karatsuba_algorithm
|
Toom–Cook, sometimes known asToom-3, is amultiplication algorithmfor large integers. It is named afterAndrei Toom, who introduced the new algorithm with its low complexity, andStephen Cook, who cleaned up the description of it.
Given two large integers,aandb, Toom–Cook splits upaandbintoksmaller parts each of lengthl, and performs operations on the parts. Askgrows, one may combine many of the multiplication sub-operations, thus reducing the overallcomputational complexityof the algorithm. The multiplication sub-operations can then be computed recursively using Toom–Cook multiplication again, and so on. Although the terms "Toom-3" and "Toom–Cook" are sometimes incorrectly used interchangeably, Toom-3 is only a single instance of the Toom–Cook algorithm, wherek= 3.
Toom-3 reduces nine multiplications to five, and runs in Θ(nlog(5)/log(3)) ≈ Θ(n1.46). In general, Toom-kruns inΘ(c(k)ne), wheree= log(2k− 1) / log(k),neis the time spent on sub-multiplications, andcis the time spent on additions and multiplication by small constants.[1]TheKaratsuba algorithmis equivalent to Toom-2, where the number is split into two smaller ones. It reduces four multiplications to three and so operates at Θ(nlog(3)/log(2)) ≈ Θ(n1.58).
Although the exponentecan be set arbitrarily close to 1 by increasingk, the constant term in the function grows very rapidly.[1][2]The growth rate for mixed-level Toom–Cook schemes was still an open research problem in 2005.[3]An implementation described byDonald Knuthachieves the time complexityΘ(n 2^(√(2 log n)) log n).[4]
Due to its overhead, Toom–Cook is slower than long multiplication with small numbers, and it is therefore typically used for intermediate-size multiplications, before the asymptotically fasterSchönhage–Strassen algorithm(with complexityΘ(nlognlog logn)) becomes practical.
Toom first described this algorithm in 1963, and Cook published an improved (asymptotically equivalent) algorithm in his PhD thesis in 1966.[5]
This section discusses exactly how to perform Toom-kfor any given value ofk, and is a simplification of a description of Toom–Cook polynomial multiplication described by Marco Bodrato.[6]The algorithm has five main steps:
In a typical large integer implementation, each integer is represented as a sequence of digits inpositional notation, with the base or radix set to some (typically large) valueb; for this example we useb= 10000, so that each digit corresponds to a group of four decimal digits (in a computer implementation,bwould typically be a power of 2 instead). Say the two integers being multiplied are:
These are much smaller than would normally be processed with Toom–Cook (grade-school multiplication would be faster) but they will serve to illustrate the algorithm.
In Toom-k, we want to split the factors intokparts.
The first step is to select the baseB=bi, such that the number of digits of bothmandnin baseBis at mostk(e.g., 3 in Toom-3). A typical choice foriis given by:
In our example we'll be doing Toom-3, so we chooseB=b2= 108. We then separatemandninto their baseBdigitsmi,ni:
We then use these digits as coefficients in degree-(k− 1)polynomialspandq, with the property thatp(B) =mandq(B) =n:
The purpose of defining these polynomials is that if we can compute their productr(x) =p(x)q(x), our answer will ber(B) =m×n.
In the case where the numbers being multiplied are of different sizes, it's useful to use different values ofkformandn, which we'll callkmandkn. For example, the algorithm "Toom-2.5" refers to Toom–Cook withkm= 3 andkn= 2. In this case theiinB=biis typically chosen by:
The Toom–Cook approach to computing the polynomial productp(x)q(x) is a commonly used one. Note that a polynomial of degreedis uniquely determined byd+ 1 points (for example, a line, which is a polynomial of degree one, is specified by two points). The idea is to evaluatep(·) andq(·) at various points, multiply their values at these points to get points on the product polynomial, and finally interpolate to find its coefficients.
Sincedeg(pq) = deg(p) + deg(q), we will needdeg(p) + deg(q) + 1 =km+kn− 1points to determine the final result. Call thisd. In the case of Toom-3,d= 5. The algorithm will work no matter what points are chosen (with a few small exceptions, see matrix invertibility requirement inInterpolation), but in the interest of simplifying the algorithm it's better to choose small integer values like 0, 1, −1, and −2.
One unusual point value that is frequently used is infinity, written ∞ or 1/0. To "evaluate" a polynomialpat infinity actually means to take the limit ofp(x)/xdegpasxgoes to infinity. Consequently,p(∞) is always the value of its highest-degree coefficient (in the example above coefficient m2).
In our Toom-3 example, we will use the points 0, 1, −1, −2, and ∞. These choices simplify evaluation, producing the formulas:
and analogously forq. In our example, the values we get are:
As shown, these values may be negative.
For the purpose of later explanation, it will be useful to view this evaluation process as a matrix-vector multiplication, where each row of the matrix contains powers of one of the evaluation points, and the vector contains the coefficients of the polynomial:
The dimensions of the matrix aredbykmforpanddbyknforq. The row for infinity is always all zero except for a 1 in the last column.
Multipoint evaluation can be obtained faster than with the above formulas. The number of elementary operations (addition/subtraction) can be reduced. The sequence given by Bodrato[6]for Toom-3, executed here over the first operand (polynomialp) of the running example is the following:
This sequence requires five addition/subtraction operations, one less than the straightforward evaluation. Moreover the multiplication by 4 in the calculation ofp(−2) was saved.
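Bodrato's evaluation sequence can be sketched directly (the function name is ours; the multiplication by 2 is a shift in a binary implementation):

```python
def toom3_eval(c0, c1, c2):
    """Evaluate c0 + c1*x + c2*x^2 at the Toom-3 points 0, 1, -1, -2, inf
    using Bodrato's five-add/subtract sequence."""
    t = c0 + c2
    p1 = t + c1                  # p(1)
    pm1 = t - c1                 # p(-1)
    pm2 = (pm1 + c2) * 2 - c0    # p(-2), avoiding a multiplication by 4
    return c0, p1, pm1, pm2, c2  # (p(0), p(1), p(-1), p(-2), p(inf))

print(toom3_eval(3, 5, 7))  # (3, 15, 5, 21, 7)
```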
Unlike multiplying the polynomialsp(·) andq(·), multiplying the evaluated valuesp(a) andq(a) just involves multiplying integers — a smaller instance of the original problem. We recursively invoke our multiplication procedure to multiply each pair of evaluated points. In practical implementations, as the operands become smaller, the algorithm will switch toschoolbook long multiplication. Lettingrbe the product polynomial, in our example we have:
As shown, these can also be negative. For large enough numbers, this is the most expensive step, the only step that is not linear in the sizes ofmandn.
This is the most complex step, the reverse of the evaluation step: given ourdpoints on the product polynomialr(·), we need to determine its coefficients. In other words, we want to solve this matrix equation for the vector on the right-hand side:
This matrix is constructed the same way as the one in the evaluation step, except that it'sd×d. We could solve this equation with a technique likeGaussian elimination, but this is too expensive. Instead, we use the fact that, provided the evaluation points were chosen suitably, this matrix is invertible (see alsoVandermonde matrix), and so:
All that remains is to compute this matrix-vector product. Although the matrix contains fractions, the resulting coefficients will be integers — so this can all be done with integer arithmetic, just additions, subtractions, and multiplication/division by small constants. A difficult design challenge in Toom–Cook is to find an efficient sequence of operations to compute this product; one sequence given by Bodrato[6]for Toom-3 is the following, executed here over the running example:
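Bodrato's interpolation sequence for Toom-3 can be sketched as follows (the function name is ours; all the divisions are exact over the integers, so integer division is safe):

```python
def toom3_interpolate(r_0, r_1, r_m1, r_m2, r_inf):
    """Recover the coefficients r0..r4 of the degree-4 product polynomial
    from its values at 0, 1, -1, -2 and infinity (Bodrato's sequence)."""
    r0 = r_0
    r4 = r_inf
    r3 = (r_m2 - r_1) // 3          # exact division by 3
    r1 = (r_1 - r_m1) // 2          # exact division by 2
    r2 = r_m1 - r_0
    r3 = (r2 - r3) // 2 + 2 * r_inf # exact division by 2
    r2 = r2 + r1 - r4
    r1 = r1 - r3
    return r0, r1, r2, r3, r4

# r(x) = x^4 + 2x^3 + 3x^2 + 4x + 5 evaluated at 0, 1, -1, -2, inf:
print(toom3_interpolate(5, 15, 3, 9, 1))  # (5, 4, 3, 2, 1)
```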
We now know our product polynomialr:
If we were using differentkm,kn, or evaluation points, the matrix and so our interpolation strategy would change; but it does not depend on the inputs and so can be hard-coded for any given set of parameters.
Finally, we evaluate r(B) to obtain our final answer. This is straightforward since B is a power ofband so the multiplications by powers of B are all shifts by a whole number of digits in baseb. In the running example b = 104and B = b2= 108.
And this is in fact the product of 1234567890123456789012 and 987654321987654321098.
Here we give common interpolation matrices for a few different common small values ofkmandkn.
Applying formally the definition, we may consider Toom-1 (km=kn= 1). This does not yield a multiplication algorithm, but a recursive algorithm that never halts, as it trivially reduces each input instance to a recursive call with the same instance. The algorithm requires 1 evaluation point, whose value is irrelevant, as it is used only to "evaluate" constant polynomials. Thus, the interpolation matrix is the identity matrix:
Toom-1.5 (km= 2,kn= 1) is still degenerate: it recursively reduces one input by halving its size, but leaves the other input unchanged, hence we can make it into a multiplication algorithm only if we supply a 1 ×nmultiplication algorithm as a base case (whereas the true Toom–Cook algorithm reduces to constant-size base cases). It requires 2 evaluation points, here chosen to be 0 and ∞. Its interpolation matrix is then the identity matrix:
The algorithm is essentially equivalent to a form of long multiplication: both coefficients of one factor are multiplied by the sole coefficient of the other factor.
Toom-2 (km= 2,kn= 2) requires 3 evaluation points, here chosen to be 0, 1, and ∞. It is the same asKaratsuba multiplication, with an interpolation matrix of:
Toom-2.5 (km= 3,kn= 2) requires 4 evaluation points, here chosen to be 0, 1, −1, and ∞. It then has an interpolation matrix of:
|
https://en.wikipedia.org/wiki/Toom%E2%80%93Cook_multiplication
|
TheSchönhage–Strassen algorithmis an asymptotically fastmultiplication algorithmfor largeintegers, published byArnold SchönhageandVolker Strassenin 1971.[1]It works by recursively applyingfast Fourier transform(FFT) overthe integers modulo2n+1{\displaystyle 2^{n}+1}. The run-timebit complexityto multiply twon-digit numbers using the algorithm isO(n⋅logn⋅loglogn){\displaystyle O(n\cdot \log n\cdot \log \log n)}inbigOnotation.
The Schönhage–Strassen algorithm was theasymptotically fastest multiplication methodknown from 1971 until 2007. It is asymptotically faster than older methods such asKaratsubaandToom–Cook multiplication, and starts to outperform them in practice for numbers beyond about 10,000 to 100,000 decimal digits.[2]In 2007, Martin Fürer publishedan algorithmwith faster asymptotic complexity.[3]In 2019, David Harvey andJoris van der Hoevendemonstrated that multi-digit multiplication has theoreticalO(nlogn){\displaystyle O(n\log n)}complexity; however, their algorithm has constant factors which make it impossibly slow for any conceivable practical problem (seegalactic algorithm).[4]
Applications of the Schönhage–Strassen algorithm include large computations done for their own sake such as theGreat Internet Mersenne Prime Searchandapproximations ofπ, as well as practical applications such asLenstra elliptic curve factorizationviaKronecker substitution, which reduces polynomial multiplication to integer multiplication.[5][6]
This section has a simplified version of the algorithm, showing how to compute the productab{\displaystyle ab}of two natural numbersa,b{\displaystyle a,b}, modulo a number of the form2n+1{\displaystyle 2^{n}+1}, wheren=2kM{\displaystyle n=2^{k}M}is some fixed number. The integersa,b{\displaystyle a,b}are to be divided intoD=2k{\displaystyle D=2^{k}}blocks ofM{\displaystyle M}bits, so in practical implementations, it is important to strike the right balance between the parametersM,k{\displaystyle M,k}. In any case, this algorithm will provide a way to multiply two positive integers, providedn{\displaystyle n}is chosen so thatab<2n+1{\displaystyle ab<2^{n}+1}.
Letn=DM{\displaystyle n=DM}be the number of bits in the signalsa{\displaystyle a}andb{\displaystyle b}, whereD=2k{\displaystyle D=2^{k}}is a power of two. Divide the signalsa{\displaystyle a}andb{\displaystyle b}intoD{\displaystyle D}blocks ofM{\displaystyle M}bits each, storing the resulting blocks as arraysA,B{\displaystyle A,B}(whose entries we shall consider for simplicity as arbitrary precision integers).
We now select a modulus for the Fourier transform, as follows. LetM′{\displaystyle M'}be such thatDM′≥2M+k{\displaystyle DM'\geq 2M+k}. Also putn′=DM′{\displaystyle n'=DM'}, and regard the elements of the arraysA,B{\displaystyle A,B}as (arbitrary precision) integers modulo2n′+1{\displaystyle 2^{n'}+1}. Observe that since2n′+1≥22M+k+1=D22M+1{\displaystyle 2^{n'}+1\geq 2^{2M+k}+1=D2^{2M}+1}, the modulus is large enough to accommodate any carries that can result from multiplyinga{\displaystyle a}andb{\displaystyle b}. Thus, the productab{\displaystyle ab}(modulo2n+1{\displaystyle 2^{n}+1}) can be calculated by evaluating the convolution ofA,B{\displaystyle A,B}. Also, withg=22M′{\displaystyle g=2^{2M'}}, we havegD/2≡−1(mod2n′+1){\displaystyle g^{D/2}\equiv -1{\pmod {2^{n'}+1}}}, and sog{\displaystyle g}is a primitiveD{\displaystyle D}th root of unity modulo2n′+1{\displaystyle 2^{n'}+1}.
We now take the discrete Fourier transform of the arraysA,B{\displaystyle A,B}in the ringZ/(2n′+1)Z{\displaystyle \mathbb {Z} /(2^{n'}+1)\mathbb {Z} }, using the root of unityg{\displaystyle g}for the Fourier basis, giving the transformed arraysA^,B^{\displaystyle {\widehat {A}},{\widehat {B}}}. BecauseD=2k{\displaystyle D=2^{k}}is a power of two, this can be achieved in logarithmic time using afast Fourier transform.
LetC^i=A^iB^i{\displaystyle {\widehat {C}}_{i}={\widehat {A}}_{i}{\widehat {B}}_{i}}(pointwise product), and compute the inverse transformC{\displaystyle C}of the arrayC^{\displaystyle {\widehat {C}}}, again using the root of unityg{\displaystyle g}. The arrayC{\displaystyle C}is now the convolution of the arraysA,B{\displaystyle A,B}. Finally, the productab(mod2n+1){\displaystyle ab{\pmod {2^{n}+1}}}is given by evaluatingab≡∑jCj2Mjmod2n+1.{\displaystyle ab\equiv \sum _{j}C_{j}2^{Mj}\mod {2^{n}+1}.}
This basic algorithm can be improved in several ways. Firstly, it is not necessary to store the digits ofa,b{\displaystyle a,b}to arbitrary precision, but rather only up ton′+1{\displaystyle n'+1}bits, which gives a more efficient machine representation of the arraysA,B{\displaystyle A,B}. Secondly, it is clear that the multiplications in the forward transforms are simple bit shifts. With some care, it is also possible to compute the inverse transform using only shifts. Taking care, it is thus possible to eliminate any true multiplications from the algorithm except for where the pointwise productC^i=A^iB^i{\displaystyle {\widehat {C}}_{i}={\widehat {A}}_{i}{\widehat {B}}_{i}}is evaluated. It is therefore advantageous to select the parametersD,M{\displaystyle D,M}so that this pointwise product can be performed efficiently, either because it is a single machine word or using some optimized algorithm for multiplying integers of a (ideally small) number of words. Selecting the parametersD,M{\displaystyle D,M}is thus an important area for further optimization of the method.
Every number in baseBcan be written as a polynomial:
Furthermore, the multiplication of two numbers can be thought of as a product of two polynomials:
Because, forBk{\displaystyle B^{k}}:ck=∑(i,j):i+j=kaibj=∑i=0kaibk−i{\displaystyle c_{k}=\sum _{(i,j):i+j=k}{a_{i}b_{j}}=\sum _{i=0}^{k}{a_{i}b_{k-i}}}, we have a convolution.
By using the FFT (fast Fourier transform, used in the original version rather than the NTT, or number-theoretic transform),[7]together with the convolution rule, we get
That is,Ck=ak∙bk{\displaystyle C_{k}=a_{k}\bullet b_{k}}, whereCk{\displaystyle C_{k}}is the corresponding coefficient in Fourier space. This can also be written as:fft(a∗b)=fft(a)∙fft(b){\displaystyle {\text{fft}}(a*b)={\text{fft}}(a)\bullet {\text{fft}}(b)}.
We have the same coefficients due to linearity under the Fourier transform, and because these polynomials consist of only one unique term per coefficient:
Convolution rule:f^(X∗Y)=f^(X)∙f^(Y){\displaystyle {\hat {f}}(X*Y)=\ {\hat {f}}(X)\bullet {\hat {f}}(Y)}
We have thus reduced our convolution problem to a product problem, through the FFT.
By finding the FFT of thepolynomial interpolationof eachCk{\displaystyle C_{k}}, one can determine the desired coefficients.
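The convolution rule can be demonstrated with a naive number-theoretic transform over GF(65537) (a sketch with names of our choosing; a real implementation would use a fast recursive transform and the modulus 2^n' + 1 described above):

```python
P = (1 << 16) + 1          # the Fermat prime 65537; 3 is a primitive root

def ntt(a, g):
    """Naive O(D^2) number-theoretic transform mod P (clarity over speed)."""
    D = len(a)
    return [sum(a[j] * pow(g, i * j, P) for j in range(D)) % P
            for i in range(D)]

def ntt_multiply(x, y, base_bits=4, D=16):
    """Multiply integers via the rule ntt(a * b) = ntt(a) . ntt(b) in GF(P).

    Requires x, y < 2**(base_bits * D // 2) so the acyclic convolution of
    the digit arrays fits inside a length-D cyclic one without wrap-around.
    """
    mask = (1 << base_bits) - 1
    A = [(x >> (base_bits * i)) & mask for i in range(D)]
    B = [(y >> (base_bits * i)) & mask for i in range(D)]
    g = pow(3, (P - 1) // D, P)          # primitive D-th root of unity mod P
    Ah, Bh = ntt(A, g), ntt(B, g)
    Ch = [u * v % P for u, v in zip(Ah, Bh)]   # pointwise product
    C = ntt(Ch, pow(g, -1, P))                 # inverse transform (unscaled)
    D_inv = pow(D, -1, P)
    C = [c * D_inv % P for c in C]             # normalize by 1/D
    return sum(c << (base_bits * i) for i, c in enumerate(C))

print(ntt_multiply(12345, 6789))  # 83810205
```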
This algorithm uses thedivide-and-conquer methodto divide the problem into subproblems.
By letting:
whereθN=−1{\displaystyle \theta ^{N}=-1}is theNth root, one sees that:[8]
This means one can use the weightθi{\displaystyle \theta ^{i}}, and then multiply byθ−k{\displaystyle \theta ^{-k}}afterwards.
Instead of using the weight, sinceθN=−1{\displaystyle \theta ^{N}=-1}, in the first step of the recursion (whenn=N{\displaystyle n=N}), one can calculate:
In a normal FFT which operates over complex numbers, one would use:
However, FFT can also be used as a NTT (number theoretic transformation) in Schönhage–Strassen. This means that we have to useθto generate numbers in a finite field (for exampleGF(2n+1){\displaystyle \mathrm {GF} (2^{n}+1)}).
A root of unity in a finite fieldGF(r)is an elementθsuch thatθr−1≡1{\displaystyle \theta ^{r-1}\equiv 1}orθr≡θ{\displaystyle \theta ^{r}\equiv \theta }. For example,GF(p), wherepis aprime number, gives{1,2,…,p−1}{\displaystyle \{1,2,\ldots ,p-1\}}.
Notice that2n≡−1{\displaystyle 2^{n}\equiv -1}inGF(2n+1){\displaystyle \operatorname {GF} (2^{n}+1)}and2≡−1{\displaystyle {\sqrt {2}}\equiv -1}inGF(2n+2+1){\displaystyle \operatorname {GF} (2^{n+2}+1)}. For these candidates,θN≡−1{\displaystyle \theta ^{N}\equiv -1}in its finite field, and therefore acts the way we want.
The same FFT algorithms can still be used, though, as long asθis aroot of unityof a finite field.
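The key property of the element 2 in GF(2^n + 1) can be verified directly for a small n:

```python
# In GF(2^n + 1) the element 2 satisfies 2^n ≡ -1, so theta = 2 is a
# (2n)-th root of unity, and multiplying by its powers is just a bit
# shift followed by a reduction. A tiny check with n = 8, i.e. modulus
# 2^8 + 1 = 257 (a Fermat prime):
n = 8
mod = (1 << n) + 1
assert pow(2, n, mod) == mod - 1   # 2^8 ≡ -1 (mod 257)
assert pow(2, 2 * n, mod) == 1     # so 2 has order 2n = 16 mod 257
print("2 is a 16th root of unity mod 257")
```

This is why the multiplications inside the forward and inverse transforms reduce to shifts.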
To find FFT/NTT transform, we do the following:
First product gives contribution tock{\displaystyle c_{k}}, for eachk. Second gives contribution tock{\displaystyle c_{k}}, due to(i+j){\displaystyle (i+j)}modN(n){\displaystyle N(n)}.
To do the inverse:
depending whether data needs to be normalized.
One multiplies by2−m{\displaystyle 2^{-m}}to normalize the FFT data into a specific range, where1n≡2−mmodN(n){\displaystyle {\frac {1}{n}}\equiv 2^{-m}{\bmod {N}}(n)}, withmfound using themodular multiplicative inverse.
In the Schönhage–Strassen algorithm,N=2M+1{\displaystyle N=2^{M}+1}. This should be thought of as a binary tree, where one has values in0≤index≤2M=2i+j{\displaystyle 0\leq {\text{index}}\leq 2^{M}=2^{i+j}}. By lettingK∈[0,M]{\displaystyle K\in [0,M]}, for eachKone can find alli+j=K{\displaystyle i+j=K}, and group all(i,j){\displaystyle (i,j)}pairs intoMdifferent groups. Usingi+j=k{\displaystyle i+j=k}to group(i,j){\displaystyle (i,j)}pairs through convolution is a classical problem in algorithms.[9]
Having this in mind,N=2M+1{\displaystyle N=2^{M}+1}helps us group(i,j){\displaystyle (i,j)}intoM2k{\displaystyle {\frac {M}{2^{k}}}}groups for each group of subtasks at depthkin a tree withN=2M2k+1{\displaystyle N=2^{\frac {M}{2^{k}}}+1}.
Notice thatN=2M+1=22L+1{\displaystyle N=2^{M}+1=2^{2^{L}}+1}, for some L. This makes N aFermat number. When doing modN=2M+1=22L+1{\displaystyle N=2^{M}+1=2^{2^{L}}+1}, we have a Fermat ring.
Because some Fermat numbers are Fermat primes, one can in some cases avoid calculations.
There are other choices ofNthat could have been used, of course, with the same prime-number advantages. By lettingN=2k−1{\displaystyle N=2^{k}-1}, one has the maximal number representable inkbits.N=2k−1{\displaystyle N=2^{k}-1}is a Mersenne number, which in some cases is a Mersenne prime. It is a natural candidate alongside the Fermat numberN=22L+1{\displaystyle N=2^{2^{L}}+1}.
Doing several mod calculations against differentNcan be helpful when it comes to solving integer products. By using theChinese remainder theorem, after splittingMinto smaller different types ofN, one can find the answer to the multiplicationxy.[10]
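The CRT recombination step can be sketched for two coprime moduli of the forms 2^k + 1 and 2^k − 1 (the helper name and the chosen moduli are ours):

```python
def crt_pair(r1, m1, r2, m2):
    """Combine x ≡ r1 (mod m1) and x ≡ r2 (mod m2) for coprime m1, m2,
    returning the unique solution in [0, m1*m2)."""
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return r1 + m1 * t

# Recover x*y from its residues modulo 2^8 + 1 = 257 and 2^9 - 1 = 511
# (coprime moduli); this works whenever the product is below m1*m2 = 131327.
x, y = 123, 456
m1, m2 = (1 << 8) + 1, (1 << 9) - 1
prod = crt_pair(x * y % m1, m1, x * y % m2, m2)
print(prod)  # 56088
```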
Fermat numbers and Mersenne numbers are just two types of numbers in a family called generalized Fermat–Mersenne numbers (GSM), with the formula:[11]
In this formula,M2,2k{\displaystyle M_{2,2^{k}}}is a Fermat number, andMp,1{\displaystyle M_{p,1}}is a Mersenne number.
This formula can be used to generate sets of equations, that can be used in CRT (Chinese remainder theorem):[12]
Furthermore;g2(p−1)n−1≡a2n−1(modMp,n){\displaystyle g^{2^{(p-1)n}-1}\equiv a^{2^{n}-1}{\pmod {M_{p,n}}}}, whereais an element that generates elements in{1,2,4,...2n−1,2n}{\displaystyle \{1,2,4,...2^{n-1},2^{n}\}}in a cyclic manner.
IfN=2t{\displaystyle N=2^{t}}, where1≤t≤n{\displaystyle 1\leq t\leq n}, thengt=a(2n−1)2n−t{\displaystyle g_{t}=a^{(2^{n}-1)2^{n-t}}}.
The following formula is helpful for finding a properK(the number of groups to divide theNbits into) for a given bit sizeN, by calculating the efficiency:[13]
E=2NK+kn{\displaystyle E={\frac {{\frac {2N}{K}}+k}{n}}}HereNis the bit size (the one used in2N+1{\displaystyle 2^{N}+1}) at the outermost level;KgivesNK{\displaystyle {\frac {N}{K}}}groups of bits, whereK=2k{\displaystyle K=2^{k}}.
nis found fromN,K, andkby finding the smallestx, such that2N/K+k≤n=K2x{\displaystyle 2N/K+k\leq n=K2^{x}}.
If one assumes an efficiency above 50%,n2≤2NK,K≤n{\displaystyle {\frac {n}{2}}\leq {\frac {2N}{K}},K\leq n}, andkis very small compared to the rest of the formula, one gets
This means that when something is very efficient,Kis bounded above by2N{\displaystyle 2{\sqrt {N}}}, or asymptotically bounded above byN{\displaystyle {\sqrt {N}}}.
An overview of the standard modular Schönhage–Strassen multiplication algorithm (with some optimizations) is given in.[14]
Use at leastK+1{\displaystyle K+1}bits to store them,
For implementation details, one can read the bookPrime Numbers: A Computational Perspective.[15]This variant differs somewhat from Schönhage's original method in that it exploits thediscrete weighted transformto performnegacyclic convolutionsmore efficiently. Another source for detailed information isKnuth'sThe Art of Computer Programming.[16]
This section explains a number of important practical optimizations, when implementing Schönhage–Strassen.
Below a certain cutoff point, it's more efficient to use other multiplication algorithms, such asToom–Cook multiplication.[17]
The idea is to use2{\displaystyle {\sqrt {2}}}as aroot of unityof order2n+2{\displaystyle 2^{n+2}}in the finite fieldGF(2n+2+1){\displaystyle \mathrm {GF} (2^{n+2}+1)}(it is a solution to the equationθ2n+2≡1(mod2n+2+1){\displaystyle \theta ^{2^{n+2}}\equiv 1{\pmod {2^{n+2}+1}}}) when weighting values in the NTT (number-theoretic transform) approach. It has been shown to save 10% in integer multiplication time.[18]
By lettingm=N+h{\displaystyle m=N+h}, one can computeuvmod2N+1{\displaystyle uv{\bmod {2^{N}+1}}}and(umod2h)(vmod2h){\displaystyle (u{\bmod {2^{h}}})(v{\bmod {2}}^{h})}. In combination with the CRT (Chinese remainder theorem), one can find the exact value of the multiplicationuv.[19]
Van Meter, Rodney; Itoh, Kohei M. (2005). "Fast Quantum Modular Exponentiation".Physical Review.71(5): 052320.arXiv:quant-ph/0408006.Bibcode:2005PhRvA..71e2320V.doi:10.1103/PhysRevA.71.052320.S2CID14983569.
A discussion of practical crossover points between various algorithms can be found in:Overview of Magma V2.9 Features, arithmetic sectionArchived2006-08-20 at theWayback Machine
Luis Carlos Coronado García, "Can Schönhage multiplication speed up the RSA encryption or decryption?Archived",University of Technology, Darmstadt(2005)
TheGNU Multi-Precision Libraryuses it for values of at least 1728 to 7808 64-bit words (33,000 to 150,000 decimal digits), depending on architecture. See:
"FFT Multiplication (GNU MP 6.2.1)".gmplib.org. Retrieved2021-07-20.
"MUL_FFT_THRESHOLD".GMP developers' corner. Archived fromthe originalon 24 November 2010. Retrieved3 November2011.
"MUL_FFT_THRESHOLD".gmplib.org. Retrieved2021-07-20.
Fürer's algorithm is used in the Basic Polynomial Algebra Subprograms (BPAS) open source library. See:Covanov, Svyatoslav; Mohajerani, Davood; Moreno Maza, Marc; Wang, Linxiao (2019-07-08)."Big Prime Field FFT on Multi-core Processors".Proceedings of the 2019 on International Symposium on Symbolic and Algebraic Computation(PDF). Beijing China: ACM. pp.106–113.doi:10.1145/3326229.3326273.ISBN978-1-4503-6084-5.S2CID195848601.
|
https://en.wikipedia.org/wiki/Sch%C3%B6nhage%E2%80%93Strassen_algorithm
|
Inmathematics, amultiplication table(sometimes, less formally, atimes table) is amathematical tableused to define amultiplicationoperationfor an algebraic system.
Thedecimalmultiplication table was traditionally taught as an essential part of elementary arithmetic around the world, as it lays the foundation for arithmetic operations with base-ten numbers. Many educators believe it is necessary to memorize the table up to 9 × 9.[1]
The oldest known multiplication tables were used by theBabyloniansabout 4000 years ago.[2]However, they used a base of 60.[2]The oldest known tables using a base of 10 are theChinesedecimal multiplication table on bamboo stripsdating to about 305 BC, during China'sWarring Statesperiod.[2]
The multiplication table is sometimes attributed to the ancient Greek mathematicianPythagoras(570–495 BC). It is also called the Table of Pythagoras in many languages (for example French, Italian and Russian), sometimes in English.[4]TheGreco-RomanmathematicianNicomachus(60–120 AD), a follower ofNeopythagoreanism, included a multiplication table in hisIntroduction to Arithmetic, whereas the oldest survivingGreekmultiplication table is on a wax tablet dated to the 1st century AD and currently housed in theBritish Museum.[5]
In 493 AD,Victorius of Aquitainewrote a 98-column multiplication table which gave (inRoman numerals) the product of every number from 2 to 50 times the row entries; the rows were "a list of numbers starting with one thousand, descending by hundreds to one hundred, then descending by tens to ten, then by ones to one, and then the fractions down to 1/144."[6]
In his 1820 bookThe Philosophy of Arithmetic,[7]mathematicianJohn Lesliepublished a table of "quarter-squares" which could be used, with some additional steps, for multiplication up to 1000 × 1000. Leslie also recommended that young pupils memorize the multiplication table up to 50 × 50.
In 1897,August Leopold CrellepublishedCalculating tables giving the products of every two numbers from one to one thousand[8]which is a simple multiplication table for products up to 1000 × 1000.
The illustration below shows a table up to 12 × 12, which is a size commonly used nowadays in schools in the English-speaking world.
Because multiplication of integers iscommutative, many schools use a smaller table as below. Some schools even remove the first column since 1 is themultiplicative identity.[citation needed]
The traditionalrote learningof multiplication was based on memorization of columns in the table, arranged as follows.
0 × 0 = 0   1 × 0 = 0   2 × 0 = 0   3 × 0 = 0   4 × 0 = 0   5 × 0 = 0   6 × 0 = 0   7 × 0 = 0   8 × 0 = 0   9 × 0 = 0   10 × 0 = 0   11 × 0 = 0   12 × 0 = 0
0 × 1 = 0   1 × 1 = 1   2 × 1 = 2   3 × 1 = 3   4 × 1 = 4   5 × 1 = 5   6 × 1 = 6   7 × 1 = 7   8 × 1 = 8   9 × 1 = 9   10 × 1 = 10   11 × 1 = 11   12 × 1 = 12
0 × 2 = 0   1 × 2 = 2   2 × 2 = 4   3 × 2 = 6   4 × 2 = 8   5 × 2 = 10   6 × 2 = 12   7 × 2 = 14   8 × 2 = 16   9 × 2 = 18   10 × 2 = 20   11 × 2 = 22   12 × 2 = 24
0 × 3 = 0   1 × 3 = 3   2 × 3 = 6   3 × 3 = 9   4 × 3 = 12   5 × 3 = 15   6 × 3 = 18   7 × 3 = 21   8 × 3 = 24   9 × 3 = 27   10 × 3 = 30   11 × 3 = 33   12 × 3 = 36
0 × 4 = 0   1 × 4 = 4   2 × 4 = 8   3 × 4 = 12   4 × 4 = 16   5 × 4 = 20   6 × 4 = 24   7 × 4 = 28   8 × 4 = 32   9 × 4 = 36   10 × 4 = 40   11 × 4 = 44   12 × 4 = 48
0 × 5 = 0   1 × 5 = 5   2 × 5 = 10   3 × 5 = 15   4 × 5 = 20   5 × 5 = 25   6 × 5 = 30   7 × 5 = 35   8 × 5 = 40   9 × 5 = 45   10 × 5 = 50   11 × 5 = 55   12 × 5 = 60
0 × 6 = 0   1 × 6 = 6   2 × 6 = 12   3 × 6 = 18   4 × 6 = 24   5 × 6 = 30   6 × 6 = 36   7 × 6 = 42   8 × 6 = 48   9 × 6 = 54   10 × 6 = 60   11 × 6 = 66   12 × 6 = 72
0 × 7 = 0   1 × 7 = 7   2 × 7 = 14   3 × 7 = 21   4 × 7 = 28   5 × 7 = 35   6 × 7 = 42   7 × 7 = 49   8 × 7 = 56   9 × 7 = 63   10 × 7 = 70   11 × 7 = 77   12 × 7 = 84
0 × 8 = 0   1 × 8 = 8   2 × 8 = 16   3 × 8 = 24   4 × 8 = 32   5 × 8 = 40   6 × 8 = 48   7 × 8 = 56   8 × 8 = 64   9 × 8 = 72   10 × 8 = 80   11 × 8 = 88   12 × 8 = 96
0 × 9 = 0   1 × 9 = 9   2 × 9 = 18   3 × 9 = 27   4 × 9 = 36   5 × 9 = 45   6 × 9 = 54   7 × 9 = 63   8 × 9 = 72   9 × 9 = 81   10 × 9 = 90   11 × 9 = 99   12 × 9 = 108
0 × 10 = 0   1 × 10 = 10   2 × 10 = 20   3 × 10 = 30   4 × 10 = 40   5 × 10 = 50   6 × 10 = 60   7 × 10 = 70   8 × 10 = 80   9 × 10 = 90   10 × 10 = 100   11 × 10 = 110   12 × 10 = 120
0 × 11 = 0   1 × 11 = 11   2 × 11 = 22   3 × 11 = 33   4 × 11 = 44   5 × 11 = 55   6 × 11 = 66   7 × 11 = 77   8 × 11 = 88   9 × 11 = 99   10 × 11 = 110   11 × 11 = 121   12 × 11 = 132
0 × 12 = 0   1 × 12 = 12   2 × 12 = 24   3 × 12 = 36   4 × 12 = 48   5 × 12 = 60   6 × 12 = 72   7 × 12 = 84   8 × 12 = 96   9 × 12 = 108   10 × 12 = 120   11 × 12 = 132   12 × 12 = 144
This form of writing the multiplication table in columns with complete number sentences is still used in some countries, such as Colombia, Bosnia and Herzegovina,[citation needed]instead of the modern grids above.
There is a pattern in the multiplication table that can help people to memorize the table more easily. It uses the figures below:
Figure 1 is used for multiples of 1, 3, 7, and 9; Figure 2 for multiples of 2, 4, 6, and 8. These patterns can be used to memorize the multiples of any number from 0 to 10, except 5. Start on the number being multiplied: when multiplying by 0, one stays on 0 (0 is external, so the arrows have no effect on it; otherwise 0 is used as a link to create a perpetual cycle). The pattern also works with multiples of 10: start at 1 and simply add a 0, giving 10, then apply each number in the pattern to the tens digit just as one normally would to the ones digit.
For example, to recall all the multiples of 7:
Tables can also define binary operations ongroups,fields,rings, and otheralgebraic systems. In such contexts they are calledCayley tables.
For every natural numbern, addition and multiplication inZn, the ring of integers modulon, are described by annbyntable. (SeeModular arithmetic.) For example, the tables forZ5are:
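Such tables can be generated directly; a minimal sketch for Z5:

```python
# Build the n-by-n addition and multiplication tables for Z_n, the
# ring of integers modulo n, with n = 5.
n = 5
add_table = [[(i + j) % n for j in range(n)] for i in range(n)]
mul_table = [[(i * j) % n for j in range(n)] for i in range(n)]

for row in add_table:
    print(row)          # first row is [0, 1, 2, 3, 4]
```

Row i, column j of each table gives (i + j) mod n and (i × j) mod n respectively.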
For other examples, seegroup.
Hypercomplex numbermultiplication tables show the non-commutativeresults of multiplying two hypercomplex imaginary units. The simplest example is that of thequaternionmultiplication table.
For further examples, seeOctonion § Multiplication,Sedenion § Multiplication, andTrigintaduonion § Multiplication.
Mokkandiscovered atHeijō Palacesuggest that the multiplication table may have been introduced to Japan through Chinese mathematical treatises such as theSunzi Suanjing, because their expression of the multiplication table shares the character如in products less than ten.[9]Chinese and Japanese share a similar system of eighty-one short, easily memorable sentences taught to students to help them learn the multiplication table up to 9 × 9. In current usage, the sentences that express products less than ten include an additional particle in both languages. In the case of modern Chinese, this is得(dé); and in Japanese, this isが(ga). This is useful for those who practice calculation with asuanpanor asoroban, because the sentences remind them to shift one column to the right when inputting a product that does not begin with atens digit. In particular, the Japanese multiplication table uses non-standard pronunciations for numbers in some specific instances (such as the replacement ofsan rokuwithsaburoku).
A bundle of 21 bamboo slips dated 305 BC in theWarring Statesperiod in theTsinghua Bamboo Slips(清華簡) collection is the world's earliest known example of a decimal multiplication table.[10]
In 1989, theNational Council of Teachers of Mathematics(NCTM) developed new standards which were based on the belief that all students should learn higher-order thinking skills, which recommended reduced emphasis on the teaching of traditional methods that relied on rote memorization, such as multiplication tables. Widely adopted texts such asInvestigations in Numbers, Data, and Space(widely known asTERCafter its producer, Technical Education Research Centers) omitted aids such as multiplication tables in early editions. NCTM made it clear in their 2006Focal Pointsthat basic mathematics facts must be learned, though there is no consensus on whether rote memorization is the best method. In recent years, a number of nontraditional methods have been devised to help children learn multiplication facts, including video-game style apps and books that aim to teach times tables through character-based stories.
|
https://en.wikipedia.org/wiki/Multiplication_table
|
Abinary multiplieris anelectronic circuitused indigital electronics, such as acomputer, tomultiplytwobinary numbers.
A variety ofcomputer arithmetictechniques can be used to implement a digital multiplier. Most techniques involve computing the set ofpartial products,which are then summed together usingbinary adders. This process is similar tolong multiplication, except that it uses a base-2 (binary)numeral system.
Between 1947 and 1949 Arthur Alec Robinson worked forEnglish Electric, as a student apprentice, and then as a development engineer. Crucially, during this period he studied for a PhD degree at the University of Manchester, where he worked on the design of the hardware multiplier for the earlyMark 1 computer.
However, until the late 1970s, mostminicomputersdid not have a multiply instruction, and so programmers used a "multiply routine"[1][2][3]which repeatedlyshifts and accumulatespartial results,
often written usingloop unwinding.Mainframe computershad multiply instructions, but they did the same sorts of shifts and adds as a "multiply routine".
Earlymicroprocessorsalso had no multiply instruction. Though the multiply instruction became common with the 16-bit generation,[4]at least two 8-bit processors had one: theMotorola 6809, introduced in 1978,[5]and theIntel MCS-51family, developed in 1980. The later modernAtmel AVR8-bit microcontrollers (ATmega, ATtiny and ATxmega) also provide a multiply instruction.
As moretransistors per chipbecame available due to larger-scale integration, it became possible to put enough adders on a single chip to sum all the partial products at once, rather than reuse a single adder to handle each partial product one at a time.
Because some commondigital signal processingalgorithms spend most of their time multiplying,digital signal processordesigners sacrifice considerable chip area in order to make the multiply as fast as possible; a single-cyclemultiply–accumulateunit often used up most of the chip area of early DSPs.
The method taught in school for multiplying decimal numbers is based on calculating partial products, shifting them to the left and then adding them together. The most difficult part is to obtain the partial products, as that involves multiplying a long number by one digit (from 0 to 9):
A binary computer does exactly the same multiplication as decimal numbers do, but with binary numbers. In binary encoding each long number is multiplied by one digit (either 0 or 1), and that is much easier than in decimal, as the product by 0 or 1 is just 0 or the same number. Therefore, the multiplication of two binary numbers comes down to calculating partial products (which are 0 or the first number),shiftingthem left, and then adding them together (a binary addition, of course):
This is much simpler than in the decimal system, as there is no table of multiplication to remember: just shifts and adds.
This method is mathematically correct and has the advantage that a small CPU may perform the multiplication by using the shift and add features of its arithmetic logic unit rather than a specialized circuit. The method is slow, however, as it involves many time-consuming intermediate additions. Faster multipliers may be engineered in order to do fewer additions; a modern processor might implement a dedicated parallel adder for partial products, letting the multiplication of two 64-bit numbers be done with only 6 rounds of additions, rather than 63.
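The shift-and-add scheme just described can be sketched as follows (a software rendition of what a small CPU does with its ALU):

```python
def shift_add_multiply(a: int, b: int) -> int:
    """Multiply two unsigned integers using only shifts and adds."""
    product = 0
    shift = 0
    while b:
        if b & 1:                  # this bit of b contributes a partial
            product += a << shift  # product: a shifted to the bit position
        b >>= 1
        shift += 1
    return product

print(shift_add_multiply(13, 11))  # 143
```

Each 1 bit of the multiplier costs one addition; the 0 bits cost only a shift.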
The second problem is that the basic school method handles the sign with a separate rule ("+ with + yields +", "+ with − yields −", etc.). Modern computers embed the sign of the number in the number itself, usually in thetwo's complementrepresentation. That forces the multiplication process to be adapted to handle two's complement numbers, and that complicates the process a bit more. Similarly, processors that useones' complement,sign-and-magnitude,IEEE-754or other binary representations require specific adjustments to the multiplication process.
For example, suppose we want to multiply twounsigned8-bit integers together:a[7:0] andb[7:0]. We can produce eight partial products by performing eight 1-bit multiplications, one for each bit in multiplicanda:
where {8{a[0]}} means repeating a[0] (the 0th bit of a) 8 times (Verilognotation).
In order to obtain our product, we then need to add up all eight of our partial products, as shown here:
In other words,P[15:0] is produced by summingp0,p1 << 1,p2 << 2, and so forth, to produce our final unsigned 16-bit product.
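The Verilog-style construction above can be mirrored in a short Python check; the operand values here are arbitrary illustrations:

```python
# Eight partial products for unsigned 8-bit a and b: p[i] is b masked by
# the replicated bit {8{a[i]}}, i.e. b if a's bit i is set, else 0.
a, b = 0b10110101, 0b01101110
p = [(b if (a >> i) & 1 else 0) for i in range(8)]

# P[15:0] = p0 + (p1 << 1) + (p2 << 2) + ... + (p7 << 7)
P = sum(pi << i for i, pi in enumerate(p))
assert P == a * b       # the 16-bit unsigned product
```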
Ifbhad been asignedinteger instead of anunsignedinteger, then the partial products would need to have been sign-extended up to the width of the product before summing. Ifahad been a signed integer, then partial productp7would need to be subtracted from the final sum, rather than added to it.
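A toy check of the signed handling just described, with illustrative operands; b is chosen non-negative here so that sign extension of the partial products is trivial, leaving only the subtraction of p7 to demonstrate:

```python
def signed8(x: int) -> int:
    """Interpret an 8-bit pattern as a two's complement value."""
    return x - 256 if x & 0x80 else x

a, b = 0b10110101, 0b01101110    # a is negative as a signed value (-75)
p = [(b if (a >> i) & 1 else 0) for i in range(8)]

# With a signed, the top partial product p7 is subtracted, not added.
P = sum(pi << i for i, pi in enumerate(p[:7])) - (p[7] << 7)
assert P == signed8(a) * b       # -75 * 110 = -8250
```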
The above array multiplier can be modified to supporttwo's complement notationsigned numbers by inverting several of the product terms and inserting a one to the left of the first partial product term:
Where ~p represents the complement (opposite value) of p.
There are many simplifications in the bit array above that are not shown and are not obvious. The sequences of one complemented bit followed by noncomplemented bits are implementing a two's complement trick to avoid sign extension. The sequence of p7 (noncomplemented bit followed by all complemented bits) is because we're subtracting this term so they were all negated to start out with (and a 1 was added in the least significant position). For both types of sequences, the last bit is flipped and an implicit −1 should be added directly below the MSB. When the +1 from the two's complement negation for p7 in bit position 0 (LSB) and all the −1's in bit columns 7 through 14 (where each of the MSBs are located) are added together, they can be simplified to the single 1 that "magically" is floating out to the left. For an explanation and proof of why flipping the MSB saves us the sign extension, see a computer arithmetic book.[6]
Abinary floating-point numbercontains a sign bit, significant bits (known as the significand) and exponent bits (for simplicity, base and combination fields are not considered here). The sign bits of each operand are XOR'd to get the sign of the answer. Then, the two exponents are added to get the exponent of the result. Finally, multiplication of each operand's significand will return the significand of the result. However, if the result of the binary multiplication is higher than the total number of bits for a specific precision (e.g. 32, 64, 128), rounding is required and the exponent is changed appropriately.
The process of multiplication can be split into 3 steps:[7][8]
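The description above (XOR the signs, add the exponents, multiply the significands, then round) can be sketched as a toy model; the fixed significand width and truncation-style rounding are illustrative simplifications, not IEEE 754 semantics:

```python
# Toy floating-point multiply on (sign, significand, exponent) triples.
P_BITS = 8                            # illustrative significand width

def fp_mul(sign_a, sig_a, exp_a, sign_b, sig_b, exp_b):
    sign = sign_a ^ sign_b            # 1. sign bits are XOR'd
    exp = exp_a + exp_b               # 2. exponents are added
    sig = sig_a * sig_b               # 3. significands are multiplied
    while sig >> P_BITS:              # result wider than P_BITS:
        sig >>= 1                     #    "round" (truncate) and
        exp += 1                      #    adjust the exponent
    return sign, sig, exp

print(fp_mul(0, 192, 0, 1, 160, 0))   # (1, 240, 7)
```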
Older multiplier architectures employed a shifter and accumulator to sum each partial product, often one partial product per cycle, trading off speed for die area. Modern multiplier architectures use the (Modified)Baugh–Wooley algorithm,[9][10][11][12]Wallace trees, orDadda multipliersto add the partial products together in a single cycle. The performance of theWallace treeimplementation is sometimes improved bymodifiedBooth encodingone of the two multiplicands, which reduces the number of partial products that must be summed.
For speed, shift-and-add multipliers require a fast adder (something faster than ripple-carry).[13]
A "single cycle" multiplier (or "fast multiplier") is purecombinational logic.
In a fast multiplier,
the partial-product reduction process usually contributes the most to the delay, power, and area of the multiplier.[7]For speed, the "reduce partial product" stages are typically implemented as acarry-save addercomposed of compressors and the "compute final product" step is implemented as a fast adder (something faster than ripple-carry).
Many fast multipliers use full adders as compressors ("3:2 compressors") implemented in staticCMOS.
To achieve better performance in the same area or the same performance in a smaller area, multiplier designs may use higher order compressors such as 7:3 compressors;[8][7]implement the compressors in faster logic (such as transmission gate logic, pass transistor logic,domino logic);[13]connect the compressors in a different pattern; or some combination.
|
https://en.wikipedia.org/wiki/Binary_multiplier
|
Booth's multiplication algorithmis amultiplication algorithmthat multiplies two signedbinarynumbers intwo's complement notation. Thealgorithmwas invented byAndrew Donald Boothin 1950 while doing research oncrystallographyatBirkbeck CollegeinBloomsbury,London.[1]Booth's algorithm is of interest in the study ofcomputer architecture.
Booth's algorithm examines adjacent pairs ofbitsof the 'N'-bit multiplierYin signedtwo's complementrepresentation, including an implicit bit below theleast significant bit,y−1= 0. For each bityi, forirunning from 0 toN− 1, the bitsyiandyi−1are considered. Where these two bits are equal, the product accumulatorPis left unchanged. Whereyi= 0 andyi−1= 1, the multiplicand times 2iis added toP; and whereyi= 1 andyi−1= 0, the multiplicand times 2iis subtracted fromP. The final value ofPis the signed product.[citation needed]
The representations of the multiplicand and product are not specified; typically, these are both also in two's complement representation, like the multiplier, but any number system that supports addition and subtraction will work as well. As stated here, the order of the steps is not determined. Typically, it proceeds fromLSBtoMSB, starting ati= 0; the multiplication by 2iis then typically replaced by incremental shifting of thePaccumulator to the right between steps; low bits can be shifted out, and subsequent additions and subtractions can then be done just on the highestNbits ofP.[2]There are many variations and optimizations on these details.
The algorithm is often described as converting strings of 1s in the multiplier to a high-order +1 and a low-order −1 at the ends of the string. When a string runs through the MSB, there is no high-order +1, and the net effect is interpretation as a negative of the appropriate value.
Booth's algorithm can be implemented by repeatedly adding (with ordinary unsigned binary addition) one of two predetermined valuesAandSto a productP, then performing a rightwardarithmetic shiftonP. Letmandrbe themultiplicandandmultiplier, respectively; and letxandyrepresent the number of bits inmandr.
Find 3 × (−4), withm= 3 andr= −4, andx= 4 andy= 4:
The above-mentioned technique is inadequate when the multiplicand is themost negative numberthat can be represented (e.g. if the multiplicand has 4 bits then this value is −8). This is because an overflow then occurs when computing −m, the negation of the multiplicand, which is needed in order to set S. One possible correction to this problem is to extend A, S, and P by one bit each, while they still represent the same number. That is, while −8 was previously represented in four bits by 1000, it is now represented in 5 bits by 1 1000. This then follows the implementation described above, with modifications in determining the bits of A and S; e.g., the value ofm, originally assigned to the firstxbits of A, will now be extended tox+1 bits and assigned to the firstx+1 bits of A. Below, the improved technique is demonstrated by multiplying −8 by 2 using 4 bits for the multiplicand and the multiplier:
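A sketch of the basic A/S/P procedure described above, without the extra-bit extension for the most negative multiplicand (so it reproduces the 3 × (−4) example, but not −8 × 2):

```python
def booth_multiply(m: int, r: int, x: int, y: int) -> int:
    """Booth's algorithm: multiplicand m (x bits), multiplier r (y bits)."""
    total = x + y + 1                       # width of A, S and P
    mask = (1 << total) - 1
    A = (m % (1 << x)) << (y + 1)           # m in the most significant bits
    S = ((-m) % (1 << x)) << (y + 1)        # -m (two's complement) likewise
    P = (r % (1 << y)) << 1                 # r in the middle, 0 in the LSB
    for _ in range(y):
        last_two = P & 0b11                 # bits y_i, y_{i-1}
        if last_two == 0b01:                # 0 1: add A
            P = (P + A) & mask
        elif last_two == 0b10:              # 1 0: add S
            P = (P + S) & mask
        sign = P >> (total - 1)             # arithmetic right shift on a
        P = (P >> 1) | (sign << (total - 1))    # `total`-bit value
    P >>= 1                                 # drop the implicit y_{-1} bit
    if P >= 1 << (x + y - 1):               # reinterpret as signed
        P -= 1 << (x + y)                   # (x+y)-bit two's complement
    return P

print(booth_multiply(3, -4, 4, 4))          # -12
```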
Consider a positive multiplier consisting of a block of 1s surrounded by 0s. For example, 00111110. The product is given by:M×00111110=M×(25+24+23+22+21)=M×62{\displaystyle M\times {\begin{array}{|r|r|r|r|r|r|r|r|}\hline 0&0&1&1&1&1&1&0\\\hline \end{array}}=M\times (2^{5}+2^{4}+2^{3}+2^{2}+2^{1})=M\times 62}where M is the multiplicand. The number of operations can be reduced to two by rewriting the same asM×010000−10=M×(26−21)=M×62.{\displaystyle M\times {\begin{array}{|r|r|r|r|r|r|r|r|}\hline 0&1&0&0&0&0&-1&0\\\hline \end{array}}=M\times (2^{6}-2^{1})=M\times 62.}
In fact, it can be shown that any sequence of 1s in a binary number can be broken into the difference of two binary numbers:
(…01…1⏞n0…)2≡(…10…0⏞n0…)2−(…00…1⏞n0…)2.{\displaystyle (\ldots 0\overbrace {1\ldots 1} ^{n}0\ldots )_{2}\equiv (\ldots 1\overbrace {0\ldots 0} ^{n}0\ldots )_{2}-(\ldots 0\overbrace {0\ldots 1} ^{n}0\ldots )_{2}.}
Hence, the string of ones in the original multiplier can be handled by simpler operations: adding the multiplicand, shifting the partial product thus formed by the appropriate number of places, and then finally subtracting the multiplicand. This exploits the fact that it is not necessary to do anything but shift while dealing with 0s in a binary multiplier, and is similar to using the mathematical property that 99 = 100 − 1 while multiplying by 99.
This scheme can be extended to any number of blocks of 1s in a multiplier (including the case of a single 1 in a block). Thus,
M×00111010=M×(25+24+23+21)=M×58{\displaystyle M\times {\begin{array}{|r|r|r|r|r|r|r|r|}\hline 0&0&1&1&1&0&1&0\\\hline \end{array}}\ =M\times (2^{5}+2^{4}+2^{3}+2^{1})=M\times 58}M×0100−11−10=M×(26−23+22−21)=M×58.{\displaystyle M\times {\begin{array}{|r|r|r|r|r|r|r|r|}\hline 0&1&0&0&-1&1&-1&0\\\hline \end{array}}=M\times (2^{6}-2^{3}+2^{2}-2^{1})=M\times 58.}
Booth's algorithm follows this old scheme by performing an addition when it encounters the first digit of a block of ones (0 1) and subtraction when it encounters the end of the block (1 0). This works for a negative multiplier as well. When the ones in a multiplier are grouped into long blocks, Booth's algorithm performs fewer additions and subtractions than the normal multiplication algorithm.
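This recoding can be made explicit: digit i of the recoded multiplier is y_{i−1} − y_i (with the implicit y_{−1} = 0), so a block of 1s becomes a +1 above its top end and a −1 at its bottom end. A small sketch:

```python
def booth_digits(y: int, n: int) -> list[int]:
    """Recode an n-bit multiplier y into signed digits in {-1, 0, +1}."""
    bits = [(y >> i) & 1 for i in range(n)]
    digits, prev = [], 0              # prev starts as the implicit y_{-1}
    for b in bits:
        digits.append(prev - b)       # digit_i = y_{i-1} - y_i
        prev = b
    digits.append(prev)               # extra top digit (for unsigned y)
    return digits                     # digit i carries weight 2^i

d = booth_digits(0b00111110, 8)       # 62, the example above
assert sum(di << i for i, di in enumerate(d)) == 62
assert d[1] == -1 and d[6] == 1       # matches M x (2^6 - 2^1)
```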
Intel'sPentiummicroprocessor uses a radix-8 variant of Booth's algorithm in its 64-bit hardware multiplier. Because of the way it implements the radix-8 multiplication, it needs a complex auxiliary circuit to perform the special case of multiplication by 3 in a way that minimizes latency, combining the use ofcarry-lookahead,carry-select, andKogge–Stone addition.[3]
|
https://en.wikipedia.org/wiki/Booth%27s_multiplication_algorithm
|
Incomputing,floating-point arithmetic(FP) isarithmeticon subsets ofreal numbersformed by asignificand(asignedsequence of a fixed number of digits in somebase) multiplied by aninteger powerof that base.
Numbers of this form are calledfloating-point numbers.[1]: 3[2]: 10
For example, the number 2469/200 is a floating-point number in base ten with five digits:2469/200=12.345=12345⏟significand×10⏟base−3⏞exponent{\displaystyle 2469/200=12.345=\!\underbrace {12345} _{\text{significand}}\!\times \!\underbrace {10} _{\text{base}}\!\!\!\!\!\!\!\overbrace {{}^{-3}} ^{\text{exponent}}}However, 7716/625 = 12.3456 is not a floating-point number in base ten with five digits—it needs six digits.
The nearest floating-point number with only five digits is 12.346.
And 1/3 = 0.3333… is not a floating-point number in base ten with any finite number of digits.
In practice, most floating-point systems usebase two, though base ten (decimal floating point) is also common.
Floating-point arithmetic operations, such as addition and division, approximate the corresponding real number arithmetic operations byroundingany result that is not a floating-point number itself to a nearby floating-point number.[1]: 22[2]: 10For example, in a floating-point arithmetic with five base-ten digits, the sum 12.345 + 1.0001 = 13.3451 might be rounded to 13.345.
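The five-digit example above can be reproduced with Python's decimal module as an illustrative stand-in for such an arithmetic:

```python
from decimal import Decimal, Context

# Arithmetic with five significant decimal digits: every operation's
# result is rounded back to five digits.
ctx = Context(prec=5)
s = ctx.add(Decimal("12.345"), Decimal("1.0001"))
print(s)    # 13.345 -- the exact sum 13.3451 is rounded
```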
The termfloating pointrefers to the fact that the number'sradix pointcan "float" anywhere to the left, right, or between thesignificant digitsof the number. This position is indicated by the exponent, so floating point can be considered a form ofscientific notation.
A floating-point system can be used to represent, with a fixed number of digits, numbers of very differentorders of magnitude— such as the number of metersbetween galaxiesorbetween protons in an atom. For this reason, floating-point arithmetic is often used to allow very small and very large real numbers that require fast processing times. The result of thisdynamic rangeis that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers varies with their exponent.[3]
Over the years, a variety of floating-point representations have been used in computers. In 1985, theIEEE 754Standard for Floating-Point Arithmetic was established, and since the 1990s, the most commonly encountered representations are those defined by the IEEE.
The speed of floating-point operations, commonly measured in terms ofFLOPS, is an important characteristic of acomputer system, especially for applications that involve intensive mathematical calculations.
Afloating-point unit(FPU, colloquially a mathcoprocessor) is a part of a computer system specially designed to carry out operations on floating-point numbers.
Anumber representationspecifies some way of encoding a number, usually as a string of digits.
There are several mechanisms by which strings of digits can represent numbers. In standard mathematical notation, the digit string can be of any length, and the location of theradix pointis indicated by placing an explicit"point" character(dot or comma) there. If the radix point is not specified, then the string implicitly represents anintegerand the unstated radix point would be off the right-hand end of the string, next to the least significant digit. Infixed-pointsystems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345.
Inscientific notation, the given number is scaled by apower of 10, so that it lies within a specific range—typically between 1 and 10, with the radix point appearing immediately after the first digit. As a power of ten, the scaling factor is then indicated separately at the end of the number. For example, the orbital period ofJupiter's moonIois152,853.5047seconds, a value that would be represented in standard-form scientific notation as1.528535047×105seconds.
Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of:
To derive the value of the floating-point number, thesignificandis multiplied by thebaseraised to the power of theexponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent—to the right if the exponent is positive or to the left if the exponent is negative.
Using base-10 (the familiardecimalnotation) as an example, the number152,853.5047, which has ten decimal digits of precision, is represented as the significand1,528,535,047together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 105to give1.528535047×105, or152,853.5047. In storing such a number, the base (10) need not be stored, since it will be the same for the entire range of supported numbers, and can thus be inferred.
Symbolically, this final value is:sbp−1×be,{\displaystyle {\frac {s}{b^{\,p-1}}}\times b^{e},}
wheresis the significand (ignoring any implied decimal point),pis the precision (the number of digits in the significand),bis the base (in our example, this is the numberten), andeis the exponent.
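Evaluating this formula for the running example is a quick exact-arithmetic check:

```python
from fractions import Fraction

# s / b^(p-1) * b^e with s = 1528535047, p = 10, b = 10, e = 5
# recovers 152,853.5047 exactly.
s, p, b, e = 1528535047, 10, 10, 5
value = Fraction(s, b**(p - 1)) * b**e
assert value == Fraction(1528535047, 10**4)   # i.e. 152853.5047
```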
Historically, several number bases have been used for representing floating-point numbers, with base two (binary) being the most common, followed by base ten (decimal floating point), and other less common varieties, such as base sixteen (hexadecimal floating point[4][5][nb 3]), base eight (octal floating point[1][5][6][4][nb 4]), base four (quaternary floating point[7][5][nb 5]), base three (balanced ternary floating point[1]) and even base 256[5][nb 6]and base65,536.[8][nb 7]
A floating-point number is arational number, because it can be represented as one integer divided by another; for example1.45×103is (145/100)×1000 or145,000/100. The base determines the fractions that can be represented; for instance, 1/5 cannot be represented exactly as a floating-point number using a binary base, but 1/5 can be represented exactly using a decimal base (0.2, or2×10−1). However, 1/3 cannot be represented exactly by either binary (0.010101...) or decimal (0.333...), but inbase 3, it is trivial (0.1 or 1×3−1). The occasions on which infinite expansions occurdepend on the base and its prime factors.
The way in which the significand (including its sign) and exponent are stored in a computer is implementation-dependent. The common IEEE formats are described in detail later and elsewhere, but as an example, in the binary single-precision (32-bit) floating-point representation,p=24{\displaystyle p=24}, and so the significand is a string of 24bits. For instance, the numberπ's first 33 bits are:110010010000111111011010_101000100.{\displaystyle 11001001\ 00001111\ 1101101{\underline {0}}\ 10100010\ 0.}
In this binary expansion, let us denote the positions from 0 (leftmost bit, or most significant bit) to 32 (rightmost bit). The 24-bit significand will stop at position 23, shown as the underlined bit0above. The next bit, at position 24, is called theround bitorrounding bit. It is used to round the 33-bit approximation to the nearest 24-bit number (there arespecific rules for halfway values, which is not the case here). This bit, which is1in this example, is added to the integer formed by the leftmost 24 bits, yielding:110010010000111111011011_.{\displaystyle 11001001\ 00001111\ 1101101{\underline {1}}.}
When this is stored in memory using the IEEE 754 encoding, this becomes thesignificands. The significand is assumed to have a binary point to the right of the leftmost bit. So, the binary representation of π is calculated from left-to-right as follows:(∑n=0p−1bitn×2−n)×2e=(1×2−0+1×2−1+0×2−2+0×2−3+1×2−4+⋯+1×2−23)×21≈1.57079637×2≈3.1415927{\displaystyle {\begin{aligned}&\left(\sum _{n=0}^{p-1}{\text{bit}}_{n}\times 2^{-n}\right)\times 2^{e}\\={}&\left(1\times 2^{-0}+1\times 2^{-1}+0\times 2^{-2}+0\times 2^{-3}+1\times 2^{-4}+\cdots +1\times 2^{-23}\right)\times 2^{1}\\\approx {}&1.57079637\times 2\\\approx {}&3.1415927\end{aligned}}}
wherepis the precision (24in this example),nis the position of the bit of the significand from the left (starting at0and finishing at23here) andeis the exponent (1in this example).
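The rounding described above can be observed directly by packing π into IEEE 754 single precision and unpacking the stored value:

```python
import math
import struct

# Round-trip math.pi through the binary32 format: the stored value is
# pi rounded to a 24-bit significand.
pi32 = struct.unpack("<f", struct.pack("<f", math.pi))[0]
print(f"{pi32:.7f}")                 # 3.1415927
assert abs(pi32 - math.pi) < 2**-23  # within one unit in the last place
```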
It can be required that the most significant digit of the significand of a non-zero number be non-zero (except when the corresponding exponent would be smaller than the minimum one). This process is callednormalization. For binary formats (which uses only the digits0and1), this non-zero digit is necessarily1. Therefore, it does not need to be represented in memory, allowing the format to have one more bit of precision. This rule is variously called theleading bit convention, theimplicit bit convention, thehidden bit convention,[1]or theassumed bit convention.
The floating-point representation is by far the most common way of representing in computers an approximation to real numbers. However, there are alternatives:
In 1914, the Spanish engineerLeonardo Torres QuevedopublishedEssays on Automatics,[9]where he designed a special-purpose electromechanical calculator based onCharles Babbage'sanalytical engineand described a way to store floating-point numbers in a consistent manner. He stated that numbers will be stored in exponential format asn×10m{\displaystyle n\times 10^{m}}, and offered three rules by which consistent manipulation of floating-point numbers by machines could be implemented. For Torres, "nwill always be the same number ofdigits(e.g. six), the first digit ofnwill be of order of tenths, the second of hundredths, etc, and one will write each quantity in the form:n;m." The format he proposed shows the need for a fixed-sized significand as is presently used for floating-point data, fixing the location of the decimal point in the significand so that each representation was unique, and how to format such numbers by specifying a syntax to be used that could be entered through atypewriter, as was the case of hisElectromechanical Arithmometerin 1920.[10][11][12]
In 1938, Konrad Zuse of Berlin completed the Z1, the first binary, programmable mechanical computer;[13] it uses a 24-bit binary floating-point number representation with a 7-bit signed exponent, a 17-bit significand (including one implicit bit), and a sign bit.[14] The more reliable relay-based Z3, completed in 1941, has representations for both positive and negative infinities; in particular, it implements defined operations with infinity, such as 1/∞ = 0, and it stops on undefined operations, such as 0 × ∞.
Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic that includes±∞{\displaystyle \pm \infty }and NaN representations, anticipating features of the IEEE Standard by four decades.[15]In contrast,von Neumannrecommended against floating-point numbers for the 1951IAS machine, arguing that fixed-point arithmetic is preferable.[15]
The firstcommercialcomputer with floating-point hardware was Zuse'sZ4computer, designed in 1942–1945. In 1946, Bell Laboratories introduced theModel V, which implementeddecimal floating-point numbers.[16]
The Pilot ACE has binary floating-point arithmetic, and it became operational in 1950 at the National Physical Laboratory, UK. Thirty-three were later sold commercially as the English Electric DEUCE. The arithmetic is actually implemented in software, but with a one-megahertz clock rate, the speed of floating-point and fixed-point operations in this machine was initially faster than that of many competing computers.
The mass-producedIBM 704followed in 1954; it introduced the use of abiased exponent. For many decades after that, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "scientific computation" (SC) capability (see alsoExtensions for Scientific Computation(XSC)). It was not until the launch of the Intel i486 in 1989 thatgeneral-purposepersonal computers had floating-point capability in hardware as a standard feature.
The UNIVAC 1100/2200 series, introduced in 1962, supported two floating-point representations: single precision, using 36 bits organized as a 1-bit sign, an 8-bit exponent, and a 27-bit significand; and double precision, using 72 bits organized as a 1-bit sign, an 11-bit exponent, and a 60-bit significand.
TheIBM 7094, also introduced in 1962, supported single-precision and double-precision representations, but with no relation to the UNIVAC's representations. Indeed, in 1964, IBM introducedhexadecimal floating-point representationsin itsSystem/360mainframes; these same representations are still available for use in modernz/Architecturesystems. In 1998, IBM implemented IEEE-compatible binary floating-point arithmetic in its mainframes; in 2005, IBM also added IEEE-compatible decimal floating-point arithmetic.
Initially, computers used many different representations for floating-point numbers. The lack of standardization at the mainframe level was an ongoing problem by the early 1970s for those writing and maintaining higher-level source code; these manufacturer floating-point standards differed in the word sizes, the representations, and the rounding behavior and general accuracy of operations. Floating-point compatibility across multiple computing systems was in desperate need of standardization by the early 1980s, leading to the creation of theIEEE 754standard once the 32-bit (or 64-bit)wordhad become commonplace. This standard was significantly based on a proposal from Intel, which was designing thei8087numerical coprocessor; Motorola, which was designing the68000around the same time, gave significant input as well.
In 1989, mathematician and computer scientistWilliam Kahanwas honored with theTuring Awardfor being the primary architect behind this proposal; he was aided by his student Jerome Coonen and a visiting professor,Harold Stone.[17]
Among the x86 innovations are these:
A floating-point number consists of two fixed-point components, whose range depends exclusively on the number of bits or digits in their representation. Whereas a fixed-point component's range grows linearly with the number of digits it contains, the floating-point range grows linearly with the range of the significand and exponentially with the range of the exponent, which gives the format its exceptionally wide range.
On a typical computer system, a double-precision (64-bit) binary floating-point number has a coefficient of 53 bits (including 1 implied bit), an exponent of 11 bits, and 1 sign bit. Since 2^10 = 1024, the complete range of the positive normal floating-point numbers in this format is from 2^−1022 ≈ 2 × 10^−308 to approximately 2^1024 ≈ 2 × 10^308.
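These parameters can be checked on a typical CPython build, where `float` is IEEE 754 binary64 (an assumption about the platform, not a language guarantee):

```python
import sys

# sys.float_info exposes the binary64 parameters quoted above.
assert sys.float_info.mant_dig == 53       # 53-bit significand (incl. implied bit)
assert sys.float_info.min == 2.0 ** -1022  # smallest positive normal number
print(sys.float_info.max)                  # 1.7976931348623157e+308
```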
The number of normal floating-point numbers in a system (B, P, L, U), where B is the base of the system, P is the precision of the significand (in base B), and [L, U] is the range of the exponent,
is2(B−1)(BP−1)(U−L+1){\displaystyle 2\left(B-1\right)\left(B^{P-1}\right)\left(U-L+1\right)}.
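The count follows from two sign choices, B − 1 choices of non-zero leading digit, B^(P−1) choices of trailing digits, and U − L + 1 exponents. A small sketch evaluating the formula for single precision (B = 2, P = 24, exponents −126 to 127):

```python
def normal_count(B, P, L, U):
    # 2 signs x (B-1) leading digits x B**(P-1) tails x (U-L+1) exponents
    return 2 * (B - 1) * B ** (P - 1) * (U - L + 1)

print(normal_count(2, 24, -126, 127))  # 4261412864
```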
There is a smallest positive normal floating-point number, the underflow level UFL = B^L, which has a 1 as the leading digit and 0 for the remaining digits of the significand, and the smallest possible value for the exponent.
There is a largest floating-point number, the overflow level OFL = B^(U+1) × (1 − B^(−P)), which has B − 1 as the value for each digit of the significand and the largest possible value for the exponent.
In addition, there are representable values strictly between −UFL and UFL. Namely,positive and negative zeros, as well assubnormal numbers.
TheIEEEstandardized the computer representation for binary floating-point numbers inIEEE 754(a.k.a. IEC 60559) in 1985. This first standard is followed by almost all modern machines. It wasrevised in 2008. IBM mainframes supportIBM's own hexadecimal floating point formatand IEEE 754-2008decimal floating pointin addition to the IEEE 754 binary format. TheCray T90series had an IEEE version, but theSV1still uses Cray floating-point format.[citation needed]
The standard provides for many closely related formats, differing in only a few details. Five of these formats are calledbasic formats, and others are termedextended precision formatsandextendable precision format. Three formats are especially widely used in computer hardware and languages:[citation needed]
Increasing the precision of the floating-point representation generally reduces the amount of accumulatedround-off errorcaused by intermediate calculations.[24]Other IEEE formats include:
Any integer with absolute value less than 2^24 can be exactly represented in the single-precision format, and any integer with absolute value less than 2^53 can be exactly represented in the double-precision format. Furthermore, a wide range of powers of 2 times such a number can be represented. These properties are sometimes used for purely integer data, to get 53-bit integers on platforms that have double-precision floats but only 32-bit integers.
The standard specifies some special values, and their representation: positiveinfinity(+∞), negative infinity (−∞), anegative zero(−0) distinct from ordinary ("positive") zero, and "not a number" values (NaNs).
Comparison of floating-point numbers, as defined by the IEEE standard, is a bit different from usual integer comparison. Negative and positive zero compare equal, and every NaN compares unequal to every value, including itself. All finite floating-point numbers are strictly smaller than+∞and strictly greater than−∞, and they are ordered in the same way as their values (in the set of real numbers).
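These comparison rules can be observed directly (assuming IEEE 754 binary64 floats, as on typical CPython builds):

```python
nan = float("nan")
inf = float("inf")

assert nan != nan                    # a NaN compares unequal even to itself
assert not (nan < inf or nan > inf)  # ordered comparisons with NaN are false
assert -0.0 == 0.0                   # negative and positive zero compare equal
assert -inf < -1e308 and 1e308 < inf # finite values lie strictly between the infinities
```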
Floating-point numbers are typically packed into a computer datum as the sign bit, the exponent field, and a field for the significand, from left to right. For the IEEE 754 binary formats (basic and extended) that have extant hardware implementations, they are apportioned as follows: half precision (binary16) has 1 sign bit, 5 exponent bits, and 10 significand bits; single precision (binary32) has 1, 8, and 23; double precision (binary64) has 1, 11, and 52; quadruple precision (binary128) has 1, 15, and 112; and the x86 extended format has 1 sign bit, 15 exponent bits, and a 64-bit significand with no hidden bit.
While the exponent can be positive or negative, in binary formats it is stored as an unsigned number that has a fixed "bias" added to it. Values of all 0s in this field are reserved for the zeros andsubnormal numbers; values of all 1s are reserved for the infinities and NaNs. The exponent range for normal numbers is [−126, 127] for single precision, [−1022, 1023] for double, or [−16382, 16383] for quad. Normal numbers exclude subnormal values, zeros, infinities, and NaNs.
In the IEEE binary interchange formats the leading bit of a normalized significand is not actually stored in the computer datum, since it is always 1. It is called the "hidden" or "implicit" bit. Because of this, the single-precision format actually has a significand with 24 bits of precision, the double-precision format has 53, quad has 113, and octuple has 237.
For example, it was shown above that π, rounded to 24 bits of precision, has sign = 0; e = 1; s = 1.10010010000111111011011. The sum of the exponent bias (127) and the exponent (1) is 128, so this is represented in the single-precision format as 0 10000000 10010010000111111011011 (excluding the hidden bit), which is 40490FDB as a hexadecimal number.
An example of a layout for32-bit floating pointis
and the64-bit ("double")layout is similar.
In addition to the widely usedIEEE 754standard formats, other floating-point formats are used, or have been used, in certain domain-specific areas.
By their nature, all numbers expressed in floating-point format are rational numbers with a terminating expansion in the relevant base (for example, a terminating decimal expansion in base-10, or a terminating binary expansion in base-2). Irrational numbers, such as π or √2, or non-terminating rational numbers, must be approximated. The number of digits (or bits) of precision also limits the set of rational numbers that can be represented exactly. For example, the decimal number 123456789 cannot be exactly represented if only eight decimal digits of precision are available: it would be rounded to one of the two straddling representable values, 12345678 × 10^1 or 12345679 × 10^1. The same applies to non-terminating digits: 0.555... must be rounded to either 0.55555555 or 0.55555556.
When a number is represented in some format (such as a character string) which is not a native floating-point representation supported in a computer implementation, then it will require a conversion before it can be used in that implementation. If the number can be represented exactly in the floating-point format then the conversion is exact. If there is not an exact representation then the conversion requires a choice of which floating-point number to use to represent the original value. The representation chosen will have a different value from the original, and the value thus adjusted is called therounded value.
Whether or not a rational number has a terminating expansion depends on the base. For example, in base-10 the number 1/2 has a terminating expansion (0.5) while the number 1/3 does not (0.333...). In base-2 only rationals with denominators that are powers of 2 (such as 1/2 or 3/16) are terminating. Any rational with a denominator that has a prime factor other than 2 will have an infinite binary expansion. This means that numbers that appear to be short and exact when written in decimal format may need to be approximated when converted to binary floating-point. For example, the decimal number 0.1 is not representable in binary floating-point of any finite precision; the exact binary representation would have a "1100" sequence continuing endlessly:

e = −4; s = 1100110011001100110011001100... (the pattern 1100 repeats endlessly)
where, as previously,sis the significand andeis the exponent.
When rounded to 24 bits this becomes

e = −4; s = 110011001100110011001101
which is actually 0.100000001490116119384765625 in decimal.
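The rounded value can be recovered in Python by round-tripping 0.1 through the binary32 format with `struct`, then converting the result exactly with `Decimal` (assuming IEEE 754 binary64 `float`, as on typical platforms):

```python
import struct
from decimal import Decimal

# Round 0.1 to binary32, then read back the exact decimal value of the result.
f = struct.unpack("f", struct.pack("f", 0.1))[0]
print(Decimal(f))  # 0.100000001490116119384765625
```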
As a further example, the real number π, represented in binary as an infinite sequence of bits is

11.0010010000111111011010101000...

but is

11.0010010000111111011011

when approximated by rounding to a precision of 24 bits.
In binary single-precision floating-point, this is represented ass= 1.10010010000111111011011 withe= 1.
This has a decimal value of

3.1415927410125732421875,

whereas a more accurate approximation of the true value of π is

3.14159265358979323846264338327950...
The result of rounding differs from the true value by about 0.03 parts per million, and matches the decimal representation of π in the first 7 digits. The difference is thediscretization errorand is limited by themachine epsilon.
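The size of this error can be checked with a quick sketch (again assuming binary64 `float` and using a `struct` round-trip to obtain the binary32 value of π):

```python
import math
import struct

pi32 = struct.unpack("f", struct.pack("f", math.pi))[0]
rel_err = abs(pi32 - math.pi) / math.pi
print(pi32)                # 3.1415927410125732
assert rel_err < 0.03e-6   # about 0.03 parts per million, as stated
```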
The arithmetical difference between two consecutive representable floating-point numbers which have the same exponent is called a unit in the last place (ULP). For example, if there is no representable number lying between the representable numbers 1.45A70C22 and 1.45A70C24 (hexadecimal), the ULP is 2 × 16^−8, or 2^−31. For numbers with a base-2 exponent part of 0, i.e. numbers with an absolute value higher than or equal to 1 but lower than 2, an ULP is exactly 2^−23 or about 10^−7 in single precision, and exactly 2^−52 or about 10^−16 in double precision. The mandated behavior of IEEE-compliant hardware is that the result be within one-half of a ULP.
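Python 3.9+ exposes the ULP directly via `math.ulp`, which makes the double-precision figures easy to verify (assuming binary64 `float`):

```python
import math

assert math.ulp(1.0) == 2.0 ** -52  # spacing of doubles in [1, 2)
assert math.ulp(2.0) == 2.0 ** -51  # the ULP doubles with the exponent
assert math.nextafter(1.0, 2.0) - 1.0 == math.ulp(1.0)
```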
Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are digits in the significand. IEEE 754 requirescorrect rounding: that is, the rounded result is as if infinitely precise arithmetic was used to compute the value and then rounded (although in implementation only three extra bits are needed to ensure this). There are several differentroundingschemes (orrounding modes). Historically,truncationwas the typical approach. Since the introduction of IEEE 754, the default method (round to nearest, ties to even, sometimes called Banker's Rounding) is more commonly used. This method rounds the ideal (infinitely precise) result of an arithmetic operation to the nearest representable value, and gives that representation as the result.[nb 8]In the case of a tie, the value that would make the significand end in an even digit is chosen. The IEEE 754 standard requires the same rounding to be applied to all fundamental algebraic operations, including square root and conversions, when there is a numeric (non-NaN) result. It means that the results of IEEE 754 operations are completely determined in all bits of the result, except for the representation of NaNs. ("Library" functions such as cosine and log are not mandated.)
Alternative rounding options are also available. IEEE 754 specifies the following rounding modes: round to nearest, ties to even (the default); round to nearest, ties away from zero; round toward zero (truncation); round toward +∞ (round up); and round toward −∞ (round down).
Alternative modes are useful when the amount of error being introduced must be bounded. Applications that require a bounded error include multi-precision floating-point arithmetic and interval arithmetic.
The alternative rounding modes are also useful in diagnosing numerical instability: if the results of a subroutine vary substantially between rounding to + and − infinity then it is likely numerically unstable and affected by round-off error.[34]
Converting a double-precision binary floating-point number to a decimal string is a common operation, but an algorithm producing results that are both accurate and minimal did not appear in print until 1990, with Steele and White's Dragon4. Some of the improvements since then include:
Many modern language runtimes use Grisu3 with a Dragon4 fallback.[41]
The problem of parsing a decimal string into a binary FP representation is complex, with an accurate parser not appearing until Clinger's 1990 work (implemented in dtoa.c).[35]Further work has likewise progressed in the direction of faster parsing.[42]
For ease of presentation and understanding, decimalradixwith 7 digit precision will be used in the examples, as in the IEEE 754decimal32format. The fundamental principles are the same in anyradixor precision, except that normalization is optional (it does not affect the numerical value of the result). Here,sdenotes the significand andedenotes the exponent.
A simple method to add floating-point numbers is to first represent them with the same exponent. In the example below, the second number (with the smaller exponent) is shifted right by three digits, and one then proceeds with the usual addition method:

123456.7 = 1.234567 × 10^5
101.7654 = 1.017654 × 10^2 = 0.001017654 × 10^5

Hence:

123456.7 + 101.7654 = (1.234567 + 0.001017654) × 10^5 = 1.235584654 × 10^5

This is the true result, the exact sum of the operands. It will be rounded to seven digits and then normalized if necessary. The final result is

e = 5; s = 1.235585 (final sum: 123558.5)
The lowest three digits of the second operand (654) are essentially lost. This is round-off error. In extreme cases, the sum of two non-zero numbers may be equal to one of them: adding e = 5; s = 1.234567 and e = −3; s = 9.876543 gives, after shifting, 1.234567 × 10^5 + 0.00000009876543 × 10^5, whose true sum 1.23456709876543 × 10^5 rounds back to e = 5; s = 1.234567.
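Python's `decimal` module can replay the 7-digit decimal arithmetic used in these examples by setting the context precision to 7:

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 7
    total = Decimal("123456.7") + Decimal("101.7654")
    absorbed = Decimal("1.234567E+5") + Decimal("9.876543E-3")

print(total)     # 123558.5  (the true sum 123558.4654, rounded to 7 digits)
print(absorbed)  # 123456.7  (the smaller operand is absorbed entirely)
```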
In the above conceptual examples it would appear that a large number of extra digits would need to be provided by the adder to ensure correct rounding; however, for binary addition or subtraction using careful implementation techniques only a guard bit, a rounding bit and one extra sticky bit need to be carried beyond the precision of the operands.[43][44]: 218–220
Another problem of loss of significance occurs when approximations to two nearly equal numbers are subtracted. In the following example e = 5; s = 1.234571 and e = 5; s = 1.234567 are approximations to the rationals 123457.1467 and 123456.659.

1.234571 × 10^5 − 1.234567 × 10^5 = 0.000004 × 10^5 = 4.000000 × 10^−1

The floating-point difference is computed exactly because the numbers are close—the Sterbenz lemma guarantees this, even in case of underflow when gradual underflow is supported. Despite this, the difference of the original numbers is e = −1; s = 4.877000, which differs more than 20% from the difference e = −1; s = 4.000000 of the approximations. In extreme cases, all significant digits of precision can be lost.[43][45] This cancellation illustrates the danger in assuming that all of the digits of a computed result are meaningful. Dealing with the consequences of these errors is a topic in numerical analysis; see also Accuracy problems.
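The same cancellation can be replayed with the `decimal` module at 7-digit precision:

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 7
    fa = +Decimal("123457.1467")  # rounds to the approximation 123457.1
    fb = +Decimal("123456.659")   # rounds to the approximation 123456.7
    diff = fa - fb

print(diff)  # 0.4 -- versus 0.4877 for the original numbers
```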
To multiply, the significands are multiplied while the exponents are added, and the result is rounded and normalized.
Similarly, division is accomplished by subtracting the divisor's exponent from the dividend's exponent, and dividing the dividend's significand by the divisor's significand.
There are no cancellation or absorption problems with multiplication or division, though small errors may accumulate as operations are performed in succession.[43]In practice, the way these operations are carried out in digital logic can be quite complex (seeBooth's multiplication algorithmandDivision algorithm).[nb 9]
Literals for floating-point numbers depend on languages. They typically useeorEto denotescientific notation. TheC programming languageand theIEEE 754standard also define ahexadecimal literal syntaxwith a base-2 exponent instead of 10. In languages likeC, when the decimal exponent is omitted, a decimal point is needed to differentiate them from integers. Other languages do not have an integer type (such asJavaScript), or allow overloading of numeric types (such asHaskell). In these cases, digit strings such as123may also be floating-point literals.
Examples of floating-point literals are 99.9, −5000.12, 6.02e23, and the C hexadecimal literal 0x1.fp3, whose value is 1.9375 × 2^3 = 15.5.
Floating-point computation in a computer can run into three kinds of problems: an operation can be mathematically undefined, such as ∞/∞ or division by zero; an operation can be legal in principle but not supported by the specific format, for example calculating the square root of −1 or the inverse sine of 2 (both of which result in complex numbers); or an operation can be legal in principle but its result impossible to represent in the specified format, because the exponent is too large or too small to encode in the exponent field (an overflow or underflow).
Prior to the IEEE standard, such conditions usually caused the program to terminate, or triggered some kind oftrapthat the programmer might be able to catch. How this worked was system-dependent, meaning that floating-point programs were notportable. (The term "exception" as used in IEEE 754 is a general term meaning an exceptional condition, which is not necessarily an error, and is a different usage to that typically defined in programming languages such as a C++ or Java, in which an "exception" is an alternative flow of control, closer to what is termed a "trap" in IEEE 754 terminology.)
Here, the required default method of handling exceptions according to IEEE 754 is discussed (the IEEE 754 optional trapping and other "alternate exception handling" modes are not discussed). Arithmetic exceptions are (by default) required to be recorded in "sticky" status flag bits. That they are "sticky" means that they are not reset by the next (arithmetic) operation, but stay set until explicitly reset. The use of "sticky" flags thus allows for testing of exceptional conditions to be delayed until after a full floating-point expression or subroutine: without them exceptional conditions that could not be otherwise ignored would require explicit testing immediately after every floating-point operation. By default, an operation always returns a result according to specification without interrupting computation. For instance, 1/0 returns +∞, while also setting the divide-by-zero flag bit (this default of ∞ is designed to often return a finite result when used in subsequent operations and so be safely ignored).
The original IEEE 754 standard, however, failed to recommend operations to handle such sets of arithmetic exception flag bits. So while these were implemented in hardware, initially programming language implementations typically did not provide a means to access them (apart from assembler). Over time some programming language standards (e.g.,C99/C11 and Fortran) have been updated to specify methods to access and change status flag bits. The 2008 version of the IEEE 754 standard now specifies a few operations for accessing and handling the arithmetic flag bits. The programming model is based on a single thread of execution and use of them by multiple threads has to be handled by ameansoutside of the standard (e.g.C11specifies that the flags havethread-local storage).
IEEE 754 specifies five arithmetic exceptions that are to be recorded in the status flags ("sticky bits"): inexact, underflow, overflow, divide-by-zero, and invalid operation.
The default return value for each of the exceptions is designed to give the correct result in the majority of cases such that the exceptions can be ignored in the majority of codes.inexactreturns a correctly rounded result, andunderflowreturns a value less than or equal to the smallest positive normal number in magnitude and can almost always be ignored.[46]divide-by-zeroreturns infinity exactly, which will typically then divide a finite number and so give zero, or else will give aninvalidexception subsequently if not, and so can also typically be ignored. For example, the effective resistance of n resistors in parallel (see fig. 1) is given byRtot=1/(1/R1+1/R2+⋯+1/Rn){\displaystyle R_{\text{tot}}=1/(1/R_{1}+1/R_{2}+\cdots +1/R_{n})}. If a short-circuit develops withR1{\displaystyle R_{1}}set to 0,1/R1{\displaystyle 1/R_{1}}will return +infinity which will give a finalRtot{\displaystyle R_{tot}}of 0, as expected[47](see the continued fraction example ofIEEE 754 design rationalefor another example).
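The parallel-resistance behavior can be sketched in Python. Note one caveat: Python itself raises an exception on `1.0/0.0` rather than returning IEEE's +∞, so `math.inf` stands in below for the 1/R1 term of a short-circuited resistor; the 75-ohm second resistor is an arbitrary illustrative value:

```python
import math

inv_r1 = math.inf      # IEEE 1/R1 with R1 = 0 (a short circuit) is +infinity
inv_r2 = 1.0 / 75.0    # a hypothetical 75-ohm resistor in parallel
r_total = 1.0 / (inv_r1 + inv_r2)
assert r_total == 0.0  # infinity propagates to the expected total of 0 ohms
```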
Overflowandinvalidexceptions can typically not be ignored, but do not necessarily represent errors: for example, aroot-findingroutine, as part of its normal operation, may evaluate a passed-in function at values outside of its domain, returning NaN and aninvalidexception flag to be ignored until finding a useful start point.[46]
The fact that floating-point numbers cannot accurately represent all real numbers, and that floating-point operations cannot accurately represent true arithmetic operations, leads to many surprising situations. This is related to the finiteprecisionwith which computers generally represent numbers.
For example, the decimal numbers 0.1 and 0.01 cannot be represented exactly as binary floating-point numbers. In the IEEE 754 binary32 format with its 24-bit significand, the result of attempting to square the approximation to 0.1 is neither 0.01 nor the representable number closest to it. The decimal number 0.1 is represented in binary as e = −4; s = 110011001100110011001101, which is

0.100000001490116119384765625 exactly.

Squaring this number gives

0.010000000298023226097399174250313080847263336181640625 exactly.

Squaring it with rounding to the 24-bit precision gives

0.010000000707805156707763671875.

But the representable number closest to 0.01 is

0.009999999776482582092285156250.
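This can be demonstrated by emulating binary32 arithmetic with `struct` round-trips (assuming binary64 `float`, so the intermediate product below is exact before the final rounding):

```python
import struct

def f32(x):
    """Round a binary64 value to the nearest binary32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

a = f32(0.1)          # 0.100000001490116119384765625
sq = f32(a * a)       # the binary32 square of the approximation
assert sq != 0.01     # not the true square of 0.1...
assert sq != f32(0.01)  # ...and not the binary32 value closest to 0.01 either
```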
Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity, nor will it even overflow in the usual floating-point formats (assuming an accurate implementation of tan). It is simply not possible for standard floating-point hardware to attempt to compute tan(π/2), because π/2 cannot be represented exactly. This computation in C:
will give a result of 16331239353195370.0. In single precision (using thetanffunction), the result will be −22877332.0.
By the same token, an attempted computation of sin(π) will not yield zero. The result will be (approximately) 0.1225 × 10^−15 in double precision, or −0.8742 × 10^−7 in single precision.[nb 10]
While floating-point addition and multiplication are both commutative (a + b = b + a and a × b = b × a), they are not necessarily associative. That is, (a + b) + c is not necessarily equal to a + (b + c). Using 7-digit significand decimal arithmetic, with a = 1234.567, b = 45.67834, c = 0.0004:

(a + b) + c: 1234.567 + 45.67834 = 1280.24534, which rounds to 1280.245; then 1280.245 + 0.0004 = 1280.2454, which rounds to 1280.245.

a + (b + c): 45.67834 + 0.0004 = 45.67874; then 1234.567 + 45.67874 = 1280.24574, which rounds to 1280.246.
They are also not necessarily distributive. That is, (a + b) × c may not be the same as a × c + b × c. With a = 1234.567, b = 1.234567, c = 3.333333:

a × c + b × c: 1234.567 × 3.333333 rounds to 4115.223, and 1.234567 × 3.333333 rounds to 4.115223; their sum 4119.338223 rounds to 4119.338.

(a + b) × c: 1234.567 + 1.234567 = 1235.801567, which rounds to 1235.802; then 1235.802 × 3.333333 rounds to 4119.340.
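The same failure of associativity shows up in binary64 arithmetic with the familiar decimal fractions 0.1, 0.2, 0.3 (assuming IEEE 754 doubles, as on typical platforms):

```python
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left, right)    # 0.6000000000000001 0.6
assert left != right  # associativity fails in binary64 as well
```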
In addition to loss of significance, inability to represent numbers such as π and 0.1 exactly, and other slight inaccuracies, the following phenomena may occur:
Q(h)=f(a+h)−f(a)h.{\displaystyle Q(h)={\frac {f(a+h)-f(a)}{h}}.}
Machine precisionis a quantity that characterizes the accuracy of a floating-point system, and is used inbackward error analysisof floating-point algorithms. It is also known as unit roundoff ormachine epsilon. Usually denotedΕmach, its value depends on the particular rounding being used.
With rounding to zero,Emach=B1−P,{\displaystyle \mathrm {E} _{\text{mach}}=B^{1-P},\,}whereas rounding to nearest,Emach=12B1−P,{\displaystyle \mathrm {E} _{\text{mach}}={\tfrac {1}{2}}B^{1-P},}whereBis the base of the system andPis the precision of the significand (in baseB).
This is important since it bounds therelative errorin representing any non-zero real numberxwithin the normalized range of a floating-point system:|fl(x)−xx|≤Emach.{\displaystyle \left|{\frac {\operatorname {fl} (x)-x}{x}}\right|\leq \mathrm {E} _{\text{mach}}.}
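The bound can be checked exactly for a concrete x, say 1/3, using `fractions` to compare the nearest binary64 value against the true rational (here B = 2, P = 53, so E_mach = 2^−53 with round-to-nearest):

```python
from fractions import Fraction

E_mach = Fraction(1, 2**53)
x = Fraction(1, 3)
fl_x = Fraction(1 / 3)  # the double nearest 1/3, converted to an exact rational
rel_err = abs((fl_x - x) / x)
assert rel_err <= E_mach
```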
Backward error analysis, the theory of which was developed and popularized byJames H. Wilkinson, can be used to establish that an algorithm implementing a numerical function is numerically stable.[52]The basic approach is to show that although the calculated result, due to roundoff errors, will not be exactly correct, it is the exact solution to a nearby problem with slightly perturbed input data. If the perturbation required is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined asbackward stable. Stability is a measure of the sensitivity to rounding errors of a given numerical procedure; by contrast, thecondition numberof a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input and is independent of the implementation used to solve the problem.[53]
As a trivial example, consider a simple expression giving the inner product of (length two) vectorsx{\displaystyle x}andy{\displaystyle y}, thenfl(x⋅y)=fl(fl(x1⋅y1)+fl(x2⋅y2)),wherefl()indicates correctly rounded floating-point arithmetic=fl((x1⋅y1)(1+δ1)+(x2⋅y2)(1+δ2)),whereδn≤Emach,from above=((x1⋅y1)(1+δ1)+(x2⋅y2)(1+δ2))(1+δ3)=(x1⋅y1)(1+δ1)(1+δ3)+(x2⋅y2)(1+δ2)(1+δ3),{\displaystyle {\begin{aligned}\operatorname {fl} (x\cdot y)&=\operatorname {fl} {\big (}\operatorname {fl} (x_{1}\cdot y_{1})+\operatorname {fl} (x_{2}\cdot y_{2}){\big )},&&{\text{ where }}\operatorname {fl} (){\text{ indicates correctly rounded floating-point arithmetic}}\\&=\operatorname {fl} {\big (}(x_{1}\cdot y_{1})(1+\delta _{1})+(x_{2}\cdot y_{2})(1+\delta _{2}){\big )},&&{\text{ where }}\delta _{n}\leq \mathrm {E} _{\text{mach}},{\text{ from above}}\\&={\big (}(x_{1}\cdot y_{1})(1+\delta _{1})+(x_{2}\cdot y_{2})(1+\delta _{2}){\big )}(1+\delta _{3})\\&=(x_{1}\cdot y_{1})(1+\delta _{1})(1+\delta _{3})+(x_{2}\cdot y_{2})(1+\delta _{2})(1+\delta _{3}),\end{aligned}}}and sofl(x⋅y)=x^⋅y^,{\displaystyle \operatorname {fl} (x\cdot y)={\hat {x}}\cdot {\hat {y}},}
where
x^1=x1(1+δ1);x^2=x2(1+δ2);y^1=y1(1+δ3);y^2=y2(1+δ3),{\displaystyle {\begin{aligned}{\hat {x}}_{1}&=x_{1}(1+\delta _{1});&{\hat {x}}_{2}&=x_{2}(1+\delta _{2});\\{\hat {y}}_{1}&=y_{1}(1+\delta _{3});&{\hat {y}}_{2}&=y_{2}(1+\delta _{3}),\\\end{aligned}}}
where
δn≤Emach{\displaystyle \delta _{n}\leq \mathrm {E} _{\text{mach}}}
by definition, which is the sum of two slightly perturbed (on the order of Εmach) input data, and so is backward stable. For more realistic examples innumerical linear algebra, see Higham 2002[54]and other references below.
Although individual arithmetic operations of IEEE 754 are guaranteed accurate to within half aULP, more complicated formulae can suffer from larger errors for a variety of reasons. The loss of accuracy can be substantial if a problem or its data areill-conditioned, meaning that the correct result is hypersensitive to tiny perturbations in its data. However, even functions that are well-conditioned can suffer from large loss of accuracy if an algorithmnumerically unstablefor that data is used: apparently equivalent formulations of expressions in a programming language can differ markedly in their numerical stability. One approach to remove the risk of such loss of accuracy is the design and analysis of numerically stable algorithms, which is an aim of the branch of mathematics known asnumerical analysis. Another approach that can protect against the risk of numerical instabilities is the computation of intermediate (scratch) values in an algorithm at a higher precision than the final result requires,[55]which can remove, or reduce by orders of magnitude,[56]such risk:IEEE 754 quadruple precisionandextended precisionare designed for this purpose when computing at double precision.[57][nb 11]
For example, the following algorithm is a direct implementation to compute the functionA(x) = (x−1) / (exp(x−1) − 1)which is well-conditioned at 1.0,[nb 12]however it can be shown to be numerically unstable and lose up to half the significant digits carried by the arithmetic when computed near 1.0.[58]
If, however, intermediate computations are all performed in extended precision (e.g. by setting line [1] toC99long double), then up to full precision in the final double result can be maintained.[nb 13]Alternatively, a numerical analysis of the algorithm reveals that if the following non-obvious change to line [2] is made:
then the algorithm becomes numerically stable and can compute to full double precision.
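The rearrangement described can be sketched in Python; this is a paraphrase built from the stated definition A(x) = (x − 1) / (exp(x − 1) − 1), not the article's original listing, so the function names and structure are illustrative:

```python
import math

def A_naive(x):
    # Direct implementation; numerically unstable for x near 1.0.
    y = x - 1.0
    z = math.exp(y)
    return 1.0 if z == 1.0 else y / (z - 1.0)

def A_stable(x):
    # Using log(z) in place of y makes the numerator exactly consistent
    # with the rounded z = exp(y), which stabilizes the quotient near 1.0.
    y = x - 1.0
    z = math.exp(y)
    return 1.0 if z == 1.0 else math.log(z) / (z - 1.0)

print(A_stable(1.0))  # 1.0
```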
To maintain the properties of such carefully constructed numerically stable programs, careful handling by thecompileris required. Certain "optimizations" that compilers might make (for example, reordering operations) can work against the goals of well-behaved software. There is some controversy about the failings of compilers and language designs in this area: C99 is an example of a language where such optimizations are carefully specified to maintain numerical precision. See the external references at the bottom of this article.
A detailed treatment of the techniques for writing high-quality floating-point software is beyond the scope of this article, and the reader is referred to,[54][59]and the other references at the bottom of this article. Kahan suggests several rules of thumb that can substantially decrease by orders of magnitude[59]the risk of numerical anomalies, in addition to, or in lieu of, a more careful numerical analysis. These include: as noted above, computing all expressions and intermediate results in the highest precision supported in hardware (a common rule of thumb is to carry twice the precision of the desired result, i.e. compute in double precision for a final single-precision result, or in double extended or quad precision for up to double-precision results[60]); and rounding input data and results to only the precision required and supported by the input data (carrying excess precision in the final result beyond that required and supported by the input data can be misleading, increases storage cost and decreases speed, and the excess bits can affect convergence of numerical procedures:[61]notably, the first form of the iterative example given below converges correctly when using this rule of thumb). Brief descriptions of several additional issues and techniques follow.
As decimal fractions can often not be exactly represented in binary floating-point, such arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales (such as the orbital period of a moon around Saturn or the mass of aproton), and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact.[56][59]An example of the latter case is financial calculations. For this reason, financial software tends not to use a binary floating-point number representation.[62]The "decimal" data type of theC#andPythonprogramming languages, and the decimal formats of theIEEE 754-2008standard, are designed to avoid the problems of binary floating-point representations when applied to human-entered exact decimal values, and make the arithmetic always behave as expected when numbers are printed in decimal.
Expectations from mathematics may not be realized in the field of floating-point computation. For example, it is known that(x+y)(x−y)=x2−y2{\displaystyle (x+y)(x-y)=x^{2}-y^{2}\,}, and thatsin2θ+cos2θ=1{\displaystyle \sin ^{2}{\theta }+\cos ^{2}{\theta }=1\,}, however these facts cannot be relied on when the quantities involved are the result of floating-point computation.
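Both failures stem from the rounding of individual operations; even associativity of addition is lost. A minimal sketch in Python, using the classic 0.1/0.2/0.3 values:

```python
# Rounding breaks algebraic identities: addition is not even associative.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6
print(left == right)  # False
print(a + b == 0.3)   # False: 0.1 + 0.2 rounds to 0.30000000000000004
```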
The use of the equality test (if (x==y) ...) requires care when dealing with floating-point numbers. Even simple expressions like0.6/0.2-3==0will, on most computers, fail to be true[63](in IEEE 754 double precision, for example,0.6/0.2 - 3is approximately equal to−4.44089209850063×10−16). Consequently, such tests are sometimes replaced with "fuzzy" comparisons (if (abs(x-y) < epsilon) ..., where epsilon is sufficiently small and tailored to the application, such as 1.0E−13). The wisdom of doing this varies greatly, and can require numerical analysis to bound epsilon.[54]Comparisons of values derived from the primary data representation should be performed in a wider, extended precision to minimize the risk of such inconsistencies due to round-off errors.[59]It is often better to organize the code in such a way that such tests are unnecessary. For example, incomputational geometry, exact tests of whether a point lies off or on a line or plane defined by other points can be performed using adaptive precision or exact arithmetic methods.[64]
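The effect and the fuzzy-comparison workaround can be sketched as follows (the `nearly_equal` helper and its tolerance are illustrative, not a prescribed API):

```python
import math

x = 0.6 / 0.2     # mathematically 3, but rounds to 2.9999999999999996
print(x == 3.0)   # False
print(x - 3.0)    # about -4.44e-16

# A "fuzzy" comparison with an application-specific absolute tolerance:
def nearly_equal(a, b, eps=1e-13):
    return abs(a - b) < eps

print(nearly_equal(x, 3.0))  # True
print(math.isclose(x, 3.0))  # True: stdlib relative-tolerance variant
```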
Small errors in floating-point arithmetic can grow when mathematical algorithms perform operations an enormous number of times. A few examples arematrix inversion,eigenvectorcomputation, and differential equation solving. These algorithms must be very carefully designed, using numerical approaches such asiterative refinement, if they are to work well.[65]
Summation of a vector of floating-point values is a basic algorithm inscientific computing, and so an awareness of when loss of significance can occur is essential. For example, when adding a very large number of numbers, the individual addends become very small compared with the running sum, and their low-order digits are effectively lost at each addition. Suppose, for example, that one needs to add many numbers, all approximately equal to 3. After 1000 of them have been added, the running sum is about 3000; since seven-digit arithmetic can then retain only the first four digits of each addend, the low 3 digits of the addends are lost, and they are not regained. TheKahan summation algorithmmay be used to reduce the errors.[54]
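A sketch of the Kahan compensated-summation idea, with an extreme input on which naive left-to-right summation loses every small addend:

```python
def kahan_sum(values):
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in values:
        y = x - c            # re-inject the error from the previous step
        t = total + y        # low-order bits of y may be lost here...
        c = (t - total) - y  # ...but are recovered algebraically into c
        total = t
    return total

data = [1e16] + [1.0] * 1000
print(sum(data))        # 1e+16 -- each +1.0 is rounded away entirely
print(kahan_sum(data))  # 1.0000000000001e+16 -- the thousand 1.0s survive
```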
Round-off error can affect the convergence and accuracy of iterative numerical procedures. As an example,Archimedesapproximated π by calculating the perimeters of polygons inscribing and circumscribing a circle, starting with hexagons, and successively doubling the number of sides. As noted above, computations may be rearranged in a way that is mathematically equivalent but less prone to error (numerical analysis). Two forms of the recurrence formula for the circumscribed polygon areti+1=ti2+1−1ti{\displaystyle t_{i+1}={\frac {{\sqrt {t_{i}^{2}+1}}-1}{t_{i}}}}(first form) andti+1=titi2+1+1{\displaystyle t_{i+1}={\frac {t_{i}}{{\sqrt {t_{i}^{2}+1}}+1}}}(second form), starting fromt0=tan⁡(π/6)=1/3{\displaystyle t_{0}=\tan(\pi /6)=1/{\sqrt {3}}}; the half-perimeter afteri{\displaystyle i}doublings is6⋅2i⋅ti{\displaystyle 6\cdot 2^{i}\cdot t_{i}}, which approaches π.
Here is a computation using IEEE "double" (a significand with 53 bits of precision) arithmetic:
While the two forms of the recurrence formula are clearly mathematically equivalent,[nb 14]the first subtracts 1 from a number extremely close to 1, leading to an increasingly problematic loss ofsignificant digits. As the recurrence is applied repeatedly, the accuracy improves at first, but then it deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision.
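The behaviour can be reproduced in IEEE double precision; the sketch below assumes the standard two forms of this recurrence, t(i+1) = (√(t(i)² + 1) − 1)/t(i) and t(i+1) = t(i)/(√(t(i)² + 1) + 1), starting from t(0) = tan 30° = 1/√3 for a circumscribed hexagon:

```python
import math

def pi_bad(iterations):
    # First form: subtracts 1 from a number extremely close to 1,
    # causing catastrophic cancellation as t shrinks.
    t = 1.0 / math.sqrt(3.0)  # tan(30 degrees), circumscribed hexagon
    sides = 6
    for _ in range(iterations):
        t = (math.sqrt(t * t + 1.0) - 1.0) / t
        sides *= 2
    return sides * t          # half-perimeter, approximating pi

def pi_good(iterations):
    # Second form: algebraically identical, but with no cancellation.
    t = 1.0 / math.sqrt(3.0)
    sides = 6
    for _ in range(iterations):
        t = t / (math.sqrt(t * t + 1.0) + 1.0)
        sides *= 2
    return sides * t

print(pi_bad(20), pi_good(20), math.pi)
```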
The aforementioned lack ofassociativityof floating-point operations in general means thatcompilerscannot as effectively reorder arithmetic expressions as they could with integer and fixed-point arithmetic, presenting a roadblock in optimizations such ascommon subexpression eliminationand auto-vectorization.[66]The "fast math" option on many compilers (ICC, GCC, Clang, MSVC...) turns on reassociation along with unsafe assumptions such as a lack of NaN and infinite numbers in IEEE 754. Some compilers also offer more granular options to only turn on reassociation. In either case, the programmer is exposed to many of the precision pitfalls mentioned above for the portion of the program using "fast" math.[67]
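A one-triple sketch of why reassociation is unsafe: the two groupings of the same sum, which a reassociating compiler would treat as interchangeable, give different answers:

```python
a, b, c = 1e16, -1e16, 1.0

print((a + b) + c)  # 1.0 -- the large terms cancel first
print(a + (b + c))  # 0.0 -- the 1.0 is absorbed into -1e16 and lost
```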
In some compilers (GCC and Clang), turning on "fast" math may cause the program todisable subnormal floatsat startup, affecting the floating-point behavior of not only the generated code, but also any program using such code as alibrary.[68]
In mostFortrancompilers, as allowed by the ISO/IEC 1539-1:2004 Fortran standard, reassociation is the default, with breakage largely prevented by the "protect parens" setting (also on by default). This setting stops the compiler from reassociating beyond the boundaries of parentheses.[69]Intel Fortran Compileris a notable outlier.[70]
A common problem in "fast" math is that subexpressions may not be optimized identically from place to place, leading to unexpected differences. One interpretation of the issue is that "fast" math as implemented currently has poorly defined semantics. One attempt at formalizing "fast" math optimizations is seen inIcing, a verified compiler.[71]
|
https://en.wikipedia.org/wiki/Floating-point_arithmetic
|
Incomputing, especiallydigital signal processing, themultiply–accumulate(MAC) ormultiply–add(MAD) operation is a common step that computes the product of two numbers and adds that product to anaccumulator. The hardware unit that performs the operation is known as amultiplier–accumulator(MAC unit); the operation itself is also often called a MAC or a MAD operation. The MAC operation modifies an accumulatora:a←a+(b×c){\displaystyle a\gets a+(b\times c)}
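In software, the MAC step is simply a ← a + b × c applied in a loop; a dot product, for example, is a chain of MACs (a sketch; the `dot` name is illustrative):

```python
def dot(xs, ys):
    acc = 0.0
    for b, c in zip(xs, ys):
        acc = acc + b * c  # one multiply-accumulate step per element
    return acc

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```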
When done withfloating-pointnumbers, it might be performed with tworoundings(typical in manyDSPs), or with a single rounding. When performed with a single rounding, it is called afused multiply–add(FMA) orfused multiply–accumulate(FMAC).
Modern computers may contain a dedicated MAC, consisting of a multiplier implemented incombinational logicfollowed by anadderand an accumulator register that stores the result. The output of the register is fed back to one input of the adder, so that on each clock cycle, the output of the multiplier is added to the register. Combinational multipliers require a large amount of logic, but can compute a product much more quickly than themethod of shifting and addingtypical of earlier computers.Percy Ludgatewas the first to conceive a MAC in his Analytical Machine of 1909,[1]and the first to exploit a MAC for division (using multiplication seeded by reciprocal, via the convergent series for(1+x)−1{\displaystyle (1+x)^{-1}}). The first modern processors to be equipped with MAC units weredigital signal processors, but the technique is now also common in general-purpose processors.[2][3][4][5]
When done withintegers, the operation is typically exact (computedmodulosomepower of two). However,floating-pointnumbers have only a certain amount of mathematicalprecision. That is, digital floating-point arithmetic is generally notassociativeordistributive. (SeeFloating-point arithmetic § Accuracy problems.)
Therefore, it makes a difference to the result whether the multiply–add is performed with two roundings, or in one operation with a single rounding (a fused multiply–add).IEEE 754-2008specifies that it must be performed with one rounding, yielding a more accurate result.[6]
Afused multiply–add(FMAorfmadd)[7]is a floating-point multiply–add operation performed in one step (fused operation), with a single rounding. That is, where an unfused multiply–add would compute the productb×c, round it toNsignificant bits, add the result toa, and round back toNsignificant bits, a fused multiply–add would compute the entire expressiona+ (b×c)to its full precision before rounding the final result down toNsignificant bits.
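The difference between one rounding and two can be reproduced exactly in software by carrying the product in rational arithmetic and rounding once at the end (a sketch; `fma_emulated` is an illustrative helper, not a standard API):

```python
from fractions import Fraction

def fma_emulated(a, b, c):
    # Round a + b*c only once, like a hardware fused multiply-add:
    # Fraction arithmetic is exact, and float() performs the single rounding.
    return float(Fraction(a) + Fraction(b) * Fraction(c))

a = -1.0
b = c = 1.0 + 2.0**-27         # b*c = 1 + 2^-26 + 2^-54 exactly

unfused = (b * c) + a          # two roundings: the 2^-54 term is lost
fused = fma_emulated(a, b, c)  # one rounding: the 2^-54 term survives

print(unfused == 2.0**-26)           # True
print(fused == 2.0**-26 + 2.0**-54)  # True
```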
A fast FMA can speed up and improve the accuracy of many computations that involve the accumulation of products, such as dot products, matrix multiplication, polynomial evaluation, andNewton's methodfor evaluating functions.
Fused multiply–add can usually be relied on to give more accurate results. However,William Kahanhas pointed out that it can give problems if used unthinkingly.[8]Ifx2−y2is evaluated as((x×x) −y×y)(following Kahan's suggested notation in which redundant parentheses direct the compiler to round the(x×x)term first) using fused multiply–add, then the result may be negative even whenx=ydue to the first multiplication discarding low significance bits. This could then lead to an error if, for instance, the square root of the result is then evaluated.
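Kahan's caveat can be reproduced by emulating the fused operation with exact rational arithmetic (a sketch; the emulation is illustrative, and the sign of the residual depends on how x·x happens to round):

```python
from fractions import Fraction

x = y = 0.1
unfused = (x * x) - (y * y)  # exactly 0.0: both products round identically

# Fused x*x - fl(y*y): the product x*x is kept exact, so subtracting the
# *rounded* y*y leaves a tiny nonzero residual instead of 0.
fused = float(Fraction(x) * Fraction(x) - Fraction(y * y))

print(unfused)                           # 0.0
print(fused != 0.0, abs(fused) < 1e-17)  # True True
```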
When implemented inside amicroprocessor, an FMA can be faster than a multiply operation followed by an add. However, standard industrial implementations based on the original IBM RS/6000 design require a 2N-bit adder to compute the sum properly.[9]
Another benefit of including this instruction is that it allows an efficient software implementation ofdivision(seedivision algorithm) andsquare root(seemethods of computing square roots) operations, thus eliminating the need for dedicated hardware for those operations.[10]
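A sketch of why FMA helps division: the residual 1 − d·x can be formed without cancellation error, so a Newton–Raphson reciprocal iteration converges to full precision (the fused step is emulated here with rational arithmetic; this is illustrative, not a production algorithm):

```python
from fractions import Fraction

def fma(a, b, c):
    # emulate fused a + b*c with a single final rounding
    return float(Fraction(a) + Fraction(b) * Fraction(c))

def reciprocal(d, x0, steps=5):
    # Newton-Raphson for 1/d: x <- x + x*(1 - d*x), quadratic convergence
    x = x0
    for _ in range(steps):
        r = fma(1.0, -d, x * 0 + x * d * 0) if False else fma(1.0, -d, x)  # r = 1 - d*x
        x = fma(x, x, r)  # x = x + x*r
    return x

approx = reciprocal(3.0, 0.3)
print(abs(approx * 3.0 - 1.0) < 1e-15)  # True
```

Wait — simplified, the residual line is just `r = fma(1.0, -d, x)`, i.e. 1 + (−d)·x.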
Some machines combine multiple fused multiply–add operations into a single step, e.g. performing a four-element dot-product on two 128-bitSIMDregisters,a0×b0 + a1×b1 + a2×b2 + a3×b3, with single-cycle throughput.
The FMA operation is included inIEEE 754-2008.
The1999 standardof theC programming languagesupports the FMA operation through thefma()standard math library function and the automatic transformation of a multiplication followed by an addition (contraction of floating-point expressions), which can be explicitly enabled or disabled with standard pragmas (#pragma STDC FP_CONTRACT). TheGCCandClangC compilers do such transformations by default for processor architectures that support FMA instructions. With GCC, which does not support the aforementioned pragma,[11]this can be globally controlled by the-ffp-contractcommand line option.[12]
The fused multiply–add operation was introduced as "multiply–add fused" in the IBMPOWER1(1990) processor,[13]and has since been added to numerous other processors.
|
https://en.wikipedia.org/wiki/Multiply%E2%80%93accumulate_operation
|
|
https://en.wikipedia.org/wiki/Fused_multiply%E2%80%93add
|
AWallace multiplieris ahardwareimplementation of abinary multiplier, a digital circuit that multiplies two integers. It uses a selection of full and halfadders(theWallace treeorWallace reduction) to sum partial products in stages until two numbers are left. Wallace multipliers reduce as much as possible on each layer, whereasDadda multiplierstry to minimize the required number of gates by postponing the reduction to the upper layers.[1]
Wallace multipliers were devised by the Australian computer scientistChris Wallacein 1964.[2]
The Wallace tree has three steps:
Compared to naively adding partial products with regular adders, the benefit of the Wallace tree is its faster speed. It hasO(logn){\displaystyle O(\log n)}reduction layers, but each layer has onlyO(1){\displaystyle O(1)}propagation delay. A naive addition of partial products would requireO(log2n){\displaystyle O(\log ^{2}n)}time.
As making the partial products isO(1){\displaystyle O(1)}and the final addition isO(logn){\displaystyle O(\log n)}, the total multiplication isO(logn){\displaystyle O(\log n)}, not much slower than addition. From acomplexity theoreticperspective, the Wallace tree algorithm puts multiplication in the classNC1.
The downside of the Wallace tree, compared to naive addition of partial products, is its much higher gate count.
These computations only considergate delaysand don't deal with wire delays, which can also be very substantial.
The Wallace tree can also be represented by a tree of 3/2 or 4/2 adders.
It is sometimes combined withBooth encoding.[4][5]
The Wallace tree is a variant oflong multiplication. The first step is to multiply each digit (each bit) of one factor by each digit of the other. Each of these partial products has weight equal to the product of the weights of its factors. The final product is calculated by the weighted sum of all these partial products.
The first step, as said above, is to multiply each bit of one number by each bit of the other, which is accomplished as a simple AND gate, resulting inn2{\displaystyle n^{2}}bits; the partial product of bitsam{\displaystyle a_{m}}bybn{\displaystyle b_{n}}has weight2(m+n){\displaystyle 2^{(m+n)}}
In the second step, the resulting bits are reduced to two numbers; this is accomplished as follows:
As long as there are three or more wires with the same weight, add a further layer: take any three wires with the same weight and input them into afull adder, producing an output wire of the same weight and an output wire of the next higher weight; if two wires of the same weight remain, input them into ahalf adder; if just one wire remains, connect it to the next layer.
In the third and final step, the two resulting numbers are fed to an adder, obtaining the final product.
Forn=4{\displaystyle n=4}, multiplyinga3a2a1a0{\displaystyle a_{3}a_{2}a_{1}a_{0}}byb3b2b1b0{\displaystyle b_{3}b_{2}b_{1}b_{0}}produces sixteen partial products, which are reduced layer by layer as described above before the final addition.
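The whole scheme can be simulated in software, with wires modeled as 0/1 integers grouped by weight (a sketch; `wallace_multiply` is an illustrative name):

```python
from collections import defaultdict

def wallace_multiply(a, b, n=4):
    # Step 1: partial products; bit a_i AND b_j has weight 2^(i+j).
    cols = defaultdict(list)
    for i in range(n):
        for j in range(n):
            cols[i + j].append((a >> i) & (b >> j) & 1)
    # Step 2: reduction layers, while any weight holds three or more wires.
    while any(len(wires) > 2 for wires in cols.values()):
        nxt = defaultdict(list)
        for w in sorted(cols):
            wires = list(cols[w])
            while len(wires) >= 3:   # full adder: 3 wires -> sum + carry
                x, y, z = wires.pop(), wires.pop(), wires.pop()
                nxt[w].append(x ^ y ^ z)
                nxt[w + 1].append((x & y) | (x & z) | (y & z))
            if len(wires) == 2:      # half adder: 2 wires -> sum + carry
                x, y = wires.pop(), wires.pop()
                nxt[w].append(x ^ y)
                nxt[w + 1].append(x & y)
            nxt[w].extend(wires)     # a lone wire passes through
        cols = nxt
    # Step 3: final carry-propagate addition of the two remaining rows.
    return sum(bit << w for w, wires in cols.items() for bit in wires)

print(wallace_multiply(13, 11))  # 143
```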
|
https://en.wikipedia.org/wiki/Wallace_tree
|
Inmathematics, thefactorialof a non-negativeintegern{\displaystyle n},denotedbyn!{\displaystyle n!},is theproductof all positive integers less than or equalton{\displaystyle n}.The factorialofn{\displaystyle n}also equals the product ofn{\displaystyle n}with the next smaller factorial:n!=n×(n−1)×(n−2)×(n−3)×⋯×3×2×1=n×(n−1)!{\displaystyle {\begin{aligned}n!&=n\times (n-1)\times (n-2)\times (n-3)\times \cdots \times 3\times 2\times 1\\&=n\times (n-1)!\\\end{aligned}}}For example,5!=5×4!=5×4×3×2×1=120.{\displaystyle 5!=5\times 4!=5\times 4\times 3\times 2\times 1=120.}The value of 0! is 1, according to the convention for anempty product.[1]
Factorials have been discovered in several ancient cultures, notably inIndian mathematicsin the canonical works ofJain literature, and by Jewish mystics in the Talmudic bookSefer Yetzirah. The factorial operation is encountered in many areas of mathematics, notably incombinatorics, where its most basic use counts the possible distinctsequences– thepermutations– ofn{\displaystyle n}distinct objects: there aren!{\displaystyle n!}of them.Inmathematical analysis, factorials are used inpower seriesfor theexponential functionand other functions, and they also have applications inalgebra,number theory,probability theory, andcomputer science.
Much of the mathematics of the factorial function was developed beginning in the late 18th and early 19th centuries.Stirling's approximationprovides an accurate approximation to the factorial of large numbers, showing that it grows more quickly thanexponential growth.Legendre's formuladescribes the exponents of the prime numbers in aprime factorizationof the factorials, and can be used to count the trailing zeros of the factorials.Daniel BernoulliandLeonhard Eulerinterpolatedthe factorial function to a continuous function ofcomplex numbers, except at the negative integers, the (offset)gamma function.
Many other notable functions and number sequences are closely related to the factorials, including thebinomial coefficients,double factorials,falling factorials,primorials, andsubfactorials. Implementations of the factorial function are commonly used as an example of differentcomputer programmingstyles, and are included inscientific calculatorsand scientific computing software libraries. Although directly computing large factorials using the product formula or recurrence is not efficient, faster algorithms are known, matching to within a constant factor the time for fastmultiplication algorithmsfor numbers with the same number of digits.
The concept of factorials has arisen independently in many cultures.
From the late 15th century onward, factorials became the subject of study by Western mathematicians. In a 1494 treatise, Italian mathematicianLuca Paciolicalculated factorials up to 11!, in connection with a problem of dining table arrangements.[12]Christopher Claviusdiscussed factorials in a 1603 commentary on the work ofJohannes de Sacrobosco, and in the 1640s, French polymathMarin Mersennepublished large (but not entirely correct) tables of factorials, up to 64!, based on the work of Clavius.[13]Thepower seriesfor theexponential function, with the reciprocals of factorials for its coefficients, was first formulated in 1676 byIsaac Newtonin a letter toGottfried Wilhelm Leibniz.[14]Other important works of early European mathematics on factorials include extensive coverage in a 1685 treatise byJohn Wallis, a study of their approximate values for large values ofn{\displaystyle n}byAbraham de Moivrein 1721, a 1729 letter fromJames Stirlingto de Moivre stating what became known asStirling's approximation, and work at the same time byDaniel BernoulliandLeonhard Eulerformulating the continuous extension of the factorial function to thegamma function.[15]Adrien-Marie LegendreincludedLegendre's formula, describing the exponents in thefactorizationof factorials intoprime powers, in an 1808 text onnumber theory.[16]
The notationn!{\displaystyle n!}for factorials was introduced by the French mathematicianChristian Krampin 1808.[17]Many other notations have also been used. Another later notation|n_{\displaystyle \vert \!{\underline {\,n}}}, in which the argument of the factorial was half-enclosed by the left and bottom sides of a box, was popular for some time in Britain and America but fell out of use, perhaps because it is difficult to typeset.[17]The word "factorial" (originally French:factorielle) was first used in 1800 byLouis François Antoine Arbogast,[18]in the first work onFaà di Bruno's formula,[19]but referring to a more general concept of products ofarithmetic progressions. The "factors" that this name refers to are the terms of the product formula for the factorial.[20]
The factorial function of a positive integern{\displaystyle n}is defined by the product of all positive integers not greater thann{\displaystyle n}[1]n!=1⋅2⋅3⋯(n−2)⋅(n−1)⋅n.{\displaystyle n!=1\cdot 2\cdot 3\cdots (n-2)\cdot (n-1)\cdot n.}This may be written more concisely inproduct notationas[1]n!=∏i=1ni.{\displaystyle n!=\prod _{i=1}^{n}i.}
If this product formula is changed to keep all but the last term, it would define a product of the same form, for a smaller factorial. This leads to arecurrence relation, according to which each value of the factorial function can be obtained by multiplying the previous valuebyn{\displaystyle n}:[21]n!=n⋅(n−1)!.{\displaystyle n!=n\cdot (n-1)!.}For example,5!=5⋅4!=5⋅24=120{\displaystyle 5!=5\cdot 4!=5\cdot 24=120}.
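The recurrence translates directly into code (a sketch):

```python
def factorial(n):
    # recurrence n! = n * (n-1)!, with base case 0! = 1
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))  # 120
```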
The factorialof0{\displaystyle 0}is1{\displaystyle 1},or in symbols,0!=1{\displaystyle 0!=1}.There are several motivations for this definition, one of which is the convention for anempty productnoted above.
The earliest uses of the factorial function involve countingpermutations: there aren!{\displaystyle n!}different ways of arrangingn{\displaystyle n}distinct objects into a sequence.[26]Factorials appear more broadly in many formulas incombinatorics, to account for different orderings of objects. For instance thebinomial coefficients(nk){\displaystyle {\tbinom {n}{k}}}count thek{\displaystyle k}-elementcombinations(subsets ofk{\displaystyle k}elements)from a set withn{\displaystyle n}elements,and can be computed from factorials using the formula[27](nk)=n!k!(n−k)!.{\displaystyle {\binom {n}{k}}={\frac {n!}{k!(n-k)!}}.}TheStirling numbers of the first kindsum to the factorials, and count the permutationsofn{\displaystyle n}grouped into subsets with the same numbers of cycles.[28]Another combinatorial application is in countingderangements, permutations that do not leave any element in its original position; the number of derangements ofn{\displaystyle n}items is thenearest integerton!/e{\displaystyle n!/e}.[29]
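The binomial-coefficient formula can be checked directly from factorials (a sketch; integer division is exact here because the binomial coefficient is an integer):

```python
from math import factorial

def binomial(n, k):
    # C(n, k) = n! / (k! * (n-k)!)
    return factorial(n) // (factorial(k) * factorial(n - k))

print(binomial(5, 2))   # 10
print(binomial(10, 3))  # 120
```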
Inalgebra, the factorials arise through thebinomial theorem, which uses binomial coefficients to expand powers of sums.[30]They also occur in the coefficients used to relate certain families of polynomials to each other, for instance inNewton's identitiesforsymmetric polynomials.[31]Their use in counting permutations can also be restated algebraically: the factorials are theordersof finitesymmetric groups.[32]Incalculus, factorials occur inFaà di Bruno's formulafor chaining higher derivatives.[19]Inmathematical analysis, factorials frequently appear in the denominators ofpower series, most notably in the series for theexponential function,[14]ex=1+x1+x22+x36+⋯=∑i=0∞xii!,{\displaystyle e^{x}=1+{\frac {x}{1}}+{\frac {x^{2}}{2}}+{\frac {x^{3}}{6}}+\cdots =\sum _{i=0}^{\infty }{\frac {x^{i}}{i!}},}and in the coefficients of otherTaylor series(in particular those of thetrigonometricandhyperbolic functions), where they cancel factors ofn!{\displaystyle n!}coming from then{\displaystyle n}th derivativeofxn{\displaystyle x^{n}}.[33]This usage of factorials in power series connects back toanalytic combinatoricsthrough theexponential generating function, which for acombinatorial classwithni{\displaystyle n_{i}}elements ofsizei{\displaystyle i}is defined as the power series[34]∑i=0∞xinii!.{\displaystyle \sum _{i=0}^{\infty }{\frac {x^{i}n_{i}}{i!}}.}
Innumber theory, the most salient property of factorials is thedivisibilityofn!{\displaystyle n!}by all positive integers upton{\displaystyle n},described more precisely for prime factors byLegendre's formula. It follows that arbitrarily largeprime numberscan be found as the prime factors of the numbersn!±1{\displaystyle n!\pm 1}, leading to a proof ofEuclid's theoremthat the number of primes is infinite.[35]Whenn!±1{\displaystyle n!\pm 1}is itself prime it is called afactorial prime;[36]relatedly,Brocard's problem, also posed bySrinivasa Ramanujan, concerns the existence ofsquare numbersof the formn!+1{\displaystyle n!+1}.[37]In contrast, the numbersn!+2,n!+3,…n!+n{\displaystyle n!+2,n!+3,\dots n!+n}must all be composite, proving the existence of arbitrarily largeprime gaps.[38]An elementaryproof of Bertrand's postulateon the existence of a prime in any interval of theform[n,2n]{\displaystyle [n,2n]},one of the first results ofPaul Erdős, was based on the divisibility properties of factorials.[39][40]Thefactorial number systemis amixed radixnotation for numbers in which the place values of each digit are factorials.[41]
Factorials are used extensively inprobability theory, for instance in thePoisson distribution[42]and in the probabilities ofrandom permutations.[43]Incomputer science, beyond appearing in the analysis ofbrute-force searchesover permutations,[44]factorials arise in thelower boundoflog2n!=nlog2n−O(n){\displaystyle \log _{2}n!=n\log _{2}n-O(n)}on the number of comparisons needed tocomparison sorta set ofn{\displaystyle n}items,[45]and in the analysis of chainedhash tables, where the distribution of keys per cell can be accurately approximated by a Poisson distribution.[46]Moreover, factorials naturally appear in formulae fromquantumandstatistical physics, where one often considers all the possible permutations of a set of particles. Instatistical mechanics, calculations ofentropysuch asBoltzmann's entropy formulaor theSackur–Tetrode equationmust correct the count ofmicrostatesby dividing by the factorials of the numbers of each type ofindistinguishable particleto avoid theGibbs paradox. Quantum physics provides the underlying reason for why these corrections are necessary.[47]
As a functionofn{\displaystyle n},the factorial has faster thanexponential growth, but grows more slowly than adouble exponential function.[48]Its growth rate is similartonn{\displaystyle n^{n}},but slower by an exponential factor. One way of approaching this result is by taking thenatural logarithmof the factorial, which turns its product formula into a sum, and then estimating the sum by an integral:lnn!=∑x=1nlnx≈∫1nlnxdx=nlnn−n+1.{\displaystyle \ln n!=\sum _{x=1}^{n}\ln x\approx \int _{1}^{n}\ln x\,dx=n\ln n-n+1.}Exponentiating the result (and ignoring the negligible+1{\displaystyle +1}term) approximatesn!{\displaystyle n!}as(n/e)n{\displaystyle (n/e)^{n}}.[49]More carefully bounding the sum both above and below by an integral, using thetrapezoid rule, shows that this estimate needs a correction factor proportionalton{\displaystyle {\sqrt {n}}}.The constant of proportionality for this correction can be found from theWallis product, which expressesπ{\displaystyle \pi }as a limiting ratio of factorials and powers of two. The result of these corrections isStirling's approximation:[50]n!∼2πn(ne)n.{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\,.}Here, the∼{\displaystyle \sim }symbol means that, asn{\displaystyle n}goes to infinity, the ratio between the left and right sides approaches one in thelimit.
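Stirling's approximation is easy to check numerically; its relative error shrinks roughly like 1/(12n) (a sketch):

```python
import math

def stirling(n):
    # sqrt(2*pi*n) * (n/e)^n, the leading term of Stirling's approximation
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20):
    print(n, stirling(n) / math.factorial(n))  # ratio approaches 1
```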
Stirling's formula provides the first term in anasymptotic seriesthat becomes even more accurate when taken to greater numbers of terms:[51]n!∼2πn(ne)n(1+112n+1288n2−13951840n3−5712488320n4+⋯).{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+{\frac {1}{288n^{2}}}-{\frac {139}{51840n^{3}}}-{\frac {571}{2488320n^{4}}}+\cdots \right).}An alternative version uses only odd exponents in the correction terms:[51]n!∼2πn(ne)nexp(112n−1360n3+11260n5−11680n7+⋯).{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\exp \left({\frac {1}{12n}}-{\frac {1}{360n^{3}}}+{\frac {1}{1260n^{5}}}-{\frac {1}{1680n^{7}}}+\cdots \right).}Many other variations of these formulas have also been developed, bySrinivasa Ramanujan,Bill Gosper, and others.[51]
Thebinary logarithmof the factorial, used to analyzecomparison sorting, can be very accurately estimated using Stirling's approximation. In the formula below, theO(1){\displaystyle O(1)}term invokesbig O notation.[45]log2n!=nlog2n−(log2e)n+12log2n+O(1).{\displaystyle \log _{2}n!=n\log _{2}n-(\log _{2}e)n+{\frac {1}{2}}\log _{2}n+O(1).}
The product formula for the factorial implies thatn!{\displaystyle n!}isdivisibleby allprime numbersthat are atmostn{\displaystyle n},and by no larger prime numbers.[52]More precise information about its divisibility is given byLegendre's formula, which gives the exponent of each primep{\displaystyle p}in the prime factorization ofn!{\displaystyle n!}as[53][54]∑i=1∞⌊npi⌋=n−sp(n)p−1.{\displaystyle \sum _{i=1}^{\infty }\left\lfloor {\frac {n}{p^{i}}}\right\rfloor ={\frac {n-s_{p}(n)}{p-1}}.}Heresp(n){\displaystyle s_{p}(n)}denotes the sum of thebase-p{\displaystyle p}digitsofn{\displaystyle n},and the exponent given by this formula can also be interpreted in advanced mathematics as thep-adic valuationof the factorial.[54]Applying Legendre's formula to the product formula forbinomial coefficientsproducesKummer's theorem, a similar result on the exponent of each prime in the factorization of a binomial coefficient.[55]Grouping the prime factors of the factorial intoprime powersin different ways produces themultiplicative partitions of factorials.[56]
The special case of Legendre's formula forp=5{\displaystyle p=5}gives the number oftrailing zerosin the decimal representation of the factorials.[57]According to this formula, the number of zeros can be obtained by subtracting the base-5 digits ofn{\displaystyle n}fromn{\displaystyle n}, and dividing the result by four.[58]Legendre's formula implies that the exponent of the primep=2{\displaystyle p=2}is always larger than the exponent forp=5{\displaystyle p=5},so each factor of five can be paired with a factor of two to produce one of these trailing zeros.[57]The leading digits of the factorials are distributed according toBenford's law.[59]Every sequence of digits, in any base, is the sequence of initial digits of some factorial number in that base.[60]
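Both stated forms of the p = 5 case can be checked in a few lines (a sketch):

```python
def trailing_zeros(n):
    # Legendre: exponent of 5 in n! = floor(n/5) + floor(n/25) + ...
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

def trailing_zeros_digits(n):
    # equivalent form: (n - sum of base-5 digits of n) / 4
    digit_sum, m = 0, n
    while m:
        digit_sum += m % 5
        m //= 5
    return (n - digit_sum) // 4

print(trailing_zeros(100), trailing_zeros_digits(100))  # 24 24
```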
Another result on divisibility of factorials,Wilson's theorem, states that(n−1)!+1{\displaystyle (n-1)!+1}is divisible byn{\displaystyle n}if and only ifn{\displaystyle n}is aprime number.[52]For any givenintegerx{\displaystyle x},theKempner functionofx{\displaystyle x}is given by the smallestn{\displaystyle n}for whichx{\displaystyle x}dividesn!{\displaystyle n!}.[61]For almost all numbers (all but a subset of exceptions withasymptotic densityzero), it coincides with the largest prime factorofx{\displaystyle x}.[62]
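Both results can be checked computationally. The sketch below (function names are mine) tests primality via Wilson's theorem, keeping the factorial reduced modulo n so the numbers stay small, and computes the Kempner function by brute force:

```python
def is_prime_wilson(n):
    # Wilson's theorem: n > 1 is prime iff (n-1)! + 1 is divisible by n
    if n < 2:
        return False
    f = 1
    for k in range(2, n):
        f = f * k % n          # keep the running factorial reduced mod n
    return (f + 1) % n == 0

def kempner(x):
    # Smallest n such that x divides n!
    n, f = 1, 1
    while f % x:
        n += 1
        f *= n
    return n
```

For example, kempner(16) = 6 because 6! = 720 is the first factorial divisible by 16, while kempner(7) = 7 since 7 is prime.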
The product of two factorials,m!⋅n!{\displaystyle m!\cdot n!},always evenly divides(m+n)!{\displaystyle (m+n)!}.[63]There are infinitely many factorials that equal the product of other factorials: ifn{\displaystyle n}is itself any product of factorials, thenn!{\displaystyle n!}equals that same product multiplied by one more factorial,(n−1)!{\displaystyle (n-1)!}.The only known examples of factorials that are products of other factorials but are not of this "trivial" form are9!=7!⋅3!⋅3!⋅2!{\displaystyle 9!=7!\cdot 3!\cdot 3!\cdot 2!},10!=7!⋅6!=7!⋅5!⋅3!{\displaystyle 10!=7!\cdot 6!=7!\cdot 5!\cdot 3!},and16!=14!⋅5!⋅2!{\displaystyle 16!=14!\cdot 5!\cdot 2!}.[64]It would follow from theabcconjecturethat there are only finitely many nontrivial examples.[65]
Thegreatest common divisorof the values of aprimitive polynomialof degreed{\displaystyle d}over the integers evenly dividesd!{\displaystyle d!}.[63]
There are infinitely many ways to extend the factorials to acontinuous function.[66]The most widely used of these[67]uses thegamma function, which can be defined for positive real numbers as theintegralΓ(z)=∫0∞xz−1e−xdx.{\displaystyle \Gamma (z)=\int _{0}^{\infty }x^{z-1}e^{-x}\,dx.}The resulting function is related to the factorial of a non-negative integern{\displaystyle n}by the equationn!=Γ(n+1),{\displaystyle n!=\Gamma (n+1),}which can be used as a definition of the factorial for non-integer arguments.
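In Python, this relation is available directly through math.gamma, which implements the gamma function for real arguments:

```python
import math

# n! = Gamma(n + 1) at the non-negative integers
for n in range(10):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# The gamma function also assigns values at non-integers,
# e.g. Gamma(1/2) = sqrt(pi)
half = math.gamma(0.5)
```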
At all valuesx{\displaystyle x}for which bothΓ(x){\displaystyle \Gamma (x)}andΓ(x−1){\displaystyle \Gamma (x-1)}are defined, the gamma function obeys thefunctional equationΓ(x)=(x−1)Γ(x−1),{\displaystyle \Gamma (x)=(x-1)\Gamma (x-1),}generalizing therecurrence relationfor the factorials.[66]
The same integral converges more generally for anycomplex numberz{\displaystyle z}whose real part is positive. It can be extended to the non-integer points in the rest of thecomplex planeby solving for Euler'sreflection formulaΓ(z)Γ(1−z)=πsinπz.{\displaystyle \Gamma (z)\Gamma (1-z)={\frac {\pi }{\sin \pi z}}.}However, this formula cannot be used at integers because, for them, thesinπz{\displaystyle \sin \pi z}term would produce adivision by zero. The result of this extension process is ananalytic function, theanalytic continuationof the integral formula for the gamma function. It has a nonzero value at all complex numbers, except for the non-positive integers where it hassimple poles. Correspondingly, this provides a definition for the factorial at all complex numbers other than the negative integers.[67]One property of the gamma function, distinguishing it from other continuous interpolations of the factorials, is given by theBohr–Mollerup theorem, which states that the gamma function (offset by one) is the onlylog-convexfunction on the positive real numbers that interpolates the factorials and obeys the same functional equation. A related uniqueness theorem ofHelmut Wielandtstates that the complex gamma function and its scalar multiples are the onlyholomorphic functionson the positive complex half-plane that obey the functional equation and remain bounded for complex numbers with real part between 1 and 2.[68]
Other complex functions that interpolate the factorial values includeHadamard's gamma function, which is anentire functionover all the complex numbers, including the non-positive integers.[69][70]In thep-adic numbers, it is not possible to continuously interpolate the factorial function directly, because the factorials of large integers (a dense subset of thep-adics) converge to zero according to Legendre's formula, forcing any continuous function that is close to their values to be zero everywhere. Instead, thep-adic gamma functionprovides a continuous interpolation of a modified form of the factorial, omitting the factors in the factorial that are divisible byp.[71]
Thedigamma functionis thelogarithmic derivativeof the gamma function. Just as the gamma function provides a continuous interpolation of the factorials, offset by one, the digamma function provides a continuous interpolation of theharmonic numbers, offset by theEuler–Mascheroni constant.[72]
The factorial function is a common feature inscientific calculators.[73]It is also included in scientific programming libraries such as thePythonmathematical functions module[74]and theBoost C++ library.[75]If efficiency is not a concern, computing factorials is trivial: just successively multiply a variable initializedto1{\displaystyle 1}by the integers upton{\displaystyle n}.The simplicity of this computation makes it a common example in the use of different computer programming styles and methods.[76]
The computation ofn!{\displaystyle n!}can be expressed inpseudocodeusingiteration[77]as
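A direct Python rendering of the iterative description (the function name is mine) might look like this:

```python
def factorial_iterative(n):
    # Multiply a running product by 2, 3, ..., n
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```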
or usingrecursion[78]based on its recurrence relation as
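The recursive version, following the recurrence n! = n · (n − 1)! with base case 0! = 1, might be sketched in Python as:

```python
def factorial_recursive(n):
    # Base case 0! = 1; otherwise n! = n * (n - 1)!
    return 1 if n == 0 else n * factorial_recursive(n - 1)
```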
Other methods suitable for its computation includememoization,[79]dynamic programming,[80]andfunctional programming.[81]Thecomputational complexityof these algorithms may be analyzed using the unit-costrandom-access machinemodel of computation, in which each arithmetic operation takes constant time and each number uses a constant amount of storage space. In this model, these methods can computen!{\displaystyle n!}in timeO(n){\displaystyle O(n)},and the iterative version uses spaceO(1){\displaystyle O(1)}.Unless optimized fortail recursion, the recursive version takes linear space to store itscall stack.[82]However, this model of computation is only suitable whenn{\displaystyle n}is small enough to allown!{\displaystyle n!}to fit into amachine word.[83]The values 12! and 20! are the largest factorials that can be stored in, respectively, the32-bit[84]and64-bitintegers.[85]Floating pointcan represent larger factorials, but approximately rather than exactly, and will still overflow for factorials larger than170!{\displaystyle 170!}.[84]
The exact computation of larger factorials involvesarbitrary-precision arithmetic, because offast growthandinteger overflow. Time of computation can be analyzed as a function of the number of digits or bits in the result.[85]By Stirling's formula,n!{\displaystyle n!}hasb=O(nlogn){\displaystyle b=O(n\log n)}bits.[86]TheSchönhage–Strassen algorithmcan produce ab{\displaystyle b}-bitproduct in timeO(blogbloglogb){\displaystyle O(b\log b\log \log b)},and fastermultiplication algorithmstaking timeO(blogb){\displaystyle O(b\log b)}are known.[87]However, computing the factorial involves repeated products, rather than a single multiplication, so these time bounds do not apply directly. In this setting, computingn!{\displaystyle n!}by multiplying the numbers from 1ton{\displaystyle n}in sequence is inefficient, because it involvesn{\displaystyle n}multiplications, a constant fraction of which take timeO(nlog2n){\displaystyle O(n\log ^{2}n)}each, giving total timeO(n2log2n){\displaystyle O(n^{2}\log ^{2}n)}.A better approach is to perform the multiplications as adivide-and-conquer algorithmthat multiplies a sequence ofi{\displaystyle i}numbers by splitting it into two subsequences ofi/2{\displaystyle i/2}numbers, multiplies each subsequence, and combines the results with one last multiplication. This approach to the factorial takes total timeO(nlog3n){\displaystyle O(n\log ^{3}n)}:one logarithm comes from the number of bits in the factorial, a second comes from the multiplication algorithm, and a third comes from the divide and conquer.[88]
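The divide-and-conquer strategy described above can be sketched as a simple product tree in Python (names are mine). Balanced splits keep the two operands of each multiplication at similar sizes, which is what lets fast multiplication algorithms pay off:

```python
def factorial_dc(n):
    # Multiply 1..n by recursively splitting the range in half
    def prod(lo, hi):
        if lo > hi:
            return 1
        if lo == hi:
            return lo
        mid = (lo + hi) // 2
        return prod(lo, mid) * prod(mid + 1, hi)
    return prod(1, n)
```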
Even better efficiency is obtained by computingn!from its prime factorization, based on the principle thatexponentiation by squaringis faster than expanding an exponent into a product.[86][89]An algorithm for this byArnold Schönhagebegins by finding the list of the primes upton{\displaystyle n},for instance using thesieve of Eratosthenes, and uses Legendre's formula to compute the exponent for each prime. Then it computes the product of the prime powers with these exponents, using a recursive algorithm, as follows:
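A simplified Python sketch of this strategy follows: sieve the primes, compute each exponent with Legendre's formula, then combine the prime powers by processing the exponent bits from high to low, squaring an accumulator at each step. This is not Schönhage's exact recursion, only an illustration of the repeated-squaring idea; all names are mine.

```python
def factorial_by_primes(n):
    # 1. Sieve of Eratosthenes for the primes up to n
    sieve = [True] * (n + 1)
    primes = []
    for p in range(2, n + 1):
        if sieve[p]:
            primes.append(p)
            for q in range(p * p, n + 1, p):
                sieve[q] = False

    # 2. Legendre's formula for the exponent of each prime
    def exponent(p):
        e, pk = 0, p
        while pk <= n:
            e += n // pk
            pk *= p
        return e

    exps = {p: exponent(p) for p in primes}

    # 3. Repeated squaring over the exponent bits: at each bit position,
    #    square the accumulator, then multiply in the primes whose
    #    exponent has that bit set
    result = 1
    max_bit = max(exps.values(), default=0).bit_length()
    for bit in reversed(range(max_bit)):
        result *= result
        part = 1
        for p in primes:
            if (exps[p] >> bit) & 1:
                part *= p
        result *= part
    return result
```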
The product of all primes up ton{\displaystyle n}is anO(n){\displaystyle O(n)}-bit number, by theprime number theorem, so the time for the first step isO(nlog2n){\displaystyle O(n\log ^{2}n)}, with one logarithm coming from the divide and conquer and another coming from the multiplication algorithm. In the recursive calls to the algorithm, the prime number theorem can again be invoked to prove that the numbers of bits in the corresponding products decrease by a constant factor at each level of recursion, so the total time for these steps at all levels of recursion adds in ageometric seriestoO(nlog2n){\displaystyle O(n\log ^{2}n)}.The time for the squaring in the second step and the multiplication in the third step are againO(nlog2n){\displaystyle O(n\log ^{2}n)},because each is a single multiplication of a number withO(nlogn){\displaystyle O(n\log n)}bits. Again, at each level of recursion the numbers involved have a constant fraction as many bits (because otherwise repeatedly squaring them would produce too large a final result) so again the amounts of time for these steps in the recursive calls add in a geometric seriestoO(nlog2n){\displaystyle O(n\log ^{2}n)}.Consequentially, the whole algorithm takestimeO(nlog2n){\displaystyle O(n\log ^{2}n)},proportional to a single multiplication with the same number of bits in its result.[89]
Several other integer sequences are similar to or related to the factorials:
https://en.wikipedia.org/wiki/Factorial
Genaille–Lucas rulers(also known asGenaille's rods) are anarithmetictool invented byHenri Genaille, a French railway engineer, in 1891. The device is a variant ofNapier's bones. By representing thecarrygraphically, the user can read off the results of simplemultiplicationproblems directly, with no intermediatemental calculations.
In 1885, French mathematicianÉdouard Lucasposed an arithmetic problem during a session of theAcadémie française. Genaille, already known for having invented a number of arithmetic tools, created his rulers in the course of solving the problem. He presented his invention to theAcadémie françaisein 1891. The popularity of Genaille's rods was widespread but short-lived, asmechanical calculatorssoon began to displace manual arithmetic methods.[1]
A full set of Genaille–Lucas rulers consists of eleven strips. On each strip is printed a column of triangles and a column of numbers.
By arranging the rulers in the proper order, the user can find unit multiples of short natural numbers by sight.
Soon after their development by Genaille, the rulers were adapted to a set of rods that can perform division. The division rods are aligned similarly to the multiplication rods, with the index rod on the left denoting thedivisor, and the following rods spelling out the digits of the dividend. After these, a special "remainder" rod is placed on the right. Thequotientis read from left to right, following the lines from one rod to the next. The path of digits ends with a number on the remainder rod, which is theremaindergiven by the division.
https://en.wikipedia.org/wiki/Genaille%E2%80%93Lucas_rulers
Lunar arithmetic, formerly calleddismal arithmetic,[1][2]is a version ofarithmeticin which the addition and multiplicationoperationson digits are defined as themax and minoperations. Thus, in lunar arithmetic, 2 + 7 = max(2, 7) = 7 and 2 × 7 = min(2, 7) = 2.
The lunar arithmetic operations on nonnegative multidigit numbers are performed as in usual arithmetic as illustrated in the following examples. The world of lunar arithmetic is restricted to the set ofnonnegative integers.
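These multidigit rules can be made concrete with a short Python sketch (function names are mine): addition compares columns with max, and multiplication builds shifted partial rows with min, combining them by lunar addition just as in ordinary long multiplication:

```python
def lunar_add(a, b):
    # Column addition with no carries: each result digit is the max
    x, y = str(a), str(b)
    w = max(len(x), len(y))
    x, y = x.zfill(w), y.zfill(w)
    return int("".join(max(d, e) for d, e in zip(x, y)))

def lunar_mul(a, b):
    # Long multiplication: digit products are min, shifted rows are
    # combined with lunar addition
    total = 0
    for shift, d in enumerate(reversed(str(b))):
        row = "".join(min(d, e) for e in str(a)) + "0" * shift
        total = lunar_add(total, int(row))
    return total
```

For example, lunar_add(169, 248) gives 269 and lunar_mul(11, 11) gives 111.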
The concept of lunar arithmetic was proposed by David Applegate, Marc LeBrun, andNeil Sloane.[3]
In the general definition of lunar arithmetic, one considers numbers expressed in an arbitrarybaseb{\displaystyle b}and defines the lunar arithmetic operations as the max and min operations on the digits corresponding to the chosen base.[3]However, for simplicity, in the following discussion it will be assumed that the numbers are represented using 10 as the base.
A few of the elementary properties of the lunar operations are listed below.[3]
It may be noted that, in lunar arithmetic,n+n≠2×n{\displaystyle n+n\neq 2\times n}andn+n=n{\displaystyle n+n=n}. Theeven numbersare numbers of the form2×n{\displaystyle 2\times n}. The first few distinct even numbers under lunar arithmetic are listed below: 0, 1, 2, 10, 11, 12, 20, 21, 22, 100, 101, 102, …
These are the numbers whose digits are all less than or equal to 2.
Asquare numberis a number of the formn×n{\displaystyle n\times n}. So in lunar arithmetic, the first few squares are the following.
Atriangular numberis a number of the form1+2+⋯+n{\displaystyle 1+2+\cdots +n}. The first few triangular lunar numbers are:
In lunar arithmetic, the first few values of thefactorialn!=1×2×⋯×n{\displaystyle n!=1\times 2\times \cdots \times n}are as follows:
In the usual arithmetic, aprime numberis defined as a numberp{\displaystyle p}whose only possible factorisation is1×p{\displaystyle 1\times p}. Analogously, in the lunar arithmetic, a prime number is defined as a numberm{\displaystyle m}whose only factorisation is9×n{\displaystyle 9\times n}where 9 is the multiplicative identity which corresponds to 1 in usual arithmetic. Accordingly, the following are the first few prime numbers in lunar arithmetic:
Every number of the form10…(nzeros)…09{\displaystyle 10\ldots (n{\text{ zeros}})\ldots 09}, wheren{\displaystyle n}is arbitrary, is a prime in lunar arithmetic. Sincen{\displaystyle n}is arbitrary this shows that there are an infinite number of primes in lunar arithmetic.
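This notion of primality can be checked by brute force: search for any factorisation other than the trivial one with 9. The Python sketch below (function names are mine) bounds the search using the fact that a lunar product of a j-digit and a k-digit number has j + k − 1 digits, so a factor cannot have more digits than the number itself:

```python
def lunar_add(a, b):
    x, y = str(a), str(b)
    w = max(len(x), len(y))
    x, y = x.zfill(w), y.zfill(w)
    return int("".join(max(d, e) for d, e in zip(x, y)))

def lunar_mul(a, b):
    total = 0
    for shift, d in enumerate(reversed(str(b))):
        row = "".join(min(d, e) for e in str(a)) + "0" * shift
        total = lunar_add(total, int(row))
    return total

def is_lunar_prime(m):
    # Factors of m have at most as many digits as m
    limit = 10 ** len(str(m))
    for a in range(1, limit):
        for b in range(a, limit):
            if lunar_mul(a, b) == m and {a, b} != {9, m}:
                return False   # a factorisation other than 9 * m exists
    return True
```

For instance, 19 is lunar prime, while 11 = 1 × 11 is not.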
There is an interesting relation between the operation of formingsumsetsof subsets of nonnegative integers and lunar multiplication onbinary numbers. LetA{\displaystyle A}andB{\displaystyle B}be nonempty subsets of the setN{\displaystyle N}of nonnegative integers. The sumsetA+B{\displaystyle A+B}is defined byA+B={a+b:a∈A,b∈B}.{\displaystyle A+B=\{a+b:a\in A,\ b\in B\}.}
To the setA{\displaystyle A}we can associate a unique binary numberβ(A){\displaystyle \beta (A)}as follows. Letm=max(A){\displaystyle m=\max(A)}.
Fori=0,1,…,m{\displaystyle i=0,1,\ldots ,m}we definebi=1{\displaystyle b_{i}=1}ifi∈A{\displaystyle i\in A}andbi=0{\displaystyle b_{i}=0}otherwise,
and then we defineβ(A)=bmbm−1…b0,{\displaystyle \beta (A)=b_{m}b_{m-1}\ldots b_{0},}read as a binary numeral.
It has been proved thatβ(A+B){\displaystyle \beta (A+B)}is the lunar product, in base 2, ofβ(A){\displaystyle \beta (A)}andβ(B){\displaystyle \beta (B)}.
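Assuming the standard definitions (bit i of β(A) is 1 exactly when i ∈ A, and lunar base-2 arithmetic uses AND for digit products and OR for digit sums, since min and max on binary digits are AND and OR), this identity can be checked numerically with a small Python sketch; all names are mine:

```python
def beta(A):
    # Bit i of beta(A) is 1 exactly when i is in A
    return sum(1 << i for i in A)

def lunar_mul_base2(x, y):
    # Lunar multiplication in base 2: each set bit of y contributes a
    # shifted copy of x (digit min = AND), combined with OR (digit max)
    result, shift = 0, 0
    while y:
        if y & 1:
            result |= x << shift
        y >>= 1
        shift += 1
    return result

def sumset(A, B):
    return {a + b for a in A for b in B}
```

With A = {0, 2, 3} and B = {1, 2}, both β(A+B) and the lunar base-2 product of β(A) and β(B) come out to 62.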
A magic square of squares is amagic squareformed by squares of numbers. It is not known whether there are any magic squares of squares of order 3 with the usual addition and multiplication of integers. However, it has been observed that, if we consider the lunar arithmetic operations, there are infinitely many magic squares of squares of order 3. Here is an example:[2]
https://en.wikipedia.org/wiki/Lunar_arithmetic
Napier's bonesis a manually operated calculating device created byJohn NapierofMerchiston,Scotlandfor thecalculationof products andquotientsof numbers. The method was based onlattice multiplication, and also calledrabdology, a word invented by Napier. Napier published his version in1617.[1]It was printed inEdinburghand dedicated to his patronAlexander Seton.
Using themultiplication tablesembedded in the rods,multiplicationcan be reduced toadditionoperations anddivisiontosubtractions. Advanced use of the rods can extractsquare roots. Napier's bones are not the same aslogarithms, with which Napier's name is also associated, but are based on dissected multiplication tables.
The complete device usually includes a base board with a rim; the user places Napier's rods and the rim to conduct multiplication or division. The board's left edge is divided into nine squares, holding the numbers 1 to 9. In Napier's original design, the rods are made of metal, wood orivoryand have a square cross-section. Each rod isengravedwith a multiplication table on each of the four faces. In some later designs, the rods are flat and have two tables or only one engraved on them, and made of plastic or heavycardboard. A set of such bones might be enclosed in a carrying case.
A rod's face is marked with nine squares. Eachsquareexcept the top is divided into two halves by adiagonalline from the bottom left corner to the top right. The squares contain a simplemultiplication table. The first holds a singledigit, which Napier called the 'single'. The others hold themultiplesof the single, namely twice the single, three times the single and so on up to the ninth square containing nine times the number in the top square. Single-digit numbers are written in the bottom right triangle leaving the other triangle blank, while double-digit numbers are written with a digit on either side of the diagonal.
If the tables are held on single-sided rods, 40 rods are needed in order to multiply 4-digit numbers – since numbers may have repeated digits, four copies of the multiplication table for each of the digits 0 to 9 are needed. If square rods are used, the 40 multiplication tables can be inscribed on 10 rods. Napier gave details of a scheme for arranging the tables so that no rod has two copies of the same table, enabling every possible four-digit number to be represented by 4 of the 10 rods. A set of 20 rods, consisting of two identical copies of Napier's 10 rods, allows calculation with numbers of up to eight digits, and a set of 30 rods can be used for 12-digit numbers.
The simplest sort ofmultiplication, a number with multiple digits by a number with a single digit, is done by placing rods representing the multi-digit number in the frame against the left edge. The answer is read off the row corresponding to the single-digit number which is marked on the left of the frame, with a small amount of addition required, as explained in the examples below.
When multiplying a multi-digit number by another multi-digit number, the larger number is set up on the rods in the frame. An intermediate result is produced by the device for multiplication by each of the digits of the smaller number. These are written down and the final result is calculated by pen and paper.
To demonstrate how to use Napier's bones for multiplication, three examples of increasing difficulty are explained below.
The first example computes425 × 6.
Napier's bones for 4, 2, and 5 are placed into the board, insequence. These bones show the larger figure which will be multiplied. The numbers lower in each column, or bone, are the digits found by ordinary multiplication tables for the corresponding integer, positioned above and below a diagonal line. (For example, the digits shown in the seventh row of the 4 bone are2⁄8, representing7 × 4 = 28.) In the example below for425 × 6, the bones are here depicted as red (4), yellow (2), and blue (5).
The left-most column, preceding the bones shown coloured, may represent the 1 bone. (A blank space or zero to the upper left of each digit, separated by a diagonal line, should be understood, since1 × 1 = 01,1 × 2 = 02,1 × 3 = 03, etc.) A small number is chosen, usually 2 through 9, by which to multiply the large number. In this example the small number by which to multiply the larger is 6. The horizontal row in which this number stands is the only row needed to perform the remaining calculations and may now be viewed in isolation.
For the calculation, the digits separated by vertical lines (i.e. paired between diagonal lines, crossing over from one bone to the next) are added together to form the digits of the product. The final (right-most) number on that row will never require addition, as it is always isolated by the last diagonal line, and will always be the final digit of the product. In this example, there are four digits, since there are four groups of bone values lying between diagonal lines. The product's digits will stand in the order as calculated left to right. Apart from the first and the final digit, the product's digits will each be the sum of two values taken from two different bones.
Bone values are added together, as described above, to find the digits of the product. In this diagram, the third product digit from the yellow and blue bones have their relevant values coloured green. Each sum is written in the space below. The sequence of the summations from left to right produces the figure of 2550. Therefore, the solution to multiplying 425 by 6 is 2550.
When multiplying by larger single digits, it is common that upon adding a diagonal column, the sum of the numbers results in a number that is 10 or greater.
The second example computes6785 × 8.
Like Example 1, the corresponding bones to the biggest number are placed in the board. For this example, bones 6, 7, 8, and 5 were placed in the proper order as shown below.
In the first column, the number by which the biggest number is multiplied by is located. In this example, the number was 8. Only row 8 will be used for the remaining calculations, so the rest of the board has been cleared for clarity in explaining the remaining steps.
Just as before, each diagonal column is evaluated, starting at the right side. If the sum of a diagonal column equals 10 or greater, the "tens" place of this sum must be carried over and added along with the numbers in the adjacent left column as demonstrated below.
After each diagonal column is evaluated, the calculated numbers are read from left to right to produce a final answer; in this example, 54280 was produced.
Therefore: The solution to multiplying 6785 by 8 is 54280.
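The diagonal reading used in these two examples can be mimicked in code. The Python sketch below (names are mine) builds the lattice cell for each digit of the larger number, then adds the diagonals from right to left, carrying exactly as in Example 2:

```python
def napier_multiply(number, digit):
    # One row of the bones: each lattice cell holds the (tens, units)
    # halves of a digit of `number` times the single-digit multiplier
    cells = [divmod(d * digit, 10) for d in map(int, str(number))]
    out = []            # product digits, built right to left
    carry = 0
    prev_tens = 0       # tens half of the cell to the right
    for tens, units in reversed(cells):
        s = units + prev_tens + carry   # one diagonal of the lattice
        out.append(s % 10)
        carry = s // 10
        prev_tens = tens
    s = prev_tens + carry               # leftmost diagonal
    if s:
        out.append(s)
    return int("".join(map(str, reversed(out))))
```

This reproduces both worked examples: napier_multiply(425, 6) gives 2550 and napier_multiply(6785, 8) gives 54280.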
The third example computes825 × 913.
The corresponding bones to the leading number are placed in the board. For this example, the bones 8, 2, and 5 were placed in the proper order as shown below.
To multiply by a multi-digit number, multiple rows are reviewed. For this example, the rows for 9, 1, and 3 have been removed from the board for clarity.
Each row is evaluated individually and each diagonal column is added as explained in the previous examples. The sums are read from left to right, producing the numbers needed for the long hand addition calculations to follow. For this example, row 9, row 1, and row 3 were evaluated separately to produce the results shown below.
Starting with the rightmost digit of the second number, the sums are placed from the rows in sequential order as seen from right to left under each other while utilising a 0 for a place holder.
The rows and place holders aresummedto produce a final answer.
In this example, the final answer produced was 753225. Therefore: The solution to multiplying 825 by 913 is 753225.
Division is performed in a similar fashion. To divide 46785399 by 96431, the bars for the divisor (96431) are placed on the board, as shown in the graphic below. Using theabacus, all the products of the divisor from 1 to 9 are found by reading the displayed numbers. Note that the dividend has eight digits, whereas the partial products (save for the first one) all have six. So the final two digits of 46785399, namely the '99', are temporarily ignored, leaving the number 467853. Then, the greatest partial product that is less than the truncated dividend is found. In this case, 385724. Two things must be marked down, as seen in the diagram: since 385724 is in the '4' row of the abacus, a '4' is marked down as the left-most digit of the quotient; the partial product, left-aligned, under the original dividend, is also written. The two terms are subtracted, which leaves 8212999. The same steps are repeated: the number is truncated to six digits, the partial product immediately less than the truncated number is chosen, the row number is written as the next digit of the quotient, and the partial product is subtracted from the difference found in the first repetition. The process is shown in the diagram. The cycle is repeated until the result of subtraction is less than the divisor. The number left is the remainder.
So in this example, what remains is a quotient of 485 with a remainder of 16364. The process usually stops here and the answer uses the fractional form485+16364/96431.
For more accuracy, the cycle is continued to find as many decimal places as required. A decimal point is marked after the last digit of the quotient and a zero is appended to the remainder, which leaves 163640. The cycle is continued, each time appending a zero to the result after the subtraction.
For extracting the square root, an additional bone is used which is different from the others as it has three columns. The first column has the first nine square numbers, the second has the first nine even numbers, and the last has the numbers 1 to 9.
To find the square root of 46785399, its digits are grouped into twos starting from the right so it looks like this:
The leftmost group is chosen first, in this case 46. The largest square on thesquare rootbone less than 46 is picked, which is 36 from the sixth row. The first digit of the solution is 6, since the sixth row was chosen.
Then, the number in the second column from the sixth row on the square root bone, 12, is set on the board.
The value in the first column of the sixth row, 36, is subtracted from 46, which leaves 10.
The next group of digits, 78, is added next to 10; this leaves the remainder 1078.
At this stage, the board and intermediate calculations should look like this:
The numbers in each row are "read", ignoring the second and third columns from the square root bone; these are recorded. (For example, the sixth row is read as: 0⁄6 1⁄2 3⁄6 → 756).
Like in the multiplication shown before, the numbers are read from right to left, adding the diagonal numbers from top-right to bottom-left (6 + 0 = 6; 3 + 2 = 5; 1 + 6 = 7).
The largest number less than the current remainder, 1078 (from the eighth row), is found.
Like before, 8 is appended to get the next digit of the square root and the value of the eighth row, 1024, is subtracted from the current remainder, 1078, to get 54. The second column of the eighth row on the square root bone, 16, is read and the number is set on the board as follows.
The current number on the board is 12. The first digit of 16 is added to 12, and the second digit of 16 is appended to the result. So the board should be set to:
The board and intermediate calculations now look like this.
Once again, the row with the largest value less than the current partial remainder, 5453, is found. This time, it is the third row with 4089.
The next digit of the square root is 3. The same steps as before are repeated and 4089 is subtracted from the current remainder, 5453, to get 1364 as the next remainder. When the board is rearranged, the second column of the square root bone is 6, a single digit. So 6 is appended to the current number on the board, 136, to leave 1366 on the board.
The process is repeated again. Now, the largest value on the board smaller than the current remainder, 136499, is 123021 from the ninth row.
It is not usually necessary to find the value of every row to get the answer. The row that has the answer may be guessed by looking at the number on the first few bones and comparing it with the first few digits of the remainder. The diagrams show the value of all rows to make the process easier to follow.
9 is appended to the result and 123021 is subtracted from the current remainder.
If all the digits have been used, and a remainder is left, then the integer part is solved, but a fractional bit still needs to be found.
If the integer part is solved, the current result squared (6839² = 46771921) must be the largest perfect square smaller than 46785399.
This idea is used later on to understand how the technique works, but more digits can be generated.
Similar to finding the fractional portion inlong division, two zeros are appended to the remainder to get the new remainder 1347800. The second column of the ninth row of the square root bone is 18 and the current number on the board is 1366.
Then 1366 × 10 + 18 = 13678 is computed to set 13678 on the board.
The board and intermediate computations now look like this.
The ninth row with 1231101 is the largest value smaller than the remainder, so the first digit of the fractional part of the square root is 9.
The value of the ninth row is subtracted from the remainder and a few more zeros are appended to get the new remainder 11669900. The second column on the ninth row is 18 with 13678 on the board, so 13678 × 10 + 18 = 136798 is computed to set 136798 on the board.
The steps can be continued to find as many digits as needed, until the required precision is achieved. If the remainder becomes zero, the exact square root has been found.
Having found the desired number of digits, it is easy to determine whether or not it needsroundingup; i.e., changing the last digit. The next digit does not need to be computed in full to see whether it is equal to or greater than 5. Instead, 25 is appended to the root and the result is compared to the remainder; if it is less than or equal to the remainder, then the next digit will be at least five and rounding up is needed. In the example above, 6839925 is less than 11669900, so the root needs to be rounded up to 6840.0.
To find the square root of a number that isn't an integer, say 54782.917, everything is the same, except that the digits to the left and right of the decimal point are grouped into twos.
So 54782.917 would be grouped as
Then the square root can be found using the process previously mentioned.
During the 19th century, Napier's bones were transformed to make them easier to read. The rods were made with an angle of about 65° so that the triangles that had to be added were aligned. In this case, in each square of the rod the unit is to the right and the ten (or the zero) to the left.
The rods were made such that the vertical and horizontal lines were more visible than the line where the rods touched, making the two components of each digit of the result easier to read. Thus, in the picture it is immediately clear that:
In 1891,Henri Genailleinvented a variant of Napier's bones which became known asGenaille–Lucas rulers. By representing thecarrygraphically, the results of simple multiplication problems can be read directly, with no intermediate mental calculations.[2]
The following example calculates52749 × 4 = 210996.
https://en.wikipedia.org/wiki/Napier%27s_bones
Inmathematics,ancient Egyptian multiplication(also known asEgyptian multiplication,Ethiopian multiplication,Russian multiplication, orpeasant multiplication), one of twomultiplicationmethods used by scribes, is a systematic method for multiplying two numbers that does not require themultiplication table, only the ability to multiply anddivide by 2, and toadd. It decomposes one of themultiplicands(preferably the smaller) into a set of numbers ofpowers of twoand then creates a table of doublings of the second multiplicand by every value of the set which is summed up to give result of multiplication.
This method may be calledmediation and duplation, wheremediationmeans halving one number and duplation means doubling the other number. It is still used in some areas.[1]
The second Egyptian multiplication and division technique was known from thehieraticMoscowandRhind Mathematical Papyriwritten in the seventeenth century B.C. by the scribeAhmes.[2]
Although in ancient Egypt the concept ofbase 2did not exist, the algorithm is essentially the same algorithm aslong multiplicationafter the multiplier and multiplicand are converted tobinary. The method as interpreted by conversion to binary is therefore still in wide use today as implemented bybinary multiplier circuitsin modern computer processors.[1]
Theancient Egyptianshad laid out tables of a great number of powers of two, rather than recalculating them each time. To decompose a number, they identified the powers of two which make it up. The Egyptians knew empirically that a given power of two would only appear once in a number. For the decomposition, they proceeded methodically; they would initially find the largest power of two less than or equal to the number in question,subtractit out and repeat until nothing remained. (The Egyptians did not make use of the numberzeroin mathematics.)
After the decomposition of the first multiplicand, the person would construct a table of powers of two times the second multiplicand (generally the smaller) from one up to the largest power of two found during the decomposition.
The result is obtained by adding the numbers from the second column for which the corresponding power of two makes up part of the decomposition of the first multiplicand.[1]
Because, mathematically speaking, multiplication of natural numbers is just "exponentiation" in the additivemonoid, this multiplication method can also be recognised as a special case of theSquare and multiplyalgorithm for exponentiation.
25 × 7 = ?
Decomposition of the number 25:
The largest power of two is 16 and the second multiplicand is 7.
As 25 = 16 + 8 + 1, the corresponding multiples of 7 are added to get 25 × 7 = 112 + 56 + 7 = 175.
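The decompose-and-double procedure translates directly into code. A minimal Python sketch (a modern illustration, not part of the historical method; the function name is ours):

```python
def egyptian_multiply(a, b):
    """Egyptian multiplication: decompose a into powers of two and
    sum the corresponding doublings of b."""
    total = 0
    power, doubling = 1, b
    while power <= a:
        if a & power:       # this power of two appears in a's decomposition
            total += doubling
        power <<= 1         # next power of two
        doubling <<= 1      # corresponding doubling of b
    return total

print(egyptian_multiply(25, 7))  # → 175
```

With a = 25 the kept doublings are those for 16, 8, and 1, exactly as in the worked example above.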
In the Russian peasant method, the powers of two in the decomposition of the multiplicand are found by writing it on the left and progressively halving the left column, discarding any remainder, until the value is 1 (or −1, in which case the eventual sum is negated), while doubling the right column as before. Lines withevennumbers on the left column are struck out, and the remaining numbers on the right are added together.[3]
238 × 13 = ?
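The halve-and-double procedure can be sketched in Python (an illustration of the method; the function name is ours):

```python
def peasant_multiply(m, n):
    """Russian peasant multiplication: halve m (discarding remainders)
    while doubling n; sum the n-values on rows where m is odd,
    since rows with an even left-hand value are struck out."""
    total = 0
    while m >= 1:
        if m % 2 == 1:
            total += n
        m //= 2
        n *= 2
    return total

print(peasant_multiply(238, 13))  # → 3094
```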
|
https://en.wikipedia.org/wiki/Peasant_multiplication
|
Inmathematics, aproductis the result ofmultiplication, or anexpressionthat identifiesobjects(numbers orvariables) to be multiplied, calledfactors. For example, 21 is the product of 3 and 7 (the result of multiplication), andx⋅(2+x){\displaystyle x\cdot (2+x)}is the product ofx{\displaystyle x}and(2+x){\displaystyle (2+x)}(indicating that the two factors should be multiplied together).
When one factor is aninteger, the product is called amultiple.
The order in whichrealorcomplexnumbers are multiplied has no bearing on the product; this is known as thecommutative lawof multiplication. Whenmatricesor members of various otherassociative algebrasare multiplied, the product usually depends on the order of the factors.Matrix multiplication, for example, is non-commutative, as is multiplication in most other algebras.
There are many different kinds of products in mathematics: besides being able to multiply just numbers, polynomials or matrices, one can also define products on many differentalgebraic structures.
Originally, a product was and is still the result of the multiplication of two or morenumbers. For example,15is the product of3and5. Thefundamental theorem of arithmeticstates that everycomposite numberis a product ofprime numbers, that is uniqueup tothe order of the factors.
With the introduction ofmathematical notationandvariablesat the end of the 15th century, it became common to consider the multiplication of numbers that are either unspecified (coefficientsandparameters), or to be found (unknowns). These multiplications that cannot be effectively performed are calledproducts. For example, in thelinear equationax+b=0,{\displaystyle ax+b=0,}the termax{\displaystyle ax}denotes theproductof the coefficienta{\displaystyle a}and the unknownx.{\displaystyle x.}
Later and essentially from the 19th century on, newbinary operationshave been introduced, which do not involve numbers at all, and have been calledproducts; for example, thedot product. Most of this article is devoted to such non-numerical products.
The product operator for theproduct of a sequenceis denoted by the capital Greek letterpiΠ(in analogy to the use of the capital SigmaΣassummationsymbol).[1]For example, the expression∏i=16i2{\displaystyle \textstyle \prod _{i=1}^{6}i^{2}}is another way of writing1⋅4⋅9⋅16⋅25⋅36{\displaystyle 1\cdot 4\cdot 9\cdot 16\cdot 25\cdot 36}.[2]
The product of a sequence consisting of only one number is just that number itself; the product of no factors at all is known as theempty product, and is equal to 1.
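Both conventions are reflected in Python's standard library, where `math.prod` over an empty iterable returns 1:

```python
import math

print(math.prod([1, 4, 9, 16, 25, 36]))  # → 518400
print(math.prod([42]))                   # one factor: that number itself, → 42
print(math.prod([]))                     # the empty product, → 1
```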
Commutative ringshave a product operation.
Residue classes in the ringsZ/NZ{\displaystyle \mathbb {Z} /N\mathbb {Z} }can be added:
(a+NZ)+(b+NZ)=a+b+NZ,{\displaystyle (a+N\mathbb {Z} )+(b+N\mathbb {Z} )=a+b+N\mathbb {Z} \,,}
and multiplied:
(a+NZ)⋅(b+NZ)=a⋅b+NZ.{\displaystyle (a+N\mathbb {Z} )\cdot (b+N\mathbb {Z} )=a\cdot b+N\mathbb {Z} \,.}
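A small numerical illustration of residue-class arithmetic (the modulus 12 is an arbitrary choice for this sketch):

```python
N = 12          # an arbitrary modulus, chosen only for illustration
a, b = 7, 9
print((a + b) % N)  # addition of residue classes: 16 ≡ 4 (mod 12), → 4
print((a * b) % N)  # multiplication: 63 ≡ 3 (mod 12), → 3
```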
Two functions from the reals to itself can be multiplied in another way, called theconvolution.
Iff{\displaystyle f}andg{\displaystyle g}are integrable, then the integral
(f∗g)(t)=∫−∞∞f(τ)⋅g(t−τ)dτ{\displaystyle (f*g)(t)=\int _{-\infty }^{\infty }f(\tau )\cdot g(t-\tau )\,\mathrm {d} \tau }
is well defined and is called the convolution.
Under theFourier transform, convolution becomes point-wise function multiplication.
The product of two polynomials is given by the following:
(∑i=0naixi)⋅(∑j=0mbjxj)=∑k=0n+mckxk{\displaystyle \left(\sum _{i=0}^{n}a_{i}x^{i}\right)\cdot \left(\sum _{j=0}^{m}b_{j}x^{j}\right)=\sum _{k=0}^{n+m}c_{k}x^{k}}
with
ck=∑i+j=kai⋅bj.{\displaystyle c_{k}=\sum _{i+j=k}a_{i}\cdot b_{j}\,.}
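On coefficient lists, the rule c_k = Σ_{i+j=k} a_i·b_j is a discrete convolution of the coefficient sequences, and can be sketched as (the helper name is ours):

```python
def poly_multiply(a, b):
    """Multiply polynomials given as coefficient lists, lowest degree
    first: c[k] = sum of a[i] * b[j] over all i + j == k."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + 2x)(3 + x) = 3 + 7x + 2x²
print(poly_multiply([1, 2], [3, 1]))  # → [3, 7, 2]
```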
There are many different kinds of products in linear algebra. Some of these have confusingly similar names (outer product,exterior product) with very different meanings, while others have very different names (outer product, tensor product, Kronecker product) and yet convey essentially the same idea. A brief overview of these is given in the following sections.
By the very definition of a vector space, one can form the product of any scalar with any vector, giving a mapR×V→V{\displaystyle \mathbb {R} \times V\rightarrow V}.
Ascalar productis a bilinear map⋅:V×V→R{\displaystyle \cdot :V\times V\rightarrow \mathbb {R} }with the condition thatv⋅v>0{\displaystyle v\cdot v>0}for all0≠v∈V{\displaystyle 0\not =v\in V}.
From the scalar product, one can define anormby letting‖v‖:=v⋅v{\displaystyle \|v\|:={\sqrt {v\cdot v}}}.
The scalar product also allows one to define an angle between two vectors:
Inn{\displaystyle n}-dimensional Euclidean space, the standard scalar product (called thedot product) is given by:
Thecross productof two vectors in 3-dimensions is a vector perpendicular to the two factors, with length equal to the area of the parallelogram spanned by the two factors.
The cross product can also be expressed as theformal[a]determinant:
A linear mapping can be defined as a functionfbetween two vector spacesVandWwith underlying fieldF, satisfying[3]f(v1+v2)=f(v1)+f(v2),f(λv)=λf(v)∀v,v1,v2∈V,λ∈F.{\displaystyle f(v_{1}+v_{2})=f(v_{1})+f(v_{2}),\quad f(\lambda v)=\lambda f(v)\quad \forall v,v_{1},v_{2}\in V,\ \lambda \in F.}
If one only considers finite dimensional vector spaces, then
in whichbVandbWdenote thebasesofVandW, andvidenotes thecomponentofvonbVi, andEinstein summation conventionis applied.
Now we consider the composition of two linear mappings between finite dimensional vector spaces. Let the linear mappingfmapVtoW, and let the linear mappinggmapWtoU. Then the compositiong∘fmapsVtoU, and in matrix form it corresponds to the productG F, in which thei-row,j-column element ofF, denoted byFij, isfji, andGij=gji.
The composition of more than two linear mappings can be similarly represented by a chain of matrix multiplication.
Given two matricesA=(ai,j)∈Rm×n{\displaystyle A=(a_{i,j})\in \mathbb {R} ^{m\times n}}andB=(bj,k)∈Rn×p{\displaystyle B=(b_{j,k})\in \mathbb {R} ^{n\times p}},
their product is given by
(A⋅B)i,k=∑j=1nai,j⋅bj,k∈Rm×p.{\displaystyle (A\cdot B)_{i,k}=\sum _{j=1}^{n}a_{i,j}\cdot b_{j,k}\;\in \mathbb {R} ^{m\times p}\,.}
There is a relationship between the composition of linear functions and the product of two matrices. To see this, let r = dim(U), s = dim(V) and t = dim(W) be the (finite)dimensionsof vector spaces U, V and W. LetU={u1,…,ur}{\displaystyle {\mathcal {U}}=\{u_{1},\ldots ,u_{r}\}}be abasisof U,V={v1,…,vs}{\displaystyle {\mathcal {V}}=\{v_{1},\ldots ,v_{s}\}}be a basis of V andW={w1,…,wt}{\displaystyle {\mathcal {W}}=\{w_{1},\ldots ,w_{t}\}}be a basis of W. In terms of this basis, letA=MVU(f)∈Rs×r{\displaystyle A=M_{\mathcal {V}}^{\mathcal {U}}(f)\in \mathbb {R} ^{s\times r}}be the matrix representing f : U → V andB=MWV(g)∈Rt×s{\displaystyle B=M_{\mathcal {W}}^{\mathcal {V}}(g)\in \mathbb {R} ^{t\times s}}be the matrix representing g : V → W. Then
B⋅A∈Rt×r{\displaystyle B\cdot A\in \mathbb {R} ^{t\times r}}
is the matrix representingg∘f:U→W{\displaystyle g\circ f:U\rightarrow W}.
In other words: the matrix product is the description in coordinates of the composition of linear functions.
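A minimal numerical check of this correspondence, using plain nested lists (an illustration, not from the source):

```python
def mat_mul(A, B):
    """Matrix product: (A·B)[i][k] = sum over j of A[i][j] * B[j][k]."""
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def apply(M, v):
    """Apply the linear map represented by matrix M to vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

G = [[1, 2], [0, 1]]   # matrix of a map g
F = [[3, 0], [1, 1]]   # matrix of a map f
v = [1, 2]
# composing the maps gives the same result as applying the product matrix
assert apply(G, apply(F, v)) == apply(mat_mul(G, F), v)
```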
Given two finite dimensional vector spacesVandW, the tensor product of them can be defined as a (2,0)-tensor satisfying:
whereV*andW*denote thedual spacesofVandW.[4]
For infinite-dimensional vector spaces, one also has analogous constructions, such as thetensor product of Hilbert spacesand thetopological tensor product.
The tensor product,outer productandKronecker productall convey the same general idea. The differences between these are that the Kronecker product is just a tensor product of matrices, with respect to a previously-fixed basis, whereas the tensor product is usually given in itsintrinsic definition. The outer product is simply the Kronecker product, limited to vectors (instead of matrices).
In general, whenever one has two mathematicalobjectsthat can be combined in a way that behaves like a linear algebra tensor product, then this can be most generally understood as theinternal productof amonoidal category. That is, the monoidal category captures precisely the meaning of a tensor product; it captures exactly the notion of why it is that tensor products behave the way they do. More precisely, a monoidal category is theclassof all things (of a giventype) that have a tensor product.
Other kinds of products in linear algebra include:
Inset theory, aCartesian productis amathematical operationwhich returns aset(orproduct set) from multiple sets. That is, for setsAandB, the Cartesian productA×Bis the set of allordered pairs(a, b)—wherea ∈Aandb ∈B.[5]
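In Python, `itertools.product` computes exactly this set of ordered pairs:

```python
from itertools import product

A = [1, 2]
B = ['x', 'y']
print(list(product(A, B)))
# → [(1, 'x'), (1, 'y'), (2, 'x'), (2, 'y')]
```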
The class of all things (of a giventype) that have Cartesian products is called aCartesian category. Many of these areCartesian closed categories. Sets are an example of such objects.
Theempty producton numbers and mostalgebraic structureshas the value of 1 (the identity element of multiplication), just like theempty sumhas the value of 0 (the identity element of addition). However, the concept of the empty product is more general, and requires special treatment inlogic,set theory,computer programmingandcategory theory.
Products over other kinds ofalgebraic structuresinclude:
A few of the above products are examples of the general notion of aninternal productin amonoidal category; the rest are describable by the general notion of aproduct in category theory.
All of the previous examples are special cases or examples of the general notion of a product. For the general treatment of the concept of a product, seeproduct (category theory), which describes how to combine twoobjectsof some kind to create an object, possibly of a different kind. But also, in category theory, one has:
|
https://en.wikipedia.org/wiki/Product_(mathematics)
|
Aslide ruleis a hand-operatedmechanical calculatorconsisting ofslidablerulersfor conducting mathematical operations such asmultiplication,division,exponents,roots,logarithms, andtrigonometry. It is one of the simplestanalog computers.[1][2]
Slide rules exist in a diverse range of styles and generally appear in a linear, circular or cylindrical form. Slide rules manufactured for specialized fields such as aviation or finance typically feature additional scales that aid in specialized calculations particular to those fields. The slide rule is closely related tonomogramsused for application-specific computations. Though similar in name and appearance to a standardruler, the slide rule is not meant to be used for measuring length or drawing straight lines. It is not designed for addition or subtraction. Maximum accuracy for standard linear slide rules is about three significantdecimal digits, whilescientific notationis used to keep track of theorder of magnitudeof results.
English mathematician and clergyman ReverendWilliam Oughtredand others developed the slide rule in the 17th century based on the emerging work onlogarithmsbyJohn Napier. It made calculations faster and less error-prone than evaluating onpaper. Before the advent of thescientific pocket calculator, it was the most commonly used calculation tool inscienceandengineering.[3]The slide rule's ease of use, ready availability, and low cost caused its use to continue to grow through the 1950s and 1960s even with the introduction ofmainframedigital electronic computers. But after the handheldHP-35scientific calculatorwas introduced in 1972 and became inexpensive in the mid-1970s, slide rules became largelyobsoleteand no longer were in use by the advent of personaldesktop computersin the 1980s.
In theUnited States, the slide rule is colloquially called aslipstick.[4][5]
Eachruler's scalehasgraduationslabeled withprecomputedoutputs of variousmathematical functions, acting as alookup tablethatmapseach function's input, read as a position on the ruler, to its output. Calculations that can be reduced to simple addition or subtraction using those precomputed functions can be solved by aligning the two rulers and reading the approximate result.
For example, a number to be multiplied on onelogarithmic-scaleruler can be aligned with the start of another such ruler to sum theirlogarithms. Then by applying thelaw of the logarithm of a product, theproductof the two numbers can be read. More elaborate slide rules can perform other calculations, such assquare roots,exponentials, andtrigonometric functions.
The user may estimate the location of the decimal point in the result by mentallyinterpolatingbetween labeled graduations.Scientific notationis used to track the decimal point for more precise calculations. Addition and subtraction steps in a calculation are generally done mentally or on paper, not on the slide rule.
Most slide rules consist of three parts:
Some slide rules ("duplex" models) have scales on both sides of the rule and slide strip, others on one side of the outer strips and both sides of the slide strip (which can usually be pulled out, flipped over and reinserted for convenience), still others on one side only ("simplex" rules). A slidingcursorwith a vertical alignment line is used to find corresponding points on scales that are not adjacent to each other or, in duplex models, are on the other side of the rule. The cursor can also record an intermediate result on any of the scales.
Scalesmay be grouped indecades, where each decade corresponds to a range of numbers that spans a ratio of 10 (i.e. a range from 10nto 10n+1). For example, the range 1 to 10 is a single decade, and the range from 10 to 100 is another decade. Thus, single-decade scales (named C and D) range from 1 to 10 across the entire length of the slide rule, while double-decade scales (named A and B) range from 1 to 100 over the length of the slide rule.
The followinglogarithmic identitiestransform the operations of multiplication and division to addition and subtraction, respectively:
log(x×y)=log(x)+log(y),{\displaystyle \log(x\times y)=\log(x)+\log(y)\,,}log(x/y)=log(x)−log(y).{\displaystyle \log(x/y)=\log(x)-\log(y)\,.}
With two logarithmic scales, the act of positioning the top scale to start at the bottom scale's label forx{\displaystyle x}corresponds to shifting the top logarithmic scale by a distance oflog(x){\displaystyle \log(x)}. This aligns each top scale's numbery{\displaystyle y}at offsetlog(y){\displaystyle \log(y)}with the bottom scale's number at positionlog(x)+log(y){\displaystyle \log(x)+\log(y)}. Becauselog(x)+log(y)=log(x×y){\displaystyle \log(x)+\log(y)=\log(x\times y)}, the mark on the bottom scale at that position corresponds tox×y{\displaystyle x\times y}. Withx=2andy=3for example, by positioning the top scale to start at the bottom scale's2, the result of the multiplication3×2=6can then be read on the bottom scale under the top scale's3:
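The same log-addition can be sketched numerically (an illustration of the principle, not a model of a physical rule; the function name is ours):

```python
import math

def slide_rule_multiply(x, y):
    """The position of a number on a logarithmic scale is log10 of that
    number; sliding one scale adds the two offsets, and the mark at the
    combined position reads back as the product."""
    position = math.log10(x) + math.log10(y)
    return 10 ** position

print(round(slide_rule_multiply(2, 3), 6))  # → 6.0
```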
While the above example lies within one decade, users must mentally account for additional zeroes when dealing with multiple decades. For example, the answer to7×2=14is found by first positioning the top scale to start above the 2 of the bottom scale, and then reading the marking 1.4 off the bottom two-decade scale where7is on the top scale:
But since the7lies above thesecondset of numbers, that reading must be multiplied by10. Thus, even though the answer directly reads1.4, the correct answer is1.4×10 = 14.
For an example with even larger numbers, to multiply88×20, the top scale is again positioned to start at the2on the bottom scale. Since2represents20, all numbers in that scale are multiplied by10. Thus, any answer in thesecondset of numbers is multiplied by100. Since8.8in the top scale represents88, the answer must additionally be multiplied by10. The answer directly reads1.76. Multiply by100and then by10to get the actual answer:1,760.
In general, the1on the top is moved to a factor on the bottom, and the answer is read off the bottom where the other factor is on the top. This works because the distances from the1mark are proportional to the logarithms of the marked values.
The illustration below demonstrates the computation of5.5/2. The2on the top scale is placed over the5.5on the bottom scale. The resulting quotient,2.75, can then be read below the top scale's1:
There is more than one method for doing division, and the method presented here has the advantage that the final result cannot be off-scale, because one has a choice of using the1at either end.
With more complex calculations involving multiple factors in the numerator and denominator of an expression, movement of the scales can be minimized by alternating divisions and multiplications. Thus5.5×3/2would be computed as5.5/2×3and the result,8.25, can be read beneath the3in the top scale in the figure above, without the need to register the intermediate result for5.5/2.
Because pairs of numbers that are aligned on the logarithmic scales form constant ratios, no matter how the scales are offset, slide rules can be used to generate equivalent fractions that solve proportion and percent problems.
For example, setting 7.5 on one scale over 10 on the other scale, the user can see that at the same time 1.5 is over 2, 2.25 is over 3, 3 is over 4, 3.75 is over 5, 4.5 is over 6, and 6 is over 8, among other pairs. For a real-life situation where 750 represents a whole 100%, these readings could be interpreted to suggest that 150 is 20%, 225 is 30%, 300 is 40%, 375 is 50%, 450 is 60%, and 600 is 80%.
In addition to the logarithmic scales, some slide rules have other mathematicalfunctionsencoded on other auxiliary scales. The most popular aretrigonometric, usuallysineandtangent,common logarithm(log10) (for taking the log of a value on a multiplier scale),natural logarithm(ln) andexponential(ex) scales. Others feature scales for calculatinghyperbolic functions. On linear rules, the scales and their labeling are highly standardized, with variation usually occurring only in terms of which scales are included and in what order.[6]
There are single-decade (C and D), double-decade (A and B), and triple-decade (K) scales. To computex2{\displaystyle x^{2}}, for example, locate x on the D scale and read its square on the A scale. Inverting this process allows square roots to be found, and similarly for the powers 3, 1/3, 2/3, and 3/2. Care must be taken when the base, x, is found in more than one place on its scale. For instance, there are two nines on the A scale; to find the square root of nine, use the first one; the second one gives the square root of 90.
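Why this works can be sketched with scale positions: on the single-decade D scale a number x sits at log10(x) of the rule's length, while on the double-decade A scale a number y sits at log10(y)/2, so x² on A aligns with x on D (a numerical illustration, not a model of any particular rule):

```python
import math

def pos_D(x):
    """Position along the rule of x on the single-decade D scale."""
    return math.log10(x)

def pos_A(y):
    """Position of y on the double-decade A scale (same length, two decades)."""
    return math.log10(y) / 2

# the square of x on the A scale sits directly above x on the D scale
for x in [2, 3, 7]:
    assert abs(pos_A(x ** 2) - pos_D(x)) < 1e-12
```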
Forxy{\displaystyle x^{y}}problems, use the LL scales. When several LL scales are present, use the one withxon it. First, align the leftmost 1 on the C scale with x on the LL scale. Then, findyon the C scale and go down to the LL scale withxon it. That scale will indicate the answer. Ifyis "off the scale", locatexy/2{\displaystyle x^{y/2}}and square it using the A and B scales as described above. Alternatively, use the rightmost 1 on the C scale, and read the answer off the next higher LL scale. For example, aligning the rightmost 1 on the C scale with 2 on the LL2 scale, 3 on the C scale lines up with 8 on the LL3 scale.
To extract a cube root using a slide rule with only C/D and A/B scales, align 1 on the B cursor with the base number on the A scale (taking care as always to distinguish between the lower and upper halves of the A scale). Slide the slide until the number on the D scale which is against 1 on the C cursor is the same as the number on the B cursor which is against the base number on the A scale. (Examples: A 8, B 2, C 1, D 2; A 27, B 3, C 1, D 3.)
Quadratic equationsof the formax2+bx+c=0{\displaystyle ax^{2}+bx+c=0}can be solved by first reducing the equation to the formx2−px+q=0{\displaystyle x^{2}-px+q=0}(wherep=−b/a{\displaystyle p=-b/a}andq=c/a{\displaystyle q=c/a}), and then aligning the index ("1") of the C scale to the valueq{\displaystyle q}on the D scale. The cursor is then moved along the rule until a position is found where the numbers on the CI and D scales add up top{\displaystyle p}. These two values are the roots of the equation.
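The search the cursor performs can be imitated numerically: with the C index at q, every cursor position pairs a reading d with q/d (a constant product), and the rule is slid until the pair sums to p. A brute-force sketch under those assumptions, for equations with two positive roots (not a model of real scale markings; the function name is ours):

```python
def quadratic_roots_by_search(p, q, steps=10_000):
    """For x² − p·x + q = 0 with two positive roots: scan candidate
    values d, log-spaced between 1 and q as on a logarithmic scale,
    pair each with q/d, and keep the pair whose sum is closest to p."""
    d = min((q ** (i / steps) for i in range(1, steps)),
            key=lambda d: abs(d + q / d - p))
    return tuple(sorted((d, q / d)))

r1, r2 = quadratic_roots_by_search(5, 6)  # x² − 5x + 6 = 0
print(round(r1), round(r2))  # → 2 3
```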
The LLN scales can be used to compute and compare the cost or return on a fixed rate loan or investment.
The simplest case is for continuously compounded interest. Example: Taking D as the interest rate in percent,
slide the index (the "1" at the right or left end of the scale) of C to the percent on D. The corresponding value on LL2 directly below the index will be the multiplier for 10 cycles of interest (typically years). The value on LL2 below 2 on the C scale will be the multiplier after 20 cycles, and so on.
The S, T, and ST scales are used for trig functions and multiples of trig functions, for angles in degrees.
For angles from around 5.7 up to 90 degrees, sines are found by comparing the S scale with C (or D) scale. (On many closed-body rules the S scale relates to the A and B scales instead and covers angles from around 0.57 up to 90 degrees; what follows must be adjusted appropriately.) The S scale has a second set of angles (sometimes in a different color), which run in the opposite direction, and are used for cosines. Tangents are found by comparing the T scale with the C (or D) scale for angles less than 45 degrees. For angles greater than 45 degrees the CI scale is used. Common forms such asksinx{\displaystyle k\sin x}can be read directly fromxon the S scale to the result on the D scale, when the C scale index is set atk. For angles below 5.7 degrees, sines, tangents, and radians are approximately equal, and are found on the ST or SRT (sines, radians, and tangents) scale, or simply divided by 57.3 degrees/radian. Inverse trigonometric functions are found by reversing the process.
Many slide rules have S, T, and ST scales marked with degrees and minutes (e.g. some Keuffel and Esser models (Doric duplex 5" models, for example), late-model Teledyne-Post Mannheim-type rules). So-calleddecitrigmodels use decimal fractions of degrees instead.
Base-10 logarithms and exponentials are found using the L scale, which is linear. Some slide rules have a Ln scale, which is for base e. Logarithms to any other base can be calculated by reversing the procedure for calculating powers of a number. For example, log2 values can be determined by lining up either leftmost or rightmost 1 on the C scale with 2 on the LL2 scale, finding the number whose logarithm is to be calculated on the corresponding LL scale, and reading the log2 value on the C scale.
Addition and subtraction are not typically performed on slide rules, but are possible using either of the following two techniques:[7]
Using (almost) any strictlymonotonic scales, other calculations can also be made with one movement.[8][9]For example, reciprocal scales can be used for the equality1x+1y=1z{\displaystyle {\frac {1}{x}}+{\frac {1}{y}}={\frac {1}{z}}}(calculatingparallel resistances,harmonic mean, etc.), and quadratic scales can be used to solvex2+y2=z2{\displaystyle x^{2}+y^{2}=z^{2}}.
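A numerical illustration of the reciprocal-scale identity (the function name is ours):

```python
def reciprocal_sum(x, y):
    """Solve 1/x + 1/y = 1/z for z — e.g. two resistors in parallel;
    z is also half the harmonic mean of x and y."""
    return 1 / (1 / x + 1 / y)

print(reciprocal_sum(2, 2))            # → 1.0
print(round(reciprocal_sum(3, 6), 6))  # → 2.0
```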
The width of the slide rule is quoted in terms of the nominal width of the scales. Scales on the most common "10-inch" models are actually 25 cm, as they were made to metric standards, though some rules offer slightly extended scales to simplify manipulation when a result overflows. Pocket rules are typically 5 inches (12 cm). Models a couple of metres (yards) wide were made to be hung in classrooms for teaching purposes.[10]
Typically the divisions mark a scale to a precision of twosignificant figures, and the user estimates the third figure. Some high-end slide rules have magnifier cursors that make the markings easier to see. Such cursors can effectively double the accuracy of readings, permitting a 10-inch slide rule to serve as well as a 20-inch model.
Various other conveniences have been developed. Trigonometric scales are sometimes dual-labeled, in black and red, with complementary angles, the so-called "Darmstadt" style. Duplex slide rules often duplicate some of the scales on the back. Scales are often "split" to get higher accuracy. For example, instead of reading from an A scale to a D scale to find a square root, it may be possible to read from a D scale to an R1 scale running from 1 to square root of 10 or to an R2 scale running from square root of 10 to 10, where having more subdivisions marked can result in being able to read an answer with one more significant digit.
Circular slide rules come in two basic types, one with two cursors, and another with a free dish and one cursor. The dual cursor versions perform multiplication and division by holding a constant angle between the cursors as they are rotated around the dial. The onefold cursor version operates more like the standard slide rule through the appropriate alignment of the scales.
The basic advantage of a circular slide rule is that the widest dimension of the tool is reduced by a factor of about 3 (i.e. byπ). For example, a 10 cm (3.9 in) circular rule would have a maximum precision approximately equal to that of a 31.4 cm (12.4 in) ordinary slide rule. Circular slide rules also eliminate "off-scale" calculations, because the scales were designed to "wrap around"; they never have to be reoriented when results are near 1.0—the rule is always on scale. However, for non-cyclical non-spiral scales such as S, T, and LL's, the scale width is narrowed to make room for end margins.[11]
Circular slide rules are mechanically more rugged and smoother-moving, but their scale alignment precision is sensitive to the centering of a central pivot; a minute 0.1 mm (0.0039 in) off-centre of the pivot can result in a 0.2 mm (0.0079 in) worst case alignment error. The pivot does prevent scratching of the face and cursors. The highest accuracy scales are placed on the outer rings. Rather than "split" scales, high-end circular rules use spiral scales for more complex operations like log-of-log scales. One eight-inch premium circular rule had a 50-inch spiral log-log scale. Around 1970, an inexpensive model from B. C. Boykin (Model 510) featured 20 scales, including 50-inch C-D (multiplication) and log scales. The RotaRule featured a friction brake for the cursor.
The main disadvantages of circular slide rules are the difficulty in locating figures along a dish and the limited number of scales. Another drawback is that less-important scales are closer to the center and have lower precision. Most students learned slide rule use on the linear slide rules, and did not find reason to switch.
One slide rule remaining in daily use around the world is theE6-B. This is a circular slide rule first created in the 1930s for aircraft pilots to help withdead reckoning. With the aid of scales printed on the frame it also helps with such miscellaneous tasks as converting time, distance, speed, and temperature values, compass errors, and calculating fuel use. The so-called "prayer wheel" is still available in flight shops, and remains widely used. WhileGPShas reduced the use of dead reckoning for aerial navigation, and handheld calculators have taken over many of its functions, the E6-B remains widely used as a primary or backup device and the majority of flight schools demand that their students have some degree of proficiency in its use.
Proportion wheels are simple circular slide rules used in graphic design to calculateaspect ratios. Lining up the original and desired size values on the inner and outer wheels will display their ratio as a percentage in a small window. Though not as common since the advent of computerized layout, theyare still made and used.[citation needed]
In 1952, Swiss watch companyBreitlingintroduced a pilot's wristwatch with an integrated circular slide rule specialized for flight calculations: theBreitling Navitimer. The Navitimer circular rule, referred to by Breitling as a "navigation computer", featuredairspeed,rate/time of climb/descent, flight time, distance, and fuel consumption functions, as well as kilometer—nautical mileand gallon—liter fuel amount conversion functions.
Cylindrical slide rules are made in two styles: those with helical scales such as theFuller calculator, theOtis Kingand theBygrave slide rule, and those with bars, such as the Thacher and some Loga models. In either case, the advantage is a much longer scale, and hence potentially greater precision, than afforded by a straight or circular rule.
Traditionally slide rules were made out of a relatively dense, stable hardwood such asmahoganyorboxwoodwith cursors of glass and metal. Aluminum was used, and at least one high precision instrument was made of steel.
In 1895, a Japanese firm, Hemmi, started to make slide rules fromcelluloid-clad bamboo, which had the advantages of being dimensionally stable, strong, and naturally self-lubricating. These bamboo slide rules were introduced in Sweden in September, 1933,[12]and probably only a little earlier in Germany.
Scales were also made of celluloid or other polymers, or printed on aluminium. Later cursors were molded fromacrylicsorpolycarbonate, sometimes withTeflonbearing surfaces.
All premium slide rules had numbers and scales deeply engraved, and then filled with paint or otherresin. Painted or imprinted slide rules were viewed as inferior, because the markings could wear off or be chemically damaged. Nevertheless, Pickett & Eckel, an American slide rule company, made only printed scale rules.[citation needed]Premium slide rules included clever mechanical catches so the rule would not fall apart by accident, and bumpers to protect the scales and cursor from rubbing on tabletops.
The slide rule was invented around 1620–1630, shortly afterJohn Napier's publication of the concept of thelogarithm. In 1620Edmund Gunterof Oxford developed a calculating device with a single logarithmic scale; with additional measuring tools it could be used to multiply and divide.[13]In c. 1622,William Oughtredof Cambridge combined two handheldGunter rulesto make a device that is recognizably the modern slide rule.[14]Oughtred became involved in a vitriolic controversy overpriority, with his one-time studentRichard Delamainand the prior claims of Wingate. Oughtred's ideas were only made public in publications of his student William Forster in 1632 and 1653.
In 1677, Henry Coggeshall created a two-foot folding rule for timber measure, called theCoggeshall slide rule, expanding the slide rule's use beyond mathematical inquiry.
In 1722, Warner introduced the two- and three-decade scales, and in 1755 Everard included an inverted scale; a slide rule containing all of these scales is usually known as a "polyphase" rule.
In 1815,Peter Mark Rogetinvented the log log slide rule, which included a scale displaying the logarithm of the logarithm. This allowed the user to directly perform calculations involving roots and exponents. This was especially useful for fractional powers.
In 1821,Nathaniel Bowditch, described in theAmerican Practical Navigatora "sliding rule" that contained scaled trigonometric functions on the fixed part and a line of log-sines and log-tans on the slider used to solve navigation problems.
In 1845, Paul Cameron of Glasgow introduced a nautical slide rule capable of answering navigation questions, includingright ascensionanddeclinationof the sun and principal stars.[15]
A more modern form of slide rule was created in 1859 by French artillery lieutenantAmédée Mannheim, who was fortunate both in having his rule made by a firm of national reputation and in its adoption by the French Artillery. Mannheim's rule had two major modifications that made it easier to use than previous general-purpose slide rules. Such rules had four basic scales, A, B, C, and D, and D was the only single-decade logarithmic scale; C had two decades, like A and B. Most operations were done on the A and B scales; D was only used for finding squares and square roots.
Mannheim changed the C scale to a single-decade scale and performed most operations with C and D instead of A and B. Because the C and D scales were single-decade, they could be read more precisely, so the rule's results could be more accurate. The change also made it easier to include squares and square roots as part of a larger calculation. Mannheim's rule also had a cursor, unlike almost all preceding rules, so any of the scales could be easily and accurately compared across the rule width. The "Mannheim rule" became the standard slide rule arrangement for the later 19th century and remained a common standard throughout the slide-rule era.
The growth of theengineeringprofession during the later 19th century drove widespread slide-rule use, beginning in Europe and eventually taking hold in the United States as well. The duplex rule was invented by William Cox in 1891 and was produced byKeuffel and Esser Co.of New York.[16][17]
In 1881, the American inventor Edwin Thacher introduced his cylindrical rule, which had a much longer scale than standard linear rules and thus could calculate to higher precision, about four to five significant digits. However, the Thacher rule was quite expensive, as well as being non-portable, so it was used in far more limited numbers than conventional slide rules.
Astronomical work also required precise computations, and, in 19th-century Germany, a steel slide rule about two meters long was used at one observatory. It had a microscope attached, giving it accuracy to six decimal places.[citation needed]
In the 1920s, the novelist and engineer Nevil Shute Norway (he called his autobiography Slide Rule) was Chief Calculator on the design of the British R100 airship for Vickers Ltd. from 1924. The stress calculations for each transverse frame required two or three months of computation by a pair of calculators (people) using Fuller's cylindrical slide rules. The simultaneous equations contained up to seven unknown quantities, took about a week to solve, and had to be repeated with a different selection of slack wires if the initial guess about which of the eight radial wires were slack proved wrong. After months of labour filling perhaps fifty foolscap sheets with calculations, "the truth stood revealed (and) produced a satisfaction almost amounting to a religious experience".[18]
In 1937, physicistLucy Haynerdesigned and constructed a circular slide rule inBraille.[19]
Throughout the 1950s and 1960s, the slide rule was the symbol of the engineer's profession in the same way thestethoscopeis that of the medical profession.[20]
Aluminium Pickett-brand slide rules were carried onProject Apollospace missions. The model N600-ES owned byBuzz Aldrinthat flew with him to the Moon onApollo 11was sold at auction in 2007.[21]The model N600-ES taken along onApollo 13in 1970 is owned by theNational Air and Space Museum.[22]
Some engineering students and engineers carried ten-inch slide rules in belt holsters, a common sight on campuses even into the mid-1970s. Until the advent of the pocket digital calculator, students also might keep a ten- or twenty-inch rule for precision work at home or the office[23]while carrying a five-inch pocket slide rule around with them.
In 2004, education researchers David B. Sher and Dean C. Nataro conceived a new type of slide rule based onprosthaphaeresis, an algorithm for rapidly computing products that predates logarithms. However, there has been little practical interest in constructing one beyond the initial prototype.[24]
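Prosthaphaeresis reduces multiplication to table lookups and addition via the identity cos(a)·cos(b) = [cos(a − b) + cos(a + b)]/2. The following is a minimal sketch of that idea, using the math library's cosine functions in place of the historical printed trigonometric tables; it is illustrative only, not a description of the Sher–Nataro prototype:

```python
import math

def prosthaphaeresis_product(x, y):
    """Multiply two numbers in [-1, 1] using only 'table lookups'
    (math.acos/math.cos stand in for printed trig tables) and
    the identity cos(a)cos(b) = [cos(a-b) + cos(a+b)] / 2."""
    a = math.acos(x)  # angle whose cosine is x
    b = math.acos(y)  # angle whose cosine is y
    return (math.cos(a - b) + math.cos(a + b)) / 2

print(prosthaphaeresis_product(0.25, 0.8))  # ≈ 0.2
```

Numbers outside [−1, 1] would be handled, much as on a slide rule, by factoring out powers of ten before the lookup and restoring them afterwards.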
Slide rules have often been specialized to varying degrees for their field of use, such as excise, proof calculation, engineering, navigation, etc., and some slide rules are extremely specialized for very narrow applications. For example, the John Rabone & Sons 1892 catalog lists a "Measuring Tape and Cattle Gauge", a device to estimate the weight of a cow from its measurements.
There were many specialized slide rules for photographic applications. For example, theactinographofHurter and Driffieldwas a two-slide boxwood, brass, and cardboard device for estimatingexposurefrom time of day, time of year, and latitude.
Specialized slide rules were invented for various forms of engineering, business and banking. These often had common calculations directly expressed as special scales, for example loan calculations, optimal purchase quantities, or particular engineering equations. For example, theFisher Controlscompany distributed a customized slide rule adapted to solving the equations used for selecting the proper size of industrial flow control valves.[25]
Pilot balloon slide rules were used by meteorologists in weather services to determine the upper wind velocities from an ascending hydrogen or helium-filled pilot balloon.[26]
TheE6-Bis a circular slide rule used by pilots and navigators.
Circular slide rules to estimate ovulation dates and fertility are known aswheel calculators.[27]
A Department of Defense publication from 1962[28]infamously included a special-purpose circular slide rule for calculating blast effects, overpressure, and radiation exposure from a given yield of an atomic bomb.[29]
The importance of the slide rule began to diminish as electronic computers, a new but rare resource in the 1950s, became more widely available to technical workers during the 1960s.
The first step away from slide rules was the introduction of relatively inexpensive electronic desktopscientific calculators. These included theWang LaboratoriesLOCI-2,[30][31]introduced in 1965, which used logarithms for multiplication and division; and theHewlett-PackardHP 9100A, introduced in 1968.[32]Both of these were programmable and provided exponential and logarithmic functions; the HP hadtrigonometric functions(sine, cosine, and tangent) and hyperbolic trigonometric functions as well. The HP used theCORDIC(coordinate rotation digital computer) algorithm,[33]which allows for calculation of trigonometric functions using only shift and add operations. This method facilitated the development of ever smaller scientific calculators.
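The CORDIC idea can be sketched in a few lines: each iteration rotates the vector (x, y) by ±atan(2⁻ⁱ), which in fixed-point hardware requires only shifts and additions (the multiplications by 2⁻ⁱ below stand in for bit shifts). This is an illustrative sketch of the general algorithm, not the HP 9100A's actual implementation:

```python
import math

# Precompute the rotation angles atan(2^-i) and the cumulative CORDIC gain.
N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # scaling introduced by the rotations

def cordic_sin_cos(theta):
    """Return (sin(theta), cos(theta)) for |theta| <= pi/2 using only
    additions, subtractions and scaling by powers of two."""
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0        # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x

s, c = cordic_sin_cos(math.pi / 6)
print(s, c)  # ≈ 0.5, ≈ 0.866
```

Arguments outside the convergence range are first reduced using trigonometric symmetries, which is why the hardware only needs the core loop above.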
As with mainframe computing, the availability of these desktop machines did not significantly affect the ubiquitous use of the slide rule, until cheap hand-held scientific electronic calculators became available in the mid-1970s, at which point it rapidly declined. The pocket-sized Hewlett-PackardHP-35scientific calculator was the first handheld device of its type, but it cost US$395 in 1972. This was justifiable for some engineering professionals, but too expensive for most students.
Around 1974, lower-cost handheld electronic scientific calculators started to make slide rules largely obsolete.[34][35][36][37]By 1975, basic four-function electronic calculators could be purchased for less than $50, and by 1976 theTI-30scientific calculator was sold for less than $25 ($138 adjusted for inflation).
1980 was the final year of the University Interscholastic League (UIL) competition in Texas to use slide rules. The UIL had originally been organized in 1910 to administer literary events, but had become the governing body of school sports events as well.[38]
Even during their heyday, slide rules never caught on with the general public.[39]Addition and subtraction are not well-supported operations on slide rules and doing a calculation on a slide rule tends to be slower than on a calculator.[40]This led engineers to use mathematical equations that favored operations that were easy on a slide rule over more accurate but complex functions; these approximations could lead to inaccuracies and mistakes.[41]On the other hand, the spatial, manual operation of slide rules cultivates in the user an intuition for numerical relationships and scale that people who have used only digital calculators often lack.[42]A slide rule will also display all the terms of a calculation along with the result, thus eliminating uncertainty about what calculation was actually performed. It has thus been compared withreverse Polish notation(RPN) implemented in electronic calculators.[43]
A slide rule requires the user to separately compute theorder of magnitudeof the answer to position the decimal point in the results. For example, 1.5 × 30 (which equals 45) will show the same result as1500000× 0.03 (which equals45000). This separate calculation forces the user to keep track of magnitude in short-term memory (which is error-prone), keep notes (which is cumbersome) or reason about it in every step (which distracts from the other calculation requirements).
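The reason both calculations look identical on the rule is that the scales encode only the fractional part of log₁₀ of each value; the integer part, the order of magnitude, is discarded. A small sketch of this behaviour (the function name is illustrative, not a standard API):

```python
import math

def slide_rule_reading(x):
    """Position on the C/D scale: only the fractional part of log10(x)
    is encoded by the rule; the integer part (the order of magnitude)
    must be tracked separately by the user."""
    return 10 ** (math.log10(abs(x)) % 1.0)  # a value in [1, 10)

# Both calculations land the cursor on the same mark (~4.5):
print(slide_rule_reading(1.5 * 30))          # ≈ 4.5  (true answer 45)
print(slide_rule_reading(1_500_000 * 0.03))  # ≈ 4.5  (true answer 45000)
```

The user reads "4.5" in both cases and must decide from a rough mental estimate whether the answer is 45, 450, or 45,000.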
The typicalarithmetic precisionof a slide rule is about threesignificant digits, compared to many digits on digital calculators. As order of magnitude gets the greatest prominence when using a slide rule, users are less likely to make errors offalse precision.
When performing a sequence of multiplications or divisions by the same number, the answer can often be determined by merely glancing at the slide rule without any manipulation. This can be especially useful when calculating percentages (e.g. for test scores) or when comparing prices (e.g. in dollars per kilogram). Multiplespeed-time-distancecalculations can be performed hands-free at a glance with a slide rule. Other useful linear conversions such as pounds to kilograms can be easily marked on the rule and used directly in calculations.
Being entirely mechanical, a slide rule does not depend ongrid electricityor batteries. Mechanical imprecision in slide rules that were poorly constructed or warped by heat or use will lead to errors.
Many sailors keep slide rules as backups for navigation in case of electric failure or battery depletion on long route segments. Slide rules are still commonly used in aviation, particularly for smaller planes. They are being replaced only by integrated, special-purpose (and expensive) flight computers, not by general-purpose calculators. The E6-B circular slide rule used by pilots has been in continuous production and remains available in a variety of models. Some wristwatches designed for aviation use still feature slide rule scales to permit quick calculations; the Citizen Skyhawk AT and the Seiko Flightmaster SNA411 are two notable examples.[44]
Even in the 21st century, some people prefer a slide rule over an electronic calculator as a practical computing device. Others keep their old slide rules out of a sense of nostalgia, or collect them as a hobby.[45]
A popular collectible model is theKeuffel & EsserDeci-Lon, a premium scientific and engineering slide rule available both in a ten-inch (25 cm) "regular" (Deci-Lon 10) and a five-inch "pocket" (Deci-Lon 5) variant. Another prized American model is the eight-inch (20 cm) Scientific Instruments circular rule. Of European rules,Faber-Castell's high-end models are the most popular among collectors.
Although a great many slide rules are circulating on the market, specimens in good condition tend to be expensive. Many rules found for sale ononline auction sitesare damaged or have missing parts, and the seller may not know enough to supply the relevant information. Replacement parts are scarce, expensive, and generally available only for separate purchase on individual collectors' web sites. The Keuffel and Esser rules from the period up to about 1950 are particularly problematic, because the end-pieces on the cursors, made ofcelluloid, tend to chemically break down over time.Methods of preserving plasticmay be used to slow the deterioration of some older slide rules, and3D printingmay be used to recreate missing or irretrievably broken cursor parts.[46]
There are still a handful of sources for brand new slide rules. The Concise Company of Tokyo, which began as a manufacturer of circular slide rules in July 1954,[47]continues to make and sell them today. In September 2009, on-line retailerThinkGeekintroduced its own brand of straight slide rules, described as "faithful replica[s]" that were "individually hand tooled".[48]These were no longer available in 2012.[49]In addition, Faber-Castell had a number of slide rules in inventory, available for international purchase through their web store, until mid 2018.[50]Proportion wheels are still used in graphic design.
Various slide rule simulator apps are available for Android and iOS-based smart phones and tablets.
Specialized slide rules such as the E6-B used in aviation, and gunnery slide rules used in laying artillery, are still used, though no longer on a routine basis. These rules serve the teaching and instruction process: in learning to use them, the student also learns the principles behind the calculations, and they remain available as a backup in the event that the modern electronics in general use fail.
TheMIT MuseuminCambridge, Massachusetts, has a collection of hundreds of slide rules,nomograms, andmechanical calculators.[51]TheKeuffel and EsserCompany collection, from the slide rule manufacturer formerly located inHoboken, New Jersey, was donated to MIT around 2005, substantially expanding existing holdings.[52]Selected items from the collection are usually on display at the museum.[53][54]
TheInternational Slide Rule Museumis claimed to be "[the world's] most extensive resource for all things concerning slide rules and logarithmic calculators".[55]The museum's Web page includes extensive literature relative to slide rules in its "Slide Rule Library" section.[56]
|
https://en.wikipedia.org/wiki/Slide_rule
|
In cryptography, GMR is a digital signature algorithm named after its inventors Shafi Goldwasser, Silvio Micali, and Ron Rivest.
As with RSA, the security of the system is related to the difficulty of factoring very large numbers. But, in contrast to RSA, GMR is secure against adaptive chosen-message attacks, which is the currently accepted security definition for signature schemes: even when an attacker receives signatures for messages of their choice, this does not allow them to forge a signature for a single additional message.
|
https://en.wikipedia.org/wiki/GMR_(cryptography)
|
D-Wave Quantum Inc. is a quantum computing company with locations in Palo Alto, California, and Burnaby, British Columbia. D-Wave claims to be the world's first company to sell computers that exploit quantum effects in their operation.[2] D-Wave's early customers include Lockheed Martin, the University of Southern California, Google/NASA, and Los Alamos National Laboratory.
D-Wave does not implement a generic quantum computer; instead, their computers implement specializedquantum annealing.[3]
D-Wave was founded by Haig Farris, Geordie Rose, Bob Wiens, and Alexandre Zagoskin in 1999.[4]Farris taught a business course at theUniversity of British Columbia(UBC), where Rose obtained hisPhD, and Zagoskin was apostdoctoral fellow. The company name refers to their first qubit designs, which usedd-wavesuperconductors.
D-Wave operated as an offshoot from UBC, while maintaining ties with the Department of Physics and Astronomy.[5]It funded academic research in quantum computing, thus building a collaborative network of research scientists. The company collaborated with several universities and institutions, includingUBC,IPHT Jena,Université de Sherbrooke,University of Toronto,University of Twente,Chalmers University of Technology,University of Erlangen, andJet Propulsion Laboratory. These partnerships were listed on D-Wave's website until 2005.[6][7]In June 2014, D-Wave announced a new quantum applications ecosystem with computational finance firm1QB Information Technologies (1QBit)and cancer research group DNA-SEQ to focus on solving real-world problems with quantum hardware.[8]
On May 11, 2011, D-Wave announcedD-Wave One, described as "the world's first commercially available quantum computer", operating on a 128-qubitchipset[9]usingquantum annealing(a general method for finding the global minimum of a function by a process usingquantum fluctuations)[10][11][12][13]to solveoptimization problems. The D-Wave One was built on early prototypes such as D-Wave's Orion Quantum Computer. The prototype was a 16-qubitquantum annealingprocessor, demonstrated on February 13, 2007, at theComputer History MuseuminMountain View, California.[14]D-Wave demonstrated what they claimed to be a 28-qubit quantum annealing processor on November 12, 2007.[15]The chip was fabricated at the NASAJet Propulsion LaboratoryMicrodevices Lab in Pasadena, California.[16]
In May 2013, a collaboration betweenNASA,Google, and theUniversities Space Research Association(USRA) launched aQuantum Artificial Intelligence Labbased on theD-Wave Two512-qubit quantum computer that would be used for research into machine learning, among other fields of study.[17]
On February 17, 2014, D-Wave was featured on the cover ofTime magazine.[18]In the accompanying article,Lev Grossmandescribes D-Wave's approach to quantum computing, the potential of the technology, and the enthusiasm of investors likeJeff Bezos, while acknowledging skepticism from some critics.[19]
On August 20, 2015, D-Wave announced[20]the general availability of the D-Wave 2X[21]system, a 1000-qubit+ quantum computer. This was followed by an announcement[22]on September 28, 2015, that it had been installed at theQuantum Artificial Intelligence Labat NASAAmes Research Center.
In January 2017, D-Wave released the D-Wave 2000Q, and an open-source repository containing software tools for quantum annealers. It containsQbsolv,[23][24][25]which isopen-source softwarethat solvesquadratic unconstrained binary optimizationproblems on both the company's quantum processors and classic hardware architectures.
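A quadratic unconstrained binary optimization (QUBO) problem asks for the binary vector x minimizing xᵀQx. The brute-force sketch below shows the objective that tools like Qbsolv and the annealing hardware target heuristically at far larger sizes; the matrix Q here is a made-up toy instance, and exhaustive search is only feasible for a handful of variables:

```python
from itertools import product

def solve_qubo(Q):
    """Exhaustively minimise x^T Q x over binary vectors x.
    This mirrors the QUBO objective used by quantum annealers,
    but by brute force rather than annealing."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy instance: rewards choosing x0 or x1, but penalises choosing both.
Q = [[-1, 2],
     [0, -1]]
print(solve_qubo(Q))  # one of (0,1) or (1,0), with energy -1
```

Problems such as graph colouring or job scheduling are first rewritten in this xᵀQx form before being handed to the solver, which is why the format is the common interface for both the hardware and the classical fallbacks.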
In 2018, D-Wave released the Leap quantum cloud service.[26]
In 2025, D-Wave announced the sale of an Advantage system toForschungszentrum Jülich, a research center in Germany. The system is installed at Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich.[27]Scientists at JSC, working with collaborators from other institutions, published inNaturethe results of research conducted on the Advantage system simulating the dynamics of false vacuum decay. This work demonstrates that quantum computers can be used to explore complex cosmological phenomena.[28]
Also in 2025, D-Wave published a paper in theScience journaldescribing a computational simulation of a magnetic material that was performed on a quantum computer dramatically faster than performing such a simulation on a traditional computer.[29]However, some physicists questioned these claims.[30]
D-Wave operated from various locations in Vancouver, British Columbia, and laboratory spaces at UBC before moving to its current location in the neighboring suburb of Burnaby. D-Wave also has offices in Palo Alto, California and Vienna, California, USA.[citation needed]
The first commercially produced D-Wave processor was a programmable,[31]superconductingintegrated circuitwith up to 128 pair-wise coupled[32]superconductingflux qubits.[33][34][35]The 128-qubit processor was superseded by a 512-qubit processor in 2013.[36]The processor is designed to implement a special-purposequantum annealing[10][11][12][13]as opposed to being operated as a universalgate-model quantum computer.
The underlying ideas for the D-Wave approach arose from experimental results in condensed matter physics, in particular the work on quantum annealing in magnets performed by Gabriel Aeppli, Thomas Felix Rosenbaum, and collaborators,[37] who had been checking[38][39] the advantages,[40] proposed by Bikas K. Chakrabarti and collaborators, of quantum tunneling/fluctuations in the search for ground state(s) in spin glasses. These ideas were later recast in the language of quantum computation by MIT physicists Edward Farhi, Seth Lloyd, Terry Orlando, and Bill Kaminsky, whose publications in 2000[41] and 2004[42] provided both a theoretical model for quantum computation that fit with the earlier work in quantum magnetism (specifically the adiabatic quantum computing model and quantum annealing, its finite-temperature variant), and a specific enablement of that idea using superconducting flux qubits, a close cousin of the designs D-Wave produced. To understand the origins of much of the controversy around the D-Wave approach, it is important to note that it arose not from the conventional quantum information field, but from experimental condensed matter physics.
On February 13, 2007, D-Wave demonstrated the Orion system, running three different applications at the Computer History Museum in Mountain View, California. This marked what the company claimed was the first public demonstration of a quantum computer and an associated service.[citation needed]
The first application, an example ofpattern matching, performed a search for a similar compound to a known drug within a database ofmolecules. The next application computed a seating arrangement for an event subject to compatibilities and incompatibilities between guests. The last involved solving aSudokupuzzle.[43]
The processors at the heart of D-Wave's "Orion quantum computing system" are designed for use as hardware accelerator processors rather than general-purpose computer microprocessors. The system is designed to solve a particular NP-complete problem related to the two-dimensional Ising model in a magnetic field.[14] D-Wave describes the device as a 16-qubit superconducting adiabatic quantum computer processor.[44][45]
According to the company, a conventional front-end running an application that requires the solution of an NP-complete problem, such as pattern matching, passes the problem to the Orion system.
According to Geordie Rose, founder and Chief Technology Officer of D-Wave, NP-complete problems "are probably not exactly solvable, no matter how big, fast or advanced computers get"; the adiabatic quantum computer used by the Orion system is intended to quickly compute an approximate solution.[46]
On December 8, 2009, at the Neural Information Processing Systems (NeurIPS) conference, a Google research team led byHartmut Nevenused D-Wave's processor to train a binary image classifier.[47]
On May 11, 2011, D-Wave announced the D-Wave One, an integrated quantum computer system running on a 128-qubit processor code-named Rainier. The processor performs a single mathematical operation, discrete optimization, using quantum annealing to solve optimization problems. The D-Wave One was claimed to be the world's first commercially available quantum computer system.[48] Its price was quoted at approximately US$10,000,000.[2]
A research team led by Matthias Troyer andDaniel Lidarfound that, while there is evidence of quantum annealing in D-Wave One, they saw no speed increase compared to classical computers. They implemented an optimized classical algorithm to solve the same particular problem as the D-Wave One.[49][50]
In November 2010,[51]Lockheed Martinsigned a multi-year contract with D-Wave to realize the benefits based upon a quantum annealing processor applied to some of Lockheed's most challenging computation problems. The contract was later announced on May 25, 2011. The contract included the purchase of the D-Wave One quantum computer, maintenance, and associated professional services.[52]
In August 2012, a team of Harvard University researchers presented results of the largest protein-folding problem solved to date using a quantum computer. The researchers solved instances of a lattice protein folding model, known as theMiyazawa–Jernigan model, on a D-Wave One quantum computer.[53][54]
In early 2012, D-Wave revealed a 512-qubit quantum computer,[55]which was launched as a production processor in 2013.[56]
In May 2013, Catherine McGeoch, a consultant for D-Wave, published the first comparison of the technology against regular top-end desktop computers running an optimization algorithm. Using a configuration with 439 qubits, the system performed 3,600 times as fast as CPLEX, the best algorithm on the conventional machine, solving problems with 100 or more variables in half a second compared with half an hour. The results were presented at the Computing Frontiers 2013 conference.[57]
In March 2013, several groups of researchers at the Adiabatic Quantum Computing workshop at theInstitute of Physicsin London, England, produced evidence, though only indirect, ofquantum entanglementin the D-Wave chips.[58]
In May 2013, it was announced that a collaboration between NASA, Google, and the USRA launched a Quantum Artificial Intelligence Lab at the NASA Advanced Supercomputing Division atAmes Research Centerin California, using a 512-qubit D-Wave Two that would be used for research into machine learning, among other fields of study.[17][59]
On August 20, 2015, D-Wave announced the general availability of its D-Wave 2X computer, with over 1,000 qubits in a Chimera graph architecture (due to magnetic offsets and manufacturing variability inherent in superconductor circuit fabrication, fewer than the full 1,152 qubits are functional and available for use; the exact number yielded varies with each specific processor manufactured). This was accompanied by a report comparing speeds with high-end single-threaded CPUs.[60] Unlike previous reports, this one explicitly stated that the question of quantum speedup was not something it tried to address, focusing instead on constant-factor performance gains over classical hardware. For general-purpose problems, a speedup of 15x was reported, but these classical algorithms parallelize efficiently, so the computer would be performing on par with roughly 30 traditional high-end single-threaded cores.
The D-Wave 2X processor is based on a 2048-qubit chip with half of the qubits disabled; these were activated in the D-Wave 2000Q.[61][62]
In February 2019, D-Wave announced the next-generation system that would become theAdvantage[63]and delivered that system in 2020. The Advantage architecture would increase the total number of qubits to 5760 and switch to the Pegasus graph topology, increasing the per-qubit connections to 15. D-Wave claimed the Advantage architecture provided a 10x speedup in time-to-solve over the 2000Q product offering. D-Wave claims that an incremental follow-upAdvantage Performance Updateprovides a 2x speedup over Advantage and a 20x speedup over 2000Q, among other improvements.[64]
In 2021, D-Wave announced the next-generation system that would become the Advantage2,[65] with delivery expected in late 2024 or early 2025. The Advantage2 architecture was expected to increase the total number of qubits to over 7,000 and switch to the Zephyr graph topology, increasing the per-qubit connections to 20.[65][66][67][68][69]
|
https://en.wikipedia.org/wiki/D-Wave_Systems
|
Electronic quantum holography (also known as quantum holographic data storage) is an information storage technology that can encode and read out data at unprecedented density, storing as much as 35 bits per electron.[1]
In 2009, Stanford University's Department of Physics set a new world record for the smallest writing, using a scanning tunneling microscope and electron waves to write the initials "SU" at 0.3 nanometers, surpassing the previous record set by IBM in 1989 using xenon atoms. This achievement also set a record for the density of information: before this technology was invented, information density had not exceeded one bit per atom, but researchers using electronic quantum holography pushed the limit to 35 bits per electron, or 20 bits nm−2.[2]
Acopperchip is placed in a microscope and cleaned.Carbon monoxidemolecules are then placed on the surface and moved around. When the electrons in copper interact with the carbon monoxide molecules, they create interference patterns that create an electronic quantum hologram. This hologram can be read like a stack of pages in a book,[3]and can contain multiple images at differentwavelengths.[4]
|
https://en.wikipedia.org/wiki/Electronic_quantum_holography
|
Thisglossary of quantum computingis a list of definitions of terms and concepts used inquantum computing, its sub-disciplines, and related fields.
|
https://en.wikipedia.org/wiki/Glossary_of_quantum_computing
|
TheIntelligence Advanced Research Projects Activity(IARPA) is an organization, within theOffice of the Director of National Intelligence(ODNI), that is responsible for leading research to overcome difficult challenges facing theUnited States Intelligence Community.[1]IARPA characterizes its mission as follows: "To envision and lead high-risk, high-payoff research that delivers innovative technology for future overwhelming intelligence advantage."
IARPA funds academic and industry research across a broad range of technical areas, including mathematics, computer science, physics, chemistry, biology, neuroscience, linguistics, political science, andcognitive psychology. Most IARPA research is unclassified and openly published. IARPA transfers successful research results and technologies to other government agencies. Notable IARPA investments includequantum computing,[2]superconducting computing, machine learning, and forecasting tournaments.
In 1958, the Advanced Research Projects Agency (ARPA) was created in response to a technological surprise: the Soviet Union's successful launch of Sputnik on October 4, 1957. The ARPA model was designed to anticipate and pre-empt such surprises. As then-Secretary of Defense Neil McElroy said, "I want an agency that makes sure no important thing remains undone because it doesn't fit somebody's mission." The ARPA model has been characterized by ambitious technical goals, competitively awarded research led by term-limited staff, and independent testing and evaluation.
Authorized by the ODNI in 2006, IARPA was modeled afterDARPAbut focused on national intelligence, rather than military, needs. The agency was formed from a consolidation of theNational Security Agency'sDisruptive Technology Office, theNational Geospatial-Intelligence Agency's National Technology Alliance, and theCentral Intelligence Agency's Intelligence Technology Innovation Center.[3]IARPA operations began on October 1, 2007 withLisa Porteras founding director. Its headquarters, a new building in M Square, theUniversity of Maryland's research park inRiverdale Park, Maryland, was dedicated in April 2009.[4]
In 2010, IARPA's quantum computing research was namedSciencemagazine's Breakthrough of the Year.[5][6]In 2015, IARPA was named to lead foundational research and development for theNational Strategic Computing Initiative.[citation needed]IARPA is also a part of other White House science and technology efforts, including the U.S.BRAIN Initiative, and thenanotechnology-inspired Grand Challenge for Future Computing.[7][8]In 2013,The New York Times's op-ed columnistDavid Brookscalled IARPA "one of the government's most creative agencies."[9]
IARPA invests in multi-year research programs, in which academic and industry teams compete to solve a well-defined set of technical problems, regularly scored on a shared set of metrics and milestones. Each program is led by an IARPA Program Manager (PM) who is a term-limited Government employee. IARPA programs are meant to enable researchers to pursue ideas that are potentially disruptive to the status quo.
Most IARPA research is unclassified and openly published.[10]Former directorJason Mathenyhas stated that the agency's goals of openness and external engagement serve to draw in expertise from academia and industry, or even individuals who "might be working in their basement on somedata-scienceproject and might have an idea for how to solve an important problem".[11]IARPA transfers successful research results and technologies to other government agencies.
IARPA is known for its programs to fund research into anticipatory intelligence, usingdata scienceto make predictions about future events ranging from political elections to disease outbreaks tocyberattacks, some of which focus onopen-source intelligence.[12][13][14]IARPA has pursued these objectives not only through traditional funding programs but also through tournaments[12][13]and prizes.[11]Aggregative Contingent Estimation(ACE) is an example of one such program.[11][13]Other projects involve the analysis of images or videos that lackmetadataby directly analyzing the media's content itself. Examples given by IARPA include determining the location of an image by analyzing features such as the placement of trees or a mountain skyline, or determining whether a video is of a baseball game or a traffic jam.[11]Another program focuses on developingspeech recognitiontools that can transcribe arbitrary languages.[15]
IARPA is also involved in high-performance computing and alternative computing methods. In 2015, IARPA was named one of two foundational research and development agencies in the National Strategic Computing Initiative, with the specific charge of finding "future computing paradigms offering an alternative to standard semiconductor computing technologies".[citation needed] One such approach is cryogenic superconducting computing, which seeks to use superconductors such as niobium, rather than semiconductors, to reduce the energy consumption of future exascale supercomputers.[11][15]
Several programs at IARPA focus on quantum computing[2] and neuroscience.[16] IARPA is a major funder of quantum computing research, due to its applications in quantum cryptography. As of 2009, IARPA was said to provide a large portion of quantum computing funding resources in the United States.[17] Quantum computing research funded by IARPA was named Science magazine's Breakthrough of the Year in 2010,[5][6] and physicist David Wineland won the 2012 Nobel Prize in Physics for quantum computing research funded by IARPA.[11] IARPA is also involved in neuromorphic computation efforts as part of the U.S. BRAIN Initiative and the National Nanotechnology Initiative's Grand Challenge for Future Computing. IARPA's MICrONS project seeks to reverse engineer one cubic millimeter of brain tissue and use insights from its study to improve machine learning and artificial intelligence.[7][8]
Below are some of the past and current research programs of IARPA.
|
https://en.wikipedia.org/wiki/IARPA
|
India's quantum computer is a proposed quantum computer planned to be developed by 2026. A quantum computer is a computer based on quantum phenomena and governed by the principles of quantum mechanics in physics. At present, India has a small-scale quantum computer of 7 qubits, developed at the Tata Institute of Fundamental Research, Mumbai.[1] Over the next five years, India is expected to invest around one billion dollars in programs related to the development of the quantum computer.[2] The Government of India has launched an initiative called the National Quantum Mission to achieve the goal of developing India's quantum computer.[3][4] India is one of seven countries with a dedicated national quantum mission for the development of quantum technologies.[5] Union Defence Minister Rajnath Singh emphasized the development of quantum computing during the 16th foundation day ceremony of the Indian Institute of Technology, Mandi.[6]
"The time to come is of quantum computing."
India started its journey towards the development of a quantum computer in 2018 by launching the Quantum Enabled Science and Technology (QuEST) program. The QuEST program funded 51 national quantum labs with a budget of 250 crore Indian rupees to develop the infrastructure required for quantum technologies in India.[7] In 2020, the Government of India announced a budget of 8,000 crore Indian rupees for the development of quantum technologies and their applications. In the same year, the government launched a National Mission on Quantum Technologies & Applications (NM-QTA) for a period of five years, to be implemented by the Department of Science & Technology (DST).[8] After the announcement, the mission was delayed for four years with no further progress. On 19 April 2023, the government revised the budget to 6,003.65 crore Indian rupees and launched the National Quantum Mission for the period from 2023–24 to 2030–31. Ajai Chowdhry, the co-founder of HCL, was appointed chairman of the Mission Governing Board for the National Quantum Mission.[3] With the 2023 announcement, India became the seventh country, after the US, Austria, Finland, France, Canada and China, to have a dedicated national mission for the development of quantum technologies. The National Quantum Mission is one of nine missions of national importance under the Prime Minister's Science and Technology Innovation Advisory Council (PM-STIAC).[5]
According to Ajai Chowdhry, chairman of the Mission Governing Board of the National Quantum Mission, India's first quantum computer will have a capacity of 6 qubits. It is expected to be built within a year or a few months.[3]
The mission plans to build a 20–50-qubit quantum computer within the next three years, a 50–100-qubit machine within five years, and a quantum computer capable of computation with 50–1000 qubits within ten years.[3]
The mission further plans to establish satellite-based secure quantum communication over distances of up to 2,000 kilometres between ground stations within the country, as well as long-distance secure quantum communication with other countries via both satellite and fibre links. It also plans a multi-node quantum network implementing inter-city quantum key distribution (QKD) over distances of more than 2,000 kilometres.[5] In addition, atomic clocks and magnetometers are planned for precision navigation.[9]
The National Quantum Mission has established four Thematic Hubs (T-Hubs) to propel research and innovation in quantum technologies and position India in the global quantum technology race. The hubs cover four verticals: quantum computing, quantum communication, quantum sensing & metrology, and quantum materials & devices. The Indian Institute of Science in Bangalore is the Thematic Hub for quantum computing, and the Indian Institute of Technology Madras was selected for quantum communication. The Indian Institute of Technology Bombay and the Indian Institute of Technology Delhi are the Thematic Hubs for quantum sensing & metrology and quantum materials & devices, respectively.[10]
In January 2021, the Ministry of Electronics and Information Technology (MeitY), in collaboration with Amazon Web Services (AWS), established a Quantum Computing Applications Lab to facilitate research and development in quantum computing. In March of that year, the Department of Science and Technology and 13 research groups from the Indian Institute of Science Education and Research (IISER) launched the I-HUB Quantum Technology Foundation (I-HUB QTF) at Pune for the development of quantum technologies. On 22 March, the Indian Space Research Organisation (ISRO) successfully demonstrated free-space quantum communication over a distance of 300 metres. A number of indigenous key technologies were developed to achieve it, including an indigenous NAVIC receiver for time synchronization between the transmitter and receiver modules, and gimbal mechanism systems. A live video conference was demonstrated using a Quantum Key Distribution (QKD) link between two line-of-sight buildings on the campus of the Space Applications Centre (SAC) in Ahmedabad. The demonstration was conducted at night to prevent interference from direct sunlight. The experiment is considered a major achievement in ISRO's goal of demonstrating Satellite Based Quantum Communication (SBQC).[11][12] In July, the Defence Institute of Advanced Technology (DIAT) and the Centre for Development of Advanced Computing (C-DAC) began collaborating to develop quantum computers in India. In August, the Quantum Computer Simulator (QSim) toolkit was launched for academics, industry professionals, students and the scientific community in India, providing an environment for the development of quantum technologies and allowing researchers to write and debug the quantum code necessary for quantum algorithms.
In October, the Centre for Development of Telematics (C-DOT) unveiled a Quantum Key Distribution (QKD) solution supporting more than 100 kilometres on standard optical fibre and launched a quantum communication lab. In December, a quantum computing laboratory and an AI centre were established by the Indian Army at its engineering college in Madhya Pradesh, backed by the National Security Council Secretariat (NSCS).[13]
In April 2022, Indian scientists from the DRDO and the Indian Institute of Technology Delhi successfully demonstrated a Quantum Key Distribution (QKD) link over more than 100 kilometres, using existing commercial-grade fibre-optic networks between Prayagraj and Vindhyachal in Uttar Pradesh.[13] On 27 March 2023, Union Telecom Minister Ashwini Vaishnaw announced at India's first international quantum enclave that the country's first quantum-computing-based telecom network link was operational between Sanchar Bhawan and the National Informatics Centre office at the CGO Complex in New Delhi.[14]
According to Professor R. Vijayaraghavan of the Tata Institute of Fundamental Research, Mumbai, the institute has demonstrated a 3-qubit quantum computer based on superconducting qubits.[15] On 28 August 2024, Indian scientists from the DRDO Young Scientists Laboratory for Quantum Technologies (DYSL-QT) at Pune and the Tata Institute of Fundamental Research, Mumbai completed end-to-end testing of a 6-qubit quantum processor based on superconducting circuit technology.[16] A 6-qubit quantum processor is a quantum computing device that uses six quantum bits (qubits) to process information. The project was completed through the collaborative efforts of three organisations: DYSL-QT, TIFR, and Tata Consultancy Services (TCS).[17]
The control and measurement apparatus for the quantum processor was developed by the DYSL-QT team at Pune, using a combination of off-the-shelf electronics and custom-programmed development boards. The Tata Institute of Fundamental Research team invented a novel ring-resonator architecture, which was employed in the design and fabrication of the qubits. The Tata Consultancy Services team contributed the cloud-based interface for the quantum hardware. The successful testing of the 6-qubit quantum processor is considered a significant milestone in India's quantum computing journey and positions the country as a significant player in the global quantum technology race.[17]
C-DAC is building a quantum computing centre, called the Quantum Reference Facility, at its campus in Bangalore with the support of the National Quantum Mission. The project has three components: importing components, assembly, and developing software and applications. The facility is expected to be completed and fully operational within the next three years.[18]
The Indian Institute of Technology Mandi is developing an indigenous room-temperature quantum computer at its Center for Quantum Science and Technologies (CQST) with the assistance of the National Quantum Mission. The quantum computer will use photons for faster calculations. According to an official of the institute, the room-temperature optical quantum computer is expected to have a "unique ability to analyse data and suggest solutions with 86 per cent accuracy without using traditional algorithms", and "instead of CPU, the quantum computer will operate as a graphics processor (GPU) with a sophisticated user interface, quantum simulator and quantum processing capabilities in place".[19]
In the race to develop quantum technologies, several startup companies in India are emerging to boost research and development in quantum computing. The Bangalore-based startup QpiAI was founded in 2019 by Nagendra Nagaraja, currently the company's CEO, to advance quantum computing and generative AI technologies. The company plans to establish a 25-qubit quantum computer at its Bangalore headquarters within the year.[20][21]
Another quantum computing startup, BosonQ Psi, was also established in Bangalore. It is a simulation software company utilizing quantum computing, named after the Indian quantum physicist Satyendra Nath Bose and the fundamental quantity psi. It has also joined the US-based IT company IBM's quantum network.[22][23]
The Government of India, under its two flagship initiatives, the National Quantum Mission and the National Mission on Interdisciplinary Cyber-Physical Systems, has selected eight major startups for innovation in advanced technologies in the areas of quantum computing, communication, sensing, and advanced materials. They are QNu Labs (Bengaluru), QPiAI India Private Limited (Bengaluru), Dimira Technologies Private Limited (IIT Mumbai), Prenishq Private Limited (IIT Delhi), QuPrayog Private Limited (Pune), Quanastra Private Limited (Delhi), Pristine Diamonds Private Limited (Ahmedabad) and Quan2D Technologies Private Limited (Bengaluru).[9]
The eight startups are responsible for developing a range of technologies. QNu Labs works on quantum communication, specializing in quantum-safe heterogeneous networks that offer secure communication solutions against cyber threats. QPiAI India Private Limited works on superconducting quantum computing, building a superconducting quantum computer that will contribute to the development of scalable, high-performance quantum systems.[9]
Dimira Technologies Private Limited and Prenishq Private Limited are working on essential hardware for the quantum computer. Dimira Technologies is developing indigenous cryogenic cables, a critical component for maintaining the low-temperature environments required by quantum hardware. Prenishq is developing precision diode-laser systems, which are essential for quantum computing and sensing technologies. QuPrayog Private Limited and Quanastra Private Limited are working on quantum sensing technologies. QuPrayog is working on optical atomic clocks and related quantum metrology technologies, which have potential applications in healthcare and precise timekeeping. Quanastra is working on advanced cryogenic systems and superconducting detectors to support quantum sensing and communication efforts.[9]
Pristine Diamonds Private Limited in Ahmedabad and Quan2D Technologies Private Limited in Bangalore are working on quantum materials and photon detection. Pristine Diamonds is designing diamond-based materials for quantum sensing, a promising avenue in quantum materials science. Quan2D Technologies is developing superconducting nanowire single-photon detectors to enhance quantum communication capabilities.[9]
|
https://en.wikipedia.org/wiki/India%27s_quantum_computer
|
IonQ is a quantum computing hardware and software company headquartered in College Park, Maryland. The company develops general-purpose trapped ion quantum computers and accompanying software to generate, optimize, and execute quantum circuits.
IonQ was co-founded by Christopher Monroe and Jungsang Kim, professors at Duke University,[1] in 2015,[2] with the help of Harry Weller and Andrew Schoen, partners at venture firm New Enterprise Associates.[3]
The company is an offshoot of the co-founders' 25 years of academic research in quantum information science.[2] Monroe's quantum computing research began as a Staff Researcher at the National Institute of Standards and Technology (NIST) with Nobel-laureate physicist David Wineland,[4] where he led a team using trapped ions to produce the first controllable qubits and the first controllable quantum logic gate,[5] culminating in a proposed architecture for a large-scale trapped ion computer.[6]
Kim and Monroe began collaborating formally as a result of larger research initiatives funded by the Intelligence Advanced Research Projects Activity (IARPA).[7] They wrote a review paper[7] for Science magazine entitled Scaling the Ion Trap Quantum Processor,[8] pairing Monroe's research in trapped ions with Kim's focus on scalable quantum information processing and quantum communication hardware.[9]
This research partnership became the seed for IonQ's founding. In 2015, New Enterprise Associates invested $2 million to commercialize the technology Monroe and Kim proposed in their Science paper.[3]
In 2016, they brought on David Moehring from IARPA—where he was in charge of several quantum computing initiatives[10][3]—to be the company's chief executive.[2] In 2017, they raised a $20 million series B led by GV (formerly Google Ventures) and New Enterprise Associates, the first investment GV has made in quantum computing technology.[11] They began hiring in earnest in 2017,[12] with the intent to bring an offering to market by late 2018.[2][13] In May 2019, former Amazon Prime executive Peter Chapman was named the company's new CEO.[14][15] IonQ then partnered to make its quantum computers available to the public through Amazon Web Services, Microsoft Azure, and Google Cloud.[16][17][18]
In October 2021, IonQ became publicly listed on the New York Stock Exchange via a special-purpose acquisition company.[19][20] The company opened a dedicated research and development facility in Bothell, Washington, in February 2024, touting it as the first quantum computing factory in the United States.[21]
IonQ's hardware is based on a trapped ion architecture, using technology that Monroe developed at the University of Maryland and that Kim developed at Duke.[22]
In November 2017, IonQ presented a paper at the IEEE International Conference on Rebooting Computing describing its technology strategy and progress to date. The paper outlines the use of a microfabricated ion trap and several optical and acousto-optical systems to cool, initialize, and compute. It also describes a cloud API, custom language bindings, and quantum computing simulators that take advantage of the trapped ion system's complete connectivity.[23]
IonQ and some experts claim that trapped ions could provide a number of benefits over other physical qubit types in several measures, such as accuracy, scalability, predictability, and coherence time.[24][2][25] Others criticize the slow operation times and the relatively large size of trapped ion hardware, claiming other qubit technologies are just as promising.[24]
|
https://en.wikipedia.org/wiki/IonQ
|
IQM may refer to:
|
https://en.wikipedia.org/wiki/IQM
|
This is a list of emerging technologies, which are in-development technical innovations that have significant potential in their applications. The criteria for this list are that the technology must:
Listing here is not a prediction that the technology will become widely adopted, only a recognition of significant potential to become widely adopted or highly useful if ongoing work continues, is successful, and the work is not overtaken by other technologies.
(T-RAM, CBRAM, SONOS, RRAM, racetrack memory, NRAM, phase-change memory, FJG RAM, millipede memory, skyrmion, programmable metallization cell, ferroelectric RAM, magnetoresistive RAM, nvSRAM)
(SMR, HAMR, BPM, MAMR, TDMR, CPP/GMR, PMR, hard disk drive)
|
https://en.wikipedia.org/wiki/List_of_emerging_technologies
|
This list contains quantum processors, also known as quantum processing units (QPUs). Some devices listed below have only been announced at press conferences so far, with no actual demonstrations or scientific publications characterizing the performance.
Quantum processors are difficult to compare due to their different architectures and approaches. Because of this, published physical qubit numbers do not reflect the performance levels of the processor. Performance is instead characterized by the number of logical qubits, or by benchmarking metrics such as quantum volume, randomized benchmarking, or circuit layer operations per second (CLOPS).[1]
These QPUs are based on the quantum circuit and quantum logic gate-based model of computing.
[Table of gate-based quantum processors not reproduced here: the surviving fragments list per-device fidelity figures, such as two-qubit gate fidelities (roughly 87–99%) and SPAM fidelities.]
These QPUs are based on quantum annealing, not to be confused with digital annealing.[72]
These QPUs are based on analog Hamiltonian simulation.
|
https://en.wikipedia.org/wiki/List_of_quantum_processors
|
Magic state distillation is a method for creating more accurate quantum states from multiple noisy ones, which is important[1] for building fault-tolerant quantum computers. It has also been linked[2] to quantum contextuality, a concept thought to contribute to quantum computers' power.[3]
The technique was first proposed by Emanuel Knill in 2004,[4] and further analyzed by Sergey Bravyi and Alexei Kitaev the same year.[5]
Thanks to the Gottesman–Knill theorem, it is known that some quantum operations (operations in the Clifford group) can be perfectly simulated in polynomial time on a classical computer. In order to achieve universal quantum computation, a quantum computer must be able to perform operations outside this set. Magic state distillation achieves this, in principle, by concentrating the usefulness of imperfect resources, represented by mixed states, into states that are conducive to performing operations that are difficult to simulate classically.
A variety of magic state distillation routines for qubits[6][7] and for qudits[8][9][10] with various advantages have been proposed.
The Clifford group consists of a set of n{\displaystyle n}-qubit operations generated by the gates {H, S, CNOT} (where H is the Hadamard gate and S is [100i]{\displaystyle {\begin{bmatrix}1&0\\0&i\end{bmatrix}}}), called Clifford gates. The Clifford group generates stabilizer states, which can be efficiently simulated classically, as shown by the Gottesman–Knill theorem. This set of gates together with a non-Clifford operation is universal for quantum computation.[5]
Magic states are purified from n{\displaystyle n} copies of a mixed state ρ{\displaystyle \rho }.[6] These states are typically provided via an ancilla to the circuit. A magic state for the π/6{\displaystyle \pi /6} rotation operator is |M⟩=cos(β/2)|0⟩+eiπ4sin(β/2)|1⟩{\displaystyle |M\rangle =\cos(\beta /2)|0\rangle +e^{i{\frac {\pi }{4}}}\sin(\beta /2)|1\rangle } where β=arccos(13){\displaystyle \beta =\arccos \left({\frac {1}{\sqrt {3}}}\right)}. A non-Clifford gate can be generated by combining (copies of) magic states with Clifford gates.[5] Since a set of Clifford gates combined with a non-Clifford gate is universal for quantum computation, magic states combined with Clifford gates are also universal.
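As a small numerical check of the definitions above, the following sketch (using NumPy; an illustrative aside, not part of the cited sources) constructs the state |M⟩ and verifies that its Bloch vector points along the (1,1,1) axis, i.e. that it sits symmetrically between the eigenstates of the three Pauli operators:

```python
import numpy as np

# Magic state |M> = cos(b/2)|0> + e^{i pi/4} sin(b/2)|1>, with b = arccos(1/sqrt(3))
beta = np.arccos(1 / np.sqrt(3))
M = np.array([np.cos(beta / 2), np.exp(1j * np.pi / 4) * np.sin(beta / 2)])

# Pauli matrices, used to read off the Bloch vector components <M|P|M>
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

bloch = [float(np.real(M.conj() @ (P @ M))) for P in (X, Y, Z)]
# Each component equals 1/sqrt(3): the state lies on the (1,1,1) axis,
# equidistant from the +1 eigenstates of X, Y and Z.
```

The symmetric placement between the Pauli eigenstates is what distinguishes this state from any stabilizer state preparable with Clifford gates alone.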
The first magic state distillation algorithm, invented by Sergey Bravyi and Alexei Kitaev, is as follows.[5]
|
https://en.wikipedia.org/wiki/Magic_state_distillation
|
Metacomputing is all computing and computing-oriented activity which involves computing knowledge (science and technology) utilized for the research, development and application of different types of computing. It may also deal with numerous types of computing applications, such as industry, business, management and human-related management. New emerging fields of metacomputing focus on the methodological and technological aspects of the development of large computer networks/grids, such as the Internet, intranets and other territorially distributed computer networks for special purposes.[1]
Metacomputing, as a computing of computing, includes the organization of large computer networks, the choice of design criteria (for example, a peer-to-peer or centralized solution) and the development of metacomputing software (middleware, metaprogramming). In specific domains, the concept of metacomputing is used to describe software meta-layers, which are networked platforms for the development of user-oriented calculations, for example for computational physics and bioinformatics.
Here, serious scientific problems of systems/networks complexity emerge, related not only to domain-dependent complexities but focused on the systemic meta-complexity of computer network infrastructures.
Metacomputing is also a useful descriptor for self-referential programming systems. Often these systems function as fifth-generation computer languages, which require an underlying metaprocessor software operating system in order to operate. Typically metacomputing occurs in an interpreted or real-time compiling system, since the changing nature of information in processing results may produce an unpredictable compute state throughout the existence of the metacomputer (the information state operated upon by the metacomputing platform).
From the human and social perspectives, metacomputing is especially focused on human–computer software, cognitive interrelations/interfaces, the possibilities of developing intelligent computer grids for the cooperation of human organizations, and on ubiquitous computing technologies. In particular, it relates to the development of software infrastructures for the computational modeling and simulation of cognitive architectures for various decision support systems.
Metacomputing also refers to the general problems of the computability of human knowledge, that is, the limits of transforming human knowledge and individual thinking into the form of computer programs. These and similar questions are also of interest in mathematical psychology.
|
https://en.wikipedia.org/wiki/Metacomputing
|
Natural computing,[1][2] also called natural computation, is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials (e.g., molecules) to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others. However, the field is more closely related to biological computation.
Computational paradigms studied by natural computing are abstracted from natural phenomena as diverse as self-replication, the functioning of the brain, Darwinian evolution, group behavior, the immune system, the defining properties of life forms, cell membranes, and morphogenesis.
Besides traditional electronic hardware, these computational paradigms can be implemented on alternative physical media such as biomolecules (DNA, RNA) or trapped-ion quantum computing devices.
Dually, one can view processes occurring in nature as information processing. Such processes include self-assembly, developmental processes, gene regulation networks, protein–protein interaction networks, biological transport (active transport, passive transport) networks, and gene assembly in unicellular organisms. Efforts to understand biological systems also include engineering of semi-synthetic organisms, and understanding the universe itself from the point of view of information processing. Indeed, the idea has even been advanced that information is more fundamental than matter or energy.
The Zuse–Fredkin thesis, dating back to the 1960s, states that the entire universe is a huge cellular automaton which continuously updates its rules.[3][4] More recently it has been suggested that the whole universe is a quantum computer that computes its own behaviour.[5] The view of the universe/nature as a computational mechanism is addressed by exploring nature with the help of the ideas of computability[6] and by studying natural processes as computations (information processing).[7]
The most established "classical" nature-inspired models of computation are cellular automata, neural computation, and evolutionary computation. More recent computational systems abstracted from natural processes include swarm intelligence, artificial immune systems, membrane computing, and amorphous computing. Detailed reviews can be found in many books.[8][9]
A cellular automaton is a dynamical system consisting of an array of cells. Space and time are discrete and each of the cells can be in a finite number of states. The cellular automaton updates the states of its cells synchronously according to the transition rules given a priori. The next state of a cell is computed by a transition rule and depends only on its current state and the states of its neighbors.
Conway's Game of Life is one of the best-known examples of cellular automata, shown to be computationally universal. Cellular automata have been applied to modelling a variety of phenomena such as communication, growth, reproduction, competition, evolution and other physical and biological processes.
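The synchronous, neighbor-only update rule can be made concrete with a minimal NumPy sketch of one Game of Life step on a toroidal grid (an illustrative implementation, not drawn from the cited sources):

```python
import numpy as np

def life_step(grid):
    """One synchronous Game of Life update on a toroidal (wrap-around) grid."""
    # Count the 8 neighbors of every cell by summing shifted copies of the grid.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # A dead cell with exactly 3 live neighbors is born;
    # a live cell with 2 or 3 live neighbors survives.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

# A "blinker": three live cells in a row oscillate with period 2.
grid = np.zeros((5, 5), dtype=np.uint8)
grid[2, 1:4] = 1
```

Every cell's next state depends only on its own state and its eight neighbors, and all cells update simultaneously, matching the transition-rule description above.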
Neural computation is the field of research that emerged from the comparison between computing machines and the human nervous system.[10] This field aims both to understand how the brain of living organisms works (brain theory or computational neuroscience), and to design efficient algorithms based on the principles of how the human brain processes information (artificial neural networks, ANN[11]).
An artificial neural network is a network of artificial neurons.[12] An artificial neuron A is equipped with a function fA{\displaystyle f_{A}}, receives n real-valued inputs x1,x2,…,xn{\displaystyle x_{1},x_{2},\ldots ,x_{n}} with respective weights w1,w2,…,wn{\displaystyle w_{1},w_{2},\ldots ,w_{n}}, and it outputs fA(w1x1+w2x2+…+wnxn){\displaystyle f_{A}(w_{1}x_{1}+w_{2}x_{2}+\ldots +w_{n}x_{n})}. Some neurons are selected to be the output neurons, and the network function is the vectorial function that associates to the n input values the outputs of the m selected output neurons.
Note that different choices of weights produce different network functions for the same inputs. Back-propagation is a supervised learning method by which the weights of the connections in the network are repeatedly adjusted so as to minimize the difference between the vector of actual outputs and that of desired outputs. Learning algorithms based on backwards propagation of errors can be used to find optimal weights for a given topology of the network and input–output pairs.
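The single-neuron case can be sketched in a few lines of Python. This toy example (the learning rate, inputs, and target value are arbitrary illustrative choices) computes the output f_A(w1x1 + ... + wnxn) with a sigmoid activation and repeatedly adjusts the weights by gradient descent on the squared error, which is the core idea behind back-propagation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(w, x):
    """One artificial neuron: f_A(w1*x1 + ... + wn*xn) with f_A = sigmoid."""
    return sigmoid(np.dot(w, x))

def update(w, x, target, lr=0.5):
    """One gradient-descent step on the error E = (y - target)^2 / 2."""
    y = neuron(w, x)
    grad = (y - target) * y * (1 - y) * x   # chain rule: dE/dw
    return w - lr * grad

w = np.zeros(2)                  # two inputs, weights initialized to zero
x = np.array([1.0, 1.0])
for _ in range(1000):            # repeatedly adjust the weights
    w = update(w, x, target=0.9)
# The neuron's output now approximates the desired output 0.9
```

In a multi-layer network the same gradient is propagated backwards layer by layer, which is where back-propagation gets its name.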
Evolutionary computation[13] is a computational paradigm inspired by Darwinian evolution.
An artificial evolutionary system is a computational system based on the notion of simulated evolution. It comprises a constant- or variable-size population of individuals, a fitness criterion, and genetically inspired operators that produce the next generation from the current one.
The initial population is typically generated randomly or heuristically, and typical operators are mutation and recombination. At each step, the individuals are evaluated according to the given fitness function (survival of the fittest). The next generation is obtained from selected individuals (parents) by using genetically inspired operators. The choice of parents can be guided by a selection operator which reflects the biological principle of mate selection. This process of simulated evolution eventually converges towards a nearly optimal population of individuals, from the point of view of the fitness function.
The study of evolutionary systems has historically evolved along three main branches: Evolution strategies provide a solution to parameter optimization problems for real-valued as well as discrete and mixed types of parameters. Evolutionary programming originally aimed at creating optimal "intelligent agents" modelled, e.g., as finite state machines. Genetic algorithms[14] applied the idea of evolutionary computation to the problem of finding a (nearly) optimal solution to a given problem. Genetic algorithms initially consisted of an input population of individuals encoded as fixed-length bit strings, the genetic operators mutation (bit flips) and recombination (combination of a prefix of one parent with the suffix of the other), and a problem-dependent fitness function.
Genetic algorithms have been used to optimize computer programs, calledgenetic programming, and today they are also applied to real-valued parameter optimization problems as well as to many types ofcombinatorial tasks.
Estimation of distribution algorithms (EDAs), on the other hand, are evolutionary algorithms that substitute traditional reproduction operators with model-guided ones. Such models are learned from the population by employing machine learning techniques and are represented as probabilistic graphical models, from which new solutions can be sampled[15][16]or generated from guided crossover.[17][18]
Swarm intelligence,[19]sometimes referred to ascollective intelligence, is defined as the problem solving behavior that emerges from the interaction ofindividual agents(e.g.,bacteria,ants,termites,bees,spiders,fish,birds) which communicate with other agents by acting on theirlocal environments.
Particle swarm optimizationapplies this idea to the problem of finding an optimal solution to a given problem
by a search through a (multi-dimensional)solution space. The initial set-up is a swarm ofparticles, each representing a possible solution to the problem. Each particle has its ownvelocitywhich depends on its previous velocity (the inertia component), the tendency towards the past personal best position (the nostalgia component), and its tendency towards a global neighborhood optimum or local neighborhood optimum (the social component). Particles thus move through a multidimensional space and eventually converge towards a point between theglobal bestand their personal best.
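The three velocity components described above (inertia, nostalgia, social) can be sketched for a one-dimensional toy objective; the objective function, swarm size, and coefficient values below are illustrative assumptions:

```python
import random

random.seed(1)

def f(x):                        # toy objective to minimize: (x - 3)^2
    return (x - 3.0) ** 2

W, C1, C2 = 0.7, 1.5, 1.5        # inertia, nostalgia, social weights
pos = [random.uniform(-10, 10) for _ in range(20)]
vel = [0.0] * 20
pbest = pos[:]                   # each particle's personal best position
gbest = min(pos, key=f)          # the swarm's global best position

for _ in range(100):
    for i in range(20):
        r1, r2 = random.random(), random.random()
        vel[i] = (W * vel[i]                          # inertia component
                  + C1 * r1 * (pbest[i] - pos[i])     # nostalgia component
                  + C2 * r2 * (gbest - pos[i]))       # social component
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=f)
```

The swarm contracts towards a point between the global best and each personal best, as described above.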
Particle swarm optimization algorithms have been applied to various optimization problems, and tounsupervised learning,game learning, andschedulingapplications.
In the same vein,ant algorithmsmodel the foraging behaviour of ant colonies.
To find the best path between the nest and a source of food, ants rely on indirect communication: they lay a pheromone trail on the way back to the nest if they have found food, and they follow the concentration of pheromones when looking for food. Ant algorithms have been successfully applied to a variety of combinatorial optimization problems over discrete search spaces.
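The pheromone feedback loop can be sketched in a deliberately reduced setting: two fixed trails of different lengths (an assumption for brevity; real ant algorithms work on full graphs), with deposits inversely proportional to trail length and evaporation each round:

```python
import random

random.seed(2)
# Two candidate trails from nest to food (toy example): lengths 4 and 10.
lengths = {"short": 4.0, "long": 10.0}
pher = {"short": 1.0, "long": 1.0}       # initial pheromone on each trail
RHO, Q = 0.1, 1.0                        # evaporation rate, deposit constant

for _ in range(200):
    for _ant in range(10):
        total = pher["short"] + pher["long"]
        # Each ant picks a trail with probability proportional to pheromone.
        trail = "short" if random.random() < pher["short"] / total else "long"
        pher[trail] += Q / lengths[trail]   # shorter trail gets more pheromone
    for t in pher:                          # evaporation weakens unused trails
        pher[t] *= 1 - RHO
```

Because the shorter trail receives larger deposits per trip, positive feedback concentrates the colony on it.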
Artificial immune systems (a.k.a. immunological computation orimmunocomputing) are computational systems inspired by the natural immune systems of biological organisms.
Viewed as an information processing system, thenatural immune systemof organisms performs many complex tasks inparallelanddistributed computingfashion.[20]These include distinguishing between self andnonself,[21]neutralizationof nonselfpathogens(viruses, bacteria,fungi, andparasites),learning,memory,associative retrieval,self-regulation, andfault-tolerance.Artificial immune systemsare abstractions of the natural immune system, emphasizing these computational aspects.
Their applications includecomputer virus detection,anomaly detectionin a time series of data,fault diagnosis,pattern recognition, machine learning,bioinformatics, optimization,roboticsandcontrol.
Membrane computinginvestigates computing models abstracted from thecompartmentalized structureof living cells affected bymembranes.[22]A generic membrane system (P-system) consists of cell-like compartments (regions) delimited bymembranes, that are placed in anested hierarchicalstructure. Each membrane-enveloped region contains objects, transformation rules which modify these objects, as well as transfer rules, which specify whether the objects will be transferred outside or stay inside the region.
Regions communicate with each other via the transfer of objects.
The computation by a membrane system starts with an initial configuration, where the number (multiplicity) of each object is set to some value for each region (multiset of objects).
It proceeds by choosing,nondeterministicallyand in amaximally parallel manner,
which rules are applied to which objects. The output of the computation is collected from ana prioridetermined output region.
Applications of membrane systems include machine learning, modelling of biological processes (photosynthesis, certainsignaling pathways,quorum sensingin bacteria, cell-mediatedimmunity), as well as computer science applications such ascomputer graphics,public-key cryptography,approximationandsorting algorithms, as well as analysis of variouscomputationally hard problems.
In biological organisms,morphogenesis(the development of well-defined shapes and functional structures) is achieved by the interactions between cells guided by the geneticprogramencoded in the organism's DNA.
Inspired by this idea,amorphous computingaims at engineering well-defined shapes and patterns, or coherent computational behaviours, from the local interactions of a multitude of simple unreliable, irregularly placed, asynchronous, identically programmed computing elements (particles).[23]As a programming paradigm, the aim is to find newprogramming techniquesthat would work well for amorphous computing environments. Amorphous computing also plays an important role as the basis for "cellular computing" (see the topicssynthetic biologyandcellular computing, below).
The understanding that morphology performs computation is used to analyze the relationship between morphology and control, and to theoretically guide the design of robots with reduced control requirements. It has been applied both in robotics and to the understanding of cognitive processes in living organisms; seeMorphological computation.[24]
Cognitive computing (CC) is a new type of computing, typically with the goal of modelling functions of human sensing, reasoning, and response to stimulus; seeCognitive computing.[25]
Cognitive capacities of present-day cognitive computing are far from human level. The same info-computational approach can be applied to other, simpler living organisms. Bacteria are an example of a cognitive system modelled computationally, seeEshel Ben-JacobandMicrobes-mind.
Artificial life(ALife) is a research field whose ultimate goal is to understand the essential properties of living organisms[26]by building, within electronic computers or other artificial media,ab initiosystems that exhibit properties normally associated only with living organisms.
Early examples includeLindenmayer systems(L-systems), that have been used to model plant growth and development. An L-system is a parallel rewriting system that starts with an initial word, and applies its rewriting rules in parallel to all letters of the word.[27]
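Lindenmayer's original algae system (axiom "A", rules A → AB, B → A) illustrates the parallel rewriting step, in which all letters of the word are rewritten at once:

```python
# Parallel rewriting for Lindenmayer's original algae L-system.
rules = {"A": "AB", "B": "A"}

def step(word):
    # Apply the rewriting rules to every letter of the word in parallel.
    return "".join(rules.get(c, c) for c in word)

word = "A"                    # the initial word (axiom)
words = [word]
for _ in range(5):
    word = step(word)
    words.append(word)
# words: A, AB, ABA, ABAAB, ABAABABA, ABAABABAABAAB
```

The word lengths follow the Fibonacci sequence (1, 2, 3, 5, 8, 13), a classic property of this system.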
Pioneering experiments in artificial life included the design of evolving "virtual block creatures" acting in simulated environments with realistic features such askinetics,dynamics,gravity,collision, andfriction.[28]These artificial creatures were selected for their abilities to swim, walk, or jump, and they competed for a common limited resource (controlling a cube). The simulation resulted in the evolution of creatures exhibiting surprising behaviour: some developed hands to grab the cube, others developed legs to move towards the cube. This computational approach was further combined with rapid manufacturing technology to actually build the physical robots that virtually evolved.[29]This marked the emergence of the field ofmechanical artificial life.
The field ofsynthetic biologyexplores a biological implementation of similar ideas.
Other research directions within the field of artificial life includeartificial chemistryas well as traditionally biological phenomena explored in artificial systems, ranging from computational processes such asco-evolutionaryadaptation and development, to physical processes such as growth,self-replication, andself-repair.
All of the computational techniques mentioned above, while inspired by nature, have been implemented until now mostly on traditionalelectronic hardware. In contrast, the two paradigms introduced here,molecular computingandquantum computing, employ radically different types of hardware.
Molecular computing(a.k.a. biomolecular computing, biocomputing, biochemical computing,DNA computing) is a computational paradigm in which data is encoded asbiomoleculessuch asDNA strands, and molecular biology tools act on the data to perform various operations (e.g.,arithmeticorlogical operations).
The first experimental realization of a special-purpose molecular computer was the 1994 breakthrough experiment byLeonard Adleman, who solved a
7-node instance of theHamiltonian Path Problemsolely by manipulating DNA strands in test tubes.[30]DNA computations start from an initial input encoded as a DNA sequence (essentially a sequence over the four-letter alphabet {A, C, G, T}),
and proceed by a succession of bio-operations such as cut-and-paste (byrestriction enzymesandligases),
extraction of strands containing a certain subsequence (by using Watson-Crick complementarity), copy (by usingpolymerase chain reactionthat employs the polymerase enzyme), and read-out.[31]Recent experimental research succeeded in solving more complex instances ofNP-completeproblems such as a 20-variable instance of3SAT, and wet DNA implementations of finite state machines with potential applications to the design ofsmart drugs.
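Adleman's strategy, generating a vast pool of candidate paths in parallel and then filtering out invalid ones, can be mimicked in silico by brute force. The 5-node directed graph below is a hypothetical toy instance, not Adleman's 7-node graph:

```python
from itertools import permutations

# Hypothetical toy digraph; edges are directed (tail, head) pairs.
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (1, 3), (0, 2)}

def hamiltonian_path(n, start, end):
    # "Generate": enumerate every ordering of the n nodes
    # (Adleman's test tube did this massively in parallel with DNA strands).
    for order in permutations(range(n)):
        # "Filter": keep only orderings that start and end correctly
        # and follow existing edges.
        if order[0] == start and order[-1] == end and all(
                (a, b) in edges for a, b in zip(order, order[1:])):
            return order
    return None
```

The exhaustive enumeration makes plain why DNA's parallelism was attractive: the number of candidate paths grows factorially with the node count.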
One of the most notable contributions of research in this field is to the understanding ofself-assembly.[33]Self-assembly is thebottom-upprocess by which objects autonomously come together to form complex structures. Instances in nature abound, and includeatomsbinding by chemical bonds to formmolecules, and molecules formingcrystalsormacromolecules. Examples of self-assembly research topics include self-assembled DNA nanostructures[34]such asSierpinski triangles[35]or arbitrary nanoshapes obtained using theDNA origami[36]technique, and DNA nanomachines[37]such as DNA-based circuits (binary counter,bit-wise cumulative XOR), ribozymes for logic operations, molecular switches (DNA tweezers), and autonomous molecular motors (DNA walkers).
Theoretical research in molecular computing has yielded several novel models of DNA computing (e.g.splicing systemsintroduced by Tom Head already in 1987) and their computational power has been investigated.[38]Various subsets of bio-operations are now known to be able to achieve the computational power ofTuring machines[citation needed].
A quantum computer[39]processes data stored as quantum bits (qubits), and uses quantum mechanical phenomena such assuperpositionandentanglementto perform computations.
A qubit can hold a "0", a "1", or a quantum superposition of these.
A quantum computer operates on qubits withquantum logic gates.
ThroughShor's polynomial-time algorithmfor factoring integers, andGrover's algorithmfor quantum database search, which has a quadratic time advantage, quantum computers have been shown to potentially possess a significant advantage relative to electronic computers.
Quantum cryptographyis not based on thecomplexity of the computation, but on the special properties ofquantum information, such as the fact that quantum information cannot be measured reliably and any attempt at measuring it results in an unavoidable and irreversible disturbance.
A successful open air experiment in quantum cryptography was reported in 2007, where data was transmitted securely over a distance of 144 km.[40]Quantum teleportationis another promising application, in which a quantum state (not matter or energy) is transferred to an arbitrary distant location. Implementations of practical quantum computers are based on various substrates such asion-traps,superconductors,nuclear magnetic resonance, etc.
As of 2006, the largest quantum computing experiment used liquid state nuclear magnetic resonance quantum information processors, and could operate on up to 12 qubits.[41]
The dual aspect of natural computation is that it aims to understand nature by regarding natural phenomena as information processing.
Already in the 1960s, Zuse and Fredkin suggested the idea that the entire universe is a computational (information processing) mechanism, modelled as a cellular automaton which continuously updates its rules.[3][4]A recent quantum-mechanical approach of Lloyd suggests the universe as a quantum computer that computes its own behaviour,[5]while Vedral[42]suggests that information is the most fundamental building block of reality.
The view of the universe/nature as a computational mechanism is elaborated in [6], which explores nature with the help of ideas of computability, while [7], based on the idea of nature as a network of networks of information processes on different levels of organization, studies natural processes as computations (information processing).
The main directions of research in this area aresystems biology,synthetic biologyandcellular computing.
Computational systems biology (or simply systems biology) is an integrative and qualitative approach that investigates the complex communications and interactions taking place in biological systems.
Thus, in systems biology, the focus of the study is theinteraction networksthemselves and the properties of biological systems that arise due to these networks, rather than the individual components of functional processes in an organism.
This type of research on organic components has focused strongly on four different interdependent interaction networks:[43]gene-regulatory networks, biochemical networks, transport networks, and carbohydrate networks.
Gene regulatory networkscomprise gene-gene interactions, as well as interactions between genes and other substances in the cell.Genesare transcribed intomessenger RNA(mRNA), and then translated intoproteinsaccording to thegenetic code.
Each gene is associated with other DNA segments (promoters,enhancers, orsilencers) that act asbinding sitesforactivatorsorrepressorsforgene transcription.
Genes interact with each other either through their gene products (mRNA, proteins) which can regulate gene transcription, or through smallRNA speciesthat can directly regulate genes.
Thesegene-gene interactions, together with genes' interactions with other substances in the cell, form the most basic interaction
network: thegene regulatory networks. They perform information processing tasks within the cell, including the assembly and maintenance of other networks. Models of gene regulatory networks include random and probabilisticBoolean networks,asynchronous automata, andnetwork motifs.
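A synchronous Boolean network, one of the model classes just mentioned, can be sketched for a toy three-gene system; the regulatory logic below (gene 0 activated by gene 2, gene 1 by gene 0, gene 2 repressed by gene 1) is hypothetical, chosen only to illustrate how attractors are found:

```python
from itertools import product

def update(state):
    # Synchronous update: all genes recompute their on/off state at once.
    g0, g1, g2 = state
    return (g2, g0, int(not g1))

def attractor(state):
    # Follow the trajectory until a state repeats; the repeating cycle
    # is the attractor reached from this initial state.
    seen = []
    while state not in seen:
        seen.append(state)
        state = update(state)
    return set(seen[seen.index(state):])

# Collect the distinct attractors over all 2^3 initial states.
attractors = {frozenset(attractor(s)) for s in product((0, 1), repeat=3)}
```

In Boolean-network models of gene regulation, such attractors are often interpreted as stable cell states or behaviours.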
Another viewpoint is that the entire genomic regulatory system is a computational system, agenomic computer. This interpretation allows one to compare human-made electronic computation with computation as it occurs in nature.[44]
In addition, unlike a conventional computer, robustness in a genomic computer is achieved by variousfeedback mechanismsby which poorly functional processes are rapidly degraded, poorly functional cells are killed byapoptosis, and poorly functional organisms are out-competed by more fit species.
Biochemical networksrefer to the interactions between proteins, and they perform various mechanical and metabolic tasks inside a cell. Two or more proteins may bind to each other via their interaction sites, forming a dynamic protein complex (complexation). These protein complexes may act ascatalystsfor other chemical reactions, or may chemically modify each other.
Such modifications cause changes to the available binding sites of proteins. There are tens of thousands of proteins in a cell, and they interact with each other. To describe such massive-scale interactions,Kohn maps[45]were introduced
as a graphical notation to depict molecular interactions in succinct pictures. Other approaches to describing accurately and succinctly protein–protein interactions include the use oftextual bio-calculus[46]orpi-calculusenriched with stochastic features.[47]
Transport networksrefer to the separation and transport of substances mediated by lipid membranes.
Some lipids can self-assemble into biological membranes. A lipid membrane consists of alipid bilayerin which proteins and other molecules are embedded, being able to travel along this layer. Through lipid bilayers, substances are transported between the inside and outside of membranes to interact with other molecules.
Formalisms depicting transport networks include membrane systems andbrane calculi.[48]
Synthetic biology aims at engineering synthetic biological components, with the ultimate goal of assembling whole biological systems from their constituent components. The history of synthetic biology can be traced back to the 1960s, whenFrançois JacobandJacques Monoddiscovered the mathematical logic in gene regulation. Genetic engineering techniques, based onrecombinant DNAtechnology, are a precursor of today's synthetic biology which extends these techniques to entire systems of genes and gene products.
Along with the possibility of synthesizing longer and longer DNA strands, the prospect of creating synthetic genomes with the purpose of building entirely artificialsynthetic organismsbecame a reality.
Indeed, rapid assembly of chemically synthesized short DNA strands made it possible to generate a 5386bp synthetic genome of a virus.[49]
Alternatively, Smith et al. found about 100 genes that can be removed individually from the genome ofMycoplasma genitalium.
This discovery paves the way to the assembly of a minimal but still viable artificial genome consisting of the essential genes only.
A third approach to engineering semi-synthetic cells is the construction of a single type of RNA-like molecule with the ability of self-replication.[50]Such a molecule could be obtained by guiding the rapid evolution of an initial population of RNA-like molecules, by selection for the desired traits.
Another effort in this field is towards engineering multi-cellular systems by designing, e.g.,cell-to-cell communication modulesused to coordinate living bacterial cell populations.[51]
Computation in living cells (a.k.a.cellular computing, orin-vivo computing) is another approach to understand nature as computation.
One particular study in this area is that of the computational nature of gene assembly in unicellular organisms calledciliates.
Ciliates store a copy of their DNA containing functional genes in themacronucleus, and another "encrypted" copy in themicronucleus. Conjugation of two ciliates consists of the exchange of their micronuclear genetic information, leading to the formation of two new micronuclei, followed by each ciliate re-assembling the information from its new micronucleus to construct a new functional macronucleus.
The latter process is calledgene assembly, or gene re-arrangement. It involves re-ordering some fragments of DNA (permutationsand possiblyinversion) and deleting other fragments from the micronuclear copy.
From the computational point of view, the study of this gene assembly process led to many challenging research themes and results, such as the Turing universality of various models of this process.[52]From the biological point of view, a plausible hypothesis about the "bioware" that implements the gene-assembly process was proposed, based ontemplate guided recombination.[53][54]
Other approaches to cellular computing include developing anin vivoprogrammable and autonomous finite-state automaton withE. coli,[55]designing and constructingin vivocellular logic gates and genetic circuits that harness the cell's existing biochemical processes (see for example[56]) and the global optimization ofstomataaperture in leaves, following a set of local rules resembling acellular automaton.[57]
This article was written based on the following references with the kind permission of their authors:
Many of the constituent research areas of natural computing have their own specialized journals and books series.
Journals and book series dedicated to the broad field of Natural Computing include the journalsNatural Computing(Springer Verlag),Theoretical Computer Science, Series C: Theory of Natural Computing(Elsevier),the Natural Computing book series(Springer Verlag), and theHandbook of Natural Computing(G.Rozenberg, T.Back, J.Kok, Editors, Springer Verlag).
For readers interested in a popular-science article, consider this one on Medium:Nature-Inspired Algorithms
https://en.wikipedia.org/wiki/Natural_computing
Optical computingorphotonic computinguseslight wavesproduced bylasersor incoherent sources fordata processing, data storage ordata communicationforcomputing. For decades,photonshave shown promise to enable a higherbandwidththan theelectronsused in conventional computers (seeoptical fibers).
Most research projects focus on replacing current computer components with optical equivalents, resulting in an opticaldigital computersystem processingbinary data. This approach appears to offer the best short-term prospects for commercial optical computing, since optical components could be integrated into traditional computers to produce an optical-electronic hybrid. However,optoelectronicdevices consume 30% of their energy converting electronic energy into photons and back; this conversion also slows the transmission of messages. All-optical computers eliminate the need for optical-electrical-optical (OEO) conversions, thus reducing electricalpower consumption.[1]
Application-specific devices, such assynthetic-aperture radar(SAR) andoptical correlators, have been designed to use the principles of optical computing. Correlators can be used, for example, to detect and track objects,[2]and to classify serial time-domain optical data.[3]
The fundamental building block of modern electronic computers is thetransistor. To replace electronic components with optical ones, an equivalentoptical transistoris required. This is achieved bycrystal optics(using materials with anon-linear refractive index).[4]In particular, materials exist[5]where the intensity of incoming light affects the intensity of the light transmitted through the material in a similar manner to the current response of a bipolar transistor. Such an optical transistor[6][7]can be used to create opticallogic gates,[7]which in turn are assembled into the higher level components of the computer'scentral processing unit(CPU). These will be nonlinear optical crystals used to manipulate light beams into controlling other light beams.
Like any computing system, an optical computing system needs four things to function well:
Substituting electrical components with optical ones requires converting the data format from photons to electrons, which makes the system slower.
There are some disagreements between researchers about the future capabilities of optical computers; whether or not they may be able to compete with semiconductor-based electronic computers in terms of speed, power consumption, cost, and size is an open question. Critics note that[9]real-world logic systems require "logic-level restoration, cascadability,fan-outand input–output isolation", all of which are currently provided by electronic transistors at low cost, low power, and high speed. For optical logic to be competitive beyond a few niche applications, major breakthroughs in non-linear optical device technology would be required, or perhaps a change in the nature of computing itself.[10]
A significant challenge to optical computing is that computation is anonlinearprocess in which multiple signals must interact. Light, which is anelectromagnetic wave, can only interact with another electromagnetic wave in the presence of electrons in a material,[11]and the strength of this interaction is much weaker for electromagnetic waves, such as light, than for the electronic signals in a conventional computer. This may result in the processing elements for an optical computer requiring more power and larger dimensions than those for a conventional electronic computer using transistors.[citation needed]
A further misconception[by whom?]is that since light can travel much faster than thedrift velocityof electrons, and at frequencies measured inTHz, optical transistors should be capable of extremely high frequencies. However, any electromagnetic wave must obey thetransform limit, and therefore the rate at which an optical transistor can respond to a signal is still limited by itsspectral bandwidth. Infiber-optic communications, practical limits such asdispersionoften constrainchannelsto bandwidths of tens of GHz, only slightly better than many silicon transistors. Obtaining dramatically faster operation than electronic transistors would therefore require practical methods of transmittingultrashort pulsesdown highly dispersive waveguides.
Photonic logic is the use of photons (light) inlogic gates(NOT, AND, OR, NAND, NOR, XOR, XNOR). Switching is obtained usingnonlinear optical effectswhen two or more signals are combined.[7]
Resonatorsare especially useful in photonic logic, since they allow a build-up of energy fromconstructive interference, thus enhancing optical nonlinear effects.
Other approaches that have been investigated include photonic logic at amolecular level, usingphotoluminescentchemicals. In a demonstration, Witlicki et al. performed logical operations using molecules andSERS.[12]
The basic idea is to delay light (or any other signal) in order to perform useful computations.[13]Of particular interest is solvingNP-complete problems, as those are difficult for conventional computers.
There are two basic properties of light that are actually used in this approach:
When solving a problem with time-delays the following steps must be followed:
The first problem attacked in this way was theHamiltonian path problem.[13]
The simplest one is thesubset sum problem.[14]An optical device solving an instance with four numbers {a1, a2, a3, a4} is depicted below:
The light enters at the Start node, where it is divided into two (sub)rays of smaller intensity. These two rays arrive at the second node at momentsa1and 0. Each of them is divided into two subrays which arrive at the third node at moments 0,a1,a2anda1 + a2. These represent all the subsets of the set {a1, a2}. We expect fluctuations in the intensity of the signal at no more than four different moments. At the destination node we expect fluctuations at no more than 16 different moments (corresponding to all the subsets of the given set). If there is a fluctuation at the target momentB, it means that we have a solution of the problem; otherwise there is no subset whose sum of elements equalsB. For a practical implementation we cannot have zero-length cables, so all cables are increased by a small valuek(fixed for all). In this case the solution is expected at momentB+n×k.
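The delay-based device can be simulated directly: each number corresponds to a cable that a sub-ray either traverses (delay a_i) or bypasses (delay 0), so the arrival times at the destination node are exactly the subset sums. The four numbers below are illustrative:

```python
def subset_sum_delays(numbers, target):
    # Arrival times accumulate exactly as the sub-rays split at each node:
    # every existing arrival time continues both delayed (+a) and undelayed.
    arrivals = {0}
    for a in numbers:
        arrivals |= {t + a for t in arrivals}
    # A fluctuation at moment `target` means some subset sums to it.
    return target in arrivals
```

The set of arrival times doubles (at most) with each number, mirroring the exponential number of sub-rays the optical device generates.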
With increasing demands on graphical processing unit-based accelerator technologies, in the second decade of the 21st century, there has been a huge emphasis on the use of on-chip integrated optics to create photonics-based processors. The emergence of both deep learning neural networks based on phase modulation,[15]and more recently amplitude modulation using photonic memories[16]have created a new area of photonic technologies for neuromorphic computing,[17][18]leading to new photonic computing technologies, all on a chip such as the photonic tensor core.[19]
Wavelength-based computing[20]can be used to solve the3-SATproblem withnvariables,mclauses and no more than three variables per clause. Each wavelength contained in a light ray is considered a possible value-assignment to thenvariables. The optical device contains prisms and mirrors that are used to discriminate the wavelengths which satisfy the formula.[21]
This approach uses a photocopier and transparent sheets for performing computations.[22]Thek-SAT problemwithnvariables,mclauses and at mostkvariables per clause has been solved in three steps:[23]
Thetravelling salesman problemhas been solved by Shakedet al.(2007)[24]by using an optical approach. All possible TSP paths have been generated and stored in a binary matrix which was multiplied with another gray-scale vector containing the distances between cities. The multiplication is performed optically by using an optical correlator.
Many computations, particularly in scientific applications, require frequent use of the 2Ddiscrete Fourier transform(DFT) – for example in solving differential equations describing propagation of waves or transfer of heat. Though modern GPU technologies typically enable high-speed computation of large 2D DFTs, techniques have been developed that can perform continuous Fourier transform optically by utilising the naturalFourier transforming property of lenses. The input is encoded using aliquid crystalspatial light modulatorand the result is measured using a conventional CMOS or CCD image sensor. Such optical architectures can offer superior scaling of computational complexity due to the inherently highly interconnected nature of optical propagation, and have been used to solve 2D heat equations.[25]
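The operation a single lens pass performs optically is the 2-D discrete Fourier transform. As a sketch, the direct (unoptimized) definition is computed below on a tiny 4×4 input; a point source transforms to a flat field, every output coefficient having unit magnitude:

```python
import cmath

def dft2(x):
    # Direct 2-D DFT of an n x n array (illustrative, O(n^4); an FFT or a
    # lens-based optical setup computes the same transform far faster).
    n = len(x)
    w = lambda k, j: cmath.exp(-2j * cmath.pi * k * j / n)
    return [[sum(x[a][b] * w(u, a) * w(v, b)
                 for a in range(n) for b in range(n))
             for v in range(n)] for u in range(n)]

x = [[1, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]          # a point source at the origin
X = dft2(x)
```

In the optical version, the input array would be encoded on a spatial light modulator and X read off an image sensor in the lens's focal plane.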
Physical computers whose design was inspired by the theoreticalIsing modelare called Ising machines.[26][27][28]
Yoshihisa Yamamoto's lab atStanfordpioneered building Ising machines using photons. Initially Yamamoto and his colleagues built an Ising machine using lasers, mirrors, and other optical components commonly found on anoptical table.[26][27]
Later a team atHewlett Packard Labsdevelopedphotonic chipdesign tools and used them to build an Ising machine on a single chip, integrating 1,052 optical components on that single chip.[26]
Some additional companies involved with optical computing development includeIBM,[29]Microsoft,[30]Procyon Photonics,[31]Lightelligence,[32]Lightmatter,[33]Optalysys,[34]Xanadu Quantum Technologies,QuiX Quantum,ORCA Computing,PsiQuantum,Quandela[fr], andTundraSystems Global.[35]
Media related toOptical computingat Wikimedia Commons
https://en.wikipedia.org/wiki/Optical_computing
Aquantum busis a device which can be used to store or transfer information between independentqubitsin aquantum computer, or combine two qubits into asuperposition. It is thequantumanalog of aclassical bus.
There are several physical systems that can be used to realize a quantum bus, includingtrapped ions,photons, andsuperconducting qubits. Trapped ions, for example, can use the quantized motion of ions (phonons) as a quantum bus, while photons can act as a carrier of quantum information by utilizing the increased interaction strength provided by cavity quantum electrodynamics.Circuit quantum electrodynamics, which uses superconducting qubits coupled to amicrowave cavityon a chip, is another example of a quantum bus that has been successfully demonstrated in experiments.[1]
The concept was first demonstrated by researchers atYale Universityand theNational Institute of Standards and Technology(NIST) in 2007.[1][2][3]Prior to this experimental demonstration, the quantum bus had been described by scientists atNISTas one of the possible cornerstone building blocks inquantum computingarchitectures.[4][5]
A quantum bus forsuperconducting qubitscan be built with aresonance cavity. TheHamiltonianfor a system with qubit A, qubit B, and the resonance cavity or quantum bus connecting the two isH^=H^r+∑j=A,BH^j+∑j=A,Bℏgj(a^†σ^−j+a^σ^+j){\displaystyle {\hat {H}}={\hat {H}}_{r}+\sum \limits _{j=A,B}{\hat {H}}_{j}+\sum \limits _{j=A,B}\hbar g_{j}\left({\hat {a}}^{\dagger }{\hat {\sigma }}_{-}^{j}+{\hat {a}}{\hat {\sigma }}_{\text{+}}^{j}\right)}whereH^j=12ℏωjσ^+jσ^−j{\displaystyle {\hat {H}}_{j}={\frac {1}{2}}\hbar \omega _{j}{\hat {\sigma }}_{+}^{j}{\hat {\sigma }}_{-}^{j}}is the single-qubit Hamiltonian,σ^+j{\displaystyle {\hat {\sigma }}_{+}^{j}}andσ^−j{\displaystyle {\hat {\sigma }}_{-}^{j}}are the raising and lowering operators for creating or destroying excitations in thej{\displaystyle j}th qubit, andℏωj{\displaystyle \hbar \omega _{j}}is controlled by the amplitude of theD.C.andradio frequencyfluxbias.[6]
Thisquantum mechanics-related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/Quantum_bus
Quantum cognitionuses the mathematical formalism of quantum probability theory to model psychology phenomena when classicalprobability theoryfails.[1]The field focuses on modeling phenomena incognitive sciencethat have resisted traditional techniques or where traditional models seem to have reached a barrier (e.g., human memory),[2]and modeling preferences indecision theorythat seem paradoxical from a traditional rational point of view (e.g., preference reversals).[3]Since the use of a quantum-theoretic framework is for modeling purposes, the identification of quantum structures in cognitive phenomena does not presuppose the existence of microscopic quantum processes in the human brain.[4][5]
Quantum cognition can be applied to model cognitive phenomena such asinformation processing[6]by thehuman brain,language,decision making,[7]human memory,conceptsand conceptual reasoning, humanjudgment, andperception.[8][9][10]
Classical probability theory is a rational approach to inference which does not easily explain some observations of human inference in psychology.
Some cases where quantum probability theory has advantages include theconjunction fallacy, thedisjunction fallacy, the failures of thesure-thing principle, andquestion-order biasin judgement.[1]: 752
If participants in a psychology experiment are told about "Linda", described as looking like a feminist but not like a bank teller, then asked to rank the probability,P{\displaystyle P}that Linda is feminist, a bank teller or a feminist and a bank teller, they respond with values that indicate:P(feminist)>P(feminist&bank teller)>P(bank teller){\displaystyle P({\text{feminist}})>P({\text{feminist}}\ \&\ {\text{bank teller}})>P({\text{bank teller}})}Rational classical probability theory makes the incorrect prediction: it expects humans to rank the conjunction less probable than the bank teller option. Many variations of this experiment demonstrate that the fallacy represents human cognition in this case and not an artifact of one presentation.[1]: 753
Quantum cognition models this probability-estimation scenario with quantum probability theory which always ranks sequential probability,P(feminist&bank teller){\displaystyle P({\text{feminist}}\ \&\ {\text{bank teller}})}, greater than the direct probability,P(bank teller){\displaystyle P({\text{bank teller}})}. The idea is that a person's understanding of "bank teller" is affected by the context of the question involving "feminist".[1]: 753The two questions are "incompatible": to treat them with classical theory would require separate reasoning steps.[11]
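The ordering can be reproduced with a minimal two-dimensional quantum-probability sketch. The angles below are illustrative assumptions, not values fitted to experimental data: Linda's state is chosen close to the "feminist" answer vector and nearly orthogonal to "bank teller":

```python
import numpy as np

def unit(theta):
    """Unit vector at angle theta in the 2-D belief space."""
    return np.array([np.cos(theta), np.sin(theta)])

psi = unit(0.0)            # Linda's belief state
feminist = unit(0.10)      # "feminist" answer vector (assumed angle)
bank_teller = unit(1.45)   # "bank teller" answer vector (assumed angle)

# Direct judgments: squared projection of the state onto each answer
p_fem = (feminist @ psi) ** 2
p_bank = (bank_teller @ psi) ** 2

# Sequential judgment: resolve "feminist" first, then judge "bank teller"
# from the collapsed state -- the quantum conjunction
p_fem_then_bank = p_fem * (bank_teller @ feminist) ** 2

print(round(p_fem, 3), round(p_fem_then_bank, 3), round(p_bank, 3))
assert p_fem > p_fem_then_bank > p_bank   # the observed "fallacy" ordering
```

Because the two question bases are incompatible (non-commuting projectors), judging "feminist" first moves the state, which is exactly how the model produces a conjunction ranked above the direct "bank teller" probability.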
The quantum cognition concept is based on the observation that various cognitive phenomena are more adequately described by quantum probability theory than by the classical probability theory (see examples below). Thus, the quantum formalism is considered an operational formalism that describes non-classical processing of probabilistic data.
Here, contextuality is the key word (see the monograph of Khrennikov for detailed representation of this viewpoint).[8]Quantum mechanics is fundamentally contextual.[12]Quantum systems do not have objective properties which can be defined independently of measurement context. As has been pointed out byNiels Bohr, the whole experimental arrangement must be taken into account. Contextuality implies existence of incompatible mental variables, violation of the classical law of total probability, and constructive or destructive interference effects. Thus, the quantum cognition approach can be considered an attempt to formalize contextuality of mental processes, by using the mathematical apparatus of quantum mechanics.
Suppose a person is given an opportunity to play two rounds of the following gamble: a coin toss will determine whether the subject wins $200 or loses $100. Suppose the subject has decided to play the first round, and does so. Some subjects are then given the result (win or lose) of the first round, while other subjects are not yet given any information about the results. The experimenter then asks whether the subject wishes to play the second round. Performing this experiment with real subjects gives the following results:
Given these two separate choices, according to thesure thingprinciple of rational decision theory, they should also play the second round even if they don't know or think about the outcome of the first round.[13]But, experimentally, when subjects are not told the results of the first round, the majority of them decline to play a second round.[14]This finding violates the law of total probability, yet it can be explained as aquantum interferenceeffect in a manner similar to the explanation for the results fromdouble-slit experimentin quantum physics.[9][15][16]Similar violations of the sure-thing principle are seen in empirical studies of thePrisoner's Dilemmaand have likewise been modeled in terms of quantum interference.[17]
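A hedged numeric sketch of the interference account: when the first-round outcome is unknown, the win and lose paths are superposed, so their amplitudes add before squaring and a relative phase can push the "play again" probability below the classical mixture. The conditional probabilities and phase below are illustrative assumptions, not fitted experimental values:

```python
import numpy as np

# Assumed conditional probabilities of playing round two
p_play_given_win = 0.69
p_play_given_lose = 0.59

# Classical law of total probability with a fair coin
p_classical = 0.5 * p_play_given_win + 0.5 * p_play_given_lose

# Quantum model: each unresolved path carries amplitude sqrt(0.5 * p),
# with an assumed relative phase theta between win and lose paths
theta = 2.0   # radians, illustrative
amp_win = np.sqrt(0.5 * p_play_given_win)
amp_lose = np.sqrt(0.5 * p_play_given_lose) * np.exp(1j * theta)
p_quantum = abs(amp_win + amp_lose) ** 2   # amplitudes interfere

print(round(p_classical, 3), round(p_quantum, 3))
assert p_quantum < p_classical   # destructive interference suppresses play
```

For this choice of phase the unknown-outcome probability drops below one half, mirroring the empirical finding that most subjects decline the second round when uninformed.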
The above deviations from classical rational expectations in agents’ decisions under uncertainty produce well known paradoxes in behavioral economics, that is, theAllais,Ellsbergand Machina paradoxes.[18][19][20]These deviations can be explained if one assumes that the overall conceptual landscape influences the subject's choice in a way that is neither predictable nor controllable. A decision process is thus an intrinsically contextual process, hence it cannot be modeled in a single Kolmogorovian probability space, which justifies the employment of quantum probability models in decision theory. More explicitly, the paradoxical situations above can be represented in a unified Hilbert space formalism where human behavior under uncertainty is explained in terms of genuine quantum aspects, namely, superposition, interference, contextuality and incompatibility.[21][22][23][16]
Considering automated decision making, quantumdecision treeshave different structure compared to classical decision trees. Data can be analyzed to see if a quantumdecision tree modelfits the data better.[24]
Quantum probability provides a new way to explain human probability judgment errors including the conjunction and disjunction errors.[25]A conjunction error occurs when a person judges the probability of a likely event Landan unlikely event U to be greater than the unlikely event U; a disjunction error occurs when a person judges the probability of a likely event L to be greater than the probability of the likely event Loran unlikely event U. Quantum probability theory is a generalization ofBayesian probabilitytheory because it is based on a set ofvon Neumannaxioms that relax some of the classicKolmogorovaxioms.[26]The quantum model introduces a new fundamental concept to cognition—the compatibility versus incompatibility of questions and the effect this can have on the sequential order of judgments. Quantum probability provides a simple account of conjunction and disjunction errors as well as many other findings such as order effects on probability judgments.[27][28][29]
The liar paradox - The contextual influence of a human subject on the truth behavior of a cognitive entity is explicitly exhibited by the so-calledliar paradox, that is, the truth value of a sentence like "this sentence is false". One can show that the true-false state of this paradox is represented in a complex Hilbert space, while the typical oscillations between true and false are dynamically described by the Schrödinger equation.[30][31]
Concepts are basic cognitive phenomena, which provide the content for inference, explanation, and language understanding.Cognitive psychologyhas researched different approaches forunderstanding conceptsincluding exemplars, prototypes, andneural networks, and different fundamental problems have been identified, such as the experimentally tested non-classical behavior for the conjunction and disjunction of concepts, more specifically the Pet-Fish problem or guppy effect,[32]and the overextension and underextension of typicality and membership weight for conjunction and disjunction.[33][34]By and large, quantum cognition has drawn on quantum theory in three ways to model concepts.
The large amount of data collected by Hampton[33][34]on the combination of two concepts can be modeled in a specific quantum-theoretic framework in Fock space where the observed deviations from classical set (fuzzy set) theory, the above-mentioned over- and under-extension of membership weights, are explained in terms of contextual interactions, superposition, interference, entanglement and emergence.[27][40][41][42]Moreover, a cognitive test on a specific concept combination has been performed which directly reveals, through the violation of Bell's inequalities, quantum entanglement between the component concepts.[43][44]
The research in (iv) had a deep impact on the understanding and initial development of a formalism to obtain semantic information when dealing with concepts, their combinations and variable contexts in a corpus of unstructured documents. This conundrum ofnatural language processing(NLP) andinformation retrieval(IR) on the web – and data bases in general – can be addressed using the mathematical formalism of quantum theory. As basic steps, (a) K. Van Rijsbergen introduced a quantum structure approach to IR,[45](b) Widdows and Peters utilised a quantum logical negation for a concrete search system,[38][46]and Aerts and Czachor identified quantum structure in semantic space theories, such aslatent semantic analysis.[47]Since then, the employment of techniques and procedures induced from the mathematical formalisms of quantum theory – Hilbert space, quantum logic and probability, non-commutative algebras, etc. – in fields such as IR and NLP, has produced significant results.[48]
Ideas for applying the formalisms of quantum theory to cognition first appeared in the 1990s byDiederik Aertsand his collaborators Jan Broekaert,Sonja Smetsand Liane Gabora, by Harald Atmanspacher, Robert Bordley, andAndrei Khrennikov. A special issue onQuantum Cognition and Decisionappeared in theJournal of Mathematical Psychology(2009, vol 53.), which planted a flag for the field. A few books related to quantum cognition have been published including those by Khrennikov (2004, 2010), Ivancivic and Ivancivic (2010), Busemeyer and Bruza (2012), E. Conte (2012). The first Quantum Interaction workshop was held atStanfordin 2007 organized by Peter Bruza, William Lawless, C. J. van Rijsbergen, and Don Sofge as part of the 2007AAAISpring Symposium Series. This was followed by workshops atOxfordin 2008,Saarbrückenin 2009, at the 2010 AAAI Fall Symposium Series held inWashington, D.C., 2011 inAberdeen, 2012 inParis, and 2013 inLeicester. Tutorials also were presented annually beginning in 2007 until 2013 at the annual meeting of theCognitive Science Society. ASpecial Issue on Quantum models of Cognitionappeared in 2013 in the journalTopics in Cognitive Science.
|
https://en.wikipedia.org/wiki/Quantum_cognition
|
Withinquantum technology, aquantum sensorutilizes properties of quantum mechanics, such asquantum entanglement,quantum interference, andquantum statesqueezing, to optimize precision and beat current limits insensor technology.[1]The field of quantum sensing deals with the design and engineering of quantum sources (e.g., entangled) and quantum measurements that are able to beat the performance of any classical strategy in a number of technological applications.[2]This can be done withphotonicsystems[3]orsolid statesystems.[4]
Inphotonicsandquantum optics, photonic quantum sensing leveragesentanglement, single photons andsqueezed statesto perform extremely precise measurements. Optical sensing makes use of continuous-variable quantum systems such as different degrees of freedom of the electromagnetic field, vibrational modes of solids, andBose–Einstein condensates.[5]These quantum systems can be probed to characterize an unknown transformation between two quantum states. Several methods are in place to improve photonic sensors'quantum illuminationof targets, which have been used to improve detection of weak signals by the use of quantum correlation.[6][7][8][9][10]
Quantum sensors are often built on continuous-variable systems, i.e., quantum systems characterized by continuous degrees of freedom such as position and momentum quadratures. The basic working mechanism typically relies on optical states of light, often involving quantum mechanical properties such as squeezing or two-mode entanglement.[3]These states are sensitive to physical transformations that are detected by interferometric measurements.[5]
Quantum sensing can also be utilized in non-photonic areas such asspin qubits,trapped ions,flux qubits,[4]andnanoparticles.[11]These systems can be compared by the physical characteristics to which they respond; for example, trapped ions respond to electrical fields while spin systems respond to magnetic fields.[4]Trapped ionsare useful because their quantized motional levels couple strongly to the electric field. They have been proposed for studying electric field noise above surfaces,[12]and more recently, as rotation sensors.[13]
In solid-state physics, a quantum sensor is a quantum device that responds to a stimulus. Usually this refers to a sensor that hasquantized energy levelsand usesquantum coherenceor entanglement to improve measurements beyond what can be done with classical sensors.[4]There are four criteria for solid-state quantum sensors:[4]
Quantum sensors have applications in a wide variety of fields including microscopy, positioning systems, communication technology, electric and magnetic field sensors, as well as geophysical areas of research such as mineral prospecting andseismology.[4]Many measurement devices utilize quantum properties in order to probe measurements such asatomic clocks,superconducting quantum interference devices, andnuclear magnetic resonancespectroscopy.[4][14]With new technological advancements, individual quantum systems can be used as measurement devices, utilizingentanglement,superposition, interference andsqueezingto enhance sensitivity and surpass performance of classical strategies.
A good example of an early quantum sensor is anavalanche photodiode(APD). APDs have been used to detect entangledphotons. With additional cooling and sensor improvements, they can be used in place ofphotomultiplier tubes(PMTs) in fields such as medical imaging. APDs, in the form of 2-D and even 3-D stacked arrays, can be used as a direct replacement for conventional sensors based onsilicondiodes.[15]
TheDefense Advanced Research Projects Agency(DARPA) launched a research program in optical quantum sensors that seeks to exploit ideas fromquantum metrologyandquantum imaging, such asquantum lithographyand theNOON state,[16]in order to achieve these goals with optical sensor systems such aslidar.[6][17][18][19]TheUnited Statesjudges quantum sensing to be the most mature of quantum technologies for military use, theoretically replacingGPSin areas without coverage or possibly acting withISRcapabilities or detecting submarine or subterranean structures or vehicles, as well asnuclear material.[20]
For photonic systems, current areas of research consider feedback and adaptive protocols. This is an active area of research in discrimination and estimation of bosonic loss.[21]
Injecting squeezed light intointerferometersallows for higher sensitivity to weak signals that would be unable to be classically detected.[1]A practical application of quantum sensing is realized in gravitational wave sensing.[22]Gravitational wave detectors, such asLIGO, utilizesqueezed lightto measure signals below thestandard quantum limit.[23]Squeezed lighthas also been used to detect signals below thestandard quantum limitinplasmonicsensors andatomic force microscopy.[24]
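A hedged Monte-Carlo sketch of why squeezed light helps: a weak signal is imprinted on a field quadrature whose vacuum noise has unit variance (playing the role of the standard quantum limit here), while a squeezed input reduces the measured noise by exp(−2r). The signal strength and squeezing parameter are illustrative assumptions, not a model of any particular detector:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 0.1          # weak signal to detect (assumed)
r = 1.0          # squeezing parameter (assumed)
trials = 200_000

# Readout = signal + quadrature noise; vacuum noise std = 1,
# squeezed-quadrature noise std = exp(-r)
vacuum_readout = d + rng.normal(0.0, 1.0, trials)
squeezed_readout = d + rng.normal(0.0, np.exp(-r), trials)

snr_vacuum = d / vacuum_readout.std()
snr_squeezed = d / squeezed_readout.std()
print(round(snr_vacuum, 3), round(snr_squeezed, 3))
assert snr_squeezed > snr_vacuum   # squeezing raises the signal-to-noise
```

The factor-of-e^r improvement in single-shot signal-to-noise is the same effect that lets interferometers like LIGO resolve signals below the vacuum-noise floor.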
Quantum sensing also has the capability to overcome resolution limits, where current issues of vanishing distinguishability between two close frequencies can be overcome by making the projection noise vanish.[25][26]The diminishing projection noise has direct applications in communication protocols and nano-Nuclear Magnetic Resonance.[27][28]
Entanglement can be used to improve upon existingatomic clocks[29][30][31]or create more sensitivemagnetometers.[32][33]
Quantum radaris also an active area of research. Current classical radars can interrogate many target bins while quantum radars are limited to a single polarization or range.[34]A proof-of-concept quantum radar or quantum illuminator using quantum entangled microwaves was able to detect low reflectivity objects at room-temperature – such may be useful for improved radar systems, security scanners and medical imaging systems.[35][36][37]
Inneuroimaging, the first quantum brain scanner uses magnetic imaging and could become a novel whole-brain scanning approach.[38][39]
Quantum gravity-gradiometersthat could be used tomapand investigate subterranean features are also in development.[40][41]
|
https://en.wikipedia.org/wiki/Quantum_sensor
|
Quantum volumeis a metric that measures the capabilities and error rates of aquantum computer. It expresses the maximum size of squarequantum circuitsthat can be implemented successfully by the computer. The form of the circuits is independent from the quantum computer architecture, but a compiler can transform and optimize them to take advantage of the computer's features. Thus, quantum volumes for different architectures can be compared.
Quantum computers are difficult to compare. Quantum volume is a single number designed to express all-around performance. It is a measurement and not a calculation, and takes into account several features of a quantum computer, starting with its number ofqubits—other measures used are gate and measurement errors,crosstalkand connectivity.[1][2][3]
IBM defined its Quantum Volume metric[4]because a classical computer's transistor count and a quantum computer's qubit count do not measure the same thing. Qubits decohere with a resulting loss of performance, so a few fault-tolerant qubits are more valuable as a performance measure than a larger number of noisy, error-prone qubits.[5][6]
Generally, the larger the quantum volume, the more complex the problems a quantum computer can solve.[7]
Alternative benchmarks, such asCross-entropy benchmarking, reliable Quantum Operations per Second (rQOPS) proposed byMicrosoft, Circuit Layer Operations Per Second (CLOPS) proposed by IBM andIonQ's Algorithmic Qubits, have also been proposed.[8][9]
The quantum volume of a quantum computer was originally defined in 2018 by Nikolaj Mollet al.[10]However, since around 2021 that definition has been supplanted by IBM's 2019redefinition.[11][12]The original definition depends on the number of qubitsNas well as the number of steps that can be executed, the circuit depthd:VQ=min(N,d(N))2{\displaystyle V_{Q}=\min \left(N,d(N)\right)^{2}}
The circuit depth depends on the effective error rateεeffasd≈1Nεeff{\displaystyle d\approx {\frac {1}{N\varepsilon _{\text{eff}}}}}
The effective error rateεeffis defined as the average error rate of a two-qubit gate. If the physical two-qubit gates do not have all-to-all connectivity, additionalSWAPgates may be needed to implement an arbitrary two-qubit gate andεeff>ε, whereεis the error rate of the physical two-qubit gates. If more complex hardware gates are available, such as the three-qubitToffoli gate, it is possible thatεeff<ε.
The allowable circuit depth decreases when more qubits with the same effective error rate are added. So with these definitions, as soon asd(N) <N, the quantum volume goes down if more qubits are added. To run an algorithm that only requiresn<Nqubits on anN-qubit machine, it could be beneficial to select a subset of qubits with good connectivity. For this case, Mollet al.[10]give a refined definition of quantum volume.
V~Q=maxn≤N{min(n,d(n))2}{\displaystyle {\tilde {V}}_{Q}=\max _{n\leq N}\left\{\min \left(n,d(n)\right)^{2}\right\}}where the maximum is taken over an arbitrary choice ofnqubits.
In 2019, IBM's researchers modified the quantum volume definition to be an exponential of the circuit size, stating that it corresponds to the complexity of simulating the circuit on a classical computer:[4][13]log2VQ=maxm≤Nmin(m,d(m)){\displaystyle \log _{2}V_{Q}=\max _{m\leq N}\min \left(m,d(m)\right)}
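IBM's 2019 redefinition, combined with the depth estimate d(m) ≈ 1/(m·ε_eff) given earlier, can be sketched in a few lines. The error rate and qubit count below are illustrative assumptions:

```python
import math

def log2_quantum_volume(n_qubits, eps_eff):
    """log2(V_Q) = max over m <= N of min(m, d(m)),
    using the depth estimate d(m) ~ 1/(m * eps_eff)."""
    best = 0
    for m in range(1, n_qubits + 1):
        d = math.floor(1.0 / (m * eps_eff))   # achievable depth on m qubits
        best = max(best, min(m, d))
    return best

# With eps_eff = 1e-2, d(m) = 100/m, so min(m, d(m)) peaks near m = 10:
# more qubits than that only shrink the achievable square circuit.
print(2 ** log2_quantum_volume(20, 1e-2))
```

The sketch makes the trade-off in the text concrete: past the point where d(m) < m, adding qubits at the same effective error rate no longer increases quantum volume.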
The world record, as of May 2025[update], for the highest quantum volume is 2^23.[14]Here is an overview of historically achieved quantum volumes:
The quantum volume benchmark defines a family ofsquarecircuits, whose number of qubitsNand depthdare the same. Therefore, the output of this benchmark is a single number. However, a proposed generalization is the volumetric benchmark[34]framework, which defines a family ofrectangularquantum circuits, for whichNanddare uncoupled to allow the study of time/space performance trade-offs, thereby sacrificing the simplicity of a single-figure benchmark.
Volumetric benchmarks can be generalized not only to account for uncoupledNandddimensions, but also to test different types of quantum circuits. While quantum volume benchmarks the quantum computer's ability to implement a specific type ofrandomized circuits, these can, in principle, be substituted by other families of random circuits, periodic circuits,[35]or algorithm-inspired circuits. Each benchmark must have a success criterion that defines whether a processor has "passed" a given test circuit.
While these data can be analyzed in many ways, a simple method of visualization is illustrating thePareto frontof theNversusdtrade-off for the processor being benchmarked. This Pareto front provides information on the largest depthda patch of a given number of qubitsNcan withstand, or, alternatively, the biggest patch ofNqubits that can withstand executing a circuit of given depthd.
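As a hedged sketch of that visualization step, the Pareto front can be extracted directly from volumetric-benchmark pass/fail results. The `passed` data below are made up for illustration:

```python
# Map (n_qubits, depth) -> whether the processor passed that
# rectangular test circuit (illustrative data, not real results)
passed = {
    (2, 2): True, (2, 4): True, (2, 8): True,
    (4, 2): True, (4, 4): True, (4, 8): False,
    (8, 2): True, (8, 4): False, (8, 8): False,
}

def pareto_front(results):
    """Largest depth d passed for each width N, keeping only points
    not dominated by a wider patch with equal or greater depth."""
    best = {}
    for (n, d), ok in results.items():
        if ok:
            best[n] = max(best.get(n, 0), d)
    front = {n: d for n, d in best.items()
             if not any(n2 > n and d2 >= d for n2, d2 in best.items())}
    return dict(sorted(front.items()))

print(pareto_front(passed))   # {2: 8, 4: 4, 8: 2}
```

Each surviving point answers one of the two questions in the text: the largest depth a patch of N qubits can withstand, or equivalently the widest patch that survives a given depth.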
|
https://en.wikipedia.org/wiki/Quantum_volume
|
Quantum weirdnessencompasses the aspects ofquantum mechanicsthat challenge and defy human physical intuition.[1]
Human physical intuition is based on macroscopic physical phenomena as are experienced in everyday life, which can mostly be adequately described by theNewtonian mechanicsofclassical physics.[2]Early 20th-century models ofatomic physics, such as theRutherford–Bohr model, representedsubatomic particlesas little balls occupying well-defined spatial positions, but it was soon found that the physics needed at asubatomic scale, which became known as "quantum mechanics", implies many aspects for which the models of classical physics are inadequate.[3]These aspects include:[citation needed]
|
https://en.wikipedia.org/wiki/Quantum_weirdness
|
Rigetti Computing, Inc.is aBerkeley, California-based developer of superconducting quantumintegrated circuitsused forquantum computers. Rigetti also develops a cloud platform called Forest that enables programmers to write quantum algorithms.[2]
Rigetti Computing was founded in 2013 byChad Rigetti, a physicist who worked on quantum computers atIBMand studied underMichel Devoret.[2][3]The company emerged from startup incubatorY Combinatorin 2014 as a so-called "spaceshot" company.[4][5]Later that year, Rigetti also participated in The Alchemist Accelerator, a venture capital programme.[5]
By February 2016, Rigetti created its firstquantum processor, a three-qubitchip made using aluminum circuits on a silicon wafer.[6]That same year, Rigetti raisedSeries Afunding of US$24 million in a round led byAndreessen Horowitz. In November, the company secured Series B funding of $40 million in a round led by investment firm Vy Capital, along with additional funding fromAndreessen Horowitzand other investors. Y Combinator also participated in both rounds.[5]
By spring of 2017, Rigetti had advanced to testing eight-qubit quantum computers.[3]In June, the company announced the release of Forest 1.0, a quantum computing platform designed to enable developers to create quantum algorithms.[2]
In October 2021, Rigetti announced plans to go public via aSPAC merger, with an estimated valuation of around US$1.5 billion.[7][8]The deal was expected to raise an additional US$458 million, bringing the total funding to US$658 million.[7]The funds were to be used to accelerate the company's growth, including scaling its quantum processors from 80 qubits to 1,000 qubits by 2024, and to 4,000 by 2026.[9]The SPAC deal closed on 2 March 2022, and Rigetti began trading on the NASDAQ under the ticker symbol RGTI.[10]
In December 2022, Subodh Kulkarni became president and CEO of the company.[11]
In July 2023 Rigetti launched a single-chip 84qubitquantum processorthat can scale to even larger systems.[12]
Rigetti Computing is a full-stack quantum computing company, a term that indicates that the company designs and fabricates quantum chips, integrates them with a controlling architecture, and develops software for programmers to use to build algorithms for the chips.[13]
The company hosts a cloud computing platform called Forest, which gives developers access to quantum processors so they can write quantum algorithms for testing purposes. The computing platform is based on a custom instruction language the company developed calledQuil, which stands for Quantum Instruction Language. Quil facilitates hybrid quantum/classical computing, and programs can be built and executed using open sourcePythontools.[13][14]As of June 2017, the platform allows coders to write quantum algorithms for a simulation of a quantum chip with 36 qubits.[2]
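As an illustration of the kind of circuit such hybrid quantum/classical programs construct, the sketch below simulates a Bell-pair circuit (Hadamard then CNOT, followed by measurement sampling) in plain NumPy. It deliberately uses no Forest or Quil APIs, only standard linear algebra, so it is a stand-in for what the platform executes rather than its actual interface:

```python
import numpy as np

# Single-qubit Hadamard and two-qubit CNOT (control = first qubit)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.zeros(4)
state[0] = 1.0                                  # start in |00>
state = CNOT @ np.kron(H, np.eye(2)) @ state    # H on qubit 0, then CNOT

# Sample 1000 measurement shots from the final state's probabilities
probs = np.abs(state) ** 2
rng = np.random.default_rng(0)
shots = rng.choice(4, size=1000, p=probs)
counts = {format(b, "02b"): int((shots == b).sum()) for b in range(4)}
print(counts)   # only "00" and "11" occur, each roughly half the shots
```

The correlated all-or-nothing outcomes are the signature of the entangled Bell state that small test programs on such platforms typically prepare first.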
The company operates a rapid prototyping fabrication ("fab") lab called Fab-1, designed to quickly create integrated circuits. Lab engineers design and generate experimental designs for 3D-integrated quantum circuits for qubit-based quantum hardware.[13]
The company was recognized in 2016 byX-PrizefounderPeter Diamandisas being one of the three leaders in the quantum computing space, along with IBM andGoogle.[15]MIT Technology Reviewnamed the company one of the 50 smartest companies of 2017.[16]
Rigetti Computing is headquartered in Berkeley, California, where it hosts developmental systems and cooling equipment.[15]The company also operates its Fab-1 manufacturing facility in nearby Fremont.[2]
|
https://en.wikipedia.org/wiki/Rigetti_Computing
|
Asupercomputeris a type ofcomputerwith a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured infloating-pointoperations per second (FLOPS) instead ofmillion instructions per second(MIPS). Since 2022, supercomputers have existed which can perform over 10^18 FLOPS, so-calledexascale supercomputers.[3]For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13).[4][5]Since November 2017, all of theworld's fastest 500 supercomputersrun onLinux-based operating systems.[6]Additional research is being conducted in the United States, theEuropean Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.[7]
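A hedged sketch of how a FLOPS figure is obtained: an n×n matrix multiply costs about 2n³ floating-point operations, so timing one gives an effective rate. This measures NumPy/BLAS on one ordinary machine and is nothing like a formal supercomputer benchmark such as LINPACK, but the unit is the same:

```python
import time
import numpy as np

n = 512
rng = np.random.default_rng(0)
a = rng.random((n, n))
b = rng.random((n, n))

start = time.perf_counter()
c = a @ b                          # ~2 * n**3 floating-point operations
elapsed = time.perf_counter() - start

flops = 2 * n**3 / elapsed
print(f"~{flops / 1e9:.1f} GFLOPS effective")
```

Scaling the same counting rule up is how the text's 10^11 (desktop) versus 10^18 (exascale) comparison is made.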
Supercomputers play an important role in the field ofcomputational science, and are used for a wide range of computationally intensive tasks in various fields, includingquantum mechanics,weather forecasting,climate research,oil and gas exploration,molecular modeling(computing the structures and properties of chemical compounds, biologicalmacromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraftaerodynamics, the detonation ofnuclear weapons, andnuclear fusion). They have been essential in the field ofcryptanalysis.[8]
Supercomputers were introduced in the 1960s, and for several decades the fastest was made bySeymour CrayatControl Data Corporation(CDC),Cray Researchand subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran more quickly than their more general-purpose contemporaries. Through the decade, increasing amounts ofparallelismwere added, with one to fourprocessorsbeing typical. In the 1970s,vector processorsoperating on large arrays of data came to dominate. A notable example is the highly successfulCray-1of 1976. Vector computers remained the dominant design into the 1990s. From then until today,massively parallelsupercomputers with tens of thousands of off-the-shelf processors became the norm.[9][10]
The U.S. has long been a leader in the supercomputer field, initially through Cray's nearly uninterrupted dominance, and later through a variety of technology companies. Japan made significant advancements in the field during the 1980s and 1990s, while China has become increasingly active in supercomputing in recent years. As of November 2024[update], Lawrence Livermore National Laboratory'sEl Capitanis the world's fastest supercomputer.[11]The US has five of the top 10; Italy two, Japan, Finland, Switzerland have one each.[12]In June 2018, all combined supercomputers on the TOP500 list broke the 1 exaFLOPS mark.[13]
In 1960,UNIVACbuilt theLivermore Atomic Research Computer(LARC), today considered among the first supercomputers, for the US Navy Research and Development Center. It still used high-speeddrum memory, rather than the newly emergingdisk drivetechnology.[14]Also among the first supercomputers was theIBM 7030 Stretch. The IBM 7030 was built by IBM for theLos Alamos National Laboratory, which in 1955 had requested a computer 100 times faster than any existing computer. The IBM 7030 usedtransistors, magnetic core memory,pipelinedinstructions, prefetched data through a memory controller and included pioneering random access disk drives. The IBM 7030 was completed in 1961 and despite not meeting the challenge of a hundredfold increase in performance, it was purchased by the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for theIBM 7950 Harvest, a supercomputer built forcryptanalysis.[15]
The third pioneering supercomputer project in the early 1960s was theAtlasat theUniversity of Manchester, built by a team led byTom Kilburn. He designed the Atlas to have memory space for up to a million words of 48 bits, but because magnetic storage with such a capacity was unaffordable, the actual core memory of the Atlas was only 16,000 words, with a drum providing memory for a further 96,000 words. TheAtlas Supervisorswappeddata in the form of pages between the magnetic core and the drum. The Atlas operating system also introducedtime-sharingto supercomputing, so that more than one program could be executed on the supercomputer at any one time.[16]Atlas was a joint venture betweenFerrantiandManchester Universityand was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.[17]
TheCDC 6600, designed bySeymour Cray, was finished in 1964 and marked the transition fromgermaniumtosilicontransistors. Silicon transistors could run more quickly and the overheating problem was solved by introducing refrigeration to the supercomputer design.[18]Thus, the CDC6600 became the fastest computer in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed asupercomputerand defined the supercomputing market, when one hundred computers were sold at $8 million each.[19][20][21][22]
Cray left CDC in 1972 to form his own company,Cray Research.[20]Four years after leaving CDC, Cray delivered the 80 MHzCray-1in 1976, which became one of the most successful supercomputers in history.[23][24]TheCray-2was released in 1985. It had eightcentral processing units(CPUs),liquid coolingand the electronics coolant liquidFluorinertwas pumped through thesupercomputer architecture. It reached 1.9gigaFLOPS, making it the first supercomputer to break the gigaflop barrier.[25]
The only computer to seriously challenge the Cray-1's performance in the 1970s was theILLIAC IV. This machine was the first realized example of a truemassively parallelcomputer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, in this concept, the computer instead feeds separate parts of the data to entirely different processors and then recombines the results. The ILLIAC's design was finalized in 1966 with 256 processors, offering speeds of up to 1 GFLOPS, compared to the 1970s Cray-1's peak of 250 MFLOPS. However, development problems led to only 64 processors being built, and the system could never operate more quickly than about 200 MFLOPS while being much larger and more complex than the Cray. Another problem was that writing software for the system was difficult, and getting peak performance from it was a matter of serious effort.
But the partial success of the ILLIAC IV was widely seen as pointing the way to the future of supercomputing. Cray argued against this, famously quipping that "If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?"[26]But by the early 1980s, several teams were working on parallel designs with thousands of processors, notably theConnection Machine(CM) that developed from research atMIT. The CM-1 used as many as 65,536 simplified custommicroprocessorsconnected together in anetworkto share data. Several updated versions followed; the CM-5 supercomputer is a massively parallel processing computer capable of many billions of arithmetic operations per second.[27]
In 1982,Osaka University'sLINKS-1 Computer Graphics Systemused amassively parallelprocessing architecture, with 514microprocessors, including 257Zilog Z8001control processorsand 257iAPX86/20floating-point processors. It was mainly used for rendering realistic3D computer graphics.[28]Fujitsu's VPP500 from 1992 is unusual since, to achieve higher speeds, its processors usedGaAs, a material normally reserved for microwave applications due to its toxicity.[29]Fujitsu'sNumerical Wind Tunnelsupercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7gigaFLOPS (GFLOPS)per processor.[30][31]TheHitachi SR2201obtained a peak performance of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensionalcrossbarnetwork.[32][33][34]TheIntel Paragoncould have 1000 to 4000Intel i860processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was aMIMDmachine which connected processors via a high speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via theMessage Passing Interface.[35]
Software development remained a problem, but the CM series sparked off considerable research into this issue. Similar designs using custom hardware were made by many companies, including theEvans & Sutherland ES-1,MasPar,nCUBE,Intel iPSCand theGoodyear MPP. But by the mid-1990s, general-purpose CPU performance had improved so much that a supercomputer could be built using them as the individual processing units, instead of using custom chips. By the turn of the 21st century, designs featuring tens of thousands of commodity CPUs were the norm, with later machines addinggraphic unitsto the mix.[9][10]
In 1998,David Baderdeveloped the firstLinuxsupercomputer using commodity parts.[36]While at the University of New Mexico, Bader sought to build a supercomputer running Linux using consumer off-the-shelf parts and a high-speed low-latency interconnection network. The prototype utilized an Alta Technologies "AltaCluster" of eight dual, 333 MHz, Intel Pentium II computers running a modified Linux kernel. Bader ported a significant amount of software to provide Linux support for necessary components as well as code from members of the National Computational Science Alliance (NCSA) to ensure interoperability, as none of it had been run on Linux previously.[37]Using the successful prototype design, he led the development of "RoadRunner," the first Linux supercomputer for open use by the national science and engineering community via the National Science Foundation's National Technology Grid. RoadRunner was put into production use in April 1999. At the time of its deployment, it was considered one of the 100 fastest supercomputers in the world.[37][38]Though Linux-based clusters using consumer-grade parts, such asBeowulf, existed prior to the development of Bader's prototype and RoadRunner, they lacked the scalability, bandwidth, and parallel computing capabilities to be considered "true" supercomputers.[37]
Systems with a massive number of processors generally take one of two paths. In thegrid computingapproach, the processing power of many computers, organized as distributed, diverse administrative domains, is opportunistically used whenever a computer is available.[39]In another approach, many processors are used in proximity to each other, e.g. in acomputer cluster. In such a centralizedmassively parallelsystem the speed and flexibility of theinterconnectbecomes very important and modern supercomputers have used various approaches ranging from enhancedInfinibandsystems to three-dimensionaltorus interconnects.[40][41]The use ofmulti-core processorscombined with centralization is an emerging direction, e.g. as in theCyclops64system.[42][43]
As the price, performance andenergy efficiencyofgeneral-purpose graphics processing units(GPGPUs) have improved, a number ofpetaFLOPSsupercomputers such asTianhe-IandNebulaehave started to rely on them.[44]However, other systems such as theK computercontinue to use conventional processors such asSPARC-based designs, and the overall applicability ofGPGPUsin general-purpose high-performance computing applications has been the subject of debate: while a GPGPU may be tuned to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent tuning the application to it.[45]However, GPUs are gaining ground, and in 2012 theJaguarsupercomputer was transformed intoTitanby retrofitting CPUs with GPUs.[46][47][48]
High-performance computers have an expected life cycle of about three years before requiring an upgrade.[49]TheGyoukousupercomputer is unique in that it uses both a massively parallel design andliquid immersion cooling.
A number of special-purpose systems have been designed, dedicated to a single problem. This allows the use of specially programmedFPGAchips or even customASICs, allowing better price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers includeBelle,[50]Deep Blue,[51]andHydra[52]for playingchess,Gravity Pipefor astrophysics,[53]MDGRAPE-3for protein structure prediction and molecular dynamics,[54]andDeep Crackfor breaking theDEScipher.[55]
Throughout the decades, the management ofheat densityhas remained a key issue for most centralized supercomputers.[58][59][60]The large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components.[61]There have been diverse approaches to heat management, from pumpingFluorinertthrough the system, to a hybrid liquid-air cooling system or air cooling with normalair conditioningtemperatures.[62][63]A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example,Tianhe-1Aconsumes 4.04megawatts(MW) of electricity.[64]The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour or about $3.5 million per year.
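The arithmetic behind that cost estimate can be checked in a few lines (a sketch; the 4 MW draw and the $0.10/kWh rate are the figures quoted above, and the function name is illustrative):

```python
def power_cost(power_mw: float, price_per_kwh: float):
    """Return (cost per hour, cost per year) for a machine drawing power_mw megawatts."""
    kw = power_mw * 1000            # megawatts -> kilowatts
    hourly = kw * price_per_kwh     # kWh consumed in one hour times the price
    yearly = hourly * 24 * 365
    return hourly, yearly

hourly, yearly = power_cost(4.0, 0.10)
# 4 MW at $0.10/kWh -> $400 per hour, about $3.5 million per year
```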
Heat management is a major issue in complex electronic devices and affects powerful computer systems in various ways.[65]Thethermal design powerandCPU power dissipationissues in supercomputing surpass those of traditionalcomputer coolingtechnologies. The supercomputing awards forgreen computingreflect this issue.[66][67][68]
The packing of thousands of processors together inevitably generates significant amounts ofheat densitythat need to be dealt with. TheCray-2wasliquid cooled, and used aFluorinert"cooling waterfall" which was forced through the modules under pressure.[62]However, the submerged liquid cooling approach was not practical for the multi-cabinet systems based on off-the-shelf processors, and inSystem Xa special cooling system that combined air conditioning with liquid cooling was developed in conjunction with theLiebert company.[63]
In theBlue Genesystem, IBM deliberately used low power processors to deal with heat density.[69]The IBMPower 775, released in 2011, has closely packed elements that require water cooling.[70]The IBMAquasarsystem uses hot water cooling to achieve energy efficiency, the water being used to heat buildings as well.[71][72]
The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008,RoadrunnerbyIBMoperated at 376MFLOPS/W.[73][74]In November 2010, theBlue Gene/Qreached 1,684 MFLOPS/W[75][76]and in June 2011 the top two spots on theGreen 500list were occupied byBlue Genemachines in New York (one achieving 2097 MFLOPS/W) with theDEGIMA clusterin Nagasaki placing third with 1375 MFLOPS/W.[77]
Because copper wires can transfer energy into a supercomputer with much higher power densities than forced air or circulating refrigerants can removewaste heat,[78]the ability of the cooling systems to remove waste heat is a limiting factor.[79][80]As of 2015[update], many existing supercomputers have more infrastructure capacity than the actual peak demand of the machine – designers generally conservatively design the power and cooling infrastructure to handle more than the theoretical peak electrical power consumed by the supercomputer. Designs for future supercomputers are power-limited – thethermal design powerof the supercomputer as a whole, the amount that the power and cooling infrastructure can handle, is somewhat more than the expected normal power consumption, but less than the theoretical peak power consumption of the electronic hardware.[81]
Since the end of the 20th century,supercomputer operating systemshave undergone major transformations, based on the changes insupercomputer architecture.[82]While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems to the adaptation of generic software such asLinux.[83]
Since modernmassively parallelsupercomputers typically separate computations from other services by using multiple types ofnodes, they usually run different operating systems on different nodes, e.g. using a small and efficientlightweight kernelsuch asCNKorCNLon compute nodes, but a larger system such as a fullLinux distributionon server andI/Onodes.[84][85][86]
While in a traditional multi-user computer systemjob schedulingis, in effect, ataskingproblem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with inevitable hardware failures when tens of thousands of processors are present.[87]
Although most modern supercomputers useLinux-based operating systems, each manufacturer has its own specific Linux distribution, and no industry standard exists, partly due to the fact that the differences in hardware architectures require changes to optimize the operating system to each hardware design.[82][88]
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Software tools for distributed processing include standardAPIssuch asMPI[90]andPVM,VTL, andopen sourcesoftware such asBeowulf.
In the most common scenario, environments such asPVMandMPIfor loosely connected clusters andOpenMPfor tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes.GPGPUshave hundreds of processor cores and are programmed using programming models such asCUDAorOpenCL.
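The data-partitioning idea underlying these programming models can be sketched in plain Python. This is not MPI or OpenMP itself, just an illustration of splitting one computation into chunks handled by separate workers and recombining the partial results (the function names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker handles one slice of the data, mirroring how a
    # message-passing job assigns array blocks to separate nodes.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Combine the partial results, as a reduction step would in MPI.
        return sum(pool.map(partial_sum, chunks))

result = parallel_sum_of_squares(list(range(1000)))
```

On a real supercomputer the chunks would live on different nodes and the combination step would be an explicit message-passing reduction rather than a local sum.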
Moreover, it is quite difficult to debug and test parallel programs.Special techniquesneed to be used for testing and debugging such applications.
Opportunistic supercomputing is a form of networkedgrid computingwhereby a "super virtual computer" of manyloosely coupledvolunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scaleembarrassingly parallelproblems that require supercomputing performance scales. However, basic grid andcloud computingapproaches that rely onvolunteer computingcannot handle traditional supercomputing tasks such as fluid dynamic simulations.[91]
The fastest grid computing system is thevolunteer computing projectFolding@home(F@h). As of April 2020[update], F@h reported 2.5 exaFLOPS ofx86processing power. Of this, over 100 PFLOPS are contributed by clients running on various GPUs, and the rest from various CPU systems.[92]
TheBerkeley Open Infrastructure for Network Computing(BOINC) platform hosts a number of volunteer computing projects. As of February 2017[update], BOINC recorded a processing power of over 166 petaFLOPS through over 762 thousand active Computers (Hosts) on the network.[93]
As of October 2016[update],Great Internet Mersenne Prime Search's (GIMPS) distributedMersenne Primesearch achieved about 0.313 PFLOPS through over 1.3 million computers.[94]The PrimeNet server has supported GIMPS's grid computing approach, one of the earliest volunteer computing projects, since 1997.
Quasi-opportunistic supercomputing is a form ofdistributed computingwhereby the "super virtual computer" of many networked geographically dispersed computers performs computing tasks that demand huge processing power.[95]Quasi-opportunistic supercomputing aims to provide a higher quality of service thanopportunistic grid computingby achieving more control over the assignment of tasks to distributed resources and the use of intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through the implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault tolerant message passing libraries and data pre-conditioning.[95]
Cloud computing, with its recent and rapid expansion and development, has grabbed the attention of high-performance computing (HPC) users and developers in recent years. Cloud computing attempts to provide HPC-as-a-service exactly like other forms of services available in the cloud, such assoftware as a service,platform as a service, andinfrastructure as a service. HPC users may benefit from the cloud in various ways, such as scalability, on-demand resources, speed, and low cost. On the other hand, moving HPC applications to the cloud presents a set of challenges too. Good examples of such challenges arevirtualizationoverhead in the cloud, multi-tenancy of resources, and network latency issues. Much research is currently being done to overcome these challenges and make HPC in the cloud a more realistic possibility.[96][97][98][99]
In 2016, Penguin Computing, Parallel Works, R-HPC,Amazon Web Services,Univa,Silicon Graphics International,Rescale, Sabalcore, and Gomput started to offer HPCcloud computing. The Penguin On Demand (POD) cloud is abare-metalcompute model to execute code, but each user is given avirtualizedlogin node. POD computing nodes are connected via non-virtualized10 Gbit/sEthernetor QDRInfiniBandnetworks. User connectivity to the PODdata centerranges from 50 Mbit/s to 1 Gbit/s.[100]Citing Amazon's EC2 Elastic Compute Cloud, Penguin Computing argues thatvirtualizationof compute nodes is not suitable for HPC. Penguin Computing has also criticized that HPC clouds may have allocated computing nodes to customers that are far apart, causing latency that impairs performance for some HPC applications.[101]
Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g. a very complexweather simulationapplication.[102]
Capacity computing, in contrast, is typically thought of as using efficient cost-effective computing power to solve a few somewhat large problems or many small problems.[102]Architectures that lend themselves to supporting many users for routine everyday tasks may have a lot of capacity but are not typically considered supercomputers, given that they do not solve a single very complex problem.[102]
In general, the speed of supercomputers is measured andbenchmarkedinFLOPS(floating-point operations per second), and not in terms ofMIPS(million instructions per second), as is the case with general-purpose computers.[103]These measurements are commonly used with anSI prefixsuch astera-, combined into the shorthand TFLOPS (1012FLOPS, pronouncedteraflops), orpeta-, combined into the shorthand PFLOPS (1015FLOPS, pronouncedpetaflops).Petascalesupercomputers can process one quadrillion (1015) (1000 trillion) FLOPS.Exascaleis computing performance in the exaFLOPS (EFLOPS) range. An EFLOPS is one quintillion (1018) FLOPS (one million TFLOPS). However, the performance of a supercomputer can be severely impacted by fluctuation brought on by elements like system load, network traffic, and concurrent processes, as mentioned by Brehm and Bruhwiler (2015).[104]
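The prefix relationships above can be captured in a small conversion helper (an illustrative sketch; the table of multipliers follows the SI definitions quoted in the text):

```python
# SI multipliers for the FLOPS shorthands used in the article.
PREFIXES = {"MFLOPS": 1e6, "GFLOPS": 1e9, "TFLOPS": 1e12,
            "PFLOPS": 1e15, "EFLOPS": 1e18}

def convert(value, from_unit, to_unit):
    """Convert a rate between FLOPS prefixes, e.g. EFLOPS -> TFLOPS."""
    return value * PREFIXES[from_unit] / PREFIXES[to_unit]

# One EFLOPS is one million TFLOPS, as stated above:
million_tflops = convert(1, "EFLOPS", "TFLOPS")
```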
No single number can reflect the overall performance of a computer system, yet the goal of the Linpack benchmark is to approximate how fast the computer solves numerical problems and it is widely used in the industry.[105]The FLOPS measurement is either quoted based on the theoretical floating point performance of a processor (derived from manufacturer's processor specifications and shown as "Rpeak" in the TOP500 lists), which is generally unachievable when running real workloads, or the achievable throughput, derived from theLINPACK benchmarksand shown as "Rmax" in the TOP500 list.[106]The LINPACK benchmark typically performsLU decompositionof a large matrix.[107]The LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which for example may require more memory bandwidth, or may require better integer computing performance, or may need a high performance I/O system to achieve high levels of performance.[105]
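As a rough illustration of the numerical kernel the LINPACK benchmark exercises, here is a minimal Doolittle LU factorization without pivoting (a textbook sketch, not the actual benchmark code, which uses pivoting and highly optimized libraries):

```python
def lu_decompose(a):
    """Doolittle LU factorization (no pivoting): returns L, U with A = L*U."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Fill row i of U using previously computed rows of L and U.
        for j in range(i, n):
            U[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        # Fill column i of L below the diagonal.
        for j in range(i + 1, n):
            L[j][i] = (a[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
# L = [[1, 0], [1.5, 1]], U = [[4, 3], [0, -1.5]]
```

The benchmark's Rmax figure reflects how fast a machine completes this kind of factorization on a very large matrix, which is why it rewards floating-point throughput more than memory bandwidth or I/O.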
Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to theirLINPACK benchmarkresults. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.
This is a list of the computers which appeared at the top of theTOP500 listsince June 1993,[108]and the "Peak speed" is given as the "Rmax" rating. In 2018,Lenovobecame the world's largest provider for the TOP500 supercomputers with 117 units produced.[109]
Legend:[112]
The stages of supercomputer application are summarized in the following table:
The IBMBlue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.[121]
Modern weather forecasting relies on supercomputers. TheNational Oceanic and Atmospheric Administrationuses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.[122]
In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored byIBM's abandonment of theBlue Waterspetascale project.[123]
TheAdvanced Simulation and Computing Programcurrently uses supercomputers to maintain and simulate the United States nuclear stockpile.[124]
In early 2020, the world's attention turned toCOVID-19. Supercomputers were used to run different simulations searching for compounds that could potentially stop the spread of the virus. These computers ran for tens of hours, using many CPUs working in parallel to model different processes.[125][126][127]
In the 2010s, China, the United States, the European Union, and others competed to be the first to create a 1exaFLOP(1018or one quintillion FLOPS) supercomputer.[128]Erik P. DeBenedictis ofSandia National Laboratorieshas theorized that a zettaFLOPS (1021or one sextillion FLOPS) computer is required to accomplish fullweather modeling, which could cover a two-week time span accurately.[129][130][131]Such systems might be built around 2030.[132]
ManyMonte Carlo simulationsuse the same algorithm to process a randomly generated data set; particularly,integro-differential equationsdescribingphysical transport processes, therandom paths, collisions, and energy and momentum depositions of neutrons, photons, ions, electrons, etc. The next step for microprocessors may be into thethird dimension; and specializing to Monte Carlo, the many layers could be identical, simplifying the design and manufacture process.[133]
The cost of operating high performance supercomputers has risen, mainly due to increasing power consumption. In the mid-1990s a top 10 supercomputer required on the order of 100 kilowatts; in 2010 the top 10 supercomputers required between 1 and 2 megawatts.[134]A 2010 study commissioned byDARPAidentified power consumption as the most pervasive challenge in achievingExascale computing.[135]At the time, a year of energy consumption at one megawatt cost about one million dollars. Supercomputing facilities were constructed to efficiently remove the increasing amount of heat produced by modern multi-corecentral processing units. Based on the energy consumption of the Green 500 list of supercomputers between 2007 and 2011, a supercomputer with 1 exaFLOPS in 2011 would have required nearly 500 megawatts. Operating systems were developed for existing hardware to conserve energy whenever possible.[136]CPU cores not in use during the execution of a parallelized application were put into low-power states, producing energy savings for some supercomputing applications.[137]
The increasing cost of operating supercomputers has been a driving factor in a trend toward bundling of resources through a distributed supercomputer infrastructure. National supercomputing centers first emerged in the US, followed by Germany and Japan. The European Union launched thePartnership for Advanced Computing in Europe(PRACE) with the aim of creating a persistent pan-European supercomputer infrastructure with services to support scientists across theEuropean Unionin porting, scaling and optimizing supercomputing applications.[134]Iceland built the world's first zero-emission supercomputer. Located at the Thor Data Center inReykjavík, Iceland, this supercomputer relies on completely renewable sources for its power rather than fossil fuels. The colder climate also reduces the need for active cooling, making it one of the greenest facilities in the world of computers.[138]
Funding supercomputer hardware also became increasingly difficult. In the mid-1990s a top 10 supercomputer cost about 10 million euros, while in 2010 the top 10 supercomputers required an investment of between 40 and 50 million euros.[134]In the 2000s national governments put in place different strategies to fund supercomputers. In the UK the national government funded supercomputers entirely and high performance computing was put under the control of a national funding agency. Germany developed a mixed funding model, pooling local state funding and federal funding.[134]
Examples of supercomputers in fiction includeHAL 9000,Multivac,The Machine Stops,GLaDOS,The Evitable Conflict,Vulcan's Hammer,Colossus,WOPR,AM, andDeep Thought. A supercomputer fromThinking Machineswas mentioned as the supercomputer used to sequence theDNAextracted from preserved parasites in theJurassic Parkseries.
https://en.wikipedia.org/wiki/Supercomputer
Theoretical computer scienceis a subfield ofcomputer scienceandmathematicsthat focuses on theabstractand mathematical foundations ofcomputation.
It is difficult to circumscribe the theoretical areas precisely. TheACM'sSpecial Interest Group on Algorithms and Computation Theory(SIGACT) provides the following description:[1]
TCS covers a wide variety of topics includingalgorithms,data structures,computational complexity,parallelanddistributedcomputation,probabilistic computation,quantum computation,automata theory,information theory,cryptography,program semanticsandverification,algorithmic game theory,machine learning,computational biology,computational economics,computational geometry, andcomputational number theoryandalgebra. Work in this field is often distinguished by its emphasis on mathematical technique andrigor.
While logical inference and mathematical proof had existed previously, in 1931Kurt Gödelproved with hisincompleteness theoremthat there are fundamental limitations on what statements could be proved or disproved.
Information theorywas added to the field witha 1948 mathematical theory of communicationbyClaude Shannon. In the same decade,Donald Hebbintroduced a mathematical model oflearningin the brain. With mounting biological data supporting this hypothesis with some modification, the fields ofneural networksandparallel distributed processingwere established. In 1971,Stephen Cookand, workingindependently,Leonid Levin, proved that there exist practically relevant problems that areNP-complete– a landmark result incomputational complexity theory.[2]
Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed, as shown below:
Analgorithmis a step-by-step procedure for calculations. Algorithms are used forcalculation,data processing, andautomated reasoning.
An algorithm is aneffective methodexpressed as afinitelist[3]of well-defined instructions[4]for calculating afunction.[5]Starting from an initial state and initial input (perhapsempty),[6]the instructions describe acomputationthat, whenexecuted, proceeds through a finite[7]number of well-defined successive states, eventually producing "output"[8]and terminating at a final ending state. The transition from one state to the next is not necessarilydeterministic; some algorithms, known asrandomized algorithms, incorporate random input.[9]
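Euclid's greatest-common-divisor procedure is a classic instance of this definition: each loop iteration is one well-defined state transition, and the computation terminates in a final state after finitely many steps:

```python
def gcd(a, b):
    """Euclid's algorithm. Each iteration is one state transition
    (a, b) -> (b, a mod b); the second component strictly decreases,
    so the computation terminates when b reaches 0."""
    while b != 0:
        a, b = b, a % b
    return a
```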
Automata theoryis the study ofabstract machinesandautomata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science, underdiscrete mathematics(a section ofmathematicsand also ofcomputer science).Automatacomes from the Greek word αὐτόματα meaning "self-acting".
Automata theory is the study of self-operating virtual machines, used to reason logically about how inputs are processed into outputs, with or without intermediate stage(s) ofcomputation(or anyfunction/process).
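A deterministic finite automaton, one of the simplest abstract machines studied in this field, can be simulated in a few lines (the transition table below is an illustrative example, not drawn from the text):

```python
def run_dfa(transitions, start, accepting, word):
    """Simulate a deterministic finite automaton on an input string:
    follow one transition per symbol, then check the final state."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

# Example DFA accepting binary strings with an even number of 1s.
even_ones = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}
```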
Coding theoryis the study of the properties of codes and their fitness for a specific application. Codes are used fordata compression,cryptography,error correctionand more recently also fornetwork coding. Codes are studied by various scientific disciplines – such asinformation theory,electrical engineering,mathematics, andcomputer science– for the purpose of designing efficient and reliabledata transmissionmethods. This typically involves the removal of redundancy and the correction (or detection) of errors in the transmitted data.
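The simplest example of error detection is a single even-parity bit (a minimal sketch; practical codes such as Hamming codes can also correct errors, not merely detect them):

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(codeword):
    """Any single flipped bit makes the 1-count odd, exposing the error."""
    return sum(codeword) % 2 == 0

word = add_parity([1, 0, 1, 1])
```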
Computational complexity theoryis a branch of thetheory of computationthat focuses on classifyingcomputational problemsaccording to their inherent difficulty, and relating thoseclassesto each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as analgorithm.
A problem is regarded as inherently difficult if its solution requires significant resources, whatever thealgorithmused. The theory formalizes this intuition, by introducing mathematicalmodels of computationto study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Othercomplexitymeasures are also used, such as the amount of communication (used incommunication complexity), the number ofgatesin a circuit (used incircuit complexity) and the number of processors (used inparallel computing). One of the roles of computational complexity theory is to determine the practical limits on whatcomputerscan and cannot do.
Computational geometryis a branch of computer science devoted to the study of algorithms that can be stated in terms ofgeometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry.
The main impetus for the development of computational geometry as a discipline was progress incomputer graphicsand computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come frommathematical visualization.
Other important applications of computational geometry includerobotics(motion planning and visibility problems),geographic information systems(GIS) (geometrical location and search, route planning),integrated circuitdesign (IC geometry design and verification),computer-aided engineering(CAE) (mesh generation),computer vision(3D reconstruction).
Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples, including samples that have never been previously seen by the algorithm. The goal of the supervised learning algorithm is to optimize some measure of performance, such as minimizing the number of mistakes made on new samples.
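The mushroom example can be made concrete with a minimal one-nearest-neighbor classifier (an illustrative sketch; the feature values and labels are made up):

```python
def nearest_neighbor(train, query):
    """1-nearest-neighbor: label a new sample with the label of the
    closest labeled sample. `train` is a list of (features, label) pairs."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(train, key=lambda pair: sq_dist(pair[0], query))[1]

# Hypothetical labeled samples: two feature values per mushroom.
samples = [((1.0, 1.0), "edible"), ((5.0, 5.0), "poisonous")]
label = nearest_neighbor(samples, (1.5, 0.8))
```

Even this tiny classifier exhibits the key property discussed above: it assigns a label to a query point it has never seen, based only on previously labeled samples.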
Computational number theory, also known asalgorithmic number theory, is the study ofalgorithmsfor performingnumber theoreticcomputations. The best known problem in the field isinteger factorization.
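The most basic factorization algorithm is trial division (a sketch; it is fast for small inputs but infeasible for the large semiprimes that make factorization hard in practice):

```python
def factorize(n):
    """Trial division: repeatedly divide out the smallest factor.
    Only divisors up to sqrt(n) need to be tried; any remainder is prime."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors
```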
Cryptographyis the practice and study of techniques forsecure communicationin the presence of third parties (calledadversaries).[10]More generally, it is about constructing and analyzingprotocolsthat overcome the influence of adversaries[11]and that are related to various aspects ininformation securitysuch as dataconfidentiality,data integrity,authentication, andnon-repudiation.[12]Modern cryptography intersects the disciplines ofmathematics,computer science, andelectrical engineering. Applications of cryptography includeATM cards,computer passwords, andelectronic commerce.
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed aroundcomputational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements ininteger factorizationalgorithms, and faster computing technology require these solutions to be continually adapted. There existinformation-theoretically secureschemes that provably cannot be broken even with unlimited computing power—an example is theone-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms.
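The one-time pad mentioned above is simple enough to sketch directly. Note that its information-theoretic security requires a truly random pad, at least as long as the message and never reused; the fixed pad below is purely illustrative:

```python
def xor_bytes(data, pad):
    """One-time pad: XOR the message with the pad byte by byte.
    Applying the same pad to the ciphertext recovers the plaintext."""
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(a ^ b for a, b in zip(data, pad))

message = b"attack at dawn"
pad = bytes([7, 42, 13, 99, 1, 200, 33, 5, 90, 17, 76, 3, 250, 66])  # NOT random: demo only
ciphertext = xor_bytes(message, pad)
```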
Adata structureis a particular way of organizingdatain a computer so that it can be usedefficiently.[13][14]
Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, databases useB-treeindexes for small percentages of data retrieval andcompilersand databases use dynamichash tablesas look up tables.
Data structures provide a means to manage large amounts of data efficiently for uses such as largedatabasesandinternet indexing services. Usually, efficient data structures are key to designing efficientalgorithms. Some formal design methods andprogramming languagesemphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in bothmain memoryand insecondary memory.
Distributed computingstudies distributed systems. A distributed system is a software system in which components located onnetworked computerscommunicate and coordinate their actions bypassing messages.[15]The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components.[15]Examples of distributed systems vary fromSOA-based systemstomassively multiplayer online gamestopeer-to-peer applications, and blockchain networks likeBitcoin.
A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs.[16] There are many alternatives for the message passing mechanism, including RPC-like connectors and message queues. An important goal and challenge of distributed systems is location transparency.
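The message-passing style described above can be sketched in miniature with in-process queues standing in for a network transport; the names (`worker`, `request_q`, `reply_q`) are illustrative, not from any library.

```python
# A toy RPC-like exchange between two components: one thread plays the
# remote service, communicating only through message queues.
import threading
import queue

request_q = queue.Queue()
reply_q = queue.Queue()

def worker():
    # The remote component: receive a request, compute, send a reply.
    while True:
        msg = request_q.get()
        if msg is None:                  # sentinel: shut down
            break
        reply_q.put(("result", msg["x"] + msg["y"]))

t = threading.Thread(target=worker)
t.start()

request_q.put({"x": 2, "y": 3})          # the "call", expressed as a message
tag, value = reply_q.get()               # block until the reply arrives
request_q.put(None)
t.join()

assert (tag, value) == ("result", 5)
```

Real systems add serialization, addressing, and failure handling, but the shape—request message out, reply message back—is the same.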
Information-based complexity (IBC) studies optimal algorithms and computational complexity for continuous problems. IBC has studied continuous problems such as path integration, partial differential equations, systems of ordinary differential equations, nonlinear equations, integral equations, fixed points, and very-high-dimensional integration.
Formal methods are a particular kind of mathematics-based technique for the specification, development and verification of software and hardware systems.[17] The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.[18]
Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types, to problems in software and hardware specification and verification.[19]
Information theory is a branch of applied mathematics, electrical engineering, and computer science involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Since its inception it has broadened to find applications in many other areas, including statistical inference, natural language processing, cryptography, neurobiology,[20] the evolution[21] and function[22] of molecular codes, model selection in statistics,[23] thermal physics,[24] quantum computing, linguistics, plagiarism detection,[25] pattern recognition, anomaly detection and other forms of data analysis.[26]
Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPEGs), and channel coding (e.g. for Digital Subscriber Line (DSL)). The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. Important sub-fields of information theory are source coding, channel coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, and measures of information.
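The central quantity behind these compression limits is Shannon entropy; a few lines suffice to compute it for a discrete distribution.

```python
# Shannon entropy H(X) = -sum p(x) * log2 p(x): the fundamental limit
# (in bits per symbol) for losslessly compressing a memoryless source.
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair coin needs 1 bit per toss; a biased coin is compressible below that.
assert entropy([0.5, 0.5]) == 1.0
assert entropy([0.9, 0.1]) < 1.0
```

This is why highly skewed data (like English text) compresses well: its per-symbol entropy is far below the naive bit count.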
Machine learning is a scientific discipline that deals with the construction and study of algorithms that can learn from data.[27] Such algorithms operate by building a model based on inputs[28]: 2 and using that to make predictions or decisions, rather than following only explicitly programmed instructions.
Machine learning can be considered a subfield of computer science and statistics. It has strong ties to artificial intelligence and optimization, which deliver methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR),[29] search engines and computer vision. Machine learning is sometimes conflated with data mining,[30] although that focuses more on exploratory data analysis.[31] Machine learning and pattern recognition "can be viewed as two facets of the same field."[28]: vii
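"Building a model from inputs" can be illustrated with the simplest possible learner: fitting a line to data by ordinary least squares, then using the fitted parameters to predict unseen inputs. This is a generic textbook sketch, not tied to any particular library.

```python
# Fit y = a*x + b to data by ordinary least squares, then predict.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx           # slope, intercept

xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]                   # data generated by y = 2x + 1
a, b = fit_line(xs, ys)

assert abs(a - 2.0) < 1e-9 and abs(b - 1.0) < 1e-9
assert abs((a * 10 + b) - 21.0) < 1e-9   # the learned model predicts unseen x
```

The decision rule (here, the line) was never programmed explicitly; it was estimated from the data, which is the defining move of machine learning.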
Natural computing,[32][33] also called natural computation, is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials (e.g., molecules) to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others. However, the field is more closely related to biological computation.
Computational paradigms studied by natural computing are abstracted from natural phenomena as diverse as self-replication, the functioning of the brain, Darwinian evolution, group behavior, the immune system, the defining properties of life forms, cell membranes, and morphogenesis.
Besides traditional electronic hardware, these computational paradigms can be implemented on alternative physical media such as biomolecules (DNA, RNA), or trapped-ion quantum computing devices.
Dually, one can view processes occurring in nature as information processing. Such processes include self-assembly, developmental processes, gene regulation networks, protein–protein interaction networks, biological transport (active transport, passive transport) networks, and gene assembly in unicellular organisms. Efforts to understand biological systems also include engineering of semi-synthetic organisms, and understanding the universe itself from the point of view of information processing. Indeed, the idea has even been advanced that information is more fundamental than matter or energy.
The Zuse-Fredkin thesis, dating back to the 1960s, states that the entire universe is a huge cellular automaton which continuously updates its rules.[34][35] Recently it has been suggested that the whole universe is a quantum computer that computes its own behaviour.[36]
Parallel computing is a form of computation in which many calculations are carried out simultaneously,[40] operating on the principle that large problems can often be divided into smaller ones, which are then solved "in parallel". There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling.[41] As power consumption (and consequently heat generation) by computers has become a concern in recent years,[42] parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.[43]
Parallel computer programs are more difficult to write than sequential ones,[44] because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.
The maximum possible speed-up of a single program as a result of parallelization is known as Amdahl's law.
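Amdahl's law is simple enough to state and evaluate directly: if a fraction p of a program's work can be parallelized over n processors, the overall speed-up is 1 / ((1 − p) + p/n), so the serial fraction bounds the gain no matter how many processors are added.

```python
# Amdahl's law: speed-up from parallelizing a fraction p of the work
# across n processors.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# With 95% of the work parallelizable, even an enormous processor count
# cannot reach a 20x speed-up: the serial 5% dominates.
assert abs(amdahl_speedup(0.95, 4) - 3.478) < 0.01
assert amdahl_speedup(0.95, 10**9) < 20.0
```

This is why reducing the serial fraction of a program is often more valuable than adding cores.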
Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of theoretical computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.
In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically legal strings defined by a specific programming language, showing the computation involved; a syntactically illegal string yields no computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or by explaining how the program will execute on a certain platform, hence creating a model of computation.
A quantum computer is a computation system that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data.[45] Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980[46] and Richard Feynman in 1982.[47][48] A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968.[49]
Experiments have been carried out in which quantum computational operations were executed on a very small number of qubits.[50] Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.[51]
Computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although, properly speaking, computer algebra should be a subfield of scientific computing, they are generally considered as distinct fields because scientific computing is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are thus manipulated as symbols (hence the name symbolic computation).
Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications, which include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, and a large set of routines to perform usual operations, like simplification of expressions, differentiation using the chain rule, polynomial factorization, indefinite integration, etc.
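What "exact computation with symbols" means can be sketched with a toy symbolic differentiator over nested tuples; the expression encoding and the supported operators here are invented for illustration, far simpler than a real computer algebra system.

```python
# Toy symbolic differentiation with respect to x. Expressions are numbers,
# the symbol "x", ("+", a, b), ("*", a, b), or ("sin", a); the ("sin", a)
# case applies the chain rule: d/dx sin(u) = cos(u) * u'.
def diff(e):
    if e == "x":
        return 1
    if isinstance(e, (int, float)):
        return 0
    op, *args = e
    if op == "+":
        return ("+", diff(args[0]), diff(args[1]))
    if op == "*":                       # product rule
        a, b = args
        return ("+", ("*", diff(a), b), ("*", a, diff(b)))
    if op == "sin":                     # chain rule
        return ("*", ("cos", args[0]), diff(args[0]))
    raise ValueError(op)

# d/dx sin(x*x) = cos(x*x) * (1*x + x*1), exact and unsimplified
assert diff(("sin", ("*", "x", "x"))) == \
    ("*", ("cos", ("*", "x", "x")), ("+", ("*", 1, "x"), ("*", "x", 1)))
```

Note that the result is an exact expression tree, not a number; simplifying it (e.g. to 2x·cos(x²)) is a separate routine in a real system.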
Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI allows IC makers to add all of these circuits into one chip.
|
https://en.wikipedia.org/wiki/Theoretical_computer_science
|
Unconventional computing (also known as alternative computing or nonstandard computation) is computing by any of a wide range of new or unusual methods.
The term unconventional computation was coined by Cristian S. Calude and John Casti and used at the First International Conference on Unconventional Models of Computation[1] in 1998.[2]
The general theory of computation allows for a variety of methods of computation. Computing technology was first developed using mechanical systems and then evolved into the use of electronic devices. Other fields of modern physics provide additional avenues for development.
A model of computation describes how the output of a mathematical function is computed given its input. The model describes how units of computation, memories, and communications are organized.[3] The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.
A wide variety of models are commonly used; some closely resemble the workings of (idealized) conventional computers, while others do not. Some commonly used models are register machines, random-access machines, Turing machines, lambda calculus, rewriting systems, digital circuits, cellular automata, and Petri nets.
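One of these models, the Turing machine, can be simulated in a few lines. The machine below is a hypothetical example of this sketch's own devising: it flips every bit on the tape and halts at the first blank.

```python
# A minimal single-tape Turing machine simulator. Rules map
# (state, symbol) -> (new_state, symbol_to_write, head_move).
def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))        # sparse tape indexed by position
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(pos, blank)
        state, write, move = rules[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: complement every bit, halt on the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
assert run_tm(flip, "0110") == "1001"
```

The point of such models is exactly what the paragraph states: the machine's step count can be analyzed without reference to any physical implementation.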
Historically, mechanical computers were used in industry before the advent of the transistor.
Mechanical computers retain some interest today, both in research and as analogue computers. Some mechanical computers have a theoretical or didactic relevance, such as billiard-ball computers, while hydraulic ones like the MONIAC or the Water integrator were used effectively.[4]
Some of these schemes are simulated rather than built: no attempt has been made to construct a functioning computer from the mechanical collisions of actual billiard balls. The domino computer is another theoretically interesting mechanical computing scheme.
An analog computer is a type of computer that uses analog signals, which are continuous physical quantities, to model and solve problems. These signals can be electrical, mechanical, or hydraulic in nature. Analog computers were widely used in scientific and industrial applications, and were often faster than digital computers at the time. However, they started to become obsolete in the 1950s and 1960s and are now mostly used in specific applications such as aircraft flight simulators and teaching control systems in universities.[5] Examples of analog computing devices include slide rules, nomograms, and complex mechanisms for process control and protective relays.[6] The Antikythera mechanism, a mechanical device that calculates the positions of planets and the Moon, and the planimeter, a mechanical integrator for calculating the area of an arbitrary 2D shape, are also examples of analog computing.
Most modern computers are electronic computers with the von Neumann architecture based on digital electronics, with extensive integration made possible following the invention of the transistor and the scaling of Moore's law.
Unconventional computing is, according to the Center for Nonlinear Studies website announcing the conference Unconventional Computation: Quo Vadis? (March 21–23, 2007, Santa Fe, New Mexico, USA),[7] "an interdisciplinary research area with the main goal to enrich or go beyond the standard models, such as the von Neumann computer architecture and the Turing machine, which have dominated computer science for more than half a century". These methods model their computational operations based on non-standard paradigms, and are currently mostly in the research and development stage.
This computing behavior can be simulated using classical silicon-based micro-transistors or solid-state computing technologies, but it aims to achieve a new kind of computing.
The following are counterintuitive, pedagogical examples showing that a computer can be made out of almost anything.
A billiard-ball computer is a type of mechanical computer that uses the motion of spherical billiard balls to perform computations. In this model, the wires of a Boolean circuit are represented by paths for the balls to travel on, the presence or absence of a ball on a path encodes the signal on that wire, and gates are simulated by collisions of balls at points where their paths intersect.[8][9]
A domino computer is a mechanical computer that uses standing dominoes to represent the amplification or logic gating of digital signals. These constructs can be used to demonstrate digital concepts and can even be used to build simple information processing modules.[10][11]
Both billiard-ball computers and domino computers are examples of unconventional computing methods that use physical objects to perform computation.
Reservoir computing is a computational framework derived from recurrent neural network theory that involves mapping input signals into higher-dimensional computational spaces through the dynamics of a fixed, non-linear system called a reservoir. The reservoir, which can be virtual or physical, is made up of individual non-linear units that are connected in recurrent loops, allowing it to store information. Training is performed only at the readout stage, as the reservoir dynamics are fixed, and this framework allows for the use of naturally available systems, both classical and quantum mechanical, to reduce the effective computational cost. One key benefit of reservoir computing is that it allows for a simple and fast learning algorithm, as well as hardware implementation through physical reservoirs.[12][13]
Tangible computing refers to the use of physical objects as user interfaces for interacting with digital information. This approach aims to take advantage of the human ability to grasp and manipulate physical objects in order to facilitate collaboration, learning, and design. Characteristics of tangible user interfaces include the coupling of physical representations to underlying digital information and the embodiment of mechanisms for interactive control.[14] There are five defining properties of tangible user interfaces, including the ability to multiplex both input and output in space, concurrent access and manipulation of interface components, strong specific devices, spatially aware computational devices, and spatial reconfigurability of devices.[15]
The term "human computer" refers to individuals who perform mathematical calculations manually, often working in teams and following fixed rules. In the past, teams of people were employed to perform long and tedious calculations, and the work was divided to be completed in parallel. The term has also been used more recently to describe individuals with exceptional mental arithmetic skills, also known as mental calculators.[16]
Human-robot interaction, or HRI, is the study of interactions between humans and robots. It involves contributions from fields such as artificial intelligence, robotics, and psychology. Cobots, or collaborative robots, are designed for direct interaction with humans within shared spaces and can be used for a variety of tasks,[17] including information provision, logistics, and unergonomic tasks in industrial environments.
Swarm robotics is a field of study that focuses on the coordination and control of multiple robots as a system. Inspired by the emergent behavior observed in social insects, swarm robotics involves the use of relatively simple individual rules to produce complex group behaviors through local communication and interaction with the environment.[18] This approach is characterized by the use of large numbers of simple robots and promotes scalability through the use of local communication methods such as radio frequency or infrared.
Optical computing is a type of computing that uses light waves, often produced by lasers or incoherent sources, for data processing, storage, and communication. While this technology has the potential to offer higher bandwidth than traditional computers, which use electrons, optoelectronic devices can consume a significant amount of energy in the process of converting electronic energy to photons and back. All-optical computers aim to eliminate the need for these conversions, leading to reduced electrical power consumption.[19] Applications of optical computing include synthetic-aperture radar and optical correlators, which can be used for object detection, tracking, and classification.[20][21]
Spintronics is a field of study that involves the use of the intrinsic spin and magnetic moment of electrons in solid-state devices.[22][23][24] It differs from traditional electronics in that it exploits the spin of electrons as an additional degree of freedom, which has potential applications in data storage and transfer,[25] as well as quantum and neuromorphic computing. Spintronic systems are often created using dilute magnetic semiconductors and Heusler alloys.
Atomtronics is a form of computing that involves the use of ultra-cold atoms in coherent matter-wave circuits, which can have components similar to those found in electronic or optical systems.[26][27] These circuits have potential applications in several fields, including fundamental physics research and the development of practical devices such as sensors and quantum computers.
Fluidics, or fluidic logic, is the use of fluid dynamics to perform analog or digital operations in environments where electronics may be unreliable, such as those exposed to high levels of electromagnetic interference or ionizing radiation. Fluidic devices operate without moving parts and can use nonlinear amplification, similar to transistors in electronic digital logic. Fluidics are also used in nanotechnology and military applications.
Quantum computing, perhaps the most well-known and developed unconventional computing method, is a type of computation that utilizes the principles of quantum mechanics, such as superposition and entanglement, to perform calculations.[28][29] Quantum computers use qubits, which are analogous to classical bits but can exist in multiple states simultaneously, to perform operations. While current quantum computers may not yet outperform classical computers in practical applications, they have the potential to solve certain computational problems, such as integer factorization, significantly faster than classical computers. However, there are several challenges to building practical quantum computers, including the difficulty of maintaining qubits' quantum states and the need for error correction.[30][31] Quantum complexity theory is the study of the computational complexity of problems with respect to quantum computers.
Neuromorphic quantum computing[32][33] (abbreviated as 'n.quantum computing') is an unconventional type of computing that uses neuromorphic computing to perform quantum operations. It has been suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing.[34][35][36][37][38]
Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional computing approaches to computation and don't follow the von Neumann architecture. They both construct a system (a circuit) that represents the physical problem at hand, and then leverage the respective physical properties of the system to seek the "minimum". Neuromorphic quantum computing and quantum computing share similar physical properties during computation.[38][39]
Superconducting computing is a form of cryogenic computing that utilizes the unique properties of superconductors, including zero resistance wires and ultrafast switching, to encode, process, and transport data using single flux quanta. It is often used in quantum computing and requires cooling to cryogenic temperatures for operation.
Microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS) are technologies that involve the use of microscopic devices with moving parts, ranging in size from micrometers to nanometers. These devices typically consist of a central processing unit (such as an integrated circuit) and several components that interact with their surroundings, such as sensors.[40] MEMS and NEMS technology differ from molecular nanotechnology or molecular electronics in that they also consider factors such as surface chemistry and the effects of ambient electromagnetism and fluid dynamics. Applications of these technologies include accelerometers and sensors for detecting chemical substances.[41]
Molecular computing is an unconventional form of computing that utilizes chemical reactions to perform computations. Data is represented by variations in chemical concentrations,[42] and the goal of this type of computing is to use the smallest stable structures, such as single molecules, as electronic components. This field, also known as chemical computing or reaction-diffusion computing, is distinct from the related fields of conductive polymers and organic electronics, which use molecules to affect the bulk properties of materials.
Peptide computing is a computational model that uses peptides and antibodies to solve NP-complete problems and has been shown to be computationally universal. It offers advantages over DNA computing, such as a larger number of building blocks and more flexible interactions, but has not yet been practically realized due to the limited availability of specific monoclonal antibodies.[43][44]
DNA computing is a branch of unconventional computing that uses DNA and molecular biology hardware to perform calculations. It is a form of parallel computing that can solve certain specialized problems faster and more efficiently than traditional electronic computers. While DNA computing does not provide any new capabilities in terms of computability theory, it can perform a high number of parallel computations simultaneously. However, DNA computing has slower processing speeds, and it is more difficult to analyze the results compared to digital computers.
Membrane computing, also known as P systems,[45] is a subfield of computer science that studies distributed and parallel computing models based on the structure and function of biological membranes. In these systems, objects such as symbols or strings are processed within compartments defined by membranes, and the communication between compartments and with the external environment plays a critical role in the computation. P systems are hierarchical and can be represented graphically, with rules governing the production, consumption, and movement of objects within and between regions. While these systems have largely remained theoretical,[46] some have been shown to have the potential to solve NP-complete problems and have been proposed as hardware implementations for unconventional computing.
Biological computing, also known as bio-inspired computing or natural computation, is the study of using models inspired by biology to solve computer science problems, particularly in the fields of artificial intelligence and machine learning. It encompasses a range of computational paradigms including artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, and more, which can be implemented using traditional electronic hardware or alternative physical media such as biomolecules or trapped-ion quantum computing devices. It also includes the study of understanding biological systems through engineering semi-synthetic organisms and viewing natural processes as information processing. The concept of the universe itself as a computational mechanism has also been proposed.[47][48]
Neuromorphic computing involves using electronic circuits to mimic the neurobiological architectures found in the human nervous system, with the goal of creating artificial neural systems that are inspired by biological ones.[49][50] These systems can be implemented using a variety of hardware, such as memristors,[51] spintronic memories, and transistors,[52][53] and can be trained using a range of software-based approaches, including error backpropagation[54] and canonical learning rules.[55] The field of neuromorphic engineering seeks to understand how the design and structure of artificial neural systems affects their computation, representation of information, adaptability, and overall function, with the ultimate aim of creating systems that exhibit similar properties to those found in nature. Wetware computers, which are composed of living neurons, are a conceptual form of neuromorphic computing that has been explored in limited prototypes.[56] Electron microscopy can already image high-resolution anatomical neural connection diagrams,[57] and semiconductor-chip-based intracellular recording at scale can generate physical neural connection maps that specify connection types and strengths;[58] these imaging and recording technologies can inform neuromorphic system design.
Cellular automata are discrete models of computation consisting of a grid of cells in a finite number of states, such as on and off. The state of each cell is determined by a fixed rule based on the states of the cell and its neighbors. There are four primary classifications of cellular automata, ranging from patterns that stabilize into homogeneity to those that become extremely complex and potentially Turing-complete. Amorphous computing refers to the study of computational systems using large numbers of parallel processors with limited computational ability and local interactions, regardless of the physical substrate. Examples of naturally occurring amorphous computation can be found in developmental biology, molecular biology, neural networks, and chemical engineering. The goal of amorphous computation is to understand and engineer novel systems through the characterization of amorphous algorithms as abstractions.
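One step of a cellular automaton is easy to make concrete. The sketch below uses Rule 90, an elementary cellular automaton in which each cell becomes the XOR of its two neighbors, on a small periodic grid.

```python
# One update step of elementary cellular automaton Rule 90 on a ring:
# each cell's next state is the XOR of its left and right neighbors.
def step_rule90(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]          # a single live cell
row = step_rule90(row)
assert row == [0, 0, 1, 0, 1, 0, 0]  # the live cell "splits" into its neighbors
```

Iterating this one fixed local rule from a single live cell traces out the Sierpinski triangle, a small illustration of how simple rules can generate complex global patterns.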
Evolutionary computation is a type of artificial intelligence and soft computing that uses algorithms inspired by biological evolution to find optimized solutions to a wide range of problems. It involves generating an initial set of candidate solutions, stochastically removing less desired solutions, and introducing small random changes to create a new generation. The population of solutions is subjected to natural or artificial selection and mutation, resulting in evolution towards increasing fitness according to the chosen fitness function. Evolutionary computation has proven effective in various problem settings and has applications in both computer science and evolutionary biology.
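The generate–select–mutate loop described above can be sketched on a toy problem (the standard "OneMax" benchmark: evolve a bit string toward all ones). Population size, mutation rate, and generation count here are arbitrary illustrative choices.

```python
# A minimal evolutionary algorithm: keep the fitter half of the
# population each generation and refill it with mutated copies.
import random

random.seed(0)
LENGTH, POP, GENS = 20, 30, 60

def fitness(ind):
    return sum(ind)                       # count of 1 bits (OneMax)

def mutate(ind, rate=0.05):
    return [b ^ (random.random() < rate) for b in ind]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
initial_best = max(map(fitness, pop))

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]             # selection: keep the fitter half
    pop = parents + [mutate(random.choice(parents))
                     for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
assert initial_best <= fitness(best) <= LENGTH   # elitism never loses ground
```

Because the fitter half is carried over unchanged (elitism), the best fitness in the population can never decrease, which is the monotone "evolution towards increased fitness" the paragraph describes.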
Ternary computing is a type of computing that uses ternary logic, or base 3, in its calculations rather than the more common binary system. Ternary computers use trits, or ternary digits, which can be defined in several ways, including unbalanced ternary, fractional unbalanced ternary, balanced ternary, and unknown-state logic. Ternary quantum computers use qutrits instead of trits. Ternary computing has largely been replaced by binary computers, but it has been proposed for use in high-speed, low-power consumption devices using the Josephson junction as a balanced ternary memory cell.
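Balanced ternary, one of the trit encodings mentioned above, represents integers with digits {−1, 0, +1}, which makes negation trivial (flip every digit) and needs no separate sign bit. A small conversion sketch:

```python
# Convert an integer to balanced ternary (digits -1, 0, +1) and back.
def to_balanced_ternary(n):
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # digit 2 becomes -1 with a carry into the next trit
            r = -1
        n = (n - r) // 3
        digits.append(r)
    return digits[::-1] or [0]

def from_balanced_ternary(digits):
    value = 0
    for d in digits:
        value = value * 3 + d
    return value

assert to_balanced_ternary(5) == [1, -1, -1]        # 9 - 3 - 1 = 5
assert all(from_balanced_ternary(to_balanced_ternary(n)) == n
           for n in range(-40, 41))
```

The round-trip check over negative and positive integers shows why no sign convention is needed: the sign lives in the digits themselves.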
Reversible computing is a type of unconventional computing where the computational process can be reversed to some extent. In order for a computation to be reversible, the relation between states and their successors must be one-to-one, and the process must not result in an increase in physical entropy. Quantum circuits are reversible as long as they do not collapse quantum states, and reversible functions are bijective, meaning they have the same number of inputs as outputs.[60]
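The bijectivity requirement can be demonstrated with the Toffoli (controlled-controlled-NOT) gate, a standard reversible gate that is universal for classical logic: it flips its target bit only when both control bits are 1.

```python
# The Toffoli gate maps 3-bit states to 3-bit states bijectively
# and is its own inverse, so no information is ever destroyed.
def toffoli(a, b, c):
    return (a, b, c ^ (a & b))          # flip target c iff both controls are 1

states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
outputs = [toffoli(*s) for s in states]

assert sorted(outputs) == sorted(states)                 # a bijection on states
assert all(toffoli(*toffoli(*s)) == s for s in states)   # self-inverse
```

Contrast this with an ordinary AND gate, which maps four input states to two outputs and therefore cannot be run backwards; this information loss is exactly what reversible computing avoids.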
Chaos computing is a type of unconventional computing that utilizes chaotic systems to perform computation. Chaotic systems can be used to create logic gates and can be rapidly switched between different patterns, making them useful for fault-tolerant applications and parallel computing. Chaos computing has been applied to various fields such as meteorology, physiology, and finance.
Stochastic computing is a method of computation that represents continuous values as streams of random bits and performs complex operations using simple bit-wise operations on the streams. It can be viewed as a hybrid analog/digital computer and is characterized by its progressive precision property, where the precision of the computation increases as the bit stream is extended. Stochastic computing can be used in iterative systems to achieve faster convergence, but it can also be costly due to the need for random bit stream generation and is vulnerable to failure if the assumption of independent bit streams is not met. It is also limited in its ability to perform certain digital functions.
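The core trick of stochastic computing fits in a few lines: encode a value p in [0, 1] as a random bit stream with P(bit = 1) = p, and then multiplying two values reduces to a bit-wise AND of independent streams. The stream length below is an arbitrary illustrative choice.

```python
# Stochastic multiplication: AND two independent Bernoulli bit streams
# and read the product off as the fraction of 1 bits in the result.
import random

random.seed(1)
N = 100_000

def stream(p):
    return [random.random() < p for _ in range(N)]

a, b = stream(0.5), stream(0.4)
product = [x & y for x, y in zip(a, b)]   # AND multiplies the encoded values

estimate = sum(product) / N
assert abs(estimate - 0.5 * 0.4) < 0.01   # progressively precise, never exact
```

This shows both properties named in the paragraph: the hardware per operation is trivial (a single AND per bit), and the result is only an estimate whose precision grows with stream length, with correctness depending on the independence of the streams.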
|
https://en.wikipedia.org/wiki/Unconventional_computing
|
Valleytronics (from valley and electronics) is an experimental area in semiconductors that exploits local extrema ("valleys") in the electronic band structure. Certain semiconductors have multiple "valleys" in the electronic band structure of the first Brillouin zone, and are known as multivalley semiconductors.[1][2] Valleytronics is the technology of control over the valley degree of freedom, a local maximum/minimum on the valence/conduction band, of such multivalley semiconductors.

The term was coined in analogy to spintronics. While spintronics harnesses the internal degree of freedom of spin to store, manipulate and read out bits of information, valleytronics proposes to perform similar tasks using the multiple extrema of the band structure, so that the information of 0s and 1s would be stored as different discrete values of the crystal momentum.

Valleytronics may also refer to other forms of quantum manipulation of valleys in semiconductors, including quantum computation with valley-based qubits,[3][4][5][6] valley blockade and other forms of quantum electronics. The first experimental evidence of valley blockade, predicted in Ref. [7] (which completes the set alongside Coulomb charge blockade and Pauli spin blockade), has been observed in a single-atom-doped silicon transistor.[8]

Several theoretical proposals and experiments have been carried out in a variety of systems, such as graphene,[9][10][11] few-layer phosphorene,[12] some transition metal dichalcogenide monolayers,[13][14][15] diamond,[16] bismuth,[17] silicon,[4][18][19] carbon nanotubes,[6] aluminium arsenide[20] and silicene.[21]
|
https://en.wikipedia.org/wiki/Valleytronics
|
The GSM Interworking Profile, usually abbreviated to GIP and sometimes to IWP, is a profile for DECT that allows a DECT base station to form part of a GSM network, given suitable handsets. While proposed and tested, notably in Switzerland in 1995, the system has never been commercially deployed. Infrastructure issues make it less practical and useful to implement than the more recent GAN/UMA system, which can make use of usually unmetered and neutral Internet service to provide the connection back to the network operator.

Like the later GAN/UMA standard, GIP makes use of a technology that does not require licensed spectrum to expand capacity and, in theory, lets end users improve coverage in areas difficult to reach via large external cell towers.

GIP is a DECT profile, meaning a set of protocols that runs over the base DECT system. The most popular profile for DECT is GAP, which is used to provide cordless phone service, but this is not used for GIP.
In GIP, several of the GSM lower level protocols are replaced by DECT-friendly equivalents. Voice channels make use of 32 kbit/s ADPCM channels rather than 13 kbit/s FR/EFR/AMR channels, for example.
The system supports handoff, and authentication is done via the GSM SIM card as normal. However, DECT terminals need to authenticate themselves against the base station, and this added layer is implementation dependent.

The base station is usually connected back to the GSM network via an ISDN line. An "A interface" is implemented over the ISDN line just as it would be for a BSC. This allows multiple GSM calls and GSM control data to be multiplexed over the 64 kbit/s ISDN B channels.

While GIP was deployed with some success at Telecom '95 in Geneva, the system has not been commercially deployed since. Hybrid DECT/GSM devices have appeared, but these have essentially been "two phones in a box" systems that combine the functionality of a standard GAP phone with a GSM phone, so that a person can receive and make calls on either their home phone line or a mobile network without having to use two phones. An example of this approach is BT's/Ericsson's OnePhone service.
Most probably, the fact that the system requires an ISDN connection, which in most countries where ISDN is popular is priced by time used, has made GIP a difficult sell. In practice, the system appears to be oriented towards carriers instead of individuals, and carriers can more easily create microcells using their own spectrum, running ordinary GSM and not requiring the use of special handsets.
With the advent of the Internet and widespread availability of high speed Internet connections, GIP could be redesigned to make use of Internet instead of ISDN connections. However, the industry has gone in the direction of using GAN/UMA, which substitutes an 802.11 or Bluetooth air interface for GSM/UMTS's and as such can use unmodified commodity infrastructure.
|
https://en.wikipedia.org/wiki/GSM_Interworking_Profile
|
IP-DECT is a technology used for on-site wireless communications. It uses the DECT air interface for reliable wireless voice and data communication between handsets and base stations, and the well-established VoIP technology for the corded voice communication between base stations and server functions.

Its advantage is the circuit-switched approach, which gives a better-specified quality of service for voice communication than Wireless LAN.

A DECT phone must remain in proximity to its own base station (or repeaters thereof), whereas WLAN devices have better range given sufficient access points; however, voice over WLAN handsets impose significant design and maintenance complexity in large networks to ensure roaming facilities and high quality of service.

Several traditional telephone equipment manufacturers and smaller enterprises offer IP-DECT systems, both for residential use (single-cell base stations/access points) and for enterprise use (multi-cell systems with multiple base stations/access points and/or seamless handoff between cells), where it is important to cover large areas with a maintained speech path.
For enterprise use, the following vendors produce IP-DECT systems:
|
https://en.wikipedia.org/wiki/IP-DECT
|
CT2 is a cordless telephony standard that was used in the early 1990s to provide short-range proto-mobile phone service in some countries in Europe and in Hong Kong. It is considered the precursor to the more successful DECT system. CT2 was also referred to by its marketing name, Telepoint.

CT2 is a digital FDMA system that uses time-division duplexing technology to share carrier frequencies between handsets and base stations.[1][2][3]

Unlike DECT, CT2 was a voice-only system, though like any minimally-compressed voice system, users could deploy analog modems to transfer data; in the early 1990s, Apple Computer sold a CT2 modem called the PowerBop to make use of France's Bi-Bop CT2 network. Although CT2 is a microcellular system, fully capable of supporting handoff, unlike DECT it does not support "forward handoff", meaning that it has to drop its former radio link before establishing the subsequent one, leading to a sub-second dropout in the call during the handover.
CT2 was deployed in a number of countries, including Britain and France. In Britain, the Ferranti Zonephone system was the first public network to go live in 1989, and the much larger Rabbit network – backed by Hong Kong's Hutchison Telecommunications – operated from 1992 to 1993.[4] In France, the Bi-Bop network ran from 1991 to 1997. In the Netherlands, Dutch incumbent PTT deployed a CT2-based network called Greenpoint from 1992 to 1999; in the first year it used the name and mascot Kermit, but royalties proved prohibitively large and the mascot was dropped.[5] The service continued under the brand name Greenhopper, with at one time over 60,000 subscribers. In Finland, the Pointer service was available for a short time in the 1980s before being superseded by Nordic Mobile Telephone (NMT). Since 31 December 2008, CTA1 and CTA2 based phones have been forbidden in Germany.

Outside Europe, the system achieved a certain amount of popularity in Hong Kong, with three operators offering service from 1991 until licences were terminated in 1996.[6] A CT2 service was offered in Singapore from 1993 to 1998 by Telecommunications Equipment under the brand name Callzone,[7][8] using Motorola's Silverlink 2000 Birdie handset.
Typical CT2 users were sold a handset and base station which they could connect to their own home telephone wiring. Calls via the home base station would be routed via the home telephone line and in this configuration the system was identical to a standard cordless phone, for both incoming and outgoing calls.
Once out of range of the home, the CT2 user could find signs indicating a network base station in the area, and make outgoing calls (but not receive calls) using the network base station. Base stations were in a variety of places, including high-streets and other shopping areas, gas stations, and transport hubs such as rail stations. In this configuration, callers would be charged a per-minute rate which was higher than if they made calls from home, but not as high as conventional cellular charges.
The advantages to the user were that the rates were generally lower than cellular, and that the same handset could be used at home and away from home. The disadvantages, compared to cellular, were that many networks did not deliver incoming calls to the phones (Bi-Bop was an exception), and that their areas of use were more limited.
There are no known open CT2 networks still running.
Japan's Personal Handyphone System, another system based upon microcells, is a direct analog of CT2 and achieved a much greater level of success. PHS is a full microcellular system with hand-off, better range, and more features.

The DECT system is CT2's successor, and also supports full microcellular service and data. However, to date DECT has been used to provide commercial mobile-phone-like service only in Italy, in 1997–98 (the FIDO network).

Canada adopted an enhanced version of CT2, known as CT2Plus, in 1993, operating in the 944–948.5 MHz band. CT2Plus class 2 systems benefited from the use of common signalling channels and offered multi-cell hand-off as well as tracking of devices. Incoming calls could be received anywhere within a multi-cell system. Nortel Networks offered a private branch exchange system based on the standard, which was specified in Department of Communication document RSS-130 Annex 1.

In the United States, a system similar to DECT and PHS called PACS was developed but never deployed commercially.
CT2, as used in Europe and Hong Kong, required adherence to the MPT 1322 and MPT 1334 technical standards. Most striking was the use of TDD (time-division duplex) channels, where one radio channel carried both sides of a duplex telephone conversation. This solved the problem of different propagation paths between two widely separated channels (up to 45 MHz apart in some cellular systems), but also placed an upper limit on the range of CT2 signalling, since the finite speed of light (and radio signals) prevented long transmission paths. However, the use of TDD made many frequency bands available for CT2 use, since a "paired" return path was not needed.
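The range limit imposed by TDD follows from simple geometry: the round-trip propagation delay must fit inside the guard time between transmit and receive bursts. A sketch of the arithmetic, using a purely hypothetical 10 µs guard interval (not a figure from the CT2 specification):

```python
C = 299_792_458  # speed of light in m/s

def tdd_range_limit_m(guard_time_s):
    """Maximum one-way distance for which the round-trip propagation delay
    still fits inside the TDD guard time: 2*d/C <= t_guard."""
    return C * guard_time_s / 2

# Illustrative only: a 10 microsecond guard interval (hypothetical value)
# would bound the cell radius at roughly 1.5 km.
assert round(tdd_range_limit_m(10e-6)) == 1499
```

The shorter the guard time, the more of each burst carries payload, but the smaller the maximum cell radius; this is the trade-off behind the range limit described above.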
An American company, Cellular 21, Inc. (later to become Advanced Cordless Technologies, Inc.), headed by broadcaster Matt Edwards, petitioned the FCC to permit the use of CT2 technology in the US. ACT built two active test systems, located in Monticello, New York (outdoor), and outside and inside the South Street Seaport complex in lower Manhattan. The Monticello public field trials used Timex technology which was incompatible with the trans-European standard, while the South Street Seaport indoor test used equipment from Ferranti, GPT, and Motorola, which at the time manufactured CT2 equipment for the Singapore and Hong Kong markets. GPT and Motorola both provided CT2 equipment for the Rabbit system rollout. All the testing was under an FCC Experimental license. The ACT/Cellular 21 "Petition for Rulemaking" (RM-7152), along with a later petition by Millicom, became the basis of the FCC's PCS initiative (FCC GEN Docket 90-314), which resulted in the allocation of frequencies in the 1.7 to 2.1 GHz band as spectrum expansion for the crowded 800 MHz cellular band. The FCC used the acronym PCS to designate Personal Communications Services, separate and distinct from cellular service, which was 800 MHz analog at the time. PCS was to be digital-only, and has progressed through several "generations" (mostly marketing designations) such as 3G and 4G.
|
https://en.wikipedia.org/wiki/CT2
|
Net3 was a Wi-Fi-like system developed, manufactured and commercialised by Olivetti in the early 1990s. It could wirelessly connect PCs to an Ethernet fixed LAN at a speed of up to 512 kbit/s, over a very wide area. It was a micro-cellular system, in which each base station had an effective range of about 100 m indoors and 300 m outdoors, and the system supported seamless handover between base stations.

The system was based on the DECT standard, published in 1992. A prototype system was first demonstrated at the Telecom '91 show in Geneva in October 1991, in what is believed to be the first public demonstration of the DECT transmission system. The product was launched in June 1993, and was the first product based on the DECT standard to reach the market, narrowly beating Siemens' highly successful Gigaset cordless telephone. It is also believed to be the first wireless LAN to be sold on the European market.

In its first version, the adapter cards consisted of half-size PC cards connected to an external desk-seated radio unit of modest dimensions. The second version, launched at Telecom '95, consisted of a PCMCIA card and a small external radio unit suitable for portable use.

The system was developed in the laboratories of Olivetti Sixtel, the telecommunications technology division of Olivetti in Ivrea, Italy. At a time when knowledge of commercial digital radio technology was scarce in Italy, the group began research in 1988 and developed in-house a high level of capability in DECT technology, including patented technology that became fundamental to the standard.[1] The development was funded partly from corporate venture resources, partly from ESPRIT funding, and partly through an unusual but highly effective tool of industrial policy, invented by Ing. Augusto Vighi of the Istituto Superiore delle Poste e Telecomunicazioni: Vighi placed a contract for proof-of-concept DECT demonstration systems with a consortium of Italian technology companies, covering the full range of DECT applications. This accelerated the development not only of the Net3 wireless LAN by Olivetti, but also of the FIDO public system by Italtel and of a complete wireless PABX by SELTA.
Net3 was originally conceived as a means to substitute LAN cabling in problematic buildings, which are especially numerous in the historic centres of Italian cities. In practice this was not a fast-growing or eager market, and the product eventually instead found success when integrated with rugged portable computers on forklift trucks in large warehouses and stockyards. A system was also installed inside a steel works and worked reliably despite the very high levels of electrical interference.
The team developing Net3 was also deeply involved in the development of the DECT standards, and contributed the chairmen of the DECT standards groups that designed the DECT network protocols and the data transport and interworking protocols. As a result, the DECT standards contained a high level of standardised, embedded support for wireless LAN functionality. The product benefited greatly from the availability of dedicated spectrum (1880–1900 MHz) throughout Europe thanks to a European directive on DECT, and from a single-stop type-approval process arising from DECT's status as a pan-European standard. Although a very leading-edge product, Net3 was nevertheless able to exploit the availability of early semiconductor devices designed and priced to meet the mass consumer market for DECT-based cordless telephones.

In 1995 Olivetti cancelled all its commercial telecommunications products as part of its strategy of transformation into a telecoms operator, and the Net3 product was progressively withdrawn from the market. The technology was repurposed to work as a high-performance, low-cost wireless local-loop infrastructure, supporting both toll-quality voice and broadband Internet access, and a pilot system was built and operated in Ivrea. The approach eventually foundered on the difficulty of redistributing signals within the apartment blocks so prevalent in the Italian urban fabric, and the Net3 team was disbanded in 1997.
|
https://en.wikipedia.org/wiki/Net3
|
corDECT is a wireless local loop standard developed in India by IIT Madras and Midas Communications at Chennai, under the leadership of Prof. Ashok Jhunjhunwala, based on the DECT digital cordless phone standard.[1]

The technology is a fixed wireless option with extremely low capital costs, making it well suited to small start-ups looking to scale as well as to sparse rural areas. It is also suitable for ICT4D projects; in India, n-Logue Communications has deployed it for this purpose.

DECT, in full Digital Enhanced Cordless Telecommunications, is useful for designing small-capacity wireless local loop (WLL) systems. These systems operate only under line-of-sight (LOS) conditions and are strongly affected by weather.

The system is designed for rural and suburban areas where subscriber density is medium or low. corDECT provides simultaneous voice and Internet access.
The main parts of the system are the following.

DECT Interface Unit (DIU): a 1000-line exchange that provides an E1 interface to the PSTN and can serve up to 20 base stations. The base stations are connected over an ISDN link, which carries signalling and a power feed for the base stations at distances of up to 3 km.

Compact Base Station (CBS): the radio fixed part of the DECT wireless local loop. CBSs are typically mounted on a tower top, and each can serve up to 50 subscribers with 0.1 erlang of traffic.

Base Station Distributor (BSD): a traffic aggregator used to extend the range of the wireless local loop; up to 4 CBSs can be connected to it.

Relay Base Station (RBS): another means of extending the range of the corDECT wireless local loop, up to 25 km via a radio chain.

Fixed Remote Station (FRS): the subscriber-end equipment of the corDECT wireless local loop, providing a standard telephone instrument and Internet access at up to 70 kbit/s through an Ethernet port.

The new generation of corDECT technology is called Broadband corDECT, which provides broadband Internet access over the wireless local loop.
|
https://en.wikipedia.org/wiki/CorDECT
|
Digital Enhanced Cordless Telecommunications (DECT) is a cordless telephony standard maintained by ETSI. It originated in Europe, where it is the common standard, replacing earlier standards such as CT1 and CT2.[1] Since the DECT-2020 standard onwards, it also includes IoT communication.

Beyond Europe, it has been adopted by Australia and most countries in Asia and South America. North American adoption was delayed by United States radio-frequency regulations. This forced development of a variation of DECT called DECT 6.0, using a slightly different frequency range, which makes these units incompatible with systems intended for use in other areas, even from the same manufacturer. DECT has almost completely replaced other standards in most countries where it is used, with the exception of North America.

DECT was originally intended for fast roaming between networked base stations, and the first DECT product was the Net3 wireless LAN. However, its most popular application is single-cell cordless phones connected to traditional analog telephone lines, primarily in home and small-office systems, though gateways with multi-cell DECT and/or DECT repeaters are also available in many private branch exchange (PBX) systems for medium and large businesses, produced by Panasonic, Mitel, Gigaset, Ascom, Cisco, Grandstream, Snom, Spectralink, and RTX. DECT can also be used for purposes other than cordless phones, such as baby monitors, wireless microphones and industrial sensors. The ULE Alliance's DECT ULE and its "HAN FUN" protocol[2] are variants tailored for home security, automation, and the internet of things (IoT).
The DECT standard includes the generic access profile (GAP), a common interoperability profile for simple telephone capabilities, which most manufacturers implement. GAP conformance enables DECT handsets and bases from different manufacturers to interoperate at the most basic level of functionality, that of making and receiving calls. Japan uses its own DECT variant, J-DECT, which is supported by the DECT Forum.[3]

The New Generation DECT (NG-DECT) standard, marketed as CAT-iq by the DECT Forum, provides a common set of advanced capabilities for handsets and base stations. CAT-iq allows interchangeability across IP-DECT base stations and handsets from different manufacturers, while maintaining backward compatibility with GAP equipment. It also requires mandatory support for wideband audio.

DECT-2020 New Radio, marketed as NR+ (New Radio plus), is a 5G data transmission protocol which meets ITU-R IMT-2020 requirements for ultra-reliable low-latency and massive machine-type communications, and can co-exist with earlier DECT devices.[4][5][6]
The DECT standard was developed by ETSI in several phases, the first of which took place between 1988 and 1992, when the first round of standards was published. These were the ETS 300-175 series, in nine parts defining the air interface, and ETS 300-176, defining how the units should be type-approved. A technical report, ETR-178, was also published to explain the standard.[7] Subsequent standards were developed and published by ETSI to cover interoperability profiles and standards for testing.

Named Digital European Cordless Telephone at its launch by CEPT in November 1987, it was soon renamed Digital European Cordless Telecommunications, following a suggestion by Enrico Tosato of Italy, to reflect its broader range of application, including data services. In 1995, due to its more global usage, the name was changed from European to Enhanced. DECT is recognized by the ITU as fulfilling the IMT-2000 requirements and thus qualifies as a 3G system; within the IMT-2000 group of technologies, DECT is referred to as IMT-2000 Frequency Time (IMT-FT).

DECT was developed by ETSI but has since been adopted by many countries all over the world. The original DECT frequency band (1880–1900 MHz) is used in all countries in Europe. Outside Europe, it is used in most of Asia, Australia and South America. In the United States, the Federal Communications Commission in 2005 changed channelization and licensing costs in a nearby band (1920–1930 MHz, or 1.9 GHz), known as Unlicensed Personal Communications Services (UPCS), allowing DECT devices to be sold in the U.S. with only minimal changes. These channels are reserved exclusively for voice communication applications and therefore are less likely to experience interference from other wireless devices such as baby monitors and wireless networks.
The New Generation DECT (NG-DECT) standard was first published in 2007;[8] it was developed by ETSI with guidance from the Home Gateway Initiative through the DECT Forum[9] to support IP-DECT functions in home gateway/IP-PBX equipment. The ETSI TS 102 527 series comes in five parts and covers wideband audio and mandatory interoperability features between handsets and base stations. They were preceded by an explanatory technical report, ETSI TR 102 570.[10] The DECT Forum maintains the CAT-iq trademark and certification program; CAT-iq wideband voice profile 1.0 and interoperability profiles 2.0/2.1 are based on the relevant parts of ETSI TS 102 527.

The DECT Ultra Low Energy (DECT ULE) standard was announced in January 2011, and the first commercial products were launched later that year by Dialog Semiconductor. The standard was created to enable home automation, security, healthcare and energy monitoring applications that are battery powered. Like DECT, the DECT ULE standard uses the 1.9 GHz band, and so suffers less interference than Zigbee, Bluetooth, or Wi-Fi from microwave ovens, which all operate in the unlicensed 2.4 GHz ISM band. DECT ULE uses a simple star network topology, so many devices in the home are connected to a single control unit.

A new low-complexity audio codec, LC3plus, has been added as an option to the 2019 revision of the DECT standard. This codec is designed for high-quality voice and music applications such as wireless speakers, headphones, headsets, and microphones. LC3plus supports scalable 16-bit narrowband, wideband, super-wideband and fullband coding, as well as 24-bit high-resolution fullband and ultra-band coding, with sample rates of 8, 16, 24, 32, 48 and 96 kHz and audio bandwidth of up to 48 kHz.[11][12]

The DECT-2020 New Radio protocol was published in July 2020; it defines a new physical interface based on cyclic prefix orthogonal frequency-division multiplexing (CP-OFDM), capable of up to 1.2 Gbit/s transfer rate with QAM-1024 modulation. The updated standard supports multi-antenna MIMO and beamforming, FEC channel coding, and hybrid automatic repeat request. There are 17 radio channel frequencies in the range from 450 MHz up to 5,875 MHz, and channel bandwidths of 1,728, 3,456, or 6,912 kHz. Direct communication between end devices is possible with a mesh network topology. In October 2021, DECT-2020 NR was approved for the IMT-2020 standard,[4] for use in massive machine-type communications (MMTC) industry automation, ultra-reliable low-latency communications (URLLC), and professional wireless audio applications with point-to-point or multicast communications;[13][14][15] the proposal was fast-tracked by ITU-R following real-world evaluations.[5][16] The new protocol will be marketed as NR+ (New Radio plus) by the DECT Forum.[6] OFDMA and SC-FDMA modulations were also considered by the ETSI DECT committee.[17][18]
OpenD is an open-source framework designed to provide a complete software implementation of DECT ULE protocols on reference hardware from Dialog Semiconductor and DSP Group; the project is maintained by the DECT Forum.[19][20]
The DECT standard originally envisaged three major areas of application:[7] domestic cordless telephony, enterprise cordless PABX systems, and public access services.
Of these, the domestic application (cordless home telephones) has been extremely successful. The enterprise PABX market, albeit much smaller than the cordless home market, has been very successful as well, and all the major PABX vendors have advanced DECT access options available. The public access application did not succeed, since public cellular networks rapidly out-competed DECT by coupling their ubiquitous coverage with large increases in capacity and continuously falling costs. There has been only one major installation of DECT for public access: in early 1998 Telecom Italia launched a wide-area DECT network known as "Fido" after much regulatory delay, covering major cities in Italy.[21] The service was promoted for only a few months and, having peaked at 142,000 subscribers, was shut down in 2001.[22]
DECT has been used for wireless local loop as a substitute for copper pairs in the "last mile" in countries such as India and South Africa. By using directional antennas and sacrificing some traffic capacity, cell coverage could extend to over 10 kilometres (6.2 mi). One example is the corDECT standard.

The first data application for DECT was the Net3 wireless LAN system by Olivetti, launched in 1993 and discontinued in 1995. A precursor to Wi-Fi, Net3 was a micro-cellular data-only network with fast roaming between base stations and 520 kbit/s transmission rates.

Data applications such as electronic cash terminals, traffic lights, and remote door openers[23] also exist, but have been eclipsed by Wi-Fi, 3G and 4G, which compete with DECT for both voice and data.

The DECT standard specifies a means for a portable phone, or "Portable Part", to access a fixed telephone network via radio. A base station, or "Fixed Part", is used to terminate the radio link and provide access to a fixed line. A gateway is then used to connect calls to the fixed network, such as a public switched telephone network (telephone jack), office PBX, ISDN, or VoIP over Ethernet connection.

Typical abilities of a domestic DECT Generic Access Profile (GAP) system include multiple handsets to one base station and one phone line socket. This allows several cordless telephones to be placed around the house, all operating from the same telephone line. Additional handsets have a battery charger station that does not plug into the telephone system. Handsets can in many cases be used as intercoms, communicating between each other, and sometimes as walkie-talkies, intercommunicating without a telephone line connection.
DECT operates in the 1880–1900 MHz band and defines ten frequency channels from 1881.792 MHz to 1897.344 MHz with a channel spacing of 1728 kHz.
DECT operates as a multicarrier frequency-division multiple access (FDMA) and time-division multiple access (TDMA) system. This means that the radio spectrum is divided into physical carriers in two dimensions: frequency and time. FDMA access provides up to 10 frequency channels, and TDMA access provides 24 time slots per every frame of 10 ms. DECT uses time-division duplex (TDD), which means that downlink and uplink use the same frequency but different time slots. Thus a base station provides 12 duplex speech channels in each frame, with each time slot occupying any available channel – thus 10 × 12 = 120 carriers are available, each carrying 32 kbit/s.
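The carrier and capacity figures above can be checked with a little arithmetic. This is an illustrative sketch; the convention of numbering carriers from the highest frequency down is an assumption here:

```python
# DECT RF carriers: 10 channels spaced 1.728 MHz apart, spanning
# 1881.792 MHz to 1897.344 MHz (numbering convention assumed).
carriers_mhz = [1897.344 - c * 1.728 for c in range(10)]
assert abs(min(carriers_mhz) - 1881.792) < 1e-6
assert abs(max(carriers_mhz) - 1897.344) < 1e-6

# Capacity: 24 TDMA slots per 10 ms frame, paired by TDD into 12 duplex
# channels, times 10 FDMA carriers.
slots_per_frame = 24
duplex_per_carrier = slots_per_frame // 2
total_duplex_channels = 10 * duplex_per_carrier
assert total_duplex_channels == 120  # matches the 10 x 12 = 120 figure above
```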
DECT also provides frequency-hopping spread spectrum over the TDMA/TDD structure for ISM band applications. If frequency hopping is avoided, each base station can provide up to 120 channels in the DECT spectrum before frequency reuse. Each timeslot can be assigned to a different channel in order to exploit the advantages of frequency hopping and to avoid interference from other users in an asynchronous fashion.[24]
DECT allows interference-free wireless operation to around 100 metres (110 yd) outdoors. Indoor performance is reduced when interior spaces are constrained by walls.
DECT performs with fidelity in common congested domestic radio traffic situations. It is generally immune to interference from other DECT systems, Wi-Fi networks, video senders, Bluetooth technology, baby monitors and other wireless devices.
ETSI standards documentation ETSI EN 300 175 parts 1–8 (DECT), ETSI EN 300 444 (GAP) and ETSI TS 102 527 parts 1–5 (NG-DECT) prescribe the technical properties described below.
The DECT physical layer uses FDMA/TDMA access with TDD.
Gaussian frequency-shift keying (GFSK) modulation is used: a binary one is coded with a frequency increase of 288 kHz, and a binary zero with a frequency decrease of 288 kHz. With high-quality connections, 2-, 4- or 8-level differential PSK modulation (DBPSK, DQPSK or D8PSK), which is similar to QAM-2, QAM-4 and QAM-8, can be used to transmit 1, 2, or 3 bits per symbol. QAM-16 and QAM-64 modulations, with 4 and 6 bits per symbol, can be used for user data (B-field) only, with resulting transmission speeds of up to 5.068 Mbit/s.
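The raw air-interface rates implied by these modulation levels follow directly from the 1152 kbaud symbol rate; the usable B-field throughput quoted above is lower because of framing overhead. An illustrative computation:

```python
SYMBOL_RATE = 1_152_000  # DECT symbol rate: 1152 kbaud

bits_per_symbol = {"GFSK": 1, "DBPSK": 1, "DQPSK": 2, "D8PSK": 3,
                   "QAM-16": 4, "QAM-64": 6}

# Gross air-interface bit rate per modulation, in kbit/s. Note these are raw
# rates; user data shares the slot with sync, signalling and CRC fields.
raw_rate_kbit = {m: SYMBOL_RATE * b // 1000 for m, b in bits_per_symbol.items()}

assert raw_rate_kbit["GFSK"] == 1152
assert raw_rate_kbit["D8PSK"] == 3456
assert raw_rate_kbit["QAM-64"] == 6912
```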
DECT provides dynamic channel selection and assignment; the choice of transmission frequency and time slot is always made by the mobile terminal. In case of interference in the selected frequency channel, the mobile terminal (possibly at the suggestion of the base station) can initiate either an intracell handover, selecting another channel/transmitter on the same base, or an intercell handover, selecting a different base station altogether. For this purpose, DECT devices scan all idle channels at regular 30 s intervals to generate a received signal strength indication (RSSI) list. When a new channel is required, the mobile terminal (PP) or base station (FP) selects the channel with the minimum interference from the RSSI list.
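The selection rule can be sketched in a few lines (the data layout and names here are hypothetical; real devices track RSSI per carrier/slot pair from the periodic idle-channel scan):

```python
def pick_channel(rssi_by_channel):
    """Dynamic channel selection as described above: from the scanned list of
    idle channels, choose the (carrier, slot) pair with the lowest measured
    RSSI, i.e. the least interference."""
    return min(rssi_by_channel, key=rssi_by_channel.get)

# Keys are (carrier, slot) pairs; values are RSSI in dBm
# (a higher value means more interference on that channel).
scan = {(0, 3): -62.0, (4, 7): -91.5, (7, 11): -78.2}
assert pick_channel(scan) == (4, 7)
```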
The maximum allowed power for portable equipment as well as base stations is 250 mW. A portable device radiates an average of about 10 mW during a call, as it is only using one of 24 time slots to transmit. In Europe, the power limit was expressed as effective radiated power (ERP), rather than the more commonly used equivalent isotropically radiated power (EIRP), permitting the use of high-gain directional antennas to produce much higher EIRP and hence long ranges.
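The ~10 mW average figure is just the 250 mW peak scaled by the one-in-24 slot duty cycle:

```python
PEAK_POWER_MW = 250   # maximum transmit power during a burst
SLOTS_PER_FRAME = 24  # a handset transmits in only one slot per frame

# Duty cycle of 1/24 gives the roughly 10 mW average radiated power cited above.
average_mw = PEAK_POWER_MW / SLOTS_PER_FRAME
assert round(average_mw, 1) == 10.4
```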
The DECT media access control layer controls the physical layer and provides connection-oriented, connectionless and broadcast services to the higher layers.
The DECT data link layer uses Link Access Protocol Control (LAPC), a specially designed variant of the ISDN data link protocol called LAPD. Both are based on HDLC.
GFSK modulation uses a bit rate of 1152 kbit/s, with a 10 ms frame (11,520 bits) containing 24 time slots. Each slot contains 480 bits, some of which are reserved for physical packets and the rest of which is guard space. Slots 0–11 are always used for downlink (FP to PP) and slots 12–23 are used for uplink (PP to FP).
There are several combinations of slots and corresponding types of physical packets with GFSK modulation:
The 420/424 bits of a GFSK basic packet (P32) contain the following fields:
The resulting full data rate is 32 kbit/s, available in both directions.
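The frame arithmetic from the preceding paragraphs can be sketched as a consistency check: the bit rate and frame length give the bits per frame and per slot, and a basic (P32) packet's 320-bit B-field per 10 ms frame yields the 32 kbit/s figure.

```python
# Frame arithmetic: 1,152 kbit/s for 10 ms gives 11,520 bits per frame,
# split into 24 slots of 480 bits; a 320-bit B-field every 10 ms is
# 32 kbit/s of user data per direction.
BIT_RATE = 1_152_000          # bit/s, GFSK
FRAME_S = 0.010               # 10 ms frame
SLOTS = 24

bits_per_frame = int(BIT_RATE * FRAME_S)
bits_per_slot = bits_per_frame // SLOTS
b_field_bits = 320            # user data bits in a basic packet
user_rate = b_field_bits / FRAME_S

print(bits_per_frame, bits_per_slot, round(user_rate))  # 11520 480 32000
```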
The DECTnetwork layeralways contains the following protocol entities:
Optionally it may also contain others:
All these communicate through a Link Control Entity (LCE).
The call control protocol is derived from ISDN DSS1, which is a Q.931-derived protocol. Many DECT-specific changes have been made.[specify]
The mobility management protocol includes the management of identities, authentication, location updating, on-air subscription and key allocation. It includes many elements similar to the GSM protocol, but also includes elements unique to DECT.
Unlike the GSM protocol, the DECT network specifications do not define cross-linkages between the operation of the entities (for example, Mobility Management and Call Control). The architecture presumes that such linkages will be designed into the interworking unit that connects the DECT access network to whatever mobility-enabled fixed network is involved. By keeping the entities separate, the handset is capable of responding to any combination of entity traffic, and this creates great flexibility in fixed network design without breaking full interoperability.
DECT GAP is an interoperability profile for DECT. The intent is that two products from different manufacturers that conform not only to the DECT standard but also to the GAP profile defined within it are able to interoperate for basic calling. The DECT standard includes full test suites for GAP, and GAP products on the market from different manufacturers are in practice interoperable for the basic functions.
The DECT media access control layer includes authentication of handsets to the base station using the DECT Standard Authentication Algorithm (DSAA). When registering the handset on the base, both record a shared 128-bit Unique Authentication Key (UAK). The base can request authentication by sending two random numbers to the handset, which calculates the response using the shared 128-bit key. The handset can also request authentication by sending a 64-bit random number to the base, which chooses a second random number, calculates the response using the shared key, and sends it back with the second random number.
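The challenge-response flow described above can be sketched generically. The actual DSAA is a proprietary algorithm not reproduced here, so this sketch uses HMAC-SHA256 purely as a stand-in; only the protocol shape (a shared 128-bit key recorded at registration, random challenges, a response computed independently on both sides and compared) reflects the text.

```python
# Generic challenge-response sketch. HMAC-SHA256 is a stand-in for the
# proprietary DSAA; the flow mirrors the one described above.
import hashlib
import hmac
import os

def response(uak: bytes, challenge: bytes) -> bytes:
    """Both ends derive the expected response from the shared key (UAK)."""
    return hmac.new(uak, challenge, hashlib.sha256).digest()

uak = os.urandom(16)                         # 128-bit key shared at registration
rand1, rand2 = os.urandom(8), os.urandom(8)  # two random challenges

# The base sends the challenges; the handset computes its response...
handset_resp = response(uak, rand1 + rand2)
# ...and the base verifies it against its own computation.
ok = hmac.compare_digest(handset_resp, response(uak, rand1 + rand2))
print("authenticated" if ok else "rejected")
```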
The standard also provides encryption services with the DECT Standard Cipher (DSC). The encryption is fairly weak, using a 35-bit initialization vector and encrypting the voice stream with 64-bit encryption. While most of the DECT standard is publicly available, the part describing the DECT Standard Cipher was only available under a non-disclosure agreement to phone manufacturers from ETSI.
The properties of the DECT protocol make it hard to intercept a frame, modify it, and resend it later, as DECT frames are based on time-division multiplexing and must be transmitted at a specific point in time.[26] Unfortunately, very few DECT devices on the market implemented authentication and encryption procedures[26][27] – and even when encryption was used by the phone, it was possible to mount a man-in-the-middle attack impersonating a DECT base station and revert to unencrypted mode – which allows calls to be listened to, recorded, and re-routed to a different destination.[27][28][29]
After an unverified report of a successful attack in 2002,[30][31] members of the deDECTed.org project reverse-engineered the DECT Standard Cipher in 2008,[27] and as of 2010 there has been a viable attack on it that can recover the key.[32]
In 2012, an improved authentication algorithm, the DECT Standard Authentication Algorithm 2 (DSAA2), and an improved version of the encryption algorithm, the DECT Standard Cipher 2 (DSC2), both based on AES 128-bit encryption, were included as optional in the NG-DECT/CAT-iq suite.
DECT Forum also launched the DECT Security certification program which mandates the use of previously optional security features in the GAP profile, such as early encryption and base authentication.
Various access profiles have been defined in the DECT standard:
DECT 6.0 is a North American marketing term for DECT devices manufactured for the United States and Canada operating at 1.9 GHz. The "6.0" does not equate to a spectrum band; it was decided the term DECT 1.9 might have confused customers who equate larger numbers (such as the 2.4 and 5.8 in existing 2.4 GHz and 5.8 GHz cordless telephones) with later products. The term was coined by Rick Krupka, marketing director at Siemens and the DECT USA Working Group / Siemens ICM.
In North America, DECT suffers from deficiencies in comparison to DECT elsewhere, since the UPCS band (1920–1930 MHz) is not free from heavy interference.[34] Bandwidth is half as wide as that used in Europe (1880–1900 MHz), the 4 mW average transmission power reduces range compared to the 10 mW permitted in Europe, and the commonplace lack of GAP compatibility among US vendors binds customers to a single vendor.
Before the 1.9 GHz band was approved by the FCC in 2005, DECT could operate only in the unlicensed 2.4 GHz and 900 MHz Region 2 ISM bands; some users of Uniden WDECT 2.4 GHz phones reported interoperability issues with Wi-Fi equipment.[35][36][unreliable source?]
North American DECT 6.0 products may not be used in Europe, Pakistan,[37] Sri Lanka,[38] and Africa, as they cause and suffer from interference with the local cellular networks. Use of such products is prohibited by European telecommunications authorities, the PTA, the Telecommunications Regulatory Commission of Sri Lanka[39] and the Independent Communication Authority of South Africa. European DECT products may not be used in the United States and Canada, as they likewise cause and suffer from interference with American and Canadian cellular networks, and use is prohibited by the Federal Communications Commission and Innovation, Science and Economic Development Canada.
DECT 8.0 HD is a marketing designation for North American DECT devices certified with the CAT-iq 2.0 "Multi Line" profile.[40]
Cordless Advanced Technology—internet and quality (CAT-iq) is a certification program maintained by the DECT Forum. It is based on New Generation DECT (NG-DECT) series of standards from ETSI.
NG-DECT/CAT-iq contains features that expand the generic GAP profile with mandatory support for high quality wideband voice, enhanced security, calling party identification, multiple lines, parallel calls, and similar functions to facilitate VoIP calls through SIP and H.323 protocols.
There are several CAT-iq profiles which define supported voice features:
CAT-iq allows any DECT handset to communicate with a DECT base from a different vendor, providing full interoperability. The CAT-iq 2.0/2.1 feature set is designed to support IP-DECT base stations found in office IP-PBX systems and home gateways.
DECT-2020, also called NR+, is a new radio standard by ETSI for the DECT bands worldwide.[41][42] The standard was designed to meet a subset of the ITU IMT-2020 5G requirements that are applicable to IoT and the Industrial Internet of Things.[43] DECT-2020 is compliant with the IMT-2020 requirements for Ultra-Reliable Low-Latency Communications (URLLC) and massive Machine-Type Communication (mMTC).
DECT-2020 NR has new capabilities[44] compared to DECT and DECT Evolution:
The DECT-2020 standard has been designed to co-exist in the DECT radio band with existing DECT deployments. It uses the same Time Division slot timing and Frequency Division center frequencies and uses pre-transmit scanning to minimize co-channel interference.
Other interoperability profiles exist in the DECT suite of standards, and in particular the DPRS (DECT Packet Radio Services) brings together a number of prior interoperability profiles for the use of DECT as a wireless LAN and wireless internet access service. With good range (up to 200 metres (660 ft) indoors and 6 kilometres (3.7 mi) using directional antennae outdoors), dedicated spectrum, high interference immunity, open interoperability and data speeds of around 500 kbit/s, DECT appeared at one time to be a superior alternative to Wi-Fi.[45] The protocol capabilities built into the DECT networking protocol standards were particularly good at supporting fast roaming in the public space, between hotspots operated by competing but connected providers. The first DECT product to reach the market, Olivetti's Net3, was a wireless LAN, and German firms Dosch & Amand and Hoeft & Wessel built niche businesses on the supply of data transmission systems based on DECT.
However, the timing of the availability of DECT, in the mid-1990s, was too early to find wide application for wireless data outside niche industrial applications. Whilst contemporary providers of Wi-Fi struggled with the same issues, providers of DECT retreated to the more immediately lucrative market for cordless telephones. A key weakness was also the inaccessibility of the U.S. market, due to FCC spectrum restrictions at that time. By the time mass applications for wireless Internet had emerged, and the U.S. had opened up to DECT, well into the new century, the industry had moved far ahead in terms of performance and DECT's time as a technically competitive wireless data transport had passed.
DECT uses UHF radio, similar to mobile phones, baby monitors, Wi-Fi, and other cordless telephone technologies.
In North America, the 4 mW average transmission power reduces range compared to the 10 mW permitted in Europe.
The UK Health Protection Agency (HPA) claims that due to a mobile phone's adaptive power capability, a European DECT cordless phone's radiation could actually exceed the radiation of a mobile phone. A European DECT cordless phone's radiation has an average output power of 10 mW but takes the form of 100 bursts per second at 250 mW, a strength comparable to some mobile phones.[46]
Most studies have been unable to demonstrate any link to health effects, or have been inconclusive. Electromagnetic fields may have an effect on protein expression in laboratory settings[47] but have not yet been demonstrated to have clinically significant effects in real-world settings. The World Health Organization has issued a statement on medical effects of mobile phones which acknowledges that the longer term effects (over several decades) require further research.[48]
|
https://en.wikipedia.org/wiki/Wideband_Digital_Enhanced_Cordless_Telecommunications
|
Unlicensed Personal Communications Services or UPCS band is the 1920–1930 MHz frequency band allocated by the United States Federal Communications Commission (FCC) for short-range Personal Communications Services (PCS) applications in the United States, such as the Digital Enhanced Cordless Telecommunications (DECT) wireless protocol.
Prior to an FCC rules change in April 2005, the band also included the frequencies 1910–1920 MHz and 2390–2400 MHz. These were used for a variety of short-range communications, including point-to-point microwave links.
These allocation rules are described in Title 47, Part 15 of the Code of Federal Regulations.
Licensed PCS, although not necessarily distinguished as such from UPCS, is used for digital mobile phone services.
DECT devices designed to operate in this band in the US use the marketing term DECT 6.0.
|
https://en.wikipedia.org/wiki/Unlicensed_Personal_Communications_Services
|
A microcell is a cell in a mobile phone network served by a low-power cellular base station (tower), covering a limited area such as a mall, a hotel, or a transportation hub. A microcell is usually larger than a picocell, though the distinction is not always clear. A microcell uses power control to limit the radius of its coverage area.
Typically the range of a microcell is less than two kilometers, whereas standard base stations may have ranges of up to 35 kilometres (22 mi). A picocell, on the other hand, covers 200 meters or less, and a femtocell is on the order of 10 meters,[1] although AT&T calls its femtocell, which has a range of 40 feet (12 m), a "microcell".[2] AT&T uses "AT&T 3G MicroCell" as a trademark, however, and not necessarily as a reference to microcell technology.[3]
A microcellular network is a radio network composed of microcells.
Like picocells, microcells are usually used to add network capacity in areas with very dense phone usage, such as train stations. Microcells are often deployed temporarily during sporting events and other occasions in which extra capacity is known to be needed at a specific location in advance.
Cell size flexibility is a feature of 2G (and later) networks and is a significant part of how such networks have been able to improve capacity. Power controls implemented on digital networks make it easier to prevent interference from nearby cells using the same frequencies.[4] By subdividing cells, and creating more cells to help serve high density areas, a cellular network operator can optimize the use of spectrum and ensure capacity can grow. By comparison, older analog systems have fixed limits, beyond which attempts to subdivide cells simply would result in an unacceptable level of interference.
Certain mobile phone systems, notably PHS and DECT, only provide microcellular (and picocellular) coverage. Microcellular systems are typically used to provide low cost mobile phone systems in high-density environments such as large cities. PHS is deployed throughout major cities in Japan as an alternative to ordinary cellular service. DECT is used by many businesses to deploy private license-free microcellular networks within large campuses where wireline phone service is less useful. DECT is also used as a private, non-networked, cordless phone system where its low power profile ensures that nearby DECT systems do not interfere with each other.
A forerunner of these types of network was the CT2 cordless phone system, which provided access to a looser network (without handover), again with base stations deployed in areas where large numbers of people might need to make calls. CT2's limitations ensured the concept never took off. CT2's successor, DECT, was provided with an interworking profile, GIP, so that GSM networks could make use of it for microcellular access, but in practice the success of GSM within Europe, and the ability of GSM to support microcells without using alternative technologies, meant GIP was rarely used, and DECT's use in general was limited to non-GSM private networks, including use as cordless phone systems.
|
https://en.wikipedia.org/wiki/Microcell
|
Wireless local loop (WLL) is the use of a wireless communications link as the "last mile/first mile" connection for delivering plain old telephone service (POTS) or Internet access (marketed under the term "broadband") to telecommunications customers.
Various types of WLL systems and technologies exist.
Other terms for this type of access include broadband wireless access (BWA), radio in the loop (RITL), fixed-radio access (FRA), fixed wireless access (FWA) and metro wireless (MW).
In 2017, a company called Climate Resilient Internet, LLC, formed to develop a new standard and certification for point-to-point microwave ("fixed wireless") for enterprise and government resilience to extreme weather, grid outages and terror attacks. The company was co-founded by David Theodore, founder of Microwave Bypass, who pioneered the first use of point-to-point microwave for internet access.[1][2]
Next-Web, Etheric Networks, Gate Speed and a handful of other companies founded the first voluntary spectrum coordination body, working entirely independently of government regulators. The organization was founded in March 2003 as BANC.[3]
Technologies used for wireless local loop include the Global System for Mobile Communications (GSM), time-division multiple access (TDMA), code-division multiple access (CDMA), and Digital Enhanced Cordless Telecommunications (DECT). Earlier implementations included technologies such as the Advanced Mobile Phone System (AMPS).
The wireless local loop market is currently[when?] an extremely high-growth market, offering Internet service providers immediate access to customer markets without having either to lay cable through a metropolitan area or to work through the ILECs, reselling the telephone, cable or satellite networks owned by companies that prefer to largely sell direct.
This trend revived the prospects for local and regional ISPs, as those willing to deploy fixed wireless networks were not at the mercy of the large telecommunication monopolies. They were at the mercy of unregulated re-use of unlicensed frequencies upon which they communicate.
Due to the enormous quantity of 802.11 "Wi-Fi" equipment and software, coupled with the fact that spectrum licenses are not required in the ISM and U-NII bands, the industry has moved well ahead of the regulators and the standards bodies.
|
https://en.wikipedia.org/wiki/Wireless_local_loop
|
CDMA spectral efficiency refers to the system spectral efficiency in bit/s/Hz/site or Erlang/MHz/site that can be achieved in a certain CDMA-based wireless communication system. CDMA techniques (also known as spread spectrum) are characterized by a very low link spectral efficiency in (bit/s)/Hz as compared to non-spread-spectrum systems, but a comparable system spectral efficiency.
The system spectral efficiency can be improved by radio resource management techniques, so that a higher number of simultaneous calls and higher data rates can be achieved without adding more radio spectrum or more base station sites. This article is about radio resource management specifically for direct-sequence spread spectrum (DS-CDMA) based cellular systems.
Examples of DS-CDMA based cellular systems are:
The terminology used in this article is primarily based on 3GPP2 standards.
CDMA is not expected to be used in 4G systems, and is not used in pre-4G systems such as LTE and WiMAX, but is being supplanted by more spectrally efficient frequency-domain equalization (FDE) techniques such as OFDMA.
The aim of improving system spectral efficiency is to use limited radio spectrum resources and radio network infrastructure as efficiently as possible. The objective of radio resource management is typically to maximize the system spectral efficiency under the constraint that the grade of service should be above a certain level. This involves covering a certain area and avoiding outage due to co-channel interference, noise, attenuation caused by long distances, fading caused by shadowing and multipath, Doppler shift and other forms of distortion. The grade of service is also affected by blocking due to admission control, scheduling starvation or inability to guarantee the quality of service that is requested by the users.
There are many ways of increasing system spectral efficiency. These include techniques to be implemented at the handset level or at the network level, such as network optimization and vocoder rate encapsulation. Issues faced while deploying these techniques are the cost, upgrade requirements, hardware and software changes (including cell phone compatibility with those changes) and the agreements to be approved by the telecommunication department.
Due to its large transmission power, the common pilot channel (CPICH) probably consumes 15 to 20 percent of the forward as well as the reverse link capacity[citation needed]. Co-channel interference is obvious. It is hence important to deploy interference cancellation techniques such as pilot interference cancellation (PIC) and forward link interference cancellation (FLIC) together in the network. Quasi-linear interference cancellation (QLIC) is a technique used for both FLIC and PIC.
Along with the forward link, reverse link interference cancellation is also important. Interference will be reduced, and the mobiles will have to transmit less power to reach the line of sight[clarification needed] with the base station, which in turn increases the battery life of the mobile.
The 1/8 rate gating on the reverse fundamental channel (R-FCH) is the method used for gated transmission in a CDMA communication system. A mobile station (mobile phone) in the CDMA communication system transmits a reverse pilot signal at a reverse gating rate which is different from the forward gating rate in gated mode, and a base station transmits a forward pilot signal at a forward gating rate different from the reverse gating rate in gated mode.
When the duty cycle is 1/8, only 1/8 of the power control groups in one frame are transmitted. This behavior is not present in any other CDMA mode.
Another CDMA technique improves downlink capacity and receiving performance by gating an uplink DPCCH signal in a partial period of the power control group in a mobile communication system. The test set's support for the R-FCH gating mode is disabled (off) by default.
If the test set's R-FCH gating mode is enabled (on) and the mobile station (MS) supports the gating mode, the MS will gate the R-FCH/R-Pilot Channel when transmitting at 1/8 rate. This saves around 75%[citation needed] of the power on average on reverse channels.
The CDMA radio configuration is defined as a combination of forward and reverse traffic channel transmission formats that are characterized by physical layer parameters such as data rates, error-correction codes, modulation characteristics, and spreading factors. The traffic channel may consist of one or more code channels such as fundamental channels and supplemental channels.
The forward link of a 3G code-division multiple-access (CDMA) system may become a limiting factor as the number of users approaches maximum capacity.
The conventional channelization codes, Walsh codes, do not provide enough code channels to cope with maximal use. Therefore, the quasi-orthogonal function (QOF), which has optimal cross-correlation with Walsh codes, has been used as a method to get around the limitations of the Walsh codes.
To enhance the overall capacity in such scenarios, alternative sets of orthogonal functions called the quasi-orthogonal functions (QOF), which possess optimal minimax cross-correlation with Walsh code sets of variable length, have been incorporated in IS-2000.
This method uses aggregation of multiple quasi-orthogonal functions with a smaller constellation alphabet size for a single user with a joint multi-channel detector. This method is compared with the alternative method for enhancing the maximum throughput using aggregation of a smaller number of Walsh functions, but with a higher constellation alphabet size (multi-level modulation).
There have been many industrial and academic discussions on the trade-offs with respect to better methods for increasing capacity in IS-2000/3G systems. QOF introduces a high amount of interference into the network channels, thus limiting its benefits.
There are some sites where utilization is very high and excess softer handoffs occur. For such sites, a 6-sector antenna is one of the solutions, as it provides greater coverage granularity than the traditional 3-sector antenna. Instead of one BTS, two BTSs are used, and hence the antennas can be separated from each other by 60 degrees instead of 120 degrees.
Antenna diversity, also known as space diversity (micro-diversity as well as macro-diversity, i.e. soft handover, see below), is any one of several wireless diversity schemes that use two or more antennas to improve the quality and reliability of a wireless link.
Often, especially in urban and indoor environments, there is not a clear line-of-sight (LOS) between transmitter and receiver. Instead the signal is reflected along multiple paths before finally being received. Each of these bounces can introduce phase shifts, time delays, attenuations, and even distortions that can destructively interfere with one another at the aperture of the receiving antenna.
Antenna diversity is especially effective at mitigating these multipath propagation situations. This is because multiple antennas afford a receiver several observations of the same signal. Each antenna will experience a different interference environment. Thus, if one antenna is experiencing a deep fade, it is likely that another has a sufficient signal.
Collectively such a system can provide a robust link. While this is primarily seen in receiving systems (diversity reception), the analogous approach has also proven valuable for transmitting systems (transmit diversity).
Inherently an antenna diversity scheme requires additional hardware and integration versus a single antenna system but due to the commonality of the signal paths a fair amount of circuitry can be shared.
With multiple signals there is a greater processing demand placed on the receiver, which can lead to tighter design requirements of the base station. Typically, however, signal reliability is paramount and using multiple antennas is an effective way to decrease the number of drop-outs and lost connections.
Qualcomm's fourth-generation vocoder (4GV) is a suite of speech codecs expected to be used in future 4G networks as well as CDMA networks, which allows network operators to dynamically trade off voice quality against network capacity. Currently, the 4GV suite offers EVRC-B and EVRC-WB.
Enhanced Variable Rate Codec B (EVRC-B) is a speech codec used by CDMA networks. EVRC-B is an enhancement to EVRC and compresses each 20 milliseconds of 8000 Hz, 16-bit sampled speech input into output frames of one of four different sizes: Rate 1 – 171 bits, Rate 1/2 – 80 bits, Rate 1/4 – 40 bits, Rate 1/8 – 16 bits.
In addition, there are two zero bit codec frame types: null frames and erasure frames, similar to EVRC. One significant enhancement in EVRC-B is the use of 1/4 rate frames that were not used in EVRC. This provides lower average data rates (ADRs) compared to EVRC, for a given voice quality. The new 4GV Codecs used in CDMA2000 are based on EVRC-B. 4GV is designed to allow service providers to dynamically prioritize voice capacity on their network as required.
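The frame sizes above map directly onto the familiar EVRC rate set: each frame covers 20 ms of speech, so the bit rate of each frame type is its size divided by 20 ms.

```python
# EVRC-B frame sizes (bits per 20 ms frame) converted to bit rates:
# e.g. 171 bits / 20 ms = 8.55 kbit/s, the "Rate 1" full rate.
FRAME_S = 0.020  # 20 ms of speech per codec frame

frame_bits = {"Rate 1": 171, "Rate 1/2": 80, "Rate 1/4": 40, "Rate 1/8": 16}
for name, bits in frame_bits.items():
    print(f"{name:9s} -> {bits / FRAME_S / 1000:.2f} kbit/s")
```

The printed values (8.55, 4.00, 2.00, 0.80 kbit/s) line up with the EVRC rates quoted in the next paragraph, except for the 1/4 rate, which EVRC itself did not use.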
The Enhanced Variable Rate Codec (EVRC) is a speech codec used for cellular telephony in cdma2000 systems. EVRC provides excellent[citation needed] speech quality using variable-rate coding with three possible rates: 8.55, 4.0 and 0.8 kbit/s. However, the Quality of Service (QoS) in cdma2000 systems can significantly benefit from a codec which allows trade-offs between voice quality and network capacity, which cannot be achieved efficiently with the EVRC.
With a higher combined Ec/Io, a lower traffic channel Ec/Io is required and more BTS power is conserved. Ec/Io is a notation used to represent a dimensionless ratio of the average power of a channel, typically the pilot channel, to the total signal power. It is expressed in dB.
There are some remote places where the BTS signal penetrates but the reverse link of the mobile cannot reach back to the base station. Solutions include reducing base station antenna height, downtilting the antenna, selecting lower gains, etc.
There are some areas with more soft handoff than necessary. The handoff parameters have to be adjusted to save base station power: set higher values of T_ADD and T_DROP, and check that the sector coverage is neither too high nor too low.
For best quality, decrease the FPCH (Forward Pilot Channel) and FER (Frame Error Rate) settings to 1%; to increase the capacity of highly loaded sites, increase these settings to more than 3%.
Some sites have very low utilization, and due to coverage issues a new site is required in nearby areas. Instead of a new site, a cellular repeater can be used effectively to provide coverage.
|
https://en.wikipedia.org/wiki/CDMA_spectral_efficiency
|
Interim Standard 95 (IS-95) was the first digital cellular technology that used code-division multiple access (CDMA). It was developed by Qualcomm and later adopted as a standard by the Telecommunications Industry Association in the TIA/EIA/IS-95 release published in 1995. The proprietary name for IS-95 is cdmaOne.
It is a 2G mobile telecommunications standard that uses CDMA, a multiple access scheme for digital radio, to send voice, data and signaling data (such as a dialed telephone number) between mobile telephones and cell sites. CDMA transmits streams of bits (PN codes). CDMA permits several radios to share the same frequencies. Unlike time-division multiple access (TDMA), a competing system used in 2G GSM, all radios can be active all the time, because network capacity does not directly limit the number of active radios. Since larger numbers of phones can be served by smaller numbers of cell sites, CDMA-based standards have a significant economic advantage over TDMA-based standards,[citation needed] or the oldest cellular standards that used frequency-division multiplexing.
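The frequency-sharing principle described above can be illustrated with a toy example: two users transmit at the same time on the same channel, separated only by orthogonal Walsh spreading codes. Length-4 codes are used here for brevity (IS-95 uses length-64 codes), and the noise-free additive channel is a simplification.

```python
# Toy CDMA spreading/despreading with two orthogonal Walsh codes.
w1 = [1, 1, 1, 1]          # Walsh code for user 1
w2 = [1, -1, 1, -1]        # Walsh code for user 2 (orthogonal to w1)

def spread(bits, code):
    """Each data bit (±1) is multiplied chip-by-chip by the spreading code."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate each chip block with the code; the sign recovers the bit."""
    n = len(code)
    return [1 if sum(chips[i + j] * code[j] for j in range(n)) > 0 else -1
            for i in range(0, len(chips), n)]

u1, u2 = [1, -1, 1], [-1, -1, 1]
# Both users transmit simultaneously; the channel simply adds their signals.
channel = [a + b for a, b in zip(spread(u1, w1), spread(u2, w2))]

print(despread(channel, w1), despread(channel, w2))
# [1, -1, 1] [-1, -1, 1]  — each receiver recovers its own user's bits
```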
In North America, the technology competed with Digital AMPS (IS-136), a TDMA-based standard, as well as with the TDMA-based GSM. It was supplanted by IS-2000 (CDMA2000), a later CDMA-based standard.
cdmaOne's technical history reflects both its birth as a Qualcomm internal project and the world of then-unproven competing digital cellular standards under which it was developed. The term IS-95 generically applies to the earlier set of protocol revisions, namely P_REVs one through five.
P_REV=1 was developed under an ANSI standards process with documentation reference J-STD-008. J-STD-008, published in 1995, was only defined for the then-new North American PCS band (Band Class 1, 1900 MHz). The term IS-95 properly refers to P_REV=1, developed under the Telecommunications Industry Association (TIA) standards process, for the North American cellular band (Band Class 0, 800 MHz) under roughly the same time frame. IS-95 offered interoperation (including handoff) with the analog cellular network. For digital operation, IS-95 and J-STD-008 have most technical details in common. The immature style and structure of both documents are highly reflective of the "standardizing" of Qualcomm's internal project.
P_REV=2 is termed Interim Standard 95A (IS-95A). IS-95A was developed for Band Class 0 only, as an incremental improvement over IS-95 in the TIA standards process.
P_REV=3 is termed Technical Services Bulletin 74 (TSB-74). TSB-74 was the next incremental improvement over IS-95A in the TIA standards process.
P_REV=4 is termed Interim Standard 95B (IS-95B) Phase I, and P_REV=5 is termed Interim Standard 95B (IS-95B) Phase II. The IS-95B standards track provided for a merging of the TIA and ANSI standards tracks under the TIA, and was the first document that provided for interoperation of IS-95 mobile handsets in both band classes (dual-band operation). P_REV=4 was by far the most popular variant of IS-95, with P_REV=5 only seeing minimal uptake in South Korea.
P_REV=6 and beyond fall under the CDMA2000 umbrella. Besides technical improvements, the IS-2000 documents are much more mature in terms of layout and content. They also provide backwards compatibility to IS-95.
The IS-95 standards describe an air interface,[1] a set of protocols used between mobile units and the network. IS-95 is widely described as a three-layer stack, where L1 corresponds to the physical (PHY) layer, L2 refers to the Media Access Control (MAC) and Link Access Control (LAC) sublayers, and L3 to the call-processing state machine.
IS-95 defines the transmission of signals in both the forward (network-to-mobile) and reverse (mobile-to-network) directions.
In the forward direction, radio signals are transmitted by base stations (BTSs). Every BTS is synchronized with a GPS receiver so transmissions are tightly controlled in time. All forward transmissions are QPSK with a chip rate of 1,228,800 chips per second. Each signal is spread with a Walsh code of length 64 and a pseudo-random noise code (PN code) of length 2¹⁵, yielding a PN roll-over period of 80⁄3 ms.
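The PN roll-over period above follows from the chip rate and code length, and the arithmetic can be verified directly:

```python
# A length-2**15 short code at 1,228,800 chips/s rolls over every
# 32768 / 1228800 seconds, which is exactly 80/3 ms (about 26.67 ms).
CHIP_RATE = 1_228_800   # chips per second
PN_LENGTH = 2 ** 15     # short PN code length in chips

rollover_ms = PN_LENGTH / CHIP_RATE * 1000
print(f"{rollover_ms:.4f} ms")              # ≈ 26.6667 ms

# Integer cross-check that this is exactly 80/3 ms:
assert PN_LENGTH * 3 * 1000 == 80 * CHIP_RATE
```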
For the reverse direction, radio signals are transmitted by the mobile. Reverse link transmissions areOQPSKin order to operate in the optimal range of the mobile's power amplifier. Like the forward link, the chip rate is 1,228,800 per second and signals are spread withWalsh codesand thepseudo-random noisecode, which is also known as a Short Code.
Every BTS dedicates a significant amount of output power to apilot channel, which is an unmodulated PN sequence (in other words, spread with Walsh code 0). Each BTS sector in the network is assigned a PN offset in steps of 64 chips. There is no data carried on the forward pilot. With its strongautocorrelationfunction, the forward pilot allows mobiles to determine system timing and distinguish different BTSs forhandoff.
When a mobile is "searching", it is attempting to find pilot signals on the network by tuning to particular radio frequencies and performing a cross-correlation across all possible PN phases. A strong correlation peak indicates the proximity of a BTS.
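The search operation amounts to a cyclic cross-correlation of the received signal against every candidate PN phase. The sketch below substitutes a short random ±1 sequence for the real 2^15-chip short code; the length and offset are illustrative values only:

```python
import random

random.seed(1)
N = 512                                    # toy PN length (the real short code is 2**15 chips)
pn = [random.choice((-1, 1)) for _ in range(N)]

true_offset = 137                          # the BTS sector's PN offset (illustrative)
rx = pn[true_offset:] + pn[:true_offset]   # received pilot: a cyclic shift of the code

def correlate(seq, ref, shift):
    """Cyclic cross-correlation of seq against ref delayed by `shift` chips."""
    return sum(seq[i] * ref[(i + shift) % N] for i in range(N))

best = max(range(N), key=lambda s: correlate(rx, pn, s))
print(best)                                # 137: the correlation peak reveals the PN phase
```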
Other forward channels, selected by their Walsh code, carry data from the network to the mobiles. Data consists of network signaling and user traffic. Generally, data to be transmitted is divided into frames of bits. A frame of bits is passed through a convolutional encoder, adding forward error correction redundancy, generating a frame of symbols. These symbols are then spread with the Walsh and PN sequences and transmitted.
BTSs transmit async channelspread with Walsh code 32. The sync channel frame is803{\displaystyle {\frac {80}{3}}}ms long, and its frame boundary is aligned to the pilot. The sync channel continually transmits a single message, theSync Channel Message, which has a length and content dependent on the P_REV. The message is transmitted 32 bits per frame, encoded to 128 symbols, yielding a rate of 1200 bit/s. The Sync Channel Message contains information about the network, including the PN offset used by the BTS sector.
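The 1200 bit/s figure falls directly out of the frame parameters above (32 bits per 80/3 ms frame, encoded to 128 symbols):

```python
FRAME_MS = 80 / 3            # sync channel frame length in milliseconds
FRAME_BITS = 32              # message bits per frame
FRAME_SYMBOLS = 128          # coded symbols per frame

bit_rate = FRAME_BITS / (FRAME_MS / 1000)
symbol_rate = FRAME_SYMBOLS / (FRAME_MS / 1000)
print(round(bit_rate), round(symbol_rate))   # 1200 bit/s, 4800 symbols/s
```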
Once a mobile has found a strong pilot channel, it listens to the sync channel and decodes a Sync Channel Message to develop a highly accurate synchronization to system time. At this point the mobile knows whether it is roaming, and that it is "in service".
BTSs transmit at least one, and as many as seven,paging channels starting with Walsh code 1. The paging channel frame time is 20 ms, and is time aligned to the IS-95 system (i.e. GPS) 2-second roll-over. There are two possible rates used on the paging channel: 4800 bit/s or 9600 bit/s. Both rates are encoded to 19200 symbols per second.
The paging channel contains signaling messages transmitted from the network to all idle mobiles. A set of messages communicate detailed network overhead to the mobiles, circulating this information while the paging channel is free. The paging channel also carries higher-priority messages dedicated to setting up calls to and from the mobiles.
When a mobile is idle, it is mostly listening to a paging channel. Once a mobile has parsed all the network overhead information, itregisterswith the network, then optionally entersslotted-mode. Both of these processes are described in more detail below.
The Walsh space not dedicated to broadcast channels on the BTS sector is available fortraffic channels. These channels carry the individual voice and data calls supported by IS-95. Like the paging channel, traffic channels have a frame time of 20 ms.
Since voice and user data are intermittent, the traffic channels support variable-rate operation. Every 20 ms frame may be transmitted at a different rate, as determined by the service in use (voice or data). P_REV=1 and P_REV=2 supportedrate set 1, providing a rate of 1200, 2400, 4800, or 9600 bit/s. P_REV=3 and beyond also providedrate set 2, yielding rates of 1800, 3600, 7200, or 14400 bit/s.
For voice calls, the traffic channel carries frames ofvocoderdata. A number of different vocoders are defined under IS-95, the earlier of which were limited to rate set 1, and were responsible for some user complaints of poor voice quality. More sophisticated vocoders, taking advantage of modern DSPs and rate set 2, remedied the voice quality situation and are still in wide use in 2005.
The mobile receiving a variable-rate traffic frame does not know the rate at which the frame was transmitted. Typically, the frame is decoded at each possible rate, and using the quality metrics of theViterbi decoder, the correct result is chosen.
Traffic channels may also carry circuit-switch data calls in IS-95. The variable-rate traffic frames are generated using the IS-95Radio Link Protocol (RLP). RLP provides a mechanism to improve the performance of the wireless link for data. Where voice calls might tolerate the dropping of occasional 20 ms frames, a data call would have unacceptable performance without RLP.
Under IS-95B P_REV=5, it was possible for a user to use up to seven supplemental "code" (traffic) channels simultaneously to increase the throughput of a data call. Very few mobiles or networks ever provided this feature, which could in theory offer 115200 bit/s to a user.
After convolution coding and repetition, symbols are sent to a 20 ms block interleaver, which is a 24 by 16 array.
IS-95 and its use of CDMA techniques, like any other communications system, have their throughput limited according toShannon's theorem. Accordingly, capacity improves with SNR and bandwidth. IS-95 has a fixed bandwidth, but fares well in the digital world because it takes active steps to improve SNR.
With CDMA, signals that are not correlated with the channel of interest (such as other PN offsets from adjacent cellular base stations) appear as noise, and signals carried on other Walsh codes (that are properly time aligned) are essentially removed in the de-spreading process. The variable-rate nature of the traffic channels allows lower-rate frames to be transmitted at lower power, causing less noise for the other signals that must still be correctly received. These factors provide an inherently lower noise level than other cellular technologies, allowing the IS-95 network to squeeze more users into the same radio spectrum.
Active (slow) power control is also used on the forward traffic channels, where during a call, the mobile sends signaling messages to the network indicating the quality of the signal. The network will control the transmitted power of the traffic channel to keep the signal quality just good enough, thereby keeping the noise level seen by all other users to a minimum.
The receiver also uses the techniques of therake receiverto improve SNR as well as performsoft handoff.
Once a call is established, a mobile is restricted to using the traffic channel. A frame format is defined in the MAC for the traffic channel that allows the regular voice (vocoder) or data (RLP) bits to be multiplexed with signaling message fragments. The signaling message fragments are pieced together in the LAC, where complete signaling messages are passed on to Layer 3.
cdmaOne was used in the following areas:
|
https://en.wikipedia.org/wiki/CdmaOne
|
Indigital communications, achipis a pulse of adirect-sequence spread spectrum(DSSS) code, such as a pseudo-random noise (PN) code sequence used in direct-sequencecode-division multiple access(CDMA)channel accesstechniques.
In a binary direct-sequence system, each chip is typically a rectangular pulse of +1 or −1 amplitude, which is multiplied by a data sequence (similarly +1 or −1 representing the messagebits) and by a carrier waveform to make the transmitted signal. The chips are therefore just the bit sequence out of the code generator; they are called chips to avoid confusing them with message bits.[1]
Thechip rateof a code is the number of pulses per second (chips per second) at which the code is transmitted (or received). The chip rate is larger than thesymbol rate, meaning that onesymbolis represented by multiple chips. The ratio is known as thespreading factor(SF) or processing gain: SF = chip rate / symbol rate.
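For example, with the IS-95 numbers used elsewhere in this text, a 1,228,800 chip/s stream carrying 19,200 symbols/s gives a spreading factor of 64, or about 18 dB of processing gain:

```python
import math

chip_rate = 1_228_800        # chips per second
symbol_rate = 19_200         # symbols per second

sf = chip_rate / symbol_rate             # spreading factor
gain_db = 10 * math.log10(sf)            # processing gain in dB
print(sf, round(gain_db, 1))             # 64.0 18.1
```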
Orthogonal variable spreading factor(OVSF) is an implementation ofcode-division multiple access(CDMA) where before each signal is transmitted, thesignal is spread over a wide spectrum rangethrough the use of a user's code. Users' codes are carefully chosen to be mutuallyorthogonalto each other.
These codes are derived from an OVSF code tree, and each user is given a different code. An OVSF code tree is a completebinary treethat reflects the construction ofHadamard matrices.
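A minimal sketch of the construction: each code at one level of the tree spawns two children, (c, c) and (c, −c), so the codes of length 2^d form the rows of a Hadamard-type matrix and are mutually orthogonal. (The ordering below is one simple choice, not the standard tree numbering.)

```python
def ovsf_codes(depth):
    """All spreading codes of length 2**depth from the OVSF/Hadamard doubling rule."""
    codes = [[1]]
    for _ in range(depth):
        codes = [c + c for c in codes] + [c + [-x for x in c] for c in codes]
    return codes

codes = ovsf_codes(3)                      # eight mutually orthogonal codes of length 8
dot = sum(a * b for a, b in zip(codes[2], codes[5]))
print(len(codes), dot)                     # 8 0
```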
This article aboutwireless technologyis astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Orthogonal_variable_spreading_factor
|
Incryptography,pseudorandom noise(PRN[1]) is asignalsimilar tonoisewhich satisfies one or more of the standard tests forstatistical randomness. Although it seems to lack any definitepattern, pseudorandom noise consists of a deterministicsequenceofpulsesthat will repeat itself after its period.[2]
Incryptographic devices, the pseudorandom noise pattern is determined by akeyand the repetition period can be very long, even millions of digits.
Pseudorandom noise is used in someelectronic musical instruments, either by itself or as an input tosubtractive synthesis, and in manywhite noise machines.
Inspread-spectrumsystems, the receivercorrelatesa locally generated signal with the receivedsignal. Such spread-spectrum systems require a set of one or more "codes" or "sequences" such that
In adirect-sequence spread spectrumsystem, each bit in thepseudorandom binary sequenceis known as achipand theinverseof its period aschip rate;comparebit rateandsymbol rate.
In afrequency-hopping spread spectrumsequence, each value in the pseudorandom sequence is known as achannel numberand theinverseof its period as thehop rate.FCC Part 15mandates at least 50 different channels and at least a 2.5 Hz hop rate for narrow band frequency-hopping systems.
GPS satellites broadcast data at a rate of 50 data bits per second – each satellite modulates its data with one PN bit stream at 1.023 millionchips per secondand the same data with another PN bit stream at 10.23 million chips per second.GPSreceivers correlate the received PN bit stream with a local reference to measure distance. GPS is a receive-only system that uses relative timing measurements from several satellites (and the known positions of the satellites) to determine receiver position.
Otherrange-findingapplications involve two-way transmissions. A local station generates a pseudorandom bit sequence and transmits it to the remote location (using any modulation technique). Some object at the remote location echoes this PN signal back to the location station – either passively, as in some kinds of radar and sonar systems, or using an active transponder at the remote location, as in the ApolloUnified S-bandsystem.[3]By correlating a (delayed version of) the transmitted signal with the received signal, a precise round trip time to the remote location can be determined and thus the distance.
Apseudo-noise code(PN code) orpseudo-random-noise code(PRN code) is one that has a spectrum similar to arandom sequenceof bits but isdeterministicallygenerated. The most commonly used sequences indirect-sequence spread spectrumsystems aremaximal length sequences,Gold codes,Kasami codes, andBarker codes.[4]
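Maximal length sequences are typically produced by a linear-feedback shift register whose feedback taps correspond to a primitive polynomial, so an n-bit register cycles through all 2^n − 1 nonzero states. A toy 4-bit sketch (the tap choice here corresponds to a primitive degree-4 polynomial):

```python
def lfsr(seed, taps, nbits, length):
    """Fibonacci LFSR over GF(2); yields an m-sequence when the taps are primitive."""
    state, out = seed, []
    for _ in range(length):
        out.append(state & 1)              # output the low bit
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1         # XOR the tapped bits
        state = (state >> 1) | (fb << (nbits - 1))
    return out

seq = lfsr(seed=0b1000, taps=(0, 3), nbits=4, length=30)
print(seq[:15] == seq[15:])   # True: the sequence repeats with period 2**4 - 1 = 15
```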
|
https://en.wikipedia.org/wiki/Pseudorandom_noise
|
Quadrature-division multiple access(QDMA) is a radio protocol.[1]The term combines two standard terms intelecommunications,CDMAandQPSK.
QDMA is used forlocal area networks, usually wireless short-range such asWiMax. CDMA and QDMA are especially suitable for modern communications, for example, the transmission of short messages such asSMSorMMS; communication when in motion (from cars, trains, etc.); the establishment of unplanned links.
The traditionalTDMAandFDMArequire significant overhead to set up link parameters with a new user, or to detect that a user has left and that their allocation is free to be reassigned to another. In CDMA or QDMA, a new user is simply allocated a new code and is ready to go. This may impose a slight load on the spectrum, but the system is devised to absorb a controlled measure ofcollisionsand continue operating at a high level of quality of service.
This article related to radio communications is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Quadrature-division_multiple_access
|
Inwireless communicationsystems, therise over thermal(ROT) indicates the ratio between the total interference received on abase stationand thethermal noise.[1]
The ROT is a measurement of congestion of acellular telephone network. The acceptable level of ROT is often used to define the capacity of systems using CDMA (code-division multiple access).
This article related totelecommunicationsis astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Rise_over_thermal
|
Intelecommunications, especiallyradio communication,spread spectrumrefers to techniques by which asignal(e.g., an electrical, electromagnetic, or acoustic signal) generated with a particularbandwidthis deliberately spread in thefrequency domainover a widerfrequency band. Spread-spectrum techniques are used to establish secure communications, to increase resistance to naturalinterference,noise, andjamming, to prevent detection, to limitpower flux density(e.g., insatellitedownlinks), and to enable multiple-access communications.
Spread spectrum generally makes use of a sequentialnoise-like signal structure to spread the normallynarrowbandinformation signal over a relativelywideband(radio) band of frequencies. The receiver correlates the received signals to retrieve the original information signal. Originally there were two motivations: either to resist enemy efforts to jam the communications (anti-jam, or AJ), or to hide the fact that communication was even taking place, sometimes calledlow probability of intercept(LPI).[1]
Frequency-hopping spread spectrum(FHSS),direct-sequence spread spectrum(DSSS),time-hopping spread spectrum(THSS),chirp spread spectrum(CSS), and combinations of these techniques are forms of spread spectrum. The first two of these techniques employ pseudorandom number sequences—created usingpseudorandom number generators—to determine and control the spreading pattern of the signal across the allocated bandwidth. Wireless standardIEEE 802.11uses either FHSS or DSSS in its radio interface.
The idea of trying to protect and avoid interference in radio transmissions dates back to the beginning of radio wave signaling. In 1899,Guglielmo Marconiexperimented with frequency-selective reception in an attempt to minimize interference.[2]The concept ofFrequency-hoppingwas adopted by the German radio companyTelefunkenand also described in part of a 1903 US patent byNikola Tesla.[3][4]Radio pioneerJonathan Zenneck's 1908 German bookWireless Telegraphydescribes the process and notes thatTelefunkenwas using it previously.[2]It saw limited use by the German military inWorld War I,[5]was put forward byPolishengineerLeonard Danilewiczin 1929,[6]showed up in a patent in the 1930s by Willem Broertjes (U.S. patent 1,869,659issued Aug. 2, 1932), and in the top-secretUS Army Signal CorpsWorld War IIcommunications system namedSIGSALY.
During World War II,Golden Age of HollywoodactressHedy Lamarrand avant-gardecomposerGeorge Antheildeveloped an intended jamming-resistant radio guidance system for use in Alliedtorpedoes, patenting the device underU.S. patent 2,292,387"Secret Communications System" on August 11, 1942. Their approach was unique in that frequency coordination was done with paper player piano rolls, a novel approach which was never put into practice.[7]
Spread-spectrum clock generation (SSCG) is used in somesynchronous digital systems, especially those containing microprocessors, to reduce the spectral density of theelectromagnetic interference(EMI) that these systems generate. A synchronous digital system is one that is driven by aclock signaland, because of its periodic nature, has an unavoidably narrow frequency spectrum. In fact, a perfect clock signal would have all its energy concentrated at a single frequency (the desired clock frequency) and its harmonics.
Practical synchronous digital systems radiate electromagnetic energy on a number of narrow bands spread on the clock frequency and its harmonics, resulting in a frequency spectrum that, at certain frequencies, can exceed the regulatory limits for electromagnetic interference (e.g. those of theFCCin the United States,JEITAin Japan and theIECin Europe).
Spread-spectrum clocking avoids this problem by reducing the peak radiated energy and, therefore, its electromagnetic emissions, thereby complying withelectromagnetic compatibility(EMC) regulations. It has become a popular technique to gain regulatory approval because it requires only simple equipment modification. It is even more popular in portable electronic devices because of faster clock speeds and increasing integration of high-resolution LCD displays into ever smaller devices. As these devices are designed to be lightweight and inexpensive, traditional passive, electronic measures to reduce EMI, such as capacitors or metal shielding, are not viable.Active EMI reductiontechniques such as spread-spectrum clocking are needed in these cases.
In PCIe, USB 3.0, and SATA systems, the most common technique is downspreading, viafrequency modulationwith a lower-frequency source.[8]Spread-spectrum clocking, like other kinds ofdynamic frequency change, can also create challenges for designers. Principal among these is clock/data misalignment, orclock skew. Aphase-locked loopon the receiving side needs a high enough bandwidth to correctly track a spread-spectrum clock.[9]
Even though SSC compatibility is mandatory on SATA receivers,[10]it is not uncommon to find expander chips having problems dealing with such a clock. Consequently, an ability to disable spread-spectrum clocking in computer systems is considered useful.[11][12][13]
Note that this method does not reduce totalradiatedenergy, and therefore systems are not necessarily less likely to cause interference. Spreading energy over a larger bandwidth effectively reduces electrical and magnetic readings within narrow bandwidths. Typicalmeasuring receiversused by EMC testing laboratories divide the electromagnetic spectrum into frequency bands approximately 120 kHz wide.[14]If the system under test were to radiate all its energy in a narrow bandwidth, it would register a large peak. Distributing this same energy into a larger bandwidth prevents systems from putting enough energy into any one narrowband to exceed the statutory limits. The usefulness of this method as a means to reduce real-life interference problems is often debated,[9]as it is perceived that spread-spectrum clocking hides rather than resolves higher radiated energy issues by simple exploitation of loopholes in EMC legislation or certification procedures. This situation results in electronic equipment sensitive to narrow bandwidth(s) experiencing much less interference, while those with broadband sensitivity, or even operated at other higher frequencies (such as a radio receiver tuned to a different station), will experience more interference.
FCC certification testing is often completed with the spread-spectrum function enabled in order to reduce the measured emissions to within acceptable legal limits. However, the spread-spectrum functionality may be disabled by the user in some cases. As an example, in the area of personal computers, someBIOSwriters include the ability to disable spread-spectrum clock generation as a user setting, thereby defeating the object of the EMI regulations. This might be considered aloophole, but is generally overlooked as long as spread-spectrum is enabled by default.
|
https://en.wikipedia.org/wiki/Spread_spectrum
|
Incomputational complexity theory, acomputational hardness assumptionis the hypothesis that a particular problem cannot be solved efficiently (whereefficientlytypically means "inpolynomial time"). It is not known how toprove(unconditional) hardness for essentially any useful problem. Instead,computer scientistsrely onreductionsto formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better-understood.
Computational hardness assumptions are of particular importance incryptography. A major goal in cryptography is to createcryptographic primitiveswithprovable security. In some cases, cryptographic protocols are found to haveinformation theoretic security; theone-time padis a common example. However, information theoretic security cannot always be achieved; in such cases, cryptographers fall back to computational security. Roughly speaking, this means that these systems are secureassuming that any adversaries are computationally limited, as all adversaries are in practice.
Computational hardness assumptions are also useful for guidingalgorithmdesigners: a simple algorithm is unlikely to refute a well-studied computational hardness assumption such asP ≠ NP.
Computer scientists have different ways of assessing which hardness assumptions are more reliable.
We say that assumptionA{\displaystyle A}isstrongerthan assumptionB{\displaystyle B}whenA{\displaystyle A}impliesB{\displaystyle B}(and theconverseis false or not known).
In other words, even if assumptionA{\displaystyle A}were false, assumptionB{\displaystyle B}may still be true, and cryptographic protocols based on assumptionB{\displaystyle B}may still be safe to use. Thus when devising cryptographic protocols, one hopes to be able to prove security using theweakestpossible assumptions.
Anaverage-caseassumption says that a specific problem is hard on most instances from some explicit distribution, whereas aworst-caseassumption only says that the problem is hard onsomeinstances. For a given problem, average-case hardness implies worst-case hardness, so anaverage-case hardness assumptionis stronger than a worst-case hardness assumption for the same problem. Furthermore, even for incomparable problems, an assumption like theexponential time hypothesisis often considered preferable to anaverage-case assumptionlike theplanted clique conjecture.[1]However, for cryptographic applications, knowing that a problem has some hard instance (the problem is hard in the worst-case) is useless because it does not provide us with a way of generating hard instances.[2]Fortunately, many average-case assumptions used in cryptography (includingRSA,discrete log, and somelattice problems) can be based on worst-case assumptions via worst-case-to-average-case reductions.[3]
A desired characteristic of a computational hardness assumption isfalsifiability, i.e. that if the assumption were false, then it would be possible to prove it. In particular,Naor (2003)introduced a formal notion of cryptographic falsifiability.[4]Roughly, a computational hardness assumption is said to be falsifiable if it can be formulated in terms of a challenge: an interactive protocol between an adversary and an efficient verifier, where an efficient adversary can convince the verifier to acceptif and only ifthe assumption is false.
There are many cryptographic hardness assumptions in use. This is a list of some of the most common ones, and some cryptographic protocols that use them.
Given acompositeintegern{\displaystyle n}, and in particular one which is the product of two largeprimesn=p⋅q{\displaystyle n=p\cdot q}, the integer factorization problem is to findp{\displaystyle p}andq{\displaystyle q}(more generally, find primesp1,…,pk{\displaystyle p_{1},\dots ,p_{k}}such thatn=∏ipi{\displaystyle n=\prod _{i}p_{i}}).
It is a majoropen problemto find an algorithm for integer factorization that runs in time polynomial in the size of representation (logn{\displaystyle \log n}). The security of many cryptographic protocols rely on the assumption that integer factorization is hard (i.e. cannot be solved in polynomial time). Cryptosystems whose security is equivalent to this assumption include theRabin cryptosystemand theOkamoto–Uchiyama cryptosystem. Many more cryptosystems rely on stronger assumptions such asRSA,residuosity problems, andphi-hiding.
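For intuition, the obvious trial-division algorithm runs in time roughly √n, which is exponential in the bit-length log n — exactly the gap the open problem above refers to. A sketch:

```python
def factor(n):
    """Trial division: exponential in the bit-length of n, fine only for toy inputs."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:      # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is prime
        factors.append(n)
    return factors

print(factor(3233))            # [53, 61]
```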
Given a composite numbern{\displaystyle n}, exponente{\displaystyle e}and numberc:=me(modn){\displaystyle c:=m^{e}(\mathrm {mod} \;n)}, the RSA problem is to findm{\displaystyle m}. The problem isconjecturedto be hard, but becomes easy given the factorization ofn{\displaystyle n}. In theRSA cryptosystem,(n,e){\displaystyle (n,e)}is thepublic key,c{\displaystyle c}is the encryption of messagem{\displaystyle m}, and the factorization ofn{\displaystyle n}is the secret key used for decryption.
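A toy instance showing why the RSA problem "becomes easy given the factorization": knowing p and q lets one compute φ(n) and hence the decryption exponent. (Textbook-sized numbers only; real RSA uses far larger primes plus padding.)

```python
# Toy parameters only: real RSA uses primes hundreds of digits long plus padding.
p, q = 61, 53
n = p * q                    # public modulus, 3233
e = 17                       # public exponent
m = 65                       # message, as an integer < n

c = pow(m, e, n)             # encryption uses only the public key (n, e)

# The RSA problem is to recover m from (n, e, c).  With the factorization it is easy:
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent via modular inverse (Python 3.8+)
print(pow(c, d, n))          # 65: the original message
```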
Given a composite numbern{\displaystyle n}and integersy,d{\displaystyle y,d}, the residuosity problem is to determine whether there exists (alternatively, find an)x{\displaystyle x}such thatxd≡y(modn){\displaystyle x^{d}\equiv y\;(\mathrm {mod} \;n)}.
Important special cases include thequadratic residuosity problemand thedecisional composite residuosity problem. As in the case of RSA, this problem (and its special cases) are conjectured to be hard, but become easy given the factorization ofn{\displaystyle n}. Some cryptosystems that rely on the hardness of residuousity problems include:
For a composite numberm{\displaystyle m}, it is not known how to efficiently compute itsEuler's totient functionϕ(m){\displaystyle \phi (m)}. The phi-hiding assumption postulates that it is hard to computeϕ(m){\displaystyle \phi (m)}, and furthermore even computing anyprime factorsofϕ(m){\displaystyle \phi (m)}is hard. This assumption is used in the Cachin–Micali–StadlerPIRprotocol.[5]
Given elementsa{\displaystyle a}andb{\displaystyle b}from agroupG{\displaystyle G}, the discrete log problem asks for an integerk{\displaystyle k}such thata=bk{\displaystyle a=b^{k}}.
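The best generic algorithms for this problem run in time about the square root of the group order; baby-step giant-step is the classic example. A sketch for a small prime modulus (parameters illustrative):

```python
import math

def discrete_log(b, a, p):
    """Baby-step giant-step: return k with b**k ≡ a (mod p), in O(sqrt p) steps."""
    m = math.isqrt(p) + 1
    baby = {pow(b, j, p): j for j in range(m)}   # baby steps: b**j
    step = pow(b, -m, p)                         # b**(-m) mod p (Python 3.8+)
    g = a % p
    for i in range(m):                           # giant steps: a * b**(-i*m)
        if g in baby:
            return i * m + baby[g]
        g = g * step % p
    return None

k = discrete_log(2, 57, 101)    # 2 generates the multiplicative group mod 101
print(pow(2, k, 101))           # 57
```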
The discrete log problem is not known to be comparable to integer factorization, but their computational complexitiesare closely related.
Most cryptographic protocols related to the discrete log problem actually rely on the strongerDiffie–Hellman assumption: given group elementsg,ga,gb{\displaystyle g,g^{a},g^{b}}, whereg{\displaystyle g}is ageneratoranda,b{\displaystyle a,b}are random integers, it is hard to findga⋅b{\displaystyle g^{a\cdot b}}. Examples of protocols that use this assumption include the originalDiffie–Hellman key exchange, as well as theElGamal encryption(which relies on the yet strongerDecisional Diffie–Hellman (DDH)variant).
Amultilinear mapis a functione:G1,…,Gn→GT{\displaystyle e:G_{1},\dots ,G_{n}\rightarrow G_{T}}(whereG1,…,Gn,GT{\displaystyle G_{1},\dots ,G_{n},G_{T}}aregroups) such that for anyg1,…,gn∈G1,…Gn{\displaystyle g_{1},\dots ,g_{n}\in G_{1},\dots G_{n}}anda1,…,an{\displaystyle a_{1},\dots ,a_{n}},e(g1a1,…,gnan)=e(g1,…,gn)a1⋯an{\displaystyle e(g_{1}^{a_{1}},\dots ,g_{n}^{a_{n}})=e(g_{1},\dots ,g_{n})^{a_{1}\cdots a_{n}}}.
For cryptographic applications, one would like to construct groupsG1,…,Gn,GT{\displaystyle G_{1},\dots ,G_{n},G_{T}}and a mape{\displaystyle e}such that the map and the group operations onG1,…,Gn,GT{\displaystyle G_{1},\dots ,G_{n},G_{T}}can be computed efficiently, but the discrete log problem onG1,…,Gn{\displaystyle G_{1},\dots ,G_{n}}is still hard.[6]Some applications require stronger assumptions, e.g. multilinear analogs of Diffie-Hellman assumptions.
For the special case ofn=2{\displaystyle n=2},bilinear mapswith believable security have been constructed usingWeil pairingandTate pairing.[7]Forn>2{\displaystyle n>2}many constructions have been proposed in recent years, but many of them have also been broken, and currently there is no consensus about a safe candidate.[8]
Some cryptosystems that rely on multilinear hardness assumptions include:
The most fundamental computational problem onlatticesis theshortest vector problem (SVP): given a latticeL{\displaystyle L}, find the shortest non-zero vectorv∈L{\displaystyle v\in L}.
Most cryptosystems require stronger assumptions on variants of SVP, such asshortest independent vectors problem (SIVP),GapSVP,[10]or Unique-SVP.[11]
The most useful lattice hardness assumption in cryptography is for thelearning with errors(LWE) problem: Given samples(x,y){\displaystyle (x,y)}, wherey=f(x){\displaystyle y=f(x)}for some linear functionf(⋅){\displaystyle f(\cdot )}, it is easy to learnf(⋅){\displaystyle f(\cdot )}usinglinear algebra. In the LWE problem, the input to the algorithm has errors, i.e. for each pair,y≠f(x){\displaystyle y\neq f(x)}holds with some smallprobability. The errors are believed to make the problem intractable (for appropriate parameters); in particular, there are known worst-case to average-case reductions from variants of SVP.[12]
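A toy illustration of the shape of an LWE instance — without the error term, the secret falls to Gaussian elimination; with it, the samples look nearly random. The parameters below are far too small to be hard and are purely illustrative:

```python
import random

random.seed(0)
q, n = 97, 4                               # toy modulus and dimension
secret = [random.randrange(q) for _ in range(n)]

def lwe_sample():
    """One LWE sample: (a, <a, s> + e mod q) with a small error term e."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice((-1, 0, 1))
    b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
    return a, b

samples = [lwe_sample() for _ in range(8)]
errors = [(b - sum(ai * si for ai, si in zip(a, secret))) % q for a, b in samples]
print(all(e in (0, 1, q - 1) for e in errors))   # True: the noise stays small
```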
Forquantum computers, factoring and discrete log problems are easy, but lattice problems are conjectured to be hard.[13]This makes somelattice-based cryptosystemscandidates forpost-quantum cryptography.
Some cryptosystems that rely on hardness of lattice problems include:
As well as their cryptographic applications, hardness assumptions are used incomputational complexity theoryto provide evidence for mathematical statements that are difficult to prove unconditionally. In these applications, one proves that the hardness assumption implies some desired complexity-theoretic statement, instead of proving that the statement is itself true. The best-known assumption of this type is the assumption thatP ≠ NP,[14]but others include theexponential time hypothesis,[15]theplanted clique conjecture, and theunique games conjecture.[16]
Manyworst-casecomputational problems are known to be hard or evencompletefor somecomplexity classC{\displaystyle C}, in particularNP-hard(but often alsoPSPACE-hard,PPAD-hard, etc.). This means that they are at least as hard as any problem in the classC{\displaystyle C}. If a problem isC{\displaystyle C}-hard (with respect to polynomial time reductions), then it cannot be solved by a polynomial-time algorithm unless the computational hardness assumptionP≠C{\displaystyle P\neq C}is false.
The exponential time hypothesis (ETH) is astrengtheningofP≠NP{\displaystyle P\neq NP}hardness assumption, which conjectures that not only does theBoolean satisfiability problem(SAT) not have a polynomial time algorithm, it furthermore requires exponential time (2Ω(n){\displaystyle 2^{\Omega (n)}}).[17]An even stronger assumption, known as thestrong exponential time hypothesis(SETH) conjectures thatk{\displaystyle k}-SATrequires2(1−εk)n{\displaystyle 2^{(1-\varepsilon _{k})n}}time, wherelimk→∞εk=0{\displaystyle \lim _{k\rightarrow \infty }\varepsilon _{k}=0}.
ETH, SETH, and related computational hardness assumptions allow for deducing fine-grained complexity results, e.g. results that distinguish polynomial time andquasi-polynomial time,[1]or evenn1.99{\displaystyle n^{1.99}}versusn2{\displaystyle n^{2}}.[18]Such assumptions are also useful inparametrized complexity.[19]
Some computational problems are assumed to be hard on average over a particular distribution of instances.
For example, in theplanted cliqueproblem, the input is arandom graphsampled, by sampling anErdős–Rényi random graphand then "planting" a randomk{\displaystyle k}-clique, i.e. connectingk{\displaystyle k}uniformly random nodes (where2log2n≪k≪n{\displaystyle 2\log _{2}n\ll k\ll {\sqrt {n}}}), and the goal is to find the plantedk{\displaystyle k}-clique (which is unique w.h.p.).[20]Another important example isFeige's Hypothesis, which is a computational hardness assumption about random instances of3-SAT(sampled to maintain a specific ratio of clauses to variables).[21]Average-case computational hardness assumptions are useful for proving average-case hardness in applications like statistics, where there is a natural distribution over inputs.[22]Additionally, the planted clique hardness assumption has also been used to distinguish between polynomial and quasi-polynomial worst-case time complexity of other problems,[23]similarly to theexponential time hypothesis.
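A sketch of how a planted-clique instance is generated, with tiny parameters for illustration. Recovery here is brute force — which is exactly what the conjecture says cannot be avoided efficiently — and at this toy size the random graph may well contain other k-cliques besides the planted one:

```python
import itertools
import random

random.seed(3)
n, k = 12, 5
pairs = list(itertools.combinations(range(n), 2))
edges = {p for p in pairs if random.random() < 0.5}      # Erdős–Rényi G(n, 1/2)
planted = random.sample(range(n), k)                     # plant a k-clique
edges |= set(itertools.combinations(sorted(planted), 2))

def is_clique(c):
    return all(p in edges for p in itertools.combinations(sorted(c), 2))

# Brute-force search over all size-k subsets: exponential in k
found = next(c for c in itertools.combinations(range(n), k) if is_clique(c))
print(is_clique(found), len(found))      # True 5
```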
Theunique label coverproblem is a constraint satisfaction problem, where each constraintC{\displaystyle C}involves two variablesx,y{\displaystyle x,y}, and for each value ofx{\displaystyle x}there is auniquevalue ofy{\displaystyle y}that satisfiesC{\displaystyle C}. Determining whether all the constraints can be satisfied is easy, but theunique game conjecture(UGC) postulates that determining whether almost all the constraints ((1−ε){\displaystyle (1-\varepsilon )}-fraction, for any constantε>0{\displaystyle \varepsilon >0}) can be satisfied or almost none of them (ε{\displaystyle \varepsilon }-fraction) can be satisfied is NP-hard.[16]Approximation problems are often known to be NP-hard assuming UGC; such problems are referred to as UG-hard. In particular, assuming UGC there is asemidefinite programmingalgorithm that achieves optimal approximation guarantees for many important problems.[24]
Closely related to the unique label cover problem is the small set expansion (SSE) problem: given a graph G=(V,E){\displaystyle G=(V,E)}, find a small set of vertices (of size n/log(n){\displaystyle n/\log(n)}) whose edge expansion is minimal.
It is known that if SSE is hard to approximate, then so is unique label cover. Hence, the small set expansion hypothesis, which postulates that SSE is hard to approximate, is a stronger (but closely related) assumption than the unique game conjecture.[25] Some approximation problems are known to be SSE-hard[26] (i.e. at least as hard as approximating SSE).
Given a set of n{\displaystyle n} numbers, the 3SUM problem asks whether there is a triplet of numbers whose sum is zero. There is a quadratic-time algorithm for 3SUM, and it has been conjectured that no algorithm can solve 3SUM in "truly sub-quadratic time": the 3SUM conjecture is the computational hardness assumption that there are no O(n2−ε){\displaystyle O(n^{2-\varepsilon })}-time algorithms for 3SUM (for any constant ε>0{\displaystyle \varepsilon >0}). This conjecture is useful for proving near-quadratic lower bounds for several problems, mostly from computational geometry.[27]
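The quadratic-time algorithm mentioned above is the classic sort-then-two-pointer scan; a minimal sketch:

```python
def has_3sum(nums):
    """Classic O(n^2) algorithm: sort, then do a two-pointer scan
    for each anchor element a[i]."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1          # sum too small: move left pointer right
            else:
                hi -= 1          # sum too big: move right pointer left
    return False

assert has_3sum([-5, 1, 4, 7, -2]) is True    # -5 + 1 + 4 = 0
assert has_3sum([1, 2, 3]) is False
```

The 3SUM conjecture asserts that the n² factor here cannot be improved to n^(2−ε) for any constant ε > 0.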
|
https://en.wikipedia.org/wiki/Computational_hardness_assumption
|
40-bit encryption refers to a (now broken) key size of forty bits, or five bytes, for symmetric encryption; this represents a relatively low level of security. A forty-bit key length corresponds to a total of 2⁴⁰ possible keys. Although this is a large number in human terms (about a trillion), it is possible to break this degree of encryption using a moderate amount of computing power in a brute-force attack, i.e., trying out each possible key in turn.
A typical home computer in 2004 could brute-force a 40-bit key in a little under two weeks, testing a million keys per second; modern computers are able to achieve this much faster. Using free time on a large corporate network or a botnet would reduce the time in proportion to the number of computers available.[1] With dedicated hardware, a 40-bit key can be broken in seconds. The Electronic Frontier Foundation's Deep Crack, built by a group of enthusiasts for US$250,000 in 1998, could break a 56-bit Data Encryption Standard (DES) key in days,[2] and would be able to break 40-bit DES encryption in about two seconds.[3]
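The figures above are simple arithmetic on the keyspace size; a sanity-check sketch (the helper name is illustrative):

```python
KEYSPACE = 2 ** 40                  # number of distinct 40-bit keys

def brute_force_days(keys_per_second: float) -> float:
    """Worst-case time, in days, to try every 40-bit key."""
    return KEYSPACE / keys_per_second / 86_400   # 86 400 s per day

# a 2004-era home PC testing one million keys per second:
assert 12 < brute_force_days(1e6) < 13   # "a little under two weeks"
```

At a billion keys per second the same keyspace falls in under twenty minutes, which is why dedicated hardware breaks 40-bit keys in seconds to minutes.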
40-bit encryption was common in software released before 1999, especially software based on the RC2 and RC4 algorithms, which had special "7-day" export review policies,[citation needed] when algorithms with larger key lengths could not legally be exported from the United States without a case-by-case license. "In the early 1990s ... As a general policy, the State Department allowed exports of commercial encryption with 40-bit keys, although some software with DES could be exported to U.S.-controlled subsidiaries and financial institutions."[4][5] As a result, the "international" versions of web browsers were designed to have an effective key size of 40 bits when using Secure Sockets Layer to protect e-commerce. Similar limitations were imposed on other software packages, including early versions of Wired Equivalent Privacy. In 1992, IBM designed the CDMF algorithm to reduce the strength of 56-bit DES against brute-force attack to 40 bits, in order to create exportable DES implementations.
All 40-bit and 56-bit encryption algorithms are obsolete, because they are vulnerable to brute-force attacks, and therefore cannot be regarded as secure.[6][7] As a result, virtually all Web browsers now use 128-bit keys, which are considered strong. Most Web servers will not communicate with a client unless it has 128-bit encryption capability installed on it.
Public/private key pairs used in asymmetric encryption (public-key cryptography), at least those based on prime factorization, must be much longer in order to be secure; see key size for more details.
As a general rule, modern symmetric encryption algorithms such as AES use key lengths of 128, 192 and 256 bits.
|
https://en.wikipedia.org/wiki/40-bit_encryption
|
Derived algebraic geometryis a branch of mathematics that generalizesalgebraic geometryto a situation wherecommutative rings, which provide local charts, are replaced by eitherdifferential graded algebras(overQ{\displaystyle \mathbb {Q} }),simplicial commutative ringsorE∞{\displaystyle E_{\infty }}-ring spectrafromalgebraic topology, whose higher homotopy groups account for the non-discreteness (e.g., Tor) of the structure sheaf. Grothendieck'sscheme theoryallows the structure sheaf to carrynilpotent elements. Derived algebraic geometry can be thought of as an extension of this idea, and provides natural settings forintersection theory(ormotivic homotopy theory[1]) of singular algebraic varieties andcotangent complexesindeformation theory(cf. J. Francis), among the other applications.
Basic objects of study in the field are derived schemes and derived stacks. The oft-cited motivation is Serre's intersection formula.[2] In the usual formulation, the formula involves the Tor functor and thus, unless higher Tor vanish, the scheme-theoretic intersection (i.e., fiber product of immersions) does not yield the correct intersection number. In the derived context, one takes the derived tensor product A⊗LB{\displaystyle A\otimes ^{L}B}, whose higher homotopy is higher Tor, and whose Spec is not a scheme but a derived scheme. Hence, the "derived" fiber product yields the correct intersection number. (Currently this is hypothetical; the derived intersection theory has yet to be developed.)
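For reference, Serre's Tor formula can be stated as follows (a standard formulation, not spelled out in the text above); here A is the local ring of X at the generic point z of an irreducible component Z of V ∩ W:

```latex
i(X;\,V,W;\,Z)\;=\;\sum_{i\geq 0}(-1)^{i}\,
  \operatorname{length}_{A}\!\left(\operatorname{Tor}_{i}^{A}(A/I_{V},\,A/I_{W})\right),
\qquad A=\mathcal{O}_{X,z}.
```

The i = 0 term is the naive multiplicity read off from the scheme-theoretic intersection, since Tor₀ is the ordinary tensor product; the higher Tor terms are exactly the correction that the derived fiber product encodes in its higher homotopy.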
The term "derived" is used in the same way asderived functororderived category, in the sense that the category of commutative rings is being replaced with a∞-categoryof "derived rings." In classical algebraic geometry, the derived category ofquasi-coherent sheavesis viewed as atriangulated category, but it has natural enhancement to astable ∞-category, which can be thought of as the∞-categoricalanalogue of anabelian category.
Derived algebraic geometry is fundamentally the study of geometric objects using homological algebra and homotopy. Since objects in this field should encode homological and homotopy information, there are various notions of what derived spaces encapsulate. The basic objects of study in derived algebraic geometry are derived schemes, and more generally, derived stacks. Heuristically, derived schemes should be functors from some category of derived rings to the category of sets,

F:DRing→Set,{\displaystyle F\colon {\textbf {DRing}}\to {\textbf {Set}},}

which can be generalized further to have targets of higher groupoids (which are expected to be modelled by homotopy types). These derived stacks are then suitable functors of the form

F:DRing→Spaces.{\displaystyle F\colon {\textbf {DRing}}\to {\textbf {Spaces}}.}
Many authors model such functors as functors with values in simplicial sets, since simplicial sets model homotopy types and are well studied. Definitions of these derived spaces differ depending on a choice of what the derived rings are and what the homotopy types should look like. Some examples of derived rings include commutative differential graded algebras, simplicial rings, and E∞{\displaystyle E_{\infty }}-rings.
Over characteristic 0 many of the derived geometries agree since the derived rings are the same.E∞{\displaystyle E_{\infty }}algebras are just commutative differential graded algebras over characteristic zero. We can then define derived schemes similarly to schemes in algebraic geometry. Similar to algebraic geometry, we could also view these objects as a pair(X,OX∙){\displaystyle (X,{\mathcal {O}}_{X}^{\bullet })}which is a topological spaceX{\displaystyle X}with a sheaf of commutative differential graded algebras. Sometimes authors take the convention that these are negatively graded, soOXn=0{\displaystyle {\mathcal {O}}_{X}^{n}=0}forn>0{\displaystyle n>0}. The sheaf condition could also be weakened so that for a coverUi{\displaystyle U_{i}}ofX{\displaystyle X}, the sheavesOUi∙{\displaystyle {\mathcal {O}}_{U_{i}}^{\bullet }}would glue on overlapsUij{\displaystyle U_{ij}}only by quasi-isomorphism.
Unfortunately, over characteristic p, differential graded algebras work poorly for homotopy theory, essentially because of the relation d(xp)=pxp−1dx{\displaystyle d(x^{p})=p\,x^{p-1}\,dx}, which vanishes in characteristic p.[1] This can be overcome by using simplicial algebras.
Derived rings over arbitrary characteristic are taken assimplicial commutative ringsbecause of the nice categorical properties these have. In particular, the category of simplicial rings is simplicially enriched, meaning the hom-sets are themselves simplicial sets. Also, there is a canonical model structure on simplicial commutative rings coming from simplicial sets.[3]In fact, it is a theorem of Quillen's that the model structure on simplicial sets can be transferred over to simplicial commutative rings.
It is conjectured there is a final theory ofhigher stackswhich modelhomotopy types. Grothendieck conjectured these would be modelled by globular groupoids, or a weak form of their definition. Simpson[4]gives a useful definition in the spirit of Grothendieck's ideas. Recall that an algebraic stack (here a 1-stack) is called representable if the fiber product of any two schemes is isomorphic to a scheme.[5]If we take the ansatz that a 0-stack is just an algebraic space and a 1-stack is just a stack, we can recursively define an n-stack as an object such that the fiber product along any two schemes is an (n-1)-stack. If we go back to the definition of an algebraic stack, this new definition agrees.
Another theory of derived algebraic geometry is encapsulated by the theory of spectral schemes. Their definition requires a fair amount of technology in order to precisely state.[6]But, in short,spectral schemesX=(X,OX){\displaystyle X=({\mathfrak {X}},{\mathcal {O}}_{\mathfrak {X}})}are given by a spectrally ringed∞{\displaystyle \infty }-toposX{\displaystyle {\mathfrak {X}}}together with a sheaf ofE∞{\displaystyle \mathbb {E} _{\infty }}-ringsOX{\displaystyle {\mathcal {O}}_{\mathfrak {X}}}on it subject to some locality conditions similar to the definition of affine schemes. In particular
Moreover, the spectral schemeX{\displaystyle X}is calledconnectiveifπi(OX)=0{\displaystyle \pi _{i}({\mathcal {O}}_{\mathfrak {X}})=0}fori<0{\displaystyle i<0}.
Recall that the topos of a pointSh(∗){\displaystyle {\text{Sh}}(*)}is equivalent to the category of sets. Then, in the∞{\displaystyle \infty }-topos setting, we instead consider∞{\displaystyle \infty }-sheaves of∞{\displaystyle \infty }-groupoids (which are∞{\displaystyle \infty }-categories with all morphisms invertible), denotedShv(∗){\displaystyle {\text{Shv}}(*)}, giving an analogue of the point topos in the∞{\displaystyle \infty }-topos setting. Then, the structure of a spectrally ringed space can be given by attaching anE∞{\displaystyle \mathbb {E} _{\infty }}-ringA{\displaystyle A}. Notice this implies that spectrally ringed spaces generalizeE∞{\displaystyle \mathbb {E} _{\infty }}-rings since everyE∞{\displaystyle \mathbb {E} _{\infty }}-ring can be associated with a spectrally ringed site.
This spectrally ringed topos can be a spectral scheme if the spectrum of this ring gives an equivalent ∞{\displaystyle \infty }-topos, so its underlying space is a point. For example, this can be given by the ring spectrum HQ{\displaystyle H\mathbb {Q} }, called the Eilenberg–MacLane spectrum, constructed from the Eilenberg–MacLane spaces K(Q,n){\displaystyle K(\mathbb {Q} ,n)}.
|
https://en.wikipedia.org/wiki/Derived_algebraic_geometry
|
Noncommutative geometry(NCG) is a branch ofmathematicsconcerned with a geometric approach tononcommutative algebras, and with the construction ofspacesthat are locally presented by noncommutative algebras of functions, possibly in some generalized sense. A noncommutative algebra is anassociative algebrain which the multiplication is notcommutative, that is, for whichxy{\displaystyle xy}does not always equalyx{\displaystyle yx}; or more generally analgebraic structurein which one of the principalbinary operationsis not commutative; one also allows additional structures, e.g.topologyornorm, to be possibly carried by the noncommutative algebra of functions.
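The failure of commutativity is already visible in the simplest noncommutative associative algebra, the 2×2 matrices; a minimal illustration (hand-rolled multiplication over plain lists, no external library):

```python
# 2x2 matrices form the simplest noncommutative associative algebra.
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x = [[0, 1], [0, 0]]     # the "raising" matrix unit e_12
y = [[0, 0], [1, 0]]     # the "lowering" matrix unit e_21

assert matmul(x, y) != matmul(y, x)   # xy != yx: multiplication is noncommutative
```

Here xy = e₁₁ while yx = e₂₂, so even these two elementary matrices fail to commute.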
An approach giving deep insight about noncommutative spaces is through operator algebras, that is, algebras of bounded linear operators on a Hilbert space.[1] Perhaps one of the typical examples of a noncommutative space is the "noncommutative torus", which played a key role in the early development of this field in the 1980s and led to noncommutative versions of vector bundles, connections, curvature, etc.[2]
The main motivation is to extend the commutative duality between spaces and functions to the noncommutative setting. In mathematics,spaces, which are geometric in nature, can be related to numericalfunctionson them. In general, such functions will form acommutative ring. For instance, one may take the ringC(X) ofcontinuouscomplex-valued functions on atopological spaceX. In many cases (e.g., ifXis acompactHausdorff space), we can recoverXfromC(X), and therefore it makes some sense to say thatXhascommutative topology.
More specifically, in topology, compactHausdorfftopological spaces can be reconstructed from theBanach algebraof functions on the space (Gelfand–Naimark). In commutativealgebraic geometry,algebraic schemesare locally prime spectra of commutative unital rings (A. Grothendieck), and every quasi-separated schemeX{\displaystyle X}can be reconstructed up to isomorphism of schemes from the category of quasicoherent sheaves ofOX{\displaystyle O_{X}}-modules (P. Gabriel–A. Rosenberg). ForGrothendieck topologies, the cohomological properties of a site are invariants of the corresponding category of sheaves of sets viewed abstractly as atopos(A. Grothendieck). In all these cases, a space is reconstructed from the algebra of functions or its categorified version—somecategory of sheaveson that space.
Functions on a topological space can be multiplied and added pointwise hence they form a commutative algebra; in fact these operations are local in the topology of the base space, hence the functions form a sheaf of commutative rings over the base space.
The dream of noncommutative geometry is to generalize this duality to the duality between noncommutative algebras, or sheaves of noncommutative algebras, or sheaf-like noncommutative algebraic or operator-algebraic structures, and geometric entities of certain kinds, and give an interaction between the algebraic and geometric description of those via this duality.
Regarding that the commutative rings correspond to usual affine schemes, and commutativeC*-algebrasto usual topological spaces, the extension to noncommutative rings and algebras requires non-trivial generalization oftopological spacesas "non-commutative spaces". For this reason there is some talk aboutnon-commutative topology, though the term also has other meanings.
There is an influence of physics on noncommutative geometry.[3]Thefuzzy spherehas been used to study the emergence ofconformal symmetryin the 3-dimensionalIsing model.[4]
Some of the theory developed byAlain Connesto handle noncommutative geometry at a technical level has roots in older attempts, in particular inergodic theory. The proposal ofGeorge Mackeyto create avirtual subgrouptheory, with respect to which ergodicgroup actionswould becomehomogeneous spacesof an extended kind, has by now been subsumed.
The (formal) duals ofnon-commutativeC*-algebrasare often now called non-commutative spaces. This is by analogy with theGelfand representation, which shows thatcommutativeC*-algebras aredualtolocally compactHausdorff spaces. In general, one can associate to any C*-algebraSa topological spaceŜ; seespectrum of a C*-algebra.
For thedualitybetween localizablemeasure spacesand commutativevon Neumann algebras,noncommutativevon Neumann algebrasare callednon-commutativemeasure spaces.
A smoothRiemannian manifoldMis atopological spacewith a lot of extra structure. From its algebra of continuous functionsC(M), we only recoverMtopologically. The algebraic invariant that recovers the Riemannian structure is aspectral triple. It is constructed from a smooth vector bundleEoverM, e.g. the exterior algebra bundle. The Hilbert spaceL2(M,E) of square integrable sections ofEcarries a representation ofC(M)by multiplication operators, and we consider an unbounded operatorDinL2(M,E) with compact resolvent (e.g. thesignature operator), such that the commutators [D,f] are bounded wheneverfis smooth. A deep theorem[5]states thatMas a Riemannian manifold can be recovered from this data.
This suggests that one might define a noncommutative Riemannian manifold as aspectral triple(A,H,D), consisting of a representation of a C*-algebraAon a Hilbert spaceH, together with an unbounded operatorDonH, with compact resolvent, such that [D,a] is bounded for allain some dense subalgebra ofA. Research in spectral triples is very active, and many examples of noncommutative manifolds have been constructed.
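As a concrete illustration (a standard commutative example from the literature, not spelled out in the text above), the spectral triple of the circle recovers its usual geometry:

```latex
(A,\,H,\,D)\;=\;\Bigl(C^{\infty}(S^{1}),\;L^{2}(S^{1}),\;D=-i\,\tfrac{d}{d\theta}\Bigr),
\qquad [D,f]\;=\;-i\,f',
```

so the commutator [D, f] acts as multiplication by −i f′, which is bounded precisely when f is smooth; the geodesic distance on S¹ can then be recovered from Connes' distance formula applied to this triple.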
In analogy to thedualitybetweenaffine schemesandcommutative rings, we define a category ofnoncommutative affine schemesas the dual of the category of associative unital rings. There are certain analogues of Zariski topology in that context so that one can glue such affine schemes to more general objects.
There are also generalizations of the Cone and of the Proj of a commutative graded ring, mimicking a theorem ofSerreon Proj. Namely the category of quasicoherent sheaves of O-modules on a Proj of a commutative graded algebra is equivalent to the category of graded modules over the ring localized on Serre's subcategory of graded modules of finite length; there is also analogous theorem for coherent sheaves when the algebra is Noetherian. This theorem is extended as a definition ofnoncommutative projective geometrybyMichael Artinand J. J. Zhang,[6]who add also some general ring-theoretic conditions (e.g. Artin–Schelter regularity).
Many properties of projective schemes extend to this context. For example, there exists an analog of the celebratedSerre dualityfor noncommutative projective schemes of Artin and Zhang.[7]
A. L. Rosenberg has created a rather general relative concept ofnoncommutative quasicompact scheme(over a base category), abstracting Grothendieck's study of morphisms of schemes and covers in terms of categories of quasicoherent sheaves and flat localization functors.[8]There is also another interesting approach via localization theory, due toFred Van Oystaeyen, Luc Willaert and Alain Verschoren, where the main concept is that of aschematic algebra.[9][10]
Some of the motivating questions of the theory are concerned with extending knowntopological invariantsto formal duals of noncommutative (operator) algebras and other replacements and candidates for noncommutative spaces. One of the main starting points ofAlain Connes' direction in noncommutative geometry is his discovery of a new homology theory associated to noncommutative associative algebras and noncommutative operator algebras, namely thecyclic homologyand its relations to thealgebraic K-theory(primarily via Connes–Chern character map).
The theory ofcharacteristic classesof smooth manifolds has been extended to spectral triples, employing the tools of operatorK-theoryandcyclic cohomology. Several generalizations of now-classicalindex theoremsallow for effective extraction of numerical invariants from spectral triples. The fundamental characteristic class in cyclic cohomology, theJLO cocycle, generalizes the classicalChern character.
AConnes connectionis a noncommutative generalization of aconnectionindifferential geometry. It was introduced byAlain Connes, and was later generalized byJoachim CuntzandDaniel Quillen.
Given a right A-module E, a Connes connection on E is a linear map

∇r:E→E⊗AΩ1{\displaystyle \nabla _{r}\colon E\to E\otimes _{A}\Omega ^{1}},

where Ω1{\displaystyle \Omega ^{1}} is a bimodule of noncommutative differential 1-forms over A,
that satisfies theLeibniz rule∇r(sa)=∇r(s)a+s⊗da{\displaystyle \nabla _{r}(sa)=\nabla _{r}(s)a+s\otimes da}.[12]
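A standard example (stated here as a sketch, not part of the text above) is the Grassmann connection on a finitely generated projective module E = pAⁿ, where p = p² is an idempotent in Mₙ(A):

```latex
\nabla_{r}\;=\;p\circ(d\otimes\mathrm{id})\colon\;
  pA^{n}\;\longrightarrow\; pA^{n}\otimes_{A}\Omega^{1},
\qquad \nabla_{r}(\xi)\;=\;p\,(d\xi_{1},\dots,d\xi_{n}).
```

The Leibniz rule holds because d is a derivation and pξ = ξ for ξ ∈ pAⁿ: ∇ᵣ(ξa) = p(dξ)a + pξ ⊗ da = ∇ᵣ(ξ)a + ξ ⊗ da.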
|
https://en.wikipedia.org/wiki/Noncommutative_geometry
|
Ring homomorphisms
Algebraic structures
Related structures
Algebraic number theory
Noncommutative algebraic geometry
Free algebra
Clifford algebra
Noncommutative algebraic geometryis a branch ofmathematics, and more specifically a direction innoncommutative geometry, that studies the geometric properties of formal duals ofnon-commutativealgebraic objectssuch asringsas well as geometric objects derived from them (e.g. by gluing along localizations or taking noncommutativestack quotients).
For example, noncommutative algebraic geometry is supposed to extend the notion of an algebraic scheme by suitable gluing of spectra of noncommutative rings; depending on how literally and how generally this aim (and a notion of spectrum) is understood in the noncommutative setting, this has been achieved with varying levels of success. The noncommutative ring generalizes here the commutative ring of regular functions on a commutative scheme. Functions on usual spaces in traditional (commutative) algebraic geometry have a product defined by pointwise multiplication; as the values of these functions commute, the functions also commute: a times b equals b times a. It is remarkable that viewing noncommutative associative algebras as algebras of functions on a "noncommutative" would-be space is a far-reaching geometric intuition, though it formally looks like a fallacy.[citation needed]
Much of the motivation for noncommutative geometry, and in particular for the noncommutative algebraic geometry, is from physics; especially from quantum physics, where thealgebras of observablesare indeed viewed as noncommutative analogues of functions, hence having the ability to observe their geometric aspects is desirable.
One of the values of the field is that it also provides new techniques to study objects in commutative algebraic geometry such asBrauer groups.
The methods of noncommutative algebraic geometry are analogs of the methods of commutative algebraic geometry, but frequently the foundations are different. Local behavior in commutative algebraic geometry is captured bycommutative algebraand especially the study oflocal rings. These do not have a ring-theoretic analogue in the noncommutative setting; though in a categorical setup one can talk aboutstacksof local categories ofquasicoherent sheavesover noncommutative spectra. Global properties such as those arising fromhomological algebraandK-theorymore frequently carry over to the noncommutative setting.
Commutative algebraic geometry begins by constructing the spectrum of a ring. The points of the algebraic variety (or more generally, scheme) are the prime ideals of the ring, and the functions on the algebraic variety are the elements of the ring. A noncommutative ring, however, may not have any proper non-zero two-sided prime ideals. For instance, this is true of the Weyl algebra of polynomial differential operators on affine space: the Weyl algebra is a simple ring. Therefore, one can, for instance, attempt to replace the prime spectrum by a primitive spectrum; there are also theories of non-commutative localization and descent. This works to some extent: for instance, Dixmier's enveloping algebras may be thought of as working out non-commutative algebraic geometry for the primitive spectrum of an enveloping algebra of a Lie algebra. Another work in a similar spirit is Michael Artin's notes titled "noncommutative rings",[1] which in part is an attempt to study representation theory from a non-commutative-geometry point of view. The key insight of both approaches is that irreducible representations, or at least primitive ideals, can be thought of as "non-commutative points".
As it turned out, starting from, say, primitive spectra, it was not easy to develop a workablesheaf theory. One might imagine this difficulty is because of a sort of quantum phenomenon: points in a space can influence points far away (and in fact, it is not appropriate to treat points individually and view a space as a mere collection of the points).
Due to the above, one accepts a paradigm implicit inPierre Gabriel's thesis and partly justified by theGabriel–Rosenberg reconstruction theorem(afterPierre GabrielandAlexander L. Rosenberg) that a commutative scheme can be reconstructed, up to isomorphism of schemes, solely from theabelian categoryofquasicoherent sheaveson the scheme.Alexander Grothendiecktaught that to do geometry one does not need a space, it is enough to have a category of sheaves on that would be space; this idea has been transmitted to noncommutative algebra byYuri Manin. There are, a bit weaker, reconstruction theorems from the derived categories of (quasi)coherent sheaves motivating thederived noncommutative algebraic geometry(see just below).
Perhaps the most recent approach is through thedeformation theory, placing non-commutative algebraic geometry in the realm ofderived algebraic geometry.
As a motivating example, consider the one-dimensional Weyl algebra over the complex numbers C. This is the quotient of the free ring C<x, y> by the relation

xy − yx = 1.
This ring represents the polynomial differential operators in a single variable x; y stands in for the differential operator ∂x. This ring fits into a one-parameter family given by the relations xy − yx = α. When α is not zero, this relation determines a ring isomorphic to the Weyl algebra. When α is zero, however, the relation is the commutativity relation for x and y, and the resulting quotient ring is the polynomial ring in two variables, C[x, y]. Geometrically, the polynomial ring in two variables represents the two-dimensional affine space A2, so the existence of this one-parameter family says that affine space admits non-commutative deformations to the space determined by the Weyl algebra. This deformation is related to the symbol of a differential operator and to the fact that A2 is the cotangent bundle of the affine line. (Studying the Weyl algebra can lead to information about affine space: the Dixmier conjecture about the Weyl algebra is equivalent to the Jacobian conjecture about affine space.)
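The Weyl relation can be checked concretely by letting the generators act on polynomials in one variable t, with X acting as multiplication by t and Y as d/dt (a sketch; the coefficient-list encoding and helper names are illustrative, and sign conventions for the commutator vary between sources). The commutator of differentiation and multiplication then acts as the identity operator:

```python
# Encode a polynomial by its coefficient list: [a0, a1, ...] = a0 + a1*t + ...
def X(f):
    """Multiplication by t: shifts every coefficient up one degree."""
    return [0] + f

def Y(f):
    """Differentiation d/dt: multiply by the exponent, drop the constant."""
    return [i * c for i, c in enumerate(f)][1:]

def sub(f, g):
    """Coefficient-wise subtraction, padding to equal length."""
    m = max(len(f), len(g))
    f = f + [0] * (m - len(f))
    g = g + [0] * (m - len(g))
    return [a - b for a, b in zip(f, g)]

f = [3, 0, 5, 2]                 # 3 + 5t^2 + 2t^3
# (YX - XY) f = f: the commutator of d/dt and t is the identity
assert sub(Y(X(f)), X(Y(f))) == f
```

This is exactly the statement that the commutator of the two generators is the scalar 1 in the Weyl algebra.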
In this line of the approach, the notion ofoperad, a set or space of operations, becomes prominent: in the introduction to (Francis 2008), Francis writes:
We begin the study of certainlesscommutative algebraic geometries. … algebraic geometry overEn{\displaystyle {\mathcal {E}}_{n}}-ringscan be thought of as interpolating between some derived theories of noncommutative and commutative algebraic geometries. Asnincreases, theseEn{\displaystyle {\mathcal {E}}_{n}}-algebras converge to thederived algebraic geometryof Toën-Vezzosi andLurie.
One of the basic constructions in commutative algebraic geometry is theProj constructionof agraded commutative ring. This construction builds aprojective algebraic varietytogether with avery ample line bundlewhosehomogeneous coordinate ringis the original ring. Building the underlying topological space of the variety requires localizing the ring, but building sheaves on that space does not. By a theorem ofJean-Pierre Serre, quasi-coherent sheaves on Proj of a graded ring are the same as graded modules over the ring up to finite dimensional factors. The philosophy oftopos theorypromoted byAlexander Grothendiecksays that the category of sheaves on a space can serve as the space itself. Consequently, in non-commutative algebraic geometry one often defines Proj in the following fashion: LetRbe a gradedC-algebra, and let Mod-Rdenote the category of graded rightR-modules. LetFdenote the subcategory of Mod-Rconsisting of all modules of finite length. ProjRis defined to be the quotient of the abelian category Mod-RbyF. Equivalently, it is a localization of Mod-Rin which two modules become isomorphic if, after taking their direct sums with appropriately chosen objects ofF, they are isomorphic in Mod-R.
This approach leads to a theory ofnon-commutative projective geometry. A non-commutative smooth projective curve turns out to be a smooth commutative curve, but for singular curves or smooth higher-dimensional spaces, the non-commutative setting allows new objects.
|
https://en.wikipedia.org/wiki/Noncommutative_algebraic_geometry
|
Inmathematics,noncommutative harmonic analysisis the field in which results fromFourier analysisare extended totopological groupsthat are notcommutative.[1]Sincelocally compact abelian groupshave a well-understood theory,Pontryagin duality, which includes the basic structures ofFourier seriesandFourier transforms, the major business of non-commutativeharmonic analysisis usually taken to be the extension of the theory to all groupsGthat arelocally compact. The case ofcompact groupsis understood, qualitatively and after thePeter–Weyl theoremfrom the 1920s, as being generally analogous to that offinite groupsand theircharacter theory.
The main task is therefore the case ofGthat is locally compact, not compact and not commutative. The interesting examples include manyLie groups, and alsoalgebraic groupsoverp-adic fields. These examples are of interest and frequently applied inmathematical physics, and contemporarynumber theory, particularlyautomorphic representations.
What to expect is known as the result of basic work ofJohn von Neumann. He showed that if thevon Neumann group algebraofGis of type I, thenL2(G) as aunitary representationofGis adirect integralof irreducible representations. It is parametrized therefore by theunitary dual, the set of isomorphism classes of such representations, which is given thehull-kernel topology. The analogue of thePlancherel theoremis abstractly given by identifying a measure on the unitary dual, thePlancherel measure, with respect to which the direct integral is taken. (For Pontryagin duality the Plancherel measure is some Haar measure on thedual grouptoG, the only issue therefore being its normalization.) For general locally compact groups, or even countable discrete groups, the von Neumann group algebra need not be of type I and the regular representation ofGcannot be written in terms of irreducible representations, even though it is unitary and completely reducible. An example where this happens is the infinite symmetric group, where the von Neumann group algebra is the hyperfinite type II1factor. The further theory divides up the Plancherel measure into a discrete and a continuous part. Forsemisimple groups, and classes ofsolvable Lie groups, a very detailed theory is available.[2]
|
https://en.wikipedia.org/wiki/Noncommutative_harmonic_analysis
|
In themathematicalfield ofrepresentation theory,group representationsdescribe abstractgroupsin terms ofbijectivelinear transformationsof avector spaceto itself (i.e. vector spaceautomorphisms); in particular, they can be used to represent group elements asinvertible matricesso that the group operation can be represented bymatrix multiplication.
In chemistry, a group representation can relate mathematical group elements to symmetric rotations and reflections of molecules.
Representations of groups allow manygroup-theoreticproblems to be reduced to problems inlinear algebra. Inphysics, they describe how thesymmetry groupof a physical system affects the solutions of equations describing that system.
The termrepresentation of a groupis also used in a more general sense to mean any "description" of a group as a group of transformations of some mathematical object. More formally, a "representation" means ahomomorphismfrom the group to theautomorphism groupof an object. If the object is a vector space we have alinear representation. Some people userealizationfor the general notion and reserve the termrepresentationfor the special case of linear representations. The bulk of this article describes linear representation theory; see the last section for generalizations.
The representation theory of groups divides into subtheories depending on the kind of group being represented. The various theories are quite different in detail, though some basic definitions and concepts are similar. The most important divisions are:
Representation theory also depends heavily on the type ofvector spaceon which the group acts. One distinguishes between finite-dimensional representations and infinite-dimensional ones. In the infinite-dimensional case, additional structures are important (e.g. whether or not the space is aHilbert space,Banach space, etc.).
One must also consider the type offieldover which the vector space is defined. The most important case is the field ofcomplex numbers. The other important cases are the field ofreal numbers,finite fields, and fields ofp-adic numbers. In general,algebraically closedfields are easier to handle than non-algebraically closed ones. Thecharacteristicof the field is also significant; many theorems for finite groups depend on the characteristic of the field not dividing theorder of the group.
Arepresentationof agroupGon avector spaceVover afieldKis agroup homomorphismfromGto GL(V), thegeneral linear grouponV. That is, a representation is a map
such that
HereVis called therepresentation spaceand the dimension ofVis called thedimensionordegreeof the representation. It is common practice to refer toVitself as the representation when the homomorphism is clear from the context.
In the case whereVis of finite dimensionnit is common to choose abasisforVand identify GL(V) withGL(n,K), the group ofn×n{\displaystyle n\times n}invertible matriceson the fieldK.
Consider the complex numberu= e2πi / 3which has the propertyu3= 1. The setC3= {1,u,u2} forms acyclic groupunder multiplication. This group has a representation ρ onC2{\displaystyle \mathbb {C} ^{2}}given by:
This representation is faithful because ρ is aone-to-one map.
Another representation forC3onC2{\displaystyle \mathbb {C} ^{2}}, isomorphic to the previous one, is σ given by:
The groupC3may also be faithfully represented onR2{\displaystyle \mathbb {R} ^{2}}by τ given by:
where
A possible representation onR3{\displaystyle \mathbb {R} ^{3}}is given by the set of cyclic permutation matricesv:
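This representation can be verified mechanically. A minimal sketch in plain Python (the specific matrix v below is a standard concrete choice of cyclic permutation matrix, assumed here since the matrices themselves are not reproduced in the text):

```python
# The 3-dimensional representation of C3 sketched above sends the generator u
# to a cyclic permutation matrix v (an assumed concrete choice).

def mat_mul(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # image of 1
v = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]   # image of u: cycles the basis vectors
v2 = mat_mul(v, v)                      # image of u**2

# v has order 3, so k -> v**k defines a representation of C3 = {1, u, u**2}.
assert mat_mul(v, v2) == I

# Homomorphism property: rho(u**j) rho(u**k) = rho(u**((j + k) mod 3)).
rho = {0: I, 1: v, 2: v2}
for j in range(3):
    for k in range(3):
        assert mat_mul(rho[j], rho[k]) == rho[(j + k) % 3]
```

The three matrices I, v, v2 are distinct, so the map is one-to-one and the representation is faithful.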
Another example:
LetV{\displaystyle V}be the space of homogeneous degree-3 polynomials over the complex numbers in variablesx1,x2,x3.{\displaystyle x_{1},x_{2},x_{3}.}
ThenS3{\displaystyle S_{3}}acts onV{\displaystyle V}by permutation of the three variables.
For instance,(12){\displaystyle (12)}sendsx13{\displaystyle x_{1}^{3}}tox23{\displaystyle x_{2}^{3}}.
A subspaceWofVthat is invariant under thegroup actionis called asubrepresentation. IfVhas exactly two subrepresentations, namely the zero-dimensional subspace andVitself, then the representation is said to beirreducible; if it has a proper subrepresentation of nonzero dimension, the representation is said to bereducible. The representation of dimension zero is considered to be neither reducible nor irreducible,[1]just as the number 1 is considered to be neithercompositenorprime.
Under the assumption that thecharacteristicof the fieldKdoes not divide the size of the group, representations offinite groupscan be decomposed into adirect sumof irreducible subrepresentations (seeMaschke's theorem). This holds in particular for any representation of a finite group over thecomplex numbers, since the characteristic of the complex numbers is zero, which never divides the size of a group.
In the example above, the first two representations given (ρ and σ) are both decomposable into two 1-dimensional subrepresentations (given by span{(1,0)} and span{(0,1)}), while the third representation (τ) is irreducible.
Aset-theoretic representation(also known as a group action orpermutation representation) of agroupGon asetXis given by afunctionρ :G→XX, the set of functions fromXtoX, such that for allg1,g2inGand allxinX:
where1{\displaystyle 1}is the identity element ofG. This condition and the axioms for a group imply that ρ(g) is abijection(orpermutation) for allginG. Thus we may equivalently define a permutation representation to be agroup homomorphismfrom G to thesymmetric groupSXofX.
For more information on this topic see the article ongroup action.
Every groupGcan be viewed as acategorywith a single object;morphismsin this category are just the elements ofG. Given an arbitrary categoryC, arepresentationofGinCis afunctorfromGtoC. Such a functor selects an objectXinCand a group homomorphism fromGto Aut(X), theautomorphism groupofX.
In the case whereCisVectK, thecategory of vector spacesover a fieldK, this definition is equivalent to a linear representation. Likewise, a set-theoretic representation is just a representation ofGin thecategory of sets.
WhenCisAb, thecategory of abelian groups, the objects obtained are calledG-modules.
For another example consider thecategory of topological spaces,Top. Representations inTopare homomorphisms fromGto thehomeomorphismgroup of a topological spaceX.
Two types of representations closely related to linear representations are:
|
https://en.wikipedia.org/wiki/Representation_theory_(group_theory)
|
Mental calculation(also known asmentalcomputation[1]) consists ofarithmeticalcalculationsmade by themind, within thebrain, with no help from any supplies (such as pencil and paper) or devices such as acalculator. People may use mental calculation when computing tools are not available, when it is faster than other means of calculation (such as conventional educational institution methods), or even ina competitive context. Mental calculation often involves the use of specific techniques devised for specific types of problems. Many of these techniques take advantage of or rely on thedecimalnumeral system.
Capacity of short-term memory is a necessary factor for the successful acquisition of calculation skills,[2] specifically perhaps the phonological loop, in the context of addition calculations (only).[3] Mental flexibility also contributes to the probability of successfully completing mental effort: the adaptive use of knowledge of the rules and ways any number associates with any other, of how multitudes of numbers are meaningfully associative, and of number patterns, combined with algorithmic processes.[4]
It was found during the eighteenth century that children with powerful mental capacities for calculation developed either into very capable and successful scientists or mathematicians, or instead became counterexamples, experiencing personal retardation.[5] People who perform mental calculations of sufficient complexity with unusual speed and reliable correctness are prodigies or savants.[6] By the same token, in some contexts and at some times, such an exceptional individual has been known as a lightning calculator or a genius.[7]
In a survey of children in England it was found thatmental imagerywas used for mental calculation.[8]Byneuro-imaging, brain activity in theparietal lobesof the right hemisphere was found to be associated with mental imaging.[9]
Mental calculation is taught as an element of schooling, with a focus in some teaching contexts on mental strategies.[10]
Mental calculation with large numbers, such as adding, subtracting, multiplying or dividing them, is an exceptional ability.
Skilled calculators were necessary in research centers such asCERNbefore the advent of modern electronic calculators and computers. See, for instance, Steven B. Smith's 1983 bookThe Great Mental Calculators, or the 2016 bookHidden Figures[11]andthe film adapted from it.
The Mental Calculation World Cup is an international competition that attempts to find the world's best mental calculator, and also the best at specific types of mental calculation, such as addition, multiplication, square roots or calendar reckoning. The first Mental Calculation World Cup[12] took place in 2004. It is an in-person competition that occurs every other year in Germany. It consists of four standard tasks (addition of ten ten-digit numbers, multiplication of two eight-digit numbers, calculation of square roots, and calculation of weekdays for given dates) in addition to a variety of "surprise" tasks.[12] The last edition was organized in September 2024 and won by Aaryan Nitin Shukla, who successfully defended his title to become a two-time World Champion.
TheMind Sports Olympiadhas staged annual world championships since 1998.
The first international Memoriad[13]was held inIstanbul, Turkey, in 2008.
The second Memoriad took place in Antalya, Turkey, on 24–25 November 2012. 89 competitors from 20 countries participated. Awards and money prizes were given for 10 categories in total, of which 5 categories had to do with mental calculation (mental addition, mental multiplication, mental square roots (non-integer), mental calendar date calculation, and Flash Anzan). The third Memoriad was held in Las Vegas, USA, from November 8 through November 10, 2016.
TheMind Sports Organisationrecognizes six grandmasters of mental calculation:Robert Fountain(1999),George Lane(2001),Gert Mittring(2005), Chris Bryant (2017),Wenzel Grüß(2019), and Kaloyan Geshev (2022), and one international master,Andy Robertshaw(2008). In 2021,Aaryan Nitin Shuklabecame the youngest champion ever at an age of just 11 years.
Shakuntala Devi from India has often been mentioned in the Guinness World Records. Neelakantha Bhanu Prakash from India has often been mentioned in the Limca Book of Records for racing past the speed of a calculator in addition.[14] Sri Lankan-Malaysian performer Yaashwin Sarawanan was the runner-up in 2019's Asia's Got Talent.
In the 2009 Japanese animated filmSummer Wars, the main character, mathematical genius Kenji Koiso, is able to mentally break purely mathematical encryption codes generated by the OZ virtual world's security system. He can alsomentally calculate the day of the week a person was born, based on their birthday.
|
https://en.wikipedia.org/wiki/Mental_calculation
|
Theparallel operator‖{\displaystyle \|}(pronounced "parallel",[1]following theparallel lines notation from geometry;[2][3]also known asreduced sum,parallel sumorparallel addition) is abinary operationwhich is used as a shorthand inelectrical engineering,[4][5][6][nb 1]but is also used inkinetics,fluid mechanicsandfinancial mathematics.[7][8]The nameparallelcomes from the use of the operator computing the combined resistance ofresistors in parallel.
The parallel operator represents thereciprocalvalue of a sum of reciprocal values (sometimes also referred to as the "reciprocal formula" or "harmonicsum") and is defined by:[9][6][10][11]
wherea,b, anda∥b{\displaystyle a\parallel b}are elements of theextended complex numbersC¯=C∪{∞}.{\displaystyle {\overline {\mathbb {C} }}=\mathbb {C} \cup \{\infty \}.}[12][13]
The operator gives half of theharmonic meanof two numbersaandb.[7][8]
As a special case, for any numbera∈C¯{\displaystyle a\in {\overline {\mathbb {C} }}}:
Further, for all distinct numbersa≠b{\displaystyle a\neq b}:
with|a∥b|{\displaystyle {\big |}\,a\parallel b\,{\big |}}representing theabsolute valueofa∥b{\displaystyle a\parallel b}, andmin(x,y){\displaystyle \min(x,y)}meaning theminimum(least element) amongxandy.
Ifa{\displaystyle a}andb{\displaystyle b}are distinct positive real numbers then12min(a,b)<|a∥b|<min(a,b).{\displaystyle {\tfrac {1}{2}}\min(a,b)<{\big |}\,a\parallel b\,{\big |}<\min(a,b).}
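The definition and the bounds above can be checked with exact rational arithmetic. A minimal sketch in Python using `fractions.Fraction` (it covers only finite nonzero operands, not the extended complex cases):

```python
from fractions import Fraction

def parallel(a, b):
    """Parallel sum a ∥ b = 1/(1/a + 1/b) = ab/(a + b), for finite nonzero a, b."""
    return (a * b) / (a + b)

a, b = Fraction(3), Fraction(6)
p = parallel(a, b)
assert p == 2                               # 3 ∥ 6 = 18/9 = 2

# a ∥ b is half of the harmonic mean of a and b.
harmonic_mean = 2 / (1 / a + 1 / b)
assert p == harmonic_mean / 2

# For distinct positive a and b:  min(a, b)/2 < |a ∥ b| < min(a, b).
assert min(a, b) / 2 < abs(p) < min(a, b)
```

Exact rationals avoid the floating-point rounding that would otherwise make equality checks like these fragile.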
The concept has been extended from ascalaroperation tomatrices[14][15][16][17][18]and furthergeneralized.[19]
The operator was originally introduced asreduced sumby Sundaram Seshu in 1956,[20][21][14]studied as operator∗by Kent E. Erickson in 1959,[22][23][14]and popularized byRichard James Duffinand William Niles Anderson, Jr. asparallel additionorparallel sumoperator:inmathematicsandnetwork theorysince 1966.[15][16][1]While some authors continue to use this symbol up to the present,[7][8]for example, Sujit Kumar Mitra used∙as a symbol in 1970.[14]Inapplied electronics, a∥sign became more common as the operator's symbol around 1974.[24][25][26][27][28][nb 1][nb 2]This was often written as doubled vertical line (||) available in mostcharacter sets(sometimes italicized as//[29][30]), but now can be represented usingUnicodecharacter U+2225 ( ∥ ) for "parallel to". InLaTeXand related markup languages, the macros\|and\parallelare often used (and rarely\smallparallelis used) to denote the operator's symbol.
LetC~{\displaystyle {\widetilde {\mathbb {C} }}}represent theextended complex planeexcluding zero,C~:=C∪{∞}∖{0},{\displaystyle {\widetilde {\mathbb {C} }}:=\mathbb {C} \cup \{\infty \}\smallsetminus \{0\},}andφ{\displaystyle \varphi }thebijective functionfromC{\displaystyle \mathbb {C} }toC~{\displaystyle {\widetilde {\mathbb {C} }}}such thatφ(z)=1/z.{\displaystyle \varphi (z)=1/z.}One has identities
and
This implies immediately thatC~{\displaystyle {\widetilde {\mathbb {C} }}}is afieldwhere the parallel operator takes the place of the addition, and that this field isisomorphictoC.{\displaystyle \mathbb {C} .}
The following properties may be obtained by translating throughφ{\displaystyle \varphi }the corresponding properties of the complex numbers.
As for any field,(C~,∥,⋅){\displaystyle ({\widetilde {\mathbb {C} }},\,\parallel \,,\,\cdot \,)}satisfies a variety of basic identities.
It iscommutativeunder parallel and multiplication:
It isassociativeunder parallel and multiplication:[12][7][8]
Both operations have anidentityelement; for parallel the identity is∞{\displaystyle \infty }while for multiplication the identity is1:
Every elementa{\displaystyle a}ofC~{\displaystyle {\widetilde {\mathbb {C} }}}has aninverseunder parallel, equal to−a,{\displaystyle -a,}the additive inverse under addition. (But0has no inverse under parallel.)
The identity element∞{\displaystyle \infty }is its own inverse,∞∥∞=∞.{\displaystyle \infty \parallel \infty =\infty .}
Every elementa≠∞{\displaystyle a\neq \infty }ofC~{\displaystyle {\widetilde {\mathbb {C} }}}has amultiplicative inversea−1=1/a{\displaystyle a^{-1}=1/a}:
Multiplication isdistributiveover parallel:[1][7][8]
Repeated parallel is equivalent to division,
Or, multiplying both sides byn,
Unlike forrepeated addition, this does not commute:
Using the distributive property twice, the product of two parallel binomials can be expanded as
The square of a binomial is
The cube of a binomial is
In general, thenth power of a binomial can be expanded usingbinomial coefficientswhich are the reciprocal of those under addition, resulting in an analog of thebinomial formula:
The following identities hold:
As with apolynomialunder addition, a parallel polynomial with coefficientsak{\displaystyle a_{k}}inC~{\textstyle {\widetilde {\mathbb {C} }}}(witha0≠∞{\displaystyle a_{0}\neq \infty })can befactoredinto a product of monomials:
for some rootsrk{\displaystyle r_{k}}(possibly repeated) inC~.{\textstyle {\widetilde {\mathbb {C} }}.}
Analogous to polynomials under addition, the polynomial equation
implies thatx=rk{\textstyle x=r_{k}}for somek.
Alinear equationcan be easily solved via the parallel inverse:
To solve a parallelquadratic equation,complete the squareto obtain an analog of thequadratic formula
Theextended complex numbersincludingzero,C¯:=C∪{∞},{\displaystyle {\overline {\mathbb {C} }}:=\mathbb {C} \cup \{\infty \},}is no longer a field under parallel and multiplication, because0has no inverse under parallel. (This is analogous to the way(C¯,+,⋅){\displaystyle {\bigl (}{\overline {\mathbb {C} }},{+},{\cdot }{\bigr )}}is not a field because∞{\displaystyle \infty }has no additive inverse.)
For every non-zeroa,
The quantity0∥(−0)=0∥0{\displaystyle 0\parallel (-0)=0\parallel 0}can either be left undefined (seeindeterminate form) or defined to equal0.
In the absence of parentheses, the parallel operator is defined astaking precedenceover addition or subtraction, similar to multiplication.[1][31][9][10]
There are applications of the parallel operator in mechanics, electronics, optics, and study of periodicity:
Given massesmandM, thereduced massμ=mMm+M=m∥M{\displaystyle \mu ={\frac {mM}{m+M}}=m\parallel M}is frequently applied in mechanics. For instance, when the masses orbit each other, themoment of inertiais their reduced mass times the distance between them.
Inelectrical engineering, the parallel operator can be used to calculate the total impedance of variousserial and parallelelectrical circuits.[nb 2]There is adualitybetween the usual(series) sumand the parallel sum.[7][8]
For instance, the totalresistanceofresistors connected in parallelis the reciprocal of the sum of the reciprocals of the individualresistors.
Likewise for the totalcapacitanceof serialcapacitors.[nb 2]
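As an illustration of these circuit applications (a sketch, assuming finite nonzero component values; exact rationals are used so the equality checks are reliable), the combined resistance of parallel resistors and the capacitance of series capacitors are both parallel sums, and repeated parallel acts as division:

```python
from fractions import Fraction as F

def parallel(*xs):
    """Combined parallel value 1/(1/x1 + ... + 1/xn)."""
    return 1 / sum(1 / F(x) for x in xs)

# Resistors in parallel: total resistance of 4 Ω and 12 Ω is 4 ∥ 12 = 3 Ω.
assert parallel(4, 12) == 3

# Capacitors in series combine the same way: two 10 µF capacitors give 5 µF.
assert parallel(10, 10) == 5

# Three 6 Ω resistors in parallel: repeated parallel is division, 6/3 = 2 Ω.
assert parallel(6, 6, 6) == 2
```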
Thecoalesced density functionfcoalesced(x) of n independent probability density functions f1(x), f2(x), …, fn(x), is equal to the reciprocal of the sum of the reciprocal densities.[32]
In geometric optics, the thin lens approximation to the lens maker's equation gives the focal length of a thin lens as the parallel sum of the object and image distances.
The time between conjunctions of two orbiting bodies is called thesynodic period. If the period of the slower body is T2, and the period of the faster is T1, then the synodic period is
Suggested already by Kent E. Erickson as a subroutine in digital computers in 1959,[22] the parallel operator is implemented as a keyboard operator on the Reverse Polish Notation (RPN) scientific calculators WP 34S since 2008[33][34][35] as well as on the WP 34C[36] and WP 43S since 2015,[37][38] allowing one to solve even cascaded problems with a few keystrokes, like 270 ↵Enter 180 ∥ 120 ∥.
Given a field F there are two embeddings of F into the projective line P(F): z → [z : 1] and z → [1 : z]. These embeddings overlap except for [0 : 1] and [1 : 0]. The parallel operator relates the addition operation between the embeddings. In fact, the homographies on the projective line are represented by 2 × 2 matrices M(2, F), and the field operations (+ and ×) are extended to homographies. Each embedding has its addition a + b represented by the following matrix multiplications in M(2, F):
The two matrix products show that there are two subgroups of M(2,F) isomorphic to (F,+), the additive group ofF. Depending on which embedding is used, one operation is +, the other is∥.{\displaystyle \parallel .}
|
https://en.wikipedia.org/wiki/Parallel_addition_(mathematics)
|
Verbal arithmetic, also known asalphametics,cryptarithmetic,cryptarithmorword addition, is a type ofmathematical gameconsisting of a mathematicalequationamong unknownnumbers, whosedigitsare represented bylettersof the alphabet. The goal is to identify the value of each letter. The name can be extended to puzzles that use non-alphabetic symbols instead of letters.
The equation is typically a basic operation ofarithmetic, such asaddition,multiplication, ordivision. The classic example, published in the July 1924 issue ofThe Strand MagazinebyHenry Dudeney,[1]is:
SEND+MORE=MONEY{\displaystyle {\begin{matrix}&&{\text{S}}&{\text{E}}&{\text{N}}&{\text{D}}\\+&&{\text{M}}&{\text{O}}&{\text{R}}&{\text{E}}\\\hline =&{\text{M}}&{\text{O}}&{\text{N}}&{\text{E}}&{\text{Y}}\\\end{matrix}}}
The solution to this puzzle is O = 0, M = 1, Y = 2, E = 5, N = 6, D = 7, R = 8, and S = 9.
Traditionally, each letter should represent a different digit, and (as an ordinary arithmetic notation) the leading digit of a multi-digit number must not be zero. A good puzzle should have one unique solution, and the letters should make up a phrase (as in the example above).
Verbal arithmetic can be useful as a motivation and source of exercises in theteachingofelementary algebra.
Cryptarithmicpuzzles are quite old and their inventor is unknown. An 1864 example in The American Agriculturist[2]disproves the popular notion that it was invented bySam Loyd. The name "cryptarithm" was coined by puzzlist Minos (pseudonym ofSimon Vatriquant) in the May 1931 issue of Sphinx, a Belgian magazine of recreational mathematics, and was translated as "cryptarithmetic" byMaurice Kraitchikin 1942.[3]In 1955, J. A. H. Hunter introduced the word "alphametic" to designate cryptarithms, such as Dudeney's, whose letters form meaningfulwordsor phrases.[4]
Types of cryptarithm include the alphametic, the digimetic, and the skeletal division.
Solving a cryptarithm by hand usually involves a mix of deductions and exhaustive tests of possibilities. For instance the following sequence of deductions solves Dudeney's SEND+MORE = MONEY puzzle above (columns are numbered from right to left):
SEND+MORE=MONEY{\displaystyle {\begin{matrix}&&{\text{S}}&{\text{E}}&{\text{N}}&{\text{D}}\\+&&{\text{M}}&{\text{O}}&{\text{R}}&{\text{E}}\\\hline =&{\text{M}}&{\text{O}}&{\text{N}}&{\text{E}}&{\text{Y}}\\\end{matrix}}}
Another example of TO+GO=OUT (source is unknown):
TO+GO=OUT{\displaystyle {\begin{matrix}&&{\text{T}}&{\text{O}}\\+&&{\text{G}}&{\text{O}}\\\hline =&{\text{O}}&{\text{U}}&{\text{T}}\\\end{matrix}}}
The use ofmodular arithmeticoften helps. For example, use of mod-10 arithmetic allows the columns of an addition problem to be treated assimultaneous equations, while the use of mod-2 arithmetic allows inferences based on theparityof the variables.
Incomputer science, cryptarithms provide good examples to illustrate thebrute forcemethod, and algorithms that generate allpermutationsofmchoices fromnpossibilities. For example, the Dudeney puzzle above can be solved by testing all assignments of eight values among the digits 0 to 9 to the eight letters S, E, N, D, M, O, R, Y, giving 1,814,400 possibilities. They also provide good examples for thebacktrackingparadigm ofalgorithmdesign.
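A minimal sketch of the brute-force method just described (plain Python; it simply enumerates the 1,814,400 digit assignments and returns the puzzle's unique solution):

```python
from itertools import permutations

# Brute-force solver for SEND + MORE = MONEY: enumerate every assignment of
# distinct digits to the eight letters, rejecting leading zeros.
def solve():
    for perm in permutations(range(10), 8):
        s, e, n, d, m, o, r, y = perm
        if s == 0 or m == 0:           # leading digits must not be zero
            continue
        send = 1000 * s + 100 * e + 10 * n + d
        more = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            return dict(zip("SENDMORY", perm))

sol = solve()
assert sol == {"S": 9, "E": 5, "N": 6, "D": 7, "M": 1, "O": 0, "R": 8, "Y": 2}
```

The assignment found is exactly the solution given earlier: 9567 + 1085 = 10652. A backtracking solver would prune this search by assigning letters column by column instead.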
When generalized to arbitrary bases, the problem of determining if a cryptarithm has a solution isNP-complete.[6](The generalization is necessary for the hardness result because in base 10, there are only 10! possible assignments of digits to letters, and these can be checked against the puzzle in linear time.)
Alphametics can be combined with other number puzzles such as Sudoku and Kakuro to create crypticSudokuandKakuro.
Anton Pavlis constructed an alphametic in 1983 with 41 addends:
(The answer is that MANYOTHERS=2764195083.)[7]
|
https://en.wikipedia.org/wiki/Verbal_arithmetic
|
Theabsolute differenceof tworeal numbersx{\displaystyle x}andy{\displaystyle y}is given by|x−y|{\displaystyle |x-y|}, theabsolute valueof theirdifference. It describes the distance on thereal linebetween the points corresponding tox{\displaystyle x}andy{\displaystyle y}, and is a special case of theLpdistancefor all1≤p≤∞{\displaystyle 1\leq p\leq \infty }. Its applications in statistics include theabsolute deviationfrom acentral tendency.
Absolute difference has the following properties:
Because it is non-negative, nonzero for distinct arguments, symmetric, and obeys the triangle inequality, the real numbers form ametric spacewith the absolute difference as its distance, the familiar measure of distance along a line.[4]It has been called "the most natural metric space",[5]and "the most important concrete metric space".[2]This distance generalizes in many different ways to higher dimensions, as a special case of theLpdistancesfor all1≤p≤∞{\displaystyle 1\leq p\leq \infty }, including thep=1{\displaystyle p=1}andp=2{\displaystyle p=2}cases (taxicab geometryandEuclidean distance, respectively). It is also the one-dimensional special case ofhyperbolic distance.
Instead of|x−y|{\displaystyle |x-y|}, the absolute difference may also be expressed asmax(x,y)−min(x,y).{\displaystyle \max(x,y)-\min(x,y).}Generalizing this to more than two values: in any subsetS{\displaystyle S}of the real numbers which has aninfimumand asupremum, the absolute difference between any two numbers inS{\displaystyle S}is less than or equal to the absolute difference between the infimum and supremum ofS{\displaystyle S}.
The absolute difference takes non-negative integers to non-negative integers. As a binary operation that is commutative but not associative, with an identity element on the non-negative numbers, the absolute difference gives the non-negative numbers (whether real or integer) the algebraic structure of acommutative magmawith identity.[1]
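A few lines of Python make this magma structure concrete: the operation is commutative with identity 0 on the non-negative integers, but fails associativity.

```python
# Absolute difference on the non-negative integers: a commutative magma
# with identity 0 that is not associative.
def absdiff(x, y):
    return abs(x - y)

assert absdiff(3, 8) == absdiff(8, 3) == 5       # commutative
assert absdiff(7, 0) == 7                        # 0 is an identity element

# Not associative: (1 Δ 2) Δ 3 = 2, but 1 Δ (2 Δ 3) = 0.
assert absdiff(absdiff(1, 2), 3) == 2
assert absdiff(1, absdiff(2, 3)) == 0
```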
The absolute difference is used to define therelative difference, the absolute difference between a given value and a reference value divided by the reference value itself.[6]
In the theory ofgraceful labelingsingraph theory, vertices are labeled bynatural numbersand edges are labeled by the absolute difference of the numbers at their two vertices. A labeling of this type is graceful when the edge labels are distinct and consecutive from 1 to the number of edges.[7]
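For example, labeling the four vertices of the path graph P4 with 0, 3, 1, 2 yields edge labels 3, 2, 1, which are distinct and consecutive from 1 to the number of edges. A small checker (a sketch; `is_graceful` is an illustrative helper, not a standard API):

```python
# Check whether a vertex labeling of a graph is graceful: the absolute
# differences across the edges must be exactly 1, 2, ..., number of edges.
def is_graceful(labels, edges):
    edge_labels = sorted(abs(labels[u] - labels[v]) for u, v in edges)
    return edge_labels == list(range(1, len(edges) + 1))

# Path graph P4 with vertex labels 0, 3, 1, 2 along the path.
path_labels = {0: 0, 1: 3, 2: 1, 3: 2}
path_edges = [(0, 1), (1, 2), (2, 3)]
assert is_graceful(path_labels, path_edges)
```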
As well as being a special case of the Lpdistances, absolute difference can be used to defineChebyshev distance(L∞), in which the distance between points is the maximum or supremum of the absolute differences of their coordinates.[8]
In statistics, theabsolute deviationof a sampled number from acentral tendencyis its absolute difference from the center, theaverage absolute deviationis the average of the absolute deviations of a collection of samples, andleast absolute deviationsis a method forrobust statisticsbased on minimizing the average absolute deviation.
|
https://en.wikipedia.org/wiki/Absolute_difference
|
Elementary arithmeticis a branch ofmathematicsinvolvingaddition,subtraction,multiplication, anddivision. Due to its low level ofabstraction, broad range of application, and position as the foundation of all mathematics, elementary arithmetic is generally the first branch of mathematics taught in schools.[1][2]
Innumeral systems,digitsare characters used to represent the value of numbers. An example of a numeral system is the predominantly usedIndo-Arabic numeral system(0 to 9), which uses adecimalpositional notation.[3]Other numeral systems include theKaktovik system(often used in theEskimo-Aleutlanguages ofAlaska,Canada, andGreenland), which is avigesimalpositional notationsystem.[4]Regardless of the numeral system used, the results of arithmetic operations are unaffected.
In elementary arithmetic, thesuccessorof anatural number(including zero) is the next natural number and is the result of adding one to that number. The predecessor of a natural number (excluding zero) is the previous natural number and is the result of subtracting one from that number. For example, the successor of zero is one, and the predecessor of eleven is ten (0+1=1{\displaystyle 0+1=1}and11−1=10{\displaystyle 11-1=10}). Every natural number has a successor, and every natural number except 0 has a predecessor.[5]
The natural numbers have atotal ordering. If one number is greater than (>{\displaystyle >}) another number, then the latter is less than (<{\displaystyle <}) the former. For example, three is less than eight (3<8{\displaystyle 3<8}), thus eight is greater than three (8>3{\displaystyle 8>3}). The natural numbers are alsowell-ordered, meaning that any subset of the natural numbers has aleast element.
Counting assigns a natural number to each object in aset, starting with 1 for the first object and increasing by 1 for each subsequent object. The number of objects in the set is the count. This is also known as thecardinalityof the set.
Counting can also be the process oftallying, the process of drawing a mark for each object in a set.
Additionis a mathematical operation that combines two or more numbers (called addends or summands) to produce a combined number (called the sum). The addition of two numbers is expressed with the plus sign (+{\displaystyle +}).[6]It is performed according to these rules:
When the sum of a pair of digits results in a two-digit number, the "tens" digit is referred to as the "carry digit".[9]In elementary arithmetic, students typically learn to addwhole numbersand may also learn about topics such asnegative numbersandfractions.
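The carry rule can be sketched as a column-by-column addition of decimal digit strings (an illustration, not a prescribed school algorithm):

```python
# Add two decimal numbers given as digit strings, column by column from the
# right, tracking the carry digit described above.
def add_digits(x: str, y: str) -> str:
    x, y = x.zfill(len(y)), y.zfill(len(x))   # pad to equal length
    result, carry = [], 0
    for a, b in zip(reversed(x), reversed(y)):
        s = int(a) + int(b) + carry
        result.append(str(s % 10))            # ones digit of the column sum
        carry = s // 10                       # "tens" digit carried left
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

assert add_digits("86", "39") == "125"
```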
Subtractionevaluates the difference between two numbers, where the minuend is the number being subtracted from, and the subtrahend is the number being subtracted. It is represented using the minus sign (−{\displaystyle -}). The minus sign is also used to notate negative numbers.[10]
Subtraction is not commutative, which means that the order of the numbers can change the final value;3−5{\displaystyle 3-5}is not the same as5−3{\displaystyle 5-3}. In elementary arithmetic, the minuend is always larger than the subtrahend to produce a positive result.
Subtraction is also used to separate,combine(e.g., find the size of a subset of a specific set), and find quantities in other contexts.
There are several methods to accomplish subtraction. Thetraditional mathematicsmethod subtracts using methods suitable for hand calculation.[11]Reform mathematicsis distinguished generally by the lack of preference for any specific technique, replaced by guiding students to invent their own methods of computation.
American schools teach a method of subtraction using borrowing.[12]A subtraction problem such as86−39{\displaystyle 86-39}is solved by borrowing a 10 from the tens place to add to the ones place in order to facilitate the subtraction. Subtracting 9 from 6 involves borrowing a 10 from the tens place, making the problem into70+16−39{\displaystyle 70+16-39}. This is indicated by crossing out the 8, writing a 7 above it, and writing a 1 above the 6. These markings are called "crutches", which were invented byWilliam A. Brownell, who used them in a study, in November 1937.[13]
The Austrian method, also known as the additions method, is taught in certain European countries[which?]. In contrast to the previous method, no borrowing is used, although there are crutches that vary according to certain countries.[14][15]The method of addition involves augmenting the subtrahend. This transforms the previous problem into(80+16)−(39+10){\displaystyle (80+16)-(39+10)}. A small 1 is marked below the subtrahend digit as a reminder.
Subtracting the numbers 792 and 308, starting with the ones column, 2 is smaller than 8. Using the borrowing method, 10 is borrowed from 90, reducing 90 to 80. This changes the problem to12−8{\displaystyle 12-8}.
In the tens column, the difference between 80 and 0 is 80.
In the hundreds column, the difference between 700 and 300 is 400.
The result:
792−308=484{\displaystyle 792-308=484}
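The borrowing method above can be sketched as follows (assuming, as the text does, that the minuend is at least the subtrahend):

```python
# Right-to-left subtraction with borrowing, as in the 792 - 308 example:
# when a column's top digit is too small, borrow 10 from the next column.
def subtract_digits(x: str, y: str) -> str:
    y = y.zfill(len(x))                      # pad the subtrahend
    result, borrow = [], 0
    for a, b in zip(reversed(x), reversed(y)):
        d = int(a) - int(b) - borrow
        if d < 0:                            # borrow 10 from the next column
            d += 10
            borrow = 1
        else:
            borrow = 0
        result.append(str(d))
    return "".join(reversed(result)).lstrip("0") or "0"

assert subtract_digits("792", "308") == "484"
assert subtract_digits("86", "39") == "47"
```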
Multiplicationis a mathematical operation of repeated addition. When two numbers are multiplied, the resulting value is a product. The numbers being multiplied are multiplicands, multipliers, or factors. Multiplication can be expressed as "five times three equals fifteen," "five times three is fifteen," or "fifteen is the product of five and three."
Multiplication is represented using the multiplication sign (×), the asterisk (*), parentheses (), or a dot (⋅). The statement "five times three equals fifteen" can be written as "5×3=15{\displaystyle 5\times 3=15}", "5∗3=15{\displaystyle 5\ast 3=15}", "(5)(3)=15{\displaystyle (5)(3)=15}", or "5⋅3=15{\displaystyle 5\cdot 3=15}".
In elementary arithmetic, multiplication satisfies the following properties[a]:
In the multiplication algorithm, the "tens" digit of the product of a pair of digits is referred to as the "carry digit".
Multiplying 729 and 3, starting on the ones column, the product of 9 and 3 is 27. 7 is written under the ones column and 2 is written above the tens column as a carry digit.
The product of 2 and 3 is 6, and the carry digit adds 2 to 6, so 8 is written under the tens column.
The product of 7 and 3 is 21, and since this is the last digit, 2 will not be written as a carry digit, but instead beside 1.
The result:
Multiplying 789 and 345, starting with the ones column, the product of 789 and 5 is 3945.
4 is in the tens place, so the multiplier is 40, not 4. The product of 789 and 40 is 31560.
3 is in the hundreds place, so the multiplier is 300. The product of 789 and 300 is 236700.
Adding all the products,
The result:
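The worked examples can be reproduced by summing shifted partial products, a minimal sketch of the long multiplication just described:

```python
# Long multiplication: multiply the multiplicand by each digit of the
# multiplier, shifted by its place value, and add the partial products.
def long_multiply(x: int, y: int) -> int:
    total = 0
    for place, digit in enumerate(reversed(str(y))):
        total += x * int(digit) * 10 ** place   # shifted partial product
    return total

assert long_multiply(729, 3) == 2187
# 789 x 345: partial products 3945 + 31560 + 236700 = 272205.
assert long_multiply(789, 345) == 272205
```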
Divisionis an arithmetic operation, and the inverse ofmultiplication, given thatc×b=a{\displaystyle c\times b=a}.
Division can be written asa÷b{\displaystyle a\div b},ab{\displaystyle {\frac {a}{b}}}, ora⁄b. This can be read verbally as "adivided byb" or "aoverb".
In some non-English-speaking cultures[which?], "adivided byb" is writtena:b. In English usage, thecolonis restricted to the concept ofratios("ais tob").
In an equationa÷b=c{\displaystyle a\div b=c}, ais the dividend,bthe divisor, andcthe quotient.Division by zerois considered impossible at an elementary arithmetic level.
Two numbers can be divided on paper usinglong division. An abbreviated version of long division,short division, can be used for smaller divisors.
A less systematic method involves the concept ofchunking, involving subtracting more multiples from the partial remainder at each stage.
Dividing 272 by 8, starting with the hundreds digit: 2 is not divisible by 8, so it is combined with the tens digit to form 27. The largest number that the divisor 8 can be multiplied by without exceeding 27 is 3, so 3 is written under the tens column. Subtracting 24 (the product of 3 and 8) from 27 gives 3 as theremainder.
Moving to the ones digit, the number is 2. Adding 30 (the remainder, 3, times 10) and 2 gives 32. The quotient of 32 and 8 is 4, which is written under the ones column.
The result:272÷8=34{\displaystyle 272\div 8=34}
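The digit-by-digit procedure above — bring down the next digit, find the largest multiple of the divisor that fits, subtract, repeat — can be sketched as:

```python
def long_divide(dividend, divisor):
    """Digit-by-digit long division, returning (quotient, remainder)."""
    quotient = 0
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)  # bring down the next digit
        q_digit = remainder // divisor           # largest multiple not exceeding it
        remainder -= q_digit * divisor           # subtract, e.g. 27 - 24 = 3
        quotient = quotient * 10 + q_digit
    return quotient, remainder

print(long_divide(272, 8))  # → (34, 0)
```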
Another method of dividing taught in some schools is the bus stop method, sometimes notated as
The steps here are shown below, using the same example as above:
The result:
272÷8=34{\displaystyle 272\div 8=34}
Elementary arithmetic is typically taught at the primary or secondary school levels and is governed by local educational standards. There has been debate about the content and methods used to teach elementary arithmetic in the United States and Canada.[16][17]
|
https://en.wikipedia.org/wiki/Elementary_arithmetic
|
Inmathematicsandcomputing, themethod of complementsis a technique to encode a symmetric range of positive and negativeintegersin a way that they can use the samealgorithm(ormechanism) foradditionthroughout the whole range. For a given number ofplaceshalf of the possible representations of numbers encode the positive numbers, the other half represents their respectiveadditive inverses. The pairs of mutually additive inverse numbers are calledcomplements. Thussubtractionof any number is implemented by adding its complement. Changing the sign of any number is encoded by generating its complement, which can be done by a very simple and efficient algorithm. This method was commonly used inmechanical calculatorsand is still used in moderncomputers. The generalized concept of theradix complement(as described below) is also valuable innumber theory, such as inMidy's theorem.
Thenines' complementof a number given in decimal representation is formed by replacing each digit with nine minus that digit. To subtract a decimal numbery(thesubtrahend) from another numberx(theminuend) two methods may be used:
In the first method, the nines' complement ofxis added toy. Then the nines' complement of the result obtained is formed to produce the desired result.
In the second method, the nines' complement ofyis added toxand one is added to the sum. The leftmost digit '1' of the result is then discarded. Discarding the leftmost '1' is especially convenient on calculators or computers that use a fixed number of digits: there is nowhere for it to go so it is simply lost during the calculation. The nines' complement plus one is known as thetens' complement.
The method of complements can be extended to other number bases (radices); in particular, it is used on most digital computers to perform subtraction, represent negative numbers in base 2 orbinary arithmeticand test overflow in calculation.[1]
Theradix complementof ann{\displaystyle n}-digit numbery{\displaystyle y}inradixb{\displaystyle b}is defined asbn−y{\displaystyle b^{n}-y}. In practice, the radix complement is more easily obtained by adding 1 to thediminished radix complement, which is(bn−1)−y{\displaystyle \left(b^{n}-1\right)-y}. While this seems equally difficult to calculate as the radix complement, it is actually simpler since(bn−1){\displaystyle \left(b^{n}-1\right)}is simply the digitb−1{\displaystyle b-1}repeatedn{\displaystyle n}times. This is becausebn−1=(b−1)(bn−1+bn−2+⋯+b+1)=(b−1)bn−1+⋯+(b−1){\displaystyle b^{n}-1=(b-1)\left(b^{n-1}+b^{n-2}+\cdots +b+1\right)=(b-1)b^{n-1}+\cdots +(b-1)}(see alsoGeometric series Formula). Knowing this, the diminished radix complement of a number can be found by complementing each digit with respect tob−1{\displaystyle b-1}, i.e. subtracting each digit iny{\displaystyle y}fromb−1{\displaystyle b-1}.
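The digit-wise shortcut described above can be sketched as follows: rather than computing bⁿ − 1 − y directly, each digit of y is replaced by (b − 1) minus that digit.

```python
def diminished_radix_complement(y, n, b=10):
    """(b**n - 1) - y, computed digit-wise: each base-b digit d becomes (b-1) - d."""
    digits = []
    for _ in range(n):
        digits.append((b - 1) - y % b)  # complement the lowest remaining digit
        y //= b
    return sum(d * b ** i for i, d in enumerate(digits))

print(diminished_radix_complement(218, 3))        # → 781 (the nines' complement)
print(diminished_radix_complement(0b0101, 4, 2))  # → 10, i.e. 0b1010 (ones' complement)
```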
The subtraction ofy{\displaystyle y}fromx{\displaystyle x}using diminished radix complements may be performed as follows. Add the diminished radix complement ofx{\displaystyle x}toy{\displaystyle y}to obtainbn−1−x+y{\displaystyle b^{n}-1-x+y}or equivalentlybn−1−(x−y){\displaystyle b^{n}-1-(x-y)}, which is the diminished radix complement ofx−y{\displaystyle x-y}. Further taking the diminished radix complement ofbn−1−(x−y){\displaystyle b^{n}-1-(x-y)}results in the desired answer ofx−y{\displaystyle x-y}.
Alternatively using the radix complement,x−y{\displaystyle x-y}may be obtained by adding the radix complement ofy{\displaystyle y}tox{\displaystyle x}to obtainx+bn−y{\displaystyle x+b^{n}-y}orx−y+bn{\displaystyle x-y+b^{n}}. Assumingy≤x{\displaystyle y\leq x}, the result will be greater or equal tobn{\displaystyle b^{n}}and dropping the leading1{\displaystyle 1}from the result is the same as subtractingbn{\displaystyle b^{n}}, making the resultx−y+bn−bn{\displaystyle x-y+b^{n}-b^{n}}or justx−y{\displaystyle x-y}, the desired result.
In thedecimalnumbering system, the radix complement is called theten's complementand the diminished radix complement thenines' complement. Inbinary, the radix complement is called thetwo's complementand the diminished radix complement theones' complement. The naming of complements in other bases is similar. Some people, notablyDonald Knuth, recommend using the placement of the apostrophe to distinguish between the radix complement and the diminished radix complement. In this usage, thefour's complementrefers to the radix complement of a number in base four whilefours' complementis the diminished radix complement of a number in base 5. However, the distinction is not important when the radix is apparent (nearly always), and the subtle difference in apostrophe placement is not common practice. Most writers useone'sandnine's complement, and many style manuals leave out the apostrophe, recommendingonesandnines complement.
The nines' complement of a decimal digit is the number that must be added to it to produce 9; the nines' complement of 3 is 6, the nines' complement of 7 is 2, and so on, see table. To form the nines' complement of a larger number, each digit is replaced by its nines' complement.
Consider the following subtraction problem: 873 − 218.
1. Compute the nines' complement of the minuend (873), giving 126.
2. Add that to the subtrahend (218): 126 + 218 = 344.
3. Now calculate the nines' complement of the result: 999 − 344 = 655, the desired answer.
1. Compute the nines' complement of 218, which is 781. Because 218 is three digits long, this is the same as subtracting 218 from 999.
2. Next, the sum ofx{\displaystyle x}and the nines' complement ofy{\displaystyle y}is taken: 873 + 781 = 1654.
3. The leading "1" digit is then dropped, giving 654.
4. This is not yet correct. In the first step, 999 was added to the equation. Then 1000 was subtracted when the leading 1 was dropped. So, the answer obtained (654) is one less than the correct answerx−y{\displaystyle x-y}. To fix this, 1 is added to the answer.
Adding a 1 gives 655, the correct answer to our original subtraction problem. The last step of adding 1 could be skipped if instead the ten's complement of y was used in the first step.
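Both methods of complement subtraction described above can be sketched in a few lines, using the 873 − 218 example:

```python
def nines_complement(x, n):
    """Replace each of the n digits of x with nine minus that digit."""
    return 10 ** n - 1 - x

def subtract_method1(x, y, n):
    """Method 1: complement the minuend, add the subtrahend, complement the result."""
    return nines_complement(nines_complement(x, n) + y, n)

def subtract_method2(x, y, n):
    """Method 2: add the nines' complement of y plus 1 (the tens' complement),
    then drop the leading '1' digit by subtracting 10**n."""
    total = x + nines_complement(y, n) + 1
    return total - 10 ** n

print(subtract_method1(873, 218, 3))  # → 655
print(subtract_method2(873, 218, 3))  # → 655
```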
In the following example the result of the subtraction has fewer digits thanx{\displaystyle x}:
Using the first method the sum of the nines' complement ofx{\displaystyle x}andy{\displaystyle y}is
The nines' complement of 999990 is 000009. Removing the leading zeros gives 9, the desired result.
If the subtrahend,y{\displaystyle y}, has fewer digits than the minuend,x{\displaystyle x}, leading zeros must be added in the second method. These zeros become leading nines when the complement is taken. For example:
can be rewritten
Replacing 00391 with its nines' complement and adding 1 produces the sum:
Dropping the leading 1 gives the correct answer: 47641.
The method of complements is especially useful in binary (radix 2) since the ones' complement is very easily obtained by inverting each bit (changing '0' to '1' and vice versa). Adding 1 to get the two's complement can be done by simulating a carry into the least significant bit. For example:
becomes the sum:
Dropping the initial "1" gives the answer: 0100 1110 (equals decimal 78)
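The binary procedure can be sketched as follows. The 8-bit operands here (150 and 72) are assumptions chosen to reproduce the result 0100 1110 (decimal 78) quoted above, since the original operands are not shown in this excerpt:

```python
def twos_complement_subtract(x, y, bits):
    """x - y via two's complement: invert the bits of y, add 1, drop the carry out."""
    mask = (1 << bits) - 1
    ones_comp = y ^ mask       # invert each bit ('0' <-> '1')
    total = x + ones_comp + 1  # the +1 simulates a carry into the least significant bit
    return total & mask        # dropping the initial '1' (the carry out of the top bit)

result = twos_complement_subtract(0b10010110, 0b01001000, 8)  # 150 - 72
print(format(result, "08b"), result)  # → 01001110 78
```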
The method of complements normally assumes that the operands are positive and thaty≤x, logical constraints given that adding and subtracting arbitrary integers is normally done by comparing signs, adding the two or subtracting the smaller from the larger, and giving the result the correct sign.
Consider what happens ifx<y. In that case, there will not be a "1" digit to cross out after the addition sincex−y+bn{\displaystyle x-y+b^{n}}will be less thanbn{\displaystyle b^{n}}. For example, (in decimal):
Complementingyand adding gives:
At this point, there is nosimpleway to complete the calculation by subtractingbn{\displaystyle b^{n}}(1000 in this case); one cannot simply ignore a leading 1. The expected answer is −144, which isn't as far off as it seems; 856 happens to be the ten's complement of 144. This issue can be addressed in a number of ways:
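One of the ways this can be addressed is sketched below: check whether a leading 1 appeared, and if not, complement the sum and mark the result negative. The operands 712 and 856 are inferred from the expected answer −144 and the complement 856 mentioned above; the handling strategy itself is an illustrative assumption, not the article's prescribed fix:

```python
def signed_subtract(x, y, n):
    """Tens'-complement subtraction of n-digit numbers that also handles x < y."""
    total = x + (10 ** n - y)     # add the radix (tens') complement of y
    if total >= 10 ** n:          # a leading '1' appeared: drop it, result >= 0
        return total - 10 ** n
    # no leading '1': the sum is itself the tens' complement of the magnitude
    return -(10 ** n - total)

print(signed_subtract(712, 856, 3))  # → -144
print(signed_subtract(856, 712, 3))  # → 144
```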
The method of complements was used in many mechanical calculators as an alternative to running the gears backwards. For example:
Use of the method of complements is ubiquitous in digital computers, regardless of the representation used for signed numbers. However, the circuitry required depends on the representation:
The method of complements was used to correct errors when accounting books were written by hand. To remove an entry from a column of numbers, the accountant could add a new entry with the ten's complement of the number to subtract. A bar was added over the digits of this entry to denote its special status. It was then possible to add the whole column of figures to obtain the corrected result.
Complementing the sum is handy for cashiers making change for a purchase paid with a single denomination that is an integer power of the currency's base. For decimal currencies that would be 10, 100, 1,000, etc., e.g. a $10.00 bill.
In grade schools, students are sometimes taught the method of complements as a shortcut useful inmental arithmetic.[3]Subtraction is done by adding the ten's complement of thesubtrahend, which is the nines' complement plus 1. The result of this addition is used when it is clear that the difference will be positive, otherwise the ten's complement of the addition's result is used with it marked as negative. The same technique works for subtracting on an adding machine.
|
https://en.wikipedia.org/wiki/Method_of_complements
|
Inmathematics, anegative numberis theoppositeof a positivereal number.[1]Equivalently, a negative number is a real number that isless thanzero. Negative numbers are often used to represent themagnitudeof a loss or deficiency. Adebtthat is owed may be thought of as a negative asset. If a quantity, such as the charge on an electron, may have either of two opposite senses, then one may choose to distinguish between those senses—perhaps arbitrarily—aspositiveandnegative. Negative numbers are used to describe values on a scale that goes below zero, such as theCelsiusandFahrenheitscales for temperature. The laws of arithmetic for negative numbers ensure that the common-sense idea of an opposite is reflected in arithmetic. For example, −(−3) = 3 because the opposite of an opposite is the original value.
Negative numbers are usually written with aminus signin front. For example, −3 represents a negative quantity with a magnitude of three, and is pronounced and read as "minus three" or "negative three". Conversely, a number that is greater than zero is calledpositive; zero is usually (but not always) thought of as neither positive nornegative.[2]The positivity of a number may be emphasized by placing a plus sign before it, e.g. +3. In general, the negativity or positivity of a number is referred to as itssign.
Every real number other than zero is either positive or negative. The non-negative whole numbers are referred to asnatural numbers(i.e., 0, 1, 2, 3, ...), while the positive and negative whole numbers (together with zero) are referred to asintegers. (Some definitions of the natural numbers exclude zero.)
Inbookkeeping, amounts owed are often represented by red numbers, or a number in parentheses, as an alternative notation to represent negative numbers.
Negative numbers were used in theNine Chapters on the Mathematical Art, which in its present form dates from the period of the ChineseHan dynasty(202 BC – AD 220), but may well contain much older material.[3]Liu Hui(c. 3rd century) established rules for adding and subtracting negative numbers.[4]By the 7th century, Indian mathematicians such asBrahmaguptawere describing the use of negative numbers.Islamic mathematiciansfurther developed the rules of subtracting and multiplying negative numbers and solved problems with negativecoefficients.[5]Prior to the concept of negative numbers, mathematicians such asDiophantusconsidered negative solutions to problems "false" and equations requiring negative solutions were described as absurd.[6]Western mathematicians likeLeibnizheld that negative numbers were invalid, but still used them in calculations.[7][8]
The relationship between negative numbers, positive numbers, and zero is often expressed in the form of anumber line:
Numbers appearing farther to the right on this line are greater, while numbers appearing farther to the left are lesser. Thus zero appears in the middle, with the positive numbers to the right and the negative numbers to the left.
Note that a negative number with greater magnitude is considered less. For example, even though (positive)8is greater than (positive)5, written
negative8is considered to be less than negative5:
In the context of negative numbers, a number that is greater than zero is referred to aspositive. Thus everyreal numberother than zero is either positive or negative, while zero itself is not considered to have a sign. Positive numbers are sometimes written with aplus signin front, e.g.+3denotes a positive three.
Because zero is neither positive nor negative, the termnonnegativeis sometimes used to refer to a number that is either positive or zero, whilenonpositiveis used to refer to a number that is either negative or zero. Zero is a neutral number.
Negative numbers can be thought of as resulting from thesubtractionof a larger number from a smaller. For example, negative three is the result of subtracting three from zero:
In general, the subtraction of a larger number from a smaller yields a negative result, with the magnitude of the result being the difference between the two numbers. For example,
since8 − 5 = 3.
Theminus sign"−" signifies theoperatorfor both the binary (two-operand)operationofsubtraction(as iny−z) and the unary (one-operand) operation ofnegation(as in−x, or twice in−(−x)). A special case of unary negation occurs when it operates on a positive number, in which case the result is a negative number (as in−5).
The ambiguity of the "−" symbol does not generally lead to ambiguity in arithmetical expressions, because the order of operations makes only one interpretation or the other possible for each "−". However, it can lead to confusion and be difficult for a person to understand an expression when operator symbols appear adjacent to one another. A solution can be to parenthesize the unary "−" along with its operand.
For example, the expression7 + −5may be clearer if written7 + (−5)(even though they mean exactly the same thing formally). Thesubtractionexpression7 − 5is a different expression that doesn't represent the same operations, but it evaluates to the same result.
Sometimes in elementary schools a number may be prefixed by a superscript minus sign or plus sign to explicitly distinguish negative and positive numbers as in[23]
Addition of two negative numbers is very similar to addition of two positive numbers. For example,
The idea is that two debts can be combined into a single debt of greater magnitude.
When adding together a mixture of positive and negative numbers, one can think of the negative numbers as positive quantities being subtracted. For example:
In the first example, a credit of8is combined with a debt of3, which yields a total credit of5. If the negative number has greater magnitude, then the result is negative:
Here the credit is less than the debt, so the net result is a debt.
As discussed above, it is possible for the subtraction of two non-negative numbers to yield a negative answer:
In general, subtraction of a positive number yields the same result as the addition of a negative number of equal magnitude. Thus
and
On the other hand, subtracting a negative number yields the same result as the addition of a positive number of equal magnitude. (The idea is thatlosinga debt is the same thing asgaininga credit.) Thus
and
When multiplying numbers, the magnitude of the product is always just the product of the two magnitudes. Thesignof the product is determined by the following rules:
Thus
and
The reason behind the first example is simple: adding three−2's together yields−6:
The reasoning behind the second example is more complicated. The idea again is that losing a debt is the same thing as gaining a credit. In this case, losing two debts of three each is the same as gaining a credit of six:
The convention that a product of two negative numbers is positive is also necessary for multiplication to follow thedistributive law. In this case, we know that
Since2 × (−3) = −6, the product(−2) × (−3)must equal6.
These rules lead to another (equivalent) rule—the sign of any producta×bdepends on the sign ofaas follows:
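The sign rules can be checked mechanically: the sign of a product is the product of the signs of its factors. A minimal sketch:

```python
def sign(x):
    """Return 1 for positive, -1 for negative, 0 for zero."""
    return (x > 0) - (x < 0)

# the sign of a*b is the product of the signs of a and b
for a, b in [(2, 3), (2, -3), (-2, 3), (-2, -3)]:
    assert sign(a * b) == sign(a) * sign(b)
    print(a, "x", b, "=", a * b)
```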
The justification for why the product of two negative numbers is a positive number can be observed in the analysis ofcomplex numbers.
The sign rules fordivisionare the same as for multiplication. For example,
and
If dividend and divisor have the same sign, the result is positive; if they have different signs, the result is negative.
The negative version of a positive number is referred to as itsnegation. For example,−3is the negation of the positive number3. Thesumof a number and its negation is equal to zero:
That is, the negation of a positive number is theadditive inverseof the number.
Usingalgebra, we may write this principle as analgebraic identity:
This identity holds for any positive numberx. It can be made to hold for all real numbers by extending the definition of negation to include zero and negative numbers. Specifically:
For example, the negation of−3is+3. In general,
Theabsolute valueof a number is the non-negative number with the same magnitude. For example, the absolute value of−3and the absolute value of3are both equal to3, and the absolute value of0is0.
In a similar manner torational numbers, we can extend thenatural numbersN{\displaystyle \mathbb {N} }to the integersZ{\displaystyle \mathbb {Z} }by defining integers as anordered pairof natural numbers (a,b). We can extend addition and multiplication to these pairs with the following rules:
We define anequivalence relation~ upon these pairs with the following rule:
This equivalence relation is compatible with the addition and multiplication defined above, and we may defineZ{\displaystyle \mathbb {Z} }to be thequotient setN2/∼{\displaystyle \mathbb {N} ^{2}/\sim }, i.e. we identify two pairs (a,b) and (c,d) if they are equivalent in the above sense. Note thatZ{\displaystyle \mathbb {Z} }, equipped with these operations of addition and multiplication, is aring, and is in fact, the prototypical example of a ring.
We can also define atotal orderonZ{\displaystyle \mathbb {Z} }by writing
This will lead to anadditive zeroof the form (a,a), anadditive inverseof (a,b) of the form (b,a), a multiplicative unit of the form (a+ 1,a), and a definition ofsubtraction
This construction is a special case of theGrothendieck construction.
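The construction above can be sketched directly: a pair (a, b) of naturals stands for the integer a − b, and the rules for addition, multiplication, equivalence, and negation follow the definitions given. The function names are illustrative, not from the source:

```python
# An integer is represented by a pair (a, b) of natural numbers, standing for a - b.

def add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)            # (a - b) + (c - d) = (a + c) - (b + d)

def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c + b * d, a * d + b * c)  # expand (a - b)(c - d)

def equiv(p, q):
    (a, b), (c, d) = p, q
    return a + d == b + c            # (a, b) ~ (c, d)  iff  a - b = c - d

def neg(p):
    a, b = p
    return (b, a)                    # additive inverse: swap the components

minus_two, three = (0, 2), (3, 0)
print(equiv(mul(minus_two, neg(three)), (6, 0)))  # (-2) x (-3) ~ 6 → True
print(equiv(add(three, neg(three)), (0, 0)))      # 3 + (-3) ~ 0 → True
```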
The additive inverse of a number is unique, as is shown by the following proof. As mentioned above, an additive inverse of a number is defined as a value which when added to the number yields zero.
Letxbe a number and letybe its additive inverse. Supposey′is another additive inverse ofx. By definition,x+y′=0,andx+y=0.{\displaystyle x+y'=0,\quad {\text{and}}\quad x+y=0.}
And so,x+y′=x+y. Using the law of cancellation for addition, it is seen thaty′=y. Thusyis equal to any other additive inverse ofx. That is,yis the unique additive inverse ofx.
For a long time, understanding of negative numbers was delayed by the impossibility of having a negative-number amount of a physical object, for example "minus-three apples", and negative solutions to problems were considered "false".
InHellenistic Egypt, theGreekmathematicianDiophantusin the 3rd century AD referred to an equation that was equivalent to4x+20=4{\displaystyle 4x+20=4}(which has a negative solution) inArithmetica, saying that the equation was absurd.[24]For this reason Greek geometers were able to solve geometrically all forms of the quadratic equation which give positive roots, while they could take no account of others.[25]
Negative numbers appear for the first time in history in theNine Chapters on the Mathematical Art(九章算術,Jiǔ zhāng suàn-shù), which in its present form dates from theHan period, but may well contain much older material.[3]The mathematicianLiu Hui(c. 3rd century) established rules for the addition and subtraction of negative numbers. The historian Jean-Claude Martzloff theorized that the importance of duality in Chinesenatural philosophymade it easier for the Chinese to accept the idea of negative numbers.[4]The Chinese were able to solve simultaneous equations involving negative numbers. TheNine Chaptersused redcounting rodsto denote positivecoefficientsand black rods for negative.[4][26]This system is the exact opposite of contemporary printing of positive and negative numbers in the fields of banking, accounting, and commerce, wherein red numbers denote negative values and black numbers signify positive values. Liu Hui writes:
Now there are two opposite kinds of counting rods for gains and losses, let them be called positive and negative. Red counting rods are positive, black counting rods are negative.[4]
The ancient IndianBakhshali Manuscriptcarried out calculations with negative numbers, using "+" as a negative sign.[27]The date of the manuscript is uncertain. LV Gurjar dates it no later than the 4th century,[28]Hoernle dates it between the third and fourth centuries, Ayyangar and Pingree dates it to the 8th or 9th centuries,[29]and George Gheverghese Joseph dates it to about AD 400 and no later than the early 7th century.[30]
During the 7th century AD, negative numbers were used in India to represent debts. TheIndian mathematicianBrahmagupta, inBrahma-Sphuta-Siddhanta(written c. AD 630), discussed the use of negative numbers to produce a general formquadratic formulasimilar to the one in use today.[24]
In the 9th century,Islamic mathematicianswere familiar with negative numbers from the works of Indian mathematicians, but the recognition and use of negative numbers during this period remained timid.[5]Al-Khwarizmiin hisAl-jabr wa'l-muqabala(from which the word "algebra" derives) did not use negative numbers or negative coefficients.[5]But within fifty years,Abu Kamilillustrated the rules of signs for expanding the multiplication(a±b)(c±d){\displaystyle (a\pm b)(c\pm d)},[31]andal-Karajiwrote in hisal-Fakhrīthat "negative quantities must be counted as terms".[5]In the 10th century,Abū al-Wafā' al-Būzjānīconsidered debts as negative numbers inA Book on What Is Necessary from the Science of Arithmetic for Scribes and Businessmen.[31]
By the 12th century, al-Karaji's successors were to state the general rules of signs and use them to solvepolynomial divisions.[5]Asal-Samaw'alwrites:
the product of a negative number—al-nāqiṣ(loss)—by a positive number—al-zāʾid(gain)—is negative, and by a negative number is positive. If we subtract a negative number from a higher negative number, the remainder is their negative difference. The difference remains positive if we subtract a negative number from a lower negative number. If we subtract a negative number from a positive number, the remainder is their positive sum. If we subtract a positive number from an empty power (martaba khāliyya), the remainder is the same negative, and if we subtract a negative number from an empty power, the remainder is the same positive number.[5]
In the 12th century in India,Bhāskara IIgave negative roots for quadratic equations but rejected them because they were inappropriate in the context of the problem. He stated that a negative value is "in this case not to be taken, for it is inadequate; people do not approve of negative roots."
Fibonacciallowed negative solutions in financial problems where they could be interpreted as debits (chapter 13 ofLiber Abaci, 1202) and later as losses (inFlos, 1225).
In the 15th century,Nicolas Chuquet, a Frenchman, used negative numbers asexponents[32]but referred to them as "absurd numbers".[33]
Michael Stifeldealt with negative numbers in his1544ADArithmetica Integra, where he also called themnumeri absurdi(absurd numbers).
In 1545,Gerolamo Cardano, in hisArs Magna, provided the first satisfactory treatment of negative numbers in Europe.[24]He did not allow negative numbers in his consideration ofcubic equations, so he had to treat, for example,x3+ax=b{\displaystyle x^{3}+ax=b}separately fromx3=ax+b{\displaystyle x^{3}=ax+b}(witha,b>0{\displaystyle a,b>0}in both cases). In all, Cardano was driven to the study of thirteen types of cubic equations, each with all negative terms moved to the other side of the = sign to make them positive. (Cardano also dealt withcomplex numbers, but understandably liked them even less.)
|
https://en.wikipedia.org/wiki/Negative_number
|
Theplus sign(+) and theminus sign(−) aremathematical symbolsused to denotepositiveandnegativequantities, respectively. In addition, the symbol+represents the operation ofaddition, which results in asum, while the symbol−representssubtraction, resulting in adifference.[1]Their use has been extended to many other meanings, more or less analogous.PlusandminusareLatinterms meaning 'more' and 'less', respectively.
The forms+and−are used in many countries around the world. Other designs includeU+FB29﬩HEBREW LETTER ALTERNATIVE PLUS SIGNfor plus andU+2052⁒COMMERCIAL MINUS SIGNfor minus.
Though the signs now seem as familiar as thealphabetor theArabic numerals, they are not of great antiquity. TheEgyptian hieroglyphicsign for addition, for example, resembles a pair of legs walking in the direction in which the text was written (Egyptiancould be written either from right to left or left to right), with the reverse sign indicating subtraction:[2]
Nicole Oresme'smanuscriptsfrom the 14th century show what may be one of the earliest uses of+as a sign for plus.[3]
In early 15th century Europe, the letters "P" and "M" were generally used.[4][5]The symbols (P with overline,p̄, forpiù(more), i.e., plus, and M with overline,m̄, formeno(less), i.e., minus) appeared for the first time inLuca Pacioli's mathematicscompendium,Summa de arithmetica, geometria, proportioni et proportionalità, first printed and published inVenicein 1494.[6]
The+sign is a simplification of theLatin:et(comparable to the evolution of theampersand&).[7]The−may be derived from amacron◌̄written over⟨m⟩when used to indicate subtraction; or it may come from a shorthand version of the letter⟨m⟩itself.[8]
In his 1489 treatise,Johannes Widmannreferred to the symbols−and+asminusandmer(Modern Germanmehr; "more"):"[...] was − ist das ist minus [...] und das + das ist mer das zu addirst".[9][10][11]They were not used for addition and subtraction in the treatise, but were used to indicate surplus and deficit; usage in the modern sense is attested in a 1518 book byHenricus Grammateus.[12][13]
Robert Recorde, the designer of theequals sign, introduced plus and minus to Britain in 1557 inThe Whetstone of Witte:[14]"There be other 2 signes in often use of which the first is made thus + and betokeneth more: the other is thus made − and betokeneth lesse."
The plus sign (+) is abinary operatorthat indicatesaddition, as in 2 + 3 = 5. It can also serve as aunary operatorthat leaves itsoperandunchanged(+xmeans the same asx). This notation may be used when it is desired to emphasize the positiveness of a number, especially in contrast with thenegative numbers(+5 versus −5).
The plus sign can also indicate many other operations, depending on the mathematical system under consideration. Manyalgebraic structures, such asvector spacesandmatrix rings, have some operation which is called, or is equivalent to, addition. By convention, however, the plus sign is used to denote onlycommutative operations.[15]
The symbol is also used inchemistryandphysics. For more, see§ Other uses.
The minus sign (−) has three main uses in mathematics:[16]
In many contexts, it does not matter whether the second or the third of these usages is intended: −5 is the same number either way. When it is important to distinguish them, a raised minus sign (¯) is sometimes used for negative constants, as inelementary education, the programming languageAPL, and some early graphing calculators.[a]
All three uses can be referred to as "minus" in everyday speech, though the binary operator is sometimes read as "take away".[17]In American English nowadays, −5 (for example) is generally referred to as "negative five" though speakers born before 1950 often refer to it as "minus five". (Temperatures tend to follow the older usage; −5° is generally called "minus five degrees".)[18]Further, a few textbooks in the United States encourage −xto be read as "the opposite ofx" or "the additive inverse ofx"—to avoid giving the impression that −xis necessarily negative (sincexitself may already be negative).[19]
In mathematics and most programming languages, the rules for theorder of operationsmean that −5² is equal to −25:Exponentiationbinds more strongly than the unary minus, which binds more strongly than multiplication or division. However, in some programming languages (Microsoft Excelin particular), unary operators bind strongest, so in those cases −5^2 is 25, but 0−5^2 is −25.[20]
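Python follows the mathematical convention, which can be verified directly:

```python
# Exponentiation binds more tightly than unary minus,
# so -5 ** 2 parses as -(5 ** 2):
print(-5 ** 2)    # → -25
print((-5) ** 2)  # → 25
```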
Similar to the plus sign, the minus sign is also used inchemistryandphysics. (For more, see§ Other usesbelow.)
Some elementary teachers use raised minus signs before numbers to disambiguate them from the operation of subtraction.[21]The same convention is also used in some computer languages. For example, subtracting −5 from 3 might be read as "positive three take away negative 5", and be shown as
which can be read as:
or even as
When placed after a number, a plus sign can indicate an open range of numbers. For example, "18+" is commonly used as shorthand for "ages 18 and up", and reading it aloud as "eighteen plus" is now common usage.
In US grading systems, the plus sign indicates a grade one level higher and the minus sign a grade lower. For example, B− ("B minus") is one grade lower than B. On some occasions, this is extended to two plus or minus signs (e.g., A++ being two grades higher than A).[citation needed]
A common trend in branding, particularly with streaming video services, has been the use of the plus sign at the end of brand names, e.g. Google+, Disney+, Paramount+, and Apple TV+. Since the word "plus" can mean an advantage, or an additional amount of something, such "+" signs imply that a product offers extra features or benefits.
Positive and negative are sometimes abbreviated as +ve and −ve,[22] and the terminals of batteries and cells are often marked with + and −.
In mathematics the one-sided limit x → a⁺ means x approaches a from the right (i.e., a right-sided limit), and x → a⁻ means x approaches a from the left (i.e., a left-sided limit). For example, 1/x → +∞ as x → 0⁺ but 1/x → −∞ as x → 0⁻.
When placed after special sets of numbers, plus and minus signs are used to indicate that only positive numbers and negative numbers are included, respectively. For example, ℤ⁺ is the set of all positive integers and ℤ⁻ is the set of all negative integers. In these cases, a subscript 0 may also be added to clarify that 0 is included.
Blood types are often qualified with a plus or minus to indicate the presence or absence of the Rh factor. For example, A+ means type A blood with the Rh factor present, while B− means type B blood with the Rh factor absent.
In music, augmented chords are symbolized with a plus sign, although this practice is not universal (as there are other methods for spelling those chords). For example, "C+" is read "C augmented chord". Sometimes the plus is written as a superscript.
As well as the normal mathematical usage, plus and minus signs may be used for a number of other purposes in computing.
Plus and minus signs are often used in tree views on a computer screen, to show whether a folder is collapsed or not.
In some programming languages, concatenation of strings is written "a" + "b", and results in "ab".
In most programming languages, subtraction and negation are indicated with the ASCII hyphen-minus character, -. In APL a raised minus sign (here written using U+00AF ¯ MACRON) is used to denote a negative number, as in ¯3, while in J a negative number is denoted by an underscore, as in _5.
In C and some other computer programming languages, two plus signs indicate the increment operator and two minus signs a decrement; the position of the operator before or after the variable indicates whether the new or old value is read from it. For example, if x equals 6, then y = x++ increments x to 7 but sets y to 6, whereas y = ++x would set both x and y to 7. By extension, ++ is sometimes used in computing terminology to signify an improvement, as in the name of the language C++.
In regular expressions, + is often used to indicate "1 or more" in a pattern to be matched. For example, x+ means "one or more of the letter x". This is the Kleene plus notation. Hyphen-minus usually indicates a range ([A-Z] matches any capital from 'A' to 'Z'), although it can stand for itself ([ABCDE-] matches any capital from 'A' to 'E', or '-').
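A minimal sketch of these regex behaviours using Python's standard `re` module (the test strings here are illustrative, not from the source):

```python
import re

# "x+" matches one or more consecutive x's (the Kleene plus).
runs = re.findall(r"x+", "axxbxc")
print(runs)  # ['xx', 'x']

# Inside a character class, "-" between two characters denotes a range,
# but placed last it stands for a literal hyphen.
assert re.fullmatch(r"[A-Z]+", "ABC") is not None
assert re.fullmatch(r"[ABCDE-]+", "A-B") is not None
assert re.fullmatch(r"[A-Z]+", "A-B") is None  # '-' is not in the A-Z range
```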
There is no concept of negative zero in mathematics, but in computing −0 may have a separate representation from zero. In the IEEE floating-point standard, 1 / −0 is negative infinity (−∞) whereas 1 / 0 is positive infinity (+∞).
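The separate representation of −0 can be observed from standard Python (a sketch; note that plain Python raises ZeroDivisionError for float division by zero, so the distinct representation is shown via the sign bit rather than via 1 / −0):

```python
import math
import struct

# -0.0 compares equal to 0.0 ...
assert -0.0 == 0.0
# ... but carries a distinct IEEE 754 bit pattern: only the sign bit differs.
assert struct.pack(">d", 0.0).hex() == "0000000000000000"
assert struct.pack(">d", -0.0).hex() == "8000000000000000"
# The sign survives operations such as copysign:
print(math.copysign(1.0, -0.0))  # -1.0
```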
+ is also used to denote added lines in diff output in the context format or the unified format.
In physics, the use of plus and minus signs for different electrical charges was introduced by Georg Christoph Lichtenberg.
In chemistry, superscripted plus and minus signs are used to indicate an ion with a positive or negative charge of 1 (e.g., NH₄⁺). If the charge is greater than 1, a number indicating the charge is written before the sign (as in SO₄²⁻).
A plus sign prefixed to a telephone number is used to indicate the form used for International Direct Dialing.[23] Its precise usage varies by technology and national standards. In the International Phonetic Alphabet, subscripted plus and minus signs are used as diacritics to indicate advanced or retracted articulations of speech sounds.
The minus sign is also used as a tone letter in the orthographies of Dan, Krumen, Karaboro, Mwan, Wan, Yaouré, Wè, Nyabwa, and Godié.[24] The Unicode character used for the tone letter (U+02D7 ˗ MODIFIER LETTER MINUS SIGN) is different from the mathematical minus sign.
The plus sign sometimes represents /ɨ/ in the orthography of Huichol.[25]
In the algebraic notation used to record games of chess, the plus sign + is used to denote a move that puts the opponent into check, while a double plus ++ is sometimes used to denote double check. Combinations of the plus and minus signs are used to evaluate a move (+/−, +/=, =/+, −/+).
In linguistics, a superscript plus + sometimes replaces the asterisk, which denotes unattested linguistic reconstruction.
In botanical names, a plus sign denotes a graft-chimaera.
In Catholicism, the plus sign before a last name denotes a Bishop, and a double plus is used to denote an Archbishop.
Variants of the symbols have unique codepoints in Unicode:
There is a commercial minus sign, ⁒, which is (or was) used in Germany and Scandinavia. The symbol ÷, still used in many Anglophone countries as a division sign, is (or was) used to denote subtraction in Scandinavia.[26]
The hyphen-minus symbol (-) is the form of hyphen most commonly used in digital documents. On most keyboards, it is the only character that resembles a minus sign or a dash, so it is also used for these.[27] The name hyphen-minus derives from the original ASCII standard,[28] where it was called hyphen–(minus).[29] The character is referred to as a hyphen, a minus sign, or a dash according to the context where it is being used.
A Jewish tradition that dates from at least the 19th century is to write plus using the symbol ﬩, to avoid writing a symbol + that could look like a Christian cross.[30][31] This practice was adopted into Israeli schools and is still commonplace today in elementary schools (including secular schools) but in fewer secondary schools.[31] It is also used occasionally in books by religious authors, but most books for adults use the international symbol +. Unicode has this symbol at position U+FB29 ﬩ HEBREW LETTER ALTERNATIVE PLUS SIGN.[32]
https://en.wikipedia.org/wiki/Plus_and_minus_signs
In mathematics, monus is an operator on certain commutative monoids that are not groups. A commutative monoid on which a monus operator is defined is called a commutative monoid with monus, or CMM. The monus operator may be denoted with the − symbol because the natural numbers are a CMM under subtraction; it is also denoted with the ∸ symbol to distinguish it from the standard subtraction operator.
Let (M, +, 0) be a commutative monoid. Define a binary relation ≤ on this monoid as follows: for any two elements a and b, define a ≤ b if there exists an element c such that a + c = b. It is easy to check that ≤ is reflexive[2] and that it is transitive.[3] M is called naturally ordered if the ≤ relation is additionally antisymmetric and hence a partial order. Further, if for each pair of elements a and b a unique smallest element c exists such that a ≤ b + c, then M is called a commutative monoid with monus[4]: 129 and the monus a ∸ b of any two elements a and b can be defined as this unique smallest element c such that a ≤ b + c.
An example of a commutative monoid that is not naturally ordered is (ℤ, +, 0), the commutative monoid of the integers with usual addition: for any a, b ∈ ℤ there exists c such that a + c = b, so a ≤ b holds for any a, b ∈ ℤ. In particular, both a ≤ b and b ≤ a always hold, so ≤ is not antisymmetric and hence not a partial order. There are also examples of monoids that are naturally ordered but are not semirings with monus.[5]
Beyond monoids, the notion of monus can be applied to other structures. For instance, a naturally ordered semiring (sometimes called a dioid[6]) is a semiring where the commutative monoid induced by the addition operator is naturally ordered. When this monoid is a commutative monoid with monus, the semiring is called a semiring with monus, or m-semiring.
If M is an ideal in a Boolean algebra, then M is a commutative monoid with monus under a + b = a ∨ b and a ∸ b = a ∧ ¬b.[4]: 129
The natural numbers including 0 form a commutative monoid with monus, with their ordering being the usual order of natural numbers and the monus operator being a saturating variant of standard subtraction, variously referred to as truncated subtraction,[7] limited subtraction, proper subtraction, doz (difference or zero),[8] and monus.[9] Truncated subtraction is usually defined as[7]

a ∸ b = max(a − b, 0),

where − denotes standard subtraction. For example, 5 − 3 = 2 and 3 − 5 = −2 in regular subtraction, whereas in truncated subtraction 3 ∸ 5 = 0. Truncated subtraction may also be defined as[9]

a ∸ b = a − min(a, b).
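The max-based definition translates directly into code; a minimal sketch in Python:

```python
def monus(a: int, b: int) -> int:
    """Truncated (saturating) subtraction on the natural numbers:
    a - b when a >= b, and 0 otherwise."""
    return max(a - b, 0)

print(monus(5, 3))  # 2, agreeing with ordinary subtraction
print(monus(3, 5))  # 0, where ordinary subtraction gives -2
```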
In Peano arithmetic, truncated subtraction is defined in terms of the predecessor function P (the inverse of the successor function S):[7]

a ∸ 0 = a,  a ∸ S(b) = P(a ∸ b).

A definition that does not need the predecessor function is:

a ∸ 0 = a,  0 ∸ b = 0,  S(a) ∸ S(b) = a ∸ b.

Truncated subtraction is useful in contexts such as primitive recursive functions, which are not defined over negative numbers.[7] Truncated subtraction is also used in the definition of the multiset difference operator.
The class of all commutative monoids with monus forms a variety.[4]: 129 The equational basis for the variety of all CMMs consists of the axioms for commutative monoids, as well as the following axioms:
a+(b−˙a)=b+(a−˙b),(a−˙b)−˙c=a−˙(b+c),(a−˙a)=0,(0−˙a)=0.{\displaystyle {\begin{aligned}a+(b\mathop {\dot {-}} a)&=b+(a\mathop {\dot {-}} b),\\(a\mathop {\dot {-}} b)\mathop {\dot {-}} c&=a\mathop {\dot {-}} (b+c),\\(a\mathop {\dot {-}} a)&=0,\\(0\mathop {\dot {-}} a)&=0.\\\end{aligned}}}
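These four axioms can be checked for the natural numbers by brute force over a small sample (a sketch, with truncated subtraction defined inline):

```python
def monus(a, b):
    """Truncated subtraction on the natural numbers."""
    return max(a - b, 0)

# Exhaustively check the four CMM axioms on a small grid.
ns = range(8)
for a in ns:
    for b in ns:
        assert a + monus(b, a) == b + monus(a, b)  # a + (b ∸ a) = b + (a ∸ b)
        assert monus(a, a) == 0                    # a ∸ a = 0
        assert monus(0, a) == 0                    # 0 ∸ a = 0
        for c in ns:
            assert monus(monus(a, b), c) == monus(a, b + c)  # (a∸b)∸c = a∸(b+c)
print("all four axioms hold on the sample")
```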
https://en.wikipedia.org/wiki/Monus
The material conditional (also known as material implication) is a binary operation commonly used in logic. When the conditional symbol → is interpreted as material implication, a formula P → Q is true unless P is true and Q is false.
Material implication is used in all the basic systems of classical logic as well as some nonclassical logics. It is assumed as a model of correct conditional reasoning within mathematics and serves as the basis for commands in many programming languages. However, many logics replace material implication with other operators such as the strict conditional and the variably strict conditional. Due to the paradoxes of material implication and related problems, material implication is not generally considered a viable analysis of conditional sentences in natural language.
In logic and related fields, the material conditional is customarily notated with an infix operator →.[1] The material conditional is also notated using the infixes ⊃ and ⇒.[2] In the prefixed Polish notation, conditionals are notated as Cpq. In a conditional formula p → q, the subformula p is referred to as the antecedent and q is termed the consequent of the conditional. Conditional statements may be nested such that the antecedent or the consequent may themselves be conditional statements, as in the formula (p → q) → (r → s).
In Arithmetices Principia: Nova Methodo Exposita (1889), Peano expressed the proposition "If A, then B" as A Ɔ B with the symbol Ɔ, which is the opposite of C.[3] He also expressed the proposition A ⊃ B as A Ɔ B.[4][5][6] Hilbert expressed the proposition "If A, then B" as A → B in 1918.[1] Russell followed Peano in his Principia Mathematica (1910–1913), in which he expressed the proposition "If A, then B" as A ⊃ B. Following Russell, Gentzen expressed the proposition "If A, then B" as A ⊃ B. Heyting expressed the proposition "If A, then B" as A ⊃ B at first but later came to express it as A → B with a right-pointing arrow. Bourbaki expressed the proposition "If A, then B" as A → B in 1954.[7]
From a classical semantic perspective, material implication is the binary truth-functional operator which returns "true" unless its first argument is true and its second argument is false. This semantics can be shown graphically in the following truth table:

A	B	A → B
T	T	T
T	F	F
F	T	T
F	F	T

One can also consider the equivalence A → B ≡ ¬(A ∧ ¬B) ≡ ¬A ∨ B.
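This truth-functional behaviour, and the stated equivalences, can be checked in a few lines of Python (a sketch):

```python
def implies(p: bool, q: bool) -> bool:
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

print("P     Q     P->Q")
for p in (True, False):
    for q in (True, False):
        # Check the equivalence P -> Q == not(P and not Q) == (not P) or Q.
        assert implies(p, q) == (not (p and not q))
        print(f"{p!s:5} {q!s:5} {implies(p, q)}")
```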
The conditionals A → B where the antecedent A is false are called "vacuous truths".
Formulas over the set of connectives{→,⊥}{\displaystyle \{\to ,\bot \}}[8]are calledf-implicational.[9]Inclassical logicthe other connectives, such as¬{\displaystyle \neg }(negation),∧{\displaystyle \land }(conjunction),∨{\displaystyle \lor }(disjunction) and↔{\displaystyle \leftrightarrow }(equivalence), can be defined in terms of→{\displaystyle \to }and⊥{\displaystyle \bot }(falsity):[10]¬A=defA→⊥A∧B=def(A→(B→⊥))→⊥A∨B=def(A→⊥)→BA↔B=def{(A→B)→[(B→A)→⊥]}→⊥{\displaystyle {\begin{aligned}\neg A&\quad {\overset {\text{def}}{=}}\quad A\to \bot \\A\land B&\quad {\overset {\text{def}}{=}}\quad (A\to (B\to \bot ))\to \bot \\A\lor B&\quad {\overset {\text{def}}{=}}\quad (A\to \bot )\to B\\A\leftrightarrow B&\quad {\overset {\text{def}}{=}}\quad \{(A\to B)\to [(B\to A)\to \bot ]\}\to \bot \\\end{aligned}}}
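These definitions can be verified mechanically; a Python sketch with falsity represented by the constant False:

```python
# Falsity and the conditional as the only primitives.
bot = False
imp = lambda a, b: (not a) or b

# Derived connectives, following the definitions above.
neg  = lambda a: imp(a, bot)
land = lambda a, b: imp(imp(a, imp(b, bot)), bot)
lor  = lambda a, b: imp(imp(a, bot), b)
iff  = lambda a, b: imp(imp(imp(a, b), imp(imp(b, a), bot)), bot)

# Verify against Python's built-in Boolean operators.
for a in (True, False):
    for b in (True, False):
        assert neg(a) == (not a)
        assert land(a, b) == (a and b)
        assert lor(a, b) == (a or b)
        assert iff(a, b) == (a == b)
print("all derived connectives match")
```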
The validity of f-implicational formulas can be semantically established by the method of analytic tableaux. The logical rules are
The semantic definition by truth tables does not permit the examination of structurally identical propositional forms in various logical systems, where different properties may be demonstrated. The language considered here is restricted to f-implicational formulas.
Consider the following (candidate) natural deduction rules.
If assuming A one can derive B, then one can conclude A → B.
[A]⋮BA→B{\displaystyle {\frac {\begin{array}{c}[A]\\\vdots \\B\end{array}}{A\to B}}}(→{\displaystyle \to }I)
[A] is an assumption that is discharged when applying the rule.
This rule corresponds to modus ponens.
A→BAB{\displaystyle {\frac {A\to B\quad A}{B}}}(→{\displaystyle \to }E)
AA→BB{\displaystyle {\frac {A\quad A\to B}{B}}}(→{\displaystyle \to }E)
(A→⊥)→⊥A{\displaystyle {\frac {\begin{array}{c}(A\to \bot )\to \bot \\\end{array}}{A}}}(¬¬{\displaystyle \neg \neg }E)
From falsum (⊥) one can derive any formula (ex falso quodlibet).
⊥A{\displaystyle {\frac {\bot }{A}}}(⊥{\displaystyle \bot }E)
In classical logic material implication validates the following:
Similarly, on classical interpretations of the other connectives, material implication validates the following entailments:
Tautologies involving material implication include:
Material implication does not closely match the usage of conditional sentences in natural language. For example, even though material conditionals with false antecedents are vacuously true, the natural language statement "If 8 is odd, then 3 is prime" is typically judged false. Similarly, any material conditional with a true consequent is itself true, but speakers typically reject sentences such as "If I have a penny in my pocket, then Paris is in France". These classic problems have been called the paradoxes of material implication.[16] In addition to the paradoxes, a variety of other arguments have been given against a material implication analysis. For instance, counterfactual conditionals would all be vacuously true on such an account, when in fact some are false.[17]
In the mid-20th century, a number of researchers including H. P. Grice and Frank Jackson proposed that pragmatic principles could explain the discrepancies between natural language conditionals and the material conditional. On their accounts, conditionals denote material implication but end up conveying additional information when they interact with conversational norms such as Grice's maxims.[16][18] Recent work in formal semantics and philosophy of language has generally eschewed material implication as an analysis for natural-language conditionals.[18] In particular, such work has often rejected the assumption that natural-language conditionals are truth functional in the sense that the truth value of "If P, then Q" is determined solely by the truth values of P and Q.[16] Thus semantic analyses of conditionals typically propose alternative interpretations built on foundations such as modal logic, relevance logic, probability theory, and causal models.[18][16][19]
Similar discrepancies have been observed by psychologists studying conditional reasoning, for instance by the notorious Wason selection task study, where less than 10% of participants reasoned according to the material conditional. Some researchers have interpreted this result as a failure of the participants to conform to normative laws of reasoning, while others interpret the participants as reasoning normatively according to nonclassical laws.[20][21][22]
https://en.wikipedia.org/wiki/Material_conditional
The paradoxes of material implication are a group of true formulae involving material conditionals whose translations into natural language are intuitively false when the conditional is translated as "if ... then ...". A material conditional formula P → Q is true unless P is true and Q is false. If natural language conditionals were understood in the same way, that would mean that the sentence "If the Nazis had won World War Two, everybody would be happy" is vacuously true. Given that such problematic consequences follow from a seemingly correct assumption about logic, they are called paradoxes. They demonstrate a mismatch between classical logic and robust intuitions about meaning and reasoning.[1]
As the best known of the paradoxes, and most formally simple, the paradox ofentailmentmakes the best introduction.
In natural language, an instance of the paradox of entailment arises:
And
Therefore
This arises from the principle of explosion, a law of classical logic stating that inconsistent premises always make an argument valid; that is, inconsistent premises imply any conclusion at all. This seems paradoxical because although the above is a logically valid argument, it is not sound (not all of its premises are true).
Validity is defined in classical logic as follows:
For example, a valid argument might run:
In this example there is no possible situation in which the premises are true while the conclusion is false. Since there is no counterexample, the argument is valid.
But one could construct an argument in which the premises are inconsistent. This would satisfy the test for a valid argument since there would be no possible situation in which all the premises are true, and therefore no possible situation in which all the premises are true and the conclusion is false.
For example, an argument with inconsistent premises might run:
As there is no possible situation where both premises could be true, there is certainly no possible situation in which the premises could be true while the conclusion was false. So the argument is valid whatever the conclusion is; inconsistent premises imply all conclusions.
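The absence of a counterexample can be checked mechanically; a brute-force validity checker sketched in Python (the function names are illustrative):

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """An argument is valid iff no valuation makes every premise
    true and the conclusion false (i.e. no counterexample exists)."""
    for vals in product((True, False), repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False
    return True

# Inconsistent premises p and not-p: any conclusion at all follows.
premises = [lambda p, q: p, lambda p, q: not p]
print(valid(premises, lambda p, q: q, 2))      # True
print(valid(premises, lambda p, q: not q, 2))  # also True
```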
The classical paradox formulae are closely tied to conjunction elimination, (p ∧ q) → p, which can be derived from the paradox formulae, for example from (1) by importation.
In addition, there are serious problems with trying to use material implication as representing the English "if ... then ...". For example, the following are valid inferences:
but mapping these back to English sentences using "if" gives paradoxes.
The first might be read "If John is in London then he is in England, and if he is in Paris then he is in France. Therefore, it is true that either (a) if John is in London then he is in France, or (b) if he is in Paris then he is in England." Using material implication, if John is not in London then (a) is true; whereas if he is in London then, because he is not in Paris, (b) is true. Either way, the conclusion that at least one of (a) or (b) is true is valid.
But this does not match how "if ... then ..." is used in natural language: the most likely scenario in which one would say "If John is in London then he is in England" is if one does not know where John is, but nonetheless knows that if he is in London, he is in England. Under this interpretation, both premises are true, but both clauses of the conclusion are false.
The second example can be read "If both switch A and switch B are closed, then the light is on. Therefore, it is either true that if switch A is closed, the light is on, or that if switch B is closed, the light is on." Here, the most likely natural-language interpretation of the "if ... then ..." statements would be "whenever switch A is closed, the light is on," and "whenever switch B is closed, the light is on." Again, under this interpretation both clauses of the conclusion may be false (for instance in a series circuit, with a light that comes on only when both switches are closed).
https://en.wikipedia.org/wiki/Paradoxes_of_material_implication
The formal fallacy of affirming a disjunct, also known as the fallacy of the alternative disjunct or a false exclusionary disjunct, occurs when a deductive argument takes the following logical form:[1]
Or in logical operators:
where ⊢ denotes a logical assertion.
The fallacy lies in concluding that one disjunct must be false because the other disjunct is true; in fact they may both be true because "or" is defined inclusively rather than exclusively. It is a fallacy of equivocation between the operations OR and XOR.
Affirming the disjunct should not be confused with the valid argument known as the disjunctive syllogism.[2]
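A single valuation suffices to show the fallacious form is invalid, while disjunctive syllogism survives an exhaustive check (a Python sketch):

```python
from itertools import product

# Affirming a disjunct: from (p or q) and p, infer not q.
# Search for a counterexample valuation: premises true, conclusion false.
counterexamples = [(p, q) for p, q in product((True, False), repeat=2)
                   if (p or q) and p and not (not q)]
print(counterexamples)  # [(True, True)] -- both disjuncts true at once

# Disjunctive syllogism: from (p or q) and not p, infer q.  No counterexample:
assert not [(p, q) for p, q in product((True, False), repeat=2)
            if (p or q) and (not p) and not q]
```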
The following argument indicates the unsoundness of affirming a disjunct:
This inference is unsound because all cats, by definition, are mammals.
A second example provides a first proposition that appears realistic and shows how an obviously flawed conclusion still arises under this fallacy.[3]
https://en.wikipedia.org/wiki/Affirming_a_disjunct
In Boolean logic, logical NOR,[1] non-disjunction, or joint denial[1] is a truth-functional operator which produces a result that is the negation of logical or. That is, a sentence of the form (p NOR q) is true precisely when neither p nor q is true—i.e. when both p and q are false. It is logically equivalent to ¬(p ∨ q) and ¬p ∧ ¬q, where the symbol ¬ signifies logical negation, ∨ signifies OR, and ∧ signifies AND.
Non-disjunction is usually denoted as ↓ or ∨̄ or X (prefix) or NOR.
As with its dual, the NAND operator (also known as the Sheffer stroke—symbolized as either ↑, ∣ or /), NOR can be used by itself, without any other logical operator, to constitute a logical formal system (making NOR functionally complete).
The computer used in the spacecraft that first carried humans to the Moon, the Apollo Guidance Computer, was constructed entirely using NOR gates with three inputs.[2]
The NOR operation is a logical operation on two logical values, typically the values of two propositions, that produces a value of true if and only if both operands are false. In other words, it produces a value of false if and only if at least one operand is true.
The truth table of A ↓ B is as follows:

A	B	A ↓ B
T	T	F
T	F	F
F	T	F
F	F	T

The logical NOR ↓ is the negation of the disjunction: A ↓ B ≡ ¬(A ∨ B).
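Both equivalent forms given above can be checked exhaustively; a sketch in Python:

```python
def nor(p: bool, q: bool) -> bool:
    """Joint denial: true exactly when both operands are false."""
    return not (p or q)

for p in (True, False):
    for q in (True, False):
        # p NOR q  ==  not (p or q)  ==  (not p) and (not q)
        assert nor(p, q) == (not (p or q)) == ((not p) and (not q))
        print(p, q, nor(p, q))
```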
Peirce was the first to show the functional completeness of non-disjunction, though he did not publish his result.[3][4] Peirce used ⋏̄ for non-conjunction and ⋏ for non-disjunction (in fact, Peirce himself used only ⋏; the barred form ⋏̄ was introduced by his editors for disambiguation).[4] Peirce called ⋏ the ampheck (from Ancient Greek ἀμφήκης, amphēkēs, "cutting both ways").[4]
In 1911, Stamm was the first to publish a description of both non-conjunction (using ∼, the Stamm hook) and non-disjunction (using ∗, the Stamm star), and showed their functional completeness.[5][6] Note that in logical notation ∼ is most commonly used for negation.
In 1913, Sheffer described non-disjunction and showed its functional completeness. Sheffer used ∣ for non-conjunction and ∧ for non-disjunction.
In 1935, Webb described non-disjunction for n-valued logic and used ∣ for the operator, so some people call it the Webb operator,[7] Webb operation[8] or Webb function.[9]
In 1940, Quine also described non-disjunction and used ↓ for the operator,[10] so some people call the operator the Peirce arrow or Quine dagger.
In 1944, Church also described non-disjunction and used ∨̄ for the operator.[11]
In 1954, Bocheński used X, as in Xpq, for non-disjunction in Polish notation.[12]
APL uses a glyph, ⍱, that combines a ∨ with a ~.[13]
NOR is commutative but not associative, which means that P ↓ Q ↔ Q ↓ P holds, but (P ↓ Q) ↓ R ↮ P ↓ (Q ↓ R).[14]
The logical NOR, taken by itself, is a functionally complete set of connectives.[15] This can be proved by first showing, with a truth table, that ¬A is truth-functionally equivalent to A ↓ A.[16] Then, since A ↓ B is truth-functionally equivalent to ¬(A ∨ B),[16] and A ∨ B is equivalent to ¬(¬A ∧ ¬B),[16] the logical NOR suffices to define the set of connectives {∧, ∨, ¬},[16] which is shown to be truth-functionally complete by the Disjunctive Normal Form Theorem.[16]
This may also be seen from the fact that logical NOR does not possess any of the five qualities (truth-preserving, false-preserving, linear, monotonic, self-dual) required to be absent from at least one member of a set of functionally complete operators.
NOR has the interesting feature that all other logical operators can be expressed by interlaced NOR operations. The logical NAND operator also has this ability.
Expressed in terms of NOR ↓, the usual operators of propositional logic are:
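The standard derivations can be written out and verified exhaustively; a Python sketch (the derivation shown for the conditional is one common choice among several):

```python
nor = lambda p, q: not (p or q)

# Standard derivations of the usual connectives from NOR alone:
not_ = lambda p: nor(p, p)                      # ¬p  = p ↓ p
or_  = lambda p, q: nor(nor(p, q), nor(p, q))   # p∨q = (p↓q) ↓ (p↓q)
and_ = lambda p, q: nor(nor(p, p), nor(q, q))   # p∧q = (p↓p) ↓ (q↓q)
imp_ = lambda p, q: or_(not_(p), q)             # p→q = ¬p ∨ q

for p in (True, False):
    for q in (True, False):
        assert not_(p) == (not p)
        assert or_(p, q) == (p or q)
        assert and_(p, q) == (p and q)
        assert imp_(p, q) == ((not p) or q)
print("NOR alone recovers NOT, OR, AND and the conditional")
```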
https://en.wikipedia.org/wiki/Ampheck
In computer science, the controlled NOT gate (also C-NOT or CNOT), controlled-X gate, controlled-bit-flip gate, Feynman gate or controlled Pauli-X is a quantum logic gate that is an essential component in the construction of a gate-based quantum computer. It can be used to entangle and disentangle Bell states. Any quantum circuit can be simulated to an arbitrary degree of accuracy using a combination of CNOT gates and single-qubit rotations.[1][2] The gate is sometimes named after Richard Feynman, who developed an early notation for quantum gate diagrams in 1986.[3][4][5]
The CNOT can be expressed in the Pauli basis as:

CNOT = ½(I ⊗ I + Z ⊗ I + I ⊗ X − Z ⊗ X)

Being both unitary and Hermitian, CNOT satisfies e^{iθU} = (cos θ)I + (i sin θ)U and U = e^{i(π/2)(I−U)} = e^{−i(π/2)(I−U)}, where U denotes the CNOT gate, and is involutory.
The CNOT gate can be further decomposed as products of rotation operator gates and exactly one two-qubit interaction gate, for example
In general, any single-qubit unitary gate can be expressed as U = e^{iH}, where H is a Hermitian matrix, and then the controlled U is CU = e^{i(1/2)(I₁−Z₁)H₂}.
The CNOT gate is also used in classical reversible computing.
The CNOT gate operates on a quantum register consisting of 2 qubits. The CNOT gate flips the second qubit (the target qubit) if and only if the first qubit (the control qubit) is |1⟩.
If {|0⟩, |1⟩} are the only allowed input values for both qubits, then the TARGET output of the CNOT gate corresponds to the result of a classical XOR gate. Fixing CONTROL as |1⟩, the TARGET output of the CNOT gate yields the result of a classical NOT gate.
More generally, the inputs are allowed to be a linear superposition of {|0⟩, |1⟩}. The CNOT gate transforms the quantum state:
a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩
into:
a|00⟩ + b|01⟩ + c|11⟩ + d|10⟩
The action of the CNOT gate can be represented by the matrix (permutation matrix form):

[1 0 0 0]
[0 1 0 0]
[0 0 0 1]
[0 0 1 0]
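Since the matrix is a permutation, this action reduces to swapping two amplitudes; a minimal sketch in plain Python, with amplitudes ordered |00⟩, |01⟩, |10⟩, |11⟩:

```python
def cnot(state):
    """Apply CNOT with the first qubit as control: as a permutation,
    it swaps the |10> and |11> amplitudes."""
    a, b, c, d = state
    return [a, b, d, c]

# a|00> + b|01> + c|10> + d|11>  ->  a|00> + b|01> + d|10> + c|11>
print(cnot([0.1, 0.2, 0.3, 0.4]))  # [0.1, 0.2, 0.4, 0.3]

# On basis states this is a classical XOR into the target qubit:
assert cnot([0, 0, 1, 0]) == [0, 0, 0, 1]  # |10> -> |11>
assert cnot([0, 1, 0, 0]) == [0, 1, 0, 0]  # |01> -> |01>
```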
The first experimental realization of a CNOT gate was accomplished in 1995. Here, a single beryllium ion in a trap was used. The two qubits were encoded into an optical state and into the vibrational state of the ion within the trap. At the time of the experiment, the reliability of the CNOT operation was measured to be on the order of 90%.[6]
In addition to a regular controlled NOT gate, one could construct a function-controlled NOT gate, which accepts an arbitrary number n + 1 of qubits as input, where n + 1 is greater than or equal to 2 (a quantum register). This gate flips the last qubit of the register if and only if a built-in function, with the first n qubits as input, returns a 1.
The function-controlled NOT gate is an essential element of the Deutsch–Jozsa algorithm.
When viewed only in the computational basis {|0⟩, |1⟩}, the behaviour of the CNOT appears to be like the equivalent classical gate. However, the simplicity of labelling one qubit the control and the other the target does not reflect the complexity of what happens for most input values of both qubits.
Insight can be gained by expressing the CNOT gate with respect to a Hadamard-transformed basis {|+⟩, |−⟩}. The Hadamard-transformed basis[a] of a one-qubit register is given by
and the corresponding basis of a 2-qubit register is
etc. Viewing CNOT in this basis, the state of the second qubit remains unchanged, and the state of the first qubit is flipped, according to the state of the second bit. (For details see below.) "Thus, in this basis the sense of which bit is the control bit and which the target bit has reversed. But we have not changed the transformation at all, only the way we are thinking about it."[7]
The "computational" basis {|0⟩, |1⟩} is the eigenbasis for the spin in the Z-direction, whereas the Hadamard basis {|+⟩, |−⟩} is the eigenbasis for spin in the X-direction. "Switching X and Z and qubits 1 and 2, then, recovers the original transformation."[8] This expresses a fundamental symmetry of the CNOT gate.
The observation that both qubits are (equally) affected in a CNOT interaction is of importance when considering information flow in entangled quantum systems.[9]
We now proceed to give the details of the computation. Working through each of the Hadamard basis states, the results in the right column show that the first qubit flips between |+⟩ and |−⟩ when the second qubit is |−⟩:
A quantum circuit that performs a Hadamard transform, followed by CNOT, then another Hadamard transform, can be described as performing the CNOT gate in the Hadamard basis (i.e. a change of basis):
(H₁ ⊗ H₁)⁻¹ · CNOT · (H₁ ⊗ H₁)
The single-qubit Hadamard transform, H₁, is Hermitian and therefore its own inverse. The tensor product of two Hadamard transforms operating (independently) on two qubits is labelled H₂. We can therefore write the matrices as:
H₂ · CNOT · H₂
When multiplied out, this yields a matrix that swaps the |01⟩ and |11⟩ terms, while leaving the |00⟩ and |10⟩ terms alone. This is equivalent to a CNOT gate where qubit 2 is the control qubit and qubit 1 is the target qubit:[b]
12[11111−11−111−1−11−1−11].[1000010000010010].12[11111−11−111−1−11−1−11]=[1000000100100100]{\displaystyle {\frac {1}{2}}{\begin{bmatrix}{\begin{array}{rrrr}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1\end{array}}\end{bmatrix}}.{\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{bmatrix}}.{\frac {1}{2}}{\begin{bmatrix}{\begin{array}{rrrr}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1\end{array}}\end{bmatrix}}={\begin{bmatrix}1&0&0&0\\0&0&0&1\\0&0&1&0\\0&1&0&0\end{bmatrix}}}
A common application of the CNOT gate is to maximally entangle two qubits into the|Φ+⟩{\displaystyle |\Phi ^{+}\rangle }Bell state; this forms part of the setup of thesuperdense coding,quantum teleportation, and entangledquantum cryptographyalgorithms.
To construct|Φ+⟩{\displaystyle |\Phi ^{+}\rangle }, the inputs A (control) and B (target) to the CNOT gate are
After applying CNOT, the resulting Bell state12(|00⟩+|11⟩){\textstyle {\frac {1}{\sqrt {2}}}(|00\rangle +|11\rangle )}has the property that the individual qubits can be measured using any basis and will always present a 50/50 chance of resolving to each state. In effect, the individual qubits are in an undefined state. The correlation between the two qubits is the complete description of the state of the two qubits; if the same basis is used to measure both qubits and the results are compared, the measurements will correlate perfectly.
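This Bell-state preparation (Hadamard on qubit A, then CNOT with A as control) can be sketched numerically; the following is an illustrative NumPy computation, not part of the original article:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # qubit A controls qubit B

ket00 = np.array([1.0, 0.0, 0.0, 0.0])         # initial state |00>

# H (x) I puts qubit A into superposition; CNOT then entangles the pair.
bell = CNOT @ np.kron(H, I) @ ket00

# The amplitudes are 1/sqrt(2) on |00> and |11>, and zero elsewhere.
expected = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
assert np.allclose(bell, expected)
```

Measuring either qubit of `bell` in the computational basis gives 0 or 1 with probability 1/2 each, matching the 50/50 behaviour described above.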
When viewed in the computational basis, it appears that qubit A is affecting qubit B. Changing our viewpoint to the Hadamard basis demonstrates that, in a symmetrical way, qubit B is affecting qubit A.
The input state can alternately be viewed as
In the Hadamard view, the control and target qubits have conceptually swapped and qubit A is inverted when qubit B is|−⟩B{\displaystyle |-\rangle _{B}}. The output state after applying the CNOTgate is12(|++⟩+|−−⟩),{\displaystyle {\tfrac {1}{\sqrt {2}}}(|++\rangle +|--\rangle ),}which can be shown as follows:
The C-ROT gate (controlledRabi rotation) is equivalent to a C-NOT gate except for aπ/2{\displaystyle \pi /2}rotation of the nuclear spin around the z axis.[10][11]
Trapped ion quantum computers:
In May 2024, Canada implementedexport restrictionson the sale of quantum computers containing more than 34qubitsand error rates below a certain CNOT error threshold, along with restrictions for quantum computers with more qubits and higher error rates.[12]The same restrictions soon appeared in the UK, France, Spain and the Netherlands. The governments offered few explanations for this action, but all of them areWassenaar Arrangementstates, and the restrictions appear related tonational securityconcerns, potentially includingquantum cryptographyorprotection from competition.[13][14]
|
https://en.wikipedia.org/wiki/Controlled_NOT_gate
|
Inclassical logic,disjunctive syllogism[1][2](historically known asmodus tollendo ponens(MTP),[3]Latinfor "mode that affirms by denying")[4]is avalidargument formwhich is asyllogismhaving adisjunctive statementfor one of itspremises.[5][6]
An example inEnglish:
Inpropositional logic,disjunctive syllogism(also known asdisjunction eliminationandor elimination, or abbreviated∨E),[7][8][9][10]is a validrule of inference. If it is known that at least one of two statements is true, and that it is not the former that is true, then we caninferthat it has to be the latter that is true. Equivalently, ifPis true orQis true andPis false, thenQis true. The name "disjunctive syllogism" derives from its being a syllogism, a three-stepargument, and from its use of a logical disjunction (any "or" statement). For example, "P or Q" is a disjunction, where P and Q are called the statement'sdisjuncts. The rule makes it possible to eliminate adisjunctionfrom alogical proof. It is the rule that
where the rule is that whenever instances of "P∨Q{\displaystyle P\lor Q}", and "¬P{\displaystyle \neg P}" appear on lines of a proof, "Q{\displaystyle Q}" can be placed on a subsequent line.
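The validity of the rule can be confirmed by exhausting the truth table. As an illustrative sketch (not part of the original article), a brute-force check over all truth assignments:

```python
from itertools import product

# Disjunctive syllogism: from P ∨ Q and ¬P, infer Q.
# Validity means: in every row where both premises hold, so does Q.
rows = list(product([False, True], repeat=2))
valid = all(Q for P, Q in rows if (P or Q) and not P)
assert valid
```

Only one of the four assignments (P false, Q true) satisfies both premises, and in that row Q is indeed true, so the inference is valid.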
Disjunctive syllogism is closely related tohypothetical syllogism, which is another rule of inference involving a syllogism. It is also related to thelaw of noncontradiction, one of thethree traditional laws of thought.
For alogical systemthat validates it, thedisjunctive syllogismmay be written insequentnotation as
where⊢{\displaystyle \vdash }is ametalogicalsymbol meaning thatQ{\displaystyle Q}is asyntactic consequenceofP∨Q{\displaystyle P\lor Q}, and¬P{\displaystyle \lnot P}.
It may be expressed as a truth-functionaltautologyortheoremin the object language of propositional logic as
whereP{\displaystyle P}, andQ{\displaystyle Q}are propositions expressed in someformal system.
Here is an example:
Here is another example:
Modus tollendo ponenscan be made stronger by usingexclusive disjunctioninstead of inclusive disjunction as a premise:
Unlikemodus ponensandmodus ponendo tollens, with which it should not be confused, disjunctive syllogism is often not made an explicit rule or axiom oflogical systems, as the above arguments can be proven with a combination ofreductio ad absurdumanddisjunction elimination.
Other forms of syllogism include:
Disjunctive syllogism holds in classical propositional logic andintuitionistic logic, but not in someparaconsistent logics.[11]
|
https://en.wikipedia.org/wiki/Disjunctive_syllogism
|
Inlogic,disjunction(also known aslogical disjunction,logical or,logical addition, orinclusive disjunction) is alogical connectivetypically notated as∨{\displaystyle \lor }and read aloud as "or". For instance, theEnglishlanguage sentence "it is sunny or it is warm" can be represented in logic using the disjunctive formulaS∨W{\displaystyle S\lor W}, assuming thatS{\displaystyle S}abbreviates "it is sunny" andW{\displaystyle W}abbreviates "it is warm".
Inclassical logic, disjunction is given atruth functionalsemantics according to which a formulaϕ∨ψ{\displaystyle \phi \lor \psi }is true unless bothϕ{\displaystyle \phi }andψ{\displaystyle \psi }are false. Because this semantics allows a disjunctive formula to be true when both of its disjuncts are true, it is aninclusiveinterpretation of disjunction, in contrast withexclusive disjunction. Classicalproof theoreticaltreatments are often given in terms of rules such asdisjunction introductionanddisjunction elimination. Disjunction has also been given numerousnon-classicaltreatments, motivated by problems includingAristotle's sea battle argument,Heisenberg'suncertainty principle, as well as the numerous mismatches between classical disjunction and its nearest equivalents innatural languages.[1][2]
Anoperandof a disjunction is adisjunct.[3]
Because the logicalormeans a disjunction formula is true when either one or both of its parts are true, it is referred to as aninclusivedisjunction. This is in contrast with anexclusive disjunction, which is true when one or the other of the arguments is true, but not both (referred to asexclusive or, orXOR).
When it is necessary to clarify whether inclusive or exclusiveoris intended, English speakers sometimes use the phraseand/or. In terms of logic, this phrase is identical toor, but makes the inclusion of both being true explicit.
In logic and related fields, disjunction is customarily notated with an infix operator∨{\displaystyle \lor }(UnicodeU+2228∨LOGICAL OR).[1]Alternative notations include+{\displaystyle +}, used mainly inelectronics, as well as|{\displaystyle \vert }and||{\displaystyle \vert \!\vert }in manyprogramming languages. The English wordoris sometimes used as well, often in capital letters. InJan Łukasiewicz'sprefix notation for logic, the operator isA{\displaystyle A}, short for Polishalternatywa(English: alternative).[4]
In mathematics, the disjunction of an arbitrary number of elementsa1,…,an{\displaystyle a_{1},\ldots ,a_{n}}can be denoted as aniterated binary operationusing a larger ⋁ (UnicodeU+22C1⋁N-ARY LOGICAL OR):[5]
⋁i=1nai=a1∨a2∨⋯∨an−1∨an{\displaystyle \bigvee _{i=1}^{n}a_{i}=a_{1}\lor a_{2}\lor \cdots \lor a_{n-1}\lor a_{n}}
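In a programming setting, this iterated disjunction is simply a fold of the binary `or` over a sequence. As an illustrative sketch (not from the original article), Python's built-in `any` computes exactly this:

```python
from functools import reduce

# n-ary disjunction as the left fold of binary `or` over the disjuncts.
xs = [False, False, True, False]
folded = reduce(lambda p, q: p or q, xs, False)

assert any(xs) == folded       # `any` is the iterated disjunction
assert any([]) is False        # the empty disjunction is false
```

The identity element `False` supplied to `reduce` reflects that a disjunction of zero disjuncts is false.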
In thesemantics of logic, classical disjunction is atruth functionaloperationwhich returns thetruth valuetrueunless both of its arguments arefalse. Its semantic entry is standardly given as follows:[a]
This semantics corresponds to the followingtruth table:[1]
Inclassical logicsystems where logical disjunction is not a primitive, it can be defined in terms of the primitiveand(∧{\displaystyle \land }) andnot(¬{\displaystyle \lnot }) as:
Alternatively, it may be defined in terms ofimplies(→{\displaystyle \to }) andnotas:[6]
The latter can be checked by the following truth table:
It may also be defined solely in terms of→{\displaystyle \to }:
It can be checked by the following truth table:
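All three definitions above can also be checked mechanically. The following is a minimal sketch (not part of the original article) that compares each definition against `or` on every truth assignment:

```python
from itertools import product

# Material implication: a -> b is (not a) or b.
def implies(a, b):
    return (not a) or b

ok = all(
    (P or Q) == (not ((not P) and (not Q)))     # via and/not (De Morgan)
    and (P or Q) == implies(not P, Q)           # via implies and not
    and (P or Q) == implies(implies(P, Q), Q)   # via implies alone
    for P, Q in product([False, True], repeat=2)
)
assert ok
```

Since each equivalence holds on all four rows, the three expressions define the same truth function as disjunction.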
The following properties apply to disjunction:
Operatorscorresponding to logical disjunction exist in mostprogramming languages.
Disjunction is often used forbitwise operations. Examples:
Theoroperator can be used to set bits in abit fieldto 1, byor-ing the field with a constant field with the relevant bits set to 1. For example,x = x | 0b00000001will force the final bit to 1, while leaving other bits unchanged.[citation needed]
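The bit-setting idiom from the paragraph above can be demonstrated directly; this is an illustrative sketch with made-up values:

```python
# OR-ing with a mask forces the masked bits to 1, leaving the rest alone.
x = 0b10110100
x = x | 0b00000001       # set the final (least significant) bit
assert x == 0b10110101

flags = 0b1000
flags |= 0b0110          # set bits 1 and 2 in a single operation
assert flags == 0b1110
```

Bits already set in the field are unaffected, since `1 | 1 == 1`.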
Many languages distinguish between bitwise and logical disjunction by providing two distinct operators; in languages followingC,bitwise disjunctionis performed with the single pipe operator (|), and logical disjunction with the double pipe (||) operator.
Logical disjunction is usuallyshort-circuited; that is, if the first (left) operand evaluates totrue, then the second (right) operand is not evaluated. The logical disjunction operator thus usually constitutes asequence point.
In a parallel (concurrent) language, it is possible to short-circuit both sides: they are evaluated in parallel, and if one terminates with value true, the other is interrupted. This operator is thus called theparallel or.
Although the type of a logical disjunction expression is Boolean in most languages (and thus can only have the valuetrueorfalse), in some languages (such asPythonandJavaScript), the logical disjunction operator returns one of its operands: the first operand if it evaluates to a true value, and the second operand otherwise.[8][9]This allows it to fulfill the role of theElvis operator.
TheCurry–Howard correspondencerelates aconstructivistform of disjunction totagged uniontypes.[citation needed][10]
Themembershipof an element of aunion setinset theoryis defined in terms of a logical disjunction:x∈A∪B⇔(x∈A)∨(x∈B){\displaystyle x\in A\cup B\Leftrightarrow (x\in A)\vee (x\in B)}. Because of this, logical disjunction satisfies many of the same identities as set-theoretic union, such asassociativity,commutativity,distributivity, andde Morgan's laws, identifyinglogical conjunctionwithset intersection,logical negationwithset complement.[11]
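The correspondence between union and disjunction can be illustrated with Python sets; the sets and universe below are made-up examples:

```python
# Membership in a union is the disjunction of the memberships.
A, B = {1, 2, 3}, {3, 4}
U = {1, 2, 3, 4, 5}          # a finite universe for taking complements

assert all((x in A | B) == ((x in A) or (x in B)) for x in U)

# Union mirrors identities of `or`:
assert A | B == B | A                         # commutativity
assert U - (A | B) == (U - A) & (U - B)       # De Morgan's law
```

Here set complement is taken relative to the chosen universe `U`, mirroring logical negation.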
Disjunction innatural languagesdoes not precisely match the interpretation of∨{\displaystyle \lor }in classical logic. Notably, classical disjunction is inclusive while natural language disjunction is often understood exclusively, as the following English example typically would be.[1]
This inference has sometimes been understood as anentailment, for instance byAlfred Tarski, who suggested that natural language disjunction isambiguousbetween a classical and a nonclassical interpretation. More recent work inpragmaticshas shown that this inference can be derived as aconversational implicatureon the basis of asemanticdenotation which behaves classically. However, disjunctive constructions includingHungarianvagy... vagyandFrenchsoit... soithave been argued to be inherently exclusive, rendering ungrammaticality in contexts where an inclusive reading would otherwise be forced.[1]
Similar deviations from classical logic have been noted in cases such asfree choice disjunctionandsimplification of disjunctive antecedents, where certainmodal operatorstrigger aconjunction-like interpretation of disjunction. As with exclusivity, these inferences have been analyzed both as implicatures and as entailments arising from a nonclassical interpretation of disjunction.[1]
In many languages, disjunctive expressions play a role in question formation.
For instance, while the above English example can be interpreted as apolar questionasking whether it's true that Mary is either a philosopher or a linguist, it can also be interpreted as analternative questionasking which of the two professions is hers. The role of disjunction in these cases has been analyzed using nonclassical logics such asalternative semanticsandinquisitive semantics, which have also been adopted to explain the free choice and simplification inferences.[1]
In English, as in many other languages, disjunction is expressed by acoordinating conjunction. Other languages express disjunctive meanings in a variety of ways, though it is unknown whether disjunction itself is alinguistic universal. In many languages such asDyirbalandMaricopa, disjunction is marked using a verbsuffix. For instance, in the Maricopa example below, disjunction is marked by the suffixšaa.[1]
Johnš
John-NOM
Billš
Bill-NOM
vʔaawuumšaa
3-come-PL-FUT-INFER
Johnš Billš vʔaawuumšaa
John-NOM Bill-NOM 3-come-PL-FUT-INFER
'John or Bill will come.'
|
https://en.wikipedia.org/wiki/Inclusive_or
|
Inmathematics, aninvolution,involutory function, orself-inverse function[1]is afunctionfthat is its owninverse,
f(f(x)) =xfor allxin thedomainoff.[2]Equivalently, applyingftwice produces the original value.
Any involution is abijection.
Theidentity mapis a trivial example of an involution. Examples of nontrivial involutions includenegation(x↦ −x),reciprocation(x↦ 1/x), andcomplex conjugation(z↦z̄) inarithmetic;reflection, half-turnrotation, andcircle inversioningeometry;complementationinset theory; andreciprocal cipherssuch as theROT13transformation and theBeaufortpolyalphabetic cipher.
Thecompositiong∘fof two involutionsfandgis an involution if and only if theycommute:g∘f=f∘g.[3]
The number of involutions, including the identity involution, on a set withn= 0, 1, 2, ...elements is given by arecurrence relationfound byHeinrich August Rothein 1800:an=an−1+(n−1)an−2{\displaystyle a_{n}=a_{n-1}+(n-1)a_{n-2}}witha0=a1= 1.
The first few terms of this sequence are1, 1,2,4,10,26,76,232(sequenceA000085in theOEIS); these numbers are called thetelephone numbers, and they also count the number ofYoung tableauxwith a given number of cells.[4]The numberancan also be expressed by non-recursive formulas, such as the suman=∑m=0⌊n2⌋n!2mm!(n−2m)!.{\displaystyle a_{n}=\sum _{m=0}^{\lfloor {\frac {n}{2}}\rfloor }{\frac {n!}{2^{m}m!(n-2m)!}}.}
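Both the recurrence and the closed-form sum can be computed directly. The following is a minimal sketch (not part of the original article) that checks them against each other and against the listed terms:

```python
from math import factorial

# Telephone numbers via Rothe's recurrence a_n = a_{n-1} + (n-1) a_{n-2}.
def involutions_recurrence(n):
    a, b = 1, 1                      # a_0, a_1
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return b if n >= 1 else a

# The non-recursive sum: each term counts involutions with m 2-cycles.
def involutions_sum(n):
    return sum(factorial(n) // (2**m * factorial(m) * factorial(n - 2*m))
               for m in range(n // 2 + 1))

assert [involutions_recurrence(n) for n in range(8)] == [1, 1, 2, 4, 10, 26, 76, 232]
assert all(involutions_recurrence(n) == involutions_sum(n) for n in range(20))
```

The index `m` in the sum ranges over the possible numbers of transpositions in the involution, with the remaining `n − 2m` elements fixed.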
The number of fixed points of an involution on a finite set and itsnumber of elementshave the sameparity. Thus the number of fixed points of all the involutions on a given finite set have the same parity. In particular, every involution on anodd numberof elements has at least onefixed point. This can be used to proveFermat's two squares theorem.[5]
Thegraphof an involution (on the real numbers) issymmetricacross the liney=x. This is due to the fact that the inverse of anygeneralfunction will be its reflection over the liney=x. This can be seen by "swapping"xwithy. If, in particular, the function is aninvolution, then its graph is its own reflection.
Some basic examples of involutions include the functionsf(x)=a−x,f(x)=bx−a+a{\displaystyle {\begin{alignedat}{1}f(x)&=a-x\;,\\f(x)&={\frac {b}{x-a}}+a\end{alignedat}}}In addition, an involution can be constructed by conjugating an involutiongby a bijectionhand its inverse (h−1∘g∘h{\displaystyle h^{-1}\circ g\circ h}). For instance:f(x)=1−x2on[0;1](g(x)=1−xandh(x)=x2),f(x)=ln(ex+1ex−1)(g(x)=x+1x−1=2x−1+1andh(x)=ex){\displaystyle {\begin{alignedat}{2}f(x)&={\sqrt {1-x^{2}}}\quad {\textrm {on}}\;[0;1]&{\bigl (}g(x)=1-x\quad {\textrm {and}}\quad h(x)=x^{2}{\bigr )},\\f(x)&=\ln \left({\frac {e^{x}+1}{e^{x}-1}}\right)&{\bigl (}g(x)={\frac {x+1}{x-1}}={\frac {2}{x-1}}+1\quad {\textrm {and}}\quad h(x)=e^{x}{\bigr )}\\\end{alignedat}}}
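Each of the examples above can be checked numerically for f(f(x)) = x. The following sketch uses illustrative parameter values a = 3, b = 5 and sample points chosen inside each function's domain:

```python
import math

a, b = 3, 5   # illustrative parameters for the first two involutions

f1 = lambda x: a - x
f2 = lambda x: b / (x - a) + a
f3 = lambda x: math.sqrt(1 - x**2)                        # on [0, 1]
f4 = lambda x: math.log((math.exp(x) + 1) / (math.exp(x) - 1))  # x > 0

# Applying each function twice returns the starting point.
for f, x in [(f1, 1.25), (f2, 7.5), (f3, 0.6), (f4, 0.8)]:
    assert math.isclose(f(f(x)), x)
```

The sample points avoid the singularities (x = a for f2, x = 0 for f4), where the functions are undefined.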
A simple example of an involution of the three-dimensionalEuclidean spaceisreflectionthrough aplane. Performing a reflection twice brings a point back to its original coordinates.
Another involution isreflection through the origin; it is not a reflection in the above sense, and so is a distinct example.
These transformations are examples ofaffine involutions.
An involution is aprojectivityof period 2, that is, a projectivity that interchanges pairs of points.[6]: 24
Another type of involution occurring in projective geometry is apolaritythat is acorrelationof period 2.[9]
In linear algebra, an involution is a linear operatorTon a vector space, such thatT2=I. Except in characteristic 2, such operators are diagonalizable for a given basis with just1s and−1s on the diagonal of the corresponding matrix. If the operator is orthogonal (anorthogonal involution), it is orthonormally diagonalizable.
For example, suppose that a basis for a vector spaceVis chosen, and thate1ande2are basis elements. There exists a linear transformationfthat sendse1toe2, and sendse2toe1, and that is the identity on all other basis vectors. It can be checked thatf(f(x)) =xfor allxinV. That is,fis an involution ofV.
For a specific basis, any linear operator can be represented by amatrixT. Every matrix has atranspose, obtained by swapping rows for columns. This transposition is an involution on the set of matrices. Since elementwisecomplex conjugationis an independent involution, theconjugate transposeorHermitian adjointis also an involution.
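The transpose and conjugate-transpose involutions can be demonstrated on a small complex matrix; this sketch (with made-up entries) uses NumPy:

```python
import numpy as np

M = np.array([[1 + 2j, 3],
              [4j,     5 - 1j]])

# Transposition is an involution on matrices: (M^T)^T = M.
assert np.array_equal(M.T.T, M)

# The Hermitian adjoint (conjugate transpose) is also an involution,
# since transposition and elementwise conjugation commute.
adjoint = lambda X: X.conj().T
assert np.array_equal(adjoint(adjoint(M)), M)
```

Note that transposition alone and conjugation alone are each involutions, and their composition, the adjoint, is an involution because they commute (cf. the composition criterion g∘f = f∘g above).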
The definition of involution extends readily tomodules. Given a moduleMover aringR, anRendomorphismfofMis called an involution iff2is the identity homomorphism onM.
Involutions are related to idempotents; if2is invertible then theycorrespondin a one-to-one manner.
Infunctional analysis,Banach *-algebrasandC*-algebrasare special types ofBanach algebraswith involutions.
In aquaternion algebra, an (anti-)involution is defined by the following axioms: if we consider a transformationx↦f(x){\displaystyle x\mapsto f(x)}then it is an involution if
An anti-involution does not obey the last axiom but instead
This former law is sometimes calledantidistributive. It also appears ingroupsas(xy)−1= (y)−1(x)−1. Taken as an axiom, it leads to the notion ofsemigroup with involution, of which there are natural examples that are not groups, for example square matrix multiplication (i.e. thefull linear monoid) withtransposeas the involution.
Inring theory, the wordinvolutionis customarily taken to mean anantihomomorphismthat is its own inverse function.
Examples of involutions in common rings:
Ingroup theory, an element of agroupis an involution if it hasorder2; that is, an involution is an elementasuch thata≠eanda2=e, whereeis theidentity element.[10]Originally, this definition agreed with the first definition above, since members of groups were always bijections from a set into itself; that is,groupwas taken to meanpermutation group. By the end of the 19th century,groupwas defined more broadly, and accordingly so wasinvolution.
Apermutationis an involution if and only if it can be written as a finite product of disjointtranspositions.
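This characterization can be illustrated concretely. The following sketch (with made-up permutations, written as lists where `p[i]` is the image of `i`) checks that a product of disjoint transpositions squares to the identity while a 3-cycle does not:

```python
# (0 1)(2 3) with 4 and 5 fixed: disjoint transpositions, so an involution.
p = [1, 0, 3, 2, 4, 5]
assert all(p[p[i]] == i for i in range(len(p)))     # p o p = identity

# The 3-cycle (0 1 2) is not an involution: applying it twice
# gives the inverse cycle, not the identity.
q = [1, 2, 0]
assert not all(q[q[i]] == i for i in range(len(q)))
```

In cycle notation, squaring a permutation squares each cycle independently, and a cycle squares to the identity exactly when its length is 1 or 2.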
The involutions of a group have a large impact on the group's structure. The study of involutions was instrumental in theclassification of finite simple groups.
An elementxof a groupGis calledstrongly realif there is an involutiontwithxt=x−1(wherext=t−1⋅x⋅t).
Coxeter groupsare groups generated by a setSof involutions subject only to relations involving powers of pairs of elements ofS. Coxeter groups can be used, among other things, to describe the possibleregular polyhedraand theirgeneralizations to higher dimensions.
The operation of complement inBoolean algebrasis an involution. Accordingly,negationinclassical logicsatisfies thelaw of double negation:¬¬Ais equivalent toA.
Generally innon-classical logics, negation that satisfies the law of double negation is calledinvolutive. Inalgebraic semantics, such a negation is realized as an involution on the algebra oftruth values. Examples of logics that have involutive negation are Kleene and Bochvarthree-valued logics,Łukasiewicz many-valued logic, thefuzzy logic'involutive monoidal t-norm logic' (IMTL), etc. Involutive negation is sometimes added as an additional connective to logics with non-involutive negation; this is usual, for example, int-norm fuzzy logics.
The involutiveness of negation is an important characterization property for logics and the correspondingvarieties of algebras. For instance, involutive negation characterizesBoolean algebrasamongHeyting algebras. Correspondingly, classicalBoolean logicarises by adding the law of double negation tointuitionistic logic. The same relationship holds also betweenMV-algebrasandBL-algebras(and so correspondingly betweenŁukasiewicz logicand fuzzy logicBL), IMTL andMTL, and other pairs of important varieties of algebras (respectively, corresponding logics).
In the study ofbinary relations, every relation has aconverse relation. Since the converse of the converse is the original relation, the conversion operation is an involution on thecategory of relations. Binary relations areorderedthroughinclusion. While this ordering is reversed with thecomplementationinvolution, it is preserved under conversion.
TheXORbitwise operationwith a given value for one parameter is an involution on the other parameter. XORmasksin some instances were used to draw graphics on images in such a way that drawing them twice on the background reverts the background to its original state.
Two special cases of this, which are also involutions, are thebitwise NOToperation which is XOR with an all-ones value, andstream cipherencryption, which is an XOR with a secretkeystream.
This predates binary computers; practically all mechanical cipher machines implement areciprocal cipher, an involution on each typed-in letter.
Instead of designing two kinds of machines, one for encrypting and one for decrypting, all the machines can be identical and can be set up (keyed) the same way.[11]
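The XOR-based reciprocal cipher described above can be sketched in a few lines; the keystream here is a made-up illustrative value, not a secure key:

```python
# XOR with a fixed keystream is an involution on byte strings,
# so the same routine both encrypts and decrypts.
def xor_cipher(data: bytes, keystream: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream))

key = bytes([0x5A, 0x13, 0xC7, 0x88, 0x2E])   # illustrative keystream
msg = b"hello"

ct = xor_cipher(msg, key)
assert ct != msg                              # ciphertext differs
assert xor_cipher(ct, key) == msg             # applying again decrypts
```

Because (m ⊕ k) ⊕ k = m for every bit, a single machine or function serves for both directions, which is exactly the operational advantage noted above.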
Another involution used in computers is an order-2 bitwise permutation. For example, a color value stored as integers in the form(R,G,B)could exchangeRandB, resulting in the form(B,G,R):f(f(RGB)) = RGB,f(f(BGR)) = BGR.
Legendre transformation, which converts between theLagrangianandHamiltonian, is an involutive operation.
Integrability, a central notion of physics and in particular the subfield ofintegrable systems, is closely related to involution, for example in context ofKramers–Wannier duality.
|
https://en.wikipedia.org/wiki/Involution_(mathematics)
|