In mathematics, the discrete Fourier transform over a ring generalizes the discrete Fourier transform (DFT), of a function whose values are commonly complex numbers, over an arbitrary ring.

Let $R$ be any ring, let $n\geq 1$ be an integer, and let $\alpha \in R$ be a principal $n$th root of unity, defined by:[1]

$$\alpha^n = 1, \qquad \sum_{j=0}^{n-1} \alpha^{jk} = 0 \quad \text{for } 1 \leq k < n. \tag{1}$$

The discrete Fourier transform maps an $n$-tuple $(v_0,\ldots,v_{n-1})$ of elements of $R$ to another $n$-tuple $(f_0,\ldots,f_{n-1})$ of elements of $R$ according to the formula

$$f_k = \sum_{j=0}^{n-1} v_j \alpha^{jk}. \tag{2}$$

By convention, the tuple $(v_0,\ldots,v_{n-1})$ is said to be in the time domain and the index $j$ is called time. The tuple $(f_0,\ldots,f_{n-1})$ is said to be in the frequency domain and the index $k$ is called frequency. The tuple $(f_0,\ldots,f_{n-1})$ is also called the spectrum of $(v_0,\ldots,v_{n-1})$. This terminology derives from the applications of Fourier transforms in signal processing.

If $R$ is an integral domain (which includes fields), it is sufficient to choose $\alpha$ as a primitive $n$th root of unity, which replaces the condition (1) by:[1]

$$\alpha^k \neq 1 \quad \text{for } 1 \leq k < n.$$

Proof: Take $\beta = \alpha^k$ with $1 \leq k < n$. Since $\alpha^n = 1$, $\beta^n = (\alpha^n)^k = 1$, giving

$$\beta^n - 1 = (\beta - 1)\left(\sum_{j=0}^{n-1}\beta^j\right) = 0,$$

where the sum matches (1). Since $\alpha$ is a primitive root of unity, $\beta - 1 \neq 0$. Since $R$ is an integral domain, the sum must be zero. ∎

Another simple condition applies in the case where $n$ is a power of two: (1) may be replaced by $\alpha^{n/2} = -1$.[1]

The inverse of the discrete Fourier transform is given as

$$v_j = \frac{1}{n} \sum_{k=0}^{n-1} f_k \alpha^{-jk}, \tag{3}$$

where $1/n$ is the multiplicative inverse of $n$ in $R$ (if this inverse does not exist, the DFT cannot be inverted).
Proof of the inverse: Substituting (2) into the right-hand side of (3), we get

$$\frac{1}{n} \sum_{k=0}^{n-1} \sum_{j'=0}^{n-1} v_{j'} \alpha^{j'k} \alpha^{-jk} = \frac{1}{n} \sum_{j'=0}^{n-1} v_{j'} \sum_{k=0}^{n-1} \alpha^{(j'-j)k}.$$

This is exactly equal to $v_j$, because $\sum_{k=0}^{n-1} \alpha^{(j'-j)k} = 0$ when $j' \neq j$ (by (1) with $k = j'-j$), and $\sum_{k=0}^{n-1} \alpha^{(j'-j)k} = n$ when $j' = j$. ∎

Since the discrete Fourier transform is a linear operator, it can be described by matrix multiplication. In matrix notation, the discrete Fourier transform is expressed as follows:

$$\begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_{n-1} \end{pmatrix} = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & \alpha & \cdots & \alpha^{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & \alpha^{n-1} & \cdots & \alpha^{(n-1)(n-1)} \end{pmatrix} \begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_{n-1} \end{pmatrix}.$$

The matrix for this transformation is called the DFT matrix. Similarly, the matrix notation for the inverse Fourier transform is

$$\begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_{n-1} \end{pmatrix} = \frac{1}{n} \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & \alpha^{-1} & \cdots & \alpha^{-(n-1)} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & \alpha^{-(n-1)} & \cdots & \alpha^{-(n-1)(n-1)} \end{pmatrix} \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_{n-1} \end{pmatrix}.$$

Sometimes it is convenient to identify an $n$-tuple $(v_0,\ldots,v_{n-1})$ with a formal polynomial

$$p_v(x) = v_0 + v_1 x + \cdots + v_{n-1} x^{n-1}.$$

By writing out the summation in the definition of the discrete Fourier transform (2), we obtain

$$f_k = v_0 + v_1 \alpha^k + \cdots + v_{n-1} \alpha^{(n-1)k}.$$

This means that $f_k$ is just the value of the polynomial $p_v(x)$ for $x = \alpha^k$, i.e.,

$$f_k = p_v(\alpha^k).$$

The Fourier transform can therefore be seen to relate the coefficients and the values of a polynomial: the coefficients are in the time domain, and the values are in the frequency domain. Here, of course, it is important that the polynomial is evaluated at the $n$th roots of unity, which are exactly the powers of $\alpha$.

Similarly, the definition of the inverse Fourier transform (3) can be written

$$v_j = \frac{1}{n}\left(f_0 + f_1 \alpha^{-j} + \cdots + f_{n-1} \alpha^{-(n-1)j}\right).$$

With $p_f(x) = f_0 + f_1 x + \cdots + f_{n-1} x^{n-1}$, this means that

$$v_j = \frac{1}{n} p_f(\alpha^{-j}).$$

We can summarize this as follows: if the values of $p_v(x)$ are the coefficients of $p_f(x)$, then the values of $p_f(x)$ are the coefficients of $p_v(x)$, up to a scalar factor and reordering.[2]

If $F = \mathbb{C}$ is the field of complex numbers, then the $n$th roots of unity can be visualized as points on the unit circle of the complex plane.
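The coefficients-to-values view translates directly into code. The following is a minimal sketch over the ring $\mathbb{Z}/p$, using the assumed small example $p = 5$, $n = 4$, $\alpha = 2$ (chosen here for illustration; it is not from the original text); the forward transform evaluates $p_v$ at the powers of $\alpha$, and the inverse uses formula (3).

```python
# Sketch: DFT over the ring Z/p as polynomial evaluation at powers of an
# n-th root of unity. Assumed example parameters: p = 5, n = 4, alpha = 2
# (2^4 = 16 ≡ 1 mod 5, and no smaller power of 2 is 1).
p, n, alpha = 5, 4, 2

def dft(v, alpha, p):
    """f_k = sum_j v_j * alpha^(j*k) mod p, i.e. evaluate p_v(x) at x = alpha^k."""
    n = len(v)
    return [sum(v[j] * pow(alpha, j * k, p) for j in range(n)) % p
            for k in range(n)]

def inverse_dft(f, alpha, p):
    """v_j = (1/n) * sum_k f_k * alpha^(-j*k) mod p, formula (3)."""
    n = len(f)
    n_inv = pow(n, -1, p)        # multiplicative inverse of n in Z/p (Python 3.8+)
    a_inv = pow(alpha, -1, p)    # alpha^(-1) in Z/p
    return [n_inv * sum(f[k] * pow(a_inv, j * k, p) for k in range(n)) % p
            for j in range(n)]

v = [1, 2, 3, 4]
f = dft(v, alpha, p)             # f = [0, 4, 3, 2]
assert inverse_dft(f, alpha, p) == v
```

The round trip recovers the input exactly because $n = 4$ is invertible mod 5 and $\alpha = 2$ is a primitive (hence principal) 4th root of unity there.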
In this case, one usually takes

$$\alpha = e^{-\frac{2\pi i}{n}},$$

which yields the usual formula for the complex discrete Fourier transform:

$$f_k = \sum_{j=0}^{n-1} v_j e^{-\frac{2\pi i}{n} jk}.$$

Over the complex numbers, it is often customary to normalize the formulas for the DFT and inverse DFT by using the scalar factor $\frac{1}{\sqrt{n}}$ in both formulas, rather than $1$ in the formula for the DFT and $\frac{1}{n}$ in the formula for the inverse DFT. With this normalization, the DFT matrix is then unitary. Note that $\sqrt{n}$ does not make sense in an arbitrary field.

If $F = \mathrm{GF}(q)$ is a finite field, where $q$ is a prime power, then the existence of a primitive $n$th root automatically implies that $n$ divides $q-1$, because the multiplicative order of each element must divide the size of the multiplicative group of $F$, which is $q-1$. This in particular ensures that $n = \underbrace{1+1+\cdots+1}_{n\text{ times}}$ is invertible, so that the notation $\frac{1}{n}$ in (3) makes sense.

An application of the discrete Fourier transform over $\mathrm{GF}(q)$ is the reduction of Reed–Solomon codes to BCH codes in coding theory. Such a transform can be carried out efficiently with proper fast algorithms, for example, the cyclotomic fast Fourier transform.

Suppose $F = \mathrm{GF}(p)$. If $p \nmid n$, it may still be the case that $n \nmid p-1$, which means we cannot find an $n$th root of unity in $F$. We may nevertheless view the Fourier transform as an isomorphism

$$F[C_n] = F[x]/(x^n-1) \cong \bigoplus_i F[x]/(P_i(x))$$

for some polynomials $P_i(x)$, in accordance with Maschke's theorem.
The map is given by the Chinese remainder theorem, and the inverse is given by applying Bézout's identity for polynomials.[3]

We have $x^n - 1 = \prod_{d \mid n} \Phi_d(x)$, a product of cyclotomic polynomials. Factoring $\Phi_d(x)$ in $F[x]$ is equivalent to factoring the prime ideal $(p)$ in $\mathbb{Z}[\zeta] = \mathbb{Z}[x]/(\Phi_d(x))$. We obtain $g$ polynomials $P_1, \ldots, P_g$ of degree $f$, where $fg = \varphi(d)$ and $f$ is the order of $p \bmod d$.

As above, we may extend the base field to $\mathrm{GF}(q)$ in order to find a primitive root, i.e. a splitting field for $x^n - 1$. Now $x^n - 1 = \prod_k (x - \alpha^k)$, so an element $\sum_{j=0}^{n-1} v_j x^j \in F[x]/(x^n-1)$ maps to

$$\sum_{j=0}^{n-1} v_j x^j \bmod (x - \alpha^k) \equiv \sum_{j=0}^{n-1} v_j (\alpha^k)^j$$

for each $k$.

When $p \mid n$, we may still define an $F_p$-linear isomorphism as above. Note that $(x^n - 1) = (x^m - 1)^{p^s}$, where $n = m p^s$ and $p \nmid m$. We apply the above factorization to $x^m - 1$ and now obtain the decomposition

$$F[x]/(x^n-1) \cong \bigoplus_i F[x]/(P_i(x)^{p^s}).$$

The modules occurring are now indecomposable rather than irreducible.

Suppose $p \nmid n$, so we have an $n$th root of unity $\alpha$. Let $A$ be the above DFT matrix, a Vandermonde matrix with entries $A_{ij} = \alpha^{ij}$ for $0 \leq i, j < n$.
Recall that $\sum_{j=0}^{n-1} \alpha^{(k-l)j} = n\,\delta_{k,l}$: if $k = l$, then every term is 1, and if $k \neq l$, then we have a geometric series with common ratio $\alpha^{k-l}$, so the sum is $\frac{1-\alpha^{n(k-l)}}{1-\alpha^{k-l}}$. Since $\alpha^n = 1$ the numerator is zero, but $k - l \neq 0$, so the denominator is nonzero.

First computing the square, $(A^2)_{ik} = \sum_{j=0}^{n-1} \alpha^{j(i+k)} = n\,\delta_{i,-k}$. Computing $A^4 = (A^2)^2$ similarly and simplifying the deltas, we obtain $(A^4)_{ik} = n^2 \delta_{i,k}$. Thus $A^4 = n^2 I_n$, and the order of $A$ is $4 \cdot \mathrm{ord}(n^2)$.

In order to align with the complex case and ensure the matrix has order exactly 4, we can normalize the above DFT matrix $A$ by $\frac{1}{\sqrt{n}}$. Note that though $\sqrt{n}$ may not exist in the splitting field $F_q$ of $x^n - 1$, we may form a quadratic extension $F_{q^2} \cong F_q[x]/(x^2 - n)$ in which the square root exists. We may then set $U = \frac{1}{\sqrt{n}} A$, and $U^4 = I_n$.

Suppose $p \nmid n$. One can ask whether the DFT matrix is unitary over a finite field. If the matrix entries are over $F_q$, then one must ensure that $q$ is a perfect square, or extend to $F_{q^2}$, in order to define the order-two automorphism $x \mapsto x^q$. Consider the above DFT matrix $A_{ij} = \alpha^{ij}$. Note that $A$ is symmetric. Conjugating and transposing, we obtain $A^*_{ij} = \alpha^{qji}$.
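The identity $A^4 = n^2 I_n$ can be checked numerically. Here is a minimal sketch, again assuming the small illustrative parameters $p = 5$, $n = 4$, $\alpha = 2$ (not from the original text); over $\mathbb{Z}/5$ we have $n^2 = 16 \equiv 1$, so $A^4$ reduces to the identity.

```python
# Sketch: verify A^4 = n^2 * I for the DFT matrix A_ij = alpha^(i*j)
# over a finite field. Assumed example parameters: p = 5, n = 4, alpha = 2.
p, n, alpha = 5, 4, 2
A = [[pow(alpha, i * j, p) for j in range(n)] for i in range(n)]

def matmul(X, Y, p):
    """Multiply two n x n matrices with entries reduced mod p."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) % p
             for j in range(len(Y[0]))] for i in range(len(X))]

A2 = matmul(A, A, p)   # (A^2)_ik = n * delta_{i,-k}: n times an index "flip"
A4 = matmul(A2, A2, p) # flipping twice gives the identity, scaled by n^2

# n^2 = 16 ≡ 1 (mod 5), so A^4 is the identity matrix here.
assert A4 == [[(n * n) % p if i == j else 0 for j in range(n)] for i in range(n)]
```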
The product is

$$(AA^*)_{ik} = \sum_{j=0}^{n-1} \alpha^{j(i+qk)} = n\,\delta_{i,-qk}$$

by a geometric series argument similar to the one above. We may remove the $n$ by normalizing, so that $U = \frac{1}{\sqrt{n}} A$ and $(UU^*)_{ik} = \delta_{i,-qk}$. Thus $U$ is unitary iff $q \equiv -1 \pmod n$.

Recall that since we have an $n$th root of unity, $n \mid q^2 - 1$. This means that $q^2 - 1 = (q+1)(q-1) \equiv 0 \pmod n$. Note that if $q$ was not a perfect square to begin with, then $n \mid q - 1$ and so $q \equiv 1 \pmod n$.

For example, when $p = 3$, $n = 5$, we need to extend to $q^2 = 3^4$ to get a 5th root of unity, and $q = 9 \equiv -1 \pmod 5$. For a nonexample, when $p = 3$, $n = 8$, we extend to $F_{3^2}$ to get an 8th root of unity. Then $q^2 = 9$, so $q \equiv 3 \pmod 8$, and in this case $q + 1 \not\equiv 0$ and $q - 1 \not\equiv 0$; $UU^*$ is a square root of the identity, so $U$ is not unitary.

When $p \nmid n$, we have an $n$th root of unity $\alpha$ in the splitting field $F_q$ of $x^n - 1$. Note that the characteristic polynomial of the above DFT matrix may not split over $F_q$. Since the DFT matrix has order 4, we may need to go to a further extension $F_{q'}$, the splitting field of the characteristic polynomial of the DFT matrix, which at least contains fourth roots of unity.
If $a$ is a generator of the multiplicative group of $F_{q'}$, then the eigenvalues of the DFT matrix are $\{\pm 1, \pm a^{(q'-1)/4}\}$, in exact analogy with the complex case. They occur with some nonnegative multiplicity.

The number-theoretic transform (NTT)[4] is obtained by specializing the discrete Fourier transform to $F = \mathbb{Z}/p$, the integers modulo a prime $p$. This is a finite field, and primitive $n$th roots of unity exist whenever $n$ divides $p-1$, so we have $p = \xi n + 1$ for a positive integer $\xi$. Specifically, let $\omega$ be a primitive $(p-1)$th root of unity; then an $n$th root of unity $\alpha$ can be found by letting $\alpha = \omega^\xi$. For example, for $p = 5$ and $n = 4$ we can take $\alpha = 2$, since $2^4 = 16 \equiv 1 \pmod 5$ and no smaller positive power of 2 is congruent to 1.

The number theoretic transform may be meaningful in the ring $\mathbb{Z}/m$, even when the modulus $m$ is not prime, provided a principal root of order $n$ exists. Special cases of the number theoretic transform such as the Fermat Number Transform ($m = 2^k + 1$), used by the Schönhage–Strassen algorithm, or the Mersenne Number Transform[5] ($m = 2^k - 1$) use a composite modulus.

In general, if $m = \prod_i p_i^{e_i}$, then one may find an $n$th root of unity mod $m$ by finding primitive $n$th roots of unity $g_i$ mod $p_i^{e_i}$, yielding a tuple $g = (g_i)_i \in \prod_i (\mathbb{Z}/p_i^{e_i}\mathbb{Z})^*$. The preimage of $g$ under the Chinese remainder theorem isomorphism is an $n$th root of unity $\alpha$ such that $\alpha^{n/2} \equiv -1 \bmod m$. This ensures that the above summation conditions are satisfied.
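The recipe $\alpha = \omega^\xi$ for a prime modulus can be sketched directly. The parameters below ($p = 17$, $n = 4$, so $\xi = 4$) are an illustrative assumption, not from the original text; the brute-force primitive-root search stands in for however $\omega$ is actually obtained.

```python
# Sketch: find an n-th root of unity alpha = omega^xi in Z/p, where omega is
# a primitive root mod p and p = xi*n + 1. Assumed example: p = 17, n = 4.
p, n = 17, 4
xi = (p - 1) // n                    # p = xi*n + 1, so xi = 4 here

def primitive_root(p):
    """Brute-force the smallest generator of (Z/p)* (fine for tiny p)."""
    for w in range(2, p):
        if len({pow(w, k, p) for k in range(1, p)}) == p - 1:
            return w

omega = primitive_root(p)            # omega = 3 for p = 17
alpha = pow(omega, xi, p)            # alpha = 3^4 mod 17 = 13

# alpha has multiplicative order exactly n, i.e. it is a primitive n-th root:
assert pow(alpha, n, p) == 1
assert all(pow(alpha, k, p) != 1 for k in range(1, n))
```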
We must have $n \mid \varphi(p_i^{e_i})$ for each $i$, where $\varphi$ is Euler's totient function.[6]

The discrete weighted transform (DWT) is a variation on the discrete Fourier transform over arbitrary rings involving weighting the input before transforming it, by multiplying elementwise by a weight vector, then weighting the result by another vector.[7] The irrational base discrete weighted transform is a special case of this.

Most of the important attributes of the complex DFT, including the inverse transform, the convolution theorem, and most fast Fourier transform (FFT) algorithms, depend only on the property that the kernel of the transform is a principal root of unity. These properties also hold, with identical proofs, over arbitrary rings. In the case of fields, this analogy can be formalized by the field with one element, considering any field with a primitive $n$th root of unity as an algebra over the extension field $\mathbf{F}_{1^n}$.[clarification needed]

In particular, the applicability of $O(n \log n)$ fast Fourier transform algorithms to compute the NTT, combined with the convolution theorem, means that the number-theoretic transform gives an efficient way to compute exact convolutions of integer sequences. While the complex DFT can perform the same task, it is susceptible to round-off error in finite-precision floating-point arithmetic; the NTT has no round-off because it deals purely with fixed-size integers that can be exactly represented.

For the implementation of a "fast" algorithm (similar to how the FFT computes the DFT), it is often desirable that the transform length also be highly composite, e.g., a power of two. However, there are specialized fast Fourier transform algorithms for finite fields, such as Wang and Zhu's algorithm,[8] that are efficient regardless of the transform length's factors.
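The exact-convolution claim can be illustrated with a small sketch using a naive $O(n^2)$ NTT and the convolution theorem. The parameters ($p = 257$, $n = 4$) and the helper `find_root` are illustrative assumptions; inputs are zero-padded so the cyclic convolution equals the ordinary polynomial product, and all result coefficients stay below $p$, so the answer is exact.

```python
# Sketch: exact cyclic convolution of integer sequences via the NTT and the
# convolution theorem. Assumed example parameters: p = 257 (prime), n = 4.
p, n = 257, 4

def find_root(n, p):
    """Brute-force an element of multiplicative order exactly n in Z/p."""
    for a in range(2, p):
        if pow(a, n, p) == 1 and all(pow(a, k, p) != 1 for k in range(1, n)):
            return a

alpha = find_root(n, p)

def ntt(v, a):
    return [sum(v[j] * pow(a, j * k, p) for j in range(n)) % p for k in range(n)]

def intt(f, a):
    n_inv = pow(n, -1, p)
    return [n_inv * sum(f[k] * pow(a, -j * k, p) for k in range(n)) % p
            for j in range(n)]

x, y = [1, 2, 0, 0], [3, 4, 0, 0]          # zero-padded, so no wrap-around
fx, fy = ntt(x, alpha), ntt(y, alpha)
conv = intt([u * w % p for u, w in zip(fx, fy)], alpha)

# The coefficients of (1 + 2t)(3 + 4t) = 3 + 10t + 8t^2, computed exactly:
assert conv == [3, 10, 8, 0]
```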
https://en.wikipedia.org/wiki/Number-theoretic_transform
Prosthaphaeresis (from the Greek προσθαφαίρεσις) was an algorithm used in the late 16th century and early 17th century for approximate multiplication and division using formulas from trigonometry. For the 25 years preceding the invention of the logarithm in 1614, it was the only known generally applicable way of approximating products quickly. Its name comes from the Greek prosthen (πρόσθεν) meaning before and aphaeresis (ἀφαίρεσις), meaning taking away or subtraction.[1][2][3]

In ancient times the term was used to mean a reduction to bring the apparent place of a moving point or planet to the mean place (see Equation of the center). Nicholas Copernicus mentions "prosthaphaeresis" several times in his 1543 work De Revolutionibus Orbium Coelestium, to mean the "great parallax" caused by the displacement of the observer due to the Earth's annual motion.

In 16th-century Europe, celestial navigation of ships on long voyages relied heavily on ephemerides to determine their position and course. These voluminous charts prepared by astronomers detailed the position of stars and planets at various points in time. The models used to compute these were based on spherical trigonometry, which relates the angles and arc lengths of spherical triangles (see diagram, right) using formulas such as the spherical law of cosines,

$$\cos a = \cos b \cos c + \sin b \sin c \cos A,$$

where $a$, $b$ and $c$ are the angles subtended at the centre of the sphere by the corresponding arcs.

When one quantity in such a formula is unknown but the others are known, the unknown quantity can be computed using a series of multiplications, divisions, and trigonometric table lookups. Astronomers had to make thousands of such calculations, and because the best method of multiplication available was long multiplication, most of this time was spent taxingly multiplying out products. Mathematicians, particularly those who were also astronomers, were looking for an easier way, and trigonometry was one of the most advanced and familiar fields to these people.
Prosthaphaeresis appeared in the 1580s, but its originator is not known for certain;[4] its contributors included the mathematicians Ibn Yunis, Johannes Werner, Paul Wittich, Joost Bürgi, Christopher Clavius, and François Viète. Wittich, Ibn Yunis, and Clavius were all astronomers and have all been credited by various sources with discovering the method. Its most well-known proponent was Tycho Brahe, who used it extensively for astronomical calculations such as those described above. It was also used by John Napier, who is credited with inventing the logarithms that would supplant it.

The trigonometric identities exploited by prosthaphaeresis relate products of trigonometric functions to sums. They include the following:

$$\sin a \sin b = \tfrac{1}{2}\left[\cos(a-b) - \cos(a+b)\right]$$
$$\cos a \cos b = \tfrac{1}{2}\left[\cos(a-b) + \cos(a+b)\right]$$
$$\sin a \cos b = \tfrac{1}{2}\left[\sin(a+b) + \sin(a-b)\right]$$
$$\cos a \sin b = \tfrac{1}{2}\left[\sin(a+b) - \sin(a-b)\right]$$

The first two of these are believed to have been derived by Jost Bürgi,[citation needed] who related them to [Tycho?] Brahe;[citation needed] the others follow easily from these two. If both sides are multiplied by 2, these formulas are also called the Werner formulas.

Using the second formula above, the technique for multiplication of two numbers works as follows:

1. Scale down: shift the decimal point of each number so that it lies between −1 and 1, and regard each result as the cosine of an angle.
2. Inverse cosine: using a cosine table, find the angles whose cosines are the scaled values.
3. Sum and difference: compute the sum and the difference of the two angles.
4. Average the cosines: look up the cosines of the sum and the difference, and average them; by the formula above, this gives the product of the two cosines.
5. Scale up: shift the decimal point back to obtain the approximate product.

For example, to multiply $309$ and $78.8$, scale them to $0.309 = \cos 72^\circ$ and $0.788 = \cos 38^\circ$; averaging $\cos 110^\circ \approx -0.342$ and $\cos 34^\circ \approx 0.829$ gives $\approx 0.2435$, which scales up to $24{,}350$, close to the true product $24{,}349.2$.

If we want the product of the cosines of the two initial values, which is useful in some of the astronomical calculations mentioned above, this is surprisingly even easier: only steps 3 and 4 above are necessary.

To divide, we exploit the definition of the secant as the reciprocal of the cosine. To divide $3420$ by $127$, we scale the numbers to $0.342$ and $1.27$. Now $0.342$ is the cosine of $70^\circ$. Using a table of secants, we find $1.27$ is the secant of $38^\circ$. This means that $1/1.27 \approx \cos 38^\circ$, and so we can multiply $0.342$ by $1/1.27$ using the above procedure.
Average the cosine of the sum of the angles, $70^\circ + 38^\circ = 108^\circ$, with the cosine of their difference, $70^\circ - 38^\circ = 32^\circ$:

$$\frac{\cos 108^\circ + \cos 32^\circ}{2} \approx \frac{-0.309 + 0.848}{2} \approx 0.2695.$$

Scaling up to locate the decimal point gives the approximate answer, $26.95$.

Algorithms using the other formulas are similar, but each uses different tables (sine, inverse sine, cosine, and inverse cosine) in different places. The first two are the easiest because they each only require two tables. Using the second formula, however, has the unique advantage that if only a cosine table is available, it can be used to estimate inverse cosines by searching for the angle with the nearest cosine value.

Notice how similar the above algorithm is to the process for multiplying using logarithms, which follows the steps: scale down, take logarithms, add, take the inverse logarithm, scale up. It is no surprise that the originators of logarithms had used prosthaphaeresis. Indeed, the two are closely related mathematically. In modern terms, prosthaphaeresis can be viewed as relying on the logarithm of complex numbers, in particular on Euler's formula

$$e^{i\theta} = \cos\theta + i\sin\theta.$$

If all the operations are performed with high precision, the product can be as accurate as desired. Although sums, differences, and averages are easy to compute with high precision, even by hand, trigonometric functions and especially inverse trigonometric functions are not. For this reason, the accuracy of the method depends to a large extent on the accuracy and detail of the trigonometric tables used. For example, a sine table with an entry for each degree can be off by as much as 0.0087 if we just round an angle off to the nearest degree; each time we double the size of the table (for example, by giving entries for every half-degree instead of every degree) we halve this error. Tables were painstakingly constructed for prosthaphaeresis with values for every second, or 3600th of a degree.
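The scale–lookup–average–rescale procedure can be sketched in a few lines. This is a modern illustration, not a historical reconstruction: `math.acos` and `math.cos` stand in for the inverse-cosine and cosine tables, and the power-of-ten scaling is one reasonable way to implement step 1.

```python
# Sketch of prosthaphaeresis multiplication via
# cos(a)*cos(b) = (cos(a+b) + cos(a-b)) / 2,
# with library trig functions standing in for the historical tables.
import math

def prosthaphaeresis_product(x, y):
    # Step 1: scale each factor into (0, 1] so it can be read as a cosine.
    sx = 10 ** math.ceil(math.log10(abs(x)))
    sy = 10 ** math.ceil(math.log10(abs(y)))
    # Step 2: "inverse cosine table" lookups.
    a, b = math.acos(x / sx), math.acos(y / sy)
    # Steps 3-4: average the cosines of the sum and the difference.
    avg = (math.cos(a + b) + math.cos(a - b)) / 2
    # Step 5: scale back up.
    return avg * sx * sy

print(prosthaphaeresis_product(309, 78.8))   # ≈ 24349.2, i.e. 309 * 78.8
```

With exact trig values the method is exact; the historical error came entirely from table granularity and rounding, which the floating-point version here does not model.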
Inverse sine and cosine functions are particularly troublesome, because they become steep near −1 and 1. One solution is to include more table values in this area. Another is to scale the inputs to numbers between −0.9 and 0.9. For example, 950 would become 0.095 instead of 0.950.

Another effective approach to enhancing the accuracy is linear interpolation, which chooses a value between two adjacent table values. For example, if we know that the sine of 45° is about 0.707 and the sine of 46° is about 0.719, we can estimate the sine of 45.7° as 0.707 × (1 − 0.7) + 0.719 × 0.7 = 0.7154. The actual sine is 0.7157. A table of cosines with only 180 entries combined with linear interpolation is as accurate as a table with about 45,000 entries without it. Even a quick estimate of the interpolated value is often much closer than the nearest table value. See lookup table for more details.

The product formulas can also be manipulated to obtain formulas that express addition in terms of multiplication, such as

$$\sin a + \sin b = 2 \sin\frac{a+b}{2}\cos\frac{a-b}{2}, \qquad \cos a + \cos b = 2 \cos\frac{a+b}{2}\cos\frac{a-b}{2}.$$

Although less useful for computing products, these are still useful for deriving trigonometric results.
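The 45.7° interpolation example above can be checked mechanically. This sketch builds a hypothetical three-decimal, one-entry-per-degree sine table and interpolates between adjacent entries, reproducing the accuracy claim in the text.

```python
# Sketch: linear interpolation in a degree-granularity, 3-decimal sine table
# (the 45.7-degree example from the text).
import math

table = {d: round(math.sin(math.radians(d)), 3) for d in range(0, 91)}

def interp_sin(deg):
    """Linearly interpolate between the two adjacent table entries."""
    lo = int(deg)
    frac = deg - lo
    return table[lo] * (1 - frac) + table[lo + 1] * frac

approx = interp_sin(45.7)   # 0.707 * 0.3 + 0.719 * 0.7 = 0.7154
# The true value is about 0.7157, so the interpolated estimate is within 5e-4:
assert abs(approx - math.sin(math.radians(45.7))) < 5e-4
```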
https://en.wikipedia.org/wiki/Prosthaphaeresis
The Trachtenberg system is a system of rapid mental calculation. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. It was developed by the Russian engineer Jakow Trachtenberg in order to keep his mind occupied while being held prisoner in a Nazi concentration camp. This article presents some methods devised by Trachtenberg. Some of the algorithms Trachtenberg developed are for general multiplication, division and addition. The Trachtenberg system also includes some specialised methods for multiplying small numbers between 5 and 13. The section on addition demonstrates an effective method of checking calculations that can also be applied to multiplication.

The method for general multiplication is a method to achieve multiplications $a \times b$ with low space complexity, i.e. with as few temporary results as possible to be kept in memory. This is achieved by noting that the final digit is completely determined by multiplying the last digits of the multiplicands. This is held as a temporary result. To find the next-to-last digit, we need everything that influences this digit: the temporary result, the last digit of $a$ times the next-to-last digit of $b$, and the next-to-last digit of $a$ times the last digit of $b$. This calculation is performed, and we have a temporary result that is correct in the final two digits. In general, for each position $n$ in the final result, we sum, over all $i$, the digit products $a_i \times b_{n-i}$, together with the carry from the previous position.

People can learn this algorithm and thus multiply four-digit numbers in their head, writing down only the final result. They would write it out starting with the rightmost digit and finishing with the leftmost. Trachtenberg defined this algorithm with a kind of pairwise multiplication where two digits are multiplied by one digit, essentially only keeping the middle digit of the result.
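The low-space scheme just described can be sketched as code: only the running carry is held between result digits. This is a straightforward illustration of the digit arithmetic, not Trachtenberg's finger notation.

```python
# Sketch: digit-by-digit multiplication with low space complexity. Each result
# digit at position n sums a_i * b_(n-i) plus the carry from the previous
# position; digits are stored least-significant first.
def digitwise_multiply(a_digits, b_digits):
    """a_digits, b_digits: digits least-significant first; returns product digits."""
    n, m = len(a_digits), len(b_digits)
    result, carry = [], 0
    for pos in range(n + m - 1):
        total = carry + sum(a_digits[i] * b_digits[pos - i]
                            for i in range(max(0, pos - m + 1), min(n, pos + 1)))
        result.append(total % 10)   # write down one digit of the answer
        carry = total // 10         # only the carry is kept in memory
    while carry:
        result.append(carry % 10)
        carry //= 10
    return result

# 123456 * 789, with digits reversed on input and output:
digits = digitwise_multiply([6, 5, 4, 3, 2, 1], [9, 8, 7])
assert int("".join(map(str, digits[::-1]))) == 123456 * 789
```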
By performing the above algorithm with this pairwise multiplication, even fewer temporary results need to be held.

Example: $123456 \times 789$. To find the first (rightmost) digit of the answer, start at the first digit of the multiplicand; to find the second digit of the answer, start at the second digit of the multiplicand; to find the third digit, start at the third digit; and so on, continuing with the same method to obtain the remaining digits.

Trachtenberg called this the 2 Finger Method. The calculations for finding the fourth digit from the example above are illustrated at right. The arrow from the nine will always point to the digit of the multiplicand directly above the digit of the answer you wish to find, with the other arrows each pointing one digit to the right. Each arrow head points to a UT Pair, or Product Pair. The vertical arrow points to the product where we will get the units digit, and the sloping arrow points to the product where we will get the tens digit of the Product Pair. If an arrow points to a space with no digit, there is no calculation for that arrow. As you solve for each digit, you will move each of the arrows over the multiplicand one digit to the left until all of the arrows point to prefixed zeros.

Division in the Trachtenberg system is done much the same as multiplication, but with subtraction instead of addition. Splitting the dividend into smaller partial dividends, then dividing each partial dividend by only the left-most digit of the divisor, provides the answer one digit at a time. As you solve each digit of the answer, you then subtract Product Pairs (UT pairs) and also NT pairs (Number–Tens) from the partial dividend to find the next partial dividend. The Product Pairs are found between the digits of the answer so far and the divisor.
If a subtraction results in a negative number, you have to back up one digit and reduce that digit of the answer by one. With enough practice this method can be done in your head.

When performing any of these multiplication algorithms, the following "steps" should be applied. The answer must be found one digit at a time, starting at the least significant digit and moving left. The last calculation is on the leading zero of the multiplicand. Each digit has a neighbor, i.e., the digit on its right. The rightmost digit's neighbor is the trailing zero.

The 'halve' operation has a particular meaning in the Trachtenberg system. It is intended to mean "half the digit, rounded down", but for speed reasons people following the Trachtenberg system are encouraged to make this halving process instantaneous. So instead of thinking "half of seven is three and a half, so three", it is suggested that one think "seven, three". This speeds up calculation considerably. In the same way, the tables for subtracting digits from 10 or 9 are to be memorized. And whenever the rule calls for adding half of the neighbor, always add 5 if the current digit is odd. This makes up for dropping 0.5 in the next digit's calculation.

Digits and numbers are two different notions. The number $T$ consists of $n$ digits $c_n \ldots c_1$:
$$T = 10^{n-1}\cdot c_n + \ldots + 10^0\cdot c_1$$

Multiplying by 2. Proof:

$$\begin{aligned}R&=T\cdot 2\Leftrightarrow \\R&=2\cdot(10^{n-1}\cdot c_{n}+\ldots +10^{0}\cdot c_{1})\Leftrightarrow \\R&=10^{n-1}\cdot 2\cdot c_{n}+\ldots +10^{0}\cdot 2\cdot c_{1}\qquad\blacksquare\end{aligned}$$

Rule: double each digit of the multiplicand in turn, carrying where necessary.

Example: 8624 × 2 = 17248, working from right to left. Example: 76892 × 2 = 153784, working from right to left.

Multiplying by 3. Proof:

$$\begin{aligned}R&=T\cdot 3\Leftrightarrow \\R&=3\cdot(10^{n-1}\cdot c_{n}+\ldots +10^{0}\cdot c_{1})\Leftrightarrow \\R&=(10/2-2)\cdot(10^{n-1}\cdot c_{n}+10^{n-2}\cdot c_{n-1}+\ldots +10^{0}\cdot c_{1})\Leftrightarrow \\R&=10^{n}\cdot(c_{n}/2-2)+10^{n}\cdot 2+10^{n-1}\cdot(c_{n-1}/2-2)+10^{n-1}\cdot 2+\ldots +10^{1}\cdot(c_{1}/2-2)+10^{1}\cdot 2\\&\quad -2\cdot(10^{n-1}\cdot c_{n}+10^{n-2}\cdot c_{n-1}+\ldots +10^{1}\cdot c_{2}+10^{0}\cdot c_{1})\Leftrightarrow \\R&=10^{n}\cdot(c_{n}/2-2)+10^{n-1}\cdot(c_{n-1}/2+20-2-2\cdot c_{n})+\ldots +10^{1}\cdot(c_{1}/2+20-2-2\cdot c_{2})+10^{0}\cdot(20-2\cdot c_{1})\Leftrightarrow \\R&=10^{n}\cdot(c_{n}/2-2)+10^{n-1}\cdot(2\cdot(9-c_{n})+c_{n-1}/2)+\ldots +10^{1}\cdot(2\cdot(9-c_{2})+c_{1}/2)+10^{0}\cdot(2\cdot(10-c_{1}))\end{aligned}$$

Passing to integer arithmetic via $a = (a\ \mathrm{div}\ b)\cdot b + (a \bmod b)$, each half $c/2$ becomes $c\ \mathrm{div}\ 2$, with the odd remainder pushed one position lower as a $+5$:

$$\begin{aligned}R&=10^{n}\cdot((c_{n}\ \mathrm{div}\ 2)-2)+10^{n-1}\cdot(2\cdot(9-c_{n})+(c_{n-1}\ \mathrm{div}\ 2)+\mathrm{if}(c_{n}\bmod 2\neq 0;5;0))\\&\quad +\ldots +10^{1}\cdot(2\cdot(9-c_{2})+(c_{1}\ \mathrm{div}\ 2)+\mathrm{if}(c_{2}\bmod 2\neq 0;5;0))\\&\quad +10^{0}\cdot(2\cdot(10-c_{1})+\mathrm{if}(c_{1}\bmod 2\neq 0;5;0))\qquad\blacksquare\end{aligned}$$

Rule (read off the last line of the proof): subtract the rightmost digit from 10 and double the result, adding 5 if that digit is odd; for each later digit, subtract the digit from 9, double, and add half of its neighbor, again adding 5 if the digit is odd; the leading digit of the answer is half the leading digit of the multiplicand, minus 2.

Example: 492 × 3 = 1476, working from right to left.

Multiplying by 4. Proof:

$$\begin{aligned}R&=T\cdot 4\Leftrightarrow \\R&=4\cdot(10^{n-1}\cdot c_{n}+\ldots +10^{0}\cdot c_{1})\Leftrightarrow \\R&=(10/2-1)\cdot(10^{n-1}\cdot c_{n}+10^{n-2}\cdot c_{n-1}+\ldots +10^{0}\cdot c_{1})\Leftrightarrow \quad\vdots\ \text{(see the proof for multiplying by 3)}\\R&=10^{n}\cdot((c_{n}\ \mathrm{div}\ 2)-1)+10^{n-1}\cdot((9-c_{n})+(c_{n-1}\ \mathrm{div}\ 2)+\mathrm{if}(c_{n}\bmod 2\neq 0;5;0))\\&\quad +\ldots +10^{1}\cdot((9-c_{2})+(c_{1}\ \mathrm{div}\ 2)+\mathrm{if}(c_{2}\bmod 2\neq 0;5;0))\\&\quad +10^{0}\cdot((10-c_{1})+\mathrm{if}(c_{1}\bmod 2\neq 0;5;0))\qquad\blacksquare\end{aligned}$$

Rule: subtract the rightmost digit from 10, adding 5 if that digit is odd; for each later digit, subtract the digit from 9 and add half of its neighbor, again adding 5 if the digit is odd; the leading digit of the answer is half the leading digit of the multiplicand, minus 1.

Example: 346 × 4 = 1384, working from right to left.

Multiplying by 5. Proof:
{\displaystyle {\begin{aligned}R&=T*5\Leftrightarrow \\R&=5*(10^{n-1}*c_{n}+\ldots +10^{0}*c_{1})\Leftrightarrow \\R&=(10/2)*(10^{n-1}*c_{n}+10^{n-2}*c_{n-1}+\ldots +10^{0}*c_{1})\Leftrightarrow \\R&=10^{n}*(c_{n}/2)+10^{n-1}*(c_{n-1}/2)+\ldots +10^{1}*(c_{1}/2)\Leftrightarrow \vdots \Re \to \aleph {\text{: }}a=(a{\text{ div }}b)*b+(a{\bmod {b}})\\R&=10^{n}*((c_{n}{\text{ div }}2)*2+(c_{n}{\bmod {2}}))/2+10^{n-1}*((c_{n-1}{\text{ div }}2)*2+(c_{n-1}{\bmod {2}}))/2\\&+\ldots +10^{2}*((c_{2}{\text{ div }}2)*2+(c_{2}{\bmod {2}}))/2+10^{1}*((c_{1}{\text{ div }}2)*2+(c_{1}{\bmod {2}}))/2\Leftrightarrow \\R&=10^{n}*((c_{n}{\text{ div }}2)+(c_{n}{\bmod {2}})/2)+10^{n-1}*((c_{n-1}{\text{ div }}2)+(c_{n-1}{\bmod {2}})/2)\\&+\ldots +10^{2}*((c_{2}{\text{ div }}2)+(c_{2}{\bmod {2}})/2)+10^{1}*((c_{1}{\text{ div }}2)+(c_{1}{\bmod {2}})/2)\Leftrightarrow \\R&=10^{n}*(c_{n}{\text{ div }}2)+10^{n-1}*10*(c_{n}{\bmod {2}})/2+10^{n-1}*(c_{n-1}{\text{ div }}2)+10^{n-2}*10*(c_{n-1}{\bmod {2}})/2+10^{n-2}*(c_{n-2}{\text{ div }}2)\\&+\ldots +10^{2}*(c_{2}{\text{ div }}2)+10^{1}*10*(c_{2}{\bmod {2}})/2+10^{1}*(c_{1}{\text{ div }}2)+10^{0}*10*(c_{1}{\bmod {2}})/2\Leftrightarrow \\R&=10^{n}*(c_{n}{\text{ div }}2)+10^{n-1}*(c_{n-1}{\text{ div }}2)+10^{n-1}*(c_{n}{\bmod {2}})*5+10^{n-2}*(c_{n-2}{\text{ div }}2)+10^{n-2}*(c_{n-1}{\bmod {2}})*5\\&+\ldots +10^{2}*(c_{2}{\text{ div }}2)+10^{2}*(c_{3}{\bmod {2}})*5+10^{1}*(c_{1}{\text{ div }}2)+10^{1}*(c_{2}{\bmod {2}})*5+10^{0}*(c_{1}{\bmod {2}})*5\Leftrightarrow \\R&=10^{n}*(c_{n}{\text{ div }}2)+10^{n-1}*((c_{n-1}{\text{ div }}2)+{\text{ if}}(c_{n}{\bmod {2}}<>0;5;0))+10^{n-2}*((c_{n-2}{\text{ div }}2)+{\text{ if}}(c_{n-1}{\bmod {2}}<>0;5;0))\\&+\ldots +10^{2}*((c_{2}{\text{ div }}2)+{\text{ if}}(c_{3}{\bmod {2}}<>0;5;0))+10^{1}*((c_{1}{\text{ div }}2)+{\text{ if}}(c_{2}{\bmod {2}}<>0;5;0))+10^{0}*{\text{ if}}(c_{1}{\bmod {2}}<>0;5;0)\\\\&QED\end{aligned}}}

Rule: Example: 42 × 5 = 210.

Proof

{\displaystyle {\begin{aligned}R&=T*6\Leftrightarrow \\R&=6*(10^{n-1}*c_{n}+\ldots +10^{0}*c_{1})\Leftrightarrow \\R&=(10/2+1)*(10^{n-1}*c_{n}+10^{n-2}*c_{n-1}+\ldots +10^{0}*c_{1})\Leftrightarrow \\R&=10^{n}*c_{n}/2+1*10^{n-1}*c_{n}+10^{n-1}*c_{n-1}/2+1*10^{n-2}*c_{n-1}+\ldots +10^{1}*c_{1}/2+1*10^{0}*c_{1}\Leftrightarrow \\R&=10^{n}*c_{n}/2+10^{n-1}*(c_{n}+c_{n-1}/2)+\ldots +10^{1}*c_{1}/2+c_{1}\Leftrightarrow \vdots \Re \to \aleph {\text{: }}a=(a{\text{ div }}b)*b+(a{\bmod {b}})\\R&=10^{n}*((c_{n}{\text{ div }}2)*2+(c_{n}{\bmod {2}}))/2+10^{n-1}*(c_{n}+c_{n-1}/2)+\ldots +10^{1}*c_{1}/2+c_{1}\Leftrightarrow \\R&=10^{n}*(c_{n}{\text{ div }}2)+10^{n-1}*(c_{n}{\bmod {2}})*5+10^{n-1}*c_{n}+10^{n-1}*((c_{n-1}{\text{ div }}2)*2+(c_{n-1}{\bmod {2}}))/2+\ldots +10^{1}*c_{1}/2+c_{1}\Leftrightarrow \\R&=10^{n}*(c_{n}{\text{ div }}2)+10^{n-1}*(c_{n}+(c_{n-1}{\text{ div }}2)+{\text{ if}}((c_{n}{\bmod {2}})<>0;5;0))+10^{n-2}*(c_{n-1}{\bmod {2}})*5+\ldots +10^{1}*c_{1}/2+c_{1}\Leftrightarrow \\R&=10^{n}*(c_{n}{\text{ div }}2)+10^{n-1}*(c_{n}+(c_{n-1}{\text{ div }}2)+{\text{ if}}((c_{n}{\bmod {2}})<>0;5;0))\\&+10^{n-2}*(c_{n-1}+(c_{n-2}{\text{ div }}2)+{\text{ if}}((c_{n-1}{\bmod {2}})<>0;5;0))\\&+\ldots +10^{0}*(c_{1}+{\text{ if}}((c_{1}{\bmod {2}})<>0;5;0))\\\\&QED\end{aligned}}}

Rule: Example: 357 × 6 = 2142. Working from right to left:

Proof

{\displaystyle {\begin{aligned}R&=T*7\Leftrightarrow \\R&=7*(10^{n-1}*c_{n}+\ldots +10^{0}*c_{1})\Leftrightarrow \\R&=(10/2+2)*(10^{n-1}*c_{n}+\ldots +10^{0}*c_{1})\Leftrightarrow \vdots {\mbox{ see proof of method 6}}\\R&=10^{n}*(c_{n}{\text{ div }}2)+10^{n-1}*(2*c_{n}+(c_{n-1}{\text{ div }}2)+{\text{ if}}(c_{n}{\bmod {2}}<>0;5;0))\\&+10^{n-2}*(2*c_{n-1}+(c_{n-2}{\text{ div }}2)+{\text{ if}}(c_{n-1}{\bmod {2}}<>0;5;0))\\&+\ldots +10^{1}*(2*c_{2}+(c_{1}{\text{ div }}2)+{\text{ if}}(c_{2}{\bmod {2}}<>0;5;0))+2*c_{1}+{\text{ if}}(c_{1}{\bmod {2}}<>0;5;0)\\\\&QED\end{aligned}}}

Rule: Example: 693 × 7 = 4,851. Working from right to left:

Proof

{\displaystyle {\begin{aligned}R&=T*8\Leftrightarrow \\R&=T*4*2\Leftrightarrow \vdots {\mbox{ see proof of method 4}}\\R&=10^{n}*2*(c_{n}/2-1)+10^{n-1}*2*((9-c_{n})+c_{n-1}/2)+10^{n-2}*2*((9-c_{n-1})+c_{n-2}/2)\\&+\ldots +10^{1}*2*((9-c_{2})+c_{1}/2)+10^{0}*2*(10-c_{1})\Leftrightarrow \\R&=10^{n}*(c_{n}-2)+10^{n-1}*(2*(9-c_{n})+c_{n-1})+\ldots +10^{2}*(2*(9-c_{3})+c_{2})+10^{1}*(2*(9-c_{2})+c_{1})+2*(10-c_{1})\\\\&QED\end{aligned}}}

Rule: Example: 456 × 8 = 3648. Working from right to left:

Proof

{\displaystyle {\begin{aligned}R&=T*9\Leftrightarrow \\R&=(10-1)*T\Leftrightarrow \\R&=10^{n}*(c_{n}-1)+10^{n}+10^{n-1}*(c_{n-1}-1)+10^{n-1}+\ldots +10^{1}*(c_{1}-1)+10^{1}\\&-(10^{n-1}*c_{n}+10^{n-2}*c_{n-1}+\ldots +10^{1}*c_{2}+10^{0}*c_{1})\Leftrightarrow \vdots {\mbox{ see proof of method 4}}\\R&=10^{n}*(c_{n}-1)+10^{n-1}*(9-c_{n}+c_{n-1})+10^{n-2}*(9-c_{n-1}+c_{n-2})+\ldots +10^{1}*(9-c_{2}+c_{1})+10^{0}*(10-c_{1})\\\\&QED\end{aligned}}}

Rule: For rules 9, 8, 4, and 3, only the first digit is subtracted from 10; after that, each digit is subtracted from nine instead. Example: 2,130 × 9 = 19,170. Working from right to left: add 0 (zero) as the rightmost digit.
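The ×9 rule just proved (subtract the rightmost digit from 10, subtract each middle digit from 9 and add its right-hand neighbour, and subtract 1 from the leading digit, with ordinary carries) can be checked mechanically. A minimal Python sketch, written for this article rather than taken from the book:

```python
def times_9(t: int) -> int:
    """Multiply t by 9 using the Trachtenberg rule:
    rightmost digit: 10 - c1; middle digits: 9 - digit + right neighbour;
    leading digit of the result: cn - 1; carries propagate leftward."""
    c = [int(d) for d in str(t)]          # c[0] = c_n (leading) ... c[-1] = c_1
    out, carry = [], 0
    for i in range(len(c) - 1, -1, -1):   # work from right to left
        if i == len(c) - 1:
            v = 10 - c[i] + carry         # first step: subtract from 10
        else:
            v = 9 - c[i] + c[i + 1] + carry
        out.append(v % 10)
        carry = v // 10
    out.append(c[0] - 1 + carry)          # final step: leading digit minus 1
    return int("".join(map(str, reversed(out))))
```

Running the worked example, `times_9(2130)` yields 19170, matching 2,130 × 9 above.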
Proof

{\displaystyle {\begin{aligned}R&=T*10\Leftrightarrow \\R&=10*(10^{n-1}*c_{n}+\ldots +10^{0}*c_{1})\Leftrightarrow \\R&=10^{n}*c_{n}+\ldots +10^{1}*c_{1}\\\\&QED\end{aligned}}}

Proof

{\displaystyle {\begin{aligned}R&=T*11\Leftrightarrow \\R&=T*(10+1)\Leftrightarrow \\R&=10*(10^{n-1}*c_{n}+\ldots +10^{0}*c_{1})+(10^{n-1}*c_{n}+\ldots +10^{0}*c_{1})\Leftrightarrow \\R&=10^{n}*c_{n}+10^{n-1}*(c_{n}+c_{n-1})+\ldots +10^{1}*(c_{2}+c_{1})+c_{1}\\\\&QED\end{aligned}}}

Rule: Example: 3,425 × 11 = 37,675.

Proof

{\displaystyle {\begin{aligned}R&=T*12\Leftrightarrow \\R&=T*(10+2)\Leftrightarrow \\R&=10*(10^{n-1}*c_{n}+\ldots +10^{0}*c_{1})+2*(10^{n-1}*c_{n}+\ldots +10^{0}*c_{1})\Leftrightarrow \\R&=10^{n}*c_{n}+10^{n-1}*(2*c_{n}+c_{n-1})+\ldots +10^{1}*(2*c_{2}+c_{1})+2*c_{1}\\\\&QED\end{aligned}}}

Rule: To multiply by 12: starting from the rightmost digit, double each digit and add the neighbour. (The "neighbour" is the digit on the right.) If the answer is greater than a single digit, simply carry over the extra digit (which will be a 1 or 2) to the next operation. The remaining digit is one digit of the final result. Example: 316 × 12. Determine neighbours in the multiplicand 0316:

Proof

{\displaystyle {\begin{aligned}R&=T*13\Leftrightarrow \\R&=T*(10+3)\Leftrightarrow \\R&=10*(10^{n-1}*c_{n}+\ldots +10^{0}*c_{1})+3*(10^{n-1}*c_{n}+\ldots +10^{0}*c_{1})\Leftrightarrow \\R&=10^{n}*c_{n}+10^{n-1}*(3*c_{n}+c_{n-1})+\ldots +10^{1}*(3*c_{2}+c_{1})+3*c_{1}\\\\&QED\end{aligned}}}

The book contains specific algebraic explanations for each of the above operations.
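The ×12 "double and add the neighbour" rule lends itself to the same mechanical check. A short Python sketch (illustrative, not from the book), including the leading zero prepended as in the 0316 example:

```python
def times_12(t: int) -> int:
    """Double each digit and add its right-hand neighbour, right to left,
    carrying as usual; a leading 0 is prepended as in the 0316 example."""
    c = [0] + [int(d) for d in str(t)]    # prepend the leading zero
    out, carry = [], 0
    for i in range(len(c) - 1, -1, -1):
        neighbour = c[i + 1] if i + 1 < len(c) else 0
        v = 2 * c[i] + neighbour + carry
        out.append(v % 10)
        carry = v // 10
    while carry:                          # any carry left over the leading zero
        out.append(carry % 10)
        carry //= 10
    return int("".join(map(str, reversed(out))))
```

For the worked example, `times_12(316)` gives 3792 (6→12, write 2 carry 1; 2·1+6+1=9; 2·3+1=7; 0+3=3).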
Most of the information in this article is from the original book. The algorithms/operations for multiplication, etc., can be expressed in other, more compact ways that the book does not specify, despite its chapter on algebraic description.[a] There are many other methods of calculation in mental mathematics. The list below shows a few other methods of calculating, though they may not be entirely mental.
https://en.wikipedia.org/wiki/Trachtenberg_system
A residue number system or residue numeral system (RNS) is a numeral system representing integers by their values modulo several pairwise coprime integers called the moduli. This representation is allowed by the Chinese remainder theorem, which asserts that, if M is the product of the moduli, there is, in an interval of length M, exactly one integer having any given set of modular values. Using a residue numeral system for arithmetic operations is also called multi-modular arithmetic. Multi-modular arithmetic is widely used for computation with large integers, typically in linear algebra, because it provides faster computation than with the usual numeral systems, even when the time for converting between numeral systems is taken into account. Other applications of multi-modular arithmetic include polynomial greatest common divisor, Gröbner basis computation and cryptography. A residue numeral system is defined by a set of k integers called the moduli, which are generally supposed to be pairwise coprime (that is, any two of them have a greatest common divisor equal to one). Residue number systems have been defined for non-coprime moduli, but are not commonly used because of worse properties.[1] An integer x is represented in the residue numeral system by the family of its remainders (one per modulus) under Euclidean division by the moduli. That is, x_i = x mod m_i, with 0 ≤ x_i < m_i, for every i. Let M be the product of all the m_i. Two integers whose difference is a multiple of M have the same representation in the residue numeral system defined by the m_i. More precisely, the Chinese remainder theorem asserts that each of the M different sets of possible residues represents exactly one residue class modulo M. That is, each set of residues represents exactly one integer X in the interval 0, …, M − 1.
For signed numbers, the dynamic range is −⌊M/2⌋ ≤ X ≤ ⌊(M − 1)/2⌋ (when M is even, generally an extra negative value is represented).[2] For adding, subtracting and multiplying numbers represented in a residue number system, it suffices to perform the same modular operation on each pair of residues. More precisely, if m_1, …, m_k is the list of moduli, the sum of the integers x and y, respectively represented by the residues [x_1, …, x_k] and [y_1, …, y_k], is the integer z represented by [z_1, …, z_k] such that z_i = (x_i + y_i) mod m_i for i = 1, ..., k (as usual, mod denotes the modulo operation consisting of taking the remainder of the Euclidean division by the right operand). Subtraction and multiplication are defined similarly. For a succession of operations, it is not necessary to apply the modulo operation at each step. It may be applied at the end of the computation, or, during the computation, for avoiding overflow of hardware operations. However, operations such as magnitude comparison, sign computation, overflow detection, scaling, and division are difficult to perform in a residue number system.[3] If two integers are equal, then all their residues are equal. Conversely, if all residues are equal, then the two integers are equal or differ by a multiple of M. It follows that testing equality is easy. By contrast, testing inequalities (x < y) is difficult and usually requires converting the integers to the standard representation. As a consequence, this representation of numbers is not suitable for algorithms using inequality tests, such as Euclidean division and the Euclidean algorithm. Division in residue numeral systems is problematic.
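The component-wise arithmetic and the Chinese-remainder reconstruction described above can be sketched in a few lines of Python. The moduli (3, 5, 7) are an illustrative choice, not anything prescribed by RNS itself; `pow(N, -1, m)` (Python 3.8+) computes a modular inverse:

```python
from math import prod

MODULI = (3, 5, 7)          # pairwise coprime example moduli; M = 105
M = prod(MODULI)

def to_rns(x: int):
    """Residue representation: the remainder of x modulo each modulus."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    """Add component-wise, reducing each component by its own modulus."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Multiply component-wise, reducing each component by its own modulus."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r) -> int:
    """Chinese-remainder reconstruction of the unique x in 0..M-1."""
    x = 0
    for ri, m in zip(r, MODULI):
        N = M // m                      # product of the other moduli
        x += ri * N * pow(N, -1, m)     # inverse of N modulo m
    return x % M
```

For example, `from_rns(rns_mul(to_rns(17), to_rns(23)))` recovers 17 × 23 modulo 105; no carries ever cross between the component channels, which is what makes the channels independent and parallelizable.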
On the other hand, if B is coprime with M (that is, b_i ≠ 0 modulo m_i for every i), then C = A · B^{-1} mod M can be easily calculated, component-wise, by c_i = a_i · b_i^{-1} mod m_i, where B^{-1} is the multiplicative inverse of B modulo M, and b_i^{-1} is the multiplicative inverse of b_i modulo m_i. RNS have applications in the field of digital computer arithmetic. By decomposing a large integer in this way into a set of smaller integers, a large calculation can be performed as a series of smaller calculations that can be performed independently and in parallel.
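That coprime-divisor case is simple enough to demonstrate directly: dividing by B reduces to multiplying each residue by the inverse of the corresponding b_i. A sketch with the same illustrative moduli (3, 5, 7); the helper names are this article's, not standard API:

```python
MODULI = (3, 5, 7)           # pairwise coprime; M = 105

def to_rns(x: int):
    return tuple(x % m for m in MODULI)

def rns_div(a, b):
    """Divide component-wise by multiplying with each b_i's inverse mod m_i.
    Valid only when B is coprime to M (no b_i is 0 mod m_i); for an exact
    quotient, B must also divide A."""
    return tuple((ai * pow(bi, -1, m)) % m
                 for ai, bi, m in zip(a, b, MODULI))
```

For instance, with A = 44 and B = 11 (coprime to 105), `rns_div(to_rns(44), to_rns(11))` equals `to_rns(4)`, the residues of the exact quotient.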
https://en.wikipedia.org/wiki/Residue_number_system#Multiplication
The IBM 1620 was a model of scientific minicomputer produced by IBM. It was announced on October 21, 1959,[1] and was then marketed as an inexpensive scientific computer.[2] After a total production of about two thousand machines, it was withdrawn on November 19, 1970. Modified versions of the 1620 were used as the CPU of the IBM 1710 and IBM 1720 Industrial Process Control Systems (making it the first digital computer considered reliable enough for real-time process control of factory equipment).[1] Being variable-word-length decimal, as opposed to fixed-word-length pure binary, made it an especially attractive first computer to learn on, and hundreds of thousands of students had their first experiences with a computer on the IBM 1620. Core memory cycle times were 20 microseconds for the (earlier) Model I and 10 microseconds for the Model II (about a thousand times slower than typical computer main memory in 2006). The Model II was introduced in 1962.[3] The IBM 1620 Model I was a variable "word" length decimal (BCD) computer using core memory. The Model I core could hold 20,000 decimal digits, with each digit stored in six bits.[4][3] More memory could be added with the IBM 1623 Storage Unit: the Model 1 held 40,000 digits, and the 1623 Model 2 held 60,000.[1] The Model II deployed the IBM 1625 core-storage memory unit,[5][6] whose memory cycle time was halved, compared to the Model I's (internal or 1623 memory unit), by using faster cores: 10 μs (i.e., the cycle speed was raised to 100 kHz). While the five-digit addresses of either model could have addressed 100,000 decimal digits, no machine larger than 60,000 decimal digits was ever marketed.[7] Memory was accessed two decimal digits at a time (an even-odd digit pair for numeric data, or one alphameric character for text data).
Each decimal digit was six bits, composed of an odd-parity Check bit, a Flag bit, and four BCD bits for the value of the digit, in the following format:[8] The Flag bit had several uses: In addition to the valid BCD digit values there were three special digit values (these could not be used in calculations): Instructions were fixed length (12 decimal digits), consisting of a two-digit "op code", a five-digit "P Address" (usually the destination address), and a five-digit "Q Address" (usually the source address or the source immediate value). Some instructions, such as the B (branch) instruction, used only the P Address, and later smart assemblers included a "B7" instruction that generated a seven-digit branch instruction (op code, P address, and one extra digit, because the next instruction had to start on an even-numbered digit). Fixed-point data "words" could be any size from two decimal digits up to all of memory not used for other purposes. Floating-point data "words" (using the hardware floating-point option) could be any size from 4 decimal digits up to 102 decimal digits (2 to 100 digits for the mantissa and two digits for the exponent). The Fortran II compiler offered limited access to this flexibility via a "Source Program Control Card" preceding the Fortran source in a fixed format: the * in column one, ff the number of digits for the mantissa of floating-point numbers (allowing 02 to 28), kk the number of digits for fixed-point numbers (allowing 04 to 10), and s to specify the memory size of the computer to run the code, if not the current computer: 2, 4, or 6 for memories of 20,000, 40,000, or 60,000 digits. The machine had no programmer-accessible registers: all operations were memory to memory (including the index registers of the 1620 II). The table below lists alphameric mode characters (and op codes). The table below lists numeric mode characters.
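The six-bit digit format (Check, Flag, then four BCD value bits) can be illustrated with a small encoder/decoder. This is a sketch: the C F 8 4 2 1 field order is as described above, but the numeric bit packing chosen here is an assumption made for illustration, not the machine's physical wiring:

```python
def encode_digit(value: int, flag: bool = False) -> int:
    """Pack one decimal digit as C F 8 4 2 1 (check, flag, four BCD bits).
    The check bit is set so that the six bits have odd parity, as on the 1620."""
    assert 0 <= value <= 9
    bits = (int(flag) << 4) | value           # flag bit + BCD value
    check = (bin(bits).count("1") + 1) % 2    # force odd parity overall
    return (check << 5) | bits

def decode_digit(word: int):
    """Verify odd parity, then return (value, flag)."""
    assert bin(word).count("1") % 2 == 1, "parity error"
    return word & 0b1111, bool(word & 0b10000)
```

A single flipped bit makes the parity even, so `decode_digit` raises an error, which is exactly the kind of check the hardware's Check bit supported.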
The Model I used the Cyrillic character Ж (pronounced zh) on the typewriter as a general-purpose invalid character with correct parity (invalid parity being indicated with an overstruck "–"). In some 1620 installations it was called a SMERSH, as used in the James Bond novels that had become popular in the late 1960s. The Model II used a new character, ❚ (called "pillow"), as a general-purpose invalid character with correct parity. Although the IBM 1620's architecture was very popular in the scientific and engineering community, computer scientist Edsger Dijkstra pointed out several flaws in its design in EWD37, "A review of the IBM 1620 data processing system".[9] Among these are that the machine's Branch and Transmit instruction, together with Branch Back, allows only one level of nested subroutine call, forcing the programmer of any code with more than one level to decide where the use of this "feature" would be most effective. He also showed how the machine's paper tape reading support could not properly read tapes containing record marks, since record marks are used to terminate the characters read into storage. One effect of this is that the 1620 cannot duplicate a tape with record marks in a straightforward way: when the record mark is encountered, the punch instruction punches an EOL character instead and terminates. However, this was not a crippling problem: most 1620 installations used the more convenient punched-card input/output[10] rather than paper tape. The successor to the 1620, the IBM 1130,[11] was based on a totally different, 16-bit binary architecture. (The 1130 line retained one 1620 peripheral, the IBM 1627 drum plotter.) IBM supplied the following software for the 1620: The Monitors provided disk-based versions of 1620 SPS IId and FORTRAN IId, as well as a DUP (Disk Utility Program). Both Monitor systems required 20,000 digits or more of memory and one or more 1311 disk drives.
A collection of IBM 1620 related manuals in PDF format exists at bitsavers.[13] Since the Model I used in-memory lookup tables for addition/subtraction,[14] limited-base (5 to 9) unsigned-number arithmetic could be performed by changing the contents of these tables, noting that the hardware included a ten's complementer for subtraction (and for addition of oppositely signed numbers). To do fully signed addition and subtraction in bases 2 to 4 required a detailed understanding of the hardware to create a "folded" addition table that would fake out the complementer and carry logic. Also, the addition table would have to be reloaded for normal base-10 operation every time address calculations were required in the program, then reloaded again for the alternate base. This made the "trick" somewhat less than useful for any practical application. Since the Model II had addition and subtraction fully implemented in hardware, changing the table in memory could not be used as a "trick" to change arithmetic bases. However, an optional special feature in hardware for octal input/output, logical operations, and base conversion to/from decimal was available. Although bases other than 8 and 10 were not supported, this made the Model II very practical for applications that needed to manipulate data formatted in octal by other computers (e.g., the IBM 7090). The IBM 1620 Model I (commonly called the "1620" from 1959 until the 1962 introduction of the Model II) was the original. It was produced as inexpensively as possible, to keep the price low. The IBM 1620 Model II (commonly called simply the Model II) was a vastly improved implementation compared to the original Model I. The Model II was introduced in 1962. While the Lower console of both the Model 1[18] and the Model 2[19] IBM 1620 systems had the same lamps and switches, the Upper consoles of the pair were partly different.
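The Model I's table-driven addition, and why swapping the table changes the base for unsigned arithmetic, can be mimicked in a few lines. This is an illustrative analogue, not the 1620's actual table layout in core:

```python
from itertools import zip_longest

def make_add_table(base: int = 10):
    """A lookup table in the spirit of the Model I's in-core addition table:
    (a, b) -> (sum digit, carry).  Loading a different table changes the base."""
    return {(a, b): ((a + b) % base, (a + b) // base)
            for a in range(base) for b in range(base)}

def table_add(a_digits, b_digits, table):
    """Digit-serial unsigned addition using only table lookups
    (digit lists are little-endian, low-order digit first)."""
    out, carry = [], 0
    for a, b in zip_longest(a_digits, b_digits, fillvalue=0):
        s, c1 = table[(a, b)]          # add the two operand digits
        s, c2 = table[(s, carry)]      # fold in the incoming carry
        out.append(s)
        carry = c1 + c2                # at most one of c1, c2 can be 1
    if carry:
        out.append(carry)
    return out
```

With `make_add_table(10)` this adds decimal numbers; with `make_add_table(8)` the very same `table_add` routine produces octal sums, which is the essence of the base-changing "trick" described above (for unsigned values only, since the complementer is not modeled here).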
The balance of the Upper console was the same on both models: The Model I console typewriter was a modified Model B1, interfaced by a set of relays, and it typed at only 10 characters per second. There was a set of instructions that wrote to the typewriter or read from it. The general RN (read numeric) and WN (write numeric) instructions had assembly-language mnemonics that supplied the "device" code in the second address field and the control code in the low-order digit of the second address field. To simplify input and output, there were two instructions: The Model II used a modified Selectric typewriter, which could type at 15.5 cps, a 55% improvement. Available peripherals were: The standard "output" mechanism for a program was to punch cards, which was faster than using the typewriter. These punched cards were then fed through an IBM 407 accounting machine, which could be programmed to print two cards, thus being able to use the additional print columns available on the 407. All output was synchronous, and the processor paused while the Input/Output (I/O) device produced the output, so the typewriter output could completely dominate program running time. A faster output option, the IBM 1443 printer, was introduced May 6, 1963,[22] and its 150–600 lines/minute capability was available for use with either model of the 1620.[23][24] It could print either 120 or 144 columns. The character width was fixed, so it was the paper size that changed; the printer printed 10 characters to the inch, so a printer could print a maximum of 12 inches or 14.4 inches of text. In addition, the printer had a buffer, so the I/O delay for the processor was reduced. However, the print instruction would block if the line had not completed.
The "operating system" for the computer was the human operator, who would use controls on the computer console, which consisted of a front panel and typewriter, to load programs from the available bulk storage media, such as decks of punched cards or rolls of paper tape, that were kept in cabinets nearby. Later, the model 1311 disk storage device attached to the computer enabled a reduction in the fetch and carry of card decks or paper tape rolls, and a simple "Monitor" operating system could be loaded to help in selecting what to load from disk.[20][25] A standard preliminary was to clear the computer memory of any previous user's detritus; being magnetic cores, the memory retained its last state even if the power had been switched off. This was effected by using the console facilities to load a simple computer program, by typing its machine code at the console typewriter, running it, and stopping it. This was not challenging, as only one instruction was needed, such as 160001000000, loaded at address zero and following. This meant "transmit field immediate" (16 being the two-digit op code) to address 00010, the immediate constant field having the value 00000 (five-digit operand fields, the second being from address 11 back to 7), decrementing source and destination addresses until a digit with a "flag" was copied. This was the normal machine-code means of copying a constant of up to five digits. The digit string was addressed at its low-order end and extended through lower addresses until a digit with a flag marked its end. But for this instruction no flag would ever be found, because the source digits had shortly before been overwritten by digits lacking a flag. Thus the operation would roll around memory (even overwriting itself), filling it with all zeroes, until the operator grew tired of watching the roiling of the indicator lights and pressed the Instant Stop - Single Cycle Execute button.
Each 20,000-digit module of memory took just under one second to clear. On the 1620 II this instruction would NOT work (due to certain optimizations in the implementation). Instead there was a button on the console called Modify which, when pressed together with the Check Reset button while the computer was in Manual mode, set the computer in a mode that would clear all of memory in a tenth of a second, regardless of how much memory was installed, when Start was pressed. It also stopped automatically when memory was cleared, instead of requiring the operator to stop it. Other than typing machine code at the console, a program could be loaded via the paper tape reader, the card reader, or any disk drive. Loading from either tape or disk required first typing a "bootstrap" routine on the console typewriter. The card reader made things easier because it had a special Load button to signify that the first card was to be read into the computer's memory (starting at address 00000) and executed (as opposed to just starting the card reader, which then awaits commands from the computer to read cards). This is the "bootstrap" process that gets into the computer just enough code to read in the rest of the code (from the card reader, or disk, or...) that constitutes the loader, which will read in and execute the desired program. Programs were prepared ahead of time, offline, on paper tape or punched cards. But usually the programmers were allowed to run the programs personally, hands-on, instead of submitting them to operators, as was the case with mainframe computers at that time. And the console typewriter allowed entering data and getting output in an interactive fashion, instead of just getting the normal printed output from a blind batch run on a pre-packaged data set. As well, there were four program switches on the console whose state a running program could test, and so have its behavior directed by its user.
The computer operator could also stop a running program (or it might come to a deliberately programmed stop), then investigate or modify the contents of memory: being decimal-based, the machine made this quite easy; even floating-point numbers could be read at a glance. Execution could then be resumed from any desired point. Aside from debugging, scientific programming is typically exploratory, by contrast to commercial data processing, where the same work is repeated on a regular schedule. The most important items on the 1620's console were a pair of buttons labeled Insert & Release, and the console typewriter. The typewriter was used for operator input/output, both as the main console control of the computer and for program-controlled input/output. Later models of the typewriter had a special key marked R-S that combined the functions of the console Release & Start buttons (this would be considered equivalent to an Enter key on a modern keyboard). Note: several keys on the typewriter did not generate input characters; these included Tab and Return (the 1620's alphameric and numeric BCD character sets lacked character codes for these keys). The next most important items on the console were the buttons labeled Start, Stop-SIE, and Instant Stop-SCE. For program debugging there were the buttons labeled Save & Display MAR. When a Branch Back instruction was executed in Save mode, it copied the saved value back to the program counter (instead of copying the return address register, as it normally did) and deactivated Save mode. This was used during debugging to remember where the program had been stopped, to allow it to be resumed after the debugging instructions that the operator had typed on the typewriter had finished. Note: the MARS register used to save the program counter was also used by the Multiply instruction, so this instruction and Save mode were incompatible. However, there was no need to use multiply in debugging code, so this was not considered to be a problem.
All of main memory could be cleared from the console by entering and executing a transfer instruction from address to address +1; this would overwrite any word mark that would normally stop a transfer instruction, and wrap around at the end of memory. After a moment, pressing Stop would stop the transfer instruction and memory would be cleared. The IBM 1621 Paper Tape Reader could read a maximum of 150 characters per second; the IBM 1624 Paper Tape Punch could output a maximum of 15 characters per second.[1] Both units: The 1621 Tape Reader and 1624 Tape Punch included controls for: The IBM 1622 Card Reader/Punch could: The 1622's controls were divided into three groups: 3 punch-control rocker switches, 6 buttons, and 2 reader-control rocker switches. Punch rocker switches: Buttons: Reader rocker switches: The 1311 disk drive controls. The FORTRAN II compiler and SPS assembler were somewhat cumbersome to use[26][27] by modern standards; however, with repetition, the procedure soon became automatic, and one no longer thought about the details involved. GOTRAN was much simpler to use, as it directly produced an executable in memory. However, it was not a complete FORTRAN implementation. To improve on this, various third-party FORTRAN compilers were developed. One of these, the FLAG (FORTRAN Load-and-Go) compiler, was developed by Bob Richardson,[28][29] a programmer at Rice University. Once the FLAG deck had been loaded, all that was needed was to load the source deck to get directly to the output deck; FLAG stayed in memory, so it was immediately ready to accept the next source deck. This was particularly convenient for dealing with many small jobs. For instance, at Auckland University a batch job processor for student assignments (typically, many small programs not requiring much memory) chugged through a class lot rather faster than the later IBM 1130 did with its disk-based system.
The compiler remained in memory, and the student's program had its chance, in the remaining memory, to succeed or fail, though a bad failure might disrupt the resident compiler. Later, disk storage devices were introduced, removing the need for working storage on card decks. The various decks of cards constituting the compiler and loader no longer needed to be fetched from their cabinets but could be stored on disk and loaded under the control of a simple disk-based operating system: a lot of activity became less visible, but still went on. Since the punch side of the card reader-punch did not edge-print the characters across the top of the cards, one had to take any output decks over to a separate machine, typically an IBM 557 Alphabetic Interpreter, that read each card and printed its contents along the top. Listings were usually generated by punching a listing deck and using an IBM 407 accounting machine to print the deck. Most of the logic circuitry of the 1620 was a type of resistor–transistor logic (RTL) using "drift" transistors (a type of transistor invented by Herbert Kroemer in 1953) for their speed, which IBM referred to as Saturated Drift Transistor Resistor Logic (SDTRL). Other IBM circuit types used were referred to as: Alloy (some logic, but mostly various non-logic functions, named for the kind of transistors used), CTRL (another type of RTL, but slower than SDTRL), CTDL (a type of diode–transistor logic (DTL)), and DL (another type of RTL, named for the kind of transistor used, "drift" transistors). Typical logic levels of all these circuits (S level) were high: 0 V to -0.5 V, low: -6 V to -12 V. Transmission-line logic levels of SDTRL circuits (C level) were high: 1 V, low: -1 V. Relay circuits used either of two logic levels: (T level) high: 51 V to 46 V, low: 16 V to 0 V; or (W level) high: 24 V, low: 0 V.
These circuits were constructed of individual discrete components mounted on single-sided paper-epoxy printed circuit boards, 2.5 by 4.5 inches (64 by 114 millimeters), with a 16-pin gold-plated edge connector, which IBM referred to as SMS (Standard Modular System) cards. The amount of logic on one card was similar to that in one 7400-series SSI or simpler MSI package (e.g., 3 to 5 logic gates or a couple of flip-flops). These boards were inserted into sockets mounted in door-like racks, which IBM referred to as gates. The machine had the following "gates" in its basic configuration: There were two different types of core memory used in the 1620: The address-decoding logic of the main memory also used two planes of 100 pulse-transformer cores per module to generate the X-Y line half-current pulses. There were two models of the 1620, each having a totally different hardware implementation: In 1958 IBM assembled a team at the Poughkeepsie, New York development laboratory to study the "small scientific market". Initially the team consisted of Wayne Winger (manager), Robert C. Jackson, and William H. Rhodes. The competing computers in this market were the Librascope LGP-30 and the Bendix G-15; both were drum memory machines. IBM's smallest computer at the time was the popular IBM 650, a fixed-word-length decimal machine that also used drum memory. All three used vacuum tubes. It was concluded that IBM could offer nothing really new in that area. To compete effectively would require use of technologies that IBM had developed for larger computers, yet the machine would have to be produced at the least possible cost. To meet this objective, the team set the following requirements: The team expanded with the addition of Anne Deckman, Kelly B. Day, William Florac, and James Brenza. They completed the prototype (code-named CADET) in the spring of 1959. Meanwhile, the San Jose, California facility was working on a proposal of its own.
IBM could only build one of the two, and the Poughkeepsie proposal won because "the San Jose version is top of the line and not expandable, while your proposal has all kinds of expansion capability – never offer a machine that cannot be expanded". Management was not entirely convinced that core memory could be made to work in small machines, so Gerry Ottaway was loaned to the team to design a drum memory as a backup. During acceptance testing by the Product Test Lab, repeated core memory failures were encountered, and it looked likely that management's predictions would come true. However, at the last minute it was found that the muffin fan used to blow hot air through the core stack was malfunctioning, causing the core to pick up noise pulses and fail to read correctly. After the fan problem was fixed, there were no further problems with the core memory, and the drum memory design effort was discontinued as unnecessary. Following announcement of the IBM 1620 on October 21, 1959, due to an internal reorganization of IBM, it was decided to transfer the computer from the Data Processing Division at Poughkeepsie (large-scale mainframe computers only) to the General Products Division at San Jose (small computers and support products only) for manufacturing. Following the transfer to San Jose, someone there jokingly suggested that the code name CADET actually stood for "Can't Add, Doesn't Even Try", referring to the use of addition tables in memory rather than dedicated addition circuitry (and "SDTRL actually stood for Sold Down The River Logic" became a common joke among the CEs). This stuck and became very well known among the user community.[30][31][32] An IBM 1620 Model II was used by Vearl N.
Huff, NASA Headquarters (FOB 10B, Washington, DC), to program a three-dimensional simulation in Fortran of the tethered Gemini capsule – Agena rocket module two-body problem, at a time when it was not completely understood whether it was safe to tether two objects together in space, due to possible elastic-tether-induced collisions. The same computer was also used to simulate the orbits of the Gemini flights, producing printer-art charts of each orbit. These simulations were run overnight and the data examined the next day.[33] In 1963 an IBM 1620 was installed at IIT Kanpur, providing the kicker for India's software prowess.[34] In 1964 at the Australian National University, Martin Ward used an IBM 1620 Model I to calculate the order of the Janko group J1.[35] In 1966 the ITU produced an explanatory film on a 1963 system for typesetting by computer at the Washington Evening Star, using an IBM 1620 and a Linofilm phototypesetter.[36] In 1964 an IBM 1620 was installed at the University of Iceland, becoming the first computer in Iceland.[37] Many in the user community recall the 1620 being referred to as CADET, jokingly meaning "Can't Add, Doesn't Even Try", referring to the use of addition tables in memory rather than dedicated addition circuitry.[41] See development history for an explanation of all three known interpretations of the machine's code name. The internal code name CADET was selected for the machine. One of the developers says that this stood for "Computer with ADvanced Economic Technology"; however, others recall it as simply being one half of "SPACE – CADET", where SPACE was the internal code name of the IBM 1401 machine, also then under development.[citation needed]
https://en.wikipedia.org/wiki/IBM_1620
The BKM algorithm is a shift-and-add algorithm for computing elementary functions, first published in 1994 by Jean-Claude Bajard, Sylvanus Kla, and Jean-Michel Muller. BKM is based on computing complex logarithms (L-mode) and exponentials (E-mode) using a method similar to the algorithm Henry Briggs used to compute logarithms. By using a precomputed table of logarithms of negative powers of two, the BKM algorithm computes elementary functions using only integer add, shift, and compare operations. BKM is similar to CORDIC, but uses a table of logarithms rather than a table of arctangents. On each iteration, a choice of coefficient is made from a set of nine complex numbers, 1, 0, −1, i, −i, 1+i, 1−i, −1+i, −1−i, rather than only −1 or +1 as used by CORDIC. BKM provides a simpler method of computing some elementary functions, and unlike CORDIC, BKM needs no result scaling factor. The convergence rate of BKM is approximately one bit per iteration, like CORDIC, but BKM requires more precomputed table elements for the same precision because the table stores logarithms of complex operands. As with other algorithms in the shift-and-add class, BKM is particularly well suited to hardware implementation. The relative performance of a software BKM implementation in comparison to other methods such as polynomial or rational approximations will depend on the availability of fast multi-bit shifts (i.e., a barrel shifter) or hardware floating-point arithmetic. In order to solve the equation ln(x) = y, the BKM algorithm takes advantage of a basic property of logarithms: ln(a·b) = ln(a) + ln(b). Using pi notation, this identity generalizes to ln(∏_k a_k) = Σ_k ln(a_k). Because any number can be represented by a product, this allows us to choose any set of values a_k which multiply to give the value we started with.
In computer systems, it's much faster to multiply and divide by powers of 2, but because not every number is a power of 2, using a_k = 1 + 2^m is a better option than the simpler choice a_k = 2^m. Since we want to start with large changes and become more accurate as k increases, we can more specifically use a_k = 1 + 2^−k, allowing the product to approach any value between 1 and ~4.768, depending on which subset of the a_k we use in the final product. At this point, the above equation looks like this: x = ∏_k (1 + 2^−k), taken over the chosen subset of indices k. This choice of a_k reduces the computational complexity of the product from repeated multiplication to simple addition and bit-shifting, depending on the implementation. Finally, by storing the values ln(1 + 2^−k) in a table, calculating the solution is also a simple matter of addition. Iteratively, this gives us two separate sequences. One sequence approaches the input value x while the other approaches the output value ln(x) = y:

x_k = 1 if k = 0; x_k = x_{k−1} · (1 + 2^−k) if that product would be ≤ x; x_k = x_{k−1} otherwise.

Given this recursive definition, and because x_k is strictly increasing, it can be shown by induction and convergence that x_k converges to x for any 1 ≤ x ≲ 4.768. For calculating the output, we first create the reference table A_k = ln(1 + 2^−k). Then the output is computed iteratively by the definition

y_k = 0 if k = 0; y_k = y_{k−1} + A_k if x_k would be ≤ x; y_k = y_{k−1} otherwise.

The conditions in this iteration are the same as the conditions for the input.
Similar to the input, this sequence is also strictly increasing, so it can be shown that y_k converges to y for any 0 ≤ y ≲ 1.562. Because the algorithm above calculates both the input and output simultaneously, it's possible to modify it slightly so that y is the known value and x is the value we want to calculate, thereby computing the exponential instead of the logarithm. Since x becomes an unknown in this case, the conditional changes from x_{k−1} · (1 + 2^−k) ≤ x to y_{k−1} + ln(1 + 2^−k) ≤ y. To calculate the logarithm function (L-mode), the algorithm in each iteration tests whether x_n · (1 + 2^−n) ≤ x. If so, it calculates x_{n+1} and y_{n+1}. After N iterations the value of the function is known with an error of Δln(x) ≤ 2^−N. Example program for natural logarithm in C++ (see A_e for table): Logarithms for bases other than e can be calculated with similar effort. Example program for binary logarithm in C++ (see A_2 for table): The allowed argument range is the same for both examples (1 ≤ argument ≤ 4.768462058…). In the case of the base-2 logarithm, the exponent can be split off in advance (to get the integer part) so that the algorithm can be applied to the remainder (between 1 and 2). Since the argument is then smaller than 2.384231…, the iteration over k can start with 1. Working in either base, the multiplication by s can be replaced with direct modification of the floating-point exponent, subtracting 1 from it during each iteration. This results in the algorithm using only addition and no multiplication. To calculate the exponential function (E-mode), the algorithm in each iteration tests whether y_n + ln(1 + 2^−n) ≤ y. If so, it calculates x_{n+1} and y_{n+1}.
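A minimal C++ sketch of the L-mode iteration just described, assuming double arithmetic for clarity: the table entries ln(1 + 2^−k) are computed on the fly with std::log1p rather than read from a precomputed A_e table, and the multiplication by 2^−k stands in for the hardware right shift. The function name bkm_ln and the iteration count are illustrative choices, not from the original publication.

```cpp
#include <cassert>
#include <cmath>

// L-mode BKM sketch: greedily multiply x_k by (1 + 2^-k) while staying <= x,
// accumulating y_k += ln(1 + 2^-k). Valid for 1 <= x <= ~4.768.
double bkm_ln(double x, int iterations = 60) {
    double xk = 1.0;   // converges toward the argument x
    double yk = 0.0;   // converges toward ln(x)
    double s  = 1.0;   // s = 2^-k; halving it stands in for a shift
    for (int k = 0; k < iterations; ++k) {
        double candidate = xk + xk * s;   // x_k * (1 + 2^-k)
        if (candidate <= x) {
            xk = candidate;
            yk += std::log1p(s);          // table entry A_e[k] = ln(1 + 2^-k)
        }
        s *= 0.5;
    }
    return yk;
}
```

With 60 iterations the result agrees with std::log to well within double precision over the allowed argument range.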
After N iterations the value of the function is known with an error of Δexp(x) ≤ 2^−N. Example program in C++ (see A_e for table):
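A corresponding E-mode sketch in the same spirit: here y is the known value, the test compares accumulated table entries against it, and x_k converges to exp(y). Again std::log1p stands in for the precomputed A_e table, and the name bkm_exp is an illustrative choice. Valid for 0 ≤ y ≲ 1.562.

```cpp
#include <cassert>
#include <cmath>

// E-mode BKM sketch: add table entries ln(1 + 2^-k) into y_k while y_k <= y,
// multiplying x_k by the matching (1 + 2^-k); x_k then converges to exp(y).
double bkm_exp(double y, int iterations = 60) {
    double xk = 1.0, yk = 0.0;
    double s = 1.0;                        // s = 2^-k
    for (int k = 0; k < iterations; ++k) {
        double a = std::log1p(s);          // A_e[k] = ln(1 + 2^-k)
        if (yk + a <= y) {
            yk += a;
            xk += xk * s;                  // x_k *= (1 + 2^-k)
        }
        s *= 0.5;
    }
    return xk;
}
```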
https://en.wikipedia.org/wiki/BKM_algorithm
In computer science, a logical shift is a bitwise operation that shifts all the bits of its operand. The two base variants are the logical left shift and the logical right shift. This is further modulated by the number of bit positions a given value shall be shifted, such as shift left by 1 or shift right by n. Unlike an arithmetic shift, a logical shift does not preserve a number's sign bit or distinguish a number's exponent from its significand (mantissa); every bit in the operand is simply moved a given number of bit positions, and the vacant bit positions are filled, usually with zeros, and possibly ones (contrast with a circular shift). A logical shift is often used when its operand is being treated as a sequence of bits instead of as a number. Logical shifts can be useful as efficient ways to perform multiplication or division of unsigned integers by powers of two. Shifting left by n bits on a signed or unsigned binary number has the effect of multiplying it by 2^n. Shifting right by n bits on an unsigned binary number has the effect of dividing it by 2^n (rounding towards 0). Logical right shift differs from arithmetic right shift, and thus many languages have different operators for them. For example, in Java and JavaScript, the logical right shift operator is >>>, but the arithmetic right shift operator is >>. (Java has only one left shift operator, <<, because logical and arithmetic left shifts have the same effect.) The programming languages C, C++, and Go, however, have only one right shift operator, >>. Most C and C++ implementations, and Go, choose which right shift to perform depending on the type of integer being shifted: signed integers are shifted using the arithmetic shift, and unsigned integers are shifted using the logical shift. C++ additionally overloads its shift operators as the stream insertion and extraction operators used with its standard I/O objects, cout and cin.
All currently relevant C standards (ISO/IEC 9899:1999 to 2011) leave the result undefined when the number of shifts is equal to or greater than the number of bits in the operand. This allows C compilers to emit efficient code for various platforms by permitting direct use of the native shift instructions, which have differing behavior. For example, shift-left-word in PowerPC chooses the more intuitive behavior where shifting by the bit width or above gives zero,[6] whereas SHL in x86 masks the shift amount to the lower bits to reduce the maximum execution time of the instruction, so a shift by the bit width doesn't change the value.[7] Some languages, such as the .NET Framework and LLVM, also leave shifting by the bit width and above unspecified (.NET)[8] or undefined (LLVM).[9] Others choose to specify the behavior of their most common target platforms, such as C#, which specifies the x86 behavior.[10] If the bit sequence 0001 0111 (decimal 23) is logically shifted by one bit position, then a left shift gives 0010 1110 (decimal 46) and a right shift gives 0000 1011 (decimal 11). Note: MSB = most significant bit, LSB = least significant bit.
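The type-directed behavior described above can be shown in a few lines of C++. The helper names are mine; note that right-shifting a negative signed value was implementation-defined before C++20, though mainstream compilers perform an arithmetic (sign-replicating) shift.

```cpp
#include <cassert>
#include <cstdint>

// In C and C++ the shift performed follows the operand's type:
// unsigned operands get a logical (zero-filling) right shift,
// signed operands an arithmetic (sign-filling) one on common compilers.
uint8_t logical_shl(uint8_t v, int n)    { return static_cast<uint8_t>(v << n); }
uint8_t logical_shr(uint8_t v, int n)    { return static_cast<uint8_t>(v >> n); }
int8_t  arithmetic_shr(int8_t v, int n)  { return static_cast<int8_t>(v >> n); }
```

For the same bit pattern 1001 1000, the unsigned shift fills with zeros while the signed shift replicates the sign bit, which is exactly the logical/arithmetic distinction drawn in the text.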
https://en.wikipedia.org/wiki/Logical_shift_left
The non-adjacent form (NAF) of a number is a unique signed-digit representation in which non-zero values cannot be adjacent. For example, (0 1 1 1), (1 0 −1 1), (1 −1 1 1), and (1 0 0 −1) are all valid signed-digit representations of 7, but only the final representation, (1 0 0 −1), is in non-adjacent form. The non-adjacent form is also known as the "canonical signed digit" representation. NAF assures a unique representation of an integer, but its main benefit is that the Hamming weight of the value will be minimal. For regular binary representations of values, half of all bits will be non-zero, on average, but with NAF this drops to only one-third of all digits. This leads to efficient implementations of add/subtract networks (e.g. multiplication by a constant) in hardwired digital signal processing.[1] Obviously, at most half of the digits are non-zero, which was the reason it was introduced by G. W. Reitwiesner[2] for speeding up early multiplication algorithms, much like Booth encoding. Because every non-zero digit has to be adjacent to two 0s, the NAF representation can be implemented such that it only takes a maximum of m + 1 bits for a value that would normally be represented in binary with m bits. The properties of NAF make it useful in various algorithms, especially some in cryptography, e.g., for reducing the number of multiplications needed for performing an exponentiation. In the exponentiation by squaring algorithm, the number of multiplications depends on the number of non-zero bits. If the exponent is given in NAF form, a digit value 1 implies a multiplication by the base, and a digit value −1 by its reciprocal. Other ways of encoding integers that avoid consecutive 1s include Booth encoding and Fibonacci coding. There are several algorithms for obtaining the NAF representation of a value given in binary.
One such is the following method using repeated division; it works by choosing non-zero coefficients such that the resulting quotient is divisible by 2, and hence the next coefficient is zero.[3] A faster way is given by Prodinger,[4] where x is the input, np the string of positive bits, and nm the string of negative bits; this is used, for example, in A184616.
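Prodinger's word-parallel method can be sketched in a few lines of C++; the variable names follow the text (np for the +1 positions, nm for the −1 positions), and the wrapper function itself is my own.

```cpp
#include <cassert>
#include <cstdint>

// Branch-free NAF computation after Prodinger: np marks the digits that are
// +1 and nm the digits that are -1, so that x == np - nm and no two non-zero
// digits are adjacent. x should fit in 63 bits so x + (x >> 1) cannot overflow.
void naf(uint64_t x, uint64_t& np, uint64_t& nm) {
    uint64_t xh = x >> 1;
    uint64_t x3 = x + xh;
    uint64_t c  = xh ^ x3;   // marks the positions carrying non-zero digits
    np = x3 & c;
    nm = xh & c;
}
```

For x = 7 this yields np = 8 and nm = 1, i.e. the representation (1 0 0 −1) = 8 − 1 from the opening example.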
https://en.wikipedia.org/wiki/Non-adjacent_form
A redundant binary representation (RBR) is a numeral system that uses more bits than needed to represent a single binary digit, so that most numbers have several representations. An RBR is unlike usual binary numeral systems, including two's complement, which use a single bit for each digit. Many of an RBR's properties differ from those of regular binary representation systems. Most importantly, an RBR allows addition without using a typical carry.[1] When compared to a non-redundant representation, an RBR makes bitwise logical operations slower, but arithmetic operations are faster when a greater bit width is used.[2] Usually, each digit has its own sign that is not necessarily the same as the sign of the number represented. When digits have signs, that RBR is also a signed-digit representation. An RBR is a place-value notation system. In an RBR, digits are pairs of bits; that is, for every place, an RBR uses a pair of bits. The value represented by a redundant digit can be found using a translation table, which indicates the mathematical value of each possible pair of bits. As in conventional binary representation, the integer value of a given representation is a weighted sum of the values of the digits. The weight starts at 1 for the rightmost position and goes up by a factor of 2 for each subsequent position. Usually, an RBR allows negative values. There is no single sign bit that tells whether a redundantly represented number is positive or negative. Most integers have several possible representations in an RBR. Often one of the several possible representations of an integer is chosen as the "canonical" form, so each integer has only one possible "canonical" representation; non-adjacent form and two's complement are popular choices for that canonical form.
An integer value can be converted back from an RBR using the following formula, where n is the number of digits and d_k is the interpreted value of the k-th digit, with k starting at 0 at the rightmost position: value = Σ_{k=0}^{n−1} d_k · 2^k. The conversion from an RBR to n-bit two's complement can be done in O(log n) time using a prefix adder.[3] Not all redundant representations have the same properties. For example, using the translation table in which the pair 00 means −1, the pairs 01 and 10 mean 0, and the pair 11 means +1, the number 1 can be represented in this RBR in many ways: "01·01·01·11" (0+0+0+1), "01·01·10·11" (0+0+0+1), "01·01·11·00" (0+0+2−1), or "11·00·00·00" (8−4−2−1). Also, for this translation table, flipping all bits (NOT gate) corresponds to finding the additive inverse (multiplication by −1) of the integer represented.[4] In this case d_k ∈ {−1, 0, 1}. Redundant representations are commonly used inside high-speed arithmetic logic units. In particular, a carry-save adder uses a redundant representation.[citation needed] The addition operation in all RBRs is carry-free, which means that the carry does not have to propagate through the full width of the addition unit. In effect, the addition in all RBRs is a constant-time operation. The addition will always take the same amount of time independently of the bit width of the operands. This does not imply that the addition is always faster in an RBR than its two's complement equivalent, but that the addition will eventually be faster in an RBR with increasing bit width, because the two's complement addition unit's delay is proportional to log(n) (where n is the bit width).[5] Addition in an RBR takes constant time because each digit of the result can be calculated independently of the others, implying that each digit of the result can be calculated in parallel.[6] Subtraction is the same as addition except that the additive inverse of the second operand needs to be computed first. For common representations, this can be done on a digit-by-digit basis.
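A small C++ sketch of the weighted-sum decoding just described. The translation table is the one implied by the examples (pair 00 → −1, 01 → 0, 10 → 0, 11 → +1); the function name and vector layout are my own choices.

```cpp
#include <cassert>
#include <vector>

// Decode an RBR value: pairs[k] holds the 2-bit digit at weight 2^k,
// so pairs[0] is the rightmost pair. Table: 00 -> -1, 01 -> 0, 10 -> 0, 11 -> +1.
int rbr_value(const std::vector<int>& pairs) {
    static const int table[4] = {-1, 0, 0, 1};  // indexed by the 2-bit pair
    int value = 0, weight = 1;
    for (int p : pairs) {
        value += table[p & 3] * weight;  // weighted sum of digit values
        weight *= 2;
    }
    return value;
}
```

All four representations of 1 quoted in the text decode to 1, and flipping every bit of a representation negates the decoded value, matching the NOT-gate property.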
Many hardware multipliers internally use Booth encoding, a redundant binary representation. Bitwise logical operations, such as AND, OR, and XOR, are not directly meaningful in redundant representations. While it is possible to do bitwise operations directly on the underlying bits inside an RBR, it is not clear that this is a meaningful operation; there are many ways to represent a value in an RBR, and the value of the result would depend on the representation used. To get the expected results, it is necessary to convert the two operands first to non-redundant representations. Consequently, logical operations are slower in an RBR. More precisely, they take time proportional to log(n) (where n is the number of digits), compared to constant time in two's complement. It is, however, possible to partially convert only the least significant portion of a redundantly represented number to non-redundant form. This allows operations such as masking off the low k bits to be done in O(log k) time.
https://en.wikipedia.org/wiki/Redundant_binary_representation
In computer science, arbitrary-precision arithmetic, also called bignum arithmetic, multiple-precision arithmetic, or sometimes infinite-precision arithmetic, indicates that calculations are performed on numbers whose digits of precision are potentially limited only by the available memory of the host system. This contrasts with the faster fixed-precision arithmetic found in most arithmetic logic unit (ALU) hardware, which typically offers between 8 and 64 bits of precision. Several modern programming languages have built-in support for bignums,[1][2][3][4] and others have libraries available for arbitrary-precision integer and floating-point math. Rather than storing values as a fixed number of bits related to the size of the processor register, these implementations typically use variable-length arrays of digits. Arbitrary precision is used in applications where the speed of arithmetic is not a limiting factor, or where precise results with very large numbers are required. It should not be confused with the symbolic computation provided by many computer algebra systems, which represent numbers by expressions such as π·sin(2), and can thus represent any computable number with infinite precision. A common application is public-key cryptography, whose algorithms commonly employ arithmetic with integers having hundreds of digits.[5][6] Another is in situations where artificial limits and overflows would be inappropriate.
It is also useful for checking the results of fixed-precision calculations, and for determining optimal or near-optimal values for coefficients needed in formulae, for example the √(1/3) that appears in Gaussian integration.[7] Arbitrary-precision arithmetic is also used to compute fundamental mathematical constants such as π to millions or more digits, to analyze the properties of the digit strings,[8] or more generally to investigate the precise behaviour of functions such as the Riemann zeta function, where certain questions are difficult to explore via analytical methods. Another example is in rendering fractal images with an extremely high magnification, such as those found in the Mandelbrot set. Arbitrary-precision arithmetic can also be used to avoid overflow, which is an inherent limitation of fixed-precision arithmetic. Similar to an automobile's odometer display, which may change from 99999 to 00000, a fixed-precision integer may exhibit wraparound if numbers grow too large to represent at the fixed level of precision. Some processors can instead deal with overflow by saturation, which means that if a result would be unrepresentable, it is replaced with the nearest representable value. (With 16-bit unsigned saturation, adding any positive amount to 65535 would yield 65535.) Some processors can generate an exception if an arithmetic result exceeds the available precision. Where necessary, the exception can be caught and recovered from; for instance, the operation could be restarted in software using arbitrary-precision arithmetic. In many cases, the task or the programmer can guarantee that the integer values in a specific application will not grow large enough to cause an overflow. Such guarantees may be based on pragmatic limits: a school attendance program may have a task limit of 4,000 students. A programmer may design the computation so that intermediate results stay within specified precision boundaries.
Some programming languages such as Lisp, Python, Perl, Haskell, Ruby, and Raku use, or have an option to use, arbitrary-precision numbers for all integer arithmetic. Although this reduces performance, it eliminates the possibility of incorrect results (or exceptions) due to simple overflow. It also makes it possible to guarantee that arithmetic results will be the same on all machines, regardless of any particular machine's word size. The exclusive use of arbitrary-precision numbers in a programming language also simplifies the language, because a number is a number and there is no need for multiple types to represent different levels of precision. Arbitrary-precision arithmetic is considerably slower than arithmetic using numbers that fit entirely within processor registers, since the latter are usually implemented in hardware arithmetic whereas the former must be implemented in software. Even if the computer lacks hardware for certain operations (such as integer division, or all floating-point operations) and software is provided instead, it will use number sizes closely related to the available hardware registers: one or two words only. There are exceptions: certain variable-word-length machines of the 1950s and 1960s, notably the IBM 1620, the IBM 1401, and the Honeywell 200 series, could manipulate numbers bound only by available storage, with an extra bit that delimited the value. Numbers can be stored in a fixed-point format, or in a floating-point format as a significand multiplied by an arbitrary exponent. However, since division almost immediately introduces infinitely repeating sequences of digits (such as 4/7 in decimal, or 1/10 in binary), should this possibility arise then either the representation would be truncated at some satisfactory size or else rational numbers would be used: a large integer for the numerator and one for the denominator.
But even with the greatest common divisor divided out, arithmetic with rational numbers can become unwieldy very quickly: 1/99 − 1/100 = 1/9900, and if 1/101 is then added, the result is 10001/999900. The size of arbitrary-precision numbers is limited in practice by the total storage available and by computation time. Numerous algorithms have been developed to efficiently perform arithmetic operations on numbers stored with arbitrary precision. In particular, supposing that N digits are employed, algorithms have been designed to minimize the asymptotic complexity for large N. The simplest algorithms are for addition and subtraction, where one simply adds or subtracts the digits in sequence, carrying as necessary, which yields an O(N) algorithm (see big O notation). Comparison is also very simple: compare the high-order digits (or machine words) until a difference is found; comparing the rest of the digits/words is not necessary. The worst case is Θ(N), but it may complete much faster with operands of similar magnitude. For multiplication, the most straightforward algorithms used for multiplying numbers by hand (as taught in primary school) require Θ(N^2) operations, but multiplication algorithms that achieve O(N log(N) log(log(N))) complexity have been devised, such as the Schönhage–Strassen algorithm, based on fast Fourier transforms, and there are also algorithms with slightly worse complexity but with sometimes superior real-world performance for smaller N; Karatsuba multiplication is such an algorithm. For division, see division algorithm. For a list of algorithms along with complexity estimates, see computational complexity of mathematical operations. For examples in x86 assembly, see external links. In some languages such as REXX and ooRexx, the precision of all calculations must be set before doing a calculation. Other languages, such as Python and Ruby, extend the precision automatically to prevent overflow.
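The O(N) digit-sequence addition just described can be sketched in C++ as follows (little-endian base-10 digit vectors; the function name is illustrative):

```cpp
#include <cassert>
#include <vector>

// Schoolbook addition: add the digits in sequence, carrying as necessary.
// a and b are magnitudes stored least-significant digit first, in base 10.
std::vector<int> big_add(const std::vector<int>& a, const std::vector<int>& b) {
    std::vector<int> sum;
    int carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        int d = carry;
        if (i < a.size()) d += a[i];
        if (i < b.size()) d += b[i];
        sum.push_back(d % 10);   // one digit of the result
        carry = d / 10;          // at most 1 when adding two digits
    }
    return sum;
}
```

Each loop iteration touches one digit position, so the running time is linear in the number of digits, as stated.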
The calculation of factorials can easily produce very large numbers. This is not a problem for their usage in many formulas (such as Taylor series) because they appear along with other terms, so that, given careful attention to the order of evaluation, intermediate calculation values are not troublesome. If approximate values of factorial numbers are desired, Stirling's approximation gives good results using floating-point arithmetic. The largest representable value for a fixed-size integer variable may be exceeded even for relatively small arguments, as shown in the table below. Even floating-point numbers are soon outranged, so it may help to recast the calculations in terms of the logarithm of the number. But if exact values for large factorials are desired, then special software is required, as in the pseudocode that follows, which implements the classic algorithm to calculate 1, 1×2, 1×2×3, 1×2×3×4, etc., the successive factorial numbers. With the example in view, a number of details can be discussed. The most important is the choice of the representation of the big number. In this case, only integer values are required for digits, so an array of fixed-width integers is adequate. It is convenient to have successive elements of the array represent higher powers of the base. The second most important decision is the choice of the base of arithmetic, here ten. There are many considerations. The scratchpad variable d must be able to hold the result of a single-digit multiply plus the carry from the prior digit's multiply. In base ten, a sixteen-bit integer is certainly adequate as it allows up to 32767. However, this example cheats, in that the value of n is not itself limited to a single digit. This has the consequence that the method will fail for n > 3200 or so. In a more general implementation, n would also use a multi-digit representation.
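A C++ sketch of the classic algorithm as the text describes it: the big number is held as an array of base-10 digits, least significant first, and is multiplied in place by each successive n, with the scratch variable d holding a single-digit product plus the carry. The function name big_factorial and the string output format are my own choices.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Compute n! exactly as a decimal string, keeping the running product as a
// vector of base-10 digits (digit[0] is the least significant).
std::string big_factorial(int n) {
    std::vector<int> digit{1};               // represents the value 1
    for (int f = 2; f <= n; ++f) {
        int carry = 0;
        for (std::size_t i = 0; i < digit.size(); ++i) {
            int d = digit[i] * f + carry;    // single digit times f, plus carry
            digit[i] = d % 10;
            carry = d / 10;
        }
        while (carry) {                      // the final carry may span digits
            digit.push_back(carry % 10);
            carry /= 10;
        }
    }
    std::string out;                         // highest-order digit first
    for (auto it = digit.rbegin(); it != digit.rend(); ++it)
        out += static_cast<char>('0' + *it);
    return out;
}
```

The trailing while loop handles the point made below: after the multi-digit multiply, the last carry may need to propagate into several higher-order digits, not just one.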
A second consequence of the shortcut is that after the multi-digit multiply has been completed, the last value of carry may need to be carried into multiple higher-order digits, not just one. There is also the issue of printing the result in base ten, for human consideration. Because the base is already ten, the result could be shown simply by printing the successive digits of the array digit, but they would appear with the highest-order digit last (so that 123 would appear as "321"). The whole array could be printed in reverse order, but that would present the number with leading zeroes ("00000...000123"), which may not be appreciated, so this implementation builds the representation in a space-padded text variable and then prints that. The first few results (with spacing every fifth digit and annotation added here) are: This implementation could make more effective use of the computer's built-in arithmetic. A simple escalation would be to use base 100 (with corresponding changes to the translation process for output), or, with sufficiently wide computer variables (such as 32-bit integers), we could use larger bases, such as 10,000. Working in a power-of-2 base closer to the computer's built-in integer operations offers advantages, although conversion to a decimal base for output becomes more difficult. On typical modern computers, additions and multiplications take constant time independent of the values of the operands (so long as the operands fit in single machine words), so there are large gains in packing as much of a bignumber as possible into each element of the digit array. The computer may also offer facilities for splitting a product into a digit and carry without requiring the two operations of mod and div as in the example, and nearly all arithmetic units provide a carry flag which can be exploited in multiple-precision addition and subtraction.
This sort of detail is the grist of machine-code programmers, and a suitable assembly-language bignumber routine can run faster than the result of the compilation of a high-level language, which does not provide direct access to such facilities but instead maps the high-level statements to its model of the target machine using an optimizing compiler. For a single-digit multiply the working variables must be able to hold the value (base−1)^2 + carry, where the maximum value of the carry is (base−1). Similarly, the variables used to index the digit array are themselves limited in width. A simple way to extend the indices would be to deal with the bignumber's digits in blocks of some convenient size, so that the addressing would be via (block i, digit j) where i and j would be small integers; or, one could escalate to employing bignumber techniques for the indexing variables. Ultimately, machine storage capacity and execution time impose limits on the problem size. IBM's first business computer, the IBM 702 (a vacuum-tube machine) of the mid-1950s, implemented integer arithmetic entirely in hardware on digit strings of any length from 1 to 511 digits. The earliest widespread software implementation of arbitrary-precision arithmetic was probably that in Maclisp. Later, around 1980, the operating systems VAX/VMS and VM/CMS offered bignum facilities as a collection of string functions in the one case and in the languages EXEC 2 and REXX in the other. An early widespread implementation was available via the IBM 1620 of 1959–1970. The 1620 was a decimal-digit machine which used discrete transistors, yet it had hardware (that used lookup tables) to perform integer arithmetic on digit strings of a length that could be from two to whatever memory was available. For floating-point arithmetic, the mantissa was restricted to a hundred digits or fewer, and the exponent was restricted to two digits only.
The largest memory supplied offered 60 000 digits; however, Fortran compilers for the 1620 settled on fixed sizes such as 10, though the size could be specified on a control card if the default was not satisfactory.

Arbitrary-precision arithmetic in most computer software is implemented by calling an external library that provides data types and subroutines to store numbers with the requested precision and to perform computations. Different libraries have different ways of representing arbitrary-precision numbers: some libraries work only with integer numbers, others store floating point numbers in a variety of bases (decimal or binary powers). Rather than representing a number as a single value, some store numbers as a numerator/denominator pair (rationals) and some can fully represent computable numbers, though only up to some storage limit. Fundamentally, Turing machines cannot represent all real numbers, as the cardinality of ℝ exceeds the cardinality of ℤ.
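As one concrete illustration of the numerator/denominator approach mentioned above, Python's standard `fractions` module stores rationals exactly on top of arbitrary-precision integers (used here as a stand-in for the external libraries the text describes):

```python
from fractions import Fraction

# Rationals avoid the rounding that binary floating point forces:
# 1/3 has no exact float representation, but is exact as a pair of
# arbitrary-precision integers.
a = Fraction(1, 3)
b = Fraction(2, 3)
print(a + b)                     # exactly 1
print(a.numerator, a.denominator)

# The integers in the pair can grow without bound:
big = Fraction(10**50 + 1, 10**50)
print(big - 1)                   # exactly 1/10**50
```

The storage-limit caveat in the text applies here too: the numerator and denominator can grow with every operation, so memory, not precision, becomes the constraint.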
https://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
C99 (previously C9X, formally ISO/IEC 9899:1999) is a past version of the C programming language open standard.[1] It extends the previous version (C90) with new features for the language and the standard library, and helps implementations make better use of available computer hardware, such as IEEE 754-1985 floating-point arithmetic, and compiler technology.[2] The C11 version of the C programming language standard, published in 2011, updates C99.

After ANSI produced the official standard for the C programming language in 1989, which became an international standard in 1990, the C language specification remained relatively static for some time, while C++ continued to evolve, largely during its own standardization effort. Normative Amendment 1 created a new standard for C in 1995, but only to correct some details of the 1989 standard and to add more extensive support for international character sets. The standard underwent further revision in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which was adopted as an ANSI standard in May 2000. The language defined by that version of the standard is commonly referred to as "C99". The international C standard is maintained by the working group ISO/IEC JTC1/SC22/WG14.

C99 is, for the most part, backward compatible with C89, but it is stricter in some ways.[3] In particular, a declaration that lacks a type specifier no longer has int implicitly assumed. The C standards committee decided that it was of more value for compilers to diagnose inadvertent omission of the type specifier than to silently process legacy code that relied on implicit int. In practice, compilers are likely to display a warning, then assume int and continue translating the program.

C99 introduced several new features, many of which had already been implemented as extensions in several compilers:[4] Parts of the C99 standard are included in the current version of the C++ standard, including integer types, headers, and library functions.
Variable-length arrays are not among these included parts because C++'s Standard Template Library already includes similar functionality. A major feature of C99 is its numerics support, and in particular its support for access to the features of IEEE 754-1985 (also known as IEC 60559) floating-point hardware present in the vast majority of modern processors (defined in "Annex F IEC 60559 floating-point arithmetic"). Platforms without IEEE 754 hardware can also implement it in software.[2]

On platforms with IEEE 754 floating point: FLT_EVAL_METHOD == 2 tends to limit the risk of rounding errors affecting numerically unstable expressions (see IEEE 754 design rationale) and is the designed default method for x87 hardware, but yields unintuitive behavior for the unwary user;[9] FLT_EVAL_METHOD == 1 was the default evaluation method originally used in K&R C, which promoted all floats to double in expressions; and FLT_EVAL_METHOD == 0 is also commonly used and specifies a strict "evaluate to type" of the operands. (For gcc, FLT_EVAL_METHOD == 2 is the default on 32-bit x86, and FLT_EVAL_METHOD == 0 is the default on 64-bit x86-64, but FLT_EVAL_METHOD == 2 can be specified on x86-64 with the option -mfpmath=387.) Before C99, compilers could round intermediate results inconsistently, especially when using x87 floating-point hardware, leading to compiler-specific behaviour;[10] such inconsistencies are not permitted in compilers conforming to C99 (annex F).

The following annotated example C99 code for computing a continued fraction function demonstrates the main features: Footnotes:

A standard macro __STDC_VERSION__ is defined with value 199901L to indicate that C99 support is available. As with the __STDC__ macro for C90, __STDC_VERSION__ can be used to write code that will compile differently for C90 and C99 compilers, as in this example that ensures that inline is available in either case (by replacing it with static in C90 to avoid linker errors).
Most C compilers provide support for at least some of the features introduced in C99. Historically, Microsoft has been slow to implement new C features in their Visual C++ tools, instead focusing mainly on supporting developments in the C++ standards.[13] However, with the introduction of Visual C++ 2013 Microsoft implemented a limited subset of C99, which was expanded in Visual C++ 2015.[14]

Since ratification of the 1999 C standard, the standards working group prepared technical reports specifying improved support for embedded processing, additional character data types (Unicode support), and library functions with improved bounds checking. Work continues on technical reports addressing decimal floating point, additional mathematical special functions, and additional dynamic memory allocation functions. The C and C++ standards committees have been collaborating on specifications for threaded programming. The next revision of the C standard, C11, was ratified in 2011.[41] The C standards committee adopted guidelines that limited the adoption of new features that had not been tested by existing implementations. Much effort went into developing a memory model, in order to clarify sequence points and to support threaded programming.
https://en.wikipedia.org/wiki/C99#IEEE_754_floating-point_support
In mathematics, computable numbers are the real numbers that can be computed to within any desired precision by a finite, terminating algorithm. They are also known as the recursive numbers,[1] effective numbers,[2] computable reals,[3] or recursive reals.[4] The concept of a computable real number was introduced by Émile Borel in 1912, using the intuitive notion of computability available at the time.[5] Equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus as the formal representation of algorithms. The computable numbers form a real closed field and can be used in the place of real numbers for many, but not all, mathematical purposes.[citation needed]

In the following, Marvin Minsky defines the numbers to be computed in a manner similar to those defined by Alan Turing in 1936;[6] i.e., as "sequences of digits interpreted as decimal fractions" between 0 and 1:[7]

A computable number [is] one for which there is a Turing machine which, given n on its initial tape, terminates with the nth digit of that number [encoded on its tape].

The key notions in the definition are (1) that some n is specified at the start, and (2) that for any n the computation only takes a finite number of steps, after which the machine produces the desired output and terminates. An alternate form of (2) – the machine successively prints all n of the digits on its tape, halting after printing the nth – emphasizes Minsky's observation: (3) that by use of a Turing machine, a finite definition – in the form of the machine's state table – is being used to define what is a potentially infinite string of decimal digits.

This is, however, not the modern definition, which only requires the result to be accurate to within any given accuracy. The informal definition above is subject to a rounding problem called the table-maker's dilemma, whereas the modern definition is not.
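Turing-style digit computation can be illustrated for a specific number where the table-maker's dilemma happens not to arise: since √2 is irrational, its decimal expansion never stabilizes to all 0s or all 9s, so each digit is determined by exact integer arithmetic. The function name below is a hypothetical illustration, not part of the source:

```python
from math import isqrt

def sqrt2_digit(n):
    """Return the nth digit after the decimal point of sqrt(2).

    floor(sqrt(2) * 10**n) equals isqrt(2 * 10**(2*n)), computed exactly
    with arbitrary-precision integers, so the procedure terminates for
    every n -- matching the Turing/Minsky definition quoted above.
    """
    return isqrt(2 * 10 ** (2 * n)) % 10
```

For instance, the first eight digits produced are 4, 1, 4, 2, 1, 3, 5, 6, matching √2 = 1.41421356…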
A real number a is computable if it can be approximated by some computable function f: ℕ → ℤ in the following manner: given any positive integer n, the function produces an integer f(n) such that: A complex number is called computable if its real and imaginary parts are computable. There are two similar definitions that are equivalent: There is another equivalent definition of computable numbers via computable Dedekind cuts. A computable Dedekind cut is a computable function D which, when provided with a rational number r as input, returns D(r) = true or D(r) = false, satisfying the following conditions: An example is given by a program D that defines the cube root of 3. Assuming q > 0, this is defined by: A real number is computable if and only if there is a computable Dedekind cut D corresponding to it. The function D is unique for each computable number (although of course two different programs may provide the same function).

Assigning a Gödel number to each Turing machine definition produces a subset S of the natural numbers corresponding to the computable numbers and identifies a surjection from S to the computable numbers. There are only countably many Turing machines, showing that the computable numbers are subcountable. The set S of these Gödel numbers, however, is not computably enumerable (and consequently, neither are subsets of S that are defined in terms of it). This is because there is no algorithm to determine which Gödel numbers correspond to Turing machines that produce computable reals. In order to produce a computable real, a Turing machine must compute a total function, but the corresponding decision problem is in Turing degree 0′′.
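The cube-root-of-3 cut mentioned above is easy to realize as an actual program: for a positive rational q, cubing q exactly and comparing with 3 involves no rounding at all. This is a sketch of that cut, using exact rational arithmetic:

```python
from fractions import Fraction

def D(q):
    """Computable Dedekind cut for the cube root of 3.

    Returns True if the positive rational q lies below 3**(1/3),
    False otherwise. q**3 is computed exactly as a rational, so the
    comparison with 3 is decided with no rounding.
    """
    q = Fraction(q)
    return q ** 3 < 3
```

Since 3^(1/3) ≈ 1.442, the cut reports `D(Fraction(14, 10)) == True` and `D(Fraction(15, 10)) == False`; irrationality of 3^(1/3) guarantees the comparison is never a tie.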
Consequently, there is no surjective computable function from the natural numbers to the set S of machines representing computable reals, and Cantor's diagonal argument cannot be used constructively to demonstrate uncountably many of them. While the set of real numbers is uncountable, the set of computable numbers is classically countable and thus almost all real numbers are not computable. Here, for any given computable number x, the well-ordering principle provides that there is a minimal element in S which corresponds to x, and therefore there exists a subset consisting of the minimal elements, on which the map is a bijection. The inverse of this bijection is an injection into the natural numbers of the computable numbers, proving that they are countable. But, again, this subset is not computable, even though the computable reals are themselves ordered.

The arithmetical operations on computable numbers are themselves computable in the sense that whenever real numbers a and b are computable then the following real numbers are also computable: a + b, a − b, ab, and a/b if b is nonzero. These operations are actually uniformly computable; for example, there is a Turing machine which on input (A, B, ε) produces output r, where A is the description of a Turing machine approximating a, B is the description of a Turing machine approximating b, and r is an ε approximation of a + b. The fact that computable real numbers form a field was first proved by Henry Gordon Rice in 1954.[8] Computable reals, however, do not form a computable field, because the definition of a computable field requires effective equality.

The order relation on the computable numbers is not computable. Let A be the description of a Turing machine approximating the number a.
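The uniform computability of addition can be sketched directly in terms of approximation functions. Below, each number x is given by a function f with |f(n)/n − x| ≤ 1/n; the combinator evaluates both inputs at precision 4n and rounds, which keeps the total error within 1/n. This particular construction (the factor 4 and the rounding step) is an illustrative choice, not the one in the source:

```python
from fractions import Fraction
from math import isqrt

def add_approx(fa, fb):
    """Given approximation functions fa, fb with |f(n)/n - x| <= 1/n,
    return an approximation function for the sum a + b."""
    def fc(n):
        # fa(4n)/(4n) and fb(4n)/(4n) are each within 1/(4n) of a and b,
        # so their sum is within 1/(2n) of a + b. Dividing by 4 scales the
        # target to n*(a+b); rounding to the nearest integer adds at most
        # 1/2 absolute error, i.e. 1/(2n) after dividing by n, so the
        # total error stays within 1/n.
        s = Fraction(fa(4 * n) + fb(4 * n), 4)
        return round(s)
    return fc

# Example: f(n) = floor(n * sqrt(2)) approximates sqrt(2) within 1/n.
fa = lambda n: isqrt(2 * n * n)
f_sum = add_approx(fa, fa)   # approximates 2*sqrt(2) = 2.8284271...
```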
Then there is no Turing machine which on input A outputs "YES" if a > 0 and "NO" if a ≤ 0. To see why, suppose the machine described by A keeps outputting 0 as ε approximations. It is not clear how long to wait before deciding that the machine will never output an approximation which forces a to be positive. Thus the machine will eventually have to guess that the number will equal 0, in order to produce an output; the sequence may later become different from 0. This idea can be used to show that the machine is incorrect on some sequences if it computes a total function. A similar problem occurs when the computable reals are represented as Dedekind cuts. The same holds for the equality relation: the equality test is not computable.

While the full order relation is not computable, the restriction of it to pairs of unequal numbers is computable. That is, there is a program that takes as input two Turing machines A and B approximating numbers a and b, where a ≠ b, and outputs whether a < b or a > b. It is sufficient to use ε-approximations where ε < |b − a|/2, so by taking increasingly small ε (approaching 0), one eventually can decide whether a < b or a > b.

The computable real numbers do not share all the properties of the real numbers used in analysis. For example, the least upper bound of a bounded increasing computable sequence of computable real numbers need not be a computable real number.[9] A sequence with this property is known as a Specker sequence, as the first construction is due to Ernst Specker in 1949.[10] Despite the existence of counterexamples such as these, parts of calculus and real analysis can be developed in the field of computable numbers, leading to the study of computable analysis.
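The procedure for comparing unequal numbers described above can be written as a loop over shrinking ε. Here each number is given by an approximation function f with |f(n)/n − x| ≤ 1/n (a sketch of the argument, with illustrative names; the source works with Turing machine descriptions instead):

```python
from math import isqrt

def compare_unequal(fa, fb):
    """Given approximation functions for reals a != b, return True iff a < b.

    At precision n, n*a lies in [fa(n)-1, fa(n)+1] and likewise for b,
    so each value is pinned to an interval of width 2/n. Once the two
    intervals are disjoint the order is decided. The loop terminates
    only because a != b; for a == b it would run forever, which is
    exactly why full order (and equality) is not computable.
    """
    n = 1
    while True:
        lo_a, hi_a = fa(n) - 1, fa(n) + 1
        lo_b, hi_b = fb(n) - 1, fb(n) + 1
        if hi_a < lo_b:
            return True    # a < b
        if hi_b < lo_a:
            return False   # a > b
        n *= 2             # intervals still overlap; halve epsilon

# Example inputs: sqrt(2) = 1.4142... and 3/2.
fa = lambda n: isqrt(2 * n * n)   # floor(n * sqrt(2))
fb = lambda n: (3 * n) // 2       # floor(n * 3/2)
```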
Every computable number is arithmetically definable, but not vice versa. There are many arithmetically definable, noncomputable real numbers, including: Both of these examples in fact define an infinite set of definable, uncomputable numbers, one for each universal Turing machine. A real number is computable if and only if the set of natural numbers it represents (when written in binary and viewed as a characteristic function) is computable. The set of computable real numbers (as well as every countable, densely ordered subset of computable reals without ends) is order-isomorphic to the set of rational numbers.

Turing's original paper defined computable numbers as follows: A real number is computable if its digit sequence can be produced by some algorithm or Turing machine. The algorithm takes an integer n ≥ 1 as input and produces the n-th digit of the real number's decimal expansion as output. (The decimal expansion of a only refers to the digits following the decimal point.) Turing was aware that this definition is equivalent to the ε-approximation definition given above. The argument proceeds as follows: if a number is computable in the Turing sense, then it is also computable in the ε sense: if n > log₁₀(1/ε), then the first n digits of the decimal expansion for a provide an ε approximation of a. For the converse, we pick an ε-computable real number a and generate increasingly precise approximations until the nth digit after the decimal point is certain. This always generates a decimal expansion equal to a, but it may improperly end in an infinite sequence of 9's, in which case it must have a finite (and thus computable) proper decimal expansion.
Unless certain topological properties of the real numbers are relevant, it is often more convenient to deal with elements of 2^ω (total 0,1-valued functions) instead of real numbers in [0, 1]. The members of 2^ω can be identified with binary decimal expansions, but since the decimal expansions .d₁d₂…dₙ0111… and .d₁d₂…dₙ1000… denote the same real number, the interval [0, 1] can only be bijectively (and homeomorphically under the subset topology) identified with the subset of 2^ω not ending in all 1's.

Note that this property of decimal expansions means that it is impossible to effectively identify the computable real numbers defined in terms of a decimal expansion with those defined in the ε-approximation sense. Hirst has shown that there is no algorithm which takes as input the description of a Turing machine which produces ε approximations for the computable number a, and produces as output a Turing machine which enumerates the digits of a in the sense of Turing's definition.[11] Similarly, it means that the arithmetic operations on the computable reals are not effective on their decimal representations as when adding decimal numbers. In order to produce one digit, it may be necessary to look arbitrarily far to the right to determine if there is a carry to the current location. This lack of uniformity is one reason why the contemporary definition of computable numbers uses ε approximations rather than decimal expansions.

However, from a computability-theoretic or measure-theoretic perspective, the two structures 2^ω and [0, 1] are essentially identical. Thus, computability theorists often refer to members of 2^ω as reals.
While 2^ω is totally disconnected, for questions about Π⁰₁ classes or randomness it is easier to work in 2^ω. Elements of ω^ω are sometimes called reals as well, and though it contains a homeomorphic image of ℝ, ω^ω isn't even locally compact (in addition to being totally disconnected). This leads to genuine differences in the computational properties. For instance, the x ∈ ℝ satisfying ∀(n ∈ ω) φ(x, n), with φ(x, n) quantifier-free, must be computable, while the unique x ∈ ω^ω satisfying a universal formula may have an arbitrarily high position in the hyperarithmetic hierarchy.

The computable numbers include the specific real numbers which appear in practice, including all real algebraic numbers, as well as e, π, and many other transcendental numbers. Though the computable reals exhaust those reals we can calculate or approximate, the assumption that all reals are computable leads to substantially different conclusions about the real numbers. The question naturally arises of whether it is possible to dispose of the full set of reals and use computable numbers for all of mathematics. This idea is appealing from a constructivist point of view, and has been pursued by the Russian school of constructive mathematics.[12]

To actually develop analysis over computable numbers, some care must be taken. For example, if one uses the classical definition of a sequence, the set of computable numbers is not closed under the basic operation of taking the supremum of a bounded sequence (for example, consider a Specker sequence, see the section above). This difficulty is addressed by considering only sequences which have a computable modulus of convergence.
The resulting mathematical theory is called computable analysis. Computer packages representing real numbers as programs computing approximations have been proposed as early as 1985, under the name "exact arithmetic".[13] Modern examples include the CoRN library (Coq),[14] and the RealLib package (C++).[15] A related line of work is based on taking a real RAM program and running it with rational or floating-point numbers of sufficient precision, such as the iRRAM package.[16]
https://en.wikipedia.org/wiki/Computable_number
A coprocessor is a computer processor used to supplement the functions of the primary processor (the CPU). Operations performed by the coprocessor may be floating-point arithmetic, graphics, signal processing, string processing, cryptography or I/O interfacing with peripheral devices. By offloading processor-intensive tasks from the main processor, coprocessors can accelerate system performance. Coprocessors allow a line of computers to be customized, so that customers who do not need the extra performance do not need to pay for it.

Coprocessors vary in their degree of autonomy. Some (such as FPUs) rely on direct control via coprocessor instructions, embedded in the CPU's instruction stream. Others are independent processors in their own right, capable of working asynchronously; they are still not optimized for general-purpose code, or they are incapable of it due to a limited instruction set focused on accelerating specific tasks. It is common for these to be driven by direct memory access (DMA), with the host processor (a CPU) building a command list. The PlayStation 2's Emotion Engine contained an unusual DSP-like SIMD vector unit capable of both modes of operation.

To make the best use of mainframe computer processor time, input/output tasks were delegated to separate systems called Channel I/O. The mainframe would not require any I/O processing at all; instead it would just set parameters for an input or output operation and then signal the channel processor to carry out the whole of the operation. By dedicating relatively simple sub-processors to handle time-consuming I/O formatting and processing, overall system performance was improved.

Coprocessors for floating-point arithmetic first appeared in desktop computers in the 1970s and became common throughout the 1980s and into the early 1990s. Early 8-bit and 16-bit processors used software to carry out floating-point arithmetic operations. Where a coprocessor was supported, floating-point calculations could be carried out many times faster.
Math coprocessors were popular purchases for users of computer-aided design (CAD) software and scientific and engineering calculations. Some floating-point units, such as the AMD 9511, Intel 8231/8232 and Weitek FPUs, were treated as peripheral devices, while others such as the Intel 8087, Motorola 68881 and National 32081 were more closely integrated with the CPU. Another form of coprocessor was a video display coprocessor, as used in the Atari 8-bit computers, TI-99/4A, and MSX home computers, where they were called "Video Display Controllers". The Amiga custom chipset includes such a unit known as the Copper, as well as a blitter for accelerating bitmap manipulation in memory.

As microprocessors developed, the cost of integrating the floating-point arithmetic functions into the processor declined. High processor speeds also made a closely integrated coprocessor difficult to implement. Separately packaged mathematics coprocessors are now uncommon in desktop computers. The demand for a dedicated graphics coprocessor has grown, however, particularly due to the increasing demand for realistic 3D graphics in computer games.

The original IBM PC included a socket for the Intel 8087 floating-point coprocessor (aka FPU), which was a popular option for people using the PC for computer-aided design or mathematics-intensive calculations. In that architecture, the coprocessor speeds up floating-point arithmetic on the order of fiftyfold. Users that only used the PC for word processing, for example, saved the high cost of the coprocessor, which would not have accelerated performance of text manipulation operations. The 8087 was tightly integrated with the 8086/8088 and responded to floating-point machine code operation codes inserted in the 8088 instruction stream. An 8088 processor without an 8087 could not interpret these instructions, requiring separate versions of programs for FPU and non-FPU systems, or at least a test at run time to detect the FPU and select appropriate mathematical library functions.
Another coprocessor for the 8086/8088 central processor was the 8089 input/output coprocessor. It used the same programming technique as the 8087 for input/output operations, such as transfer of data from memory to a peripheral device, thereby reducing the load on the CPU. But IBM did not use it in the IBM PC design, and Intel stopped development of this type of coprocessor. The Intel 80386 microprocessor used an optional "math" coprocessor (the 80387) to perform floating-point operations directly in hardware. The Intel 80486DX processor included floating-point hardware on the chip. Intel released a cost-reduced processor, the 80486SX, that had no floating-point hardware, and also sold an 80487SX coprocessor that essentially disabled the main processor when installed, since the 80487SX was a complete 80486DX with a different set of pin connections.[1]

Intel processors later than the 80486 integrated floating-point hardware on the main processor chip; the advances in integration eliminated the cost advantage of selling the floating-point processor as an optional element. It would be very difficult to adapt circuit-board techniques adequate at 75 MHz processor speed to meet the time-delay, power-consumption, and radio-frequency interference standards required at gigahertz-range clock speeds. These on-chip floating-point processors are still referred to as coprocessors because they operate in parallel with the main CPU.

During the era of 8- and 16-bit desktop computers another common source of floating-point coprocessors was Weitek. These coprocessors had a different instruction set from the Intel coprocessors, and used a different socket, which not all motherboards supported.
The Weitek processors did not provide transcendental mathematics functions (for example, trigonometric functions) like the Intel x87 family, and required specific software libraries to support their functions.[2] The Motorola 68000 family had the 68881/68882 coprocessors, which provided similar floating-point speed acceleration as for the Intel processors. Computers using the 68000 family but not equipped with the hardware floating-point processor could trap and emulate the floating-point instructions in software, which, although slower, allowed one binary version of the program to be distributed for both cases. The 68451 memory-management coprocessor was designed to work with the 68020 processor.[3]

As of 2001, dedicated graphics processing units (GPUs) in the form of graphics cards are commonplace. Certain models of sound cards have been fitted with dedicated processors providing digital multichannel mixing and real-time DSP effects as early as 1990 to 1994 (the Gravis Ultrasound and Sound Blaster AWE32 being typical examples), while the Sound Blaster Audigy and the Sound Blaster X-Fi are more recent examples.

In 2006, AGEIA announced an add-in card for computers that it called the PhysX PPU. PhysX was designed to perform complex physics computations so that the CPU and GPU do not have to perform these time-consuming calculations. It was designed for video games, although other mathematical uses could theoretically be developed for it. In 2008, Nvidia purchased the company and phased out the PhysX card line; the functionality was added through software allowing their GPUs to render PhysX on cores normally used for graphics processing, using their Nvidia PhysX engine software. In 2006, BigFoot Systems unveiled a PCI add-in card they christened the KillerNIC, which ran its own special Linux kernel on a FreeScale PowerQUICC running at 400 MHz, calling the FreeScale chip a Network Processing Unit or NPU.
The SpursEngine is a media-oriented add-in card with a coprocessor based on the Cell microarchitecture. The SPUs are themselves vector coprocessors. In 2008, the Khronos Group released OpenCL with the aim of supporting general-purpose CPUs, ATI/AMD and Nvidia GPUs (and other accelerators) with a single common language for compute kernels.

In the 2010s, some mobile computation devices implemented the sensor hub as a coprocessor. Examples of coprocessors used for handling sensor integration in mobile devices include the Apple M7 and M8 motion coprocessors, the Qualcomm Snapdragon Sensor Core and Qualcomm Hexagon, and the Holographic Processing Unit for the Microsoft HoloLens. In 2012, Intel announced the Intel Xeon Phi coprocessor.[4] As of 2016, various companies are developing coprocessors aimed at accelerating artificial neural networks for vision and other cognitive tasks (e.g. vision processing units, TrueNorth, and Zeroth), and as of 2018, such AI chips are in smartphones from Apple and several Android phone vendors.

Over time, CPUs have tended to grow to absorb the functionality of the most popular coprocessors. FPUs are now considered an integral part of a processor's main pipeline; SIMD units gave multimedia its acceleration, taking over the role of various DSP accelerator cards; and even GPUs have become integrated on CPU dies. Nonetheless, specialized units remain popular away from desktop machines and for additional power, and they allow continued evolution independently of the main processor product lines.
https://en.wikipedia.org/wiki/Coprocessor
Decimal floating-point (DFP) arithmetic refers to both a representation and operations on decimal floating-point numbers. Working directly with decimal (base-10) fractions can avoid the rounding errors that otherwise typically occur when converting between decimal fractions (common in human-entered data, such as measurements or financial information) and binary (base-2) fractions.

The advantage of decimal floating-point representation over decimal fixed-point and integer representation is that it supports a much wider range of values. For example, while a fixed-point representation that allocates 8 decimal digits and 2 decimal places can represent the numbers 123456.78, 8765.43, 123.00, and so on, a floating-point representation with 8 decimal digits could also represent 1.2345678, 1234567.8, 0.000012345678, 12345678000000000, and so on. This wider range can dramatically slow the accumulation of rounding errors during successive calculations; for example, the Kahan summation algorithm can be used in floating point to add many numbers with no asymptotic accumulation of rounding error.

Early mechanical uses of decimal floating point are evident in the abacus, slide rule, the Smallwood calculator, and some other calculators that support entries in scientific notation. In the case of the mechanical calculators, the exponent is often treated as side information that is accounted for separately.
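The decimal-versus-binary rounding contrast described above can be seen directly with Python's decimal module (one of the software implementations this article mentions later):

```python
from decimal import Decimal

# In binary floating point, 0.1 has no exact representation, so
# accumulated tenths drift away from the exact decimal answer:
assert 0.1 + 0.1 + 0.1 != 0.3

# A decimal floating-point type keeps human-entered fractions exact:
assert Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3")

# Financial-style arithmetic stays exact as long as it stays decimal:
price, qty = Decimal("1.10"), 3
assert price * qty == Decimal("3.30")
```

Note the values are constructed from strings: `Decimal(0.1)` would capture the binary rounding error the decimal representation is meant to avoid.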
The IBM 650 computer supported an 8-digit decimal floating-point format in 1953.[1] The otherwise binary Wang VS machine supported a 64-bit decimal floating-point format in 1977.[2] The Motorola 68881 supported a format with 17 digits of mantissa and 3 of exponent in 1984, with the floating-point support library for the Motorola 68040 processor providing a compatible 96-bit decimal floating-point storage format in 1990.[2]

Some computer languages have implementations of decimal floating-point arithmetic, including PL/I, .NET,[3] emacs with calc, and Python's decimal module.[4] In 1987, the IEEE released IEEE 854, a standard for computing with decimal floating point, which lacked a specification for how floating-point data should be encoded for interchange with other systems. This was subsequently addressed in IEEE 754-2008, which standardized the encoding of decimal floating-point data, albeit with two different alternative methods. IBM POWER6 and newer POWER processors include DFP in hardware, as does the IBM System z9[5] (and later zSeries machines). SilMinds offers SilAx, a configurable vector DFP coprocessor.[6] IEEE 754-2008 defines this in more detail. Fujitsu also has 64-bit Sparc processors with DFP in hardware.[7][2]

The IEEE 754-2008 standard defines 32-, 64- and 128-bit decimal floating-point representations. Like the binary floating-point formats, the number is divided into a sign, an exponent, and a significand. Unlike binary floating-point, numbers are not necessarily normalized; values with few significant digits have multiple possible representations: 1×10² = 0.1×10³ = 0.01×10⁴, etc. When the significand is zero, the exponent can be any value at all. The exponent ranges were chosen so that the range available to normalized values is approximately symmetrical. Since this cannot be done exactly with an even number of possible exponent values, the extra value was given to Emax. Two different representations are defined: Both alternatives provide exactly the same range of representable values.
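The multiple representations of a single value ("cohorts") described above are visible in Python's decimal module, which preserves the exponent a value was created with:

```python
from decimal import Decimal

a = Decimal("1E+2")   # significand 1,   exponent +2
b = Decimal("100")    # significand 100, exponent 0

assert a == b                         # the same numeric value...
assert a.as_tuple() != b.as_tuple()   # ...in two different representations

print(a.as_tuple())   # DecimalTuple(sign=0, digits=(1,), exponent=2)
print(b.as_tuple())   # DecimalTuple(sign=0, digits=(1, 0, 0), exponent=0)
```

This is why, for instance, `Decimal("1.00") + Decimal("1.00")` prints as `2.00` rather than `2`: the unnormalized representation carries the number of significant digits through the arithmetic.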
The most significant two bits of the exponent are limited to the range 0−2, and the most significant 4 bits of the significand are limited to the range 0−9. The 30 possible combinations are encoded in a 5-bit field, along with special forms for infinity and NaN. If the most significant 4 bits of the significand are between 0 and 7, the encoded value begins as follows: If the leading 4 bits of the significand are binary 1000 or 1001 (decimal 8 or 9), the number begins as follows: The leading bit (s in the above) is a sign bit, and the following bits (xxx in the above) encode the additional exponent bits and the remainder of the most significant digit, but the details vary depending on the encoding alternative used. The final combinations are used for infinities and NaNs, and are the same for both alternative encodings: In the latter cases, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to NaNs by filling it with a single byte value.

This format uses a binary significand from 0 to 10ᵖ − 1. For example, the Decimal32 significand can be up to 10⁷ − 1 = 9999999 = 98967F₁₆ = 100110001001011001111111₂. While the encoding can represent larger significands, they are illegal and the standard requires implementations to treat them as 0 if encountered on input. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7 (0000₂ to 0111₂), or higher (1000₂ or 1001₂). If the 2 bits after the sign bit are "00", "01", or "10", then the exponent field consists of the 8 bits following the sign bit (the 2 bits mentioned plus 6 bits of "exponent continuation field"), and the significand is the remaining 23 bits, with an implicit leading 0 bit, shown here in parentheses: This includes subnormal numbers, where the leading significand digit is 0.
If the 2 bits after the sign bit are "11", then the 8-bit exponent field is shifted 2 bits to the right (coming after both the sign bit and the "11" bits), and the represented significand is in the remaining 21 bits. In this case there is an implicit (that is, not stored) leading 3-bit sequence "100" in the true significand: The "11" 2-bit sequence after the sign bit indicates that there is an implicit "100" 3-bit prefix to the significand. Note that the leading bits of the significand field do not encode the most significant decimal digit; they are simply part of a larger pure-binary number. For example, a significand of 8000000 is encoded as binary 011110100001001000000000, with the leading 4 bits encoding 7; the first significand that requires a 24th bit (and thus the second encoding form) is 2²³ = 8388608. In the above cases, the value represented is: Decimal64 and Decimal128 operate analogously, but with larger exponent continuation and significand fields. For Decimal128, the second encoding form is actually never used; the largest valid significand of 10³⁴ − 1 = 1ED09BEAD87C0378D8E63FFFFFFFF₁₆ can be represented in 113 bits.

In this version, the significand is stored as a series of decimal digits. The leading digit is between 0 and 9 (3 or 4 binary bits), and the rest of the significand uses the densely packed decimal (DPD) encoding. The leading 2 bits of the exponent and the leading digit (3 or 4 bits) of the significand are combined into the five bits that follow the sign bit. This is followed by a fixed-offset exponent continuation field.
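The two binary-integer-significand (BID) forms above can be sketched for Decimal32. The helper name `decode_decimal32_bid` is illustrative, only finite non-special values are handled, and the exponent bias of 101 is the Decimal32 value assumed from the IEEE 754-2008 standard:

```python
def decode_decimal32_bid(bits: int):
    """Decode a finite Decimal32 value in the BID encoding (sketch).

    Returns (signed_significand, decimal_exponent)."""
    sign = -1 if (bits >> 31) & 1 else 1
    if (bits >> 29) & 0b11 != 0b11:
        # First form: 8-bit exponent field right after the sign bit,
        # 23-bit significand with an implicit leading 0 bit.
        exponent = (bits >> 23) & 0xFF
        significand = bits & 0x7FFFFF
    else:
        # Second form ("11" after the sign): exponent field shifted right
        # 2 bits, 21 stored bits with an implicit "100" 3-bit prefix.
        exponent = (bits >> 21) & 0xFF
        significand = (0b100 << 21) | (bits & 0x1FFFFF)
    if significand > 9999999:        # illegal significands are treated as 0
        significand = 0
    return sign * significand, exponent - 101   # Decimal32 exponent bias

# 8000000 fits in 23 bits, so the first form applies:
print(decode_decimal32_bid((101 << 23) | 8000000))   # (8000000, 0)
```

The second form kicks in exactly at 2²³ = 8388608, whose stored low 21 bits are all zero.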
Finally, the significand continuation field is made of 2, 5, or 11 10-bit declets, each encoding 3 decimal digits.[8] If the first two bits after the sign bit are "00", "01", or "10", then those are the leading bits of the exponent, and the three bits after that are interpreted as the leading decimal digit (0 to 7):[9] If the first two bits after the sign bit are "11", then the second two bits are the leading bits of the exponent, and the last bit is prefixed with "100" to form the leading decimal digit (8 or 9): The remaining two combinations (11110 and 11111) of the 5-bit field are used to represent ±infinity and NaNs, respectively.

The usual rule for performing floating-point arithmetic is that the exact mathematical value is calculated,[10] and the result is then rounded to the nearest representable value in the specified precision. This is in fact the behavior mandated for IEEE-compliant computer hardware, under normal rounding behavior and in the absence of exceptional conditions. For ease of presentation and understanding, 7-digit precision will be used in the examples. The fundamental principles are the same in any precision. A simple method to add floating-point numbers is to first represent them with the same exponent. In the example below, the second number is shifted right by 3 digits. We proceed with the usual addition method: The following example is decimal, which simply means the base is 10. Hence: This is nothing other than converting to scientific notation. In detail: This is the true result, the exact sum of the operands. It will be rounded to 7 digits and then normalized if necessary. The final result is: Note that the low 3 digits of the second operand (654) are essentially lost. This is round-off error. In extreme cases, the sum of two non-zero numbers may be equal to one of them: Another problem of loss of significance occurs when approximations to two nearly equal numbers are subtracted.
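Python's decimal module, mentioned earlier, can reproduce this kind of 7-digit round-off directly; the operand values below are illustrative, not the ones from the worked example:

```python
from decimal import Decimal, getcontext

getcontext().prec = 7                 # 7 significant digits, as in the examples

a = Decimal('1234567')
b = Decimal('0.654')
print(a + b)                          # exact sum 1234567.654 rounds to 1234568

# Absorption: an addend too small to survive rounding leaves the sum unchanged.
print(a + Decimal('0.0000001') == a)  # True
```

The low digits of the second operand are lost exactly as described: the exact sum is computed first, then rounded to the context precision.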
In the following example e = 5; s = 1.234571 and e = 5; s = 1.234567 are approximations to the rationals 123457.1467 and 123456.659. The floating-point difference is computed exactly because the numbers are close; the Sterbenz lemma guarantees this, even in case of underflow when gradual underflow is supported. Despite this, the difference of the original numbers is e = −1; s = 4.877000, which differs more than 20% from the difference e = −1; s = 4.000000 of the approximations. In extreme cases, all significant digits of precision can be lost.[11][12] This cancellation illustrates the danger in assuming that all of the digits of a computed result are meaningful. Dealing with the consequences of these errors is a topic in numerical analysis; see also Accuracy problems.

To multiply, the significands are multiplied while the exponents are added, and the result is rounded and normalized. Division is done similarly, but is more complicated. There are no cancellation or absorption problems with multiplication or division, though small errors may accumulate as operations are performed repeatedly. In practice, the way these operations are carried out in digital logic can be quite complex.
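The cancellation example above can be replayed with the decimal module; the unary `+` operator applies the context rounding to each operand, producing the 7-digit approximations:

```python
from decimal import Decimal, getcontext

getcontext().prec = 7                     # 7-digit decimal arithmetic

x = +Decimal('123457.1467')               # rounds to the approximation 123457.1
y = +Decimal('123456.659')                # rounds to the approximation 123456.7

print(x - y)                              # 0.4 (difference of the approximations)
print(Decimal('123457.1467') - Decimal('123456.659'))   # 0.4877 (true difference)
```

The subtraction of the two close approximations is exact, yet the result differs by more than 20% from the true difference: the error was committed when the operands were rounded, and the subtraction merely exposes it.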
https://en.wikipedia.org/wiki/Decimal_floating_point
Double-precision floating-point format (sometimes called FP64 or float64) is a floating-point number format, usually occupying 64 bits in computer memory; it represents a wide range of numeric values by using a floating radix point. Double precision may be chosen when the range or precision of single precision would be insufficient. In the IEEE 754 standard, the 64-bit base-2 format is officially referred to as binary64; it was called double in IEEE 754-1985. IEEE 754 specifies additional floating-point formats, including 32-bit base-2 single precision and, more recently, base-10 representations (decimal floating point).

One of the first programming languages to provide floating-point data types was Fortran.[citation needed] Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model, and upon decisions made by programming-language implementers. E.g., GW-BASIC's double-precision data type was the 64-bit MBF floating-point format. Double-precision binary floating-point is a commonly used format on PCs, due to its wider range over single-precision floating point, in spite of its performance and bandwidth cost. It is commonly known simply as double.

The IEEE 754 standard specifies a binary64 as having: The sign bit determines the sign of the number (including when this number is zero, which is signed). The exponent field is an 11-bit unsigned integer from 0 to 2047, in biased form: an exponent value of 1023 represents the actual zero. Exponents range from −1022 to +1023 because exponents of −1023 (all 0s) and +1024 (all 1s) are reserved for special numbers. The 53-bit significand precision gives from 15 to 17 significant decimal digits of precision (2⁻⁵³ ≈ 1.11 × 10⁻¹⁶).
If a decimal string with at most 15 significant digits is converted to the IEEE 754 double-precision format, giving a normal number, and then converted back to a decimal string with the same number of digits, the final result should match the original string. If an IEEE 754 double-precision number is converted to a decimal string with at least 17 significant digits, and then converted back to double-precision representation, the final result must match the original number.[1]

The format is written with the significand having an implicit integer bit of value 1 (except for special data; see the exponent encoding below). With the 52 bits of the fraction (F) significand appearing in the memory format, the total precision is therefore 53 bits (approximately 16 decimal digits, 53 log₁₀(2) ≈ 15.955). The bits are laid out as follows: The real value assumed by a given 64-bit double-precision datum with a given biased exponent E and a 52-bit fraction is (−1)^sign × (1.b₅₁b₅₀…b₀)₂ × 2^(E−1023), or equivalently (−1)^sign × (1 + F/2⁵²) × 2^(E−1023).

Between 2⁵² = 4,503,599,627,370,496 and 2⁵³ = 9,007,199,254,740,992 the representable numbers are exactly the integers. For the next range, from 2⁵³ to 2⁵⁴, everything is multiplied by 2, so the representable numbers are the even ones, etc. Conversely, for the previous range from 2⁵¹ to 2⁵², the spacing is 0.5, etc. The spacing as a fraction of the numbers in the range from 2ⁿ to 2ⁿ⁺¹ is 2ⁿ⁻⁵². The maximum relative rounding error when rounding a number to the nearest representable one (the machine epsilon) is therefore 2⁻⁵³. The 11-bit width of the exponent allows the representation of numbers between 10⁻³⁰⁸ and 10³⁰⁸, with full 15–17 decimal digits of precision. By compromising precision, the subnormal representation allows even smaller values down to about 5 × 10⁻³²⁴. The double-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 1023; this is also known as the exponent bias in the IEEE 754 standard.
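The bit layout just described can be inspected by reinterpreting a double's bytes as a 64-bit integer; the helper name below is illustrative:

```python
import struct

def binary64_fields(x: float):
    """Split a double into its sign bit, 11-bit biased exponent E,
    and 52-bit fraction, per the binary64 layout described above."""
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF        # biased; actual exponent = E - 1023
    fraction = bits & ((1 << 52) - 1)
    return sign, exponent, fraction

print(binary64_fields(1.0))    # (0, 1023, 0): 1.0 = +1.0 x 2^(1023-1023)
print(binary64_fields(-2.5))   # -2.5 = -1.25 x 2^1, so fraction = 0.25 * 2^52
```

Because the leading 1 bit is implicit, only the 52 fraction bits are stored, giving the 53-bit precision mentioned above.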
Examples of such representations would be: The exponents 000₁₆ and 7ff₁₆ have a special meaning: where F is the fractional part of the significand. All bit patterns are valid encodings. Except for the above exceptions, the entire double-precision number is described by: In the case of subnormal numbers (e = 0) the double-precision number is described by: Encodings of qNaN and sNaN are not completely specified in IEEE 754 and depend on the processor. Most processors, such as the x86 family and the ARM family processors, use the most significant bit of the significand field to indicate a quiet NaN; this is what is recommended by IEEE 754. The PA-RISC processors use the bit to indicate a signaling NaN. By default, 1/3 rounds down, instead of up like single precision, because of the odd number of bits in the significand. In more detail:

Using double-precision floating-point variables is usually slower than working with their single-precision counterparts. One area of computing where this is a particular issue is parallel code running on GPUs. For example, when using Nvidia's CUDA platform, calculations with double precision can take, depending on hardware, from 2 to 32 times as long to complete compared to those done using single precision.[4] Additionally, many mathematical functions (e.g., sin, cos, atan2, log, exp and sqrt) need more computations to give accurate double-precision results, and are therefore slower.

Doubles are implemented in many programming languages in different ways such as the following. On processors with only dynamic precision, such as x86 without SSE2 (or when SSE2 is not used, for compatibility purposes) and with extended precision used by default, software may have difficulty fulfilling some requirements. C and C++ offer a wide variety of arithmetic types. Double precision is not required by the standards (except by the optional annex F of C99, covering IEEE 754 arithmetic), but on most systems, the double type corresponds to double precision.
However, on 32-bit x86 with extended precision by default, some compilers may not conform to the C standard or the arithmetic may suffer from double rounding.[5] Fortran provides several integer and real types, and the 64-bit type real64, accessible via Fortran's intrinsic module iso_fortran_env, corresponds to double precision. Common Lisp provides the types SHORT-FLOAT, SINGLE-FLOAT, DOUBLE-FLOAT and LONG-FLOAT. Most implementations provide SINGLE-FLOATs and DOUBLE-FLOATs, with the other types as appropriate synonyms. Common Lisp provides exceptions for catching floating-point underflows and overflows, and the inexact floating-point exception, as per IEEE 754. No infinities or NaNs are described in the ANSI standard; however, several implementations do provide these as extensions. On Java before version 1.2, every implementation had to be IEEE 754 compliant. Version 1.2 allowed implementations to bring extra precision in intermediate computations for platforms like x87. Thus a modifier strictfp was introduced to enforce strict IEEE 754 computations. Strict floating point has been restored in Java 17.[6] As specified by the ECMAScript standard, all arithmetic in JavaScript shall be done using double-precision floating-point arithmetic.[7] The JSON data encoding format supports numeric values, and the grammar to which numeric expressions must conform has no limits on the precision or range of the numbers so encoded. However, RFC 8259 advises that, since IEEE 754 binary64 numbers are widely implemented, good interoperability can be achieved by implementations processing JSON if they expect no more precision or range than binary64 offers.[8] Rust and Zig have the f64 data type.[9][10]
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
Experimental mathematics is an approach to mathematics in which computation is used to investigate mathematical objects and identify properties and patterns.[1] It has been defined as "that branch of mathematics that concerns itself ultimately with the codification and transmission of insights within the mathematical community through the use of experimental (in either the Galilean, Baconian, Aristotelian or Kantian sense) exploration of conjectures and more informal beliefs and a careful analysis of the data acquired in this pursuit."[2] As expressed by Paul Halmos: "Mathematics is not a deductive science—that's a cliché. When you try to prove a theorem, you don't just list the hypotheses, and then start to reason. What you do is trial and error, experimentation, guesswork. You want to find out what the facts are, and what you do is in that respect similar to what a laboratory technician does."[3]

Mathematicians have always practiced experimental mathematics. Existing records of early mathematics, such as Babylonian mathematics, typically consist of lists of numerical examples illustrating algebraic identities. However, modern mathematics, beginning in the 17th century, developed a tradition of publishing results in a final, formal and abstract presentation. The numerical examples that may have led a mathematician to originally formulate a general theorem were not published, and were generally forgotten. Experimental mathematics as a separate area of study re-emerged in the twentieth century, when the invention of the electronic computer vastly increased the range of feasible calculations, with a speed and precision far greater than anything available to previous generations of mathematicians. A significant milestone and achievement of experimental mathematics was the discovery in 1995 of the Bailey–Borwein–Plouffe formula for the binary digits of π.
This formula was discovered not by formal reasoning, but instead by numerical searches on a computer; only afterwards was a rigorous proof found.[4] The objectives of experimental mathematics are "to generate understanding and insight; to generate and confirm or confront conjectures; and generally to make mathematics more tangible, lively and fun for both the professional researcher and the novice".[5] The uses of experimental mathematics have been defined as follows:[6]

Experimental mathematics makes use of numerical methods to calculate approximate values for integrals and infinite series. Arbitrary-precision arithmetic is often used to establish these values to a high degree of precision – typically 100 significant figures or more. Integer relation algorithms are then used to search for relations between these values and mathematical constants. Working with high-precision values reduces the possibility of mistaking a mathematical coincidence for a true relation. A formal proof of a conjectured relation will then be sought – it is often easier to find a formal proof once the form of a conjectured relation is known. If a counterexample is being sought or a large-scale proof by exhaustion is being attempted, distributed computing techniques may be used to divide the calculations between multiple computers. Frequent use is made of general mathematical software or domain-specific software written for attacks on problems that require high efficiency. Experimental mathematics software usually includes error detection and correction mechanisms, integrity checks and redundant calculations designed to minimise the possibility of results being invalidated by a hardware or software error. Applications and examples of experimental mathematics include: Some plausible relations hold to a high degree of accuracy, but are still not true.
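A minimal sketch of numerically evaluating a series, using the Bailey–Borwein–Plouffe formula for π mentioned above with exact rational arithmetic; a dozen terms of the base-16 series already agree with π to roughly 15 decimal digits:

```python
from fractions import Fraction

# pi = sum_{k>=0} (1/16^k) * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))
pi_approx = sum(
    Fraction(1, 16 ** k)
    * (Fraction(4, 8 * k + 1) - Fraction(2, 8 * k + 4)
       - Fraction(1, 8 * k + 5) - Fraction(1, 8 * k + 6))
    for k in range(12)
)
print(float(pi_approx))    # agrees with math.pi to about 15 digits
```

A serious experimental-mathematics computation would use arbitrary-precision software to push this to hundreds of digits before feeding the value to an integer relation algorithm.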
One example is: The two sides of this expression actually differ after the 42nd decimal place.[13] Another example is that the maximum height (maximum absolute value of coefficients) of all the factors of xⁿ − 1 appears to be the same as the height of the nth cyclotomic polynomial. This was shown by computer to be true for n < 10000 and was expected to be true for all n. However, a larger computer search showed that this equality fails to hold for n = 14235, when the height of the nth cyclotomic polynomial is 2, but the maximum height of the factors is 3.[14] The following mathematicians and computer scientists have made significant contributions to the field of experimental mathematics:
https://en.wikipedia.org/wiki/Experimental_mathematics
In computing, fixed-point is a method of representing fractional (non-integer) numbers by storing a fixed number of digits of their fractional part. Dollar amounts, for example, are often stored with exactly two fractional digits, representing the cents (1/100 of a dollar). More generally, the term may refer to representing fractional values as integer multiples of some fixed small unit, e.g. a fractional amount of hours as an integer multiple of ten-minute intervals. Fixed-point number representation is often contrasted to the more complicated and computationally demanding floating-point representation.

In the fixed-point representation, the fraction is often expressed in the same number base as the integer part, but using negative powers of the base b. The most common variants are decimal (base 10) and binary (base 2). The latter is commonly known also as binary scaling. Thus, if n fraction digits are stored, the value will always be an integer multiple of b⁻ⁿ. Fixed-point representation can also be used to omit the low-order digits of integer values, e.g. when representing large dollar values as multiples of $1000. When decimal fixed-point numbers are displayed for human reading, the fraction digits are usually separated from those of the integer part by a radix character (usually '.' in English, but ',' or some other symbol in many other languages). Internally, however, there is no separation, and the distinction between the two groups of digits is defined only by the programs that handle such numbers.

Fixed-point representation was the norm in mechanical calculators. Since most modern processors have a fast floating-point unit (FPU), fixed-point representations in processor-based implementations are now used only in special situations, such as in low-cost embedded microprocessors and microcontrollers; in applications that demand high speed or low power consumption or small chip area, like image, video, and digital signal processing; or when their use is more natural for the problem.
Examples of the latter are accounting of dollar amounts, when fractions of cents must be rounded to whole cents in strictly prescribed ways, and the evaluation of functions by table lookup, or any application where rational numbers need to be represented without rounding errors (which fixed-point does but floating-point cannot). Fixed-point representation is still the norm for field-programmable gate array (FPGA) implementations, as floating-point support in an FPGA requires significantly more resources than fixed-point support.[1]

A fixed-point representation of a fractional number is essentially an integer that is to be implicitly multiplied by a fixed scaling factor. For example, the value 1.23 can be stored in a variable as the integer value 1230 with an implicit scaling factor of 1/1000 (meaning that the last 3 decimal digits are implicitly assumed to be a decimal fraction), and the value 1 230 000 can be represented as 1230 with an implicit scaling factor of 1000 (with "minus 3" implied decimal fraction digits, that is, with 3 implicit zero digits at right). This representation allows standard integer arithmetic logic units to perform rational number calculations.

Negative values are usually represented in binary fixed-point format as a signed integer in two's complement representation with an implicit scaling factor as above. The sign of the value will always be indicated by the first stored bit (1 = negative, 0 = non-negative), even if the number of fraction bits is greater than or equal to the total number of bits. For example, the 8-bit signed binary integer (11110101)₂ = −11, taken with −3, +5, and +12 implied fraction bits, would represent the values −11/2⁻³ = −88, −11/2⁵ = −0.343 75, and −11/2¹² = −0.002 685 546 875, respectively. Alternatively, negative values can be represented by an integer in the sign-magnitude format, in which case the sign is never included in the number of implied fraction bits. This variant is more commonly used in decimal fixed-point arithmetic.
Thus the signed 5-digit decimal integer (−00025)₁₀, taken with −3, +5, and +12 implied decimal fraction digits, would represent the values −25/10⁻³ = −25000, −25/10⁵ = −0.00025, and −25/10¹² = −0.000 000 000 025, respectively. A program will usually assume that all fixed-point values that will be stored into a given variable, or will be produced by a given instruction, will have the same scaling factor. This parameter can usually be chosen by the programmer depending on the precision needed and the range of values to be stored. The scaling factor of a variable or formula may not appear explicitly in the program. Good programming practice then requires that it be provided in the documentation, at least as a comment in the source code.

For greater efficiency, scaling factors are often chosen to be powers (positive or negative) of the base b used to represent the integers internally. However, often the best scaling factor is dictated by the application. Thus one often uses scaling factors that are powers of 10 (e.g. 1/100 for dollar values), for human convenience, even when the integers are represented internally in binary. Decimal scaling factors also mesh well with the metric (SI) system, since the choice of the fixed-point scaling factor is often equivalent to the choice of a unit of measure (like centimeters or microns instead of meters). However, other scaling factors may be used occasionally, e.g. a fractional amount of hours may be represented as an integer number of seconds; that is, as a fixed-point number with a scale factor of 1/3600. Even with the most careful rounding, fixed-point values represented with a scaling factor S may have an error of up to ±0.5 in the stored integer, that is, ±0.5S in the value. Therefore, smaller scaling factors generally produce more accurate results. On the other hand, a smaller scaling factor means a smaller range of the values that can be stored in a given program variable.
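The two's-complement example above can be checked directly; the helper name is illustrative, and a negative count of fraction bits means implicit trailing zero bits, as in the −88 case:

```python
def fixed_to_value(raw: int, width: int, frac_bits: int) -> float:
    """Interpret a raw bit pattern as a signed (two's complement)
    fixed-point number with the given number of implied fraction bits."""
    if raw & (1 << (width - 1)):           # sign bit set -> negative value
        raw -= 1 << width                  # two's-complement conversion
    return raw * 2.0 ** -frac_bits        # apply the scaling factor 1/2^frac_bits

# (11110101)2 = -11, with +5, +12, and -3 implied fraction bits:
print(fixed_to_value(0b11110101, 8, 5))    # -0.34375
print(fixed_to_value(0b11110101, 8, 12))   # -0.002685546875
print(fixed_to_value(0b11110101, 8, -3))   # -88.0
```

All three readings share the same stored byte; only the implicit scaling factor differs.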
The maximum fixed-point value that can be stored into a variable is the largest integer value that can be stored into it, multiplied by the scaling factor; and similarly for the minimum value. For example, the table below gives the implied scaling factor S, the minimum and maximum representable values Vmin and Vmax, and the accuracy δ = S/2 of values that could be represented in 16-bit signed binary fixed-point format, depending on the number f of implied fraction bits. Fixed-point formats with scaling factors of the form 2ⁿ − 1 (namely 1, 3, 7, 15, 31, etc.) have been said to be appropriate for image processing and other digital signal processing tasks. They are supposed to provide more consistent conversions between fixed- and floating-point values than the usual 2ⁿ scaling. The Julia programming language implements both versions.[2]

Any binary fraction a/2ᵐ, such as 1/16 or 17/32, can be exactly represented in fixed-point, with a power-of-two scaling factor 1/2ⁿ with any n ≥ m. However, most decimal fractions like 0.1 or 0.123 are infinite repeating fractions in base 2, and hence cannot be represented that way. Similarly, any decimal fraction a/10ᵐ, such as 1/100 or 37/1000, can be exactly represented in fixed point with a power-of-ten scaling factor 1/10ⁿ with any n ≥ m. This decimal format can also represent any binary fraction a/2ᵐ, such as 1/8 (0.125) or 17/32 (0.53125). More generally, a rational number a/b, with a and b relatively prime and b positive, can be exactly represented in binary fixed point only if b is a power of 2, and in decimal fixed point only if b has no prime factors other than 2 and/or 5.

Fixed-point computations can be faster and/or use less hardware than floating-point ones. If the range of the values to be represented is known in advance and is sufficiently limited, fixed point can make better use of the available bits.
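The quantities tabulated for the 16-bit format follow mechanically from f; a sketch (function name illustrative):

```python
def format_params(f: int, bits: int = 16):
    """Scaling factor S, representable range [v_min, v_max], and accuracy
    S/2 of a signed binary fixed-point format with f implied fraction bits."""
    S = 2.0 ** -f
    v_min = -(2 ** (bits - 1)) * S         # most negative two's-complement value
    v_max = (2 ** (bits - 1) - 1) * S      # most positive value
    return S, v_min, v_max, S / 2

print(format_params(8))    # (0.00390625, -128.0, 127.99609375, 0.001953125)
```

With f = 8 the format spans roughly ±128 with steps of 1/256, illustrating the trade-off between range and accuracy described above.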
For example, if 32 bits are available to represent a number between 0 and 1, a fixed-point representation can have error less than 1.2 × 10⁻¹⁰, whereas the standard floating-point representation may have error up to 596 × 10⁻¹⁰, because 9 of the bits are wasted with the sign and exponent of the dynamic scaling factor. Specifically, comparing 32-bit fixed-point to floating-point audio, a recording requiring less than 40 dB of headroom has a higher signal-to-noise ratio using 32-bit fixed. Programs using fixed-point computations are usually more portable than those using floating-point, since they do not depend on the availability of an FPU. This advantage was particularly strong before the IEEE floating-point standard was widely adopted, when floating-point computations with the same data would yield different results depending on the manufacturer, and often on the computer model. Many embedded processors lack an FPU, because integer arithmetic units require substantially fewer logic gates and consume much smaller chip area than an FPU, and software emulation of floating-point on low-speed devices would be too slow for most applications. CPU chips for the earlier personal computers and game consoles, like the Intel 386 and 486SX, also lacked an FPU.

The absolute resolution (difference between successive values) of any fixed-point format is constant over the whole range, namely the scaling factor S. In contrast, the relative resolution of a floating-point format is approximately constant over the whole range, varying within a factor of the base b; whereas its absolute resolution varies by many orders of magnitude, like the values themselves. In many cases, the rounding and truncation errors of fixed-point computations are easier to analyze than those of the equivalent floating-point computations. Applying linearization techniques to truncation, such as dithering and/or noise shaping, is more straightforward within fixed-point arithmetic.
On the other hand, the use of fixed point requires greater care by the programmer. Avoidance of overflow requires much tighter estimates for the ranges of variables and all intermediate values in the computation, and often also extra code to adjust their scaling factors. Fixed-point programming normally requires the use of integer types of different widths. Fixed-point applications can make use of block floating point, which is a fixed-point environment in which each array (block) of fixed-point data is scaled with a common exponent in a single word.

A common use of decimal fixed-point is for storing monetary values, for which the complicated rounding rules of floating-point numbers are often a liability. For example, the open-source money management application GnuCash, written in C, switched from floating-point to fixed-point as of version 1.6, for this reason. Binary fixed-point (binary scaling) was widely used from the late 1960s to the 1980s for real-time computing that was mathematically intensive, such as flight simulation and nuclear power plant control algorithms. It is still used in many DSP applications and custom-made microprocessors. Computations involving angles would use binary angular measurement. Binary fixed point is used in the STM32G4 series CORDIC co-processors and in the discrete cosine transform algorithms used to compress JPEG images. Electronic instruments such as electricity meters and digital clocks often use polynomials to compensate for introduced errors, e.g. from temperature or power supply voltage. The coefficients are produced by polynomial regression. Binary fixed-point polynomials can utilize more bits of precision than floating-point, and do so in fast code using inexpensive CPUs.
Accuracy, crucial for instruments, compares well to equivalent-bit floating-point calculations, if the fixed-point polynomials are evaluated using Horner's method (e.g. y = ((ax + b)x + c)x + d) to reduce the number of times that rounding occurs, and the fixed-point multiplications utilize rounding addends.

To add or subtract two values with the same implicit scaling factor, it is sufficient to add or subtract the underlying integers; the result will have their common implicit scaling factor and can thus be stored in the same program variables as the operands. These operations yield the exact mathematical result, as long as no overflow occurs—that is, as long as the resulting integer can be stored in the receiving program variable. If the operands have different scaling factors, then they must be converted to a common scaling factor before the operation.

To multiply two fixed-point numbers, it suffices to multiply the two underlying integers, and assume that the scaling factor of the result is the product of their scaling factors. The result will be exact, with no rounding, provided that it does not overflow the receiving variable. For example, multiplying the numbers 123 scaled by 1/1000 (0.123) and 25 scaled by 1/10 (2.5) yields the integer 123×25 = 3075 scaled by (1/1000)×(1/10) = 1/10000, that is 3075/10000 = 0.3075. As another example, multiplying the first number by 155 implicitly scaled by 1/32 (155/32 = 4.84375) yields the integer 123×155 = 19065 with implicit scaling factor (1/1000)×(1/32) = 1/32000, that is 19065/32000 = 0.59578125. In binary, it is common to use a scaling factor that is a power of two. After the multiplication, the scaling factor can be divided away by shifting right. Shifting is simple and fast in most computers. Rounding is possible by adding a 'rounding addend' of half of the scaling factor before shifting: round(x/y) = floor(x/y + 1/2) = floor((x + y/2)/y), which for y = 2ⁿ is a right shift by n of x + 2ⁿ⁻¹. A similar method is usable with any scaling.
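The multiply-shift-round recipe can be sketched in a few lines for operands with n fraction bits each (function name illustrative, nonnegative operands assumed):

```python
def fx_mul(a: int, b: int, n: int) -> int:
    """Multiply two fixed-point integers with n fraction bits each and
    rescale back to n fraction bits, rounding via the addend 2^(n-1).
    Sketch for nonnegative operands."""
    product = a * b                        # carries 2n fraction bits
    return (product + (1 << (n - 1))) >> n # add half the scale, then shift

# 2.5 (80/32) times 4.84375 (155/32), with n = 5 fraction bits:
result = fx_mul(80, 155, 5)
print(result, result / 32)                 # 388 12.125 (exact product: 12.109375)
```

The raw product 12400 has 10 fraction bits; adding the rounding addend 16 before the 5-bit shift rounds to the nearest representable multiple of 1/32 rather than truncating.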
To divide two fixed-point numbers, one takes the integer quotient of their underlying integers and assumes that the scaling factor is the quotient of their scaling factors. In general, the division requires rounding and therefore the result is not exact. For example, division of 3456 scaled by 1/100 (34.56) by 1234 scaled by 1/1000 (1.234) yields the integer 3456÷1234 = 3 (rounded) with scale factor (1/100)/(1/1000) = 10, that is, 30. As another example, the division of the first number by 155 implicitly scaled by 1/32 (155/32 = 4.84375) yields the integer 3456÷155 = 22 (rounded) with implicit scaling factor (1/100)/(1/32) = 32/100 = 8/25, that is 22×32/100 = 7.04. If the result is not exact, the error introduced by the rounding can be reduced or even eliminated by converting the dividend to a smaller scaling factor. For example, if r = 1.23 is represented as 123 with scaling 1/100, and s = 6.25 is represented as 6250 with scaling 1/1000, then simple division of the integers yields 123÷6250 = 0 (rounded) with scaling factor (1/100)/(1/1000) = 10. If r is first converted to 1,230,000 with scaling factor 1/1000000, the result will be 1,230,000÷6250 = 197 (rounded) with scale factor 1/1000 (0.197). The exact value 1.23/6.25 is 0.1968.

In fixed-point computing it is often necessary to convert a value to a different scaling factor. This operation is necessary, for example: To convert a number from a fixed-point type with scaling factor R to another type with scaling factor S, the underlying integer must be multiplied by the ratio R/S. Thus, for example, to convert the value 1.23 = 123/100 from scaling factor R = 1/100 to one with scaling factor S = 1/1000, the integer 123 must be multiplied by (1/100)/(1/1000) = 10, yielding the representation 1230/1000. If the scaling factor is a power of the base used internally to represent the integer, changing the scaling factor requires only dropping low-order digits of the integer, or appending zero digits.
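The dividend-rescaling trick from the division example above, replayed with the same numbers:

```python
# r = 1.23 stored as 123 (scale 1/100); s = 6.25 stored as 6250 (scale 1/1000)
r_int, s_int = 123, 6250

naive = round(r_int / s_int)               # 0, at result scale (1/100)/(1/1000) = 10
print(naive * 10)                          # 0 -- all precision lost

# Convert the dividend to scale 1/1000000 first (multiply its integer by 10000):
better = round((r_int * 10_000) / s_int)   # 1,230,000 / 6250 = 196.8 -> 197
print(better / 1000)                       # 0.197, close to the exact 0.1968
```

Pre-scaling the dividend moves the rounding from the first significant digit to the fourth.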
However, this operation must preserve the sign of the number. In two's complement representation, that means extending the sign bit, as in arithmetic shift operations. If S does not divide R (in particular, if the new scaling factor S is greater than the original R), the new integer may have to be rounded. In particular, if r and s are fixed-point variables with implicit scaling factors R and S, the operation r ← r×s requires multiplying the respective integers and explicitly dividing the result by S. The result may have to be rounded, and overflow may occur. For example, if the common scaling factor is 1/100, multiplying 1.23 by 0.25 entails multiplying 123 by 25 to yield 3075 with an intermediate scaling factor of 1/10000. In order to return to the original scaling factor 1/100, the integer 3075 then must be multiplied by 1/100, that is, divided by 100, to yield either 31 (0.31) or 30 (0.30), depending on the rounding policy used. Similarly, the operation r ← r/s will require dividing the integers and explicitly multiplying the quotient by S. Rounding and/or overflow may occur here too. To convert a number from floating point to fixed point, one may multiply it by the scaling factor S, then round the result to the nearest integer. Care must be taken to ensure that the result fits in the destination variable or register. Depending on the scaling factor and storage size, and on the range of the input numbers, the conversion may not entail any rounding. To convert a fixed-point number to floating point, one may convert the integer to floating point and then divide it by the scaling factor S. This conversion may entail rounding if the integer's absolute value is greater than 2^24 (for binary single-precision IEEE floating point) or 2^53 (for double precision). Overflow or underflow may occur if |S| is very large or very small, respectively. Typical processors do not have specific support for fixed-point arithmetic.
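The r ← r×s step with a common decimal scaling factor can be sketched as follows (a toy illustration of the 1.23 × 0.25 example, with round-half-up as the rounding policy):

```python
SCALE = 100  # common decimal scaling factor 1/100

r = 123   # represents 1.23
s = 25    # represents 0.25

# The raw product has scaling factor 1/10000; dividing by SCALE returns
# it to 1/100. Adding SCALE // 2 first rounds to nearest (half up)
# instead of truncating toward zero.
prod = r * s                          # 3075, scaling 1/10000
truncated = prod // SCALE             # 30, representing 0.30
rounded = (prod + SCALE // 2) // SCALE  # 31, representing 0.31
print(truncated, rounded)
```

The exact product 0.3075 falls exactly halfway, which is why the two rounding policies disagree here.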
However, most computers with binary arithmetic have fast bit shift instructions that can multiply or divide an integer by any power of 2; in particular, an arithmetic shift instruction. These instructions can be used to quickly change scaling factors that are powers of 2, while preserving the sign of the number. Early computers like the IBM 1620 and the Burroughs B3500 used a binary-coded decimal (BCD) representation for integers, namely base 10 where each decimal digit is independently encoded with 4 bits. Some processors, such as microcontrollers, may still use it. In such machines, conversion of decimal scaling factors can be performed by bit shifts and/or by memory address manipulation. Some DSP architectures offer native support for specific fixed-point formats, for example, signed n-bit numbers with n−1 fraction bits (whose values may range between −1 and almost +1). The support may include a multiply instruction that includes renormalization, that is, the scaling conversion of the product from 2n−2 to n−1 fraction bits.[citation needed] If the CPU does not provide that feature, the programmer must save the product in a large enough register or temporary variable, and code the renormalization explicitly. Overflow happens when the result of an arithmetic operation is too large to be stored in the designated destination area. In addition and subtraction, the result may require one bit more than the operands. In multiplication of two unsigned integers with m and n bits, the result may have m+n bits. In case of overflow, the high-order bits are usually lost, as the unscaled integer gets reduced modulo 2^n, where n is the size of the storage area. The sign bit, in particular, is lost, which may radically change the sign and the magnitude of the value. Some processors can set a hardware overflow flag and/or generate an exception on the occurrence of an overflow.
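The signed n-bit, n−1-fraction-bit format and its renormalizing multiply can be sketched for the common Q15 case (n = 16); this is an illustrative model, not any particular DSP's instruction:

```python
N = 16  # word size; Q15: signed 16-bit value with 15 fraction bits, range [-1, 1)

def q15(x):
    """Encode a real number in [-1, 1) as a Q15 integer."""
    return round(x * (1 << (N - 1)))

def q15_mul(a, b):
    """Multiply two Q15 values. The raw product has 2*(N-1) = 30 fraction
    bits; renormalize back to N-1 = 15 fraction bits, rounding by adding
    half of the discarded weight before the arithmetic right shift."""
    return (a * b + (1 << (N - 2))) >> (N - 1)

p = q15_mul(q15(0.5), q15(-0.25))
print(p / (1 << (N - 1)))  # -0.125
```

In Python the intermediate product is held in an arbitrary-precision integer; on a 16-bit CPU without a renormalizing multiply, it would have to be saved in a 32-bit register before the shift, as the text describes.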
Some processors may instead provide saturation arithmetic: if the result of an addition or subtraction were to overflow, they instead store the value with the largest magnitude that fits in the receiving area and has the correct sign.[citation needed] However, these features are not very useful in practice; it is generally easier and safer to select scaling factors and word sizes so as to exclude the possibility of overflow, or to check the operands for excessive values before executing the operation. Explicit support for fixed-point numbers is provided by a few programming languages, notably PL/I, COBOL, Ada, JOVIAL, and Coral 66. They provide fixed-point data types, with a binary or decimal scaling factor. The compiler automatically generates code to perform the appropriate scaling conversions when doing operations on these data types, when reading or writing variables, or when converting the values to other data types such as floating point. Most of those languages were designed between 1955 and 1990. More modern languages usually do not offer any fixed-point data types or support for scaling-factor conversion. That is also the case for several older languages that are still very popular, like FORTRAN, C, and C++. The wide availability of fast floating-point processors, with strictly standardized behavior, has greatly reduced the demand for binary fixed-point support.[citation needed] Similarly, the support for decimal floating point in some programming languages, like C# and Python, has removed most of the need for decimal fixed-point support. In the few situations that call for fixed-point operations, they can be implemented by the programmer, with explicit scaling conversion, in any programming language.
On the other hand, all relational databases and the SQL notation support fixed-point decimal arithmetic and storage of numbers. PostgreSQL has a special numeric type for exact storage of numbers with up to 1000 digits.[3] Moreover, in 2008 the International Organization for Standardization (ISO) issued a proposal to extend the C programming language with fixed-point data types, for the benefit of programs running on embedded processors.[4] Also, the GNU Compiler Collection (GCC) has back-end support for fixed point.[5][6] Suppose there is a multiplication of two fixed-point numbers with 3 decimal places each; with 3 decimal places, trailing zeros are written out. To recast this as an integer multiplication, we first multiply by 1000 (= 10^3), moving all the decimal places into integer places, and then multiply by 1/1000 (= 10^−3) to put them back. This works equivalently in a different base, notably base 2 for computing, since a bit shift is the same as a multiplication or division by a power of 2. Three decimal digits are equivalent to about 10 binary digits, so we should round 0.05 to 10 bits after the binary point; the closest approximation is then 0.0000110011. The multiplication then rounds to 11.023 with three digits after the decimal point. Consider the task of computing the product of 1.2 and 5.6 in binary fixed point using 16 fraction bits. To represent the two numbers, one multiplies them by 2^16, obtaining 78,643.2 and 367,001.6, and rounds these values to the nearest integers, obtaining 78,643 and 367,002. These numbers fit comfortably into a 32-bit word in two's complement signed format. Multiplying these integers together gives the 35-bit integer 28,862,138,286 with 32 fraction bits, without any rounding.
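The 1.2 × 5.6 computation with 16 fraction bits can be reproduced in a few lines of Python:

```python
FRAC = 16
a = round(1.2 * 2**FRAC)   # 78643   (78,643.2 rounded to nearest)
b = round(5.6 * 2**FRAC)   # 367002  (367,001.6 rounded to nearest)

product = a * b            # 35-bit integer with 32 fraction bits
print(product)             # 28862138286
```

Python's unbounded integers stand in for the 64-bit register the product would need on real hardware.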
Note that storing this value directly into a 32-bit integer variable would result in overflow and loss of the most significant bits. In practice, it would probably be stored in a signed 64-bit integer variable or register. If the result is to be stored in the same format as the data, with 16 fraction bits, that integer should be divided by 2^16, which gives approximately 440,401.28, and then rounded to the nearest integer. This effect can be achieved by adding 2^15 and then shifting the result right by 16 bits. The result is 440,401, which represents the value 6.7199859619140625. Taking into account the precision of the format, that value is better expressed as 6.719986 ± 0.000008 (not counting the error that comes from the operand approximations). The correct result would be 1.2 × 5.6 = 6.72. For a more complicated example, suppose that the two numbers 1.2 and 5.6 are represented in 32-bit fixed-point format with 30 and 20 fraction bits, respectively. Scaling by 2^30 and 2^20 gives 1,288,490,188.8 and 5,872,025.6, which round to 1,288,490,189 and 5,872,026, respectively. Both numbers still fit in a 32-bit signed integer variable, and represent the fractions 1,288,490,189/2^30 and 5,872,026/2^20. Their product is (exactly) the 53-bit integer 7,566,047,890,552,914, which has 30 + 20 = 50 implied fraction bits and therefore represents the fraction 7,566,047,890,552,914/2^50. If we choose to represent this value in signed 16-bit fixed-point format with 8 fraction bits, we must divide the integer product by 2^(50−8) = 2^42 and round the result, which can be achieved by adding 2^41 and shifting right by 42 bits. The result is 1720, representing the value 1720/2^8 = 6.71875, or approximately 6.719 ± 0.002. Various notations have been used to concisely specify the parameters of a fixed-point format. In the following list, f represents the number of fractional bits, m the number of magnitude or integer bits, s the number of sign bits, and b the total number of bits.
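The mixed-format example above (30 and 20 fraction bits in, 8 fraction bits out) can be checked the same way:

```python
a = round(1.2 * 2**30)   # 1288490189, with 30 fraction bits
b = round(5.6 * 2**20)   # 5872026, with 20 fraction bits
product = a * b          # exact product, 30 + 20 = 50 implied fraction bits

# Convert to 8 fraction bits: drop 50 - 8 = 42 fraction bits, rounding
# by adding 2**41 before the right shift.
result = (product + 2**41) >> 42
print(result, result / 2**8)  # 1720 6.71875
```

Note that the rounding addend is again half the weight of the discarded bits, exactly as in the decimal examples earlier.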
https://en.wikipedia.org/wiki/Fixed-point_arithmetic
Floating-point error mitigation is the minimization of errors caused by the fact that real numbers cannot, in general, be accurately represented in a fixed space. By definition, floating-point error cannot be eliminated; at best it can only be managed. Huberto M. Sierra noted in his 1956 patent "Floating Decimal Point Arithmetic Control Means for Calculator":[1] "Thus under some conditions, the major portion of the significant data digits may lie beyond the capacity of the registers. Therefore, the result obtained may have little meaning if not totally erroneous." The Z1, developed by Konrad Zuse in 1936, was the first computer with floating-point arithmetic and was thus susceptible to floating-point error. Early computers, however, with operation times measured in milliseconds, could not solve large, complex problems[2] and thus were seldom plagued with floating-point error. Today, however, with supercomputer system performance measured in petaflops, floating-point error is a major concern for computational problem solvers. The following sections describe the strengths and weaknesses of various means of mitigating floating-point error. Though not the primary focus of numerical analysis,[3][4]: 5 numerical error analysis exists for the analysis and minimization of floating-point rounding error. Error analysis by Monte Carlo arithmetic is accomplished by repeatedly injecting small errors into an algorithm's data values and determining the relative effect on the results. Extension of precision is the use of larger representations of real values than the one initially considered. The IEEE 754 standard defines precision as the number of digits available to represent real numbers. A programming language can include single precision (32 bits), double precision (64 bits), and quadruple precision (128 bits). While extension of precision makes the effects of error less likely or less important, the true accuracy of the results is still unknown.
Variable-length arithmetic represents numbers as a string of digits of variable length, limited only by the memory available. Variable-length arithmetic operations are considerably slower than fixed-length format floating-point instructions. When high performance is not a requirement, but high precision is, variable-length arithmetic can prove useful, though the actual accuracy of the result may not be known. The floating-point algorithm known as TwoSum[5] or 2Sum, due to Knuth and Møller, and its simpler but restricted version FastTwoSum or Fast2Sum (3 operations instead of 6), allow one to get the (exact) error term of a floating-point addition rounded to nearest. One can also obtain the (exact) error term of a floating-point multiplication rounded to nearest in 2 operations with a fused multiply–add (FMA), or 17 operations if the FMA is not available (with an algorithm due to Dekker). These error terms can be used in algorithms to improve the accuracy of the final result, e.g. with floating-point expansions or compensated algorithms. Operations giving the result of a floating-point addition or multiplication rounded to nearest with its error term (but slightly differing from the algorithms mentioned above) have been standardized and recommended in the IEEE 754-2019 standard. Changing the radix, in particular from binary to decimal, can help to reduce the error and better control the rounding in some applications, such as financial applications. Interval arithmetic is a mathematical technique used to put bounds on rounding errors and measurement errors in mathematical computation. Values are intervals, which can be represented in various ways, such as:[6] "Instead of using a single floating-point number as approximation for the value of a real variable in the mathematical model under investigation, interval arithmetic acknowledges limited precision by associating with the variable a set of reals as possible values.
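Both error-term algorithms fit in a few lines of Python; with round-to-nearest double arithmetic (the default), the returned t is the exact rounding error of the computed sum s:

```python
def fast2sum(a, b):
    """Fast2Sum: requires |a| >= |b|. Returns (s, t) with s = fl(a + b)
    and a + b = s + t exactly, in 3 floating-point operations."""
    s = a + b
    z = s - a
    t = b - z
    return s, t

def twosum(a, b):
    """2Sum (Knuth, Møller): no ordering requirement, 6 operations."""
    s = a + b
    a_approx = s - b
    b_approx = s - a_approx
    da = a - a_approx
    db = b - b_approx
    return s, da + db

s, t = twosum(1.0, 2**-60)
print(s, t)  # the rounded sum is 1.0; t recovers the lost 2**-60
```

Here 2**-60 is far below half a unit in the last place of 1.0, so the naive sum discards it entirely, yet the pair (s, t) still represents the mathematical sum exactly.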
For ease of storage and computation, these sets are restricted to intervals."[7] The evaluation of an interval arithmetic expression may provide a large range of values,[7] and may seriously overestimate the true error boundaries.[8]: 8 Unums ("universal numbers") are an extension of variable-length arithmetic proposed by John Gustafson.[9] Unums have variable-length fields for the exponent and significand lengths, and error information is carried in a single bit, the ubit, representing possible error in the least significant bit of the significand (ULP).[9]: 4 The efficacy of unums is questioned by William Kahan.[8] Bounded floating point is a method proposed and patented by Alan Jorgensen.[10] The data structure includes the standard IEEE 754 data structure and interpretation, as well as information about the error between the true real value represented and the value stored by the floating-point representation.[11] Bounded floating point has been criticized as being derivative of Gustafson's work on unums and interval arithmetic.[10][12]
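A toy sketch of interval arithmetic on pairs (lo, hi); note this illustration omits the directed rounding (lower bound rounded down, upper bound rounded up) that a real package such as MPFI performs at every step:

```python
def iadd(x, y):
    """Add two intervals represented as (lo, hi) pairs."""
    return (x[0] + y[0], x[1] + y[1])

def imul(x, y):
    """Multiply two intervals: the result spans the minimum and maximum
    of the four endpoint products."""
    products = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(products), max(products))

# Any product of a point in [1.9, 2.1] and a point in [-1.2, 3.0]
# is guaranteed to lie in the resulting interval.
lo, hi = imul((1.9, 2.1), (-1.2, 3.0))
print(lo, hi)
```

The width of the result interval is the error bound the method delivers; as the text notes, repeated operations can make this bound much wider than the actual error.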
https://en.wikipedia.org/wiki/Floating-point_error_mitigation
Floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance in computing, useful in fields of scientific computation that require floating-point calculations.[1] For such cases, it is a more accurate measure than measuring instructions per second.[citation needed] Floating-point arithmetic is needed for very large or very small real numbers, or computations that require a large dynamic range. Floating-point representation is similar to scientific notation, except computers use base two (with rare exceptions), rather than base ten. The encoding scheme stores the sign, the exponent (in base two for Cray and VAX, base two or ten for IEEE floating-point formats, and base 16 for the IBM Floating Point Architecture) and the significand (the number after the radix point). While several similar formats are in use, the most common is ANSI/IEEE Std. 754-1985. This standard defines the format for 32-bit numbers called single precision, as well as 64-bit numbers called double precision and longer numbers called extended precision (used for intermediate results). Floating-point representations can support a much wider range of values than fixed point, with the ability to represent very small numbers and very large numbers.[2] The exponentiation inherent in floating-point computation assures a much larger dynamic range (the largest and smallest numbers that can be represented), which is especially important when processing data sets where some of the data may have an extremely large range of numerical values or where the range may be unpredictable. As such, floating-point processors are ideally suited for computationally intensive applications.[3] FLOPS and MIPS are units of measure for the numerical computing performance of a computer. Floating-point operations are typically used in fields such as scientific computational research, as well as in machine learning.
However, before the late 1980s floating-point hardware was typically an optional feature (it is possible to implement floating-point arithmetic in software over any integer hardware), and computers that had it were said to be "scientific computers" or to have "scientific computation" capability. Thus the unit MIPS was useful to measure integer performance of any computer, including those without such a capability, and to account for architecture differences; the similar unit MOPS (million operations per second) was in use as early as 1970[4] as well. Note that besides integer (or fixed-point) arithmetic, examples of integer operations include data movement (A to B) or value testing (if A = B, then C). That is why MIPS as a performance benchmark is adequate when a computer is used for database queries, word processing, spreadsheets, or to run multiple virtual operating systems.[5][6] In 1974 David Kuck coined the terms flops and megaflops to describe the supercomputer performance of the day by the number of floating-point calculations performed per second.[7] This was much better than using the prevalent MIPS to compare computers, as this statistic usually had little bearing on the arithmetic capability of the machine on scientific tasks. FLOPS on an HPC system can be calculated using this equation:[8] FLOPS = racks × (nodes/rack) × (sockets/node) × (cores/socket) × (cycles/second) × (FLOPs/cycle). This can be simplified to the most common case, a computer that has exactly 1 CPU: FLOPS = cores × (cycles/second) × (FLOPs/cycle). FLOPS can be recorded in different measures of precision; for example, the TOP500 supercomputer list ranks computers by 64-bit (double-precision floating-point format) operations per second, abbreviated to FP64.[9] Similar measures are available for 32-bit (FP32) and 16-bit (FP16) operations.
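The simplified single-CPU formula amounts to one multiplication; the figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Theoretical peak FLOPS for a single-CPU machine (illustrative figures):
cores = 8
cycles_per_second = 3.0e9   # a 3 GHz clock
flops_per_cycle = 16        # e.g. two 8-wide FMA pipelines (assumed)

peak_flops = cores * cycles_per_second * flops_per_cycle
print(peak_flops / 1e9, "GFLOPS")  # 384.0 GFLOPS
```

Real machines sustain only a fraction of this theoretical peak, which is why benchmarks such as LINPACK report measured rather than calculated FLOPS.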
In June 1997, Intel's ASCI Red was the world's first computer to achieve one teraFLOPS and beyond. Sandia director Bill Camp said that ASCI Red had the best reliability of any supercomputer ever built, and "was supercomputing's high-water mark in longevity, price, and performance".[39] NEC's SX-9 supercomputer was the world's first vector processor to exceed 100 gigaFLOPS per single core. In June 2006, a new computer was announced by the Japanese research institute RIKEN, the MDGRAPE-3. The computer's performance tops out at one petaFLOPS, almost two times faster than the Blue Gene/L, but MDGRAPE-3 is not a general-purpose computer, which is why it does not appear in the Top500.org list. It has special-purpose pipelines for simulating molecular dynamics. By 2007, Intel Corporation unveiled the experimental multi-core POLARIS chip, which achieves 1 teraFLOPS at 3.13 GHz. The 80-core chip can raise this result to 2 teraFLOPS at 6.26 GHz, although the thermal dissipation at this frequency exceeds 190 watts.[40] In June 2007, Top500.org reported the fastest computer in the world to be the IBM Blue Gene/L supercomputer, measuring a peak of 596 teraFLOPS.[41] The Cray XT4 hit second place with 101.7 teraFLOPS. On June 26, 2007, IBM announced the second generation of its top supercomputer, dubbed Blue Gene/P and designed to continuously operate at speeds exceeding one petaFLOPS, faster than the Blue Gene/L. When configured to do so, it can reach speeds in excess of three petaFLOPS.[42] On October 25, 2007, NEC Corporation of Japan issued a press release announcing its SX series model SX-9,[43] claiming it to be the world's fastest vector supercomputer.
The SX-9 features the first CPU capable of a peak vector performance of 102.4 gigaFLOPS per single core. On February 4, 2008, the NSF and the University of Texas at Austin opened full-scale research runs on an AMD/Sun supercomputer named Ranger,[44] the most powerful supercomputing system in the world for open science research, which operates at a sustained speed of 0.5 petaFLOPS. On May 25, 2008, an American supercomputer built by IBM, named 'Roadrunner', reached the computing milestone of one petaFLOPS. It headed the June 2008 and November 2008 TOP500 lists of the most powerful supercomputers (excluding grid computers).[45][46] The computer is located at Los Alamos National Laboratory in New Mexico. The computer's name refers to the New Mexico state bird, the greater roadrunner (Geococcyx californianus).[47] In June 2008, AMD released the ATI Radeon HD 4800 series, which are reported to be the first GPUs to achieve one teraFLOPS. On August 12, 2008, AMD released the ATI Radeon HD 4870X2 graphics card with two Radeon R770 GPUs totaling 2.4 teraFLOPS. In November 2008, an upgrade to the Cray Jaguar supercomputer at the Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) raised the system's computing power to a peak 1.64 petaFLOPS, making Jaguar the world's first petaFLOPS system dedicated to open research. In early 2009 the supercomputer was named after a mythical creature, Kraken. Kraken was declared the world's fastest university-managed supercomputer and sixth fastest overall in the 2009 TOP500 list. In 2010 Kraken was upgraded and can operate faster and is more powerful. In 2009, the Cray Jaguar performed at 1.75 petaFLOPS, beating the IBM Roadrunner for the number one spot on the TOP500 list.[48] In October 2010, China unveiled the Tianhe-1, a supercomputer that operates at a peak computing rate of 2.5 petaFLOPS.[49][50] As of 2010, the fastest PC processor reached 109 gigaFLOPS (Intel Core i7 980 XE)[51] in double-precision calculations. GPUs are considerably more powerful.
For example, Nvidia Tesla C2050 GPU computing processors perform around 515 gigaFLOPS[52] in double-precision calculations, and the AMD FireStream 9270 peaks at 240 gigaFLOPS.[53] In November 2011, it was announced that Japan had achieved 10.51 petaFLOPS with its K computer.[54] It has 88,128 SPARC64 VIIIfx processors in 864 racks, with a theoretical performance of 11.28 petaFLOPS. It is named after the Japanese word "kei", which stands for 10 quadrillion,[55] corresponding to the target speed of 10 petaFLOPS. On November 15, 2011, Intel demonstrated a single x86-based processor, code-named "Knights Corner", sustaining more than a teraFLOPS on a wide range of DGEMM operations. Intel emphasized during the demonstration that this was a sustained teraFLOPS (not "raw teraFLOPS" used by others to get higher but less meaningful numbers), and that it was the first general-purpose processor to ever cross a teraFLOPS.[56][57] On June 18, 2012, IBM's Sequoia supercomputer system, based at the U.S. Lawrence Livermore National Laboratory (LLNL), reached 16 petaFLOPS, setting the world record and claiming first place in the latest TOP500 list.[58] On November 12, 2012, the TOP500 list certified Titan as the world's fastest supercomputer per the LINPACK benchmark, at 17.59 petaFLOPS.[59][60] It was developed by Cray Inc. at the Oak Ridge National Laboratory and combines AMD Opteron processors with "Kepler" NVIDIA Tesla graphics processing unit (GPU) technologies.[61][62] On June 10, 2013, China's Tianhe-2 was ranked the world's fastest with 33.86 petaFLOPS.[63] On June 20, 2016, China's Sunway TaihuLight was ranked the world's fastest with 93 petaFLOPS on the LINPACK benchmark (out of 125 peak petaFLOPS).
The system was installed at the National Supercomputing Center in Wuxi, and represented more performance than the next five most powerful systems on the TOP500 list at the time combined.[64] In June 2019, Summit, an IBM-built supercomputer now running at the Department of Energy's (DOE) Oak Ridge National Laboratory (ORNL), captured the number one spot with a performance of 148.6 petaFLOPS on High Performance Linpack (HPL), the benchmark used to rank the TOP500 list. Summit has 4,356 nodes, each one equipped with two 22-core Power9 CPUs and six NVIDIA Tesla V100 GPUs.[65] In June 2022, the United States' Frontier was the most powerful supercomputer on TOP500, reaching 1102 petaFLOPS (1.102 exaFLOPS) on the LINPACK benchmarks.[66] In November 2024, the United States' El Capitan exascale supercomputer, hosted at the Lawrence Livermore National Laboratory in Livermore, displaced Frontier as the world's fastest supercomputer in the 64th edition of the TOP500 (Nov 2024). Distributed computing uses the Internet to link personal computers to achieve more FLOPS. An example cost-performance build: 3× NVIDIA RTX 3080 at 29,770 GFLOPS each and $699.99 per card; total system performance of 89,794 GFLOPS (89.794 TFLOPS); total system cost, including realistic but low-cost parts matched with the other example, of $2839;[92] giving US$0.0314 per GFLOP.
https://en.wikipedia.org/wiki/FLOPS
Gal's accurate tables is a method devised by Shmuel Gal to provide accurate values of special functions using a lookup table and interpolation. It is a fast and efficient method for generating values of functions like the exponential or the trigonometric functions to within last-bit accuracy for almost all argument values without using extended-precision arithmetic. The main idea in Gal's accurate tables is a different tabulation for the special function being computed. Commonly, the range is divided into several subranges, each with precomputed values and correction formulae. To compute the function, look up the closest point and compute a correction as a function of the distance. Gal's idea is to not precompute equally spaced values, but rather to perturb the points x so that both x and f(x) are very nearly exactly representable in the chosen numeric format. By searching approximately 1000 values on either side of the desired value x, a value can be found such that f(x) can be represented with less than ±1/2000 bit of rounding error. If the correction is also computed to ±1/2000 bit of accuracy (which does not require extra floating-point precision as long as the correction is less than 1/2000 the magnitude of the stored value f(x)), and the computed correction is more than ±1/1000 of a bit away from exactly half a bit (the difficult rounding case), then it is known whether the exact function value should be rounded up or down. The technique provides an efficient way to compute the function value to within ±1/1000 least-significant bit, i.e. 10 extra bits of precision. If this approximation is more than ±1/1000 of a bit away from exactly midway between two representable values (which happens 99.8% of the time), then the correctly rounded result is clear. Combined with an extended-precision fallback algorithm, this can compute the correctly rounded result in very reasonable average time.
In 2/1000 (0.2%) of cases, such a higher-precision evaluation is required to resolve the rounding uncertainty, but this is infrequent enough that it has little effect on the average calculation time. The problem of generating function values that are accurate to the last bit is known as the table-maker's dilemma.
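The search for a perturbed tabulation point can be sketched in Python. For illustration we use f(x) = x², whose exact value is cheap to compute with rationals, in place of a genuine special function; the method keeps the neighbour of the desired point whose exact f(x) lands closest to a representable double (the function choice and search width are ours, not Gal's):

```python
from fractions import Fraction
import math

def residual_ulps(x):
    """Distance, in units in the last place, between the exact value of
    f(x) = x*x and its nearest representable double."""
    exact = Fraction(x) * Fraction(x)
    nearest = float(exact)  # correctly rounded conversion
    return abs(exact - Fraction(nearest)) / Fraction(math.ulp(nearest))

x0 = 1.234
ulp = math.ulp(x0)
# Search a few hundred neighbours of x0 (Gal searched about 1000 on each
# side) for the point whose f(x) is most nearly exactly representable.
best = min((x0 + k * ulp for k in range(-300, 301)), key=residual_ulps)
print(float(residual_ulps(best)), float(residual_ulps(x0)))
```

Storing `best` and the double nearest to f(best) in the table then leaves only a tiny residual to absorb into the correction term, which is what makes correct rounding decidable without extended precision in the vast majority of cases.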
https://en.wikipedia.org/wiki/Gal%27s_accurate_tables
The GNU Multiple Precision Floating-Point Reliable Library (GNU MPFR) is a GNU portable C library for arbitrary-precision binary floating-point computation with correct rounding, based on the GNU Multiple Precision Arithmetic Library.[1][2] MPFR's computation is both efficient and has well-defined semantics: the functions are completely specified on all possible operands, and the results do not depend on the platform.[3] This is done by copying the ideas from the ANSI/IEEE 754 standard for fixed-precision floating-point arithmetic (correct rounding and exceptions, in particular). More precisely, its main features are: MPFR is not able to track the accuracy of numbers in a whole program or expression; this is not its goal. Interval arithmetic packages like Arb,[4] MPFI,[5] or Real RAM implementations like iRRAM,[6] which may be based on MPFR, can do that for the user. MPFR depends on the GNU Multiple Precision Arithmetic Library (GMP). MPFR is needed to build the GNU Compiler Collection (GCC).[7] Other software uses MPFR, such as ALGLIB, CGAL, FLINT, GNOME Calculator, the Julia language implementation, the Magma computer algebra system, Maple, GNU MPC, and GNU Octave.
https://en.wikipedia.org/wiki/GNU_MPFR
In computing, half precision (sometimes called FP16 or float16) is a binary floating-point computer number format that occupies 16 bits (two bytes in modern computers) in computer memory. It is intended for storage of floating-point values in applications where higher precision is not essential, in particular image processing and neural networks. Almost all modern uses follow the IEEE 754-2008 standard, where the 16-bit base-2 format is referred to as binary16 and the exponent uses 5 bits. This can express values in the range ±65,504, with the minimum value above 1 being 1 + 1/1024. Depending on the computer, half precision can be over an order of magnitude faster than double precision, e.g. 550 PFLOPS for half precision vs 37 PFLOPS for double precision on one cloud provider.[1] Several earlier 16-bit floating-point formats have existed, including that of Hitachi's HD61810 DSP of 1982 (a 4-bit exponent and a 12-bit mantissa),[2] Thomas J. Scott's WIF of 1991 (5 exponent bits, 10 mantissa bits)[3] and the 3dfx Voodoo Graphics processor of 1995 (same as Hitachi).[4] ILM was searching for an image format that could handle a wide dynamic range, but without the hard drive and memory cost of single- or double-precision floating point.[5] The hardware-accelerated programmable shading group led by John Airey at SGI (Silicon Graphics) used the s10e5 data type in 1997 as part of the 'bali' design effort. This is described in a SIGGRAPH 2000 paper[6] (see section 4.3) and further documented in US patent 7518615.[7] It was popularized by its use in the open-source OpenEXR image format. Nvidia and Microsoft defined the half datatype in the Cg language, released in early 2002, and implemented it in silicon in the GeForce FX, released in late 2002.[8] However, hardware support for accelerated 16-bit floating point was later dropped by Nvidia before being reintroduced in the Tegra X1 mobile GPU in 2015.
The F16C extension in 2012 allows x86 processors to convert half-precision floats to and from single-precision floats with a machine instruction. The IEEE 754 standard[9] specifies a binary16 as having the following format: 1 sign bit, 5 exponent bits, and 10 explicitly stored significand bits. The format is assumed to have an implicit lead bit with value 1 unless the exponent field is stored with all zeros. Thus, only 10 bits of the significand appear in the memory format, but the total precision is 11 bits. In IEEE 754 parlance, there are 10 bits of significand, but there are 11 bits of significand precision (log10(2^11) ≈ 3.311 decimal digits, or 4 digits ± slightly less than 5 units in the last place). The half-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 15; this is also known as the exponent bias in the IEEE 754 standard.[9] Thus, as defined by the offset-binary representation, in order to get the true exponent, the offset of 15 has to be subtracted from the stored exponent. The stored exponents 00000₂ and 11111₂ are interpreted specially. The minimum strictly positive (subnormal) value is 2^−24 ≈ 5.96 × 10^−8. The minimum positive normal value is 2^−14 ≈ 6.10 × 10^−5. The maximum representable value is (2 − 2^−10) × 2^15 = 65504. These examples are given in bit representation of the floating-point value, including the sign bit, (biased) exponent, and significand. By default, 1/3 rounds down, as for double precision, because of the odd number of bits in the significand: the bits beyond the rounding point are 0101..., which is less than 1/2 of a unit in the last place. 65520 and larger numbers round to infinity. This is for round-to-even; other rounding strategies will change this cut-off.
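These boundary values can be checked directly in Python, whose struct module supports the IEEE 754 binary16 interchange format via the 'e' format character:

```python
import struct

def to_half_and_back(x):
    """Round a double to the nearest binary16 value and decode it again."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_half_and_back(65504.0))  # 65504.0, the maximum: (2 - 2**-10) * 2**15
print(to_half_and_back(2**-24))   # 5.960464477539063e-08, the minimum subnormal
print(to_half_and_back(1/3))      # 0.333251953125: 1/3 rounds down
```

The 1/3 case shows the rounding behavior described above: the nearest binary16 below 1/3 is 0.333251953125, and the discarded bits 0101... fall below the halfway point.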
ARM processors support (via a floating-point control register bit) an "alternative half-precision" format, which does away with the special case for an exponent value of 31 (11111₂).[10] It is almost identical to the IEEE format, but there is no encoding for infinities or NaNs; instead, an exponent of 31 encodes normalized numbers in the range 65536 to 131008. Half precision is used in several computer graphics environments to store pixels, including MATLAB, OpenEXR, JPEG XR, GIMP, OpenGL, Vulkan,[11] Cg, Direct3D, and D3DX. The advantage over 8-bit or 16-bit integers is that the increased dynamic range allows for more detail to be preserved in highlights and shadows for images, and avoids gamma correction. The advantage over 32-bit single-precision floating point is that it requires half the storage and bandwidth (at the expense of precision and range).[5] Half precision can be useful for mesh quantization. Mesh data is usually stored using 32-bit single-precision floats for the vertices; however, in some situations it is acceptable to reduce the precision to only 16-bit half precision, requiring only half the storage at the expense of some precision. Mesh quantization can also be done with 8-bit or 16-bit fixed precision depending on the requirements.[12] Hardware and software for machine learning or neural networks tend to use half precision: such applications usually do a large amount of calculation, but don't require a high level of precision. Due to hardware typically not supporting 16-bit half-precision floats, neural networks often use the bfloat16 format, which is the single-precision float format truncated to 16 bits. If the hardware has instructions to compute half-precision math, it is often faster than single or double precision.
If the system has SIMD instructions that can handle multiple floating-point numbers within one instruction, half precision can be twice as fast by operating on twice as many numbers simultaneously.[13] Zig provides support for half precision with its f16 type.[14] .NET 5 introduced half-precision floating-point numbers with the System.Half standard library type.[15][16] As of January 2024, no .NET language (C#, F#, Visual Basic, C++/CLI, or C++/CX) has a literal (e.g. in C#, 1.0f has type System.Single and 1.0m has type System.Decimal) or a keyword for the type.[17][18][19] Swift introduced half-precision floating-point numbers in Swift 5.3 with the Float16 type.[20] OpenCL also supports half-precision floating-point numbers with the half data type, using the IEEE 754-2008 half-precision storage format.[21] As of 2024, Rust is working on adding a new f16 type for IEEE half-precision 16-bit floats.[22] Julia provides support for half-precision floating-point numbers with the Float16 type.[23] C++ introduced half precision in C++23 with the std::float16_t type;[24] GCC already implements support for it.[25] Several versions of the ARM architecture support half precision.[26] Support for conversions with half-precision floats in the x86 instruction set is specified in the F16C instruction set extension, first introduced in 2009 by AMD and fairly broadly adopted by AMD and Intel CPUs by 2012. This was further extended by the AVX-512_FP16 instruction set extension implemented in the Intel Sapphire Rapids processor.[27] On RISC-V, the Zfh and Zfhmin extensions provide hardware support for 16-bit half-precision floats; Zfhmin is a minimal alternative to Zfh.[28] On Power ISA, VSX and the not-yet-approved SVP64 extension provide hardware support for 16-bit half-precision floats as of PowerISA v3.1B and later.[29][30] Support for half precision on IBM Z is part of the Neural-network-processing-assist facility that IBM introduced with Telum.
IBM refers to half-precision floating-point data as NNP-Data-Type 1 (16-bit).
https://en.wikipedia.org/wiki/Half-precision_floating-point_format
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point arithmetic originally established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units use the IEEE 754 standard. The standard defines arithmetic and interchange formats, rounding rules, operations, and exception handling. IEEE 754-2008, published in August 2008, includes nearly all of the original IEEE 754-1985 standard, plus the IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic. The current version, IEEE 754-2019, was published in July 2019.[1] It is a minor revision of the previous version, incorporating mainly clarifications, defect fixes, and new recommended operations. The need for a floating-point standard arose from chaos in the business and scientific computing industry in the 1960s and 1970s. IBM used a hexadecimal floating-point format with a longer significand and a shorter exponent. CDC and Cray computers used ones' complement representation, which admits a value of +0 and −0. CDC 60-bit computers did not have full 60-bit adders, so integer arithmetic was limited to 48 bits of precision from the floating-point unit. Exception processing from divide-by-zero differed between computers. Moving data between systems, and even repeating the same calculations on different systems, was often difficult. The first IEEE standard for floating-point arithmetic, IEEE 754-1985, was published in 1985. It covered only binary floating-point arithmetic. A new version, IEEE 754-2008, was published in August 2008, following a seven-year revision process chaired by Dan Zuras and edited by Mike Cowlishaw. It replaced both IEEE 754-1985 (binary floating-point arithmetic) and the IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic.
The binary formats in the original standard are included in this new standard along with three new basic formats, one binary and two decimal. To conform to the current standard, an implementation must implement at least one of the basic formats as both an arithmetic format and an interchange format. The international standard ISO/IEC/IEEE 60559:2011 (with content identical to IEEE 754-2008) has been approved for adoption through ISO/IEC JTC 1/SC 25 under the ISO/IEEE PSDO Agreement[2][3] and published.[4] The current version, IEEE 754-2019, published in July 2019, is derived from and replaces IEEE 754-2008, following a revision process started in September 2015, chaired by David G. Hough and edited by Mike Cowlishaw. It incorporates mainly clarifications (e.g. totalOrder) and defect fixes (e.g. minNum), but also includes some new recommended operations (e.g. augmentedAddition).[5][6] The international standard ISO/IEC 60559:2020 (with content identical to IEEE 754-2019) has been approved for adoption through ISO/IEC JTC 1/SC 25 and published.[7] The next projected revision of the standard is in 2029.[8] An IEEE 754 format is a "set of representations of numerical values and symbols". A format may also include how the set is encoded.[9] A floating-point format is specified by a base (or radix) b, which is either 2 (binary) or 10 (decimal), a precision p, and an exponent range from emin to emax, with emin = 1 − emax. A format comprises finite numbers, each described by three integers (a sign, a significand c, and an exponent q), two infinities (+∞ and −∞), and two kinds of NaN (quiet and signaling). For example, if b = 10, p = 7, and emax = 96, then emin = −95, the significand satisfies 0 ≤ c ≤ 9999999, and the exponent satisfies −101 ≤ q ≤ 90. Consequently, the smallest non-zero positive number that can be represented is 1×10^−101, and the largest is 9999999×10^90 (9.999999×10^96), so the full range of numbers is −9.999999×10^96 through 9.999999×10^96. The numbers −b^(1−emax) and b^(1−emax) (here, −1×10^−95 and 1×10^−95) are the smallest (in magnitude) normal numbers; non-zero numbers between these smallest numbers are called subnormal numbers. Some numbers may have several possible floating-point representations.
For instance, if b = 10 and p = 7, then −12.345 can be represented by −12345×10^−3, −123450×10^−4, and −1234500×10^−5. However, for most operations, such as arithmetic operations, the result (value) does not depend on the representation of the inputs. For the decimal formats, any representation is valid, and the set of these representations is called a cohort. When a result can have several representations, the standard specifies which member of the cohort is chosen. For the binary formats, the representation is made unique by choosing the smallest representable exponent allowing the value to be represented exactly. Further, the exponent is not represented directly; instead, a bias is added so that the smallest representable exponent is represented as 1, with 0 used for subnormal numbers. For numbers with an exponent in the normal range (the exponent field being neither all ones nor all zeros), the leading bit of the significand will always be 1. Consequently, a leading 1 can be implied rather than explicitly present in the memory encoding, and under the standard the explicitly represented part of the significand will lie between 0 and 1. This rule is called the leading bit convention, implicit bit convention, or hidden bit convention, and it allows the binary format to have an extra bit of precision. The leading bit convention cannot be used for the subnormal numbers, as they have an exponent outside the normal exponent range and scale by the smallest represented exponent, as used for the smallest normal numbers. Due to the possibility of multiple encodings (at least in formats called interchange formats), a NaN may carry other information: a sign bit (which has no meaning, but may be used by some operations) and a payload, which is intended for diagnostic information indicating the source of the NaN (but the payload may have other uses, such as NaN-boxing[10][11][12]).
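The bias and the implicit leading bit are easy to check by decoding a binary32 pattern by hand. A minimal sketch (decode_binary32 is a helper defined here, not a library function):

```python
import struct

def decode_binary32(x: float):
    """Split a binary32 value into (sign, biased exponent, fraction field)."""
    (n,) = struct.unpack("<I", struct.pack("<f", x))
    sign = n >> 31
    biased_exp = (n >> 23) & 0xFF
    frac = n & 0x7FFFFF
    return sign, biased_exp, frac

# 1.0 is 1.0 x 2**0: the exponent bias of binary32 is 127, fraction field is 0
sign, e, f = decode_binary32(1.0)
assert (sign, e, f) == (0, 127, 0)
# Reconstruct any normal value: (-1)**s * (1 + f/2**23) * 2**(e - 127)
sign, e, f = decode_binary32(-6.25)
assert sign == 1
assert (1 + f / 2**23) * 2.0 ** (e - 127) == 6.25  # implicit leading 1 restored
```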
The standard defines five basic formats that are named for their numeric base and the number of bits used in their interchange encoding. There are three binary floating-point basic formats (encoded with 32, 64, or 128 bits) and two decimal floating-point basic formats (encoded with 64 or 128 bits). The binary32 and binary64 formats are the single and double formats of IEEE 754-1985, respectively. A conforming implementation must fully implement at least one of the basic formats. The standard also defines interchange formats, which generalize these basic formats.[13] For the binary formats, the leading bit convention is required. In the table of interchange formats, integer values are exact, whereas values in decimal notation (e.g. 1.0) are rounded values. The minimum exponents listed are for normal numbers; the special subnormal number representation allows even smaller (in magnitude) numbers to be represented with some loss of precision. For example, the smallest positive number that can be represented in binary64 is 2^−1074; contributions to the −1074 figure include the emin value −1022 and all but one of the 53 significand bits (2^(−1022 − (53 − 1)) = 2^−1074). Decimal digits is the precision of the format expressed in terms of an equivalent number of decimal digits, computed as digits × log10 base; for example, binary128 has approximately the same precision as a 34-digit decimal number. log10 MAXVAL is a measure of the range of the encoding. Its integer part is the largest exponent shown on the output of a value in scientific notation with one leading digit in the significand before the decimal point (e.g. 1.698×10^38 is near the largest value in binary32, and 9.999999×10^96 is the largest value in decimal32). The binary32 (single) and binary64 (double) formats are two of the most common formats used today. A figure accompanying the article shows the absolute precision for both formats over a range of values.
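The −1074 figure for binary64 can be verified numerically; math.ldexp scales by a power of two with correct rounding, so it exposes the subnormal boundary directly:

```python
import math
import sys

# Smallest positive subnormal of binary64: 2**-1074
tiny = math.ldexp(1.0, -1074)
assert tiny > 0.0
# Well below the subnormal range, the result rounds to zero
assert math.ldexp(1.0, -1080) == 0.0
# Smallest positive *normal* value: 2**-1022, as reported by the platform
assert sys.float_info.min == math.ldexp(1.0, -1022)
# emin of -1022, minus 52 of the 53 significand bits, gives -1074
assert tiny == sys.float_info.min * 2.0**-52
```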
This figure can be used to select an appropriate format given the expected value of a number and the required precision. A typical layout for 32-bit floating point is a sign bit, an 8-bit exponent field, and a 23-bit significand field; the 64-bit layout is similar (1, 11, and 52 bits). The standard specifies optional extended and extendable precision formats, which provide greater precision than the basic formats.[14] An extended precision format extends a basic format by using more precision and more exponent range. An extendable precision format allows the user to specify the precision and exponent range. An implementation may use whatever internal representation it chooses for such formats; all that needs to be defined are its parameters (b, p, and emax). These parameters uniquely describe the set of finite numbers (combinations of sign, significand, and exponent for the given radix) that it can represent. The standard recommends that language standards provide a method of specifying p and emax for each supported base b.[15] The standard recommends that language standards and implementations support an extended format with greater precision than the largest basic format supported for each radix b.[16] For an extended format with a precision between two basic formats, the exponent range must be as great as that of the next wider basic format. So, for instance, a 64-bit extended-precision binary number must have an emax of at least 16383. The x87 80-bit extended format meets this requirement. The original IEEE 754-1985 standard also had the concept of extended formats, but without any mandatory relation between emin and emax. For example, the Motorola 68881 80-bit format,[17] where emin = −emax, was a conforming extended format, but it became non-conforming in the 2008 revision. Interchange formats are intended for the exchange of floating-point data using a bit string of fixed length for a given format. For the exchange of binary floating-point numbers, interchange formats of length 16 bits, 32 bits, 64 bits, and any multiple of 32 bits ≥ 128[e] are defined.
The 16-bit format is intended for the exchange or storage of small numbers (e.g., for graphics). The encoding scheme for these binary interchange formats is the same as that of IEEE 754-1985: a sign bit, followed by w exponent bits that describe the exponent offset by a bias, and p − 1 bits that describe the significand. The width of the exponent field for a k-bit format is computed as w = round(4 log2(k)) − 13. The existing 64- and 128-bit formats follow this rule, but the 16- and 32-bit formats have more exponent bits (5 and 8, respectively) than this formula would provide (3 and 7, respectively). As with IEEE 754-1985, the biased-exponent field is filled with all 1 bits to indicate either infinity (trailing significand field = 0) or a NaN (trailing significand field ≠ 0). For NaNs, quiet NaNs and signaling NaNs are distinguished by using the most significant bit of the trailing significand field exclusively,[f] and the payload is carried in the remaining bits. For the exchange of decimal floating-point numbers, interchange formats of any multiple of 32 bits are defined. As with binary interchange, the encoding scheme for the decimal interchange formats encodes the sign, exponent, and significand. Two different bit-level encodings are defined, and interchange is complicated by the fact that some external indicator of the encoding in use may be required. The two options allow the significand to be encoded as a compressed sequence of decimal digits using densely packed decimal or, alternatively, as a binary integer. The former is more convenient for direct hardware implementation of the standard, while the latter is more suited to software emulation on a binary computer. In either case, the set of numbers (combinations of sign, significand, and exponent) that may be encoded is identical, and special values (±zero with the minimum exponent, ±infinity, quiet NaNs, and signaling NaNs) have identical encodings. The standard defines five rounding rules.
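The w = round(4 log2(k)) − 13 rule, and its two exceptions, are easy to tabulate (formula_w is a helper defined here for illustration):

```python
import math

def formula_w(k: int) -> int:
    """Exponent width predicted by the interchange-format rule for a k-bit format."""
    return round(4 * math.log2(k)) - 13

# The 64-, 128- and 256-bit formats follow the rule (11, 15 and 19 exponent bits)
assert formula_w(64) == 11
assert formula_w(128) == 15
assert formula_w(256) == 19
# The 16- and 32-bit formats are exceptions: the rule gives 3 and 7,
# but binary16 and binary32 actually use 5 and 8 exponent bits.
assert formula_w(16) == 3
assert formula_w(32) == 7
```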
The first two rules round to a nearest value; the others are called directed roundings. At the extremes, a value with a magnitude strictly less than k = b^emax (b − ½ b^(1−p)) will be rounded to the minimum or maximum finite number (depending on the value's sign). Any number with exactly this magnitude is considered a tie; this choice of tie may be conceptualized as the midpoint between ±b^emax (b − b^(1−p)) and ±b^(emax+1), which, were the exponent not limited, would be the next representable floating-point numbers larger in magnitude. Numbers with a magnitude strictly larger than k are rounded to the corresponding infinity.[18] "Round to nearest, ties to even" is the default for binary floating point and the recommended default for decimal. "Round to nearest, ties to away" is only required for decimal implementations.[19] Unless specified otherwise, the floating-point result of an operation is determined by applying the rounding function to the infinitely precise (mathematical) result; such an operation is said to be correctly rounded, and this requirement is called correct rounding.[20] Required operations for a supported arithmetic format (including the basic formats) include arithmetic operations, fused multiply–add, conversions, and comparisons. The standard provides comparison predicates to compare one floating-point datum to another in the supported arithmetic format.[32] Any comparison with a NaN is treated as unordered, and −0 and +0 compare as equal. The standard provides a predicate totalOrder, which defines a total ordering on canonical members of the supported arithmetic format.[33] The predicate agrees with the comparison predicates (see § Comparison predicates) when one floating-point number is less than the other. The main differences are that −0 is ordered below +0 and that NaNs are ordered rather than unordered.[34] The totalOrder predicate does not impose a total ordering on all encodings in a format.
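Round-to-nearest, ties-to-even is visible at the edge of binary64's 53-bit significand, where odd integers just above 2^53 are exactly halfway between representable neighbours:

```python
# 2**53 + 1 lies halfway between 2**53 and 2**53 + 2; the tie goes to the
# neighbour whose significand is even, which is 2**53 itself.
assert float(2**53 + 1) == float(2**53)

# 2**53 + 3 lies halfway between 2**53 + 2 and 2**53 + 4; here the even
# significand belongs to 2**53 + 4, so the tie rounds *up*.
assert float(2**53 + 3) == float(2**53 + 4)
```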
In particular, it does not distinguish among different encodings of the same floating-point representation, as when one or both encodings are non-canonical.[33] IEEE 754-2019 incorporates clarifications of totalOrder. For the binary interchange formats whose encoding follows the IEEE 754-2008 recommendation on the placement of the NaN signaling bit, the comparison is identical to one that type-puns the floating-point numbers to sign–magnitude integers (assuming a payload ordering consistent with this comparison), an old trick for floating-point comparison without an FPU.[35] The standard defines five exceptions, each of which returns a default value and has a corresponding status flag that is raised when the exception occurs.[g] No other exception handling is required, but additional non-default alternatives are recommended (see § Alternate exception handling). The five possible exceptions are invalid operation, division by zero, overflow, underflow, and inexact. These are the same five exceptions as were defined in IEEE 754-1985, but the division by zero exception has been extended to operations other than division. Some decimal floating-point implementations define additional exceptions,[36][37] which are not part of IEEE 754. Additionally, operations like quantize, when either operand is infinite or when the result does not fit the destination format, will also signal the invalid operation exception.[38] In the IEEE 754 standard, zero is signed, meaning that there exist both a "positive zero" (+0) and a "negative zero" (−0). In most run-time environments, positive zero is usually printed as "0" and negative zero as "-0". The two values behave as equal in numerical comparisons, but some operations return different results for +0 and −0. For instance, 1/(−0) returns negative infinity, while 1/(+0) returns positive infinity (so that the identity 1/(1/±∞) = ±∞ is maintained). Other common functions with a discontinuity at x = 0 that might treat +0 and −0 differently include Γ(x) and the principal square root of y + xi for any negative number y.
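Signed zero is directly observable in Python. The two zeros compare equal, but the sign survives and some functions distinguish them, atan2 being a classic example:

```python
import math

negzero = -0.0
assert negzero == 0.0                        # the zeros compare as equal ...
assert math.copysign(1.0, negzero) == -1.0   # ... but -0.0 carries its sign
assert str(negzero) == "-0.0"

# atan2 is a function with a discontinuity that treats the zeros differently:
assert math.atan2(0.0, 0.0) == 0.0
assert math.atan2(0.0, negzero) == math.pi
```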
As with any approximation scheme, operations involving "negative zero" can occasionally cause confusion. For example, in IEEE 754, x = y does not always imply 1/x = 1/y, as 0 = −0 but 1/0 ≠ 1/(−0).[39] Moreover, the reciprocal square root[h] of ±0 is ±∞, while the mathematical function 1/√x over the real numbers does not take any negative value. Subnormal values fill the underflow gap with values whose absolute distance from one another is the same as for adjacent values just outside the underflow gap. This is an improvement over the older practice of having just zero in the underflow gap, where underflowing results were replaced by zero (flush to zero).[40] Modern floating-point hardware usually handles subnormal values (as well as normal values) and does not require software emulation for subnormals. The infinities of the extended real number line can be represented in IEEE floating-point datatypes, just like ordinary floating-point values such as 1 and 1.5. They are not error values in any way, though they are often (depending on the rounding) used as replacement values when there is an overflow. Upon a divide-by-zero exception, a positive or negative infinity is returned as the exact result. An infinity can also be introduced as a numeral (like C's "INFINITY" macro, or "∞" if the programming language allows that syntax). IEEE 754 requires infinities to be handled in a reasonable way, such as (+∞) + (+7) = (+∞). IEEE 754 specifies a special value called "Not a Number" (NaN) to be returned as the result of certain "invalid" operations, such as 0/0, ∞×0, or sqrt(−1). In general, NaNs will be propagated: most operations involving a NaN will result in a NaN, although functions that would give some defined result for any given floating-point value will do so for NaNs as well, e.g. NaN ^ 0 = 1. There are two kinds of NaNs: the default quiet NaNs and, optionally, signaling NaNs.
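NaN's unordered comparisons and its propagation (including the NaN ^ 0 = 1 exception) can be demonstrated in a few lines:

```python
import math

nan = float("nan")
assert nan != nan                 # any comparison with a NaN is unordered
assert not (nan < 1.0)
assert not (nan > 1.0)
assert math.isnan(nan + 1.0)      # NaNs propagate through arithmetic
assert math.isnan(math.inf * 0)   # an "invalid" operation produces a NaN
assert nan ** 0 == 1.0            # x**0 is defined for every x, so NaN**0 == 1
```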
A signaling NaN in any arithmetic operation (including numerical comparisons) will cause an "invalid operation" exception to be signaled. The representation of NaNs specified by the standard has some unspecified bits that could be used to encode the type or source of the error, but there is no standard for that encoding. In theory, signaling NaNs could be used by a runtime system to flag uninitialized variables, or to extend the floating-point numbers with other special values without slowing down the computations with ordinary values, although such extensions are not common. It is a common misconception that the more esoteric features of the IEEE 754 standard discussed here, such as extended formats, NaN, infinities, subnormals, etc., are only of interest to numerical analysts or for advanced numerical applications. In fact the opposite is true: these features are designed to give safe, robust defaults for numerically unsophisticated programmers, in addition to supporting sophisticated numerical libraries by experts. The key designer of IEEE 754, William Kahan, notes that it is incorrect to "... [deem] features of IEEE Standard 754 for Binary Floating-Point Arithmetic that ... [are] not appreciated to be features usable by none but numerical experts. The facts are quite the opposite. In 1977 those features were designed into the Intel 8087 to serve the widest possible market... Error-analysis tells us how to design floating-point arithmetic, like IEEE Standard 754, moderately tolerant of well-meaning ignorance among programmers".[41] A property of the single- and double-precision formats is that their encoding allows one to easily sort them without using floating-point hardware, as if the bits represented sign-magnitude integers, although it is unclear whether this was a design consideration (it seems noteworthy that the earlier IBM hexadecimal floating-point representation also had this property for normalized numbers).
With the prevalent two's-complement representation, interpreting the bits as signed integers sorts the positives correctly but reverses the negatives; as one possible correction, an xor that flips the sign bit for positive values and all bits for negative values makes all the values sortable as unsigned integers (with −0 < +0).[35] The standard recommends optional exception handling in various forms, including presubstitution of user-defined default values, traps (exceptions that change the flow of control in some way), and other exception-handling models that interrupt the flow, such as try/catch. The traps and other exception mechanisms remain optional, as they were in IEEE 754-1985. Clause 9 in the standard recommends additional mathematical operations[45] that language standards should define.[46] None are required in order to conform to the standard. The recommended arithmetic operations, which must round correctly, include exponential, logarithmic, power, and trigonometric functions.[47] The asinPi, acosPi, and tanPi functions were not part of the IEEE 754-2008 standard because they were deemed less necessary;[49] asinPi and acosPi were mentioned, but this was regarded as an error.[5] All three were added in the 2019 revision. The recommended operations also include setting and accessing dynamic-mode rounding direction,[50] and implementation-defined vector reduction operations such as sum, scaled product, and dot product, whose accuracy is unspecified by the standard.[51] As of 2019, augmented arithmetic operations[52] for the binary formats are also recommended. These operations, specified for addition, subtraction, and multiplication, produce a pair of values consisting of a result correctly rounded to nearest in the format and the error term, which is representable exactly in the format.
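The xor-based sorting trick above can be sketched as a sort key over the binary64 bit patterns (total_order_key is a helper name chosen here; this mirrors the trick, not the standard's full totalOrder predicate):

```python
import struct

def total_order_key(x: float) -> int:
    """Map a binary64 value to an unsigned integer whose ordering matches
    the floating-point ordering, with -0.0 sorting before +0.0."""
    (n,) = struct.unpack("<Q", struct.pack("<d", x))
    if n >> 63:                       # negative: flip all bits
        return n ^ 0xFFFFFFFFFFFFFFFF
    return n | (1 << 63)              # non-negative: flip only the sign bit

values = [3.5, -1.0, -0.0, 0.0, 1e-300, -2.5, 100.0]
assert sorted(values, key=total_order_key) == [-2.5, -1.0, -0.0, 0.0, 1e-300, 3.5, 100.0]
assert total_order_key(-0.0) < total_order_key(0.0)
```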
At the time of publication of the standard, no hardware implementations were known, but very similar operations had already been implemented in software using well-known algorithms. The history and motivation for their standardization are explained in a background document.[53][54] As of 2019, the formerly required minNum, maxNum, minNumMag, and maxNumMag operations of IEEE 754-2008 are deprecated due to their non-associativity. Instead, two sets of new minimum and maximum operations are recommended:[55] the first set contains minimum, minimumNumber, maximum, and maximumNumber; the second set contains minimumMagnitude, minimumMagnitudeNumber, maximumMagnitude, and maximumMagnitudeNumber. The history and motivation for this change are explained in a background document.[56] The standard recommends how language standards should specify the semantics of sequences of operations, and points out the subtleties of literal meanings and optimizations that change the value of a result. By contrast, the previous 1985 version of the standard left aspects of the language interface unspecified, which led to inconsistent behavior between compilers, or between different optimization levels in an optimizing compiler. Programming languages should allow a user to specify a minimum precision for intermediate calculations of expressions for each radix. This is referred to as preferredWidth in the standard, and it should be possible to set it on a per-block basis. Intermediate calculations within expressions should be calculated, and any temporaries saved, using the maximum of the width of the operands and the preferred width, if set. Thus, for instance, a compiler targeting x87 floating-point hardware should have a means of specifying that intermediate calculations must use the double-extended format. The stored value of a variable must always be used when evaluating subsequent expressions, rather than any precursor from before rounding and assigning to the variable.
The IEEE 754-1985 version of the standard allowed many variations in implementations (such as the encoding of some values and the detection of certain exceptions). IEEE 754-2008 has reduced these allowances, but a few variations still remain (especially for binary formats). The reproducibility clause recommends that language standards should provide a means to write reproducible programs (i.e., programs that will produce the same result in all implementations of a language) and describes what needs to be done to achieve reproducible results. The standard requires operations to convert between basic formats and external character sequence formats.[57] Conversions to and from a decimal character format are required for all formats. Conversion to an external character sequence must be such that conversion back, using round to nearest, ties to even, will recover the original number. There is no requirement to preserve the payload of a quiet NaN or signaling NaN, and conversion from the external character sequence may turn a signaling NaN into a quiet NaN. The original binary value will be preserved by converting to decimal and back again using a sufficient number of decimal digits.[58] For binary formats, the required number of decimal digits is[i] 1 + ⌈p log10 2⌉, where p is the number of significant bits in the binary format, e.g. 237 bits for binary256. When using a decimal floating-point format, the decimal representation will be preserved using a number of digits equal to the precision of the format. Algorithms, with code, for correctly rounded conversion from binary to decimal and decimal to binary are discussed by Gay,[59] and for testing by Paxson and Kahan.[60] The standard recommends providing conversions to and from external hexadecimal-significand character sequences, based on C99's hexadecimal floating-point literals. Such a literal consists of an optional sign (+ or -), the indicator "0x", a hexadecimal number with or without a period, an exponent indicator "p", and a decimal exponent with optional sign.
The syntax is not case-sensitive.[61] The decimal exponent scales by powers of 2; for example, 0x0.1p0 is 1/16 and 0x0.1p-4 is 1/256.[62]
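Python's float.fromhex and float.hex implement exactly this C99-style syntax, which makes the examples above easy to verify:

```python
# The decimal exponent after "p" scales by powers of 2
assert float.fromhex("0x0.1p0") == 1 / 16
assert float.fromhex("0x0.1p-4") == 1 / 256

# The syntax is case-insensitive
assert float.fromhex("0X1.8P1") == 3.0

# hex() produces an exact, round-trippable representation of the stored bits
assert (0.1).hex() == "0x1.999999999999ap-4"
assert float.fromhex((0.1).hex()) == 0.1
```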
https://en.wikipedia.org/wiki/IEEE_754
In numerical analysis, the Kahan summation algorithm, also known as compensated summation,[1] significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the naive approach. This is done by keeping a separate running compensation (a variable to accumulate small errors), in effect extending the precision of the sum by the precision of the compensation variable. In particular, simply summing n numbers in sequence has a worst-case error that grows proportionally to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, using a compensation variable with sufficiently high precision, the worst-case error bound is effectively independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision of the result.[2] The algorithm is attributed to William Kahan;[3] Ivo Babuška seems to have come up with a similar algorithm independently (hence Kahan–Babuška summation).[4] Similar, earlier techniques include Bresenham's line algorithm, which keeps track of the accumulated error in integer operations (although first documented around the same time),[5] and delta-sigma modulation.[6] The algorithm maintains the running sum alongside a compensation variable, and can also be rewritten in terms of the Fast2Sum primitive.[7] The algorithm does not mandate any specific choice of radix, only that the arithmetic "normalize floating-point sums before rounding or truncating".[3] Computers typically use binary arithmetic, but to make the example easier to read, it will be given in decimal. Suppose we are using six-digit decimal floating-point arithmetic, sum has attained the value 10000.0, and the next two values of input[i] are 3.14159 and 2.71828. The exact result is 10005.85987, which rounds to 10005.9.
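The compensated-summation loop described above, written out in Python (a sketch of the standard algorithm; kahan_sum is a name chosen here):

```python
def kahan_sum(values):
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in values:
        y = x - c            # apply the compensation to the incoming value
        t = total + y        # low-order digits of y are lost here ...
        c = (t - total) - y  # ... and recovered here (algebraically zero)
        total = t
    return total

# Adding 1.0 repeatedly to 1e16 is hopeless naively: each 1.0 is below
# half a unit in the last place of the total and rounds away entirely.
data = [1e16] + [1.0] * 10
assert sum(data) == 1e16                  # naive: the ten 1.0 terms vanish
assert kahan_sum(data) == 1e16 + 10       # compensated: exact result
```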
With a plain summation, each incoming value would be aligned with sum, and many low-order digits would be lost (by truncation or rounding). The first result, after rounding, would be 10003.1. The second result would be 10005.81828 before rounding and 10005.8 after rounding, which is not correct. However, with compensated summation, we get the correctly rounded result of 10005.9. Assume that c has the initial value zero, with trailing zeros shown where they are significant for the six-digit floating-point number. The sum is so large that only the high-order digits of the input numbers are being accumulated. But on the next step, c, an approximation of the running error, counteracts the problem. The algorithm performs summation with two accumulators: sum holds the sum, and c accumulates the parts not assimilated into sum, to nudge the low-order part of sum the next time around. Thus the summation proceeds with "guard digits" in c, which is better than not having any, but is not as good as performing the calculations with double the precision of the input. However, simply increasing the precision of the calculations is not practical in general; if input is already in double precision, few systems supply quadruple precision, and if they did, input could then be in quadruple precision. A careful analysis of the errors in compensated summation is needed to appreciate its accuracy characteristics. While it is more accurate than naive summation, it can still give large relative errors for ill-conditioned sums. Suppose that one is summing n values x_i, for i = 1, …, n. The exact sum is S_n = Σ_{i=1}^{n} x_i (computed with infinite precision). With compensated summation, one instead obtains S_n + E_n, where the error E_n is bounded by[2] |E_n| ≤ [2ε + O(nε²)] Σ_{i=1}^{n} |x_i|, where ε is the machine precision of the arithmetic being employed (e.g. ε ≈ 10^−16 for IEEE standard double-precision floating point).
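The six-digit decimal example above can be replayed with Python's decimal module set to six significant digits (a sketch; ROUND_HALF_EVEN is the module's default, and kahan_decimal is a helper name chosen here):

```python
from decimal import Decimal, getcontext

getcontext().prec = 6        # six-digit decimal floating-point arithmetic

def kahan_decimal(values):
    total, c = Decimal(0), Decimal(0)
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

inputs = [Decimal("10000.0"), Decimal("3.14159"), Decimal("2.71828")]

# Naive summation: 10003.1, then 10005.81828 rounds to 10005.8 (wrong last digit)
assert sum(inputs[1:], inputs[0]) == Decimal("10005.8")
# Compensated summation recovers the correctly rounded 10005.9
assert kahan_decimal(inputs) == Decimal("10005.9")
```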
Usually, the quantity of interest is the relative error |E_n|/|S_n|, which is therefore bounded above by [2ε + O(nε²)] · Σ|x_i| / |Σ x_i|. In this expression for the relative error bound, the fraction Σ|x_i| / |Σ x_i| is the condition number of the summation problem. Essentially, the condition number represents the intrinsic sensitivity of the summation problem to errors, regardless of how it is computed.[8] The relative error bound of every (backwards stable) summation method by a fixed algorithm in fixed precision (i.e. not those that use arbitrary-precision arithmetic, nor algorithms whose memory and time requirements change based on the data) is proportional to this condition number.[2] An ill-conditioned summation problem is one in which this ratio is large, and in this case even compensated summation can have a large relative error. For example, if the summands x_i are uncorrelated random numbers with zero mean, the sum is a random walk, and the condition number will grow proportionally to √n. On the other hand, for random inputs with nonzero mean, the condition number asymptotes to a finite constant as n → ∞. If the inputs are all non-negative, then the condition number is 1. Given a condition number, the relative error of compensated summation is effectively independent of n. In principle, there is the O(nε²) term that grows linearly with n, but in practice this term is effectively zero: since the final result is rounded to a precision ε, the nε² term rounds to zero unless n is roughly 1/ε or larger.[2] In double precision, this corresponds to an n of roughly 10^16, much larger than most sums.
So, for a fixed condition number, the errors of compensated summation are effectively O(ε), independent of n. In comparison, the relative error bound for naive summation (simply adding the numbers in sequence, rounding at each step) grows as O(εn) multiplied by the condition number.[2] This worst-case error is rarely observed in practice, however, because it only occurs if the rounding errors are all in the same direction. In practice, it is much more likely that the rounding errors have a random sign, with zero mean, so that they form a random walk; in this case, naive summation has a root mean square relative error that grows as O(ε√n) multiplied by the condition number.[9] This is still much worse than compensated summation, however. However, if the sum can be performed in twice the precision, then ε is replaced by ε^2, and naive summation has a worst-case error comparable to the O(nε^2) term in compensated summation at the original precision. By the same token, the Σ|xi| that appears in the bound on En above is a worst-case bound that occurs only if all the rounding errors have the same sign (and are of maximal possible magnitude).[2] In practice, it is more likely that the errors have random sign, in which case the terms in Σ|xi| are replaced by a random walk; in this case, even for random inputs with zero mean, the error En grows only as O(ε√n) (ignoring the nε^2 term), the same rate at which the sum Sn grows, canceling the √n factors when the relative error is computed.
So, even for asymptotically ill-conditioned sums, the relative error for compensated summation can often be much smaller than a worst-case analysis might suggest. Neumaier[10] introduced an improved version of the Kahan algorithm, which he calls an "improved Kahan–Babuška algorithm", which also covers the case when the next term to be added is larger in absolute value than the running sum, effectively swapping the role of what is large and what is small. In pseudocode, the algorithm is: This enhancement is similar to the Fast2Sum version of Kahan's algorithm with Fast2Sum replaced by 2Sum. For many sequences of numbers, both algorithms agree, but a simple example due to Peters[11] shows how they can differ: summing [1.0, +10^100, 1.0, −10^100] in double precision, Kahan's algorithm yields 0.0, whereas Neumaier's algorithm yields the correct value 2.0. Higher-order modifications of better accuracy are also possible; for example, a variant suggested by Klein,[12] which he called a second-order "iterative Kahan–Babuška algorithm". In pseudocode, the algorithm is: Although Kahan's algorithm achieves O(1) error growth for summing n numbers, only slightly worse O(log n) growth can be achieved by pairwise summation: one recursively divides the set of numbers into two halves, sums each half, and then adds the two sums.[2] This has the advantage of requiring the same number of arithmetic operations as the naive summation (unlike Kahan's algorithm, which requires four times the arithmetic and has a latency of four times a simple summation) and can be calculated in parallel. The base case of the recursion could in principle be the sum of only one (or zero) numbers, but to amortize the overhead of recursion, one would normally use a larger base case.
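Neumaier's variant described above can be rendered in Python roughly as follows (a sketch; the branch decides which operand's low-order digits were lost, and the function name is ours):

```python
def neumaier_sum(values):
    """Improved Kahan–Babuška summation: the compensation c is accumulated
    across iterations and applied once at the end."""
    total = 0.0
    c = 0.0
    for x in values:
        t = total + x
        if abs(total) >= abs(x):
            c += (total - t) + x   # low-order digits of x were lost
        else:
            c += (x - t) + total   # low-order digits of total were lost
        total = t
    return total + c

# Peters's example: Kahan's algorithm returns 0.0 here, Neumaier's returns 2.0.
print(neumaier_sum([1.0, 1e100, 1.0, -1e100]))  # 2.0
```

Note that plain left-to-right summation also returns 0.0 on this input, since each 1.0 is absorbed by the neighbouring 1e100.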
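The pairwise summation just described can likewise be sketched in a few lines (the base-case size of 32 is an arbitrary illustrative choice for amortizing recursion overhead, not a prescribed value):

```python
def pairwise_sum(values, base=32):
    """Recursively split the list in half, sum each half, add the two sums.
    Roundoff error grows as O(log n) rather than O(n)."""
    n = len(values)
    if n <= base:              # small runs are summed naively
        s = 0.0
        for x in values:
            s += x
        return s
    mid = n // 2
    return pairwise_sum(values[:mid], base) + pairwise_sum(values[mid:], base)

print(pairwise_sum([float(i) for i in range(100)]))  # 4950.0 (exact)
```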
The equivalent of pairwise summation is used in many fast Fourier transform (FFT) algorithms and is responsible for the logarithmic growth of roundoff errors in those FFTs.[13] In practice, with roundoff errors of random signs, the root mean square errors of pairwise summation actually grow as O(√(log n)).[9] Another alternative is to use arbitrary-precision arithmetic, which in principle needs no rounding at all, at a cost of much greater computational effort. A way of performing correctly rounded sums using arbitrary precision is to extend adaptively using multiple floating-point components. This will minimize computational cost in common cases where high precision is not needed.[14][11] Another method that uses only integer arithmetic, but a large accumulator, was described by Kirchner and Kulisch;[15] a hardware implementation was described by Müller, Rüb and Rülling.[16] In principle, a sufficiently aggressive optimizing compiler could destroy the effectiveness of Kahan summation: for example, if the compiler simplified expressions according to the associativity rules of real arithmetic, it might "simplify" the second step in the sequence t = sum + y; c = (t - sum) - y; to c = ((sum + y) - sum) - y; and then to c = 0; thus eliminating the error compensation.[17] In practice, many compilers do not use associativity rules (which are only approximate in floating-point arithmetic) in simplifications unless explicitly directed to do so by compiler options enabling "unsafe" optimizations,[18][19][20][21] although the Intel C++ Compiler is one example that allows associativity-based transformations by default.[22] The original K&R C version of the C programming language allowed the compiler to re-order floating-point expressions according to real-arithmetic associativity rules, but the subsequent ANSI C standard prohibited re-ordering in order to make C better suited for numerical applications (and more similar to Fortran, which also prohibits re-ordering),[23] although in practice compiler options can re-enable
re-ordering, as mentioned above. A portable way to inhibit such optimizations locally is to break one of the lines in the original formulation into two statements and make two of the intermediate products volatile. In general, built-in "sum" functions in computer languages typically provide no guarantees that a particular summation algorithm will be employed, much less Kahan summation.[citation needed] The BLAS standard for linear algebra subroutines explicitly avoids mandating any particular computational order of operations for performance reasons,[24] and BLAS implementations typically do not use Kahan summation. The standard library of the Python computer language specifies an fsum function for accurate summation. Starting with Python 3.12, the built-in sum() function uses Neumaier summation.[25] In the Julia language, the default implementation of the sum function does pairwise summation for high accuracy with good performance,[26] but an external library provides an implementation of Neumaier's variant named sum_kbn for the cases when higher accuracy is needed.[27] In the C# language, the HPCsharp nuget package implements the Neumaier variant and pairwise summation: both as scalar, data-parallel using SIMD processor instructions, and parallel multi-core.[28]
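Python's math.fsum, mentioned above, returns a correctly rounded sum. A small self-contained demonstration: ten naive additions of 0.1 drift away from 1.0, while fsum does not.

```python
import math

data = [0.1] * 10

acc = 0.0
for x in data:          # naive left-to-right accumulation
    acc += x

print(acc)              # 0.9999999999999999 (accumulated rounding error)
print(math.fsum(data))  # 1.0 (correctly rounded)
```

(The built-in sum() is deliberately not used for the naive side of this comparison, since its behaviour on floats changed in Python 3.12.)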
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
In computing, Microsoft Binary Format (MBF) is a format for floating-point numbers which was used in Microsoft's BASIC languages, including MBASIC, GW-BASIC and QuickBASIC prior to version 4.00.[1][2][3][4][5][6][7] There are two main versions of the format. The original version was designed for memory-constrained systems and stored numbers in 32 bits (4 bytes), with a 23-bit mantissa, 1-bit sign, and an 8-bit exponent. Extended (12K) BASIC included a double-precision type with 64 bits. During the period when it was being ported from the Intel 8080 platform to the MOS 6502 processor, computers were beginning to ship with more memory as a standard feature. This version was offered with the original 32-bit format or an optional expanded 40-bit (5-byte) format. The 40-bit format was used by most home computers of the 1970s and 1980s. These two versions are sometimes known as "6-digit" and "9-digit", respectively.[8] On PCs with x86 processors, QuickBASIC, prior to version 4, reintroduced the double-precision format using a 55-bit mantissa in a 64-bit (8-byte) format. MBF was abandoned during the move to QuickBASIC 4, which used the standard IEEE 754 format, introduced a few years earlier. Bill Gates and Paul Allen were working on Altair BASIC in 1975.
They were developing the software at Harvard University on a DEC PDP-10 running their Altair emulator.[9] One thing they lacked was code to handle floating-point numbers, required to support calculations with very big and very small numbers,[9] which would be particularly useful for science and engineering.[10][11] One of the proposed uses of the Altair was as a scientific calculator.[12] At a dinner at Currier House, an undergraduate residential house at Harvard, Gates and Allen complained to their dinner companions that they had to write this code,[9] and one of them, Monte Davidoff, told them that he had written floating-point routines before and convinced Gates and Allen that he was capable of writing the Altair BASIC floating-point code.[9] At the time, while IBM had introduced their own programs, there was no standard for floating-point numbers, so Davidoff had to come up with his own. He decided that 32 bits would allow enough range and precision.[13] When Allen had to demonstrate it to MITS, it was the first time it ran on an actual Altair.[14] But it worked, and when he entered 'PRINT 2+2', Davidoff's adding routine gave the correct answer.[9] A copy of the source code for Altair BASIC resurfaced in 1999. In the late 1970s, Gates's former tutor and dean Harry Lewis had found it behind some furniture in an office in Aiken, and put it in a file cabinet. After more or less forgetting about its existence for a long time, Lewis eventually came up with the idea of displaying the listing in the lobby.
Instead, it was decided to preserve the original listing and produce several copies for display and preservation, after librarian and conservator Janice Merrill-Oldham pointed out its importance.[15][16] A comment in the source credits Davidoff as the writer of Altair BASIC's math package.[15][16] Altair BASIC took off, and soon most early home computers ran some form of Microsoft BASIC.[17][18] The BASIC port for the 6502 CPU, such as used in the Commodore PET, took up more space due to the lower code density of the 6502. Because of this it would likely not fit in a single ROM chip together with the machine-specific input and output code. Since an extra chip was necessary, extra space was available, and this was used in part to extend the floating-point format from 32 to 40 bits.[8] This extended format was not only provided by Commodore BASIC 1 & 2, but was also supported by Applesoft BASIC I & II since version 1.1 (1977), KIM-1 BASIC since version 1.1a (1977), and MicroTAN BASIC since version 2b (1980).[8] Not long afterwards, the Z80 ports, such as Level II BASIC for the TRS-80 (1978), introduced the 64-bit, double-precision format as a separate data type from 32-bit, single-precision.[19][20][21] Microsoft used the same floating-point formats in their implementation of Fortran[22] and for their macro assembler MASM,[23] although their spreadsheet Multiplan[24][25] and their COBOL implementation used binary-coded decimal (BCD) floating point.[26] Even so, for a while MBF became the de facto floating-point format on home computers, to the point where people still occasionally encounter legacy files and file formats using it.[27][28][29][30][31][32] In a parallel development, Intel had started the development of a floating-point coprocessor in 1976.[33][34] William Morton Kahan, as a consultant to Intel, suggested that Intel use the floating point of Digital Equipment Corporation's (DEC) VAX. The first VAX, the VAX-11/780, had just come out in late 1977, and its floating point was highly regarded.
VAX's floating-point formats differed from MBF only in that the sign was in the most significant bit.[35][36] However, since Intel sought to market their chip to the broadest possible market, Kahan was asked to draw up specifications.[33] When rumours of Intel's new chip reached its competitors, they started a standardization effort, called IEEE 754, to prevent Intel from gaining too much ground. As an 8-bit exponent was not wide enough for some operations desired for double-precision numbers, e.g. to store the product of two 32-bit numbers,[1] Intel's proposal and a counter-proposal from DEC used 11 bits, like the time-tested 60-bit floating-point format of the CDC 6600 from 1965.[34][37][38] Kahan's proposal also provided for infinities, which are useful when dealing with division-by-zero conditions; not-a-number values, which are useful when dealing with invalid operations; denormal numbers, which help mitigate problems caused by underflow;[37][39][40] and a better balanced exponent bias, which could help avoid overflow and underflow when taking the reciprocal of a number.[41][42] By the time QuickBASIC 4.00 was released,[when?] the IEEE 754 standard had become widely adopted: for example, it was incorporated into Intel's 387 coprocessor and every x86 processor from the 486 on. QuickBASIC versions 4.0 and 4.5 use IEEE 754 floating-point variables by default, but (at least in version 4.5) there is a command-line option /MBF for the IDE and the compiler that switches from IEEE to MBF floating-point numbers, to support earlier-written programs that rely on details of the MBF data formats. Visual Basic also uses the IEEE 754 format instead of MBF. MBF numbers consist of an 8-bit base-2 exponent, a sign bit (positive mantissa: s = 0; negative mantissa: s = 1) and a 23-,[43][8] 31-[8] or 55-bit[43] mantissa of the significand. There is always a 1-bit implied to the left of the explicit mantissa, and the radix point is located before this assumed bit.
The exponent is encoded with a bias of 128,[citation needed] so that exponents −127…−1[citation needed] are represented by x = 1…127 (01h…7Fh),[citation needed] and exponents 0…127[citation needed] are represented by x = 128…255 (80h…FFh),[citation needed] with a special case of x = 0 (00h) representing the whole number being zero. The MBF double-precision format provides less scale than the IEEE 754 format, and although the format itself provides almost one extra decimal digit of precision, in practice the stored values are less accurate because IEEE calculations use 80-bit intermediate results, and MBF doesn't.[1][3][43][44] Unlike IEEE floating point, MBF doesn't support denormal numbers, infinities or NaNs.[45] MBF single-precision format (32 bits, "6-digit BASIC"):[43][8] MBF extended-precision format (40 bits, "9-digit BASIC"):[8] MBF double-precision format (64 bits):[43][1]
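As a sketch of the single-precision layout just described (bias-128 exponent, sign bit, 23 explicit mantissa bits, radix point before the implied leading 1, so the significand lies in [0.5, 1)), a decoder might look like the following. The in-memory byte order assumed here (mantissa bytes low-to-high, then the exponent byte) is the common MBF convention, but treat it as an assumption, and the function name is ours:

```python
def mbf32_to_float(b):
    """Decode 4 bytes of MBF single precision: b[0..2] is the mantissa
    (low byte first, sign in bit 7 of b[2]), b[3] is the biased exponent."""
    exp = b[3]
    if exp == 0:                    # exponent 0 means the whole number is zero
        return 0.0
    sign = -1.0 if b[2] & 0x80 else 1.0
    mant = ((b[2] & 0x7F) << 16) | (b[1] << 8) | b[0]
    # Implied 1 sits just after the radix point: significand = 0.5 + mant/2^24.
    return sign * (0.5 + mant / 2.0**24) * 2.0 ** (exp - 128)

print(mbf32_to_float(bytes([0x00, 0x00, 0x00, 0x81])))  # 1.0
```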
https://en.wikipedia.org/wiki/Microsoft_Binary_Format
In computing, minifloats are floating-point values represented with very few bits. This reduced precision makes them ill-suited for general-purpose numerical calculations, but they are useful for special purposes such as: Additionally, they are frequently encountered as a pedagogical tool in computer-science courses to demonstrate the properties and structures of floating-point arithmetic and IEEE 754 numbers. Depending on context, minifloat may mean any size less than 32 bits, any size less than or equal to 16 bits, or any size less than 16 bits. The term microfloat may mean any size less than or equal to 8 bits.[3] This page uses the notation (S.E.M) to describe a minifloat: Minifloats can be designed following the principles of the IEEE 754 standard. Almost all use the smallest exponent for subnormal and normal numbers. Many use the largest exponent for infinity and NaN, indicated by (special exponent) SE = 1. Some minifloats use this exponent value normally, in which case SE = 0. The exponent bias is B = 2^(E−1) − SE. This value ensures that all representable numbers have a representable reciprocal. The notation can be converted to a (B, P, L, U) format as (2, M + 1, SE − 2^(E−1) + 1, 2^(E−1) − 1). The Radeon R300 and R420 GPUs used an "fp24" floating-point format (1.7.16).[4] "Full Precision" in Direct3D 9.0 is a proprietary 24-bit floating-point format. Microsoft's D3D9 (Shader Model 2.0) graphics API initially supported both FP24 (as in ATI's R300 chip) and FP32 (as in Nvidia's NV30 chip) as "Full Precision", as well as FP16 as "Partial Precision" for vertex and pixel shader calculations performed by the graphics hardware. Minifloats are also commonly used in embedded devices such as microcontrollers where floating point needs to be emulated in software.
To speed up the computation, the mantissa typically occupies exactly half of the bits, so the register boundary automatically addresses the parts without shifting (i.e. (1.3.4) on 4-bit devices).[citation needed] The bfloat16 (1.8.7) format is the first 16 bits of a single-precision number and was often used in image processing and machine learning before hardware support was added for other formats. The IEEE 754-2008 revision has 16-bit (1.5.10) floats called "half precision" (as opposed to 32-bit single and 64-bit double precision). In 2016 Khronos defined 10-bit (0.5.5) and 11-bit (0.5.6) unsigned formats for use with Vulkan.[5][6] These can be converted from positive half-precision values by truncating the sign and trailing digits. In 2022 Nvidia and others announced support for an "fp8" format (1.5.2).[2] These can be converted from half precision by truncating the trailing digits. Since 2023, IEEE SA Working Group P3109 has been working on a standard for 8-bit minifloats optimized for machine learning. The current draft defines not one format, but a family of 7 different formats, named "binary8pP", where "P" is a number from 1 to 7 and the bit pattern is (1.8−P.P−1). These also have SE = 0 and use the largest value as Infinity and the bit pattern for negative zero as NaN.[7][2] Also since 2023, 4-bit (1.2.1) floating-point numbers, without the four special IEEE values, have found use in accelerating large language models.[8][9] A minifloat in 1 byte (8 bits) with 1 sign bit, 4 exponent bits and 3 significand bits (1.4.3) is demonstrated here. The exponent bias is defined as 7, to center the values around 1 to match other IEEE 754 floats,[10][11] so (for most values) the actual multiplier for exponent x is 2^(x−7). All IEEE 754 principles should be valid.[12] This form is quite common for instruction.[citation needed] Zero is represented as a zero exponent with a zero mantissa. The zero exponent means zero is a subnormal number with a leading "0."
prefix, and with the zero mantissa all bits after the radix point are zero, meaning this value is interpreted as 0.000₂ × 2^−6 = 0. Floating-point numbers use a signed zero, so −0 is also available and is equal to positive 0. For the lowest exponent, the significand is extended with "0." and the exponent value is treated as 1 higher, like the least normalized number: For all other exponents, the significand is extended with "1.": Infinity values have the highest exponent, with the mantissa set to zero. The sign bit can be either positive or negative. NaN values have the highest exponent, with a non-zero mantissa. This is a chart of all possible values for this example 8-bit float: There are only 242 different non-NaN values (if +0 and −0 are regarded as different), because 14 of the bit patterns represent NaNs. At these small sizes other bias values may be interesting; for instance, a bias of −2 will make the numbers 0–16 have the same bit representation as the integers 0–16, with the loss that no non-integer values can be represented. Any bit allocation is possible. A format could choose to give more of the bits to the exponent if it needs more dynamic range with less precision, or give more of the bits to the significand if it needs more precision with less dynamic range. At the extreme, it is possible to allocate all bits to the exponent (1.7.0), or all but one of the bits to the significand (1.1.6), leaving the exponent with only one bit. The exponent must be given at least one bit, or else it no longer makes sense as a float; it just becomes a signed number. Here is a chart of all possible values for (1.3.4). M ≥ 2^(E−1) ensures that the precision remains at least 0.5 throughout the entire range.[13] Tables like the above can be generated for any combination of SEMB (sign, exponent, mantissa/significand, and bias) values using a script in Python or in GDScript.
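The (1.4.3) example above (bias 7, subnormals at exponent 0, infinities and NaNs at the top exponent) can be decoded with a short script of the kind the text mentions; this is our own sketch, not the referenced one:

```python
import math

def decode_143(b):
    """Decode an 8-bit (1.4.3) minifloat with exponent bias 7."""
    sign = -1.0 if b & 0x80 else 1.0
    e = (b >> 3) & 0xF              # 4 exponent bits
    m = b & 0x7                     # 3 mantissa bits
    if e == 0xF:                    # top exponent: infinity (m == 0) or NaN
        return sign * math.inf if m == 0 else math.nan
    if e == 0:                      # subnormal: significand "0.mmm", exponent as if e = 1
        return sign * (m / 8.0) * 2.0 ** -6
    return sign * (1.0 + m / 8.0) * 2.0 ** (e - 7)  # normal: significand "1.mmm"

# 242 distinct non-NaN values: 256 bit patterns minus 14 NaNs (+0 and -0 both counted).
non_nan = [decode_143(b) for b in range(256) if not math.isnan(decode_143(b))]
print(len(non_nan))                 # 242
print(decode_143(0x38))             # 1.0 (e = 7, m = 0)
print(decode_143(0x77))             # 240.0, the largest finite value
```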
With only 64 values, it is possible to plot all the values in a diagram, which can be instructive. These graphics demonstrate the arithmetic of two 6-bit (1.3.2) minifloats, following the rules of IEEE 754 exactly. Green X's are NaN results, cyan X's are +Infinity results, and magenta X's are −Infinity results. The range of the finite results is filled with curves joining equal values, blue for positive and red for negative. The smallest possible float size that follows all IEEE principles, including normalized numbers, subnormal numbers, signed zero, signed infinity, and multiple NaN values, is a 4-bit float with a 1-bit sign, 2-bit exponent, and 1-bit mantissa.[14] If normalized numbers are not required, the size can be reduced to 3 bits by reducing the exponent to 1 bit. In situations where the sign bit can be excluded, each of the above examples can be reduced by 1 bit further, keeping only the first row of the above tables. A 2-bit float with a 1-bit exponent and 1-bit mantissa would only have the values 0, 1, Inf and NaN. Removing the mantissa would allow only two values: 0 and Inf. Removing the exponent does not work: the above formulae produce 0 and sqrt(2)/2. The exponent must be at least 1 bit or else it no longer makes sense as a float (it would just be a signed number).
https://en.wikipedia.org/wiki/Minifloat
The Q notation is a way to specify the parameters of a binary fixed-point number format: specifically, how many bits are allocated for the integer portion, how many for the fractional portion, and whether there is a sign bit. For example, in Q notation, Q7.8 means that the signed fixed-point numbers in this format have 7 bits for the integer part and 8 bits for the fraction part. One extra bit is implicitly added for signed numbers.[1] Therefore, Q7.8 is a 16-bit word, with the most significant bit representing the two's complement sign bit. There is an ARM variation of the Q notation which explicitly adds the sign bit to the integer part, so the above example would be called Q8.8. A number of other notations have been used for the same purpose. The Q notation, as defined by Texas Instruments,[1] consists of the letter Q followed by a pair of numbers m.n, where m is the number of bits used for the integer part of the value, and n is the number of fraction bits. By default, the notation describes signed binary fixed-point format, with the unscaled integer being stored in two's complement format, as used in most binary processors. The first bit always gives the sign of the value (1 = negative, 0 = non-negative), and it is not counted in the m parameter. Thus, the total number w of bits used is 1 + m + n. For example, the specification Q3.12 describes a signed binary fixed-point number with w = 16 bits in total, comprising the sign bit, three bits for the integer part, and 12 bits that are the fraction. That is, a 16-bit signed (two's complement) integer that is implicitly multiplied by the scaling factor 2^−12. In particular, when n is zero, the numbers are just integers. If m is zero, all bits except the sign bit are fraction bits; then the range of the stored number is from −1.0 (inclusive) to +1.0 (exclusive). The m and the dot may be omitted, in which case they are inferred from the size of the variable or register where the value is stored.
Thus, Q12 means a signed integer with any number of bits, that is implicitly multiplied by 2^−12. The letter U can be prefixed to the Q to denote an unsigned binary fixed-point format. For example, UQ1.15 describes values represented as unsigned 16-bit integers with an implicit scaling factor of 2^−15, which range from 0.0 to (2^16 − 1)/2^15 = +1.999969482421875. A variant of the Q notation has been in use by ARM. In this variant, the m number includes the sign bit. For example, a 16-bit signed integer, which the TI variant denotes as Q15.0, would be Q16.0 in the ARM variant.[2][3] Unsigned numbers are the same across both variants. While technically the sign bit belongs just as much to the fractional part as to the integer part, ARM's notation has the benefit that there are no implicit bits, so the size of the word is always m + n bits. Additionally, the integer resolution is always 2^m. For example, UQ8.0 and Q8.0 can both hold 2^8 different values. The resolution (difference between successive values) of a Qm.n or UQm.n format is always 2^−n. The range of representable values depends on the notation used. For example, a Q15.1 format number requires 15 + 1 = 16 bits, has resolution 2^−1 = 0.5, and the representable values range from −2^14 = −16384.0 to +2^14 − 2^−1 = +16383.5. In hexadecimal, the negative values range from 0x8000 to 0xFFFF, followed by the non-negative ones from 0x0000 to 0x7FFF. Q numbers are a ratio of two integers: the numerator is kept in storage, and the denominator d is equal to 2^n. Consider the following example: If the Q number's base is to be maintained (n remains constant), the Q number math operations must keep the denominator d constant. The following formulas show math operations on the general Q numbers N1 and N2. (If we consider the example as mentioned above, N1 is 384 and d is 256.)
{\displaystyle {\begin{aligned}{\frac {N_{1}}{d}}+{\frac {N_{2}}{d}}&={\frac {N_{1}+N_{2}}{d}}\\{\frac {N_{1}}{d}}-{\frac {N_{2}}{d}}&={\frac {N_{1}-N_{2}}{d}}\\\left({\frac {N_{1}}{d}}\times {\frac {N_{2}}{d}}\right)\times d&={\frac {N_{1}\times N_{2}}{d}}\\\left({\frac {N_{1}}{d}}/{\frac {N_{2}}{d}}\right)/d&={\frac {N_{1}/N_{2}}{d}}\end{aligned}}} Because the denominator is a power of two, the multiplication can be implemented as an arithmetic shift to the left and the division as an arithmetic shift to the right; on many processors shifts are faster than multiplication and division. To maintain accuracy, the intermediate multiplication and division results must be double precision, and care must be taken in rounding the intermediate result before converting back to the desired Q number. Using C, the operations are (note that here, Q refers to the fractional part's number of bits): With saturation: Unlike floating-point ±Inf, saturated results are not sticky and will unsaturate on adding a negative value to a positive saturated value (0x7FFF) and vice versa in the implementation shown. In assembly language, the Signed Overflow flag can be used to avoid the typecasts needed for that C implementation.
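The shift-based arithmetic described above can be illustrated in Python for a hypothetical Q16.16 format (our own sketch; adding half an ulp or half the divisor before shifting, to round the intermediate result to nearest, is one common convention, not the only one):

```python
Q = 16              # number of fraction bits (Q16.16, chosen for illustration)
ONE = 1 << Q        # the denominator d = 2^n

def to_q(x):        # real number -> fixed-point representation
    return int(round(x * ONE))

def from_q(n):      # fixed-point representation -> real number
    return n / ONE

def q_mul(a, b):
    # Double-width intermediate product; add half an ulp so the shift rounds to nearest.
    return (a * b + (ONE >> 1)) >> Q

def q_div(a, b):
    # Pre-shift the numerator; add half the divisor to round the quotient.
    return ((a << Q) + (b >> 1)) // b

x, y = to_q(1.5), to_q(2.25)
print(from_q(x + y))        # 3.75  (addition keeps the denominator)
print(from_q(q_mul(x, y)))  # 3.375
```

Note that Python's >> and // floor for negative operands (matching arithmetic-shift behavior); production code would pick a rounding convention for negative values deliberately, as the text's remarks on rounding and saturation suggest.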
https://en.wikipedia.org/wiki/Q_(number_format)
In computing, quadruple precision (or quad precision) is a binary floating-point–based computer number format that occupies 16 bytes (128 bits) with precision at least twice the 53-bit double precision. This 128-bit quadruple precision is designed not only for applications requiring results in higher than double precision,[1] but also, as a primary function, to allow the computation of double-precision results more reliably and accurately by minimising overflow and round-off errors in intermediate calculations and scratch variables. William Kahan, primary architect of the original IEEE 754 floating-point standard, noted: "For now the 10-byte Extended format is a tolerable compromise between the value of extra-precise arithmetic and the price of implementing it to run fast; very soon two more bytes of precision will become tolerable, and ultimately a 16-byte format ... That kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed."[2] In IEEE 754-2008 the 128-bit base-2 format is officially referred to as binary128. The IEEE 754 standard specifies a binary128 as having: The sign bit determines the sign of the number (including when this number is zero, which is signed). "1" stands for negative. This gives from 33 to 36 significant decimal digits of precision. If a decimal string with at most 33 significant digits is converted to the IEEE 754 quadruple-precision format, giving a normal number, and then converted back to a decimal string with the same number of digits, the final result should match the original string. If an IEEE 754 quadruple-precision number is converted to a decimal string with at least 36 significant digits, and then converted back to quadruple-precision representation, the final result must match the original number.[3] The format is written with an implicit lead bit with value 1 unless the exponent is stored with all zeros (used to encode subnormal numbers and zeros).
Thus only 112 bits of the significand appear in the memory format, but the total precision is 113 bits (approximately 34 decimal digits: log10(2^113) ≈ 34.016) for normal values; subnormals have gracefully degrading precision down to 1 bit for the smallest non-zero value. The bits are laid out as: The quadruple-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 16383; this is also known as the exponent bias in the IEEE 754 standard. Thus, as defined by the offset-binary representation, in order to get the true exponent, the offset of 16383 has to be subtracted from the stored exponent. The stored exponents 0000₁₆ and 7FFF₁₆ are interpreted specially. The minimum strictly positive (subnormal) value is 2^−16494 ≈ 10^−4965 and has a precision of only one bit. The minimum positive normal value is 2^−16382 ≈ 3.3621 × 10^−4932 and has a precision of 113 bits, i.e. ±2^−16494 as well. The maximum representable value is 2^16384 − 2^16271 ≈ 1.1897 × 10^4932. These examples are given in the bit representation, in hexadecimal, of the floating-point value. This includes the sign, (biased) exponent, and significand. By default, 1/3 rounds down as in double precision, because of the odd number of bits in the significand: the bits beyond the rounding point are 0101..., which is less than 1/2 of a unit in the last place. A common software technique to implement nearly quadruple precision using pairs of double-precision values is sometimes called double-double arithmetic.[4][5][6] Using pairs of IEEE double-precision values with 53-bit significands, double-double arithmetic provides operations on numbers with significands of at least[4] 2 × 53 = 106 bits (actually 107 bits,[7] except for some of the largest values, due to the limited exponent range), only slightly less precise than the 113-bit significand of IEEE binary128 quadruple precision.
The range of a double-double remains essentially the same as the double-precision format, because the exponent still has 11 bits,[4] significantly fewer than the 15-bit exponent of IEEE quadruple precision (a range of 1.8 × 10^308 for double-double versus 1.2 × 10^4932 for binary128). In particular, a double-double/quadruple-precision value q in the double-double technique is represented implicitly as a sum q = x + y of two double-precision values x and y, each of which supplies half of q's significand.[5] That is, the pair (x, y) is stored in place of q, and operations on q values (+, −, ×, ...) are transformed into equivalent (but more complicated) operations on the x and y values. Thus, arithmetic in this technique reduces to a sequence of double-precision operations; since double-precision arithmetic is commonly implemented in hardware, double-double arithmetic is typically substantially faster than more general arbitrary-precision arithmetic techniques.[4][5] Note that double-double arithmetic has the following special characteristics:[8] In addition to double-double arithmetic, it is also possible to generate triple-double or quad-double arithmetic if higher precision is required without any higher-precision floating-point library. These are represented as a sum of three (or four) double-precision values, respectively, and can represent operations with at least 159/161 and 212/215 bits, respectively. A similar technique can be used to produce a double-quad arithmetic, which is represented as a sum of two quadruple-precision values and can represent operations with at least 226 (or 227) bits.[9] Quadruple precision is often implemented in software by a variety of techniques (such as the double-double technique above, although that technique does not implement IEEE quadruple precision), since direct hardware support for quadruple precision is, as of 2016[update], less common (see "Hardware support" below).
One can use general arbitrary-precision arithmetic libraries to obtain quadruple (or higher) precision, but specialized quadruple-precision implementations may achieve higher performance. A separate question is the extent to which quadruple-precision types are directly incorporated into computer programming languages. Quadruple precision is specified in Fortran by real(real128) (the module iso_fortran_env from Fortran 2008 must be used; the constant real128 is equal to 16 on most processors), or as real(selected_real_kind(33, 4931)), or in a non-standard way as REAL*16. (Quadruple-precision REAL*16 is supported by the Intel Fortran Compiler[10] and by the GNU Fortran compiler[11] on x86, x86-64, and Itanium architectures, for example.) For the C programming language, ISO/IEC TS 18661-3 (floating-point extensions for C, interchange and extended types) specifies _Float128 as the type implementing the IEEE 754 quadruple-precision format (binary128).[12] Alternatively, in C/C++ with a few systems and compilers, quadruple precision may be specified by the long double type, but this is not required by the language (which only requires long double to be at least as precise as double), nor is it common. On x86 and x86-64, the most common C/C++ compilers implement long double as either 80-bit extended precision (e.g. the GNU C Compiler gcc[13] and the Intel C++ Compiler with a /Qlong-double switch[14]) or simply as being synonymous with double precision (e.g. Microsoft Visual C++[15]), rather than as quadruple precision. The procedure call standard for the ARM 64-bit architecture (AArch64) specifies that long double corresponds to the IEEE 754 quadruple-precision format.[16] On a few other architectures, some C/C++ compilers implement long double as quadruple precision, e.g. gcc on PowerPC (as double-double[17][18][19]) and SPARC,[20] or the Sun Studio compilers on SPARC.[21] Even if long double is not quadruple precision, however, some C/C++ compilers provide a nonstandard quadruple-precision type as an extension.
For example, gcc provides a quadruple-precision type called __float128 for x86, x86-64 and Itanium CPUs,[22] and on PowerPC as IEEE 128-bit floating point using the -mfloat128-hardware or -mfloat128 options;[23] and some versions of Intel's C/C++ compiler for x86 and x86-64 supply a nonstandard quadruple-precision type called _Quad.[24] Zig provides support for it with its f128 type.[25] Google's work-in-progress language Carbon provides support for it with the type called f128.[26] As of 2024, Rust is working on adding a new f128 type for IEEE quadruple-precision 128-bit floats.[27] IEEE quadruple precision was added to the IBM System/390 G5 in 1998,[32] and is supported in hardware in subsequent z/Architecture processors.[33][34] The IBM POWER9 CPU (Power ISA 3.0) has native 128-bit hardware support.[23] Native support of IEEE 128-bit floats is defined in PA-RISC 1.0,[35] and in the SPARC V8[36] and V9[37] architectures (e.g. there are 16 quad-precision registers %q0, %q4, ...), but no SPARC CPU implements quad-precision operations in hardware as of 2004.[38] Non-IEEE extended precision (128 bits of storage, 1 sign bit, 7 exponent bits, 112 fraction bits, 8 bits unused) was added to the IBM System/370 series (1970s–1980s) and was available on some System/360 models in the 1960s (System/360-85,[39] -195, and others by special request or simulated by OS software). The Siemens 7.700 and 7.500 series mainframes and their successors support the same floating-point formats and instructions as the IBM System/360 and System/370. The VAX processor implemented non-IEEE quadruple-precision floating point as its "H Floating-point" format. It had one sign bit, a 15-bit exponent and 112 fraction bits; however, the layout in memory was significantly different from IEEE quadruple precision, and the exponent bias also differed. Only a few of the earliest VAX processors implemented H Floating-point instructions in hardware; all the others emulated H Floating-point in software.
The NEC Vector Engine architecture supports adding, subtracting, multiplying and comparing 128-bit binary IEEE 754 quadruple-precision numbers.[40] Two neighboring 64-bit registers are used. Quadruple-precision arithmetic is not supported in the vector registers.[41] The RISC-V architecture specifies a "Q" (quad-precision) extension for 128-bit binary IEEE 754-2008 floating-point arithmetic.[42] The "L" extension (not yet certified) will specify 64-bit and 128-bit decimal floating point.[43] Quadruple-precision (128-bit) hardware implementation should not be confused with "128-bit FPUs" that implement SIMD instructions, such as Streaming SIMD Extensions or AltiVec, which refer to 128-bit vectors of four 32-bit single-precision or two 64-bit double-precision values that are operated on simultaneously.
https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format
Single-precision floating-point format (sometimes called FP32 or float32) is a computer number format, usually occupying 32 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. A floating-point variable can represent a wider range of numbers than a fixed-point variable of the same bit width, at the cost of precision. A signed 32-bit integer variable has a maximum value of 2^31 − 1 = 2,147,483,647, whereas an IEEE 754 32-bit base-2 floating-point variable has a maximum value of (2 − 2^−23) × 2^127 ≈ 3.4028235 × 10^38. All integers with seven or fewer decimal digits, and any 2^n for a whole number −149 ≤ n ≤ 127, can be converted exactly into an IEEE 754 single-precision floating-point value. In the IEEE 754 standard, the 32-bit base-2 format is officially referred to as binary32; it was called single in IEEE 754-1985. IEEE 754 specifies additional floating-point types, such as 64-bit base-2 double precision and, more recently, base-10 representations. One of the first programming languages to provide single- and double-precision floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model, and upon decisions made by programming-language designers. For example, GW-BASIC's single-precision data type was the 32-bit MBF floating-point format. Single precision is termed REAL in Fortran;[1] SINGLE-FLOAT in Common Lisp;[2] float in C, C++, C# and Java;[3] Float in Haskell[4] and Swift;[5] and Single in Object Pascal (Delphi), Visual Basic, and MATLAB. However, float in Python, Ruby, PHP, and OCaml, and single in versions of Octave before 3.2, refer to double-precision numbers. In most implementations of PostScript, and some embedded systems, the only supported precision is single. The IEEE 754 standard specifies binary32 as having: This gives from 6 to 9 significant decimal digits of precision.
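The limits quoted above can be reproduced with Python's struct module, which can round-trip values through the binary32 encoding (an illustrative sketch, not part of the article):

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

int_max = 2**31 - 1                 # 2,147,483,647 for a signed 32-bit integer
f32_max = (2 - 2**-23) * 2.0**127   # largest finite binary32 value
# Its bit pattern is 0x7F7FFFFF (shown here little-endian)
assert struct.pack('<f', f32_max) == bytes.fromhex('ffff7f7f')

# Integers with seven or fewer decimal digits convert exactly...
assert to_f32(9_999_999.0) == 9_999_999.0
# ...but an eight-digit integer may not survive the conversion
assert to_f32(123_456_789.0) != 123_456_789.0
```

Packing and unpacking with format code 'f' applies exactly the binary32 rounding the article describes.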
If a decimal string with at most 6 significant digits is converted to the IEEE 754 single-precision format, giving a normal number, and then converted back to a decimal string with the same number of digits, the final result should match the original string. If an IEEE 754 single-precision number is converted to a decimal string with at least 9 significant digits, and then converted back to single-precision representation, the final result must match the original number.[6] The sign bit determines the sign of the number, which is also the sign of the significand; "1" stands for negative. The exponent field is an 8-bit unsigned integer from 0 to 255, in biased form: a value of 127 represents the actual exponent zero. Exponents range from −126 to +127 (thus 1 to 254 in the exponent field), because the biased exponent values 0 (all 0s) and 255 (all 1s) are reserved for special numbers (subnormal numbers, signed zeros, infinities, and NaNs). The true significand of normal numbers includes 23 fraction bits to the right of the binary point and an implicit leading bit (to the left of the binary point) with value 1. Subnormal numbers and zeros (which are the floating-point numbers smaller in magnitude than the least positive normal number) are represented with the biased exponent value 0, giving the implicit leading bit the value 0. Thus only 23 fraction bits of the significand appear in the memory format, but the total precision is 24 bits (equivalent to log10(2^24) ≈ 7.225 decimal digits) for normal values; subnormals have gracefully degrading precision down to 1 bit for the smallest non-zero value.
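Both round-trip guarantees can be demonstrated with a small struct-based rounding helper (illustrative code, not from the article):

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# 6 significant decimal digits survive a decimal -> binary32 -> decimal round trip
s = "3.14159"
assert f"{to_f32(float(s)):.6g}" == s

# 9 significant decimal digits suffice to recover a binary32 value exactly
x = to_f32(0.1)                       # 0.1 rounded to binary32
assert to_f32(float(f"{x:.9g}")) == x
```

The 6-digit direction relies on the value being normal, matching the condition stated above.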
The bits are laid out as follows: The real value assumed by a given 32-bit binary32 datum with a given sign, biased exponent E (the 8-bit unsigned integer), and a 23-bit fraction is (−1)^sign × (1.b₂₂b₂₁...b₀)₂ × 2^(E−127) for normal values. The single-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 127; this is also known as the exponent bias in the IEEE 754 standard. Thus, in order to get the true exponent as defined by the offset-binary representation, the offset of 127 has to be subtracted from the stored exponent. The stored exponents 00₁₆ and FF₁₆ are interpreted specially. The minimum positive normal value is 2^−126 ≈ 1.18 × 10^−38 and the minimum positive (subnormal) value is 2^−149 ≈ 1.4 × 10^−45. In general, refer to the IEEE 754 standard itself for the strict conversion (including the rounding behaviour) of a real number into its equivalent binary32 format. Here we show how to convert a base-10 real number into an IEEE 754 binary32 format using the following outline: Conversion of the fractional part: consider 0.375, the fractional part of 12.375. To convert it into a binary fraction, multiply the fraction by 2, take the integer part, and repeat with the new fraction until a fraction of zero is found or until the precision limit is reached, which is 23 fraction digits for the IEEE 754 binary32 format. We see that (0.375)₁₀ can be represented exactly in binary as (0.011)₂. Not all decimal fractions can be represented as a finite-digit binary fraction; for example, decimal 0.1 cannot be represented exactly in binary, only approximated.
Therefore, since the IEEE 754 binary32 format requires real values to be represented in (1.x₁x₂...x₂₃)₂ × 2^e format (see normalized number, denormalized number), 1100.011 is shifted to the right by 3 digits to become (1.100011)₂ × 2^3. Finally, we can see that (12.375)₁₀ = (1.100011)₂ × 2^3, from which we can deduce the sign, exponent and fraction fields and form the resulting 32-bit IEEE 754 binary32 representation of 12.375. Note: consider converting 68.123 into the IEEE 754 binary32 format. Using the above procedure you would expect to get 42883EF9₁₆, with the last 4 bits being 1001. However, due to the default rounding behaviour of the IEEE 754 format, what you actually get is 42883EFA₁₆, whose last 4 bits are 1010. Example 1: consider decimal 1. We can see that (1)₁₀ = (1.0)₂ × 2^0, from which we can form the resulting 32-bit IEEE 754 binary32 representation of the real number 1. Example 2: consider the value 0.25. We can see that (0.25)₁₀ = (1.0)₂ × 2^−2, from which we can form the resulting 32-bit IEEE 754 binary32 representation of the real number 0.25. Example 3: consider the value 0.375. We saw that 0.375 = (0.011)₂ = (1.1)₂ × 2^−2; after determining this representation we can proceed as above and form the resulting 32-bit IEEE 754 binary32 representation of the real number 0.375. If the binary32 value, 41C80000 in this example, is in hexadecimal, we first convert it to binary, and then break it down into three parts: sign bit, exponent, and significand.
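The worked examples above can be verified directly (a sketch using Python's struct module; not part of the original article):

```python
import struct

def f32_bits(x):
    """Bit pattern of x after rounding to binary32, as an unsigned 32-bit integer."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

bits = f32_bits(12.375)
sign     = bits >> 31
exponent = (bits >> 23) & 0xFF
fraction = bits & 0x7FFFFF
assert (sign, exponent - 127) == (0, 3)   # 12.375 = +1.100011 x 2^3
assert bits == 0x41460000

# 68.123 rounds to ...FA rather than the "expected" ...F9
assert f32_bits(68.123) == 0x42883EFA
```

The same helper confirms the three simple examples: 1, 0.25 and 0.375 encode as 0x3F800000, 0x3E800000 and 0x3EC00000 respectively.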
We then add the implicit 24th bit to the significand, and decode the exponent value by subtracting 127. Each of the 24 bits of the significand (including the implicit 24th bit), bit 23 to bit 0, represents a value starting at 1 and halving for each bit, as follows. The significand in this example has three bits set: bit 23, bit 22, and bit 19. We can now decode the significand by adding the values represented by these bits, and then multiply by the base, 2, raised to the power of the exponent, to get the final result. This is equivalent to (−1)^s × (1 + m/2^23) × 2^(x−127), where s is the sign bit, x is the biased exponent, and m is the 23-bit fraction of the significand taken as an integer. These examples are given in bit representation, in hexadecimal and binary, of the floating-point value; this includes the sign, (biased) exponent, and significand. By default, 1/3 rounds up, instead of down as in double precision, because of the even number of bits in the significand: the bits of 1/3 beyond the rounding point are 1010..., which is more than 1/2 of a unit in the last place. Encodings of qNaN and sNaN are not specified in IEEE 754 and are implemented differently on different processors. The x86 family and the ARM family processors use the most significant bit of the significand field to indicate a quiet NaN. The PA-RISC processors use the bit to indicate a signaling NaN. The design of the floating-point format allows various optimisations, resulting from the easy generation of a base-2 logarithm approximation from an integer view of the raw bit pattern. Integer arithmetic and bit-shifting can yield an approximation to the reciprocal square root (fast inverse square root), commonly required in computer graphics.
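The decoding procedure just described, applied to 41C80000₁₆, can be written out as a short illustrative sketch:

```python
import struct

bits = 0x41C80000
sign     = bits >> 31
exponent = (bits >> 23) & 0xFF
fraction = bits & 0x7FFFFF

# (-1)^s x (1 + m/2^23) x 2^(E-127) for normal values
value = (-1)**sign * (1 + fraction / 2**23) * 2.0**(exponent - 127)
assert exponent - 127 == 4        # true exponent after removing the bias
assert value == 25.0

# Cross-check against the native decoding of the same bit pattern
assert struct.unpack('<f', struct.pack('<I', bits))[0] == 25.0
```

The significand 1 + 0x480000/2^23 = 1.5625 times 2^4 = 16 gives exactly 25.0, matching the bit-by-bit decoding above.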
https://en.wikipedia.org/wiki/Single-precision_floating-point_format
Standard Apple Numerics Environment (SANE) was Apple Computer's software implementation of IEEE 754 floating-point arithmetic. It was available for the 6502-based Apple II and Apple III models and came standard with the 65816-based Apple IIGS and 680x0-based Macintosh and Lisa models. Later Macintosh models had hardware floating-point arithmetic via 68040 microprocessors or 68881 floating-point coprocessors, but still included SANE for compatibility with existing software. SANE was replaced by the Floating Point C Extensions, giving access to hardware floating-point arithmetic, in the early 1990s as Apple switched from 680x0 to PowerPC microprocessors.
https://en.wikipedia.org/wiki/Standard_Apple_Numerics_Environment
In computer programming, an operator is a programming language construct that provides functionality that may not be possible to define as a user-defined function (e.g. sizeof in C) or that has syntax different from a function (e.g. infix addition as in a + b). Like other programming language concepts, operator has a generally accepted, though debatable, meaning among practitioners, while at the same time each language gives it a specific meaning in that context, and therefore the meaning varies by language. Some operators are represented with symbols (characters typically not allowed in a function identifier) to allow for presentation that is more familiar looking than typical function syntax. For example, a function that tests for greater-than could be named gt, but many languages provide an infix symbolic operator so that code looks more familiar. For example, this: if gt(x, y) then return can be written as: if x > y then return. Some languages allow a language-defined operator to be overridden with user-defined behavior, and some allow for user-defined operator symbols. Operators may also differ semantically from functions. For example, short-circuit Boolean operations evaluate later arguments only if earlier ones are not false. Many operators differ syntactically from user-defined functions. In most languages, a function call uses prefix notation with a fixed precedence level and associativity, and often with compulsory parentheses (e.g. Func(a), or (Func a) in Lisp). In contrast, many operators use infix notation and involve different use of delimiters such as parentheses. In general, an operator may be prefix, infix, postfix, matchfix, circumfix or bifix,[1][2][3][4][5] and the syntax of an expression involving an operator depends on its arity (number of operands), precedence, and (if applicable) associativity. Most programming languages support binary operators and a few unary operators, with a few supporting more operands, such as the ?: operator in C, which is ternary.
There are prefix unary operators, such as unary minus -x, and postfix unary operators, such as post-increment x++; binary operations are infix, such as x + y or x = y. Infix operations of higher arity require additional symbols, such as the ternary operator ?: in C, written as a ? b : c; indeed, since this is the only common example, it is often referred to as the ternary operator. Prefix and postfix operations can support any desired arity, however, such as 1 2 3 4 +. The semantics of an operator may differ significantly from that of a normal function. For reference, addition is evaluated like a normal function: x + y can be equivalent to a function add(x, y) in that the arguments are evaluated and then the functional behavior is applied. However, assignment is different: given a = b, the target a is not evaluated; instead its value is replaced with the value of b. The scope resolution and element access operators (as in Foo::Bar and a.b, respectively, in the case of e.g. C++) operate on identifier names, not values. In C, for instance, the array indexing operator can be used for both read access and assignment. In the following example, the increment operator reads the element value of an array and then assigns the element value. The C++ << operator allows for fluent syntax by supporting a sequence of operators that affect a single argument. Some languages provide operators that are ad hoc polymorphic, i.e. inherently overloaded. For example, in Java the + operator sums numbers or concatenates strings. Some languages support user-defined overloading (such as C++ and Fortran): an operator defined by the language can be overloaded to behave differently based on the type of input. Some languages (e.g. C, C++ and PHP) define a fixed set of operators, while others (e.g. Prolog,[6] Seed7,[7] F#, OCaml, Haskell) allow for user-defined operators.
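User-defined overloading as described above can be illustrated in Python, where special methods such as __add__ and __gt__ let a class define its own infix operators (the Vec class here is purely illustrative):

```python
class Vec:
    """A 2-D vector with overloaded + and > operators."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # Invoked for v + w: component-wise addition
        return Vec(self.x + other.x, self.y + other.y)

    def __gt__(self, other):
        # Invoked for v > w: compare squared magnitudes
        return self.x**2 + self.y**2 > other.x**2 + other.y**2

v = Vec(1, 2) + Vec(3, 4)   # infix syntax, but behaves like add(v, w)
```

The infix expression is resolved by the language to a method call, illustrating how operator syntax and function semantics coexist.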
Some programming languages restrict operator symbols to special characters like + or := while others also allow names like div (e.g. Pascal), and even arbitrary names (e.g. Fortran, where an operator name of up to 31 characters is enclosed between dots[8]). Most languages do not support user-defined operators, since the feature significantly complicates parsing. Introducing a new operator changes the lexical specification of the language (arity and precedence), which affects phrase-level lexical analysis. Custom operators, particularly via runtime definition, often make correct static analysis of a program impossible, since the syntax of the language may then be Turing-complete, so even constructing the syntax tree may require solving the halting problem, which is impossible. This occurs for Perl, for example, and some dialects of Lisp. If a language does allow defining new operators, the mechanics of doing so may involve metaprogramming: specifying the operator in a separate language. Some languages implicitly convert (i.e. coerce) operands to be compatible with each other. For example, Perl's coercion rules cause 12 + "3.14" to evaluate to 15.14: the string literal "3.14" is converted to the numeric value 3.14 before addition is applied. Further, 3.14 is treated as floating point, so the result is floating point even though 12 is an integer literal. JavaScript follows different rules, so that the same expression evaluates to "123.14", since 12 is converted to a string which is then concatenated with the second operand. In general, a programmer must be aware of the specific rules regarding operand coercion in order to avoid unexpected and incorrect behavior. The original article ends here with a table of operator features in several programming languages, of which only a list of non-ASCII operator symbols survives: ¬ × ⊥ ↑ ↓ ⌊ ⌈ ÷ ≤ ≥ ≠ ∧ ∨.
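The Perl and JavaScript coercion rules contrasted above can be imitated with two small helper functions (hypothetical names; a sketch of the rules, not an implementation of either language):

```python
import re

def perl_like_add(a, b):
    """Coerce string operands to their leading numeric prefix, then add numerically."""
    def to_num(v):
        if isinstance(v, str):
            m = re.match(r'\s*[-+]?(\d+(\.\d*)?|\.\d+)', v)
            return float(m.group()) if m else 0.0
        return v
    return to_num(a) + to_num(b)

def js_like_add(a, b):
    """If either operand is a string, concatenate; otherwise add numerically."""
    if isinstance(a, str) or isinstance(b, str):
        return str(a) + str(b)
    return a + b
```

With the article's example, perl_like_add(12, "3.14") yields a number near 15.14, while js_like_add(12, "3.14") yields the string "123.14"; the same operands, two different coercion rules.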
https://en.wikipedia.org/wiki/Compound_operator_(computing)
The Dadda multiplier is a hardware binary multiplier design invented by computer scientist Luigi Dadda in 1965.[1] It uses a selection of full and half adders to sum the partial products in stages (the Dadda tree or Dadda reduction) until two numbers are left. The design is similar to the Wallace multiplier, but the different reduction tree reduces the required number of gates (for all but the smallest operand sizes) and makes it slightly faster (for all operand sizes).[2] Both Dadda and Wallace multipliers have the same three steps for two bit strings w₁ and w₂ of lengths ℓ₁ and ℓ₂ respectively: As with the Wallace multiplier, the multiplication products of the first step carry different weights reflecting the magnitude of the original bit values in the multiplication. For example, the product of bits aₙbₘ has weight n + m. Unlike Wallace multipliers, which reduce as much as possible on each layer, Dadda multipliers attempt to minimize the number of gates used, as well as the input/output delay. Because of this, Dadda multipliers have a less expensive reduction phase, but the final numbers may be a few bits longer, thus requiring slightly larger adders. To achieve a more optimal final product, the structure of the reduction process is governed by slightly more complex rules than in Wallace multipliers. The progression of the reduction is controlled by a maximum-height sequence d_j, defined by d₁ = 2 and d_{j+1} = floor(3·d_j / 2). This yields the sequence 2, 3, 4, 6, 9, 13, 19, 28, ... The initial value of j is chosen as the largest value such that d_j < min(n₁, n₂), where n₁ and n₂ are the number of bits in the input multiplicand and multiplier. The lesser of the two bit lengths will be the maximum height of each column of weights after the first stage of multiplication.
For each stage j of the reduction, the goal of the algorithm is to reduce the height of each column so that it is less than or equal to the value of d_j. For each stage from j, ..., 1, reduce each column starting at the lowest-weight column, c₀, according to these rules: The example in the adjacent image illustrates the reduction of an 8 × 8 multiplier, explained here. The initial stage j = 4 is chosen since d₄ = 6 is the largest value less than 8. Stage j = 4, d₄ = 6. Stage j = 3, d₃ = 4. Stage j = 2, d₂ = 3. Stage j = 1, d₁ = 2. Addition: the output of the last stage leaves 15 columns of height two or less, which can be passed into a standard adder.
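The maximum-height sequence and the choice of the initial stage can be computed in a few lines (an illustrative sketch; the function names are not from the article):

```python
def dadda_heights(limit):
    """Maximum-height sequence d_1 = 2, d_(j+1) = floor(3/2 * d_j),
    generated until it reaches or exceeds limit."""
    d = [2]
    while d[-1] < limit:
        d.append(d[-1] * 3 // 2)
    return d

def initial_stage(n1, n2):
    """Largest j (1-indexed) with d_j < min(n1, n2)."""
    bound = min(n1, n2)
    d = dadda_heights(bound)
    return max(j for j, h in enumerate(d, start=1) if h < bound)

# For the 8 x 8 example: heights 2, 3, 4, 6, 9, ... and initial stage j = 4 (d_4 = 6)
```

This reproduces the stage heights used in the 8 × 8 walkthrough above: stages 4, 3, 2, 1 reduce the columns to heights 6, 4, 3 and finally 2.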
https://en.wikipedia.org/wiki/Dadda_tree
In idempotent analysis, the tropical semiring is a semiring of extended real numbers with the operations of minimum (or maximum) and addition replacing the usual ("classical") operations of addition and multiplication, respectively. The tropical semiring has various applications (see tropical analysis), and forms the basis of tropical geometry. The name tropical is a reference to the Hungarian-born computer scientist Imre Simon, so named because he lived and worked in Brazil.[1] The min tropical semiring (or min-plus semiring, or min-plus algebra) is the semiring (ℝ ∪ {+∞}, ⊕, ⊗), with the operations x ⊕ y = min(x, y) and x ⊗ y = x + y. The operations ⊕ and ⊗ are referred to as tropical addition and tropical multiplication respectively. The identity element for ⊕ is +∞, and the identity element for ⊗ is 0. Similarly, the max tropical semiring (or max-plus semiring, or max-plus algebra, or Arctic semiring) is the semiring (ℝ ∪ {−∞}, ⊕, ⊗), with the operations x ⊕ y = max(x, y) and x ⊗ y = x + y. The identity element for ⊕ is −∞, and the identity element for ⊗ is 0. The two semirings are isomorphic under negation x ↦ −x, and generally one of them is chosen and referred to simply as the tropical semiring. Conventions differ between authors and subfields: some use the min convention, some use the max convention. The two tropical semirings are the limit ("tropicalization", "dequantization") of the log semiring as the base goes to infinity, b → ∞ (max-plus semiring), or to zero, b → 0 (min-plus semiring). Tropical addition is idempotent, thus a tropical semiring is an example of an idempotent semiring.
A tropical semiring is also referred to as a tropical algebra,[2] though this should not be confused with an associative algebra over a tropical semiring. Tropical exponentiation is defined in the usual way as iterated tropical products. The tropical semiring operations model how valuations behave under addition and multiplication in a valued field. A real-valued field K is a field equipped with a function v : K → ℝ ∪ {+∞} which satisfies the following properties for all a, b in K: v(a) = +∞ if and only if a = 0; v(ab) = v(a) + v(b); and v(a + b) ≥ min{v(a), v(b)}. Therefore the valuation v is almost a semiring homomorphism from K to the tropical semiring, except that the homomorphism property can fail when two elements with the same valuation are added together. Some common valued fields:
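Min-plus arithmetic is easy to experiment with; for instance, matrix multiplication over the min-plus semiring computes shortest-path distances (an illustrative sketch, not from the article):

```python
INF = float('inf')   # identity element for tropical addition (min)

def trop_matmul(A, B):
    """Matrix product over the min-plus semiring:
    entry (i, j) is min over k of A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Edge-weight matrix of a 3-node graph (0 on the diagonal, INF where no edge)
W = [[0,   1,   INF],
     [INF, 0,   2],
     [INF, INF, 0]]
W2 = trop_matmul(W, W)   # shortest paths using at most two edges
```

Replacing (+, ×) by (min, +) turns ordinary matrix powers into the all-pairs shortest-path recurrence, a standard application of the min-plus semiring.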
https://en.wikipedia.org/wiki/Tropical_arithmetic
A slide calculator, also known as an Addiator after the best-known brand, is a mechanical calculator capable of addition and subtraction, once made by Addiator Gesellschaft of Berlin, Germany. Variants of it were manufactured from 1920 until 1982. The devices were made obsolete by the electronic calculator. The Addiator is composed of sheet-metal sliders inside a metal envelope, manipulated by a stylus, with an innovative carry mechanism that performs "subtract ten, carry one" with a simple stylus movement. Some types of Addiators can also handle negative numbers (with a complementary bottom window, or by providing a subtraction mode on the back side of the device). The Addiator also handles non-decimal measurements, like feet and inches, or pre-decimalization pounds, shillings, and pence. Addition and subtraction require different "screens", handled by turning the instrument over, or flipping a front panel, or, later, by extended sliders and an extra lower panel. Although not always advertised (e.g., the Magic Brain Calculator mentions "add, subtract, multiply" on its front plate), procedures exist for multiplication (by repeated addition or by individual digit multiplications) and division (e.g., by repeated subtraction, or use of additions combined with complementary numbers). More expensive versions have a built-in slide rule on the back. Sometime between 1666 and 1675, the French polymath Claude Perrault invented the first slide calculator, called the Abaque rhabdologique (a rabdological abacus), when he needed to do many calculations while working as an architect. About three decades later, around 1700 or shortly after, the French businessman and amateur mathematician César Caze simplified Perrault's design and adapted it to counting money, obtaining a privilege (patent) in 1711. However, neither of these devices implemented a carry mechanism. In 1845, the German musician and amateur mechanic Heinrich Kummer, who was living in St.
Petersburg, saw a mechanical calculator of a different design made by Hayyim Selig Slonimski, and in the next year borrowed the idea of its carry mechanism to greatly improve Caze's device, leading to the modern variant of the slide calculator. In 1889, Louis-Joseph Troncet successfully commercialized the Addiator,[1] which became one of the most popular calculators of its kind, and the name is often used to refer to the type generally.[2] Addiators appeared in newspaper advertisements as early as 1921, listed at a price of £4 (equivalent to £224.15 in 2023) in the Daily Record of Scotland.[3] As of 1968, Addiators were advertised in American newspapers starting at $3.98 each ($36.00 in 2024).[4]
https://en.wikipedia.org/wiki/Slide_calculator
The pascaline (also known as the arithmetic machine or Pascal's calculator) is a mechanical calculator invented by Blaise Pascal in 1642. Pascal was led to develop a calculator by the laborious arithmetical calculations required by his father's work as the supervisor of taxes in Rouen, France.[2] He designed the machine to add and subtract two numbers and to perform multiplication and division through repeated addition or subtraction. There were three versions of his calculator: one for accounting, one for surveying, and one for science. The accounting version represented the livre, which was the currency in France at the time; the next dial to the right represented sols, where 20 sols make 1 livre, and the right-most dial represented deniers, where 12 deniers make 1 sol. Pascal's calculator was especially successful in the design of its carry mechanism, which carries 1 to the next dial when the first dial changes from 9 to 0. His innovation made each digit independent of the state of the others, enabling multiple carries to rapidly cascade from one digit to another regardless of the machine's capacity. Pascal was also the first to shrink and adapt for his purpose a lantern gear, used in turret clocks and water wheels. This innovation allowed the device to resist the strength of any operator input with very little added friction. Pascal designed the machine in 1642.[3] After 50 prototypes, he presented the device to the public in 1645, dedicating it to Pierre Séguier, then chancellor of France.[4] Pascal built around twenty more machines during the next decade, many of which improved on his original design. In 1649, King Louis XIV gave Pascal a royal privilege (similar to a patent), which provided the exclusive right to design and manufacture calculating machines in France. Nine Pascal calculators presently exist;[5] most are on display in European museums.
Many later calculators were either directly inspired by, or shaped by the same historical influences that had led to, Pascal's invention. Gottfried Leibniz invented his Leibniz wheels after 1671, after trying to add an automatic multiplication feature to the Pascaline.[6] In 1820, Thomas de Colmar designed his arithmometer, the first mechanical calculator strong enough and reliable enough to be used daily in an office environment. It is not clear whether he ever saw Leibniz's device, but he either re-invented it or utilized Leibniz's invention of the step drum. Blaise Pascal began to work on his calculator in 1642, when he was 18 years old. He had been assisting his father, who worked as a tax commissioner, and sought to produce a device which could reduce some of his workload. Pascal received a royal privilege in 1649 that granted him exclusive rights to make and sell calculating machines in France. By 1654 he had sold about twenty machines (only nine of those twenty machines are known to exist today[7]), but the cost and complexity of the Pascaline was a barrier to further sales, and production ceased in that year. By that time Pascal had moved on to the study of religion and philosophy, which resulted in both the Lettres provinciales and the Pensées. The tercentenary celebration of Pascal's invention of the mechanical calculator occurred during World War II, when France was occupied by Germany, and therefore the main celebration was held in London, England. Speeches given during the event highlighted Pascal's practical achievements, at a time when he was already known in the field of pure mathematics, and his creative imagination, along with how ahead of their time both the machine and its inventor were.[8] The calculator had spoked metal wheel dials with the digits 0 through 9 displayed around the circumference of each wheel (for base-10 digits).
To input a digit, the user placed a stylus in the corresponding space between the spokes and turned the dial clockwise until the metal stop was reached, similar to the way the rotary dial of a telephone is used. This displayed the number in the accumulator at the top of the calculator. Then, one simply dialed the second number to be added, causing the sum of both numbers to appear in the accumulator. Each dial is associated with a one-digit display window located directly above it, which displays the value of the accumulator for this position. The complement of this digit, in the base of the wheel (6, 10, 12, 20), is displayed just above this digit. A horizontal bar hides either all the complement numbers when it is slid to the top, or all the accumulator numbers when it is slid toward the center of the machine. Since the gears of the calculator rotated in only one direction, only addition operations could be performed. However, to subtract one number from another, the method of nine's complement was used. The only two differences between an addition and a subtraction are the position of the display bar (accumulator versus complement) and the way the first number is entered (direct versus complement). For a 10-digit wheel, the fixed outside wheel is numbered from 0 to 9. The numbers are inscribed in decreasing order clockwise, going from the bottom left to the bottom right of the stop lever. To add a 5, one must insert a stylus between the spokes that surround the number 5 and rotate the wheel clockwise all the way to the stop lever. The number displayed on the corresponding display register will be increased by 5 and, if a carry transfer takes place, the display register to the left of it will be increased by 1. To add 50, use the tens input wheel (second dial from the right on a decimal machine); to add 500, use the hundreds input wheel; and so on.
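The dialing-and-carry behaviour described above can be sketched in a few lines of Python. The class and method names here are illustrative, not Pascal's terminology, and the model captures only the additive logic (one-way wheels, a carry of 1 rippling to the next digit whenever a wheel passes from 9 to 0), not the mechanism itself.

```python
# Minimal sketch of the Pascaline's additive behaviour (decimal machine).
# Names are illustrative; this models the logic, not the mechanism.

class Pascaline:
    def __init__(self, dials=6):
        self.acc = [0] * dials          # accumulator digits, index 0 = units

    def dial(self, position, digit):
        """Turn the input wheel at `position` by `digit` steps (0-9)."""
        self.acc[position] += digit
        i = position
        # propagate carries one digit at a time, as the sautoirs do;
        # a carry off the last wheel is simply lost, as on the machine
        while i < len(self.acc) and self.acc[i] > 9:
            self.acc[i] -= 10
            if i + 1 < len(self.acc):
                self.acc[i + 1] += 1
            i += 1

    def enter(self, number):
        """Dial in a whole number, one digit per wheel."""
        for pos, ch in enumerate(reversed(str(number))):
            self.dial(pos, int(ch))

    def reading(self):
        return int("".join(map(str, reversed(self.acc))))

m = Pascaline()
m.enter(12345)
m.enter(56789)
print(m.reading())  # 69134
```

Because each wheel only ever adds, subtraction must be done by nine's complement, as the text explains next.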
On all the wheels of all the known machines, except for the machine tardive,[9] two adjacent spokes are marked; these marks differ from machine to machine. On the wheel pictured on the right, they are drilled dots; on the surveying machine they are carved; some are just scratches or marks made with a bit of varnish,[10] and some were even marked with little pieces of paper.[11] These marks are used to set the corresponding cylinder to its maximum number, ready to be re-zeroed. To do so, the operator inserts the stylus between these two spokes and turns the wheel all the way to the stopping lever. This works because each wheel is directly linked to its corresponding display cylinder (it automatically turns by one during a carry operation). To mark the spokes during manufacturing, one can move the cylinder so that its highest number is displayed and then mark the spoke under the stopping lever and the one to the right of it. Four of the known machines have inner wheels of complements, which were used to enter the first operand in a subtraction. They are mounted at the center of each spoked metal wheel and turn with it. The wheel displayed in the picture above has an inner wheel of complements, but the numbers written on it are barely visible. On a decimal machine, the digits 0 through 9 are carved clockwise, with each digit positioned between two spokes so that the operator can directly inscribe its value in the window of complements by positioning his stylus between them and turning the wheel clockwise all the way to the stop lever.[12] The marks on two adjacent spokes flank the digit 0 inscribed on this wheel. On four of the known machines, above each wheel, a small quotient wheel is mounted on the display bar. These quotient wheels, which are set by the operator, have numbers from 1 to 10 inscribed clockwise on their peripheries (even above a non-decimal wheel).
Quotient wheels seem to have been used during a division to memorize the number of times the divisor was subtracted at each given index.[13] Pascal went through 50 prototypes before settling on his final design; we know that he started with some sort of calculating clock mechanism which apparently "works by springs and which has a very simple design", was used "many times" and remained in "operating order". Nevertheless, "while always improving on it" he found reason to try to make the whole system more reliable and robust.[14] Eventually he adopted a component of very large clocks, shrinking and adapting for his purpose the robust gears that can be found in a turret clock mechanism called a lantern gear, itself derived from a water wheel mechanism. This could easily handle the strength of an operator input.[15] Pascal adapted a pawl and ratchet mechanism to his own turret wheel design; the pawl prevents the wheel from turning counterclockwise during an operator input, but it is also used to precisely position the display wheel and the carry mechanism for the next digit when it is pushed up and lands into its next position. Because of this mechanism, each number displayed is perfectly centered in the display window and each digit is precisely positioned for the next operation. This mechanism would be moved six times if the operator dialed a six on its associated input wheel. The sautoir is the centerpiece of the pascaline's carry mechanism. In his "Avis nécessaire...", Pascal noted that a machine with 10,000 wheels would work as well as a machine with two wheels, because each wheel is independent of the others. When it is time to propagate a carry, the sautoir, under the sole influence of gravity,[16] is thrown toward the next wheel without any contact between the wheels. During its free fall the sautoir behaves like an acrobat jumping from one trapeze to the next without the trapezes touching each other ("sautoir" comes from the French verb sauter, which means to jump).
All the wheels (including gears and sautoirs) therefore have the same size and weight, independently of the capacity of the machine. Pascal used gravity to arm the sautoirs. One must turn the wheel five steps, from 4 to 9, in order to fully arm a sautoir, but the carry transfer will move the next wheel only one step. Thus, much extra energy is accumulated during the arming of a sautoir. All the sautoirs are armed by either an operator input or a carry forward. To re-zero a 10,000-wheel machine, if one existed, the operator would have to set every wheel to its maximum and then add a 1 to the "unit" wheel. The carry would turn every input wheel one by one in a very rapid domino-effect fashion and all the display registers would be reset. The carry transmission has three phases: The Pascaline is a direct adding machine (it has no crank), so the value of a number is added to the accumulator as it is being dialed in. By moving a display bar, the operator can see either the number stored in the accumulator or the 9's complement of its value. Subtraction is performed like addition by using 9's complement arithmetic. The 9's complement of any one-digit decimal number d is 9 − d. So the 9's complement of 4 is 5 and the 9's complement of 7 is 2. In a decimal machine with n dials, the 9's complement of a number A is C9(A) = (10^n − 1) − A, and therefore the 9's complement of (A − B) is C9(A − B) = C9(A) + B. In other words, the 9's complement of the difference of two numbers is equal to the sum of the 9's complement of the minuend plus the subtrahend. The same principle is valid and can be used with numbers composed of digits of various bases (base 6, 12, 20), as in the surveying or the accounting machines. This can also be extended to several subtrahends: C9(A − B − C) = C9(A) + B + C. Applying this principle to the Pascaline: the machine has to be re-zeroed before each new operation.
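The complement identities above are easy to verify numerically. The sketch below (the helper name `c9` is mine, not from the source) checks them for a five-dial decimal machine, using the worked subtraction 54,321 − 12,345 that appears later in the text.

```python
# Illustrative check of the nine's complement identities used by the
# Pascaline: on an n-dial decimal machine, C9(A) = (10**n - 1) - A,
# and C9(A - B) = C9(A) + B, so subtraction reduces to pure addition.

def c9(a, n):
    """Nine's complement of a on an n-digit decimal machine."""
    return (10**n - 1) - a

n = 5
A, B = 54321, 12345
assert c9(A - B, n) == c9(A, n) + B       # the identity itself
assert c9(c9(A - B, n), n) == A - B       # complementing twice restores the value
print(c9(A, n))            # 45678: what the operator dials in for A
print(c9(c9(A, n) + B, n)) # 41976: A - B, read off the complement window
```

This is exactly the workflow the machine supports: dial in C9(A), add B, and read the answer in the complement window.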
To reset his machine, the operator has to set all the wheels to their maximum, using the marks on two adjacent spokes, and then add 1 to the rightmost wheel.[17] The method of re-zeroing that Pascal chose, which propagates a carry right through the machine, is the most demanding task for a mechanical calculator and proves, before each operation, that the machine is fully functional. This is a testament to the quality of the Pascaline, because none of the 18th-century criticisms of the machine mentioned a problem with the carry mechanism, and yet this feature was fully tested on all the machines, by their resets, all the time.[18] Additions are performed with the display bar moved closest to the edge of the machine, showing the direct value of the accumulator. After re-zeroing the machine, numbers are dialed in one after the other. The following table shows all the steps required to compute 12,345 + 56,789 = 69,134. Subtraction is performed with the display bar moved closest to the center of the machine, thereby covering the accumulator value and instead showing the 9's complement value of the accumulator. After being reset, the accumulator would contain all 0s and, therefore, all 9s would be shown in the 9's complement display. The first number to be entered to perform a subtraction is the 9's complement of the minuend. For example, if the minuend were 7, its 9's complement is 2, and this value would be dialed in as with addition. The number 7 will appear in the 9's complement display. Another method of entering the 9's complement directly is to simply place the stylus in the 9 value and rotate the dial to the number representing the value to be entered. In this case the dial would be rotated from 9 to 7: the same as entering 2. Once the minuend has been dialed in, the subtrahends are entered as if being added. The 9's complement display will show the result of the operation.
The accumulator contains C9(A) during the first step and C9(A − B) after adding B. In displaying that data in the complement window, the operator sees C9(C9(A)), which is A, and then C9(C9(A − B)), which is (A − B). It feels like an addition, since the only two differences between an addition and a subtraction are the position of the display bar (accumulator versus complement) and the way the first number is entered (direct versus complement). The following table shows all the steps required to compute 54,321 − 12,345 = 41,976. Pascalines came in both decimal and non-decimal varieties, both of which can be viewed in museums today. They were designed for use by scientists, accountants and surveyors. The simplest Pascaline had five dials; later variants had up to ten dials. The contemporary French currency system used livres, sols and deniers, with 20 sols to a livre and 12 deniers to a sol. Length was measured in toises, pieds, pouces and lignes, with 6 pieds to a toise, 12 pouces to a pied and 12 lignes to a pouce. Therefore, the pascaline needed wheels in base 6, 10, 12 and 20. Non-decimal wheels were always located before the decimal part. In an accounting machine (..10,10,20,12), the decimal part counted the number of livres (20 sols), sols (12 deniers) and deniers. In a surveyor's machine (..10,10,6,12,12), the decimal part counted the number of toises (6 pieds), pieds (12 pouces), pouces (12 lignes) and lignes. Scientific machines just had decimal wheels. The decimal part of each machine is highlighted. The metric system was adopted in France on December 10, 1799, by which time Pascal's basic design had inspired other craftsmen, although with a similar lack of commercial success. Most of the machines that have survived the centuries are of the accounting type. Seven of them are in European museums, one belongs to the IBM corporation and one is in private hands.
Pascal planned to distribute the Pascaline broadly in order to reduce the workload for people who needed to perform laborious arithmetic.[24] Drawing inspiration from his father, a tax commissioner, Pascal hoped to provide a shortcut to hours of number crunching performed by workers in professions such as mathematics, physics, and astronomy.[25] But because of the intricacies of the device, the relationship Pascal had with craftsmen, and the intellectual property laws he influenced, the production of the Pascaline was far more limited than he had envisioned. Only 20 Pascalines were produced over the 10 years following its creation.[26] In 1649, King Louis XIV of France gave Pascal a royal privilege (a precursor to the patent), which provided the exclusive right to design and manufacture calculating machines in France, allowing the Pascaline to be the first calculator sold by a distributor.[27] Pascal feared that craftsmen would not be able to accurately reproduce his Pascaline, which would result in false copies that would ruin his reputation along with the reputation of his machine.[24] In 1645, in order to control the production of his invention, Pascal wrote to Monseigneur Le Chancelier (the chancellor of France, Pierre Séguier) in his letter entitled "La Machine d'arithmétique. Lettre dédicatoire à Monseigneur le Chancelier".[24] Pascal requested that no Pascaline be made without his permission.[24] His ingenuity garnered the respect of King Louis XIV of France, who granted his request, but it came at a price: craftsmen were not able to legally experiment with Pascal's design, nor were they able to distribute his machine without his permission and guidance. Pascal lived during France's Ancien Régime. During his time, craftsmen in Europe increasingly organised into guilds, such as the English clockmakers who formed the Clockmakers guild in 1631, half-way through Pascal's efforts to create the calculator.
This affected Pascal's ability to recruit talent, as guilds often reduced the exchange of ideas and trade; sometimes, craftsmen would withhold their labour altogether to rebel against the nobles. Thus Pascal was in a market with a scarcity of skills and willing workers.[28] Importantly, artisans were not as free as intellectuals to create the machine: Gottfried Leibniz, who built upon Pascal's calculator later in the 17th century, had the progress of his machine halted because his artisan sold the machine's parts for financial solvency.[29] Pascal's own conduct led to difficulty in recruiting artisans for his project. This was rooted in his belief that matters of the mind trumped those of the body. Pascal was not alone, as many natural philosophers of his time had a hylomorphic understanding of the inventing process: ideas precede materialisation, as form precedes matter. This naturally led to an emphasis on theoretical purity and an underappreciation for practical work. As Pascal described artisans: "[they] work through groping trial and error, that is, without certain measures and proportions regulated by art, produc[ing] nothing corresponding to what they had sought, or, what's more, they make a little monster appear, that lacks its principal limbs, the others being deformed, lacking any proportion."[30] Pascal operated his project with this hierarchy in mind: he invented and thought, while the artisans simply executed. He hid the theory from artisans, instead promoting that they should simply remember what to do, not necessarily why they should do it, i.e., until "practice has made the rules of theory so common that [the rules] have finally been reduced into art".
This stemmed from his lack of faith not only in the artisanal work process, but in the artisans themselves: "artisans cannot regulate themselves to produce unified machines autonomously."[30] In contrast, Samuel Morland, one of Pascal's contemporaries also working on a calculating machine, likely succeeded because of his ability to maintain good relations with his craftsmen. Morland proudly attributed part of his invention to the artisans by name, an odd thing for a nobleman to do for a commoner at the time. Morland was able to recruit the best talent in Europe. His first craftsman was the famous Peter Blondeau, who had already received protection and recognition from the French statesman Richelieu for his contributions in producing coinage for England. Morland's other craftsmen were similarly accomplished: the third, the Dutchman John Fromanteel, came from a famous Dutch family that pioneered the pendulum clock.[30] In the end, Pascal succeeded in cementing his name as the sole creator of the Pascaline.
The royal patent states that it was his invention exclusively.[31] Besides being the first calculating machine made public during its time, the pascaline is also: In 1957, Franz Hammer, a biographer of Johannes Kepler, announced the discovery of two letters that Wilhelm Schickard had written to his friend Johannes Kepler in 1623 and 1624 which contain the drawings of a previously unknown working calculating clock, predating Pascal's work by twenty years.[36] The 1624 letter stated that the first machine to be built by a professional had been destroyed in a fire during its construction and that he was abandoning his project.[37] After careful examination it was found, in contradiction to Franz Hammer's understanding, that Schickard's drawings had been published at least once per century starting from 1718.[38] Bruno von Freytag Loringhoff, a mathematics professor at the University of Tübingen, built the first replica of Schickard's machine, but not without adding wheels and springs to finish the design.[39] This detail is not described in Schickard's two surviving letters and drawings. A problem in the operation of the Schickard machine, based on the surviving notes, was found after the replicas were built.[40] Schickard's machine used clock wheels which were made stronger and were therefore heavier, to prevent them from being damaged by the force of an operator input. Each digit used a display wheel, an input wheel and an intermediate wheel. During a carry transfer all these wheels meshed with the wheels of the digit receiving the carry. The cumulative friction and inertia of all these wheels could "...potentially damage the machine if a carry needed to be propagated through the digits, for example like adding 1 to a number like 9,999".[41] The great innovation in Pascal's calculator was that it was designed so that each input wheel is totally independent of all the others and carries are propagated in sequence.
Pascal chose, for his machine, a method of re-zeroing that propagates a carry right through the machine.[17] It is the most demanding operation for a mechanical calculator to execute and proved, before each operation, that the carry mechanism of the Pascaline was fully functional. This could be taken as a testament to the quality of the Pascaline, because none of the 18th-century criticisms of the machine mentioned a problem with the carry mechanism, and yet this feature was fully tested on all the machines, by their resets, all the time.[18] Gottfried Leibniz started to work on his own calculator after Pascal's death. He first tried to build a machine that could multiply automatically while sitting on top of the Pascaline calculator, assuming incorrectly that all the dials on Pascal's calculator could be operated at the same time. Even though this could not be done, it was the first time that a pinwheel was described and used in the drawing of a calculator. He then devised a competing design, the Stepped Reckoner, which was meant to perform additions, subtractions and multiplications automatically, and division under operator control. Leibniz struggled for forty years to perfect this design and produced two machines, one in 1694 and one in 1706.[42] Only the machine built in 1694 is known to exist; it was rediscovered at the end of the 19th century, having spent 250 years forgotten in an attic at the University of Göttingen.[42] The German calculating-machine inventor Arthur Burkhardt was asked to attempt to put Leibniz's machine in operating condition. His report was favorable except for the sequence in the carry,[43] and "therefore, especially in the case of multiple carry transfers, the operator had to check the result and manually correct the possible errors".[44] Leibniz had not succeeded in creating a calculator that worked properly, but he had invented the Leibniz wheel, the principle of a two-motion mechanical calculator.
He was also the first to have cursors to inscribe the first operand and a movable carriage for results. There were five additional attempts at designing "direct entry" calculating machines in the 17th century (including the designs of Tito Burattini, Samuel Morland and René Grillet). Around 1660 Claude Perrault designed an abaque rhabdologique that is often mistaken for a mechanical calculator because it has a carry mechanism between the numbers. But it is actually an abacus, since it requires the operator to handle the machine differently when a carry transfer takes place.[45] Pascal's calculator was the most successful mechanical calculator developed in the 17th century for the addition and subtraction of large numbers. The stepped reckoner had a problem in the carry mechanism after more than two consecutive carries, and the other devices had carry mechanisms (one-tooth wheel) that were limited in their capacity to carry across multiple digits, or had no carry mechanism between the digits of the accumulator. Calculating machines did not become commercially viable until 1851, when Thomas de Colmar released, after thirty years of development, his simplified arithmometer, the first machine strong enough to be used daily in an office environment. The Arithmometer was designed around Leibniz wheels and initially used Pascal's 9's complement method for subtractions.
https://en.wikipedia.org/wiki/Pascal%27s_calculator
An Egyptian fraction is a finite sum of distinct unit fractions, such as 1/2 + 1/3 + 1/16. That is, each fraction in the expression has a numerator equal to 1 and a denominator that is a positive integer, and all the denominators differ from each other. The value of an expression of this type is a positive rational number a/b; for instance the Egyptian fraction above sums to 43/48. Every positive rational number can be represented by an Egyptian fraction. Sums of this type, and similar sums also including 2/3 and 3/4 as summands, were used as a serious notation for rational numbers by the ancient Egyptians, and continued to be used by other civilizations into medieval times. In modern mathematical notation, Egyptian fractions have been superseded by vulgar fractions and decimal notation. However, Egyptian fractions continue to be an object of study in modern number theory and recreational mathematics, as well as in modern historical studies of ancient mathematics. Beyond their historical use, Egyptian fractions have some practical advantages over other representations of fractional numbers. For instance, Egyptian fractions can help in dividing food or other objects into equal shares.[1] For example, if one wants to divide 5 pizzas equally among 8 diners, the Egyptian fraction 5/8 = 1/2 + 1/8 means that each diner gets half a pizza plus another eighth of a pizza, for example by splitting 4 pizzas into 8 halves, and the remaining pizza into 8 eighths. Exercises in performing this sort of fair division of food are a standard classroom example in teaching students to work with unit fractions.[2] Egyptian fractions can provide a solution to rope-burning puzzles, in which a given duration is to be measured by igniting non-uniform ropes which burn out after a unit time.
Any rational fraction of a unit of time can be measured by expanding the fraction into a sum of unit fractions and then, for each unit fraction 1/x, burning a rope so that it always has x simultaneously lit points where it is burning. For this application, it is not necessary for the unit fractions to be distinct from each other. However, this solution may need an infinite number of re-lighting steps.[3] Egyptian fraction notation was developed in the Middle Kingdom of Egypt. Five early texts in which Egyptian fractions appear were the Egyptian Mathematical Leather Roll, the Moscow Mathematical Papyrus, the Reisner Papyrus, the Kahun Papyrus and the Akhmim Wooden Tablet. A later text, the Rhind Mathematical Papyrus, introduced improved ways of writing Egyptian fractions. The Rhind papyrus was written by Ahmes and dates from the Second Intermediate Period; it includes a table of Egyptian fraction expansions for rational numbers 2/n, as well as 84 word problems. Solutions to each problem were written out in scribal shorthand, with the final answers of all 84 problems being expressed in Egyptian fraction notation. Tables of expansions for 2/n similar to the one on the Rhind papyrus also appear on some of the other texts. However, as the Kahun Papyrus shows, vulgar fractions were also used by scribes within their calculations. To write the unit fractions used in their Egyptian fraction notation, in hieroglyph script, the Egyptians placed the hieroglyph (er, "[one] among" or possibly re, mouth) above a number to represent the reciprocal of that number. Similarly, in hieratic script they drew a line over the letter representing the number.
For example: The Egyptians had special symbols for 1/2, 2/3, and 3/4 that were used to reduce the size of numbers greater than 1/2 when such numbers were converted to an Egyptian fraction series. The remaining number after subtracting one of these special fractions was written as a sum of distinct unit fractions according to the usual Egyptian fraction notation. The Egyptians also used an alternative notation, modified from the Old Kingdom, to denote a special set of fractions of the form 1/2^k (for k = 1, 2, …, 6) and sums of these numbers, which are necessarily dyadic rational numbers. These have been called "Horus-Eye fractions" after a theory (now discredited)[4] that they were based on the parts of the Eye of Horus symbol. They were used in the Middle Kingdom in conjunction with the later notation for Egyptian fractions to subdivide a hekat, the primary ancient Egyptian volume measure for grain, bread, and other small quantities of volume, as described in the Akhmim Wooden Tablet. If any remainder was left after expressing a quantity in Eye of Horus fractions of a hekat, the remainder was written using the usual Egyptian fraction notation as multiples of a ro, a unit equal to 1/320 of a hekat. Modern historians of mathematics have studied the Rhind papyrus and other ancient sources in an attempt to discover the methods the Egyptians used in calculating with Egyptian fractions. In particular, study in this area has concentrated on understanding the tables of expansions for numbers of the form 2/n in the Rhind papyrus. Although these expansions can generally be described as algebraic identities, the methods used by the Egyptians may not correspond directly to these identities.
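The hekat subdivision just described can be illustrated with a short sketch (the function name is mine, not from the source): split a fraction of a hekat greedily into the dyadic parts 1/2 … 1/64, and express whatever is left over in ro (1/320 hekat). Dividing a hekat by 3 then reproduces the known Akhmim Wooden Tablet result of 1/4 + 1/16 + 1/64 hekat plus 5/3 ro.

```python
# Hypothetical illustration of the "Eye of Horus" scheme: a hekat quantity
# as a sum of dyadic fractions 1/2 ... 1/64 plus a remainder in ro,
# where 1 ro = 1/320 hekat.

from fractions import Fraction

def horus_eye(q):
    """Split a hekat quantity q into dyadic parts and a ro remainder."""
    parts = []
    for k in range(1, 7):              # denominators 2, 4, 8, 16, 32, 64
        d = Fraction(1, 2**k)
        if d <= q:                     # greedily take each dyadic part that fits
            parts.append(2**k)
            q -= d
    return parts, q * 320              # leftover, measured in ro

parts, ro = horus_eye(Fraction(1, 3))  # one third of a hekat
print(parts)  # [4, 16, 64]  i.e. 1/4 + 1/16 + 1/64 hekat
print(ro)     # 5/3 ro
```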
Additionally, the expansions in the table do not match any single identity; rather, different identities match the expansions for prime and for composite denominators, and more than one identity fits the numbers of each type: Egyptian fraction notation continued to be used in Greek times and into the Middle Ages,[9] despite complaints as early as Ptolemy's Almagest about the clumsiness of the notation compared to alternatives such as the Babylonian base-60 notation. Related problems of decomposition into unit fractions were also studied in 9th-century India by the Jain mathematician Mahāvīra.[10] An important text of medieval European mathematics, the Liber Abaci (1202) of Leonardo of Pisa (more commonly known as Fibonacci), provides some insight into the uses of Egyptian fractions in the Middle Ages, and introduces topics that continue to be important in modern mathematical study of these series. The primary subject of the Liber Abaci is calculations involving decimal and vulgar fraction notation, which eventually replaced Egyptian fractions. Fibonacci himself used a complex notation for fractions involving a combination of a mixed radix notation with sums of fractions. Many of the calculations throughout Fibonacci's book involve numbers represented as Egyptian fractions, and one section of this book[11] provides a list of methods for conversion of vulgar fractions to Egyptian fractions. If the number is not already a unit fraction, the first method in this list is to attempt to split the numerator into a sum of divisors of the denominator; this is possible whenever the denominator is a practical number, and Liber Abaci includes tables of expansions of this type for the practical numbers 6, 8, 12, 20, 24, 60, and 100.
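The practical-number method can be sketched as a small subset-sum search over the divisors of the denominator: each divisor d of b that is used contributes d/b = 1/(b/d) to the sum. The function name is mine, and the brute-force search is only meant for the small practical numbers listed above, not as an efficient algorithm.

```python
# Sketch of the first Liber Abaci conversion method: when b is a practical
# number, write a as a sum of divisors of b, so a/b becomes a sum of unit
# fractions 1/(b//d). Brute force; fine for small b like 6, 8, 12, ..., 100.

from itertools import combinations

def split_into_divisors(a, b):
    """Return unit-fraction denominators for a/b, or None if no split exists."""
    divs = [d for d in range(1, b + 1) if b % d == 0]
    for r in range(1, len(divs) + 1):
        for combo in combinations(divs, r):
            if sum(combo) == a:
                return sorted(b // d for d in combo)
    return None

print(split_into_divisors(5, 8))    # [2, 8]: 5/8 = 1/2 + 1/8
print(split_into_divisors(7, 12))   # [2, 12]: 7/12 = 1/2 + 1/12
```

Note that the 5/8 result is exactly the pizza-sharing expansion from the introduction.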
The next several methods involve algebraic identities such as a/(ab − 1) = 1/b + 1/(b(ab − 1)). For instance, Fibonacci represents the fraction 8/11 by splitting the numerator into a sum of two numbers, each of which divides one plus the denominator: 8/11 = 6/11 + 2/11. Fibonacci applies the algebraic identity above to each of these two parts, producing the expansion 8/11 = 1/2 + 1/22 + 1/6 + 1/66. Fibonacci describes similar methods for denominators that are two or three less than a number with many factors. In the rare case that these other methods all fail, Fibonacci suggests a "greedy" algorithm for computing Egyptian fractions, in which one repeatedly chooses the unit fraction with the smallest denominator that is no larger than the remaining fraction to be expanded: that is, in more modern notation, we replace a fraction x/y by the expansion x/y = 1/⌈y/x⌉ + ((−y) mod x)/(y⌈y/x⌉), where ⌈ ⌉ represents the ceiling function; since (−y) mod x < x, this method yields a finite expansion. Fibonacci suggests switching to another method after the first such expansion, but he also gives examples in which this greedy expansion was iterated until a complete Egyptian fraction expansion was constructed: 4/13 = 1/4 + 1/18 + 1/468 and 17/29 = 1/2 + 1/12 + 1/348. Compared to ancient Egyptian expansions or to more modern methods, this method may produce expansions that are quite long, with large denominators, and Fibonacci himself noted the awkwardness of the expansions produced by this method.
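The greedy expansion described above is short enough to implement directly; the following sketch (the function name is mine) iterates the replacement step until the remainder is zero, which the text guarantees happens because the numerator strictly decreases at each step.

```python
# Compact sketch of the Fibonacci-Sylvester greedy Egyptian expansion:
# repeatedly take the largest unit fraction not exceeding what is left.

from fractions import Fraction

def greedy_egyptian(x, y):
    """Denominators of the greedy Egyptian fraction expansion of x/y."""
    r = Fraction(x, y)
    denoms = []
    while r > 0:
        # smallest d with 1/d <= r, i.e. d = ceil(y/x) for the current r
        d = -(-r.denominator // r.numerator)
        denoms.append(d)
        r -= Fraction(1, d)   # the remainder is ((-y) mod x) / (y*d)
    return denoms

print(greedy_egyptian(4, 13))    # [4, 18, 468]
print(greedy_egyptian(17, 29))   # [2, 12, 348]
```

Running it on 5/121 reproduces the five-term expansion with very large denominators quoted below, illustrating why Fibonacci recommended the greedy step only as a last resort.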
For instance, the greedy method expands 5/121 = 1/25 + 1/757 + 1/763309 + 1/873960180913 + 1/1527612795642093418846225, while other methods lead to the shorter expansion 5/121 = 1/33 + 1/121 + 1/363. Sylvester's sequence 2, 3, 7, 43, 1807, ... can be viewed as generated by an infinite greedy expansion of this type for the number 1, where at each step we choose the denominator ⌊y/x⌋ + 1 instead of ⌈y/x⌉, and sometimes Fibonacci's greedy algorithm is attributed to James Joseph Sylvester. After his description of the greedy algorithm, Fibonacci suggests yet another method, expanding a fraction a/b by searching for a number c having many divisors, with b/2 < c < b, replacing a/b by ac/bc, and expanding ac as a sum of divisors of bc, similar to the method proposed by Hultsch and Bruins to explain some of the expansions in the Rhind papyrus. Although Egyptian fractions are no longer used in most practical applications of mathematics, modern number theorists have continued to study many different problems related to them. These include problems of bounding the length or maximum denominator in Egyptian fraction representations, finding expansions of certain special forms or in which the denominators are all of some special type, the termination of various methods for Egyptian fraction expansion, and showing that expansions exist for any sufficiently dense set of sufficiently smooth numbers. Some notable problems remain unsolved with regard to Egyptian fractions, despite considerable effort by mathematicians. Guy (2004) describes these problems in more detail and lists numerous additional open problems.
https://en.wikipedia.org/wiki/Egyptian_fraction
Ancient Egyptian mathematics is the mathematics that was developed and used in Ancient Egypt c. 3000 to c. 300 BCE, from the Old Kingdom of Egypt until roughly the beginning of Hellenistic Egypt. The ancient Egyptians utilized a numeral system for counting and solving written mathematical problems, often involving multiplication and fractions. Evidence for Egyptian mathematics is limited to a scarce amount of surviving sources written on papyrus. From these texts it is known that ancient Egyptians understood concepts of geometry, such as determining the surface area and volume of three-dimensional shapes useful for architectural engineering, and algebra, such as the false position method and quadratic equations. Written evidence of the use of mathematics dates back to at least 3200 BC with the ivory labels found in Tomb U-j at Abydos. These labels appear to have been used as tags for grave goods and some are inscribed with numbers.[1] Further evidence of the use of the base 10 number system can be found on the Narmer Macehead which depicts offerings of 400,000 oxen, 1,422,000 goats and 120,000 prisoners.[2] Archaeological evidence has suggested that the Ancient Egyptian counting system had origins in Sub-Saharan Africa.[3] Also, fractal geometry designs which are widespread among Sub-Saharan African cultures are also found in Egyptian architecture and cosmological signs.[4] The evidence of the use of mathematics in the Old Kingdom (c. 2690–2180 BC) is scarce, but can be deduced from inscriptions on a wall near a mastaba in Meidum which gives guidelines for the slope of the mastaba.[5] The lines in the diagram are spaced at a distance of one cubit and show the use of that unit of measurement.[1] The earliest true mathematical documents date to the 12th Dynasty (c. 1990–1800 BC). The Moscow Mathematical Papyrus, the Egyptian Mathematical Leather Roll, the Lahun Mathematical Papyri which are a part of the much larger collection of Kahun Papyri, and the Berlin Papyrus 6619 all date to this period.
The Rhind Mathematical Papyrus, which dates to the Second Intermediate Period (c. 1650 BC), is said to be based on an older mathematical text from the 12th dynasty.[6] The Moscow Mathematical Papyrus and Rhind Mathematical Papyrus are so-called mathematical problem texts. They consist of a collection of problems with solutions. These texts may have been written by a teacher or a student engaged in solving typical mathematics problems.[1] An interesting feature of ancient Egyptian mathematics is the use of unit fractions.[7] The Egyptians used some special notation for fractions such as 1/2, 1/3 and 2/3, and in some texts for 3/4, but other fractions were all written as unit fractions of the form 1/n or sums of such unit fractions. Scribes used tables to help them work with these fractions. The Egyptian Mathematical Leather Roll for instance is a table of unit fractions which are expressed as sums of other unit fractions. The Rhind Mathematical Papyrus and some of the other texts contain 2/n tables. These tables allowed the scribes to rewrite any fraction of the form 2/n as a sum of unit fractions.[1] During the New Kingdom (c. 1550–1070 BC) mathematical problems are mentioned in the literary Papyrus Anastasi I, and the Papyrus Wilbour from the time of Ramesses III records land measurements. In the workers' village of Deir el-Medina several ostraca have been found that record volumes of dirt removed while quarrying the tombs.[1][6] Current understanding of ancient Egyptian mathematics is impeded by the paucity of available sources. The sources that do exist include several texts generally dated to the Middle Kingdom and Second Intermediate Period, and, from the New Kingdom, a handful of mathematical texts and inscriptions related to computations. According to Étienne Gilson, Abraham "taught the Egyptians arithmetic and astronomy".[9] Ancient Egyptian texts could be written in either hieroglyphs or in hieratic.
In either representation the number system was always given in base 10. The number 1 was depicted by a simple stroke, the number 2 was represented by two strokes, etc. The numbers 10, 100, 1000, 10,000 and 100,000 had their own hieroglyphs. Number 10 is a hobble for cattle, number 100 is represented by a coiled rope, the number 1000 is represented by a lotus flower, the number 10,000 is represented by a finger, the number 100,000 is represented by a frog, and a million was represented by a god with his hands raised in adoration.[8] Egyptian numerals date back to the Predynastic period. Ivory labels from Abydos record the use of this number system. It is also common to see the numerals in offering scenes to indicate the number of items offered. The king's daughter Neferetiabet is shown with an offering of 1000 oxen, bread, beer, etc. The Egyptian number system was additive. Large numbers were represented by collections of the glyphs and the value was obtained by simply adding the individual numbers together. The Egyptians almost exclusively used fractions of the form 1/n. One notable exception is the fraction 2/3, which is frequently found in the mathematical texts. Very rarely a special glyph was used to denote 3/4. The fraction 1/2 was represented by a glyph that may have depicted a piece of linen folded in two. The fraction 2/3 was represented by the glyph for a mouth with 2 (different sized) strokes. The rest of the fractions were always represented by a mouth superimposed over a number.[8] Steps of calculations were written in sentences in Egyptian languages. (e.g. "Multiply 10 times 100; it becomes 1000.") In Rhind Papyrus Problem 28, the hieroglyphs (D54, D55), symbols for feet, were used to mean "to add" and "to subtract."
These were presumably shorthands for "to go in" and "to go out."[10][11] Egyptian multiplication was done by a repeated doubling of the number to be multiplied (the multiplicand), and choosing which of the doublings to add together (essentially a form of binary arithmetic), a method that links back to the Old Kingdom. The multiplicand was written next to the figure 1; the multiplicand was then added to itself, and the result written next to the number 2. The process was continued until the doublings gave a number greater than half of the multiplier. Then the doubled numbers (1, 2, etc.) would be repeatedly subtracted from the multiplier to select which of the results of the existing calculations should be added together to create the answer.[2] As a shortcut for larger numbers, the multiplicand can also be immediately multiplied by 10, 100, 1000, 10000, etc. For example, Problem 69 on the Rhind Papyrus (RMP) provides the following illustration, as if hieroglyphic symbols were used (rather than the RMP's actual hieratic script).[8] A check mark denotes the intermediate results that are added together to produce the final answer. The table above can also be used to divide 1120 by 80. We would solve this problem by finding the quotient as the sum of those multipliers of 80 that add up to 1120. In this example that would yield a quotient of 10 + 4 = 14.[8] A more complicated example of the division algorithm is provided by Problem 66. A total of 3200 ro of fat are to be distributed evenly over 365 days. First the scribe would double 365 repeatedly until the largest possible multiple of 365 is reached, which is smaller than 3200. In this case 8 times 365 is 2920, and further addition of multiples of 365 would clearly give a value greater than 3200. Next it is noted that 2/3 + 1/10 + 1/2190 times 365 gives us the value of 280 we need.
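The doubling-and-selection procedure can be sketched in modern terms (a Python illustration of the idea, not a transcription of any papyrus table):

```python
def egyptian_multiply(multiplicand, multiplier):
    """Multiply by repeated doubling: keep exactly those doublings of the
    multiplicand whose row numbers (1, 2, 4, ...) sum to the multiplier."""
    total = 0
    row, doubled = 1, multiplicand
    while row <= multiplier:
        if multiplier & row:   # this row is selected (check-marked)
            total += doubled
        row <<= 1              # row numbers 1, 2, 4, 8, ...
        doubled <<= 1          # the multiplicand doubled at each row
    return total

print(egyptian_multiply(80, 14))   # 1120, so 1120 divided by 80 is 14
```

Division runs the same table in reverse: find which rows of doublings of the divisor sum to the dividend.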
Hence we find that 3200 divided by 365 must equal 8 + 2/3 + 1/10 + 1/2190.[8] Egyptian algebra problems appear in both the Rhind mathematical papyrus and the Moscow mathematical papyrus as well as several other sources.[8] Aha problems involve finding unknown quantities (referred to as Aha) if the sum of the quantity and part(s) of it are given. The Rhind Mathematical Papyrus also contains four of these types of problems. Problems 1, 19, and 25 of the Moscow Papyrus are Aha problems. For instance problem 19 asks one to calculate a quantity taken 1 + 1/2 times and added to 4 to make 10.[8] In other words, in modern mathematical notation we are asked to solve the linear equation

{\displaystyle x\left(1+{\frac {1}{2}}\right)+4=10.}

Solving these Aha problems involves a technique called the method of false position. The technique is also called the method of false assumption. The scribe would substitute an initial guess of the answer into the problem. The solution using the false assumption would be proportional to the actual answer, and the scribe would find the answer by using this ratio.[8] The mathematical writings show that the scribes used (least) common multiples to turn problems with fractions into problems using integers. In this connection red auxiliary numbers are written next to the fractions.[8] The use of the Horus eye fractions shows some (rudimentary) knowledge of geometrical progression. Knowledge of arithmetic progressions is also evident from the mathematical sources.[8] The ancient Egyptians were the first civilization to develop and solve second-degree (quadratic) equations. This information is found in the Berlin Papyrus fragment. Additionally, the Egyptians solved first-degree algebraic equations, as found in the Rhind Mathematical Papyrus.[12] There are only a limited number of problems from ancient Egypt that concern geometry. Geometric problems appear in both the Moscow Mathematical Papyrus (MMP) and in the Rhind Mathematical Papyrus (RMP).
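For a linear problem, false position amounts to a single proportional correction; a sketch using problem 19 (the guess of 2 is arbitrary, and the function name is my own):

```python
from fractions import Fraction

def false_position(multiple, added, target, guess=Fraction(2)):
    """Solve multiple*x + added = target by one false-position step:
    substitute a guess, then rescale it by (target - added) / trial."""
    trial = multiple * guess                     # value of multiple*x at the guess
    return guess * Fraction(target - added) / trial

# Moscow Papyrus problem 19: a quantity taken 1 + 1/2 times, plus 4, makes 10
print(false_position(Fraction(3, 2), 4, 10))     # 4
```

The single rescaling is valid precisely because the equation is linear in the unknown.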
The examples demonstrate that the Ancient Egyptians knew how to compute areas of several geometric shapes and the volumes of cylinders and pyramids. Problem 56 of the RMP indicates an understanding of the idea of geometric similarity. This problem discusses the ratio run/rise, also known as the seqed. Such a formula would be needed for building pyramids. In the next problem (Problem 57), the height of a pyramid is calculated from the base length and the seked (Egyptian for the reciprocal of the slope), while problem 58 gives the length of the base and the height and uses these measurements to compute the seqed. In Problem 59, part 1 computes the seqed, while the second part may be a computation to check the answer: If you construct a pyramid with base side 12 [cubits] and with a seqed of 5 palms 1 finger; what is its altitude?[8]
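Since the seked gives the horizontal run in palms for each cubit of rise, the altitude asked for in problem 59 follows directly. A sketch, assuming the standard unit conversions of 7 palms to the cubit and 4 fingers to the palm:

```python
from fractions import Fraction

PALMS_PER_CUBIT = 7

def pyramid_height(base_cubits, seked_palms):
    """Altitude in cubits from the base side and the seked
    (horizontal palms of run per cubit of vertical rise)."""
    half_base = Fraction(base_cubits, 2)
    return half_base * PALMS_PER_CUBIT / seked_palms

# base side 12 cubits; seked of 5 palms 1 finger = 5 + 1/4 palms
print(pyramid_height(12, Fraction(21, 4)))   # 8 cubits
```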
https://en.wikipedia.org/wiki/Egyptian_mathematics
A binary number is a number expressed in the base-2 numeral system or binary numeral system, a method for representing numbers that uses only two symbols for the natural numbers: typically "0" (zero) and "1" (one). A binary number may also refer to a rational number that has a finite representation in the binary numeral system, that is, the quotient of an integer by a power of two. The base-2 numeral system is a positional notation with a radix of 2. Each digit is referred to as a bit, or binary digit. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used by almost all modern computers and computer-based devices, as a preferred system of use, over various other human techniques of communication, because of the simplicity of the language and the noise immunity in physical implementation.[1] The modern binary number system was studied in Europe in the 16th and 17th centuries by Thomas Harriot and Gottfried Leibniz. However, systems related to binary numbers have appeared earlier in multiple cultures including ancient Egypt, China, Europe and India. The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions (not related to the binary number system) and Horus-Eye fractions (so called because many historians of mathematics believe that the symbols used for this system could be arranged to form the eye of Horus, although this has been disputed).[2] Horus-Eye fractions are a binary numbering system for fractional quantities of grain, liquids, or other measures, in which a fraction of a hekat is expressed as a sum of the binary fractions 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64. Early forms of this system can be found in documents from the Fifth Dynasty of Egypt, approximately 2400 BC, and its fully developed hieroglyphic form dates to the Nineteenth Dynasty of Egypt, approximately 1200 BC.[3] The method used for ancient Egyptian multiplication is also closely related to binary numbers.
In this method, multiplying one number by a second is performed by a sequence of steps in which a value (initially the first of the two numbers) is either doubled or has the first number added back into it; the order in which these steps are to be performed is given by the binary representation of the second number. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus, which dates to around 1650 BC.[4] The I Ching dates from the 9th century BC in China.[5] The binary notation in the I Ching is used to interpret its quaternary divination technique.[6] It is based on the taoistic duality of yin and yang.[7] Eight trigrams (Bagua) and a set of 64 hexagrams ("sixty-four" gua), analogous to three-bit and six-bit binary numerals, were in use at least as early as the Zhou dynasty of ancient China.[5] The Song dynasty scholar Shao Yong (1011–1077) rearranged the hexagrams in a format that resembles modern binary numbers, although he did not intend his arrangement to be used mathematically.[6] Viewing the least significant bit on top of single hexagrams in Shao Yong's square,[8] and reading along rows either from bottom right to top left with solid lines as 0 and broken lines as 1, or from top left to bottom right with solid lines as 1 and broken lines as 0, the hexagrams can be interpreted as a sequence from 0 to 63.[9] Etruscans divided the outer edge of divination livers into sixteen parts, each inscribed with the name of a divinity and its region of the sky. Each liver region produced a binary reading which was combined into a final binary for divination.[10] Divination at the Ancient Greek Dodona oracle worked by drawing from separate jars, question tablets and "yes" and "no" pellets. The result was then combined to make a final prophecy.[11] The Indian scholar Pingala (c. 2nd century BC) developed a binary system for describing prosody.[12][13] He described meters in the form of short and long syllables (the latter equal in length to two short syllables).[14] They were known as laghu (light) and guru (heavy) syllables. Pingala's Hindu classic titled Chandaḥśāstra (8.23) describes the formation of a matrix in order to give a unique value to each meter. "Chandaḥśāstra" literally translates to science of meters in Sanskrit. The binary representations in Pingala's system increase towards the right, and not to the left as in the binary numbers of modern positional notation.[15] In Pingala's system, the numbers start from number one, and not zero. Four short syllables "0000" is the first pattern and corresponds to the value one. The numerical value is obtained by adding one to the sum of place values.[16] The Ifá is an African divination system. It is similar to the I Ching, but has up to 256 binary signs,[17] unlike the I Ching, which has 64. The Ifá originated in 15th century West Africa among the Yoruba people. In 2008, UNESCO added Ifá to its list of the "Masterpieces of the Oral and Intangible Heritage of Humanity".[18][19] The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450.[20] Slit drums with binary tones are used to encode messages across Africa and Asia.[7] Sets of binary combinations similar to the I Ching have also been used in traditional African divination systems, such as Ifá among others, as well as in medieval Western geomancy. The majority of Indigenous Australian languages use a base-2 system.[21] In the late 13th century Ramon Llull had the ambition to account for all wisdom in every branch of human knowledge of the time.
For that purpose he developed a general method or "Ars generalis" based on binary combinations of a number of simple basic principles or categories, for which he has been considered a predecessor of computing science and artificial intelligence.[22] In 1605, Francis Bacon discussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text.[23] Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature".[23] (See Bacon's cipher.) In 1617, John Napier described a system he called location arithmetic for doing binary calculations using a non-positional representation by letters. Thomas Harriot investigated several positional numbering systems, including binary, but did not publish his results; they were found later among his papers.[24] Possibly the first publication of the system in Europe was by Juan Caramuel y Lobkowitz, in 1700.[25] Leibniz wrote in excess of a hundred manuscripts on binary, most of them remaining unpublished.[26] Before his first dedicated work in 1679, numerous manuscripts feature early attempts to explore binary concepts, including tables of numbers and basic calculations, often scribbled in the margins of works unrelated to mathematics.[26] In his first known work on binary, "On the Binary Progression" (1679), Leibniz introduced conversion between decimal and binary, along with algorithms for performing basic arithmetic operations such as addition, subtraction, multiplication, and division using binary numbers.
He also developed a form of binary algebra to calculate the square of a six-digit number and to extract square roots.[26] His most well known work appears in his article Explication de l'Arithmétique Binaire (published in 1703). The full title of Leibniz's article is translated into English as the "Explanation of Binary Arithmetic, which uses only the characters 1 and 0, with some remarks on its usefulness, and on the light it throws on the ancient Chinese figures of Fu Xi".[27] Leibniz's system uses 0 and 1, like the modern binary numeral system. An example of Leibniz's binary numeral system is as follows:[27] While corresponding with the Jesuit priest Joachim Bouvet in 1700, who had made himself an expert on the I Ching while a missionary in China, Leibniz explained his binary notation, and Bouvet demonstrated in his 1701 letters that the I Ching was an independent, parallel invention of binary notation. Leibniz and Bouvet concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired.[28] Of this parallel invention, Leibniz wrote in his "Explanation of Binary Arithmetic" that "this restitution of their meaning, after such a great interval of time, will seem all the more curious."[29] The relation was a central idea to his universal concept of a language or characteristica universalis, a popular idea that would be followed closely by his successors such as Gottlob Frege and George Boole in forming modern symbolic logic.[30] Leibniz was first introduced to the I Ching through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as a missionary. Leibniz saw the I Ching hexagrams as an affirmation of the universality of his own religious beliefs as a Christian.[31] Binary numerals were central to Leibniz's theology.
He believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo or creation out of nothing.[32] [A concept that] is not easy to impart to the pagans, is the creation ex nihilo through God's almighty power. Now one can say that nothing in the world can better present and demonstrate this power than the origin of numbers, as it is presented here through the simple and unadorned presentation of One and Zero or Nothing. In 1854, British mathematician George Boole published a landmark paper detailing an algebraic system of logic that would become known as Boolean algebra. His logical calculus was to become instrumental in the design of digital electronic circuitry.[33] In 1937, Claude Shannon produced his master's thesis at MIT that implemented Boolean algebra and binary arithmetic using electronic relays and switches for the first time in history. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design.[34] In November 1937, George Stibitz, then working at Bell Labs, completed a relay-based computer he dubbed the "Model K" (for "Kitchen", where he had assembled it), which calculated using binary addition.[35] Bell Labs authorized a full research program in late 1938 with Stibitz at the helm. Their Complex Number Computer, completed 8 January 1940, was able to calculate complex numbers. In a demonstration to the American Mathematical Society conference at Dartmouth College on 11 September 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by a teletype. It was the first computing machine ever used remotely over a phone line.
Some participants of the conference who witnessed the demonstration were John von Neumann, John Mauchly and Norbert Wiener, who wrote about it in his memoirs.[36][37][38] The Z1 computer, which was designed and built by Konrad Zuse between 1935 and 1938, used Boolean logic and binary floating-point numbers.[39] Any number can be represented by a sequence of bits (binary digits), which in turn may be represented by any mechanism capable of being in two mutually exclusive states. Any of the following rows of symbols can be interpreted as the binary numeric value of 667: The numeric value represented in each case depends on the value assigned to each symbol. In the earlier days of computing, switches, punched holes, and punched paper tapes were used to represent binary values.[40] In a modern computer, the numeric values may be represented by two different voltages; on a magnetic disk, magnetic polarities may be used. A "positive", "yes", or "on" state is not necessarily equivalent to the numerical value of one; it depends on the architecture in use. In keeping with the customary representation of numerals using Arabic numerals, binary numbers are commonly written using the symbols 0 and 1. When written, binary numerals are often subscripted, prefixed, or suffixed to indicate their base, or radix. The following notations are equivalent: When spoken, binary numerals are usually read digit-by-digit, to distinguish them from decimal numerals. For example, the binary numeral 100 is pronounced one zero zero, rather than one hundred, to make its binary nature explicit and for purposes of correctness. Since the binary numeral 100 represents the value four, it would be confusing to refer to the numeral as one hundred (a word that represents a completely different value, or amount). Alternatively, the binary numeral 100 can be read out as "four" (the correct value), but this does not make its binary nature explicit. Counting in binary is similar to counting in any other number system.
Beginning with a single digit, counting proceeds through each symbol, in increasing order. Before examining binary counting, it is useful to briefly discuss the more familiar decimal counting system as a frame of reference. Decimal counting uses the ten symbols 0 through 9. Counting begins with the incremental substitution of the least significant digit (rightmost digit) which is often called the first digit. When the available symbols for this position are exhausted, the least significant digit is reset to 0, and the next digit of higher significance (one position to the left) is incremented (overflow), and incremental substitution of the low-order digit resumes. This method of reset and overflow is repeated for each digit of significance. Counting progresses as follows: Binary counting follows the exact same procedure, and again the incremental substitution begins with the least significant binary digit, or bit (the rightmost one, also called the first bit), except that only the two symbols 0 and 1 are available. Thus, after a bit reaches 1 in binary, an increment resets it to 0 but also causes an increment of the next bit to the left: In the binary system, each bit represents an increasing power of 2, with the rightmost bit representing 2⁰, the next representing 2¹, then 2², and so on. The value of a binary number is the sum of the powers of 2 represented by each "1" bit. For example, the binary number 100101 is converted to decimal form as follows: 100101₂ = 1 × 2⁵ + 0 × 2⁴ + 0 × 2³ + 1 × 2² + 0 × 2¹ + 1 × 2⁰ = 32 + 4 + 1 = 37₁₀. Fractions in binary arithmetic terminate only if the denominator is a power of 2. As a result, 1/10 does not have a finite binary representation (10 has prime factors 2 and 5). This causes 10 × 1/10 not to precisely equal 1 in binary floating-point arithmetic. As an example, to interpret the binary expression for 1/3 = .010101..., this means: 1/3 = 0 × 2⁻¹ + 1 × 2⁻² + 0 × 2⁻³ + 1 × 2⁻⁴ + ... = 0.3125 + ...
An exact value cannot be found with a sum of a finite number of inverse powers of two; the zeros and ones in the binary representation of 1/3 alternate forever. Arithmetic in binary is much like arithmetic in other positional notation numeral systems. Addition, subtraction, multiplication, and division can be performed on binary numerals. The simplest arithmetic operation in binary is addition. Adding two single-digit binary numbers is relatively simple, using a form of carrying: Adding two "1" digits produces a digit "0", while 1 will have to be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented: This is known as carrying. When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary: In this example, two numerals are being added together: 01101₂ (13₁₀) and 10111₂ (23₁₀). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100₂ (36₁₀). When computers must add two numbers, the rule that x xor y = (x + y) mod 2 for any two bits x and y allows for very fast calculation, as well. A simplification for many binary addition problems is the "long carry method" or "Brookhouse Method of Binary Addition".
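The column-by-column carry procedure illustrated above can be sketched as follows (a Python illustration working on bit strings):

```python
def add_binary(a, b):
    """Schoolbook binary addition: sum each column plus the incoming carry;
    write the low bit at the bottom, carry the high bit to the next column."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        digits.append(str(s % 2))   # digit written in the bottom row
        carry = s // 2              # carry bit for the column to the left
    if carry:
        digits.append('1')
    return ''.join(reversed(digits))

print(add_binary('01101', '10111'))   # '100100' (13 + 23 = 36)
```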
This method is particularly useful when one of the numbers contains a long stretch of ones. It is based on the simple premise that under the binary system, when given a stretch of digits composed entirely of n ones (where n is any integer length), adding 1 will result in the number 1 followed by a string of n zeros. That concept follows, logically, just as in the decimal system, where adding 1 to a string of n 9s will result in the number 1 followed by a string of n 0s: Such long strings are quite common in the binary system. From that one finds that large binary numbers can be added using two simple steps, without excessive carry operations. In the following example, two numerals are being added together: 1110111110₂ (958₁₀) and 1010110011₂ (691₁₀), using the traditional carry method on the left, and the long carry method on the right: The top row shows the carry bits used. Instead of the standard carry from one column to the next, the lowest-ordered "1" with a "1" in the corresponding place value beneath it may be added and a "1" may be carried to one digit past the end of the series. The "used" numbers must be crossed off, since they are already added. Other long strings may likewise be cancelled using the same technique. Then, simply add together any remaining digits normally. Proceeding in this manner gives the final answer of 11001110001₂ (1649₁₀). In our simple example using small numbers, the traditional carry method required eight carry operations, yet the long carry method required only two, representing a substantial reduction of effort. The binary addition table is similar to, but not the same as, the truth table of the logical disjunction operation ∨. The difference is that 1 ∨ 1 = 1, while 1 + 1 = 10₂. Subtraction works in much the same way: Subtracting a "1" digit from a "0" digit produces the digit "1", while 1 will have to be subtracted from the next column.
This is known as borrowing. The principle is the same as for carrying. When the result of a subtraction is less than 0, the least possible value of a digit, the procedure is to "borrow" the deficit divided by the radix (that is, 10/10) from the left, subtracting it from the next positional value. Subtracting a positive number is equivalent to adding a negative number of equal absolute value. Computers use signed number representations to handle negative numbers, most commonly the two's complement notation. Such representations eliminate the need for a separate "subtract" operation. Using two's complement notation, subtraction can be summarized by the following formula: Multiplication in binary is similar to its decimal counterpart. Two numbers A and B can be multiplied by partial products: for each digit in B, the product of that digit in A is calculated and written on a new line, shifted leftward so that its rightmost digit lines up with the digit in B that was used. The sum of all these partial products gives the final result. Since there are only two digits in binary, there are only two possible outcomes of each partial multiplication: For example, the binary numbers 1011 and 1010 are multiplied as follows: Binary numbers can also be multiplied with bits after a binary point: See also Booth's multiplication algorithm. The binary multiplication table is the same as the truth table of the logical conjunction operation ∧. Long division in binary is again similar to its decimal counterpart. In the example below, the divisor is 101₂, or 5 in decimal, while the dividend is 11011₂, or 27 in decimal. The procedure is the same as that of decimal long division; here, the divisor 101₂ goes into the first three digits 110₂ of the dividend one time, so a "1" is written on the top line.
This result is multiplied by the divisor, and subtracted from the first three digits of the dividend; the next digit (a "1") is included to obtain a new three-digit sequence: The procedure is then repeated with the new sequence, continuing until the digits in the dividend have been exhausted: Thus, the quotient of 11011₂ divided by 101₂ is 101₂, as shown on the top line, while the remainder, shown on the bottom line, is 10₂. In decimal, this corresponds to the fact that 27 divided by 5 is 5, with a remainder of 2. Aside from long division, one can also devise the procedure so as to allow for over-subtracting from the partial remainder at each iteration, thereby leading to alternative methods which are less systematic, but more flexible as a result. The process of taking a binary square root digit by digit is essentially the same as for a decimal square root but much simpler, due to the binary nature. First group the digits in pairs, using a leading 0 if necessary so there are an even number of digits. Now at each step, consider the answer so far, extended with the digits 01. If this can be subtracted from the current remainder, do so. Then extend the remainder with the next pair of digits. If you subtracted, the next digit of the answer is 1, otherwise it is 0. Though not directly related to the numerical interpretation of binary symbols, sequences of bits may be manipulated using Boolean logical operators. When a string of binary symbols is manipulated in this way, it is called a bitwise operation; the logical operators AND, OR, and XOR may be performed on corresponding bits in two binary numerals provided as input. The logical NOT operation may be performed on individual bits in a single binary numeral provided as input. Sometimes, such operations may be used as arithmetic short-cuts, and may have other computational benefits as well. For example, an arithmetic shift left of a binary number is the equivalent of multiplication by a (positive, integral) power of 2.
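The partial-product multiplication and long-division procedures described above can be sketched as follows (a Python illustration; bit strings in, bit strings out):

```python
def multiply_binary(a, b):
    """Partial products: for each '1' digit of b, add a copy of a
    shifted left so it lines up with that digit's position."""
    result = 0
    for shift, digit in enumerate(reversed(b)):
        if digit == '1':
            result += int(a, 2) << shift
    return format(result, 'b')

def divide_binary(dividend, divisor):
    """Long division: bring down one bit at a time and subtract the
    divisor whenever it fits into the partial remainder."""
    d = int(divisor, 2)
    quotient, remainder = [], 0
    for bit in dividend:
        remainder = (remainder << 1) | int(bit)
        if remainder >= d:
            remainder -= d
            quotient.append('1')
        else:
            quotient.append('0')
    return ''.join(quotient).lstrip('0') or '0', format(remainder, 'b')

print(multiply_binary('1011', '1010'))   # '1101110' (11 x 10 = 110)
print(divide_binary('11011', '101'))     # ('101', '10'): 27 / 5 = 5 remainder 2
```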
To convert from a base-10 integer to its base-2 (binary) equivalent, the number is divided by two. The remainder is the least-significant bit. The quotient is again divided by two; its remainder becomes the next least significant bit. This process repeats until a quotient of one is reached. The sequence of remainders (including the final quotient of one) forms the binary value, as each remainder must be either zero or one when dividing by two. For example, (357)₁₀ is expressed as (101100101)₂.[43]

Conversion from base-2 to base-10 simply inverts the preceding algorithm. The bits of the binary number are used one by one, starting with the most significant (leftmost) bit. Beginning with the value 0, the prior value is doubled, and the next bit is then added to produce the next value. This can be organized in a multi-column table. For example, converting 10010101101₂ to decimal gives 1197₁₀. The first prior value of 0 is simply an initial decimal value. This method is an application of the Horner scheme.

The fractional parts of a number are converted with similar methods. They are again based on the equivalence of shifting with doubling or halving. In a fractional binary number such as 0.11010110101₂, the first digit is worth 1/2, the second (1/2)² = 1/4, etc. So if there is a 1 in the first place after the binary point, then the number is at least 1/2, and vice versa. Double that number is at least 1. This suggests the algorithm: repeatedly double the number to be converted, record whether the result is at least 1, and then throw away the integer part. For example, (1/3)₁₀ in binary is 0.010101…₂; thus the repeating decimal fraction 0.3… is equivalent to the repeating binary fraction 0.01… . Similarly, 0.1₁₀ in binary is the repeating binary fraction 0.00011… .
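All three conversion procedures just described can be sketched in Python (a minimal sketch; exact rationals stand in for the pencil-and-paper fractions):

```python
from fractions import Fraction

def to_binary(n):
    # Repeated division by two; the remainders, read in reverse,
    # are the binary digits.
    bits = []
    while n > 0:
        n, r = divmod(n, 2)
        bits.append(str(r))
    return ''.join(reversed(bits)) or '0'

def from_binary(s):
    # Horner scheme: double the prior value, then add the next bit.
    value = 0
    for bit in s:
        value = 2 * value + int(bit)
    return value

def fraction_to_binary(x, digits):
    # Repeatedly double; whether the result reaches 1 gives the next
    # bit, and the integer part is then thrown away.
    out = []
    for _ in range(digits):
        x *= 2
        if x >= 1:
            out.append('1')
            x -= 1
        else:
            out.append('0')
    return '0.' + ''.join(out)
```

For example, `to_binary(357)` gives `'101100101'`, `from_binary('10010101101')` gives 1197, and `fraction_to_binary(Fraction(1, 3), 6)` gives `'0.010101'`, matching the worked examples.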
It may come as a surprise that terminating decimal fractions can have repeating expansions in binary. It is for this reason that many are surprised to discover that 1/10 + … + 1/10 (addition of 10 numbers) differs from 1 in binary floating-point arithmetic. In fact, the only binary fractions with terminating expansions are of the form of an integer divided by a power of 2, which 1/10 is not.

The final conversion is from binary to decimal fractions. The only difficulty arises with repeating fractions, but otherwise the method is to shift the fraction to an integer, convert it as above, and then divide by the appropriate power of two in the decimal base. For example:

{\displaystyle {\begin{aligned}x&=&1100&.1{\overline {01110}}\ldots \\x\times 2^{6}&=&1100101110&.{\overline {01110}}\ldots \\x\times 2&=&11001&.{\overline {01110}}\ldots \\x\times (2^{6}-2)&=&1100010101\\x&=&1100010101/111110\\x&=&(789/62)_{10}\end{aligned}}}

Another way of converting from binary to decimal, often quicker for a person familiar with hexadecimal, is to do so indirectly: first convert x from binary into hexadecimal, and then convert x from hexadecimal into decimal.

For very large numbers, these simple methods are inefficient because they perform a large number of multiplications or divisions where one operand is very large. A simple divide-and-conquer algorithm is more effective asymptotically: given a binary number, it is divided by 10^k, where k is chosen so that the quotient roughly equals the remainder; then each of these pieces is converted to decimal and the two are concatenated.
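The shift-and-subtract calculation above can be mechanized with exact rationals. Here is a hedged Python sketch (the function name and argument convention are illustrative, not from the article):

```python
from fractions import Fraction

def repeating_binary_to_fraction(prefix, repeating):
    # prefix: integer part and any non-repeating fractional bits,
    #         e.g. "1100.1"
    # repeating: the block that repeats forever after them, e.g. "01110"
    ipart, _, fpart = prefix.partition('.')
    shift = 2 ** len(fpart)
    nonrepeating = Fraction(int(ipart + fpart, 2), shift)
    # A block of length m repeating forever is worth block / (2^m - 1)
    # at the position where it starts.
    block = Fraction(int(repeating, 2), (2 ** len(repeating) - 1) * shift)
    return nonrepeating + block
```

`repeating_binary_to_fraction('1100.1', '01110')` returns `Fraction(789, 62)`, agreeing with the worked example.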
Given a decimal number, it can be split into two pieces of about the same size, each of which is converted to binary, whereupon the first converted piece is multiplied by 10^k and added to the second converted piece, where k is the number of decimal digits in the second, least-significant piece before conversion.

Binary may be converted to and from hexadecimal more easily. This is because the radix of the hexadecimal system (16) is a power of the radix of the binary system (2). More specifically, 16 = 2⁴, so it takes four digits of binary to represent one digit of hexadecimal, as shown in the adjacent table. To convert a hexadecimal number into its binary equivalent, simply substitute the corresponding binary digits. To convert a binary number into its hexadecimal equivalent, divide it into groups of four bits; if the number of bits is not a multiple of four, simply insert extra 0 bits at the left (called padding). To convert a hexadecimal number into its decimal equivalent, multiply the decimal equivalent of each hexadecimal digit by the corresponding power of 16 and add the resulting values.

Binary is also easily converted to the octal numeral system, since octal uses a radix of 8, which is a power of two (namely, 2³, so it takes exactly three binary digits to represent an octal digit). The correspondence between octal and binary numerals is the same as for the first eight digits of hexadecimal in the table above. Binary 000 is equivalent to the octal digit 0, binary 111 is equivalent to octal 7, and so forth. Converting from octal to binary proceeds in the same fashion as it does for hexadecimal.

Non-integers can be represented by using negative powers, which are set off from the other digits by means of a radix point (called a decimal point in the decimal system). For example, the binary number 11.01₂ means 1 × 2¹ + 1 × 2⁰ + 0 × 2⁻¹ + 1 × 2⁻², for a total of 3.25 decimal.
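The digit-substitution conversions between hexadecimal and binary can be sketched in Python (an illustrative sketch; the helper names are the author's own, not from the article):

```python
# One four-bit pattern per hexadecimal digit, e.g. 'a' -> '1010'.
HEX_TO_BITS = {f'{d:x}': f'{d:04b}' for d in range(16)}

def hex_to_binary(h):
    # Substitute four binary digits for each hexadecimal digit.
    bits = ''.join(HEX_TO_BITS[c] for c in h.lower())
    return bits.lstrip('0') or '0'

def binary_to_hex(b):
    # Pad on the left to a multiple of four bits, then read off groups.
    b = b.zfill(-(-len(b) // 4) * 4)
    return ''.join(f'{int(b[i:i + 4], 2):x}' for i in range(0, len(b), 4))

def hex_to_decimal(h):
    # Multiply each digit's value by the matching power of 16 and add.
    return sum(int(c, 16) * 16 ** i for i, c in enumerate(reversed(h)))
```

For example, `hex_to_binary('5f')` gives `'1011111'`, `binary_to_hex('1011111')` gives `'5f'` after padding to `01011111`, and `hex_to_decimal('5f')` gives 95.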
All dyadic rational numbers p/2^a have a terminating binary numeral: the binary representation has a finite number of terms after the radix point. Other rational numbers have binary representation, but instead of terminating, they recur, with a finite sequence of digits repeating indefinitely. For instance

{\displaystyle {\frac {1_{10}}{3_{10}}}={\frac {1_{2}}{11_{2}}}=0.01010101{\overline {01}}\ldots \,_{2}}

{\displaystyle {\frac {12_{10}}{17_{10}}}={\frac {1100_{2}}{10001_{2}}}=0.1011010010110100{\overline {10110100}}\ldots \,_{2}}

The phenomenon that the binary representation of any rational is either terminating or recurring also occurs in other radix-based numeral systems. See, for instance, the explanation in decimal. Another similarity is the existence of alternative representations for any terminating representation, relying on the fact that 0.111111… is the sum of the geometric series 2⁻¹ + 2⁻² + 2⁻³ + … which is 1. Binary numerals that neither terminate nor recur represent irrational numbers. For instance,
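The dyadic-rational criterion for terminating expansions can be checked mechanically, as in this Python sketch (the function name is illustrative):

```python
from fractions import Fraction

def terminates_in_binary(q):
    # A rational terminates in binary iff its reduced denominator
    # is a power of two, i.e. the number is a dyadic rational.
    d = Fraction(q).denominator
    return d & (d - 1) == 0

# 1/10 is not dyadic, so the binary float nearest to 0.1 is inexact,
# and ten copies of it need not sum to exactly 1.0.
print(terminates_in_binary(Fraction(3, 8)))    # dyadic: terminates
print(terminates_in_binary(Fraction(1, 10)))   # not dyadic: recurs
print(sum([0.1] * 10) == 1.0)
```

In CPython the last comparison is False, illustrating the floating-point surprise mentioned above.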
https://en.wikipedia.org/wiki/Binary_numeral_system
Pierre René, Viscount Deligne (French: [dəliɲ]; born 3 October 1944) is a Belgian mathematician. He is best known for work on the Weil conjectures, leading to a complete proof in 1973. He is the winner of the 2013 Abel Prize, 2008 Wolf Prize, 1988 Crafoord Prize, and 1978 Fields Medal.

Deligne was born in Etterbeek, attended school at Athénée Adolphe Max and studied at the Université libre de Bruxelles (ULB), writing a dissertation titled Théorème de Lefschetz et critères de dégénérescence de suites spectrales (Theorem of Lefschetz and criteria of degeneration of spectral sequences). He completed his doctorate at the University of Paris-Sud in Orsay in 1972 under the supervision of Alexander Grothendieck, with a thesis titled Théorie de Hodge.

Starting in 1965, Deligne worked with Grothendieck at the Institut des Hautes Études Scientifiques (IHÉS) near Paris, initially on the generalization within scheme theory of Zariski's main theorem. In 1968, he also worked with Jean-Pierre Serre; their work led to important results on the l-adic representations attached to modular forms, and the conjectural functional equations of L-functions. Deligne also focused on topics in Hodge theory. He introduced the concept of weights and tested them on objects in complex geometry. He also collaborated with David Mumford on a new description of the moduli spaces for curves. Their work came to be seen as an introduction to one form of the theory of algebraic stacks, and recently has been applied to questions arising from string theory.[1] But Deligne's most famous contribution was his proof of the third and last of the Weil conjectures. This proof completed a programme initiated and largely developed by Alexander Grothendieck lasting for more than a decade. As a corollary he proved the celebrated Ramanujan–Petersson conjecture for modular forms of weight greater than one; weight one was proved in his work with Serre. Deligne's 1974 paper contains the first proof of the Weil conjectures.
Deligne's contribution was to supply the estimate of the eigenvalues of the Frobenius endomorphism, considered the geometric analogue of the Riemann hypothesis. It also led to a proof of the Lefschetz hyperplane theorem and the old and new estimates of the classical exponential sums, among other applications. Deligne's 1980 paper contains a much more general version of the Riemann hypothesis.

From 1970 until 1984, Deligne was a permanent member of the IHÉS staff. During this time he did much important work outside of his work on algebraic geometry. In joint work with George Lusztig, Deligne applied étale cohomology to construct representations of finite groups of Lie type; with Michael Rapoport, Deligne worked on the moduli spaces from the 'fine' arithmetic point of view, with application to modular forms. He received a Fields Medal in 1978.

In 1984, Deligne moved to the Institute for Advanced Study in Princeton. In terms of the completion of some of the underlying Grothendieck program of research, he defined absolute Hodge cycles, as a surrogate for the missing and still largely conjectural theory of motives. This idea allows one to get around the lack of knowledge of the Hodge conjecture, for some applications. The theory of mixed Hodge structures, a powerful tool in algebraic geometry that generalizes classical Hodge theory, was created by applying weight filtration, Hironaka's resolution of singularities and other methods, which he then used to prove the Weil conjectures. He reworked the Tannakian category theory in his 1990 paper for the "Grothendieck Festschrift", employing Beck's theorem – the Tannakian category concept being the categorical expression of the linearity of the theory of motives as the ultimate Weil cohomology. All this is part of the yoga of weights, uniting Hodge theory and the l-adic Galois representations.
The Shimura variety theory is related, by the idea that such varieties should parametrize not just good (arithmetically interesting) families of Hodge structures, but actual motives. This theory is not yet a finished product, and more recent trends have used K-theory approaches.

With Alexander Beilinson, Joseph Bernstein, and Ofer Gabber, Deligne made definitive contributions to the theory of perverse sheaves.[2] This theory plays an important role in the recent proof of the fundamental lemma by Ngô Bảo Châu. It was also used by Deligne himself to greatly clarify the nature of the Riemann–Hilbert correspondence, which extends Hilbert's twenty-first problem to higher dimensions. Prior to Deligne's paper, Zoghman Mebkhout's 1980 thesis and the work of Masaki Kashiwara through D-modules theory (but published in the 80s) on the problem had appeared.

In 1974 at the IHÉS, Deligne's joint paper with Phillip Griffiths, John Morgan and Dennis Sullivan on the real homotopy theory of compact Kähler manifolds was a major piece of work in complex differential geometry which settled several important questions of both classical and modern significance. The input from Weil conjectures, Hodge theory, variations of Hodge structures, and many geometric and topological tools were critical to its investigations. His work in complex singularity theory generalized Milnor maps into an algebraic setting and extended the Picard–Lefschetz formula beyond its general format, generating a new method of research in this subject. His paper with Ken Ribet on abelian L-functions and their extensions to Hilbert modular surfaces and p-adic L-functions forms an important part of his work in arithmetic geometry.
Other important research achievements of Deligne include the notion of cohomological descent, motivic L-functions, mixed sheaves, nearby vanishing cycles, central extensions of reductive groups, geometry and topology of braid groups, providing the modern axiomatic definition of Shimura varieties, and the work in collaboration with George Mostow on the examples of non-arithmetic lattices and monodromy of hypergeometric differential equations in two- and three-dimensional complex hyperbolic spaces.

He was awarded the Fields Medal in 1978, the Crafoord Prize in 1988, the Balzan Prize in 2004, the Wolf Prize in 2008, and the Abel Prize in 2013, "for seminal contributions to algebraic geometry and for their transformative impact on number theory, representation theory, and related fields". He was elected a foreign member of the Académie des Sciences de Paris in 1978. In 2006 he was ennobled by the Belgian king as viscount.[3] In 2009, Deligne was elected a foreign member of the Royal Swedish Academy of Sciences[4] and a residential member of the American Philosophical Society.[5] He is a member of the Norwegian Academy of Science and Letters.[6]

Deligne wrote multiple hand-written letters to other mathematicians in the 1970s. Several mathematical concepts are named after Deligne; additionally, many different conjectures in mathematics have been called the Deligne conjecture.
https://en.wikipedia.org/wiki/Deligne_tensor_product_of_abelian_categories
In mathematics, the indefinite product operator is the inverse operator of Q(f(x)) = f(x+1)/f(x). It is a discrete version of the geometric integral of geometric calculus, one of the non-Newtonian calculi. Thus Q(∏ₓ f(x)) = f(x). More explicitly, if ∏ₓ f(x) = F(x), then F(x+1)/F(x) = f(x). If F(x) is a solution of this functional equation for a given f(x), then so is CF(x) for any constant C. Therefore, each indefinite product actually represents a family of functions, differing by a multiplicative constant. If T is a period of the function f(x), then a closed form follows from periodicity. The indefinite product can be expressed in terms of the indefinite sum: ∏ₓ f(x) = e^{∑ₓ ln f(x)}.

Some authors use the phrase "indefinite product" in a slightly different but related way to describe a product in which the numerical value of the upper limit is not given.[1]

This is a list of indefinite products ∏ₓ f(x). Not all functions have an indefinite product which can be expressed in elementary functions.
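As a quick numerical illustration (a sketch using Python's standard library, not anything from the article): the gamma function satisfies Γ(x+1) = x·Γ(x), so applying the operator Q to it recovers f(x) = x, exhibiting Γ as an indefinite product of the identity function.

```python
import math

def Q(F, x):
    # The operator from the definition: Q(F)(x) = F(x+1) / F(x).
    return F(x + 1) / F(x)

# Gamma satisfies Γ(x+1)/Γ(x) = x, so Q applied to math.gamma
# returns its argument, up to floating-point error.
for x in (1.5, 4.0, 7.25):
    assert math.isclose(Q(math.gamma, x), x)
```

Multiplying `math.gamma` by any constant C leaves these checks unchanged, mirroring the remark that indefinite products are determined only up to a multiplicative constant.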
https://en.wikipedia.org/wiki/Indefinite_product
An abacus (pl. abaci or abacuses), also called a counting frame, is a hand-operated calculating tool which was used from ancient times in the ancient Near East, Europe, China, and Russia, until the adoption of the Hindu–Arabic numeral system.[1] An abacus consists of a two-dimensional array of slidable beads (or similar objects). In their earliest designs, the beads could be loose on a flat surface or sliding in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation.

Each rod typically represents one digit of a multi-digit number laid out using a positional numeral system such as base ten (though some cultures used different numerical bases). Roman and East Asian abacuses use a system resembling bi-quinary coded decimal, with a top deck (containing one or two beads) representing fives and a bottom deck (containing four or five beads) representing ones. Natural numbers are normally used, but some allow simple fractional components (e.g. 1⁄2, 1⁄4, and 1⁄12 in the Roman abacus), and a decimal point can be imagined for fixed-point arithmetic.

Any particular abacus design supports multiple methods to perform calculations, including addition, subtraction, multiplication, division, and square and cube roots. The beads are first arranged to represent a number, then are manipulated to perform a mathematical operation with another number, and their final position can be read as the result (or can be used as the starting number for subsequent operations).

In the ancient world, abacuses were a practical calculating tool. The abacus was widely used in Europe as late as the 17th century, but fell out of use with the rise of decimal notation[2] and algorismic methods. Although calculators and computers are commonly used today instead of abacuses, abacuses remain in everyday use in some countries. The abacus has an advantage of not requiring a writing implement and paper (needed for algorism) or an electric power source.
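The bi-quinary layout described above can be modeled in a few lines of Python (an illustrative model of the bead counts, not a historical algorithm): each decimal digit d occupies one rod, shown as d // 5 beads counted on the top deck and d % 5 on the bottom deck.

```python
def to_biquinary(n):
    # One (fives, ones) pair per rod, most significant rod first:
    # digit d is shown as d // 5 top-deck beads and d % 5 bottom-deck beads.
    rods = []
    while True:
        n, d = divmod(n, 10)
        rods.append((d // 5, d % 5))
        if n == 0:
            return list(reversed(rods))
```

For example, `to_biquinary(74)` gives `[(1, 2), (0, 4)]`: one five-bead and two one-beads on the tens rod, and four one-beads on the units rod.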
Merchants, traders, and clerks in some parts of Eastern Europe, Russia, China, and Africa use abacuses. The abacus remains in common use as a scoring system in non-electronic table games. Others may use an abacus due to visual impairment that prevents the use of a calculator.[1] The abacus is still used to teach the fundamentals of mathematics to children in many countries such as Japan[3] and China.[4]

The word abacus dates to at least 1387 AD when a Middle English work borrowed the word from Latin that described a sandboard abacus. The Latin word is derived from ancient Greek ἄβαξ (abax) which means something without a base, and colloquially, any piece of rectangular material.[5][6][7] Alternatively, without reference to ancient texts on etymology, it has been suggested that it means "a square tablet strewn with dust",[8] or "drawing-board covered with dust (for the use of mathematics)"[9] (the exact shape of the Latin perhaps reflects the genitive form of the Greek word, ἄβακoς (abakos)). While the table strewn with dust definition is popular, some argue evidence is insufficient for that conclusion.[10][nb 1] Greek ἄβαξ probably borrowed from a Northwest Semitic language like Phoenician, evidenced by a cognate with the Hebrew word ʾābāq (אבק‎), or "dust" (in the post-Biblical sense "sand used as a writing surface").[11]

Both abacuses[12] and abaci[12] are used as plurals. The user of an abacus is called an abacist.[13]

The Sumerian abacus appeared between 2700 and 2300 BC.
It held a table of successive columns which delimited the successive orders of magnitude of their sexagesimal (base 60) number system.[14] Some scholars point to a character in Babylonian cuneiform that may have been derived from a representation of the abacus.[15] It is the belief of Old Babylonian[16] scholars, such as Ettore Carruccio, that Old Babylonians "seem to have used the abacus for the operations of addition and subtraction; however, this primitive device proved difficult to use for more complex calculations".[17]

Greek historian Herodotus mentioned the abacus in Ancient Egypt. He wrote that the Egyptians manipulated the pebbles from right to left, opposite in direction to the Greek left-to-right method. Archaeologists have found ancient disks of various sizes that are thought to have been used as counters. However, wall depictions of this instrument are yet to be discovered.[18]

At around 600 BC, Persians first began to use the abacus, during the Achaemenid Empire.[19] Under the Parthian, Sassanian, and Iranian empires, scholars concentrated on exchanging knowledge and inventions with the countries around them – India, China, and the Roman Empire – which is how the abacus may have been exported to other countries.

The earliest archaeological evidence for the use of the Greek abacus dates to the 5th century BC.[20] Demosthenes (384–322 BC) complained that the need to use pebbles for calculations was too difficult.[21][22] A play by Alexis from the 4th century BC mentions an abacus and pebbles for accounting, and both Diogenes and Polybius use the abacus as a metaphor for human behavior, stating "that men that sometimes stood for more and sometimes for less" like the pebbles on an abacus.[22] The Greek abacus was a table of wood or marble, pre-set with small counters in wood or metal for mathematical calculations.[23] This Greek abacus was used in Achaemenid Persia, the Etruscan civilization, Ancient Rome, and the Western Christian world until the French Revolution.
The Salamis Tablet, found on the Greek island of Salamis in 1846 AD, dates to 300 BC, making it the oldest counting board discovered so far. It is a slab of white marble 149 cm (59 in) in length, 75 cm (30 in) wide, and 4.5 cm (2 in) thick, on which are 5 groups of markings. In the tablet's center is a set of 5 parallel lines equally divided by a vertical line, capped with a semicircle at the intersection of the bottom-most horizontal line and the single vertical line. Below these lines is a wide space with a horizontal crack dividing it. Below this crack is another group of eleven parallel lines, again divided into two sections by a line perpendicular to them, but with the semicircle at the top of the intersection; the third, sixth and ninth of these lines are marked with a cross where they intersect with the vertical line.[24] Also from this time frame, the Darius Vase was unearthed in 1851. It was covered with pictures, including a "treasurer" holding a wax tablet in one hand while manipulating counters on a table with the other.[21]

The normal method of calculation in ancient Rome, as in Greece, was by moving counters on a smooth table. Originally pebbles (Latin: calculi) were used. Marked lines indicated units, fives, tens, etc. as in the Roman numeral system. Writing in the 1st century BC, Horace refers to the wax abacus, a board covered with a thin layer of black wax on which columns and figures were inscribed using a stylus.[25]

One example of archaeological evidence of the Roman abacus, shown nearby in reconstruction, dates to the 1st century AD. It has eight long grooves containing up to five beads in each and eight shorter grooves having either one or no beads in each. The groove marked I indicates units, X tens, and so on up to millions. The beads in the shorter grooves denote fives (five units, five tens, etc.) resembling a bi-quinary coded decimal system related to the Roman numerals. The short grooves on the right may have been used for marking Roman "ounces" (i.e.
fractions). The Roman system of 'counter casting' was used widely in medieval Europe, and persisted in limited use into the nineteenth century.[23] Wealthy abacists used decorative minted counters, called jetons. Due to Pope Sylvester II's reintroduction of the abacus with modifications, it became widely used in Europe again during the 11th century.[26][27] It used beads on wires, unlike the traditional Roman counting boards, which meant the abacus could be used much faster and was more easily moved.[28]

The earliest known written documentation of the Chinese abacus dates to the 2nd century BC.[29] The Chinese abacus, also known as the suanpan (算盤/算盘, lit. "calculating tray"), comes in various lengths and widths, depending on the operator. It usually has more than seven rods. There are two beads on each rod in the upper deck and five beads each in the bottom one, to represent numbers in a bi-quinary coded decimal-like system. The beads are usually rounded and made of hardwood. The beads are counted by moving them up or down towards the beam; beads moved toward the beam are counted, while those moved away from it are not.[30] One of the top beads is 5, while one of the bottom beads is 1. Each rod has a number under it, showing the place value. The suanpan can be reset to the starting position instantly by a quick movement along the horizontal axis to spin all the beads away from the horizontal beam at the center.

The prototype of the Chinese abacus appeared during the Han dynasty, and the beads are oval. The Song dynasty and earlier used the 1:4 type or four-beads abacus similar to the modern abacus including the shape of the beads commonly known as Japanese-style abacus.[31] In the early Ming dynasty, the abacus began to appear in a 1:5 ratio. The upper deck had one bead and the bottom had five beads.[32] In the late Ming dynasty, the abacus styles appeared in a 2:5 ratio.[32] The upper deck had two beads, and the bottom had five.
Various calculation techniques were devised for the suanpan enabling efficient calculations. Some schools teach students how to use it. In the long scroll Along the River During the Qingming Festival painted by Zhang Zeduan during the Song dynasty (960–1297), a suanpan is clearly visible beside an account book and doctor's prescriptions on the counter of an apothecary's (Feibao).

The similarity of the Roman abacus to the Chinese one suggests that one could have inspired the other, given evidence of a trade relationship between the Roman Empire and China. However, no direct connection has been demonstrated, and the similarity of the abacuses may be coincidental, both ultimately arising from counting with five fingers per hand. Where the Roman model (like most modern Korean and Japanese) has 4 plus 1 bead per decimal place, the standard suanpan has 5 plus 2. Incidentally, this allows use with a hexadecimal numeral system (or any base up to 18), as in the ancient Chinese calculation system 市用制 (shì yòng zhì), which is used for traditional Chinese measures of weight (jīn (斤) and liǎng (兩)). (Instead of running on wires as in the Chinese, Korean, and Japanese models, the Roman model used grooves, presumably making arithmetic calculations much slower.)
Another possible source of the suanpan is Chinese counting rods, which operated with a decimal system but lacked the concept of zero as a placeholder.[citation needed] The zero was probably introduced to the Chinese in the Tang dynasty (618–907) when travel in the Indian Ocean and the Middle East would have provided direct contact with India, allowing them to acquire the concept of zero and the decimal point from Indian merchants and mathematicians.[citation needed]

The Abhidharmakośabhāṣya of Vasubandhu (316–396), a Sanskrit work on Buddhist philosophy, says that the second-century CE philosopher Vasumitra said that "placing a wick (Sanskrit vartikā) on the number one (ekāṅka) means it is a one while placing the wick on the number hundred means it is called a hundred, and on the number one thousand means it is a thousand". It is unclear exactly what this arrangement may have been. Around the 5th century, Indian clerks were already finding new ways of recording the contents of the abacus.[33] Hindu texts used the term śūnya (zero) to indicate the empty column on the abacus.[34]

In Japan, the abacus is called soroban (算盤, そろばん, lit. "counting tray"). It was imported from China in the 14th century.[35] It was probably in use by the working class a century or more before the ruling class adopted it, as the class structure obstructed such changes.[36] The 1:4 abacus, which removes the seldom-used second and fifth bead, became popular in the 1940s.

Today's Japanese abacus is a 1:4 type, four-bead abacus, introduced from China in the Muromachi era. It adopts the form of one bead on the upper deck and four beads on the bottom. The top bead on the upper deck is equal to five and the bottom beads are similar to the Chinese or Korean abacus, and the decimal number can be expressed, so the abacus is designed as a 1:4 device. The beads are always in the shape of a diamond.
The quotient division is generally used instead of the division method; at the same time, in order to make the multiplication and division digits consistent, the division multiplication is used. Later, Japan had a 3:5 abacus called 天三算盤, which is now in the Ize Rongji collection of Shansi Village in Yamagata City. Japan also used a 2:5 type abacus. The four-bead abacus spread, and became common around the world.

Improvements to the Japanese abacus arose in various places. In China, an abacus with an aluminium frame and plastic beads has been used. The file is next to the four beads, and pressing the "clearing" button puts the upper bead in the upper position, and the lower bead in the lower position. The abacus is still manufactured in Japan, despite the proliferation, practicality, and affordability of pocket electronic calculators. The use of the soroban is still taught in Japanese primary schools as part of mathematics, primarily as an aid to faster mental calculation. Using visual imagery, one can complete a calculation as quickly as with a physical instrument.[37]

The Chinese abacus migrated from China to Korea around 1400 AD.[21][38][39] Koreans call it jupan (주판), supan (수판) or jusan (주산).[40] The four-beads abacus (1:4) was introduced during the Goryeo Dynasty. The 5:1 abacus was introduced to Korea from China during the Ming Dynasty.

Some sources mention the use of an abacus called a nepohualtzintzin in ancient Aztec culture.[41] This Mesoamerican abacus used a 5-digit base-20 system.[42] The word Nepōhualtzintzin (Nahuatl pronunciation: [nepoːwaɬˈt͡sint͡sin]) comes from Nahuatl, formed by the roots: ne – personal –; pōhual or pōhualli (Nahuatl pronunciation: [ˈpoːwalːi]) – the account –; and tzintzin (Nahuatl pronunciation: [ˈt͡sint͡sin]) – small similar elements. Its complete meaning was taken as: counting with small similar elements. Its use was taught in the Calmecac to the temalpouhqueh (Nahuatl pronunciation: [temaɬˈpoʍkeʔ]), who were students dedicated to taking the accounts of skies, from childhood.
The Nepōhualtzintzin was divided into two main parts separated by a bar or intermediate cord. In the left part were four beads. Beads in the first row have unitary values (1, 2, 3, and 4), and on the right side, three beads had values of 5, 10, and 15, respectively. In order to know the value of the respective beads of the upper rows, it is enough to multiply by 20 (by each row) the value of the corresponding count in the first row.

The device featured 13 rows with 7 beads, 91 in total. This was a basic number for this culture. It had a close relation to natural phenomena, the underworld, and the cycles of the heavens. One Nepōhualtzintzin (91) represented the number of days that a season of the year lasts, two Nepōhualtzintzin (182) is the number of days of the corn's cycle, from its sowing to its harvest, three Nepōhualtzintzin (273) is the number of days of a baby's gestation, and four Nepōhualtzintzin (364) completed a cycle and approximated one year. When translated into modern computer arithmetic, the Nepōhualtzintzin amounted to the rank from 10 to 18 in floating point, which precisely calculated large and small amounts, although round off was not allowed.

The rediscovery of the Nepōhualtzintzin was due to the Mexican engineer David Esparza Hidalgo,[43] who in his travels throughout Mexico found diverse engravings and paintings of this instrument and reconstructed several of them in gold, jade, encrustations of shell, etc.[44] Very old Nepōhualtzintzin are attributed to the Olmec culture, and some bracelets of Mayan origin, as well as a diversity of forms and materials in other cultures.

Sanchez wrote in Arithmetic in Maya that another base 5, base 4 abacus had been found in the Yucatán Peninsula that also computed calendar data. This was a finger abacus: on one hand, 0, 1, 2, 3, and 4 were used; and on the other hand 0, 1, 2, and 3 were used. Note the use of zero at the beginning and end of the two cycles.
The quipu of the Incas was a system of colored knotted cords used to record numerical data,[45] like advanced tally sticks – but not used to perform calculations. Calculations were carried out using a yupana (Quechua for "counting tool"; see figure) which was still in use after the conquest of Peru. The working principle of a yupana is unknown, but in 2001 Italian mathematician De Pasquale proposed an explanation. By comparing the form of several yupanas, researchers found that calculations were based on the Fibonacci sequence 1, 1, 2, 3, 5 and powers of 10, 20, and 40 as place values for the different fields in the instrument. Using the Fibonacci sequence would keep the number of grains within any one field at a minimum.[46]

The Russian abacus, the schoty (Russian: счёты, plural from Russian: счёт, counting), usually has a single slanted deck, with ten beads on each wire (except one wire with four beads for quarter-ruble fractions). The 4-bead wire was introduced for quarter-kopeks, which were minted until 1916.[47] The Russian abacus is used vertically, with each wire running horizontally. The wires are usually bowed upward in the center, to keep the beads pinned to either side. It is cleared when all the beads are moved to the right. During manipulation, beads are moved to the left. For easy viewing, the middle 2 beads on each wire (the 5th and 6th bead) usually are of a different color from the other eight. Likewise, the left bead of the thousands wire (and the million wire, if present) may have a different color.

The Russian abacus was in use in shops and markets throughout the former Soviet Union, and its usage was taught in most schools until the 1990s.[48][49] Even the 1874 invention of the mechanical calculator, the Odhner arithmometer, had not replaced it in Russia.
According to Yakov Perelman, some businessmen attempting to import calculators into the Russian Empire were known to leave in despair after watching a skilled abacus operator.[50] Likewise, the mass production of Felix arithmometers from 1924 did not significantly reduce abacus use in the Soviet Union.[51] The Russian abacus began to lose popularity only after the mass production of domestic microcalculators in 1974.[52] The Russian abacus was brought to France around 1820 by the mathematician Jean-Victor Poncelet, who had served in Napoleon's army and had been a prisoner of war in Russia.[53] To Poncelet's French contemporaries, it was something new. Poncelet used it, not for any applied purpose, but as a teaching and demonstration aid.[54] The Turks and the Armenian people used abacuses similar to the Russian schoty. It was named a coulba by the Turks and a choreb by the Armenians.[55] Around the world, abacuses have been used in pre-schools and elementary schools as an aid in teaching the numeral system and arithmetic. In Western countries, a bead frame similar to the Russian abacus but with straight wires and a vertical frame is common (see image). Each bead represents one unit (e.g. 74 can be represented by shifting all beads on 7 wires and 4 beads on the 8th wire, so numbers up to 100 may be represented). In the bead frame shown, the gap between the 5th and 6th wire, corresponding to the color change between the 5th and the 6th bead on each wire, suggests the latter use. Teaching multiplication, e.g. 6 times 7, may be represented by shifting 7 beads on 6 wires. The red-and-white abacus is used in contemporary primary schools for a wide range of number-related lessons.
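The unit-bead representation described above (e.g. 74 as seven full wires plus four beads on the eighth) can be sketched in a few lines; `bead_frame` is a hypothetical helper for illustration, not a standard notation:

```python
def bead_frame(n, wires=10, beads_per_wire=10):
    # Represent n (0 <= n <= wires * beads_per_wire) as the number of
    # beads shifted on each wire, one unit per bead.
    assert 0 <= n <= wires * beads_per_wire
    rows = []
    for _ in range(wires):
        shifted = min(n, beads_per_wire)
        rows.append(shifted)
        n -= shifted
    return rows

print(bead_frame(74))  # seven full wires, 4 beads on the 8th, two empty
```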
The twenty bead version, referred to by its Dutch name rekenrek ("calculating frame"), is often used, either on a string of beads or on a rigid framework.[56] Learning how to calculate with the abacus may improve capacity for mental calculation. Abacus-based mental calculation (AMC), which was derived from the abacus, is the act of performing calculations, including addition, subtraction, multiplication, and division, in the mind by manipulating an imagined abacus. It is a high-level cognitive skill that runs calculations with an effective algorithm. People doing long-term AMC training show higher numerical memory capacity and experience more effectively connected neural pathways.[57][58] They are able to retrieve memory to deal with complex processes.[59] AMC involves both visuospatial and visuomotor processing that generate the visual abacus and move the imaginary beads.[60] Since it only requires that the final position of beads be remembered, it takes less memory and less computation time.[60] The binary abacus is used to explain how computers manipulate numbers.[61] The abacus shows how numbers, letters, and signs can be stored in a binary system on a computer, or via ASCII. The device consists of beads on parallel wires arranged in three rows; each bead represents a switch which can be either "on" or "off". An adapted abacus, invented by Tim Cranmer, and called a Cranmer abacus, is commonly used by visually impaired users. A piece of soft fabric or rubber is placed behind the beads, keeping them in place while the users manipulate them.
The device is then used to perform the mathematical functions of multiplication, division, addition, subtraction, square root, and cube root.[62] Although blind students have benefited from talking calculators, the abacus is often taught to these students in early grades.[63] Blind students can also complete mathematical assignments using a braille-writer and Nemeth code (a type of braille code for mathematics), but large multiplication and long division problems are tedious. The abacus gives these students a tool to compute mathematical problems that equals the speed and mathematical knowledge required by their sighted peers using pencil and paper. Many blind people find this number machine a useful tool throughout life.[62]
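The binary abacus described above stores characters as rows of on/off beads. A minimal sketch of the underlying ASCII-to-bits mapping (the `char_to_beads` helper is hypothetical, for illustration only):

```python
def char_to_beads(ch, bits=8):
    # One wire position per bit: 1 = bead "on", 0 = bead "off".
    return [int(b) for b in format(ord(ch), f"0{bits}b")]

print(char_to_beads("A"))  # ASCII 65 -> [0, 1, 0, 0, 0, 0, 0, 1]
```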
https://en.wikipedia.org/wiki/Abacus
The term "computer", in use from the early 17th century (the first known written reference dates from 1613),[1] meant "one who computes": a person performing mathematical calculations, before electronic calculators became available. Alan Turing described the "human computer" as someone who is "supposed to be following fixed rules; he has no authority to deviate from them in any detail."[2] Teams of people, often women from the late nineteenth century onwards, were used to undertake long and often tedious calculations; the work was divided so that this could be done in parallel. The same calculations were frequently performed independently by separate teams to check the correctness of the results. Since the end of the 20th century, the term "human computer" has also been applied to individuals with prodigious powers of mental arithmetic, also known as mental calculators. Astronomers in Renaissance times used that term about as often as they called themselves "mathematicians" for their principal work of calculating the positions of planets. They often hired a "computer" to assist them. For some people, such as Johannes Kepler, assisting a scientist in computation was a temporary position until they moved on to greater advancements.
Before he died in 1617, John Napier suggested ways by which "the learned, who perchance may have plenty of pupils and computers" might construct an improved logarithm table.[3]: p. 46 Computing became more organized when the Frenchman Alexis Claude Clairaut (1713–1765) divided the computation to determine the time of the return of Halley's Comet with two colleagues, Joseph Lalande and Nicole-Reine Lepaute.[4] Human computers continued plotting the future movements of astronomical objects to create celestial tables for almanacs in the late 1760s.[5] The computers working on the Nautical Almanac for the British Admiralty included William Wales, Israel Lyons and Richard Dunthorne.[6] The project was overseen by Nevil Maskelyne.[7] Maskelyne would borrow tables from other sources as often as he could in order to reduce the number of calculations his team of computers had to make.[8] Women were generally excluded, with some exceptions such as Mary Edwards, who worked from the 1780s to 1815 as one of thirty-five computers for the British Nautical Almanac used for navigation at sea.
The United States also worked on its own version of a nautical almanac in the 1840s, with Maria Mitchell being one of the best-known computers on the staff.[9] Other innovations in human computing included the work done by a group of boys who worked in the Octagon Room of the Royal Greenwich Observatory for Astronomer Royal George Airy.[10] Airy's computers, hired after 1835, could be as young as fifteen, and they were working on a backlog of astronomical data.[11] The way that Airy organized the Octagon Room, with a manager, pre-printed computing forms, and standardized methods of calculating and checking results (similar to the way the Nautical Almanac computers operated), would remain a standard for computing operations for the next 80 years.[12] Women were increasingly involved in computing after 1865.[13] Private companies hired them for computing and to manage office staff.[13] In the 1870s, the United States Signal Corps created a new way of organizing human computing to track weather patterns.[14] This built on previous work from the US Navy and the Smithsonian meteorological project.[15] The Signal Corps used a small computing staff that processed data that had to be collected quickly and finished in "intensive two-hour shifts".[16] Each individual human computer was responsible for only part of the data.[14] In the late nineteenth century, Edward Charles Pickering organized the "Harvard Computers".[17] The first woman to approach them, Anna Winlock, asked Harvard Observatory for a computing job in 1875.[18] By 1880, all of the computers working at the Harvard Observatory were women.[18] The standard computer pay started at twenty-five cents an hour.[19] There was such a huge demand to work there that some women offered to work for the Harvard Computers for free.[20] Many of the women astronomers from this era were computers, with possibly the best-known being Florence Cushman, Henrietta Swan Leavitt, and Annie Jump Cannon, who worked with Pickering from 1888, 1893, and 1896
respectively. Cannon could classify stars at a rate of three per minute.[21] Mina Fleming, one of the Harvard Computers, published The Draper Catalogue of Stellar Spectra in 1890.[22] The catalogue organized stars by spectral lines.[22] The catalogue continued to be expanded by the Harvard Computers, who added new stars in successive volumes.[23] Elizabeth Williams was involved in calculations in the search for a new planet, Pluto, at the Lowell Observatory. In 1893, Francis Galton created the Committee for Conducting Statistical Inquiries into the Measurable Characteristics of Plants and Animals, which reported to the Royal Society.[24] The committee used advanced techniques for scientific research and supported the work of several scientists.[24] W. F. Raphael Weldon, the first scientist supported by the committee, worked with his wife, Florence Tebb Weldon, who was his computer.[24] Weldon used logarithms and mathematical tables created by August Leopold Crelle and had no calculating machine.[25] Karl Pearson, who had a lab at the University of London, felt that the work Weldon did was "hampered by the committee".[26] However, Pearson did create a mathematical formula that the committee was able to use for data correlation.[27] Pearson brought his correlation formula to his own Biometrics Laboratory.[27] Pearson had volunteer and salaried computers, both men and women.[28] Alice Lee was one of his salaried computers, working with histograms and chi-squared statistics.[29] Pearson also worked with Beatrice and Frances Cave-Brown-Cave.[29] Pearson's lab, by 1906, had mastered the art of mathematical table making.[29] Human computers were used to compile 18th and 19th century Western European mathematical tables, for example those for trigonometry and logarithms. Although these tables were most often known by the names of the principal mathematician involved in the project, such tables were often in fact the work of an army of unknown and unsung computers.
Ever more accurate tables, to a high degree of precision, were needed for navigation and engineering. Approaches differed, but one was to break up the project into a form of piece work completed at home. The computers, often educated middle-class women whom society deemed it unseemly to engage in the professions or go out to work, would receive and send back packets of calculations by post.[30] The Royal Astronomical Society eventually gave space to a new committee, the Mathematical Tables Committee, which was the only professional organization for human computers in 1925.[31] Human computers were used to predict the effects of building the Afsluitdijk between 1927 and 1932 in the Zuiderzee in the Netherlands. The computer simulation was set up by Hendrik Lorentz.[32] A visionary application to meteorology can be found in the scientific work of Lewis Fry Richardson, who, in 1922, estimated that 64,000 humans could forecast the weather for the whole globe by solving the attending differential primitive equations numerically.[33] Around 1910 he had already used human computers to calculate the stresses inside a masonry dam.[34] It was not until World War I that computing became a profession. "The First World War required large numbers of human computers. Computers on both sides of the war produced map grids, surveying aids, navigation tables and artillery tables. With the men at war, most of these new computers were women and many were college educated."[35] This would happen again during World War II: as more men joined the fight, college-educated women were left to fill their positions. One of the first female computers, Elizabeth Webb Wilson, was hired by the Army in 1918 and was a graduate of George Washington University. Wilson "patiently sought a war job that would make use of her mathematical skill.
In later years, she would claim that the war spared her from the 'Washington social whirl', the rounds of society events that should have procured for her a husband"[35] and instead she was able to have a career. After the war, Wilson continued with a career in mathematics, became an actuary, and turned her focus to life tables. Human computers played integral roles in the World War II war effort in the United States, and because of the depletion of the male labor force due to the draft, many computers during World War II were women, frequently with degrees in mathematics. In the 1940s, women were hired to examine nuclear and particle tracks left on photographic emulsions.[36] In the Manhattan Project, human computers working with a variety of mechanical aids assisted numerical studies of the complex formulas related to nuclear fission.[37] Human computers were involved in calculating ballistics tables during World War I.[38] Between the two world wars, computers were used in the Department of Agriculture in the United States and also at Iowa State College.[39] The human computers in these places also used calculating machines and early electrical computers to aid in their work.[40] In the 1930s, the Columbia University Statistical Bureau was created by Benjamin Wood.[41] Organized computing was also established at Indiana University, the Cowles Commission and the National Research Council.[42] Following World War II, the National Advisory Committee for Aeronautics (NACA) used human computers in flight research to transcribe raw data from celluloid film and oscillograph paper and then, using slide rules and electric calculators, reduced the data to standard engineering units. Margot Lee Shetterly's biographical book, Hidden Figures (made into a movie of the same name in 2016), depicts African-American women who served as human computers at NASA in support of the Friendship 7, the first American crewed mission into Earth orbit.[43] NACA had begun hiring black women as computers from 1940.[44] One
such computer was Dorothy Vaughan, who began her work in 1943 with the Langley Research Center as a special hire to aid the war effort,[45] and who came to supervise the West Area Computers, a group of African-American women who worked as computers at Langley. Human computing was, at the time, considered menial work. On November 8, 2019, the Congressional Gold Medal was awarded "In recognition of all the women who served as computers, mathematicians, and engineers at the National Advisory Committee for Aeronautics and the National Aeronautics and Space Administration (NASA) between the 1930s and the 1970s."[46] As electronic computers became more available, human computers, especially women, were drafted as some of the first computer programmers.[47] Because the six people responsible for setting up problems on the ENIAC (the first general-purpose electronic digital computer, built at the University of Pennsylvania during World War II) were drafted from a corps of human computers, the world's first professional computer programmers were women, namely: Kay McNulty, Betty Snyder, Marlyn Wescoff, Ruth Lichterman, Betty Jean Jennings, and Fran Bilas.[48] The term "human computer" has recently been used by a group of researchers who refer to their work as "human computation".[49] In this usage, "human computer" refers to activities of humans in the context of human-based computation (HBC). This use of "human computer" is debatable for the following reason: HBC is a computational technique in which a machine outsources certain parts of a task to humans, and those parts are not necessarily algorithmic. In fact, in the context of HBC, most of the time humans are not provided with a sequence of exact steps to be executed to yield the desired result; HBC is agnostic about how humans solve the problem. This is why "outsourcing" is the term used in the definition above. The use of humans in the historical role of "human computers" for HBC is very rare.
https://en.wikipedia.org/wiki/Computer_(occupation)
The Curta is a hand-held mechanical calculator designed by Curt Herzstark.[1] It is known for its extremely compact design: a small cylinder that fits in the palm of the hand. It was affectionately known as the "pepper grinder" or "peppermill" due to its shape and means of operation; its superficial resemblance to a certain type of hand grenade also earned it the nickname "math grenade".[2] Curtas were considered the best portable calculators available until they were displaced by electronic calculators in the 1970s.[1] The Curta was conceived by Curt Herzstark in the 1930s in Vienna, Austria. By 1938, he had filed a key patent covering his complemented stepped drum.[3][4] This single drum replaced the multiple drums, typically around 10 or so, of contemporary calculators, and it enabled not only addition, but also subtraction through nines' complement math, essentially subtracting by adding. The nines' complement breakthrough eliminated the significant mechanical complexity created when "borrowing" during subtraction. This drum was the key to miniaturizing the Curta. His work on the pocket calculator stopped in 1938 when the Nazis forced him and his company to concentrate on manufacturing precision instruments for the German army.[5] Herzstark, the son of a Catholic mother and Jewish father, was taken into custody in 1943 and eventually sent to Buchenwald concentration camp, where he was encouraged to continue his earlier research: While I was imprisoned inside Buchenwald I had, after a few days, told the [people] in the work production scheduling department of my ideas. The head of the department, Mr. Munich said, 'See, Herzstark, I understand you've been working on a new thing, a small calculating machine. Do you know, I can give you a tip. We will allow you to make and draw everything. If it is really worth something, then we will give it to the Führer as a present after we win the war. Then, surely, you will be made an Aryan.'
For me, that was the first time I thought to myself, my God, if you do this, you can extend your life. And then and there I started to draw the CURTA, the way I had imagined it. In the camp, Herzstark was able to develop working drawings for a manufacturable device. Buchenwald was liberated by U.S. troops on 11 April 1945, and by November Herzstark had located a factory in Sommertal, near Weimar, whose machinists were skilled enough to produce three working prototypes.[6] Soviet forces had arrived in July, and Herzstark feared being sent to Russia, so, later that same month, he fled to Austria. He began to look for financial backers, at the same time filing continuation patents as well as several additional patents to protect his work. Franz Joseph II, Prince of Liechtenstein, eventually showed interest in the manufacture of the device, and soon a newly formed company, Contina AG Mauren, began production in Liechtenstein. It was not long before Herzstark's financial backers, thinking they had got from him all they needed, conspired to force him out by reducing the value of all of the company's existing stock to zero, including his one-third interest.[1] These were the same people who had earlier elected not to have Herzstark transfer ownership of his patents to the company, so that, should anyone sue, they would be suing Herzstark, not the company, thereby protecting themselves at Herzstark's expense. This ploy now backfired: without the patent rights, they could manufacture nothing. Herzstark was able to negotiate a new agreement, and money continued to flow to him. The Curta, however, lives on as a highly popular collectible, with thousands of machines working just as smoothly as they did at the time of their manufacture.[1][6][7] An estimated 140,000 Curta calculators were made (80,000 Type I and 60,000 Type II).
According to Curt Herzstark, the last Curta was produced in 1972.[6] The Curta Type I sold for $125 in the later years of production, and the Type II sold for $175. While only 3% of Curtas were returned to the factory for warranty repair,[6] a small but significant number of buyers returned their Curtas in pieces, having attempted to disassemble them. Reassembling the machine was more difficult, requiring intimate knowledge of the orientation of, and installation order for, each part and sub-assembly, plus special guides designed to hold the pieces in place during assembly. Many identical-looking parts, each with slightly different dimensions, required test fitting and selection, as well as special tools, to adjust to design tolerances.[8] The machines have high curiosity value; in 2016 they sold for around US$1,000, but buyers paid as much as US$1,900 for models in pristine condition with notable serial numbers.[5] The Curta's design is a descendant of Gottfried Leibniz's Stepped Reckoner and Charles Thomas's Arithmometer, accumulating values on cogs, which are added or complemented by a stepped drum mechanism. Numbers are entered using slides (one slide per digit) on the side of the device. The revolution counter and result counter reside around the shiftable carriage, at the top of the machine. A single turn of the crank adds the input number to the result counter, at any carriage position, and increments the corresponding digit of the revolution counter. Pulling the crank upwards slightly before turning performs a subtraction instead of an addition. Multiplication, division, and other functions require a series of crank and carriage-shifting operations. The Type I Curta has eight digits for data entry (known as "setting sliders"), a six-digit revolution counter, and an eleven-digit result counter. According to the advertising literature, it weighs only 8 ounces (230 g). Serial number 70154, produced in 1969, weighs 245 grams (8.6 oz).
The larger Type II Curta, introduced in 1954, has eleven digits for data entry, an eight-digit revolution counter, and a fifteen-digit result counter.[9] The Curta was popular among contestants in sports car rallies during the 1960s, 1970s and into the 1980s. Even after the introduction of the electronic calculator for other purposes, Curtas were used in time-speed-distance (TSD) rallies to aid in computation of times to checkpoints, distances off-course and so on, since the early electronic calculators did not fare well with the bounces and jolts of rallying.[1] The Curta was also favored by commercial and general-aviation pilots before the advent of electronic calculators because of its precision and the user's ability to confirm the accuracy of their manipulations via the revolution counter. Because calculations such as weight and balance are critical for safe flight, precise results free of pilot error are essential. The Curta calculator is very popular among collectors and can be purchased on many platforms. The Swiss entrepreneur and collector Peter Regenass holds a large collection of mechanical calculators, among them over 100 Curta calculators. A part of his collection is on display at the Enter Museum in Solothurn, Switzerland. In 2016 he donated a Curta calculator to the Yad Vashem Museum in Jerusalem.[10] The Curta plays a role in William Gibson's Pattern Recognition (2003) as a piece of historic computing machinery as well as a crucial "trade" item. In 2016 a Curta was designed by Marcus Wu that could be produced on a 3D printer.[11] The Curta's fine tolerances were beyond the ability of the printer technology of 2017 to produce to scale, so the printed Curta was about the size of a coffee can and weighed about three pounds.[12]
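The nines'-complement subtraction that Herzstark's complemented drum mechanized can be sketched in a few lines. This is an illustrative model of the arithmetic only, not of the Curta's mechanism:

```python
def nines_complement(y, digits):
    # Replace every digit d of y with 9 - d.
    return (10**digits - 1) - y

def subtract_by_adding(x, y, digits=6):
    # x - y == x + nines_complement(y) + 1, discarding the carry out of
    # the top digit -- subtraction performed purely by addition.
    total = x + nines_complement(y, digits) + 1
    return total % 10**digits

print(subtract_by_adding(1234, 567))  # 667, with no digit "borrowing" needed
```

The final `% 10**digits` models the carry falling off the top of the register, which is what lets an adding-only mechanism subtract.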
https://en.wikipedia.org/wiki/Curta
A flight computer is a form of slide rule used in aviation, and one of very few analog computers in widespread use in the 21st century. Sometimes it is called by a make or model name such as E6B, CR, or CRP-5, or, in German, the Dreieckrechner.[1] Flight computers are mostly used in flight training, but many professional pilots still carry and use them. They are used during flight planning (on the ground before takeoff) to aid in calculating fuel burn, wind correction, time en route, and other items. In the air, the flight computer can be used to calculate ground speed, estimated fuel burn and updated estimated time of arrival. The back is designed for wind correction calculations, i.e., determining how much the wind is affecting one's speed and course. One of the most useful functions of the E6B is finding distance over time. The inner circle carries the number 60, usually marked with an arrow and sometimes labeled "rate"; 60 is used in reference to the number of minutes in an hour. By placing the 60 against the airspeed in knots, the pilot can find how far the aircraft will travel in any given number of minutes: the minutes traveled appear on the inner ring, and the distance traveled appears above them on the outer ring. This can also be done in reverse, to find the amount of time the aircraft will take to travel a given number of nautical miles. On the main body of the flight computer the pilot will find the wind component grid, used to determine how much crosswind the aircraft will actually have to correct for. The crosswind component is the amount of crosswind in knots being applied to the airframe; it can be less than the actual speed of the wind because of the angle. Below that, the pilot will find a grid called crosswind correction, which shows the correction the pilot needs to apply because of the wind. On either side of the front are rulers, one for statute miles and one for nautical miles, for use with a sectional map.
Another very useful part is the conversion scale on the front outer circle, which helps convert between Fahrenheit and Celsius. The back of the E6B is used to find ground speed and determine how much wind correction is needed.[2]
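The rate-arrow procedure described above amounts to the proportion distance = speed × minutes / 60. A small sketch (the function names are illustrative, and the crosswind line assumes the usual sine rule, which the text only describes qualitatively):

```python
import math

def distance_nm(groundspeed_kt, minutes):
    # Setting the "60" arrow against the speed reads distance off the outer ring.
    return groundspeed_kt * minutes / 60.0

def time_minutes(groundspeed_kt, dist_nm):
    # The reverse reading: minutes needed to cover a given distance.
    return dist_nm / groundspeed_kt * 60.0

def crosswind_kt(wind_speed_kt, angle_deg):
    # Crosswind component: wind speed scaled by the sine of the angle
    # between the wind and the aircraft's course (assumed formula).
    return wind_speed_kt * math.sin(math.radians(angle_deg))

print(distance_nm(120, 30))            # 60.0 nm in half an hour at 120 kt
print(round(crosswind_kt(20, 30), 1))  # 10.0 kt for a 20 kt wind at 30 degrees
```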
https://en.wikipedia.org/wiki/Flight_computer
In computing, floating-point arithmetic (FP) is arithmetic on subsets of real numbers formed by a significand (a signed sequence of a fixed number of digits in some base) multiplied by an integer power of that base. Numbers of this form are called floating-point numbers.[1]: 3[2]: 10 For example, the number 2469/200 is a floating-point number in base ten with five digits: {\displaystyle 2469/200=12.345=\!\underbrace {12345} _{\text{significand}}\!\times \!\underbrace {10} _{\text{base}}\!\!\!\!\!\!\!\overbrace {{}^{-3}} ^{\text{exponent}}} However, 7716/625 = 12.3456 is not a floating-point number in base ten with five digits—it needs six digits. The nearest floating-point number with only five digits is 12.346. And 1/3 = 0.3333… is not a floating-point number in base ten with any finite number of digits. In practice, most floating-point systems use base two, though base ten (decimal floating point) is also common. Floating-point arithmetic operations, such as addition and division, approximate the corresponding real number arithmetic operations by rounding any result that is not a floating-point number itself to a nearby floating-point number.[1]: 22[2]: 10 For example, in a floating-point arithmetic with five base-ten digits, the sum 12.345 + 1.0001 = 13.3451 might be rounded to 13.345. The term floating point refers to the fact that the number's radix point can "float" anywhere to the left, right, or between the significant digits of the number. This position is indicated by the exponent, so floating point can be considered a form of scientific notation. A floating-point system can be used to represent, with a fixed number of digits, numbers of very different orders of magnitude—such as the number of meters between galaxies or between protons in an atom. For this reason, floating-point arithmetic is often used to represent very small and very large real numbers in computations that require fast processing times.
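The five-digit rounding behavior in the examples above can be reproduced with Python's decimal module by setting the context precision to five significant digits (a sketch of the rounding rule, not of how hardware floating point is implemented):

```python
from decimal import Decimal, getcontext

getcontext().prec = 5  # five significant base-ten digits

# 12.345 + 1.0001 = 13.3451, which rounds to 13.345:
print(Decimal("12.345") + Decimal("1.0001"))  # 13.345

# 7716/625 = 12.3456 needs six digits; the nearest five-digit value is 12.346:
print(Decimal(7716) / Decimal(625))  # 12.346
```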
The result of this dynamic range is that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers varies with their exponent.[3] Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s the most commonly encountered representations are those defined by the IEEE. The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system, especially for applications that involve intensive mathematical calculations. A floating-point unit (FPU, colloquially a math coprocessor) is a part of a computer system specially designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits. There are several mechanisms by which strings of digits can represent numbers. In standard mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character (dot or comma) there. If the radix point is not specified, then the string implicitly represents an integer and the unstated radix point would be off the right-hand end of the string, next to the least significant digit. In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345. In scientific notation, the given number is scaled by a power of 10, so that it lies within a specific range—typically between 1 and 10, with the radix point appearing immediately after the first digit. As a power of ten, the scaling factor is then indicated separately at the end of the number.
For example, the orbital period of Jupiter's moon Io is 152,853.5047 seconds, a value that would be represented in standard-form scientific notation as 1.528535047×10^5 seconds. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of a signed digit string of a given length in a given base (the significand) and a signed integer exponent. To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent—to the right if the exponent is positive or to the left if the exponent is negative. Using base-10 (the familiar decimal notation) as an example, the number 152,853.5047, which has ten decimal digits of precision, is represented as the significand 1,528,535,047 together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10^5 to give 1.528535047×10^5, or 152,853.5047. In storing such a number, the base (10) need not be stored, since it will be the same for the entire range of supported numbers, and can thus be inferred. Symbolically, this final value is: {\displaystyle {\frac {s}{b^{\,p-1}}}\times b^{e},} where s is the significand (ignoring any implied decimal point), p is the precision (the number of digits in the significand), b is the base (in our example, this is the number ten), and e is the exponent.
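The symbolic value s / b^(p−1) × b^e can be checked directly. The function below is a straightforward transcription of that formula under the stated convention (decimal point after the first significand digit):

```python
def fp_value(s, p, b, e):
    # s: integer significand, p: precision (digit count of s),
    # b: base, e: exponent. Value = s / b**(p-1) * b**e.
    return s / b ** (p - 1) * b ** e

# Io's orbital period: significand 1,528,535,047, ten digits, base 10, exponent 5.
print(fp_value(1528535047, 10, 10, 5))  # ~152853.5047 seconds
```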
Historically, several number bases have been used for representing floating-point numbers, with base two (binary) being the most common, followed by base ten (decimal floating point), and other less common varieties, such as base sixteen (hexadecimal floating point[4][5][nb 3]), base eight (octal floating point[1][5][6][4][nb 4]), base four (quaternary floating point[7][5][nb 5]), base three (balanced ternary floating point[1]) and even base 256[5][nb 6] and base 65,536.[8][nb 7] A floating-point number is a rational number, because it can be represented as one integer divided by another; for example 1.45×10^3 is (145/100)×1000 or 145,000/100. The base determines the fractions that can be represented; for instance, 1/5 cannot be represented exactly as a floating-point number using a binary base, but 1/5 can be represented exactly using a decimal base (0.2, or 2×10^−1). However, 1/3 cannot be represented exactly by either binary (0.010101...) or decimal (0.333...), but in base 3, it is trivial (0.1, or 1×3^−1). The occasions on which infinite expansions occur depend on the base and its prime factors. The way in which the significand (including its sign) and exponent are stored in a computer is implementation-dependent. The common IEEE formats are described in detail later and elsewhere, but as an example, in the binary single-precision (32-bit) floating-point representation, p = 24, and so the significand is a string of 24 bits. For instance, the number π's first 33 bits are: {\displaystyle 11001001\ 00001111\ 1101101{\underline {0}}\ 10100010\ 0.} In this binary expansion, let us denote the positions from 0 (leftmost bit, or most significant bit) to 32 (rightmost bit). The 24-bit significand will stop at position 23, shown as the underlined bit 0 above. The next bit, at position 24, is called the round bit or rounding bit.
It is used to round the 33-bit approximation to the nearest 24-bit number (there are specific rules for halfway values, which is not the case here). This bit, which is 1 in this example, is added to the integer formed by the leftmost 24 bits, yielding: {\displaystyle 11001001\ 00001111\ 1101101{\underline {1}}.} When this is stored in memory using the IEEE 754 encoding, this becomes the significand s. The significand is assumed to have a binary point to the right of the leftmost bit. So, the binary representation of π is calculated from left to right as follows:

{\displaystyle {\begin{aligned}&\left(\sum _{n=0}^{p-1}{\text{bit}}_{n}\times 2^{-n}\right)\times 2^{e}\\={}&\left(1\times 2^{-0}+1\times 2^{-1}+0\times 2^{-2}+0\times 2^{-3}+1\times 2^{-4}+\cdots +1\times 2^{-23}\right)\times 2^{1}\\\approx {}&1.57079637\times 2\\\approx {}&3.1415927\end{aligned}}}

where p is the precision (24 in this example), n is the position of the bit of the significand from the left (starting at 0 and finishing at 23 here) and e is the exponent (1 in this example). It can be required that the most significant digit of the significand of a non-zero number be non-zero (except when the corresponding exponent would be smaller than the minimum one). This process is called normalization. For binary formats (which use only the digits 0 and 1), this non-zero digit is necessarily 1. Therefore, it does not need to be represented in memory, allowing the format to have one more bit of precision. This rule is variously called the leading bit convention, the implicit bit convention, the hidden bit convention,[1] or the assumed bit convention. The floating-point representation is by far the most common way of representing in computers an approximation to real numbers.
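The summation above can be evaluated mechanically. This sketch reconstructs the single-precision approximation of π from the 24 rounded significand bits and the exponent e = 1 given in the text:

```python
# Sketch: evaluate sum(bit_n * 2**-n) * 2**e for the rounded bits of pi.
bits = "110010010000111111011011"   # the 24-bit significand after rounding
e = 1
value = sum(int(b) * 2.0 ** -n for n, b in enumerate(bits)) * 2 ** e
print(value)   # approximately 3.1415927
```

Every term 2^−n here fits exactly in a double, so `value` is exactly the 24-bit approximation of π discussed in the text.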
However, there are alternatives. In 1914, the Spanish engineer Leonardo Torres Quevedo published Essays on Automatics,[9] where he designed a special-purpose electromechanical calculator based on Charles Babbage's analytical engine and described a way to store floating-point numbers in a consistent manner. He stated that numbers will be stored in exponential format as n × 10^m, and offered three rules by which consistent manipulation of floating-point numbers by machines could be implemented. For Torres, "n will always be the same number of digits (e.g. six), the first digit of n will be of order of tenths, the second of hundredths, etc., and one will write each quantity in the form: n; m." The format he proposed shows the need for a fixed-sized significand as is presently used for floating-point data, fixing the location of the decimal point in the significand so that each representation was unique, and how to format such numbers by specifying a syntax to be used that could be entered through a typewriter, as was the case of his Electromechanical Arithmometer in 1920.[10][11][12] In 1938, Konrad Zuse of Berlin completed the Z1, the first binary, programmable mechanical computer;[13] it uses a 24-bit binary floating-point number representation with a 7-bit signed exponent, a 17-bit significand (including one implicit bit), and a sign bit.[14] The more reliable relay-based Z3, completed in 1941, has representations for both positive and negative infinities; in particular, it implements defined operations with infinity, such as 1/∞ = 0, and it stops on undefined operations, such as 0 × ∞.
Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic that includes ±∞ and NaN representations, anticipating features of the IEEE Standard by four decades.[15] In contrast, von Neumann recommended against floating-point numbers for the 1951 IAS machine, arguing that fixed-point arithmetic is preferable.[15] The first commercial computer with floating-point hardware was Zuse's Z4 computer, designed in 1942–1945. In 1946, Bell Laboratories introduced the Model V, which implemented decimal floating-point numbers.[16] The Pilot ACE has binary floating-point arithmetic, and it became operational in 1950 at the National Physical Laboratory, UK. Thirty-three were later sold commercially as the English Electric DEUCE. The arithmetic is actually implemented in software, but with a one megahertz clock rate, the speed of floating-point and fixed-point operations in this machine was initially faster than that of many competing computers. The mass-produced IBM 704 followed in 1954; it introduced the use of a biased exponent. For many decades after that, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "scientific computation" (SC) capability (see also Extensions for Scientific Computation (XSC)). It was not until the launch of the Intel i486 in 1989 that general-purpose personal computers had floating-point capability in hardware as a standard feature. The UNIVAC 1100/2200 series, introduced in 1962, supported two floating-point representations, single precision and double precision. The IBM 7094, also introduced in 1962, supported single-precision and double-precision representations, but with no relation to the UNIVAC's representations. Indeed, in 1964, IBM introduced hexadecimal floating-point representations in its System/360 mainframes; these same representations are still available for use in modern z/Architecture systems.
In 1998, IBM implemented IEEE-compatible binary floating-point arithmetic in its mainframes; in 2005, IBM also added IEEE-compatible decimal floating-point arithmetic. Initially, computers used many different representations for floating-point numbers. The lack of standardization at the mainframe level was an ongoing problem by the early 1970s for those writing and maintaining higher-level source code; these manufacturer floating-point standards differed in the word sizes, the representations, and the rounding behavior and general accuracy of operations. Floating-point compatibility across multiple computing systems was in desperate need of standardization by the early 1980s, leading to the creation of the IEEE 754 standard once the 32-bit (or 64-bit) word had become commonplace. This standard was significantly based on a proposal from Intel, which was designing the i8087 numerical coprocessor; Motorola, which was designing the 68000 around the same time, gave significant input as well. In 1989, mathematician and computer scientist William Kahan was honored with the Turing Award for being the primary architect behind this proposal; he was aided by his student Jerome Coonen and a visiting professor, Harold Stone.[17] Among the x86 innovations are these: A floating-point number consists of two fixed-point components, whose range depends exclusively on the number of bits or digits in their representation. Whereas components linearly depend on their range, the floating-point range linearly depends on the significand range and exponentially on the range of the exponent component, which attaches outstandingly wider range to the number. On a typical computer system, a double-precision (64-bit) binary floating-point number has a coefficient of 53 bits (including 1 implied bit), an exponent of 11 bits, and 1 sign bit. Since 2^10 = 1024, the complete range of the positive normal floating-point numbers in this format is from 2^−1022 ≈ 2 × 10^−308 to approximately 2^1024 ≈ 2 × 10^308.
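These double-precision parameters can be inspected from the standard library, since CPython's `float` is an IEEE 754 binary64:

```python
# Sketch: checking the binary64 parameters quoted in the text.
import sys

fi = sys.float_info
print(fi.mant_dig)   # 53 significand bits (including the implied leading bit)
print(fi.min)        # smallest positive normal number, 2**-1022
print(fi.max)        # largest finite double, just below 2**1024
assert fi.mant_dig == 53
assert fi.min == 2.0 ** -1022
assert fi.max_exp == 1024
assert fi.epsilon == 2.0 ** -52   # spacing of doubles just above 1.0
```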
The number of normal floating-point numbers in a system (B, P, L, U), where B is the base, P is the precision of the significand (in base B), L is the smallest exponent, and U is the largest exponent, is 2(B − 1)B^(P−1)(U − L + 1). There is a smallest positive normal floating-point number, which has 1 as the leading digit and 0 for the remaining digits of the significand, and the smallest possible value for the exponent. There is a largest floating-point number, which has B − 1 as the value for each digit of the significand and the largest possible value for the exponent. In addition, there are representable values strictly between −UFL and UFL (the underflow level, the smallest positive normal number): namely, positive and negative zeros, as well as subnormal numbers. The IEEE standardized the computer representation for binary floating-point numbers in IEEE 754 (a.k.a. IEC 60559) in 1985. This first standard is followed by almost all modern machines. It was revised in 2008. IBM mainframes support IBM's own hexadecimal floating point format and IEEE 754-2008 decimal floating point in addition to the IEEE 754 binary format. The Cray T90 series had an IEEE version, but the SV1 still uses Cray floating-point format.[citation needed] The standard provides for many closely related formats, differing in only a few details. Five of these formats are called basic formats, and others are termed extended precision formats and extendable precision format. Three formats are especially widely used in computer hardware and languages.[citation needed] Increasing the precision of the floating-point representation generally reduces the amount of accumulated round-off error caused by intermediate calculations.[24] Other IEEE formats also exist. Any integer with absolute value less than 2^24 can be exactly represented in the single-precision format, and any integer with absolute value less than 2^53 can be exactly represented in the double-precision format. Furthermore, a wide range of powers of 2 times such a number can be represented.
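The counting formula can be verified by brute force on a toy system. The parameters below are my own illustrative choice, not from the article:

```python
# Sketch: count the normal numbers of a tiny (B, P, L, U) system and
# compare against the formula 2(B-1) * B**(P-1) * (U-L+1).
B, P, L, U = 2, 3, -1, 2   # toy binary system: 3-digit significand, exponents -1..2

normals = set()
for e in range(L, U + 1):
    for sig in range(B ** (P - 1), B ** P):   # leading digit non-zero
        for sign in (+1, -1):
            normals.add(sign * sig * B ** (e - P + 1))

print(len(normals))   # 32 for these parameters
assert len(normals) == 2 * (B - 1) * B ** (P - 1) * (U - L + 1)
```

The exponent ranges keep the magnitude intervals disjoint, so no value is generated twice.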
These properties are sometimes used for purely integer data, to get 53-bit integers on platforms that have double-precision floats but only 32-bit integers. The standard specifies some special values and their representation: positive infinity (+∞), negative infinity (−∞), a negative zero (−0) distinct from ordinary ("positive") zero, and "not a number" values (NaNs). Comparison of floating-point numbers, as defined by the IEEE standard, is a bit different from usual integer comparison. Negative and positive zero compare equal, and every NaN compares unequal to every value, including itself. All finite floating-point numbers are strictly smaller than +∞ and strictly greater than −∞, and they are ordered in the same way as their values (in the set of real numbers). Floating-point numbers are typically packed into a computer datum as the sign bit, the exponent field, and a field for the significand, from left to right. While the exponent can be positive or negative, in binary formats it is stored as an unsigned number that has a fixed "bias" added to it. Values of all 0s in this field are reserved for the zeros and subnormal numbers; values of all 1s are reserved for the infinities and NaNs. The exponent range for normal numbers is [−126, 127] for single precision, [−1022, 1023] for double, or [−16382, 16383] for quad. Normal numbers exclude subnormal values, zeros, infinities, and NaNs. In the IEEE binary interchange formats the leading bit of a normalized significand is not actually stored in the computer datum, since it is always 1. It is called the "hidden" or "implicit" bit. Because of this, the single-precision format actually has a significand with 24 bits of precision, the double-precision format has 53, quad has 113, and octuple has 237.
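The sign / biased-exponent / significand packing can be pulled apart with the `struct` module. The helper name `fields32` is my own:

```python
# Sketch: extract the three fields of an IEEE 754 single-precision number.
import struct

def fields32(x: float):
    (n,) = struct.unpack(">I", struct.pack(">f", x))  # raw 32 bits, big-endian
    sign = n >> 31
    biased_exp = (n >> 23) & 0xFF   # 8-bit exponent field, bias 127
    frac = n & 0x7FFFFF             # 23 stored bits; the leading 1 is implicit
    return sign, biased_exp, frac

print(fields32(1.0))   # (0, 127, 0): exponent 0 is stored as 127
```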
For example, it was shown above that π, rounded to 24 bits of precision, has significand s = 1.10010010000111111011011 and exponent e = 1. The sum of the exponent bias (127) and the exponent (1) is 128, so this is represented in the single-precision format with 10000000 (binary 128) as the biased exponent field. A typical layout for 32-bit floating point is one sign bit, followed by an 8-bit biased exponent field and a 23-bit significand field, and the 64-bit ("double") layout is similar. In addition to the widely used IEEE 754 standard formats, other floating-point formats are used, or have been used, in certain domain-specific areas. By their nature, all numbers expressed in floating-point format are rational numbers with a terminating expansion in the relevant base (for example, a terminating decimal expansion in base-10, or a terminating binary expansion in base-2). Irrational numbers, such as π or √2, or non-terminating rational numbers, must be approximated. The number of digits (or bits) of precision also limits the set of rational numbers that can be represented exactly. For example, the decimal number 123456789 cannot be exactly represented if only eight decimal digits of precision are available (it would be rounded to one of the two straddling representable values, 12345678 × 10^1 or 12345679 × 10^1); the same applies to non-terminating expansions such as .555..., which would be rounded to either .55555555 or .55555556. When a number is represented in some format (such as a character string) which is not a native floating-point representation supported in a computer implementation, then it will require a conversion before it can be used in that implementation. If the number can be represented exactly in the floating-point format, then the conversion is exact. If there is not an exact representation, then the conversion requires a choice of which floating-point number to use to represent the original value. The representation chosen will have a different value from the original, and the value thus adjusted is called the rounded value. Whether or not a rational number has a terminating expansion depends on the base.
For example, in base-10 the number 1/2 has a terminating expansion (0.5) while the number 1/3 does not (0.333...). In base-2 only rationals with denominators that are powers of 2 (such as 1/2 or 3/16) are terminating. Any rational with a denominator that has a prime factor other than 2 will have an infinite binary expansion. This means that numbers that appear to be short and exact when written in decimal format may need to be approximated when converted to binary floating-point. For example, the decimal number 0.1 is not representable in binary floating-point of any finite precision; the exact binary representation would have a "1100" sequence continuing endlessly: e = −4; s = 1.100110011001100110011001100..., where, as previously, s is the significand and e is the exponent. When rounded to 24 bits this becomes e = −4; s = 1.10011001100110011001101, which is actually 0.100000001490116119384765625 in decimal. As a further example, the real number π, represented in binary as an infinite sequence of bits, is 11.00100100001111110110101010001000... but is 11.0010010000111111011011 when approximated by rounding to a precision of 24 bits. In binary single-precision floating-point, this is represented as s = 1.10010010000111111011011 with e = 1. This has a decimal value of 3.1415927410125732421875, whereas a more accurate approximation of the true value of π is 3.14159265358979323846264338327950... The result of rounding differs from the true value by about 0.03 parts per million, and matches the decimal representation of π in the first 7 digits. The difference is the discretization error and is limited by the machine epsilon. The arithmetical difference between two consecutive representable floating-point numbers which have the same exponent is called a unit in the last place (ULP). For example, if there is no representable number lying between the representable numbers 1.45A70C22₁₆ and 1.45A70C24₁₆, the ULP is 2 × 16^−8, or 2^−31. For numbers with a base-2 exponent part of 0, i.e. numbers with an absolute value higher than or equal to 1 but lower than 2, a ULP is exactly 2^−23 or about 10^−7 in single precision, and exactly 2^−52 or about 2 × 10^−16 in double precision.
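Both facts, the rounded value of 0.1 and the size of a ULP, can be demonstrated with the standard library (`math.ulp` requires Python 3.9 or later):

```python
# Sketch: 0.1 rounded to 24 bits has exactly the decimal value quoted above,
# and math.ulp gives the spacing of doubles near a value.
import math
import struct
from decimal import Decimal

single = struct.unpack(">f", struct.pack(">f", 0.1))[0]  # round 0.1 to 24 bits
print(Decimal(single))   # 0.100000001490116119384765625, exactly

print(math.ulp(1.0))     # 2**-52, the ULP for doubles in [1, 2)
assert math.ulp(1.0) == 2.0 ** -52
```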
The mandated behavior of IEEE-compliant hardware is that the result be within one-half of a ULP. Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are digits in the significand. IEEE 754 requires correct rounding: that is, the rounded result is as if infinitely precise arithmetic was used to compute the value and then rounded (although in implementation only three extra bits are needed to ensure this). There are several different rounding schemes (or rounding modes). Historically, truncation was the typical approach. Since the introduction of IEEE 754, the default method (round to nearest, ties to even, sometimes called banker's rounding) is more commonly used. This method rounds the ideal (infinitely precise) result of an arithmetic operation to the nearest representable value, and gives that representation as the result.[nb 8] In the case of a tie, the value that would make the significand end in an even digit is chosen. The IEEE 754 standard requires the same rounding to be applied to all fundamental algebraic operations, including square root and conversions, when there is a numeric (non-NaN) result. This means that the results of IEEE 754 operations are completely determined in all bits of the result, except for the representation of NaNs. ("Library" functions such as cosine and log are not mandated.) Alternative rounding options are also available. IEEE 754 specifies the following rounding modes: round to nearest with ties to even, round to nearest with ties away from zero, round up (toward +∞), round down (toward −∞), and round toward zero (truncation). Alternative modes are useful when the amount of error being introduced must be bounded. Applications that require a bounded error are multi-precision floating-point and interval arithmetic.
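The ties-to-even rule is visible in Python's built-in `round`, which uses it for halfway cases (the values below are exact binary halves, so they really are ties):

```python
# Sketch: round-to-nearest, ties-to-even, the IEEE 754 default.
print(round(0.5))   # 0, not 1: the tie goes to the even neighbor
print(round(1.5))   # 2
print(round(2.5))   # 2, not 3: ties to even again
print(round(3.5))   # 4
```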
The alternative rounding modes are also useful in diagnosing numerical instability: if the results of a subroutine vary substantially between rounding to + and − infinity, then it is likely numerically unstable and affected by round-off error.[34] Converting a double-precision binary floating-point number to a decimal string is a common operation, but an algorithm producing results that are both accurate and minimal did not appear in print until 1990, with Steele and White's Dragon4. Several faster algorithms have been published since; many modern language runtimes use Grisu3 with a Dragon4 fallback.[41] The problem of parsing a decimal string into a binary FP representation is complex, with an accurate parser not appearing until Clinger's 1990 work (implemented in dtoa.c).[35] Further work has likewise progressed in the direction of faster parsing.[42] For ease of presentation and understanding, decimal radix with 7-digit precision will be used in the examples, as in the IEEE 754 decimal32 format. The fundamental principles are the same in any radix or precision, except that normalization is optional (it does not affect the numerical value of the result). Here, s denotes the significand and e denotes the exponent. A simple method to add floating-point numbers is to first represent them with the same exponent. In the example below, the second number (with the smaller exponent) is shifted right by three digits, and one then proceeds with the usual addition method: adding e = 5; s = 1.234567 (that is, 123456.7) and e = 2; s = 1.017654 (that is, 101.7654), the second operand is rewritten as e = 5; s = 0.001017654, and the exact sum is e = 5; s = 1.235584654 (that is, 123558.4654). This is the true result, the exact sum of the operands. It will be rounded to seven digits and then normalized if necessary. The final result is e = 5; s = 1.235585 (that is, 123558.5). The lowest three digits of the second operand (654) are essentially lost. This is round-off error.
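The 7-digit decimal addition above can be reproduced with the `decimal` module, which rounds every operation to the context precision:

```python
# Sketch: 7-significant-digit decimal addition; the low digits of the
# smaller operand are rounded away, as in the worked example.
from decimal import Decimal, getcontext

getcontext().prec = 7            # seven significant digits
a = Decimal("123456.7")          # e = 5; s = 1.234567
b = Decimal("101.7654")          # e = 2; s = 1.017654
print(a + b)                     # 123558.5 after rounding to 7 digits
```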
In extreme cases, the sum of two non-zero numbers may be equal to one of them, when the smaller operand is shifted entirely out of the significand. In the above conceptual examples it would appear that a large number of extra digits would need to be provided by the adder to ensure correct rounding; however, for binary addition or subtraction, using careful implementation techniques only a guard bit, a rounding bit and one extra sticky bit need to be carried beyond the precision of the operands.[43][44]: 218–220 Another problem of loss of significance occurs when approximations to two nearly equal numbers are subtracted. In the following example, e = 5; s = 1.234571 and e = 5; s = 1.234567 are approximations to the rationals 123457.1467 and 123456.659. The floating-point difference is computed exactly because the numbers are close; the Sterbenz lemma guarantees this, even in case of underflow when gradual underflow is supported. Despite this, the difference of the original numbers is e = −1; s = 4.877000, which differs by more than 20% from the difference e = −1; s = 4.000000 of the approximations. In extreme cases, all significant digits of precision can be lost.[43][45] This cancellation illustrates the danger in assuming that all of the digits of a computed result are meaningful. Dealing with the consequences of these errors is a topic in numerical analysis; see also Accuracy problems. To multiply, the significands are multiplied while the exponents are added, and the result is rounded and normalized. Similarly, division is accomplished by subtracting the divisor's exponent from the dividend's exponent, and dividing the dividend's significand by the divisor's significand. There are no cancellation or absorption problems with multiplication or division, though small errors may accumulate as operations are performed in succession.[43] In practice, the way these operations are carried out in digital logic can be quite complex (see Booth's multiplication algorithm and Division algorithm).[nb 9] Literals for floating-point numbers depend on languages.
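The cancellation example can also be reproduced in 7-digit decimal arithmetic; each input is correct to seven digits, yet the computed difference is badly wrong:

```python
# Sketch: catastrophic cancellation with the text's numbers.
from decimal import Decimal, getcontext

getcontext().prec = 7
a = Decimal("123457.1")   # 7-digit approximation of 123457.1467
b = Decimal("123456.7")   # 7-digit approximation of 123456.659
print(a - b)              # 0.4, computed exactly (Sterbenz lemma)

true_diff = Decimal("123457.1467") - Decimal("123456.659")
print(true_diff)          # 0.4877000: the subtraction of the approximations
                          # is exact, yet off by over 20% from this value
```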
They typically use e or E to denote scientific notation. The C programming language and the IEEE 754 standard also define a hexadecimal literal syntax with a base-2 exponent instead of 10. In languages like C, when the decimal exponent is omitted, a decimal point is needed to differentiate them from integers. Other languages do not have an integer type (such as JavaScript), or allow overloading of numeric types (such as Haskell). In these cases, digit strings such as 123 may also be floating-point literals. Examples of floating-point literals are 99.9, -5000.12, 6.02e23, and -3e-45, as well as hexadecimal forms such as 0x1.fffffep+127 in C. Floating-point computation in a computer can run into three kinds of problems: an operation can be mathematically undefined, the exact result can be too large or too small to represent in the format (overflow or underflow), or the result can simply be inexact. Prior to the IEEE standard, such conditions usually caused the program to terminate, or triggered some kind of trap that the programmer might be able to catch. How this worked was system-dependent, meaning that floating-point programs were not portable. (The term "exception" as used in IEEE 754 is a general term meaning an exceptional condition, which is not necessarily an error, and is a different usage from that typically defined in programming languages such as C++ or Java, in which an "exception" is an alternative flow of control, closer to what is termed a "trap" in IEEE 754 terminology.) Here, the required default method of handling exceptions according to IEEE 754 is discussed (the IEEE 754 optional trapping and other "alternate exception handling" modes are not discussed). Arithmetic exceptions are (by default) required to be recorded in "sticky" status flag bits. That they are "sticky" means that they are not reset by the next (arithmetic) operation, but stay set until explicitly reset. The use of "sticky" flags thus allows for testing of exceptional conditions to be delayed until after a full floating-point expression or subroutine: without them, exceptional conditions that could not otherwise be ignored would require explicit testing immediately after every floating-point operation.
By default, an operation always returns a result according to specification without interrupting computation. For instance, 1/0 returns +∞, while also setting the divide-by-zero flag bit (this default of ∞ is designed to often return a finite result when used in subsequent operations and so be safely ignored). The original IEEE 754 standard, however, failed to recommend operations to handle such sets of arithmetic exception flag bits. So, while these were implemented in hardware, initially programming language implementations typically did not provide a means to access them (apart from assembler). Over time, some programming language standards (e.g., C99/C11 and Fortran) have been updated to specify methods to access and change status flag bits. The 2008 version of the IEEE 754 standard now specifies a few operations for accessing and handling the arithmetic flag bits. The programming model is based on a single thread of execution, and use of them by multiple threads has to be handled by a means outside of the standard (e.g., C11 specifies that the flags have thread-local storage). IEEE 754 specifies five arithmetic exceptions that are to be recorded in the status flags ("sticky bits"): inexact, underflow, overflow, divide-by-zero, and invalid. The default return value for each of the exceptions is designed to give the correct result in the majority of cases, such that the exceptions can be ignored in the majority of codes. inexact returns a correctly rounded result, and underflow returns a value less than or equal to the smallest positive normal number in magnitude and can almost always be ignored.[46] divide-by-zero returns infinity exactly, which will typically then divide a finite number and so give zero, or else will give an invalid exception subsequently if not, and so can also typically be ignored. For example, the effective resistance of n resistors in parallel (see fig. 1) is given by R_tot = 1/(1/R_1 + 1/R_2 + ⋯ + 1/R_n).
If a short circuit develops with R_1 set to 0, 1/R_1 will return +infinity, which will give a final R_tot of 0, as expected[47] (see the continued fraction example of IEEE 754 design rationale for another example). Overflow and invalid exceptions can typically not be ignored, but do not necessarily represent errors: for example, a root-finding routine, as part of its normal operation, may evaluate a passed-in function at values outside of its domain, returning NaN and an invalid exception flag to be ignored until finding a useful start point.[46] The fact that floating-point numbers cannot accurately represent all real numbers, and that floating-point operations cannot accurately represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers. For example, the decimal numbers 0.1 and 0.01 cannot be represented exactly as binary floating-point numbers. In the IEEE 754 binary32 format with its 24-bit significand, the result of attempting to square the approximation to 0.1 is neither 0.01 nor the representable number closest to it. The decimal number 0.1 is represented in binary as e = −4; s = 110011001100110011001101, which is 0.100000001490116119384765625 exactly. Squaring this number gives 0.010000000298023226097399174250313080847263336181640625 exactly. Squaring it with rounding to the 24-bit precision gives 0.010000000707805156707763671875. But the representable number closest to 0.01 is 0.009999999776482582092285156250. Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity, nor will it even overflow in the usual floating-point formats (assuming an accurate implementation of tan). It is simply not possible for standard floating-point hardware to attempt to compute tan(π/2), because π/2 cannot be represented exactly. Such a computation in C will give a result of 16331239353195370.0. In single precision (using the tanf function), the result will be −22877332.0.
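The tan(π/2) and sin(π) surprises are easy to observe in Python, where `math.pi` is the double nearest π; the exact printed values depend on the platform's libm, so the checks below are deliberately loose:

```python
# Sketch: tan(pi/2) is huge but finite, because the double nearest pi/2
# is not exactly pi/2; the article quotes 16331239353195370.0 for C doubles.
import math

print(math.tan(math.pi / 2))   # on typical platforms, about 1.6331e16
assert 1e15 < math.tan(math.pi / 2) < 1e17   # finite, no overflow
```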
By the same token, an attempted computation of sin(π) will not yield zero. The result will be (approximately) 0.1225 × 10^−15 in double precision, or −0.8742 × 10^−7 in single precision.[nb 10] While floating-point addition and multiplication are both commutative (a + b = b + a and a × b = b × a), they are not necessarily associative. That is, (a + b) + c is not necessarily equal to a + (b + c). Using 7-digit significand decimal arithmetic with a = 1234.567, b = 45.67834, c = 0.0004: (a + b) + c rounds a + b to 1280.245, and adding c still gives 1280.245; but a + (b + c) adds b + c = 45.67874 to a, giving 1280.246. They are also not necessarily distributive. That is, (a + b) × c may not be the same as a × c + b × c: with a = 1234.567, b = 1.234567, c = 3.333333, the products a × c = 4115.223 and b × c = 4.115223 sum to 4119.338, but a + b rounds to 1235.802 and (a + b) × c rounds to 4119.340. In addition to loss of significance, inability to represent numbers such as π and 0.1 exactly, and other slight inaccuracies, further phenomena may occur, for example when evaluating a difference quotient Q(h) = (f(a + h) − f(a)) / h for small h. Machine precision is a quantity that characterizes the accuracy of a floating-point system, and is used in backward error analysis of floating-point algorithms. It is also known as unit roundoff or machine epsilon. Usually denoted E_mach, its value depends on the particular rounding being used. With rounding to zero, E_mach = B^(1−P), whereas with rounding to nearest, E_mach = ½ B^(1−P), where B is the base of the system and P is the precision of the significand (in base B). This is important since it bounds the relative error in representing any non-zero real number x within the normalized range of a floating-point system: |fl(x) − x| / |x| ≤ E_mach. Backward error analysis, the theory of which was developed and popularized by James H. Wilkinson, can be used to establish that an algorithm implementing a numerical function is numerically stable.[52] The basic approach is to show that although the calculated result, due to roundoff errors, will not be exactly correct, it is the exact solution to a nearby problem with slightly perturbed input data.
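The failure of associativity also occurs in binary doubles, not just in the 7-digit decimal examples; a classic instance:

```python
# Sketch: floating-point addition is commutative but not associative.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left, right)        # 0.6000000000000001 versus 0.6
assert left != right      # associativity fails
assert a + b == b + a     # commutativity still holds
```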
If the perturbation required is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined as backward stable. Stability is a measure of the sensitivity to rounding errors of a given numerical procedure; by contrast, the condition number of a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input and is independent of the implementation used to solve the problem.[53] As a trivial example, consider a simple expression giving the inner product of (length two) vectors x and y:

{\displaystyle {\begin{aligned}\operatorname {fl} (x\cdot y)&=\operatorname {fl} {\big (}\operatorname {fl} (x_{1}\cdot y_{1})+\operatorname {fl} (x_{2}\cdot y_{2}){\big )},&&{\text{where }}\operatorname {fl} (){\text{ indicates correctly rounded floating-point arithmetic}}\\&=\operatorname {fl} {\big (}(x_{1}\cdot y_{1})(1+\delta _{1})+(x_{2}\cdot y_{2})(1+\delta _{2}){\big )},&&{\text{where }}\delta _{n}\leq \mathrm {E} _{\text{mach}},{\text{ from above}}\\&={\big (}(x_{1}\cdot y_{1})(1+\delta _{1})+(x_{2}\cdot y_{2})(1+\delta _{2}){\big )}(1+\delta _{3})\\&=(x_{1}\cdot y_{1})(1+\delta _{1})(1+\delta _{3})+(x_{2}\cdot y_{2})(1+\delta _{2})(1+\delta _{3}),\end{aligned}}}

and so

{\displaystyle \operatorname {fl} (x\cdot y)={\hat {x}}\cdot {\hat {y}},}

where

{\displaystyle {\begin{aligned}{\hat {x}}_{1}&=x_{1}(1+\delta _{1});&{\hat {x}}_{2}&=x_{2}(1+\delta _{2});\\{\hat {y}}_{1}&=y_{1}(1+\delta _{3});&{\hat {y}}_{2}&=y_{2}(1+\delta _{3}),\end{aligned}}}

with each δ_n ≤ E_mach by definition. The computed result is thus the exact inner product of two slightly perturbed (on the order of E_mach) input vectors, and so the computation is backward stable. For more realistic examples in numerical linear algebra, see Higham 2002[54] and other references below. Although individual arithmetic operations of IEEE 754 are guaranteed accurate to within half a ULP, more complicated formulae can suffer from larger errors for a variety of reasons. The loss of accuracy can be substantial if a problem or its data are ill-conditioned, meaning that the correct result is hypersensitive to tiny perturbations in its data. However, even functions that are well-conditioned can suffer from large loss of accuracy if an algorithm numerically unstable for that data is used: apparently equivalent formulations of expressions in a programming language can differ markedly in their numerical stability. One approach to remove the risk of such loss of accuracy is the design and analysis of numerically stable algorithms, which is an aim of the branch of mathematics known as numerical analysis. Another approach that can protect against the risk of numerical instabilities is the computation of intermediate (scratch) values in an algorithm at a higher precision than the final result requires,[55] which can remove, or reduce by orders of magnitude,[56] such risk: IEEE 754 quadruple precision and extended precision are designed for this purpose when computing at double precision.[57][nb 11] For example, a direct implementation to compute the function A(x) = (x − 1) / (exp(x − 1) − 1), which is well-conditioned at 1.0,[nb 12] can be shown to be numerically unstable and lose up to half the significant digits carried by the arithmetic when computed near 1.0: the subtraction exp(x − 1) − 1 cancels catastrophically there.[58] If, however, intermediate computations are all performed in extended precision (e.g.
by setting line [1] to C99 long double), then up to full precision in the final double result can be maintained.[nb 13] Alternatively, a numerical analysis of the algorithm reveals that if a non-obvious change is made to line [2], dividing the logarithm of the computed exponential rather than the original argument by (Z − 1), then the algorithm becomes numerically stable and can compute to full double precision.

To maintain the properties of such carefully constructed numerically stable programs, careful handling by the compiler is required. Certain "optimizations" that compilers might make (for example, reordering operations) can work against the goals of well-behaved software. There is some controversy about the failings of compilers and language designs in this area: C99 is an example of a language where such optimizations are carefully specified to maintain numerical precision. See the external references at the bottom of this article.

A detailed treatment of the techniques for writing high-quality floating-point software is beyond the scope of this article; the reader is referred to [54][59] and the other references at the bottom of this article. Kahan suggests several rules of thumb that can substantially decrease, by orders of magnitude,[59] the risk of numerical anomalies, in addition to, or in lieu of, a more careful numerical analysis. These include: as noted above, computing all expressions and intermediate results in the highest precision supported in hardware (a common rule of thumb is to carry twice the precision of the desired result, i.e.
compute in double precision for a final single-precision result, or in double extended or quad precision for up to double-precision results[60]); and rounding input data and results to only the precision required and supported by the input data (carrying excess precision in the final result beyond that required and supported by the input data can be misleading, increases storage cost and decreases speed, and the excess bits can affect convergence of numerical procedures:[61] notably, the first form of the iterative example given below converges correctly when using this rule of thumb). Brief descriptions of several additional issues and techniques follow.

As decimal fractions can often not be exactly represented in binary floating-point, such arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales (such as the orbital period of a moon around Saturn or the mass of a proton), and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact.[56][59] An example of the latter case is financial calculations. For this reason, financial software tends not to use a binary floating-point number representation.[62] The "decimal" data type of the C# and Python programming languages, and the decimal formats of the IEEE 754-2008 standard, are designed to avoid the problems of binary floating-point representations when applied to human-entered exact decimal values, and to make the arithmetic always behave as expected when numbers are printed in decimal.

Expectations from mathematics may not be realized in the field of floating-point computation. For example, it is known that (x + y)(x − y) = x² − y², and that sin²θ + cos²θ = 1; however, these facts cannot be relied on when the quantities involved are the result of floating-point computation.
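The listing for A(x) = (x − 1) / (exp(x − 1) − 1) is not reproduced in this text, so the following Python sketch is a reconstruction, not the original code. It shows a direct form alongside a rewrite of the kind alluded to for line [2]; the specific rewrite used here (dividing the logarithm of the computed exponential, rather than x − 1, by Z − 1) is an assumption:

```python
import math

def a_direct(x):
    # direct implementation: A(x) = (x - 1) / (exp(x - 1) - 1)
    y = x - 1.0
    z = math.exp(y)
    if z == 1.0:
        return 1.0           # limit of A as x -> 1
    return y / (z - 1.0)     # cancellation in z - 1.0 amplifies rounding error

def a_stable(x):
    # rewrite: use log(z), the argument whose exponential is exactly the
    # computed z, so the rounding errors in log(z) and z - 1.0 largely cancel
    y = x - 1.0
    z = math.exp(y)
    if z == 1.0:
        return 1.0
    return math.log(z) / (z - 1.0)
```

Near x = 1 the true value is approximately 1 − (x − 1)/2; the direct form loses digits there, while the rewritten form stays accurate to roughly full double precision.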
The use of the equality test (if (x==y) ...) requires care when dealing with floating-point numbers. Even simple expressions like 0.6/0.2-3==0 will, on most computers, fail to be true[63] (in IEEE 754 double precision, for example, 0.6/0.2 - 3 is approximately equal to −4.44089209850063×10⁻¹⁶). Consequently, such tests are sometimes replaced with "fuzzy" comparisons (if (abs(x-y) < epsilon) ..., where epsilon is sufficiently small and tailored to the application, such as 1.0E−13). The wisdom of doing this varies greatly, and can require numerical analysis to bound epsilon.[54] Values derived from the primary data representation, and their comparisons, should be performed in a wider, extended, precision to minimize the risk of such inconsistencies due to round-off errors.[59] It is often better to organize the code in such a way that such tests are unnecessary. For example, in computational geometry, exact tests of whether a point lies off or on a line or plane defined by other points can be performed using adaptive precision or exact arithmetic methods.[64]

Small errors in floating-point arithmetic can grow when mathematical algorithms perform operations an enormous number of times. A few examples are matrix inversion, eigenvector computation, and differential equation solving. These algorithms must be very carefully designed, using numerical approaches such as iterative refinement, if they are to work well.[65]

Summation of a vector of floating-point values is a basic algorithm in scientific computing, and so an awareness of when loss of significance can occur is essential. For example, if one is adding a very large number of numbers, the individual addends are very small compared with the sum. This can lead to loss of significance: when a small addend is added to a large running sum, the low-order digits of the addend are effectively lost. Suppose, for example, that one needs to add many numbers, all approximately equal to 3.
After 1000 of them have been added, the running sum is about 3000; the lost digits are not regained. The Kahan summation algorithm may be used to reduce the errors.[54]

Round-off error can affect the convergence and accuracy of iterative numerical procedures. As an example, Archimedes approximated π by calculating the perimeters of polygons inscribing and circumscribing a circle, starting with hexagons, and successively doubling the number of sides. As noted above, computations may be rearranged in a way that is mathematically equivalent but less prone to error (numerical analysis). Two forms of the recurrence formula for the circumscribed polygon are t′ = (√(t² + 1) − 1)/t and t′ = t/(√(t² + 1) + 1), starting from t = tan 30° for the hexagon.[citation needed] While the two forms of the recurrence formula are clearly mathematically equivalent,[nb 14] the first subtracts 1 from a number extremely close to 1, leading to an increasingly problematic loss of significant digits. When the recurrence is applied repeatedly in IEEE "double" arithmetic (a significand with 53 bits of precision), the accuracy improves at first, but then it deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision.

The aforementioned lack of associativity of floating-point operations in general means that compilers cannot as effectively reorder arithmetic expressions as they could with integer and fixed-point arithmetic, presenting a roadblock in optimizations such as common subexpression elimination and auto-vectorization.[66] The "fast math" option on many compilers (ICC, GCC, Clang, MSVC...) turns on reassociation along with unsafe assumptions such as a lack of NaN and infinite numbers in IEEE 754. Some compilers also offer more granular options to only turn on reassociation.
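The recurrence formulas are not reproduced in this text; a standard pair of algebraically equivalent forms for the circumscribed polygon (an assumption here) is t′ = (√(t² + 1) − 1)/t and t′ = t/(√(t² + 1) + 1), starting from t = tan 30° = 1/√3 for the hexagon, with π ≈ 6 · 2^i · t after i doublings. The divergence between the two is easy to reproduce:

```python
import math

# circumscribed-polygon recurrence, starting from a hexagon: t = tan(30°)
t_bad = t_good = 1.0 / math.sqrt(3.0)
for i in range(25):
    # form 1: subtracts 1 from a number extremely close to 1 (cancellation)
    t_bad = (math.sqrt(t_bad * t_bad + 1.0) - 1.0) / t_bad
    # form 2: algebraically identical, but free of the cancellation
    t_good = t_good / (math.sqrt(t_good * t_good + 1.0) + 1.0)

sides = 6 * 2 ** 25
pi_bad = sides * t_bad      # drifts far from pi
pi_good = sides * t_good    # agrees with pi to ~15 digits
```

On IEEE double hardware the second form agrees with π to about 14–15 digits after 25 doublings, while the first has lost essentially all of its accuracy.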
In either case, the programmer is exposed to many of the precision pitfalls mentioned above for the portion of the program using "fast" math.[67]

In some compilers (GCC and Clang), turning on "fast" math may cause the program to disable subnormal floats at startup, affecting the floating-point behavior of not only the generated code, but also any program using such code as a library.[68]

In most Fortran compilers, as allowed by the ISO/IEC 1539-1:2004 Fortran standard, reassociation is the default, with breakage largely prevented by the "protect parens" setting (also on by default). This setting stops the compiler from reassociating beyond the boundaries of parentheses.[69] Intel Fortran Compiler is a notable outlier.[70]

A common problem in "fast" math is that subexpressions may not be optimized identically from place to place, leading to unexpected differences. One interpretation of the issue is that "fast" math as implemented currently has poorly defined semantics. One attempt at formalizing "fast" math optimizations is seen in Icing, a verified compiler.[71]
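The Kahan summation algorithm mentioned earlier recovers the low-order digits that a naive running sum discards; a minimal sketch:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation of an iterable of floats."""
    total = 0.0
    c = 0.0                   # running compensation for lost low-order bits
    for v in values:
        y = v - c             # re-apply the bits lost on the previous step
        t = total + y         # low-order digits of y may be lost here...
        c = (t - total) - y   # ...but are recovered algebraically into c
        total = t
    return total
```

For [1e16, 1.0, 1.0, -1e16] a naive left-to-right sum returns 0.0 in double precision, while the compensated sum returns the exact result 2.0. Note the tie-in to "fast" math above: a reassociating compiler may simplify (t - total) - y to zero and silently destroy the compensation; Python evaluates the expression as written, but C or Fortran builds with such flags may not.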
https://en.wikipedia.org/wiki/Floating_point
Hans Peter Luhn (July 1, 1896 – August 19, 1964) was a German-American[2] researcher in the field of computer science and library and information science for IBM, and creator of the Luhn algorithm, KWIC (Key Words In Context) indexing, and selective dissemination of information ("SDI"). His inventions have found applications in diverse areas like computer science, the textile industry, linguistics, and information science. He was awarded over 80 patents. He created one of the earliest practical hash functions in the 1950s.

Luhn was born in Barmen, Germany (now part of Wuppertal) on July 1, 1896. After he completed secondary school, Luhn moved to Switzerland to learn the printing trade so he could join the family business. His career in printing was halted by his service as a communications officer in the German Army during World War I. After the war, Luhn entered the textile field, which eventually led him to the United States, where he invented a thread-counting gauge (the Lunometer) still on the market.[3] From the late 1920s to the early 1940s, during which time he obtained patents for a broad range of inventions, Luhn worked in textiles and as an independent engineering consultant. He joined IBM as a senior research engineer in 1941, and soon became manager of the information retrieval research division. His introduction to the field of documentation/information science came in 1947, when he was asked to work on a problem brought to IBM by James Perry and Malcolm Dyson that involved searching for chemical compounds recorded in coded form.[4] He came up with a solution for that and other problems using punched cards, but often had to overcome the limitations of the available machines by coming up with new ways of using them. By the dawn of the computer age in the 1950s, software became the means to surmount the limitations inherent in the punched card machines of the past.
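The Luhn algorithm named above is a simple mod-10 checksum, still used to validate identification numbers such as credit-card numbers; a compact rendering of the standard description:

```python
def luhn_valid(number: str) -> bool:
    """Check a digit string with the Luhn mod-10 algorithm."""
    total = 0
    # from the rightmost digit, double every second digit;
    # if doubling yields a two-digit number, subtract 9
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

The string "79927398713" is a commonly cited valid example; changing any single digit breaks the check.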
Luhn spent greater and greater amounts of time on the problems of information retrieval and storage faced by libraries and documentation centers, and pioneered the use of data processing equipment in resolving these problems. "Luhn was the first, or among the first, to work out many of the basic techniques now commonplace in information science." These techniques included full-text processing; hash codes; Key Word in Context indexing (see also Herbert Marvin Ohlman); auto-indexing; automatic abstracting; and the concept of selective dissemination of information (SDI).

Luhn was a pioneer in hash coding. In 1953, he suggested putting information into buckets in order to speed up search. He applied his concepts not only to the efficient handling of numbers but to text as well. Luhn's methods were improved by computer scientists decades after his inventions. Today, hashing algorithms are essential for many applications, such as textual tools, cloud services, data-intensive research and cryptography, among numerous other uses. It is surprising that his name and contributions to information handling are largely forgotten.[5]

Two of Luhn's greatest achievements are the idea for an SDI system and the KWIC[6] method of indexing. Today's SDI systems owe a great deal to a 1958 paper by Luhn, "A Business Intelligence System",[7] which described an "automatic method to provide current awareness services to scientists and engineers" who needed help to cope with the rapid post-war growth of scientific and technical literature. Luhn apparently coined the term business intelligence in that paper.[8]

Luhn received the Award of Merit from the Association for Information Science and Technology in 1964.[9]
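Luhn's bucket idea described above survives in the modern chained hash table; a minimal sketch (the digit-folding hash used here is only illustrative, not Luhn's own scheme):

```python
def make_table(num_buckets=8):
    # each bucket holds the (key, value) pairs that hash to it
    return [[] for _ in range(num_buckets)]

def bucket_index(key, num_buckets):
    # a toy character-summing hash; real tables use stronger functions
    return sum(ord(c) for c in key) % num_buckets

def insert(table, key, value):
    table[bucket_index(key, len(table))].append((key, value))

def lookup(table, key):
    # only the one bucket is searched, which is the point of the scheme
    for k, v in table[bucket_index(key, len(table))]:
        if k == key:
            return v
    return None
```

Search time depends on the size of one bucket rather than of the whole collection, which is exactly the speed-up Luhn was after.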
https://en.wikipedia.org/wiki/Hans_Peter_Luhn
A nomogram (from Greek νόμος (nomos) 'law' and γράμμα (gramma) 'that which is drawn'), also called a nomograph, alignment chart, or abac, is a graphical calculating device, a two-dimensional diagram designed to allow the approximate graphical computation of a mathematical function. The field of nomography was invented in 1884 by the French engineer Philbert Maurice d'Ocagne (1862–1938) and used extensively for many years to provide engineers with fast graphical calculations of complicated formulas to a practical precision. Nomograms use a parallel coordinate system invented by d'Ocagne rather than standard Cartesian coordinates.

A nomogram consists of a set of n scales, one for each variable in an equation. Knowing the values of n − 1 variables, the value of the unknown variable can be found; or, by fixing the values of some variables, the relationship between the unfixed ones can be studied. The result is obtained by laying a straightedge across the known values on the scales and reading the unknown value from where it crosses the scale for that variable. The virtual or drawn line created by the straightedge is called an index line or isopleth.

Nomograms flourished in many different contexts for roughly 75 years because they allowed quick and accurate computations before the age of pocket calculators. Results from a nomogram are obtained very quickly and reliably by simply drawing one or more lines. The user does not have to know how to solve algebraic equations, look up data in tables, use a slide rule, or substitute numbers into equations to obtain results. The user does not even need to know the underlying equation the nomogram represents. In addition, nomograms naturally incorporate implicit or explicit domain knowledge into their design. For example, to create larger nomograms for greater accuracy the nomographer usually includes only scale ranges that are reasonable and of interest to the problem. Many nomograms include other useful markings such as reference labels and colored regions.
All of these provide useful guideposts to the user.

Like a slide rule, a nomogram is a graphical analog computation device. Also like a slide rule, its accuracy is limited by the precision with which physical markings can be drawn, reproduced, viewed, and aligned. Unlike the slide rule, which is a general-purpose computation device, a nomogram is designed to perform a specific calculation, with tables of values built into the device's scales. Nomograms are typically used in applications for which the level of accuracy they provide is sufficient and useful. Alternatively, a nomogram can be used to check an answer obtained by a more exact but error-prone calculation.

Other types of graphical calculators, such as intercept charts, trilinear diagrams, and hexagonal charts, are sometimes called nomograms. These devices do not meet the definition of a nomogram as a graphical calculator whose solution is found by the use of one or more linear isopleths.

A nomogram for a three-variable equation typically has three scales, although there exist nomograms in which two or even all three scales are common. Here two scales represent known values and the third is the scale where the result is read off. The simplest such equation is u1 + u2 + u3 = 0 for the three variables u1, u2 and u3. An example of this type of nomogram is shown on the right, annotated with terms used to describe the parts of a nomogram.

More complicated equations can sometimes be expressed as the sum of functions of the three variables. For example, the nomogram at the top of this article could be constructed as a parallel-scale nomogram because it can be expressed as such a sum after taking logarithms of both sides of the equation. The scale for the unknown variable can lie between the other two scales or outside of them. The known values of the calculation are marked on the scales for those variables, and a line is drawn between these marks.
The result is read off the unknown scale at the point where the line intersects that scale. The scales include 'tick marks' to indicate exact number locations, and they may also include labeled reference values. These scales may be linear, logarithmic, or have some more complex relationship.

The sample isopleth shown in red on the nomogram at the top of this article calculates the value of T when S = 7.30 and R = 1.17. The isopleth crosses the scale for T at just under 4.65; a larger figure printed in high resolution on paper would yield T = 4.64 to three-digit precision. Note that any variable can be calculated from values of the other two, a feature of nomograms that is particularly useful for equations in which a variable cannot be algebraically isolated from the other variables.

Straight scales are useful for relatively simple calculations, but for more complex calculations the use of simple or elaborate curved scales may be required. Nomograms for more than three variables can be constructed by incorporating a grid of scales for two of the variables, or by concatenating individual nomograms of fewer numbers of variables into a compound nomogram.

Nomograms have been used in an extensive array of applications.

The nomogram below performs the computation

f(A, B) = 1/(1/A + 1/B) = AB/(A + B).

This nomogram is interesting because it performs a useful nonlinear calculation using only straight-line, equally graduated scales. While the diagonal line has a scale √2 times larger than the axes scales, the numbers on it exactly match those directly below or to its left, and thus it can be easily created by drawing a straight line diagonally on a sheet of graph paper. A and B are entered on the horizontal and vertical scales, and the result is read from the diagonal scale. Being proportional to the harmonic mean of A and B, this formula has several applications.
For example, it is the parallel-resistance formula in electronics, and the thin-lens equation in optics. In the example, the red line demonstrates that parallel resistors of 56 and 42 ohms have a combined resistance of 24 ohms. It also demonstrates that an object at a distance of 56 cm from a lens whose focal length is 24 cm forms a real image at a distance of 42 cm.

The nomogram below can be used to perform an approximate computation of some values needed when performing a familiar statistical test, Pearson's chi-squared test. This nomogram demonstrates the use of curved scales with unevenly spaced graduations. The relevant expression is

(OBS − EXP)² / EXP.

The scale along the top is shared among five different ranges of observed values: A, B, C, D and E. The observed value is found in one of these ranges, and the tick mark used on that scale is found immediately above it. Then the curved scale used for the expected value is selected based on the range. For example, an observed value of 9 would use the tick mark above the 9 in range A, and curved scale A would be used for the expected value. An observed value of 81 would use the tick mark above 81 in range E, and curved scale E would be used for the expected value. This allows five different nomograms to be incorporated into a single diagram. In this manner, the blue line demonstrates the computation of (9 − 5)² / 5 = 3.2, and the red line demonstrates the computation of (81 − 70)² / 70 ≈ 1.7.

In performing the test, Yates's correction for continuity is often applied, and simply involves subtracting 0.5 from the observed values. A nomogram for performing the test with Yates's correction could be constructed simply by shifting each "observed" scale half a unit to the left, so that the 1.0, 2.0, 3.0, ... graduations are placed where the values 0.5, 1.5, 2.5, ... appear on the present chart.

Although nomograms represent mathematical relationships, not all are mathematically derived.
The following one was developed graphically to achieve appropriate end results that could readily be defined by the product of their relationships in subjective units rather than numerically. The use of non-parallel axes enabled the non-linear relationships to be incorporated into the model.

The numbers in square boxes denote the axes requiring input after appropriate assessment. The pair of nomograms at the top of the image determine the probability of occurrence and the availability, which are then incorporated into the bottom multistage nomogram. Lines 8 and 10 are 'tie lines' or 'pivot lines' and are used for the transition between the stages of the compound nomogram. The final pair of parallel logarithmic scales (12) are not nomograms as such, but reading-off scales to translate the risk score (11, remote to extremely high) into a sampling frequency to address safety aspects and other 'consumer protection' aspects respectively. This stage requires political 'buy-in' balancing cost against risk. The example uses a three-year minimum frequency for each, though with the high-risk end of the scales different for the two aspects, giving different frequencies for the two, but both subject to an overall minimum sampling of every food for all aspects at least once every three years.

This risk assessment nomogram was developed by the UK Public Analyst Service with funding from the UK Food Standards Agency for use as a tool to guide the appropriate frequency of sampling and analysis of food for official food control purposes, intended to be used to assess all potential problems with all foods, although not yet adopted.

Using a ruler, one can readily read the missing term of the law of sines or the roots of the quadratic and cubic equations.[4]
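The worked examples above (the parallel-scale harmonic-mean nomogram and the chi-squared chart) can be checked numerically:

```python
def harmonic_combine(a, b):
    # the nomogram's function: f(A, B) = 1 / (1/A + 1/B) = AB / (A + B)
    return a * b / (a + b)

def chi_term(obs, exp):
    # one term of Pearson's chi-squared statistic: (OBS - EXP)^2 / EXP
    return (obs - exp) ** 2 / exp
```

harmonic_combine(56, 42) gives 24.0, matching the red-line reading for the parallel resistors (and the thin-lens example), while chi_term(9, 5) = 3.2 and chi_term(81, 70) ≈ 1.73 match the blue- and red-line readings on the chi-squared chart.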
https://en.wikipedia.org/wiki/Nomogram
The sector, also known as a sector rule, proportional compass, or military compass, is a major calculating instrument that was in use from the end of the sixteenth century until the nineteenth century. It is an instrument consisting of two rulers of equal length joined by a hinge. A number of scales are inscribed upon the instrument which facilitate various mathematical calculations. It is used for solving problems in proportion, multiplication and division, geometry, and trigonometry, and for computing various mathematical functions, such as square roots and cube roots. Its several scales permitted easy and direct solutions of problems in gunnery, surveying and navigation. The sector derives its name from the fourth proposition of the sixth book of Euclid, where it is demonstrated that similar triangles have their like sides proportional. Some sectors also incorporated a quadrant, and sometimes a clamp at the end of one leg which allowed the device to be used as a gunner's quadrant.

The sector was invented, essentially simultaneously and independently, by a number of different people prior to the start of the 17th century.

Fabrizio Mordente (1532 – c. 1608) was an Italian mathematician who is best known for his invention of the "proportional eight-pointed compass", which has two arms with cursors that allow the solution of problems in measuring the circumference, area and angles of a circle. In 1567 he published a single-sheet treatise in Venice showing illustrations of his device.[1] In 1585 Giordano Bruno used Mordente's compass to refute Aristotle's hypothesis on the incommensurability of infinitesimals, thus confirming the existence of the "minimum" which laid the basis of his own atomic theory.[2] Guidobaldo del Monte developed a "polymetric compass" c. 1570, including a scale for constructing regular polygons.
The Italian astronomer Galileo Galilei added further scales in the 1590s, and published a book on the subject in 1606.[3] Galileo's sector was first designed for military applications, but evolved into a general-purpose calculating tool.

The two earliest known sectors in England were made by Robert Beckit and Charles Whitwell, respectively, both dated 1597. These have a strong resemblance to the description of the device given in English mathematician Thomas Hood's 1598 book.[3] The sector Hood described was intended for use as a surveying instrument and included sights and a mounting socket for attaching the instrument to a pole or post, as well as an arc scale and an additional sliding leg. In the 1600s, the British mathematician Edmund Gunter dispensed with the accessories but added additional scales, including a meridian line with divisions proportional to the spacing of latitudes along a meridian on the Mercator projection,[4] privately distributing a Latin manuscript explaining its construction and use. Gunter published this in English as De Sectore et Radio in 1623.

Galileo first developed his sector in the early 1590s as a tool for artillerymen. By 1597 it had evolved into an instrument that had much broader utility. It could be used, for example, to calculate the area of any plane figure constructed from a combination of straight lines and semi-circles. Galileo was determined to improve his sector so that it could be used to calculate the area of any shape discussed in Euclid's Elements. To do this, he needed to add the capability to calculate the area of circular segments. It took him more than a year to solve this problem. The instrument we know today as Galileo's sector is the version with this added capability that he began to produce in 1599 with the help of the instrument maker Marc'Antonio Mazzoleni.
Galileo provided Mazzoleni and his family with room and board, and paid him two-thirds of the 35-lire selling price; Galileo would charge 120 lire for a course teaching the use of the instrument, about half the annual wage of a skilled craftsman.[5] Most of his customers were wealthy noblemen, including Archduke Ferdinand, to whom Galileo sold a sector made of silver. More than a hundred were made in all, but only three are known to exist today: one in the Putnam Gallery at Harvard University, one in the Museum of Decorative Art in Milan's Castello Sforzesco, and one in the Galileo Museum in Florence.

Galileo described how to perform 32 different calculations with the sector in his 1606 manual.[6] In the introduction, Galileo wrote that his intention in producing the sector was to enable people who had not studied mathematics to perform complex calculations without having to know the mathematical details involved. The sector was used in combination with a divider, also called a compass. Each arm of the sector was marked with four lines on the front, and three on the back, and the pivot had a dimple that would accept the point of a divider. The lines and scales on each arm were identical, and arranged in the same order as one moved from the inner edge to the outer edge, thus forming seven pairs of lines.

All the calculations could be performed with some combination of five very simple steps: measuring some length, separation or object width with the divider; opening the arms of the sector and setting the crosswise distance between two corresponding points on a pair of lines to the divider separation; measuring the crosswise distance between two corresponding points on a pair of lines once the sector had been set to some separation; reading a value from one of the scales at a point where the crosswise distance matches a divider separation; and reading a value off a scale where the distance from the pivot matches a divider separation.
Galileo did not describe how the scales were constructed; he considered that a trade secret, but the details can be inferred. Scale markings were placed with an accuracy of about 1%.

The innermost scales of the instrument are called the arithmetic lines from their division in arithmetic progression, that is, a linear scale. The sector in the Galileo Museum is marked from 16 to 260.[7] If we call the length from the pivot A, then given two marks with values n1 and n2, the ratios of their lengths are in proportion to the ratios of the numbers. In modern notation:

A(n1) / A(n2) = n1 / n2.

Galileo describes how to use these scales to divide a line into a number of equal parts, how to measure any fraction of a line, how to produce a scaled version of a figure or map, how to solve Euclid's Golden Rule (also called the Rule of Three), how to convert a value in one currency into the value in another currency, and how to calculate the compounded value of an investment.

As an example, the procedure for calculating the compounded value of an investment is as follows. If the initial investment is P0, set the divider to the distance from the pivot to the point marked P0 on the arithmetic lines. Open the instrument and set the crosswise distance at the point 100–100 on the arithmetic lines to the distance just measured to P0. If the interest rate for the period is, say, 6%, then set the divider to the crosswise distance at 106–106. Place the divider at the pivot, and see where the other end falls on the arithmetic lines. This is the value of the investment at the end of the first period. Now set the crosswise distance at 100–100 again to the current divider separation and repeat the procedure for as many periods as needed.
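The compound-interest procedure above repeatedly rescales the running value by the ratio 106:100; arithmetically it amounts to the following sketch (not Galileo's own notation):

```python
def compound(principal, rate_percent, periods):
    # each pass over the sector scales the running value by (100 + rate) / 100,
    # which is what resetting the 100-100 crosswise distance accomplishes
    value = principal
    for _ in range(periods):
        value = value * (100 + rate_percent) / 100
    return value
```

For an initial investment of 100 at 6% per period, two passes give 112.36, the value read off the arithmetic lines after the second repetition.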
The next set of lines are called the geometric lines, which have a scale numbered from 1 to 50, with lengths proportional to the square root, called geometric because they are used for finding the geometric mean and working with areas of plane figures. If we call the length from the pivot G, then:

G(n1) / G(n2) = √n1 / √n2.

Galileo describes how to use these lines to scale a figure such that the new figure has a given area ratio to the original, how to measure the area ratio of two similar figures, how to combine a set of similar figures into another similar figure such that the resulting figure has the combined area of the set, how to construct a similar figure that has area equal to the difference in area of two other similar figures, how to find the square root of a number, how to arrange N soldiers into a grid where the ratio of rows to columns is some specified value, and how to find the geometric mean of two numbers.

As an example, the procedure for producing a similar figure that has the combined area of a set of similar figures is as follows. Choose a side in the largest figure and measure its length with a divider. Open the sector and set the crosswise distance at some intermediate value on the geometric lines to the divider separation; any number will do, say 20. Then measure the length of the corresponding side in each of the other figures, and read the geometric-line scale value where the crosswise distance matches these lengths. Add together all the scale readings, including the 20 we originally set. At the combined value on the geometric lines, measure the crosswise distance. This will be the length of the side of the figure that has the combined area of the set. You can then use the arithmetic scale to scale all the other side lengths in the largest figure to match. This procedure will work for any closed figure made from straight lines.

The procedure for calculating a square root varies depending on the size of the radicand.
For a "medium" number ("in the region of 5,000"), start by measuring the distance from the pivot to the point marked 40 on the arithmetic lines, and setting the crosswise distance of the sector at 16–16 on the geometric lines to this distance. Next take your number and divide by 100, rounding to the nearest integer. So for example 8679 becomes 87. If this number is greater than 50 (the largest value on the geometric-lines scale) then it must be reduced, in this example perhaps divided by 3 to make 29. Next measure the crosswise distance on the geometric lines at 29; this distance on the arithmetic lines represents √2900. Because our number was reduced to fit on the sector, we must scale the length up by √3. We can choose any convenient value, e.g. 10, setting the sector crosswise distance at 10 to the divider separation, and then measure the crosswise distance at 30 on the geometric lines, then place the divider against the arithmetic lines to measure √8700, which is close enough to √8679.

The procedure for calculating the square root of a "small" number, a number "around 100", is simpler: we do not bother dividing by 100 at the beginning but otherwise perform the same procedure. At the end, divide the resulting square-root estimate by 10.

For "large" numbers ("around 50,000"), set the sector crosswise at 10–10 on the geometric lines to the distance from the pivot to the point at 100 on the arithmetic lines. Divide the number by 1000 and round to the nearest integer. Then follow a similar procedure as before.

Galileo provides no further guidance or refinement. Knowing which procedure to use for a given number requires some thought, and an appreciation for the propagation of uncertainty.

The stereometric lines are so called because they relate to stereometry, the geometry of three-dimensional objects.
The scale is marked to 148, and the distance from the pivot is proportional to the cube root of the marked value: if we call the length from the pivot to the mark n S(n), then S(n) ∝ ∛n. These lines operate in an analogous way to the geometric lines, except that they deal with volumes instead of areas. Galileo describes how to use these lines to find the corresponding side length in a similar solid where the solid has a given volume ratio to the original, how to determine the volume ratio of two similar solids given the lengths of a pair of corresponding sides, how to find the side lengths of a similar solid that has the combined volume of a set of other similar solids, how to find the cube root of a number, how to find the two values intermediate between two numbers p and q such that n₁ = rp, n₂ = r²p and q = r³p for a given scaling factor r, and how to find the side of a cube that has the same volume as a rectangular cuboid (square-cornered box).
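The two intermediate values are the two mean proportionals between p and q: with r = ∛(q/p), each step multiplies by the same factor, so p, n₁, n₂, q form a geometric progression. A small illustration:

```python
def mean_proportionals(p, q):
    """Two values between p and q forming a geometric progression
    p, n1, n2, q, i.e. n1 = r*p, n2 = r^2*p, q = r^3*p."""
    r = (q / p) ** (1.0 / 3.0)
    return r * p, r * r * p

n1, n2 = mean_proportionals(2.0, 16.0)   # r = 2, so n1 = 4, n2 = 8
```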
To cube a rectangular cuboid of sides a, b, and c amounts to computing s = ∛(abc). Galileo's method is to first use the geometric lines to find the geometric mean of two of the sides, g = √(ab). He then measures the distance along the arithmetic lines to the point marked g using a divider, and sets the sector crosswise to this distance at the point marked g on the stereometric lines, calibrating the sector so that the distance from the pivot to the point g on the stereometric lines represents ∛(ab), the side of a cube with the volume of a cuboid with sides a, b, and 1. He then measures the distance from the pivot to the point marked c on the arithmetic lines, and sees at what value on the stereometric lines this distance fits crosswise, thus multiplying the previous result by ∛c and yielding ∛(abc) as desired. The procedure for calculating cube roots is like that used for square roots, except that it only works for values of 1,000 or more. For "medium" numbers we set the sector crosswise at 64–64 on the stereometric lines to the distance from the pivot to the point marked 40 on the arithmetic lines. We then drop the last three digits from our number, and if the number we dropped was more than 500, we add one to the remainder. We measure the crosswise distance on the stereometric lines at the remainder value, and place this against the arithmetic lines to find the cube root. The largest number that can be handled without rescaling here is 148,000. For "large" numbers we set the sector crosswise at 100–100 on the stereometric lines to the distance from the pivot to the point 100 on the arithmetic lines, and instead of dropping three digits, we drop four.
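The cuboid method works because ∛(abc) = ∛(g²·c) with g = √(ab); the sector performs the root extractions in sequence. A check of that identity (plain arithmetic, not a simulation of the instrument):

```python
def cube_side_of_cuboid(a, b, c):
    """Side of the cube with the volume of an a x b x c cuboid,
    computed the way the sector factors it: via g = sqrt(a*b)."""
    g = (a * b) ** 0.5                    # from the geometric lines
    return (g * g * c) ** (1.0 / 3.0)     # cube root via the stereometric lines

s = cube_side_of_cuboid(2.0, 4.0, 27.0)   # volume 216 -> side 6
```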
This can handle numbers from 10,000 up to 1,480,000 without rescaling. For practical use, you should use the medium-number procedure for all values up to 148,000 that are not within about 2% of a multiple of 10,000. The metallic lines, the outermost pair on the front face, are marked with the symbols "ORO" (for oro, gold), "PIO" (for piombo, lead), "AR" (for argento, silver), "RA" (for rame, copper), "FE" (for ferro, iron), "ST" (for stagno, tin), "MA" (for marmo, marble), and "PIE" (for pietra, stone). These symbols are arranged by decreasing specific weight or density, with distance from the pivot proportional to the inverse cube root of the density: given two materials of density ρ₁ and ρ₂, if we call the lengths from the pivot M₁ and M₂, then M₁/M₂ = (ρ₂/ρ₁)^(1/3). The ratio of lengths on this scale is therefore the ratio of diameters of two balls of the same weight but different materials. These lines were of interest to artillerymen for solving the problem of "making the caliber", that is, figuring out the correct powder charge to use for a cannonball of some size and material when the correct charge is known for a cannonball of a different size and material. To do that, you would measure the diameter of the cannonball with the known charge and set the sector crosswise at this cannonball's material mark on the metallic lines to that diameter. The crosswise distance at the second cannonball's material mark gives the diameter of a cannonball in that material with the same weight as the first ball. We then need to scale this length down stereometrically to the given diameter of the second ball to get the correct charge, so we set the crosswise distance on the stereometric lines at 100–100 to the crosswise distance we just measured from the metallic lines, and then see where the crosswise distance on the stereometric lines matches the actual diameter of the second ball.
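Equal weight means ρ₁·d₁³ = ρ₂·d₂³, so the metallic-lines reading gives d₂ = d₁·(ρ₁/ρ₂)^(1/3); the stereometric step then converts the actual second diameter back into a volume (weight) ratio. A sketch of the first step, with approximate density values chosen only for illustration:

```python
def equal_weight_diameter(d1, rho1, rho2):
    """Diameter of a ball in material 2 weighing the same as a ball
    of diameter d1 in material 1 (rho1 * d1^3 = rho2 * d2^3)."""
    return d1 * (rho1 / rho2) ** (1.0 / 3.0)

# An iron ball of diameter 10 and the equal-weight lead ball
# (densities ~7.9 and ~11.3 g/cm^3, for illustration only).
d_lead = equal_weight_diameter(10.0, 7.9, 11.3)   # smaller than 10
```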
The charge required is then in the ratio of this scale reading to 100, compared to the ball with the known charge; you can then use the arithmetic lines to scale the charge weight in this ratio. The polygraphic lines, the innermost scale on the back of the instrument, are labelled from 3 to 15, and the distance from the pivot is inversely proportional to the side length of a regular polygon of n sides inscribed in a given circle, or directly proportional to the circumradius of a regular polygon of n sides of a given side length. If P(n) is the length on the polygraphic scale and crd represents the trigonometric chord length of a circular arc measured in degrees, then P(n) ∝ 1/crd(360°/n). Using functional notation in terms of the modern sine function, P(n) ∝ 1/(2 sin(180°/n)); since the side of a hexagon equals its circumradius R, P(6) = R. These lines can be used to aid in the construction of any regular polygon from the 3-sided equilateral triangle to the 15-sided pentadecagon. Galileo describes how to use these lines to find the radius of an enclosing circle for a polygon of n sides of a given length or, in the other direction, how to find the length of a chord that divides the circumference of a circle into n parts. The procedure for finding the radius of the enclosing circle is as follows: open the sector and set the crosswise distance at the point 6–6 on the polygraphic lines to the desired side length; the distance measured crosswise at n on the polygraphic lines is then the radius of the enclosing circle. The tetragonic lines are marked from 13 down to 3 as you move away from the pivot, and the distance from the pivot can be inferred to be L(n) = L_t·√(n / (√3·tan(180°/n))), where L_t is the distance from the pivot to the point marked 3. There is a circle on the scale that lies nearly midway between 6 and 7.
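The 6–6 setting works because a regular hexagon's side equals its circumradius, so the crosswise distance at n simply rescales the side length by the ratio of circumradii. The same relation in modern form, R = s / (2·sin(π/n)):

```python
import math

def circumradius(n, side):
    """Radius of the circle circumscribing a regular n-gon of the
    given side length: R = side / (2 * sin(pi / n))."""
    return side / (2.0 * math.sin(math.pi / n))

r_hex = circumradius(6, 1.0)       # hexagon: radius equals the side
r_square = circumradius(4, 1.0)    # square: sqrt(2)/2 times the side
```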
The name comes from tetragon (quadrilateral), as the main purpose of these lines is the quadrature of regular polygons, that is, finding the side of a square whose area is the same as that of a given regular polygon. They can also be used to square the circle. The area of a regular polygon with n sides is A(n) = L²·n / (4 tan(180°/n)), where L is the side length of the polygon. The radius of the circle with equal area is r = L·√(n / (4π tan(180°/n))). The value of n at which the radius of the equal-area circle equals the side length of the polygon is n = 6.5437. There is, of course, no such polygon, but this gives us a reference point on the tetragonic lines, the indicated circle, where it is easy to read off crosswise the radius of the circle that is equal in area to the polygon with n sides, if we set the sector at n on the tetragonic lines crosswise to the polygon side length. Squaring the circle is then just using n = 4. To square the polygon, all we do is set the sector crosswise at n to the side length, and measure crosswise at n = 4. It is just as easy to find the required side lengths for any two polygons of equal area with different numbers of sides. The outermost set of lines on the back have a double scale, an outer and an inner scale. The outer scale is linear and runs from 18 down to 0 as you move away from the pivot, and the zero point is marked with a ⌓, the symbol for a circular segment. This zero point is about 70% of the way out along the arm. The inner scale is also described to run from 18 down to 0, but the sector in the Galileo Museum is only marked from 17.
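A numeric check of the quadrature relations: the side of the equal-area square for an n-gon of side L, the equal-area circle radius, and the "virtual polygon" value n ≈ 6.5437 at which that radius equals the side length:

```python
import math

def polygon_area(n, L):
    """Area of a regular n-gon with side length L."""
    return L * L * n / (4.0 * math.tan(math.pi / n))

def equal_square_side(n, L):
    """Side of the square with the same area as the n-gon."""
    return math.sqrt(polygon_area(n, L))

def equal_circle_radius(n, L):
    """Radius of the circle with the same area as the n-gon."""
    return math.sqrt(polygon_area(n, L) / math.pi)

# At n ~ 6.5437 the equal-area circle's radius matches the side length,
# the reference point marked by the small circle on the scale.
r = equal_circle_radius(6.5437, 1.0)
```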
The zero point on the inner scale lies further out on the arm, at a distance of √(π/2)·L_a, where L_a is the distance from the pivot to the zero on the outer scale, and this zero is marked with a small square. The outer scale's zero lies close to the point marked 6 on the inner scale. The inner scale at first glance also appears linear, but its point spacings are actually determined by a fairly complex formula, which we have to infer because Galileo does not describe how this scale was constructed. The name of these lines derives from the fact that they were added by Galileo to an earlier version of his sector. These lines are used for squaring circular segments, that is, finding the side length of a square that is equal in area to a circular segment with a given chord length and height, where the segment is at most a semicircle. The procedure for squaring a circular segment is as follows:

1. Measure the half-length of the chord, c.
2. At the chord midpoint, measure the length of the line perpendicular to the chord to where it intersects the circle, the height h.
3. Set the sector crosswise on the added lines at the zero of the outer scale to the half-chord length, c.
4. Find the point on the outer scale, n, where the crosswise distance is h; h must be less than or equal to c.
5. Move to the point on the inner scale that is also marked n. The crosswise distance between the points n–n on the inner scale is the side length of the square equal in area to the circular segment.

To see how this works, we start by noting (as can be seen in the figure in circular segment) that the area of the segment is the difference between the area of the pie slice defined by where the chord cuts the circle, and the triangle formed by the chord and the two radii that touch the ends of the chord.
The base of the triangle has length 2c, and the height of the triangle is r − h, so the area of the triangle is A_t = c(r − h). Using Pythagoras's theorem, we can show that r = (c² + h²) / 2h. The area of the pie slice is the fraction of the area of the circle covered by the angle θ. For θ in radians, this area is A_pie = (θ/2)·r² = r²·arcsin(c/r), where arcsin is the inverse sine function. If we define x = h/c and z = (1 + x²)/2x, then we can write the area of the segment as A_seg = c²(z²·arcsin(1/z) − z + x). The distance from the pivot to the point marked n on the outer scale is L_outer(n) = L_a(1 − n/20), where L_a is the distance from the pivot to the zero point on the outer scale. When we set the sector crosswise to c at the zero point and find the point on the outer scale where the crosswise distance is h, we set up a pair of similar triangles that share the angle made by the arms of the sector at the pivot, so that h/c = 1 − n/20. If we set the distance of the point n from the pivot on the inner scale to L_inner(n) = L_a·√(z²·arcsin(1/z) − z + x), with x = 1 − n/20 and z defined as before, then the crosswise distance measured at n on the inner scale will be the side length of the square with area equal to that of the segment. The sector came with a plumb bob and a detachable quadrant which, when in place, would lock the arms at 90° to each other.
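Putting the pieces together numerically: given half-chord c and height h, the segment area is A_seg = c²·(z²·arcsin(1/z) − z + x) with x = h/c and z = (1 + x²)/(2x), and the square's side is √A_seg. For h = c the segment is a semicircle of radius c, so the area reduces to πc²/2:

```python
import math

def segment_square_side(c, h):
    """Side of the square equal in area to a circular segment with
    half-chord c and height h (h <= c, i.e. at most a semicircle)."""
    x = h / c
    z = (1.0 + x * x) / (2.0 * x)    # z = r/c, with r the circle radius
    area = c * c * (z * z * math.asin(1.0 / z) - z + x)
    return math.sqrt(area)

side = segment_square_side(1.0, 1.0)   # semicircle of radius 1
```

The same area can be checked against the direct pie-minus-triangle form, A_seg = r²·arcsin(c/r) − c(r − h) with r = (c² + h²)/2h.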
The sector could then be used for sighting and distance measurements usingtriangulation, with applications in surveying and ballistics. The sector could also be used to easily determine the elevation of a cannon by inserting one arm into the barrel and reading the elevation from the location of the plumb bob.
https://en.wikipedia.org/wiki/Sector_(instrument)
A slide chart is a hand-held device, usually of paper, cardboard, or plastic, for conducting simple calculations or looking up information. A circular slide chart is sometimes referred to as a wheel chart or volvelle. Unlike other hand-held mechanical calculating devices such as slide rules and addiators, which have been replaced by electronic calculators and computer software, wheel charts and slide charts have survived to the present time, and a number of companies design and manufacture these devices. Unlike general-purpose mechanical calculators, slide charts are typically devoted to carrying out a particular specialized calculation, or to displaying information on a single product or a particular process. For example, the "Curveasy" wheel chart displays information related to spherical-geometry calculations, and the Prestolog calculator is used for cost/profit calculations. Another example of a wheel chart is the planisphere, which shows the location of stars in the sky for a given location, date, and time. Slide charts are often associated with particular sports, political campaigns, or commercial companies; for example, a pharmaceutical company may create wheel charts printed with its company name and product information for distribution to medical practitioners. Slide charts are common collectables.
https://en.wikipedia.org/wiki/Slide_chart
Timeline of computing presents events in the history of computing organized by year and grouped into six topic areas: predictions and concepts, first use and inventions, hardware systems and processors, operating systems, programming languages, and new application areas. Detailed computing timelines: before 1950, 1950–1979, 1980–1989, 1990–1999, 2000–2009, 2010–2019, 2020–present.
https://en.wikipedia.org/wiki/Timeline_of_computing
A vernier scale (/vərˈniːər/ ver-NEE-er), named after Pierre Vernier, is a visual aid for taking an accurate measurement reading between two graduation markings on a linear scale by using mechanical interpolation, which increases resolution and reduces measurement uncertainty by exploiting vernier acuity. It may be found on many types of instrument for measuring length or angles, but in particular on the vernier caliper, which measures lengths of human-scale objects (including internal and external diameters). The vernier is a subsidiary scale replacing a single measured-value pointer, and has, for instance, ten divisions equal in distance to nine divisions on the main scale. The interpolated reading is obtained by observing which of the vernier-scale graduations is coincident with a graduation on the main scale, which is easier to perceive than visual estimation between two points. Such an arrangement can go to a higher resolution by using a higher scale ratio, known as the vernier constant. A vernier may be used on circular or straight scales where a simple linear mechanism is adequate. Examples are calipers and micrometers for measuring to fine tolerances, sextants for navigation, theodolites in surveying, and scientific instruments generally. The vernier principle of interpolation is also used for electronic displacement sensors such as absolute encoders to measure linear or rotational movement, as part of an electronic measuring system. The first caliper with a secondary scale, which contributed extra precision, was invented in 1631 by the French mathematician Pierre Vernier (1580–1637).[1] Its use was described in detail in English in Navigatio Britannica (1750) by the mathematician and historian John Barrow.[2] While calipers are the most typical use of vernier scales today, they were originally developed for angle-measuring instruments such as astronomical quadrants.
In some languages, the vernier scale is called a 'nonius' after the Portuguese mathematician and cosmographer Pedro Nunes (Latin: Petrus Nonius, 1502–1578). In English, this term was used until the end of the 18th century;[3] nonius now refers to an earlier instrument that Nunes developed. The name "vernier" was popularised by the French astronomer Jérôme Lalande (1732–1807) through his Traité d'astronomie (2 vols, 1764).[4] The use of the vernier scale is shown on a vernier caliper, which measures the internal and external diameters of an object. The vernier scale is constructed so that it is spaced at a constant fraction of the fixed main scale. So for a vernier with a constant of 0.1, each mark on the vernier is spaced 9/10 of the spacing of those on the main scale. If you put the two scales together with zero points aligned, the first mark on the vernier scale is 1/10 short of the first main-scale mark, the second is 2/10 short, and so on up to the ninth mark, which is misaligned by 9/10. Only when a full ten marks are counted is there alignment, because the tenth mark is 10/10, a whole main-scale unit, short, and therefore aligns with the ninth mark on the main scale. (In other words, each vernier-scale division is 0.9 main-scale divisions, so ten vernier divisions span nine main-scale divisions.) Now if you move the vernier by a small amount, say 1/10 of a main-scale division, the only pair of marks that come into alignment are the first pair, since these were the only ones originally misaligned by 1/10. If we move it 2/10, the second pair aligns, since these are the only ones originally misaligned by that amount. If we move it 5/10, the fifth pair aligns, and so on. For any movement, only one pair of marks aligns, and that pair shows the value between the marks on the fixed scale. The difference between the value of one main-scale division and the value of one vernier-scale division is known as the least count of the vernier, also known as the vernier constant.
Let the measure of the smallest main-scale reading, that is, the distance between two consecutive graduations (also called its pitch), be S, and the distance between two consecutive vernier-scale graduations be V, such that the length of (n − 1) main-scale divisions is equal to n vernier-scale divisions. Then nV = (n − 1)S, so the least count is S − V = S/n. Vernier scales work so well because most people are especially good at detecting which of the lines is aligned and which misaligned, an ability that improves with practice and in fact far exceeds the optical capability of the eye. This ability to detect alignment is called vernier acuity.[5] Historically, none of the alternative technologies exploited this or any other hyperacuity, giving the vernier scale an advantage over its competitors.[6] Zero error is defined as the condition in which a measuring instrument registers a nonzero value at the zero position. In the case of vernier calipers it occurs when the zero on the main scale does not coincide with the zero on the vernier scale. The zero error may be of two types: when the scale reads towards numbers greater than zero, it is positive; otherwise it is negative. The method for using a vernier scale or caliper with zero error is to apply the correction: actual reading = observed reading − zero error. Zero error may arise from knocks or other damage which cause the 0.00 mm marks to be misaligned when the jaws are perfectly closed or just touching each other. Positive zero error refers to the case when the jaws of the vernier caliper are just closed and the reading is positive, away from the actual reading of 0.00 mm; if the reading is 0.10 mm, the zero error is referred to as +0.10 mm. Negative zero error refers to the case when the jaws are just closed and the reading is negative, away from the actual reading of 0.00 mm; if the reading is 0.08 mm below zero, the zero error is referred to as −0.08 mm. If positive, the error is subtracted from the mean reading the instrument gives.
Thus if the instrument reads 4.39 cm and the error is +0.05 cm, the actual length will be 4.39 − 0.05 = 4.34 cm. If negative, the error is added to the mean reading the instrument gives; thus if the instrument reads 4.39 cm and, as above, the error is −0.05 cm, the actual length will be 4.39 + 0.05 = 4.44 cm. (Put differently, the quantity called the zero correction should always be added algebraically to the observed reading to obtain the corrected value.) Direct verniers are the most common. The indicating scale is constructed so that when its zero point coincides with the start of the data scale, its graduations are at a slightly smaller spacing than those on the data scale, and so none but the last graduation coincides with any graduation on the data scale: N graduations of the indicating scale cover N − 1 graduations of the data scale. Retrograde verniers are found on some devices, including surveying instruments.[7] A retrograde vernier is similar to the direct vernier, except that its graduations are at a slightly larger spacing than on the main scale: N graduations of the indicating scale cover N + 1 graduations of the data scale. The retrograde vernier also extends backwards along the data scale. Direct and retrograde verniers are read in the same manner. The vernier principle is also used to make fine-resolution measurements in other instruments. Vernier spectroscopy is a type of cavity-enhanced laser absorption spectroscopy that is especially sensitive to trace gases. The method uses a frequency-comb laser combined with a high-finesse optical cavity to produce an absorption spectrum in a highly parallel manner. The method is also capable of detecting trace gases in very low concentration, due to the enhancement effect of the optical resonator on the effective optical path length.[8]
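A small simulation of reading a ten-division vernier (least count 0.1 mm, assuming 1 mm main-scale divisions) together with the zero-error correction; the alignment search mirrors how the eye picks out the one coincident pair of marks:

```python
def vernier_reading(true_length, divisions=10):
    """Simulate a vernier read-out: main-scale integer part plus the
    index of the aligned vernier division times the least count."""
    pitch = 1.0                       # main-scale division (mm)
    least_count = pitch / divisions   # 0.1 mm for a 10-division vernier
    main = int(true_length // pitch)
    # Vernier mark k sits at true_length + k * (pitch - least_count);
    # it aligns with a main-scale mark when that position is an integer.
    def misalignment(k):
        pos = true_length + k * (pitch - least_count)
        return abs(pos - round(pos))
    best = min(range(divisions), key=misalignment)
    return main * pitch + best * least_count

def corrected(observed, zero_error):
    """Apply the correction: actual = observed - zero error."""
    return observed - zero_error

reading = vernier_reading(23.4)    # reads 23.4 mm
actual = corrected(4.39, 0.05)     # +0.05 zero error: 4.34
```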
https://en.wikipedia.org/wiki/Vernier_scale
A volvelle or wheel chart is a type of slide chart, a paper construction with rotating parts. It is considered an early example of a paper analog computer.[1] Volvelles have been produced to accommodate organization and calculation in many diverse subjects. Early examples of volvelles are found in the pages of astronomy books. In the twentieth century, the volvelle had many diverse uses. In Reinventing the Wheel, author Jessica Helfand introduces twentieth-century volvelles with this:

The twentieth century saw a robust growth in the design, manufacture, and production of a new generation of independent, free-standing volvelles. Categorically, they not only represent an unusually eclectic set of uses, but demonstrate, too, a remarkable range of stylistic, compositional, mechanical, informational, and kinetic conceits. There are volvelles that arrange their data peripherally, centrifugally, and radially; volvelles that use multiple concentric circles with pointers; and volvelles that benefit from the generous use of the die-cut, a particular technological hallmark of modern printing manufacture. Twentieth-century volvelles—often referred to as wheel charts—offer everything from inventory control to color calibration, mileage metering to verb conjugation. They anticipate animal breeding cycles and calculate radiation exposure, measure chocolate consumption and quantify bridge tips, chart bird calls, convert metrics, and calculate taxes. There are fortune-telling wheels and semaphore-charting wheels; emergency first-aid wheels and electronic fix-it wheels; playful wheels that test phonetics and prophylactic wheels that prevent pregnancy.[2]

The rock band Led Zeppelin employed a volvelle in the sleeve design for the album Led Zeppelin III (1970). Two games from the game company Infocom included volvelles inside their packages as "feelies": Sorcerer (1983) and A Mind Forever Voyaging (1985).
Both volvelles served to impede copying of the games, because they contained information needed to play the game.
https://en.wikipedia.org/wiki/Volvelle
This article lists the companies worldwide engaged in the development of quantum computing, quantum communication, and quantum sensing. Quantum computing and communication are two sub-fields of quantum information science, which describes and theorizes information science in terms of quantum physics. While the fundamental unit of classical information is the bit, the basic unit of quantum information is the qubit. Quantum sensing is the third main sub-field of quantum technologies; it focuses on exploiting the sensitivity of quantum states to the surrounding environment to perform measurements at the atomic scale.
https://en.wikipedia.org/wiki/List_of_companies_involved_in_quantum_computing_or_communication
An analog computer or analogue computer is a type of computation machine (computer) that uses physical phenomena such as electrical, mechanical, or hydraulic quantities behaving according to the mathematical principles in question (analog signals) to model the problem being solved. In contrast, digital computers represent varying quantities symbolically and by discrete values of both time and amplitude (digital signals). Analog computers can have a very wide range of complexity. Slide rules and nomograms are the simplest, while naval gunfire-control computers and large hybrid digital/analog computers were among the most complicated.[1] Complex mechanisms for process control and protective relays used analog computation to perform control and protective functions. Analog computers were widely used in scientific and industrial applications even after the advent of digital computers, because at the time they were typically much faster, but they started to become obsolete as early as the 1950s and 1960s, although they remained in use in some specific applications, such as aircraft flight simulators, the flight computer in aircraft, and for teaching control systems in universities. Perhaps the most relatable example of analog computers are mechanical watches, where the continuous and periodic rotation of interlinked gears drives the second, minute, and hour hands of the clock. More complex applications, such as aircraft flight simulators and synthetic-aperture radar, remained the domain of analog computing (and hybrid computing) well into the 1980s, since digital computers were insufficient for the task.[2] What follows is a list of examples of early computation devices considered precursors of the modern computer; some of them were even dubbed 'computers' by the press, though they may fail to fit modern definitions.
The Antikythera mechanism, a type of device used to determine the positions of heavenly bodies known as an orrery, was described as an early mechanical analog computer by the British physicist, information scientist, and historian of science Derek J. de Solla Price.[3] It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 150–100 BC, during the Hellenistic period. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later. Many mechanical aids to calculation and measurement were constructed for astronomical and navigational use. The planisphere was first described by Ptolemy in the 2nd century AD. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd century BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying, and navigation. The planimeter was a manual instrument for calculating the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide-rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry, and other functions.
Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time–distance problems in light aircraft. In 1831–1835, the mathematician and engineer Giovanni Plana devised a perpetual-calendar machine, which, through a system of pulleys and cylinders, could predict the perpetual calendar for every year from AD 0 (that is, 1 BC) to AD 4000, keeping track of leap years and varying day length.[4] The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 James Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. Several systems followed, notably those of the Spanish engineer Leonardo Torres Quevedo, who built various analog machines for solving real and complex roots of polynomials,[5][6][7] and of Michelson and Stratton, whose harmonic analyser performed Fourier analysis using an array of 80 springs rather than Kelvin integrators. This work led to the mathematical understanding of the Gibbs phenomenon of overshoot in Fourier representation near discontinuities.[8] In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. The Dumaresq was a mechanical calculating device invented around 1902 by Lieutenant John Dumaresq of the Royal Navy.
It was an analog computer that related vital variables of the fire-control problem to the movement of one's own ship and that of a target ship. It was often used with other devices, such as a Vickers range clock, to generate range and deflection data so the gun sights of the ship could be continuously set. A number of versions of the Dumaresq were produced of increasing complexity as development proceeded. By 1912, Arthur Pollen had developed an electrically driven mechanical analog computer for fire-control systems, based on the differential analyser. It was used by the Imperial Russian Navy in World War I.[9] Starting in 1929, AC network analyzers were constructed to solve calculation problems related to electrical power systems that were too large to solve with numerical methods at the time.[10] These were essentially scale models of the electrical properties of the full-size system. Since network analyzers could handle problems too large for analytic methods or hand computation, they were also used to solve problems in nuclear physics and in the design of structures. More than 50 large network analyzers had been built by the end of the 1950s. World War II-era gun directors, gun data computers, and bomb sights used mechanical analog computers. In 1942 Helmut Hölzer built a fully electronic analog computer at Peenemünde Army Research Center[11][12][13] as an embedded control system (mixing device) to calculate V-2 rocket trajectories from the accelerations and orientations (measured by gyroscopes) and to stabilize and guide the missile.[14][15] Mechanical analog computers were very important in gun fire control in World War II, the Korean War, and well past the Vietnam War; they were made in significant numbers. In the period 1930–1945 in the Netherlands, Johan van Veen developed an analogue computer to calculate and predict tidal currents when the geometry of the channels is changed.
Around 1950, this idea was developed into the Deltar, a hydraulic analogy computer supporting the closure of estuaries in the southwest of the Netherlands (the Delta Works). The FERMIAC was an analog computer invented by physicist Enrico Fermi in 1947 to aid in his studies of neutron transport.[16] Project Cyclone was an analog computer developed by Reeves in 1950 for the analysis and design of dynamic systems.[17] Project Typhoon was an analog computer developed by RCA in 1952. It consisted of over 4,000 electron tubes and used 100 dials and 6,000 plug-in connectors to program.[18] The MONIAC Computer was a hydraulic analogy of a national economy first unveiled in 1949.[19] Computer Engineering Associates was spun out of Caltech in 1950 to provide commercial services using the "Direct Analogy Electric Analog Computer" ("the largest and most impressive general-purpose analyzer facility for the solution of field problems") developed there by Gilbert D. McCann, Charles H. Wilts, and Bart Locanthi.[20][21] Educational analog computers illustrated the principles of analog calculation. The Heathkit EC-1, a $199 educational analog computer, was made by the Heath Company, US, c. 1960.[22] It was programmed using patch cords that connected nine operational amplifiers and other components.[23] General Electric also marketed an "educational" analog computer kit of a simple design in the early 1960s, consisting of two transistor tone generators and three potentiometers wired such that the frequency of the oscillator was nulled when the potentiometer dials were positioned by hand to satisfy an equation. The relative resistance of the potentiometer was then equivalent to the formula of the equation being solved. Multiplication or division could be performed, depending on which dials were inputs and which was the output. Accuracy and resolution were limited, and a simple slide rule was more accurate. However, the unit did demonstrate the basic principle.
Analog computer designs were published in electronics magazines. One example is the PEAC (Practical Electronics analogue computer), published in Practical Electronics in the January 1968 edition.[24] Another, more modern hybrid computer design was published in Everyday Practical Electronics in 2002.[25] An example described in the EPE hybrid computer was the flight of a VTOL aircraft such as the Harrier jump jet.[25] The altitude and speed of the aircraft were calculated by the analog part of the computer and sent to a PC via a digital microprocessor and displayed on the PC screen. In industrial process control, analog loop controllers were used to automatically regulate temperature, flow, pressure, or other process conditions. The technology of these controllers ranged from purely mechanical integrators, through vacuum-tube and solid-state devices, to emulation of analog controllers by microprocessors. The mathematical similarity between linear mechanical components, such as springs and dashpots (viscous-fluid dampers), and electrical components, such as capacitors, inductors, and resistors, is striking: they can be modeled using equations of the same form. However, the difference between these systems is what makes analog computing useful. Complex systems often are not amenable to pen-and-paper analysis, and require some form of testing or simulation. Complex mechanical systems, such as suspensions for racing cars, are expensive to fabricate and hard to modify. And taking precise mechanical measurements during high-speed tests adds further difficulty. By contrast, it is very inexpensive to build an electrical equivalent of a complex mechanical system, to simulate its behavior. Engineers arrange a few operational amplifiers (op amps) and some passive linear components to form a circuit that follows the same equations as the mechanical system being simulated. All measurements can be taken directly with an oscilloscope.
In the circuit, the (simulated) stiffness of the spring, for instance, can be changed by adjusting the parameters of an integrator. The electrical system is an analogy to the physical system, hence the name, but it is much less expensive than a mechanical prototype, much easier to modify, and generally safer. The electronic circuit can also be made to run faster or slower than the physical system being simulated. Experienced users of electronic analog computers said that they offered a comparatively intimate control and understanding of the problem, relative to digital simulations. Electronic analog computers are especially well-suited to representing situations described by differential equations. Historically, they were often used when a system of differential equations proved very difficult to solve by traditional means. As a simple example, the dynamics of a spring-mass system can be described by the equation mÿ + dẏ + cy = mg, with y as the vertical position of a mass m, d the damping coefficient, c the spring constant and g the gravity of Earth. For analog computing, the equation is programmed as ÿ = −(d/m)ẏ − (c/m)y − g. The equivalent analog circuit consists of two integrators for the state variables −ẏ (speed) and y (position), one inverter, and three potentiometers. Electronic analog computers have drawbacks: the value of the circuit's supply voltage limits the range over which the variables may vary (since the value of a variable is represented by a voltage on a particular wire). Therefore, each problem must be scaled so its parameters and dimensions can be represented using voltages that the circuit can supply, e.g., the expected magnitudes of the velocity and the position of a spring pendulum.
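The programmed form above maps directly onto a pair of integrators fed by a summing junction. As a minimal numerical sketch (not from the source; the parameter values are illustrative), the same wiring can be mimicked with Euler steps:

```python
# A minimal numerical sketch of the two-integrator analog setup for
#   y'' = -(d/m)*y' - (c/m)*y - g
# Parameter values are illustrative, not from the source.
def simulate(m=1.0, d=2.0, c=10.0, g=9.81, dt=1e-3, t_end=10.0):
    y, v = 0.0, 0.0                          # state variables: position and speed
    for _ in range(int(t_end / dt)):
        a = -(d / m) * v - (c / m) * y - g   # summing junction forms y''
        v += a * dt                          # first integrator: y'' -> y'
        y += v * dt                          # second integrator: y' -> y
    return y

# After the transient dies out, the mass settles at the static
# deflection y = -m*g/c (about -0.981 with these values).
```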
Improperly scaled variables can have their values "clamped" by the limits of the supply voltage. Or if scaled too small, they can suffer from higher noise levels. Either problem can cause the circuit to produce an incorrect simulation of the physical system. (Modern digital simulations are much more robust to widely varying values of their variables, but are still not entirely immune to these concerns: floating-point digital calculations support a huge dynamic range, but can suffer from imprecision if tiny differences of huge values lead to numerical instability.) The precision of the analog computer readout was limited chiefly by the precision of the readout equipment used, generally three or four significant figures. (Modern digital simulations are much better in this area. Digital arbitrary-precision arithmetic can provide any desired degree of precision.) However, in most cases the precision of an analog computer is absolutely sufficient given the uncertainty of the model characteristics and its technical parameters. Many small computers dedicated to specific computations are still part of industrial regulation equipment, but from the 1950s to the 1970s, general-purpose analog computers were the only systems fast enough for real-time simulation of dynamic systems, especially in the aircraft, military and aerospace fields. In the 1960s, the major manufacturer was Electronic Associates of Princeton, New Jersey, with its 231R Analog Computer (vacuum tubes, 20 integrators) and subsequently its EAI 8800 Analog Computer (solid-state operational amplifiers, 64 integrators).[26] Its challenger was Applied Dynamics of Ann Arbor, Michigan. Although the basic technology for analog computers is usually operational amplifiers (also called "continuous current amplifiers" because they have no low-frequency limitation), in the 1960s an attempt was made in the French ANALAC computer to use an alternative technology: medium-frequency carrier and non-dissipative reversible circuits.
In the 1970s, every large company and administration concerned with problems in dynamics had an analog computing center. An analog computing machine consists of several main components.[27][28][29][30] On the patch panel, various connections and routes can be set and switched to configure the machine and determine signal flows. This allows users to flexibly configure and reconfigure the analog computing system to perform specific tasks. Patch panels are used to control data flows, and to connect and disconnect connections between various blocks of the system, including signal sources, amplifiers, filters, and other components. They provide convenience and flexibility in configuring and experimenting with analog computations. Patch panels can be presented as a physical panel with connectors or, in more modern systems, as a software interface that allows virtual management of signal connections and routes. Output devices in analog machines can vary depending on the specific goals of the system. For example, they could be graphical indicators, oscilloscopes, graphic recording devices, a TV connection module, a voltmeter, etc. These devices allow for the visualization of analog signals and the representation of the results of measurements or mathematical operations. These are just general blocks that can be found in a typical analog computing machine. The actual configuration and components may vary depending on the specific implementation and the intended use of the machine. Analog computing devices are fast; digital computing devices are more versatile and accurate. The idea behind an analog-digital hybrid is to combine the two processes for the best efficiency. An example of such a hybrid elementary device is the hybrid multiplier, where one input is an analog signal, the other input is a digital signal, and the output is analog. It acts as an analog potentiometer that can be adjusted digitally.
This kind of hybrid technique is mainly used for fast dedicated real-time computation when computing time is very critical, such as signal processing for radars and generally for controllers in embedded systems. In the early 1970s, analog computer manufacturers tried to tie together their analog computers with digital computers to get the advantages of the two techniques. In such systems, the digital computer controlled the analog computer, providing initial set-up, initiating multiple analog runs, and automatically feeding and collecting data. The digital computer may also participate in the calculation itself using analog-to-digital and digital-to-analog converters. The largest manufacturer of hybrid computers was Electronic Associates. Their hybrid computer model 8900 was made of a digital computer and one or more analog consoles. These systems were mainly dedicated to large projects such as the Apollo program and Space Shuttle at NASA, or Ariane in Europe, especially during the integration step, where at the beginning everything is simulated and real components progressively replace their simulated parts.[31] Only one company was known to offer general commercial computing services on its hybrid computers: CISI of France, in the 1970s. The best reference in this field is the 100,000 simulation runs for each certification of the automatic landing systems of Airbus and Concorde aircraft.[32] After 1980, purely digital computers progressed more and more rapidly and were fast enough to compete with analog computers. One key to the speed of analog computers was their fully parallel computation, but this was also a limitation. The more equations required for a problem, the more analog components were needed, even when the problem wasn't time-critical. "Programming" a problem meant interconnecting the analog operators; even with a removable wiring panel this was not very versatile. Throughout history, many types of mechanical analog computers have been invented.
These ranged from simple devices (like planimeters) to complex fire-control systems that guided WWII naval guns. Practical mechanical analog computers of any significant complexity used rotating shafts to carry variables from one mechanism to another. Cables and pulleys were used in a Fourier synthesizer, a tide-predicting machine, which summed the individual harmonic components. Another category, not nearly as well known, used rotating shafts only for input and output, with precision racks and pinions. The racks were connected to linkages that performed the computation. At least one U.S. Naval sonar fire control computer of the later 1950s, made by Librascope, was of this type, as was the principal computer in the Mk. 56 Gun Fire Control System.[33] These computers often employed precision miter-gear differentials (pairs of bevel gears arranged to produce the sum or difference of two shaft rotations) to transmit variables between computing elements. The Ford Instrument Mark I Fire Control Computer, for example, contained approximately 160 miter-gear differentials.[34][35] Integration with respect to another variable was done by a rotating disc driven by one variable. Output came from a pick-off device (such as a wheel) positioned at a radius on the disc proportional to the second variable. (A carrier with a pair of steel balls supported by small rollers worked especially well. A roller, its axis parallel to the disc's surface, provided the output. It was held against the pair of balls by a spring.) Arbitrary functions of one variable were provided by cams, with gearing to convert follower movement to shaft rotation. Functions of two variables were provided by three-dimensional cams. In one good design, one of the variables rotated the cam. A hemispherical follower moved its carrier on a pivot axis parallel to that of the cam's rotating axis. Pivoting motion was the output. The second variable moved the follower along the axis of the cam.
One practical application was ballistics in gunnery. Coordinate conversion from polar to rectangular was done by a mechanical resolver (called a "component solver" in US Navy fire control computers). Two discs on a common axis positioned a sliding block with a pin (stubby shaft) on it. One disc was a face cam, and a follower on the block in the face cam's groove set the radius. The other disc, closer to the pin, contained a straight slot in which the block moved. The input angle rotated the latter disc (the face cam disc, for an unchanging radius, rotated with the other (angle) disc; a differential and a few gears did this correction). Referring to the mechanism's frame, the location of the pin corresponded to the tip of the vector represented by the angle and magnitude inputs. Mounted on that pin was a square block. Rectilinear-coordinate outputs (both sine and cosine, typically) came from two slotted plates, each slot fitting on the block just mentioned. The plates moved in straight lines, the movement of one plate at right angles to that of the other. The slots were at right angles to the direction of movement. Each plate, by itself, was like a Scotch yoke, known to steam engine enthusiasts. During World War II, a similar mechanism converted rectilinear to polar coordinates, but it was not particularly successful and was eliminated in a significant redesign (USN, Mk. 1 to Mk. 1A). Multiplication was done by mechanisms based on the geometry of similar right triangles. Using the trigonometric terms for a right triangle, specifically opposite, adjacent, and hypotenuse, the adjacent side was fixed by construction. One variable changed the magnitude of the opposite side. In many cases, this variable changed sign; the hypotenuse could coincide with the adjacent side (a zero input), or move beyond the adjacent side, representing a sign change.
Typically, a pinion-operated rack moving parallel to the (trig.-defined) opposite side would position a slide with a slot coincident with the hypotenuse. A pivot on the rack let the slide's angle change freely. At the other end of the slide (the angle, in trig. terms), a block on a pin fixed to the frame defined the vertex between the hypotenuse and the adjacent side. At any distance along the adjacent side, a line perpendicular to it intersects the hypotenuse at a particular point. The distance between that point and the adjacent side is proportional to the product of (1) the distance from the vertex and (2) the magnitude of the opposite side. The second input variable in this type of multiplier positions a slotted plate perpendicular to the adjacent side. That slot contains a block, and that block's position in its slot is determined by another block right next to it. The latter slides along the hypotenuse, so the two blocks are positioned at a distance from the (trig.) adjacent side by an amount proportional to the product. To provide the product as an output, a third element, another slotted plate, also moves parallel to the (trig.) opposite side of the theoretical triangle. As usual, the slot is perpendicular to the direction of movement. A block in its slot, pivoted to the hypotenuse block, positions it. A special type of integrator, used at a point where only moderate accuracy was needed, was based on a steel ball, instead of a disc. It had two inputs, one to rotate the ball, and the other to define the angle of the ball's rotating axis. That axis was always in a plane that contained the axes of two movement pick-off rollers, quite similar to the mechanism of a rolling-ball computer mouse (in that mechanism, the pick-off rollers were roughly the same diameter as the ball). The pick-off roller axes were at right angles. A pair of rollers "above" and "below" the pick-off plane were mounted in rotating holders that were geared together.
That gearing was driven by the angle input, and established the rotating axis of the ball. The other input rotated the "bottom" roller to make the ball rotate. Essentially, the whole mechanism, called a component integrator, was a variable-speed drive with one motion input and two outputs, as well as an angle input. The angle input varied the ratio (and direction) of coupling between the "motion" input and the outputs according to the sine and cosine of the input angle. Although they did not accomplish any computation, electromechanical position servos (also known as torque amplifiers) were essential in mechanical analog computers of the "rotating-shaft" type for providing operating torque to the inputs of subsequent computing mechanisms, as well as driving output data-transmission devices such as large torque-transmitter synchros in naval computers. Other readout mechanisms, not directly part of the computation, included internal odometer-like counters with interpolating drum dials for indicating internal variables, and mechanical multi-turn limit stops. Because accurately controlled rotational speed was a basic element of accuracy in analog fire-control computers, they included a motor whose average speed was controlled by a balance wheel, hairspring, jeweled-bearing differential, twin-lobe cam, and spring-loaded contacts (ship's AC power frequency was not necessarily accurate, nor dependable enough, when these computers were designed). Electronic analog computers typically have front panels with numerous jacks (single-contact sockets) that permit patch cords (flexible wires with plugs at both ends) to create the interconnections that define the problem setup. In addition, there are precision high-resolution potentiometers (variable resistors) for setting up (and, when needed, varying) scale factors. In addition, there is usually a zero-center analog pointer-type meter for modest-accuracy voltage measurement.
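Both the mechanical resolver ("component solver") and the ball-type component integrator decompose a magnitude and an angle into sine and cosine parts. A hypothetical numerical stand-in (the function name is illustrative, not from the source):

```python
import math

def component_solver(radius, angle):
    """Numerical stand-in for the mechanical resolver ("component solver"):
    the pin sits at the tip of the input vector, and the two slotted plates
    pick off its rectangular (cosine and sine) components."""
    return radius * math.cos(angle), radius * math.sin(angle)

# A range of 2.0 at a bearing of 90 degrees resolves to x near 0, y near 2.
```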
Stable, accurate voltage sources provide known magnitudes. Typical electronic analog computers contain anywhere from a few to a hundred or more operational amplifiers ("op amps"), named because they perform mathematical operations. Op amps are a particular type of feedback amplifier with very high gain and stable input (low and stable offset). They are always used with precision feedback components that, in operation, all but cancel out the currents arriving from input components. The majority of op amps in a representative setup are summing amplifiers, which add and subtract analog voltages, providing the result at their output jacks. In addition, op amps with capacitor feedback are usually included in a setup; they integrate the sum of their inputs with respect to time. Integrating with respect to another variable is the nearly exclusive province of mechanical analog integrators; it is almost never done in electronic analog computers. However, given that a problem solution does not change with time, time can serve as one of the variables. Other computing elements include analog multipliers, nonlinear function generators, and analog comparators. Electrical elements such as inductors and capacitors used in electrical analog computers had to be carefully manufactured to reduce non-ideal effects. For example, in the construction of AC power network analyzers, one motive for using higher frequencies for the calculator (instead of the actual power frequency) was that higher-quality inductors could be more easily made. Many general-purpose analog computers avoided the use of inductors entirely, re-casting the problem in a form that could be solved using only resistive and capacitive elements, since high-quality capacitors are relatively easy to make.
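The behavior of a summing integrator (its output is minus the time integral of the scaled sum of its inputs) can be sketched numerically. This models only the ideal element, with illustrative RC values:

```python
def summing_integrator(inputs, dt, rc=1.0, v0=0.0):
    """Ideal inverting summing integrator:
    v_out(t) = v0 - (1/RC) * integral of (sum of input channels) dt.
    `inputs` is a list of equal-length sample lists, one per input channel."""
    v = v0
    out = []
    for samples in zip(*inputs):
        v -= sum(samples) * dt / rc   # capacitor feedback accumulates charge
        out.append(v)
    return out

# Two constant inputs of 1.0 V and 0.5 V, integrated for 1 s with RC = 1 s,
# drive the output to -1.5 V.
```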
The use of electrical properties in analog computers means that calculations are normally performed in real time (or faster), at a speed determined mostly by the frequency response of the operational amplifiers and other computing elements. In the history of electronic analog computers, there were some special high-speed types. Nonlinear functions and calculations can be constructed to a limited precision (three or four digits) by designing function generators: special circuits of various combinations of resistors and diodes that provide the nonlinearity. Typically, as the input voltage increases, progressively more diodes conduct. When compensated for temperature, the forward voltage drop of a transistor's base-emitter junction can provide a usably accurate logarithmic or exponential function. Op amps scale the output voltage so that it is usable with the rest of the computer. Any physical process that models some computation can be interpreted as an analog computer. Some examples, invented for the purpose of illustrating the concept of analog computation, include using a bundle of spaghetti as a model of sorting numbers; a board, a set of nails, and a rubber band as a model of finding the convex hull of a set of points; and strings tied together as a model of finding the shortest path in a network. These are all described in Dewdney (1984). Analog computers often have a complicated framework, but they have, at their core, a set of key components that perform the calculations. The operator manipulates these through the computer's framework. Key hydraulic components might include pipes, valves and containers. Key mechanical components might include rotating shafts for carrying data within the computer, miter-gear differentials, disc/ball/roller integrators, cams (2-D and 3-D), mechanical resolvers and multipliers, and torque servos.
Key electrical/electronic components might include precision resistors and capacitors, operational amplifiers, multipliers, potentiometers, and function generators. The core mathematical operations used in an electric analog computer are summation, inversion, multiplication, division, integration with respect to time, and nonlinear function generation. In some analog computer designs, multiplication is much preferred to division; division is then carried out with a multiplier in the feedback path of an operational amplifier. Differentiation with respect to time is not frequently used, and in practice is avoided by redefining the problem when possible. It corresponds in the frequency domain to a high-pass filter, which means that high-frequency noise is amplified; differentiation also risks instability. In general, analog computers are limited by non-ideal effects. An analog signal is composed of four basic components: DC and AC magnitudes, frequency, and phase. The real limits of range on these characteristics limit analog computers. Some of these limits include the operational amplifier offset, finite gain, and frequency response, noise floor, non-linearities, temperature coefficient, and parasitic effects within semiconductor devices. For commercially available electronic components, the ranges of these aspects of input and output signals are always figures of merit. In the 1950s to 1970s, digital computers based first on vacuum tubes, then transistors, integrated circuits and micro-processors became more economical and precise. This led digital computers to largely replace analog computers. Even so, some research in analog computation is still being done. A few universities still use analog computers to teach control system theory. The American company Comdyna manufactured small analog computers.[36] At Indiana University Bloomington, Jonathan Mills has developed the Extended Analog Computer based on sampling voltages in a foam sheet.[37] At the Harvard Robotics Laboratory,[38] analog computation is a research topic. Lyric Semiconductor's error correction circuits use analog probabilistic signals. Slide rules are still used as flight computers in flight training.
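The preference for integration over differentiation noted above can be illustrated numerically: finite-difference differentiation of a slightly noisy signal multiplies the noise by roughly 1/Δt, while integration averages it out. A sketch with illustrative values (not from the source):

```python
import math
import random

# Differentiating a slightly noisy sine amplifies the noise by ~1/dt,
# while integrating the same signal averages the noise out.
random.seed(0)
dt = 1e-3
t = [i * dt for i in range(1000)]
noise = 1e-3
signal = [math.sin(x) + noise * random.uniform(-1, 1) for x in t]

# Differentiation: error vs. the true derivative cos(t) is dominated by noise.
deriv = [(signal[i + 1] - signal[i]) / dt for i in range(len(signal) - 1)]
deriv_err = max(abs(deriv[i] - math.cos(t[i])) for i in range(len(deriv)))

# Integration: error vs. the true integral 1 - cos(t) stays small.
acc, integ = 0.0, []
for s in signal:
    acc += s * dt
    integ.append(acc)
integ_err = max(abs(integ[i] - (1 - math.cos(t[i]))) for i in range(len(integ)))
```

With these values the differentiation error is orders of magnitude larger than the integration error, which is exactly the high-pass behavior described above.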
With the development of very-large-scale integration (VLSI) technology, Yannis Tsividis' group at Columbia University has been revisiting analog/hybrid computer design in a standard CMOS process. Two VLSI chips have been developed: an 80th-order analog computer (250 nm) by Glenn Cowan[39] in 2005[40] and a 4th-order hybrid computer (65 nm) developed by Ning Guo in 2015,[41] both targeting energy-efficient ODE/PDE applications. Glenn's chip contains 16 macros, in which there are 25 analog computing blocks, namely integrators, multipliers, fanouts, and a few nonlinear blocks. Ning's chip contains one macro block, in which there are 26 computing blocks including integrators, multipliers, fanouts, ADCs, SRAMs and DACs. Arbitrary nonlinear function generation is made possible by the ADC+SRAM+DAC chain, where the SRAM block stores the nonlinear function data. The experiments from the related publications revealed that VLSI analog/hybrid computers demonstrated about 1–2 orders of magnitude of advantage in both solution time and energy while achieving accuracy within 5%, which points to the promise of using analog/hybrid computing techniques in the area of energy-efficient approximate computing.[citation needed] In 2016, a team of researchers developed a compiler to solve differential equations using analog circuits.[42] Analog computers are also used in neuromorphic computing, and in 2021 a group of researchers showed that a specific type of artificial neural network called a spiking neural network was able to work with analog neuromorphic computers.[43] In 2021, the German company anabrid GmbH began to produce THE ANALOG THING (abbreviated THAT), a small low-cost analog computer mainly for educational and scientific use.[44] The company is also constructing analog mainframes and hybrid computers.
Numerous analog computers have been constructed and practically used. Analog (audio) synthesizers can also be viewed as a form of analog computer, and their technology was originally based in part on electronic analog computer technology. The ARP 2600's Ring Modulator was actually a moderate-accuracy analog multiplier. The Simulation Council (or Simulations Council) was an association of analog computer users in the US. It is now known as The Society for Modeling and Simulation International. The Simulation Council newsletters from 1952 to 1963 are available online and show the concerns and technologies at the time, and the common use of analog computers for missilery.[45]
https://en.wikipedia.org/wiki/Analog_computer
AQUA@home was a volunteer computing project operated by D-Wave Systems that ran on the Berkeley Open Infrastructure for Network Computing (BOINC) software platform. It ceased functioning in August 2011. Its goal was to predict the performance of superconducting adiabatic quantum computers on a variety of problems arising in fields ranging from materials science to machine learning. It designed and analyzed quantum computing algorithms, using Quantum Monte Carlo techniques. AQUA@home was the first BOINC project to provide multi-threaded applications.[1] It was also the first project to deploy an OpenCL test application under BOINC.[2]
https://en.wikipedia.org/wiki/AQUA@home
In quantum computing, more specifically in superconducting quantum computing, flux qubits (also known as persistent current qubits) are micrometer-sized loops of superconducting metal that are interrupted by a number of Josephson junctions. These devices function as quantum bits. The flux qubit was first proposed by Terry P. Orlando et al. at MIT in 1999 and fabricated shortly thereafter.[1] During fabrication, the Josephson junction parameters are engineered so that a persistent current will flow continuously when an external magnetic flux is applied. Only an integer number of flux quanta are allowed to penetrate the superconducting ring, resulting in clockwise or counter-clockwise mesoscopic supercurrents (typically 300 nA[2]) in the loop to compensate (screen or enhance) a non-integer external flux bias. When the applied flux through the loop area is close to a half-integer number of flux quanta, the two lowest-energy eigenstates of the loop will be a quantum superposition of the clockwise and counter-clockwise currents. The two lowest-energy eigenstates differ only by the relative quantum phase between the composing current-direction states. Higher-energy eigenstates correspond to much larger (macroscopic) persistent currents, which induce an additional flux quantum to the qubit loop and are thus well separated energetically from the lowest two eigenstates. This separation, known as the "qubit non-linearity" criterion, allows operations with the two lowest eigenstates only, effectively creating a two-level system. Usually, the two lowest eigenstates serve as the computational basis for the logical qubit. Computational operations are performed by pulsing the qubit with microwave-frequency radiation whose energy is comparable to the gap between the two basis states, similar to RF-SQUID.
Properly selected pulse duration and strength can put the qubit into a quantum superposition of the two basis states, while subsequent pulses can manipulate the probability weighting that the qubit will be measured in either of the two basis states, thus performing a computational operation. Flux qubits are fabricated using techniques similar to those used for microelectronics. The devices are usually made on silicon or sapphire wafers using electron-beam lithography and metallic thin-film evaporation processes. To create Josephson junctions, a technique known as shadow evaporation is normally used; this involves evaporating the source metal alternately at two angles through the lithography-defined mask in the electron-beam resist. This results in two overlapping layers of the superconducting metal, in between which a thin layer of insulator (normally aluminum oxide) is deposited.[3] Dr. Shcherbakova's group reported using niobium as the contacts for their flux qubits. Niobium is often used as the contact and is deposited by employing a sputtering technique and using optical lithography to pattern the contacts. An argon beam can then be used to reduce the oxide layer that forms on top of the contacts. The sample must be cooled during the etching process in order to keep the niobium contacts from melting. At this point, the aluminum layers can be deposited on top of the clean niobium surfaces. The aluminum is then deposited in two steps from alternating angles on the niobium contacts. An oxide layer forms between the two aluminum layers in order to create the Al/AlOx/Al Josephson junction.[3] In standard flux qubits, 3 or 4 Josephson junctions will be patterned around the loop. Resonators can be fabricated to measure the readout of the flux qubit through similar techniques. The resonator can be fabricated by e-beam lithography and CF4 reactive-ion etching of thin films of niobium or a similar metal.
The resonator can then be coupled to the flux qubit by fabricating the flux qubit at the end of the resonator.[4] The flux qubit is distinguished from other known types of superconducting qubit, such as the charge qubit or phase qubit, by the coupling energy and charging energy of its junctions. In the charge qubit regime, the charging energy of the junctions dominates the coupling energy; in a flux qubit the situation is reversed, and the coupling energy dominates. Typically, for a flux qubit the coupling energy is 10-100 times greater than the charging energy, which allows the Cooper pairs to flow continuously around the loop rather than tunnel discretely across the junctions as in a charge qubit. In order for a superconducting circuit to function as a qubit, it must contain a non-linear element. If the circuit is a harmonic oscillator, such as an LC circuit, the spacings between its energy levels are all equal. This prevents the formation of a two-level computational space, because any microwave radiation applied to manipulate the ground state and the first excited state would also excite the higher-energy states. Josephson junctions are the only electronic element that is non-linear as well as non-dissipative at low temperatures[citation needed]. These are requirements for quantum integrated circuits, making the Josephson junction essential in the construction of flux qubits.[5] Understanding the physics of the Josephson junction improves comprehension of how flux qubits operate. Essentially, Josephson junctions consist of two pieces of superconducting thin film separated by a layer of insulator. In the case of flux qubits, Josephson junctions are fabricated by the process described above.
The wave functions of the superconducting components overlap, and this construction allows for the tunneling of electrons, which creates a phase difference between the wave functions on either side of the insulating barrier.[5] This phase difference is ϕ = ϕ_2 − ϕ_1, where ϕ_1, ϕ_2 correspond to the wave functions on either side of the tunneling barrier. For this phase difference, the following Josephson relations have been established:

I_J = I_0 sin(ϕ) [6]

V = (Φ_0/2π) (dϕ/dt) [6]

Here, I_J is the Josephson current and Φ_0 is the flux quantum. By differentiating the current equation and substituting, one obtains the Josephson inductance term L_J:

L_J = Φ_0 / (2π I_0 cos(ϕ)) [6]

From these equations it can be seen that the Josephson inductance is non-linear, through the cosine term in the denominator; because of this, the energy level spacings are no longer uniform, restricting the dynamics of the system to the two qubit states. Because of the non-linearity of the Josephson junction, operations using microwaves can be performed on the two lowest-energy eigenstates (the two qubit states) without exciting the higher-energy states. This was previously referred to as the "qubit nonlinearity" criterion. Thus, Josephson junctions are an integral element of flux qubits and of superconducting circuits in general. Coupling between two or more qubits is essential to implement many-qubit gates. The two basic coupling mechanisms are direct inductive coupling and coupling via a microwave resonator. In direct coupling, the circulating currents of the qubits inductively affect one another: clockwise current in one qubit induces counter-clockwise current in the other.
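The phase dependence of L_J is what makes the junction a non-linear inductor. A short sketch evaluating L_J = Φ_0/(2π I_0 cos ϕ), using the ~300 nA critical current mentioned earlier as an illustrative value (the specific phases chosen are assumptions for demonstration, not measured operating points):

```python
import math

Phi_0 = 2.067833848e-15  # Wb, magnetic flux quantum h/2e

def josephson_inductance(I0, phi):
    """L_J = Phi_0 / (2*pi*I0*cos(phi)): non-linear in the phase phi."""
    return Phi_0 / (2 * math.pi * I0 * math.cos(phi))

I0 = 300e-9  # A, illustrative critical current (~300 nA loop currents above)
L_at_0 = josephson_inductance(I0, 0.0)  # ~1 nH at zero phase
L_at_1 = josephson_inductance(I0, 1.0)  # larger: the inductance grows with phi
```

Because L_J depends on ϕ, the effective oscillator is anharmonic, which is exactly why the level spacings become non-uniform and the two lowest states can be addressed in isolation.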
In the Pauli matrices formalism, a σzσz term appears in the Hamiltonian, essential for the controlled-NOT gate implementation.[7] The direct coupling can be further enhanced by kinetic inductance if the qubit loops are made to share an edge, so that the currents flow through the same superconducting line. Inserting a Josephson junction on that shared line adds a Josephson inductance term and increases the coupling even more. To implement a switchable coupling in the direct coupling mechanism, as required to implement a gate of finite duration, an intermediate coupling loop may be used. The control magnetic flux applied to the coupler loop switches the coupling on and off, as implemented, for example, in the D-Wave Systems machines. The second method of coupling uses an intermediate microwave cavity resonator, commonly implemented in a coplanar waveguide geometry. By tuning the energy separation of the qubits to match that of the resonator, the phases of the loop currents are synchronized and a σxσx coupling is implemented. Tuning the qubits in and out of resonance (for example, by modifying their bias magnetic flux) controls the duration of the gate operation. Like all quantum bits, flux qubits require a suitably sensitive probe coupled to them in order to measure their state after a computation has been carried out. Such quantum probes should introduce as little back-action as possible onto the qubit during measurement. Ideally they should be decoupled during computation and then turned "on" for a short time during read-out. Read-out probes for flux qubits work by interacting with one of the qubit's macroscopic variables, such as the circulating current, the flux within the loop, or the macroscopic phase of the superconductor. This interaction changes some variable of the read-out probe that can be measured using conventional low-noise electronics.
The read-out probe is typically the technology aspect that separates the research of different university groups working on flux qubits. Prof. Mooij's group at Delft in the Netherlands,[2] along with collaborators, has pioneered flux qubit technology and was the first to conceive, propose, and implement flux qubits as they are known today. The Delft read-out scheme is based on a SQUID loop inductively coupled to the qubit; the qubit's state influences the critical current of the SQUID. The critical current can then be read out using ramped measurement currents through the SQUID. Recently the group has used the plasma frequency of the SQUID as the read-out variable. Dr. Il'ichev's group at IPHT Jena in Germany[8] uses impedance measurement techniques based on the flux qubit influencing the resonant properties of a high-quality tank circuit which, as in the Delft scheme, is inductively coupled to the qubit. In this scheme the qubit's magnetic susceptibility, which is defined by its state, changes the phase angle between the current and voltage when a small A.C. signal is passed into the tank circuit. Prof. Petrashov's group at Royal Holloway[9] uses an Andreev interferometer probe to read out flux qubits.[10][11] This read-out uses the phase influence of a superconductor on the conductance properties of a normal metal. A length of normal metal is connected at either end to either side of the qubit using superconducting leads; the phase across the qubit, which is defined by its state, is translated into the normal metal, whose resistance is then read out using low-noise resistance measurements. Dr. Jerger's group uses resonators coupled to the flux qubit. Each resonator is dedicated to a single qubit, and all resonators can be measured with a single transmission line. The state of the flux qubit alters the resonant frequency of the resonator due to a dispersive shift that the resonator picks up from its coupling with the flux qubit.
The resonant frequency is then measured by the transmission line for each resonator in the circuit. The state of the flux qubit is then determined by the measured shift in the resonant frequency.[4]
https://en.wikipedia.org/wiki/Flux_qubit
Superconducting quantum computing is a branch of solid-state physics and quantum computing that implements superconducting electronic circuits using superconducting qubits as artificial atoms, or quantum dots. For superconducting qubits, the two logic states are the ground state and the excited state, denoted |g⟩ and |e⟩ respectively.[1] Research in superconducting quantum computing is conducted by companies such as Google,[2] IBM,[3] IMEC,[4] BBN Technologies,[5] Rigetti,[6] and Intel.[7] Many recently developed QPUs (quantum processing units, or quantum chips) use superconducting architecture. As of May 2016[update], up to 9 fully controllable qubits had been demonstrated in a 1D array,[8] and up to 16 in a 2D architecture.[3] In October 2019, the Martinis group, partnered with Google, published an article demonstrating quantum supremacy, using a chip composed of 53 superconducting qubits.[9] Classical computation models rely on physical implementations consistent with the laws of classical mechanics.[10] Classical descriptions are accurate only for specific systems consisting of a relatively large number of atoms. A more general description of nature is given by quantum mechanics. Quantum computation studies quantum phenomena beyond the scope of the classical approximation, with the purpose of performing quantum information processing and communication. Various models of quantum computation exist, but the most popular models incorporate the concepts of qubits and quantum gates (gate-based superconducting quantum computing). Superconductors are used because at low temperatures they have effectively infinite conductivity and zero resistance. Each qubit is built from superconducting circuits containing an LC circuit: a capacitor and an inductor.[citation needed] Superconducting capacitors and inductors are used to produce a resonant circuit that dissipates almost no energy, as heat can disrupt quantum information.
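The resonant frequency of such an LC circuit is f = 1/(2π√(LC)). A minimal sketch, with illustrative component values chosen here as assumptions (not taken from any specific published device) to land in the few-GHz band typical of superconducting qubits:

```python
import math

def lc_resonance_hz(L, C):
    """Resonance frequency f = 1 / (2*pi*sqrt(L*C)) of a lossless LC circuit."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

# Illustrative (assumed) component values:
L = 10e-9    # 10 nH inductor
C = 0.3e-12  # 0.3 pF capacitor
f = lc_resonance_hz(L, C)  # a few GHz, the usual operating band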
The superconducting resonant circuits are a class of artificial atoms that can be used as qubits. Theoretical and physical implementations of quantum circuits differ widely. Implementing a quantum circuit has its own set of challenges and must abide by DiVincenzo's criteria, conditions proposed by the theoretical physicist David P. DiVincenzo,[11] a set of criteria for the physical implementation of quantum computing: the initial five criteria ensure that the quantum computer is in line with the postulates of quantum mechanics, and the remaining two pertain to relaying this information over a network.[citation needed] The ground and excited states of these artificial atoms are mapped to the 0 and 1 states; these are discrete and distinct energy values, in line with the postulates of quantum mechanics. In such a construction, however, an electron can jump to multiple other energy states and is not confined to the excited state; therefore, it is imperative that the system be limited so that it is affected only by photons with the energy difference required to jump from the ground state to the excited state.[12] This leaves one major issue: uneven spacing between the energy levels is required to prevent photons of the same energy from causing transitions between neighboring pairs of states. Josephson junctions are superconducting elements with a nonlinear inductance, which is critically important for qubit implementation.[12] The use of this nonlinear element in the resonant superconducting circuit produces uneven spacings between the energy levels.[citation needed] A qubit is a generalization of a bit (a system with two possible states) capable of occupying a quantum superposition of both states. A quantum gate, on the other hand, is a generalization of a logic gate, describing the transformation of one or more qubits once a gate is applied, given their initial state.
Physical implementation of qubits and gates is challenging for the same reason that quantum phenomena are difficult to observe in everyday life, given the minute scale on which they occur. One approach to achieving quantum computers is by implementing superconductors, whereby quantum effects are macroscopically observable, though at the price of extremely low operation temperatures. Unlike typical conductors, superconductors possess a critical temperature below which resistivity plummets to zero and conductivity is drastically increased. In superconductors, the basic charge carriers are pairs of electrons (known as Cooper pairs), rather than the single fermions found in typical conductors.[13] Cooper pairs are loosely bound and have an energy state lower than the Fermi energy. The electrons forming a Cooper pair possess equal and opposite momentum and spin, so that the total spin of the Cooper pair is an integer spin. Hence, Cooper pairs are bosons. Two superconductors which have been used in superconducting qubit models are niobium and tantalum, both d-band superconductors.[14] Once cooled to nearly absolute zero, a collection of bosons collapses into its lowest-energy quantum state (the ground state) to form a state of matter known as a Bose-Einstein condensate. Unlike fermions, bosons may occupy the same quantum energy level (or quantum state) and do not obey the Pauli exclusion principle. Classically, a Bose-Einstein condensate can be conceptualized as multiple particles occupying the same position in space and having equal momentum. Because interactive forces between bosons are minimized, Bose-Einstein condensates effectively act as a superconductor. Thus, superconductors are employed in quantum computing because they possess both near-infinite conductivity and near-zero resistance. The advantages of a superconductor over a typical conductor are twofold: superconductors can, in theory, transmit signals nearly instantaneously and run indefinitely with no energy loss.
The prospect of actualizing superconducting quantum computers becomes all the more promising considering NASA's development of the Cold Atom Lab in outer space, where Bose-Einstein condensates are more readily achieved and sustained (without rapid dissipation) for longer periods of time without the constraints of gravity.[15] At each point of a superconducting electronic circuit (a network of electrical elements), the condensate wave function describing charge flow is well-defined by some complex probability amplitude. In typical conductor electrical circuits, this same description holds for individual charge carriers, except that the various wave functions are averaged in macroscopic analysis, making it impossible to observe quantum effects. The condensate wave function becomes useful in allowing design and measurement of macroscopic quantum effects. Similar to the discrete atomic energy levels in the Bohr model, only a discrete number of magnetic flux quanta can penetrate a superconducting loop. In both cases, quantization results from the continuity of the complex amplitude. Differing from microscopic implementations of quantum computers (such as atoms or photons), the parameters of superconducting circuits are designed by setting (classical) values of the electrical elements composing them, such as by adjusting capacitance or inductance. To obtain a quantum mechanical description of an electrical circuit, a few steps are required. First, all electrical elements must be described by the condensate wave function amplitude and phase, rather than by the closely related macroscopic current and voltage descriptions used for classical circuits. For instance, the square of the wave function amplitude at an arbitrary point in space corresponds to the probability of finding a charge carrier there; therefore, the squared amplitude corresponds to a classical charge distribution.
The second requirement for a quantum mechanical description of an electrical circuit is that generalized Kirchhoff's circuit laws are applied at every node of the circuit network to obtain the system's equations of motion. Finally, these equations of motion must be reformulated in Lagrangian mechanics, from which a quantum Hamiltonian describing the total energy of the system is derived. Superconducting quantum computing devices are typically designed in the radio-frequency spectrum, cooled in dilution refrigerators below 15 mK, and addressed with conventional electronic instruments, e.g. frequency synthesizers and spectrum analyzers. Typical dimensions are on the scale of micrometers, with sub-micrometer resolution, allowing the convenient design of a Hamiltonian system with well-established integrated circuit technology. Manufacturing superconducting qubits follows a process involving lithography, metal deposition, etching, and controlled oxidation, as described in [16]. Manufacturers continue to improve the lifetime of superconducting qubits and have made significant improvements since the early 2000s.[16]: 4 One distinguishing attribute of superconducting quantum circuits is the use of Josephson junctions, an electrical element which does not exist in normal conductors. Recall that a junction is a weak connection between two leads of wire (in this case, superconducting wire) on either side of a thin layer of insulator material only a few atoms thick, usually implemented using the shadow evaporation technique. The resulting Josephson junction device exhibits the Josephson effect, whereby the junction produces a supercurrent. An image of a single Josephson junction is shown to the right. The condensate wave functions on the two sides of the junction are weakly correlated, meaning that they are allowed to have different superconducting phases.
This nonlinearity contrasts with a continuous superconducting wire, along which the wave function must be continuous. Current flow through the junction occurs by quantum tunneling, as though charge carriers instantaneously "tunnel" from one side of the junction to the other. This tunneling phenomenon is unique to quantum systems. Thus, quantum tunneling is used to create nonlinear inductance, essential for qubit design as it allows the design of anharmonic oscillators, for which the energy levels are discretized (or quantized) with nonuniform spacing between energy levels, denoted ΔE.[1] In contrast, the quantum harmonic oscillator cannot be used as a qubit, as there is no way to address only two of its states, given that the spacing between every energy level and the next is exactly the same. The three primary superconducting qubit archetypes are the phase, charge, and flux qubit. Many hybridizations of these archetypes exist, including the fluxonium,[17] transmon,[18] Xmon,[19] and quantronium.[20] For any qubit implementation, the logical quantum states {|0⟩, |1⟩} are mapped to different states of the physical system (typically to discrete energy levels or their quantum superpositions). Each of the three archetypes possesses a distinct range of Josephson energy to charging energy ratio.
Josephson energy refers to the energy stored in a Josephson junction when current passes through it, and charging energy is the energy required for one Cooper pair to charge the junction's total capacitance.[21] The Josephson energy can be written as

E_J = (Φ_0 I_0 / 2π) cos(δ),

where I_0 is the critical current parameter of the Josephson junction, Φ_0 = h/2e is the (superconducting) flux quantum, and δ is the phase difference across the junction.[21] Notice that the term cos(δ) indicates the nonlinearity of the Josephson junction.[21] The charging energy is written as

E_C = (2e)² / 2C,

where C is the junction's capacitance and e is the electron charge.[21] Of the three archetypes, phase qubits allow the most Cooper pairs to tunnel through the junction, followed by flux qubits, with charge qubits allowing the fewest. The phase qubit possesses a Josephson to charging energy ratio on the order of magnitude 10⁶. For phase qubits, energy levels correspond to different quantum charge oscillation amplitudes across a Josephson junction, where charge and phase are analogous to momentum and position respectively, as in a quantum harmonic oscillator. Note that in this context the phase is the complex argument of the superconducting wave function (also known as the superconducting order parameter), not the phase between the different states of the qubit. The flux qubit (also known as a persistent-current qubit) possesses a Josephson to charging energy ratio on the order of magnitude 10. For flux qubits, the energy levels correspond to different integer numbers of magnetic flux quanta trapped in a superconducting ring.
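The E_J/E_C ratio is what separates the archetypes. A sketch computing the ratio from E_J = Φ_0 I_0/2π and E_C = (2e)²/2C; the critical currents and capacitances below are illustrative assumptions chosen to land in each archetype's stated regime, not parameters of any particular published device:

```python
import math

h = 6.62607015e-34   # J*s
e = 1.602176634e-19  # C
Phi_0 = h / (2 * e)  # magnetic flux quantum

def EJ(I0):
    """Josephson energy scale E_J = I0 * Phi_0 / (2*pi), in joules."""
    return I0 * Phi_0 / (2 * math.pi)

def EC(C):
    """Cooper-pair charging energy E_C = (2e)^2 / (2C), in joules."""
    return (2 * e) ** 2 / (2 * C)

# Illustrative (assumed) junction parameters for the three regimes:
ratio_phase  = EJ(150e-6) / EC(1e-12)    # large junction: E_J/E_C ~ 10^6
ratio_flux   = EJ(0.5e-6) / EC(10e-15)   # intermediate:   E_J/E_C ~ 10-100
ratio_charge = EJ(10e-9)  / EC(0.5e-15)  # small junction: E_J/E_C < 1
```

Increasing the junction size raises I_0 and C together, pushing the circuit from the charge regime toward the phase regime.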
Fluxonium qubits are a specific type of flux qubit whose Josephson junction is shunted by a linear inductor with E_J ≫ E_L, where E_L = (ℏ/2e)²/L.[24] In practice, the linear inductor is usually implemented by a Josephson junction array composed of a large number (often N > 100) of large-area Josephson junctions connected in series. Under this condition, the Hamiltonian of a fluxonium can be written as:

H = 4E_C n² − E_J cos(ϕ) + (1/2) E_L (ϕ − ϕ_ext)²

where n is the Cooper-pair number operator conjugate to the phase ϕ, and ϕ_ext is the phase bias set by the externally applied flux. One important property of the fluxonium qubit is its longer qubit lifetime at the half-flux sweet spot, which can exceed 1 millisecond.[24][25] Another crucial advantage of the fluxonium qubit biased at the sweet spot is its large anharmonicity, which allows fast local microwave control and mitigates spectral-crowding problems, leading to better scalability.[26][27] The charge qubit, also known as the Cooper pair box, possesses a Josephson to charging energy ratio smaller than 1. For charge qubits, different energy levels correspond to integer numbers of Cooper pairs on a superconducting island (a small superconducting area with a controllable number of charge carriers).[28] Indeed, the first experimentally realized qubit was the Cooper pair box, achieved in 1999.[29] Transmons are a special type of qubit with a shunted capacitor specifically designed to mitigate noise. The transmon qubit model was based on the Cooper pair box[31] (illustrated in the table above in row one, column one). It was also the first qubit to demonstrate quantum supremacy.[32] The increased ratio of Josephson to charge energy mitigates noise.
Two transmons can be coupled using a coupling capacitor.[1] For this two-qubit system the Hamiltonian is written

H = (ħω₁/2) σ_z⁽¹⁾ + (ħω₂/2) σ_z⁽²⁾ + ħJ σ_y⁽¹⁾ σ_y⁽²⁾,

where J is the qubit-qubit coupling strength set by the coupling capacitor and the σ are Pauli matrices acting on the respective qubits.[1] The Xmon is very similar in design to a transmon, having originated from the planar transmon model.[33] An Xmon is essentially a tunable transmon. The major distinguishing difference between transmon and Xmon qubits is that the Xmon qubit is grounded with one of its capacitor pads.[34] Another variation of the transmon qubit is the Gatemon. Like the Xmon, the Gatemon is a tunable variation of the transmon; it is tunable via gate voltage. In 2022, researchers from IQM Quantum Computers, Aalto University, and VTT Technical Research Centre of Finland discovered a novel superconducting qubit known as the Unimon.[36] A relatively simple qubit, the Unimon consists of a single Josephson junction shunted by a linear inductor (possessing an inductance not depending on current) inside a (superconducting) resonator.[37] Unimons have increased anharmonicity and display faster operation times, resulting in lower susceptibility to noise errors.[37] In addition to increased anharmonicity, other advantages of the Unimon qubit include decreased susceptibility to flux noise and complete insensitivity to dc charge noise.[22]

H = E_C (N − N_g)² − E_J cos(ϕ)

In this case N is the number of Cooper pairs that tunnel through the junction, N_g = C V_0/2e is the charge on the capacitor in units of Cooper pairs, and E_C = (2e)²/2(C_J + C) is the charging energy associated with both the capacitance C and the Josephson junction capacitance C_J.
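The Cooper-pair-box Hamiltonian above can be diagonalized numerically in the charge basis, where the cos(ϕ) term couples neighboring charge states N and N ± 1 with amplitude −E_J/2. A self-contained sketch (pure stdlib, using Sturm-sequence bisection for the tridiagonal eigenvalues; the energy values E_C = 1, E_J = 0.2 are illustrative assumptions in arbitrary units):

```python
def cpb_spectrum(EC, EJ, ng, ncut=5, nlevels=3):
    """Lowest eigenvalues of H = EC*(N - ng)^2 - EJ*cos(phi) in the
    charge basis N = -ncut..ncut; cos(phi) appears as -EJ/2 on the
    off-diagonals of a symmetric tridiagonal matrix."""
    diag = [EC * (n - ng) ** 2 for n in range(-ncut, ncut + 1)]
    off = -EJ / 2.0

    def count_below(x):
        # Number of negative pivots of (H - x*I) = number of eigenvalues < x
        cnt, d = 0, 1.0
        for i, a in enumerate(diag):
            d = (a - x) - (off * off / d if i > 0 else 0.0)
            if d == 0.0:
                d = 1e-300
            if d < 0:
                cnt += 1
        return cnt

    levels = []
    for k in range(nlevels):
        lo, hi = -abs(EJ) - 1.0, max(diag) + abs(EJ) + 1.0
        while hi - lo > 1e-12:  # bisect for the k-th smallest eigenvalue
            mid = 0.5 * (lo + hi)
            if count_below(mid) >= k + 1:
                hi = mid
            else:
                lo = mid
        levels.append(0.5 * (lo + hi))
    return levels

# At the charge sweet spot ng = 0.5 the two lowest levels are split by
# roughly EJ, while the next level sits far above: an addressable qubit.
E0, E1, E2 = cpb_spectrum(EC=1.0, EJ=0.2, ng=0.5)
```

The uneven gaps (E1 − E0 ≈ E_J, E2 − E1 much larger) are the non-degenerate level spacings that the text attributes to the junction's nonlinearity.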
H = q²/2C_J + (Φ_0/2π)² ϕ²/2L − E_J cos[ϕ − Φ(2π/Φ_0)]

Note that ϕ is allowed to take values greater than 2π (it is an extended, non-compact variable) and is alternatively defined as the time integral of the voltage along the inductance L.

H = (2e)²/2C_J · q² − I_0 (Φ_0/2π) ϕ − E_J cos(ϕ)

Here Φ_0 is the magnetic flux quantum. In the table above, the three superconducting qubit archetypes are reviewed. The first row presents the qubit's electrical circuit diagram. The second row depicts the quantum Hamiltonian derived from the circuit. Generally, the Hamiltonian is the sum of the system's kinetic and potential energy components (analogous to a particle in a potential well). For the Hamiltonians denoted, ϕ is the superconducting wave function phase difference across the junction, C_J is the capacitance associated with the Josephson junction, and q is the charge on the junction capacitance. For each potential depicted, only solid wave functions are used for computation. The qubit potential is indicated by a thick red line, and schematic wave function solutions are depicted by thin lines, lifted to their appropriate energy level for clarity. Note that the particle mass corresponds to an inverse function of the circuit capacitance, and that the shape of the potential is governed by the regular inductors and Josephson junctions. Schematic wave solutions in the third row of the table show the complex amplitude of the phase variable. Specifically, if a qubit's phase is measured while the qubit occupies a particular state, there is a non-zero probability of measuring a specific value only where the depicted wave function oscillates.
All three rows are essentially different presentations of the same physical system. The GHz energy gap between energy levels of a superconducting qubit is designed to be compatible with available electronic equipment, due to the terahertz gap (a lack of equipment in the higher frequency band). The superconductor energy gap implies a top limit of operation below ~1 THz, beyond which Cooper pairs break, so the energy level separation cannot be too high. On the other hand, the energy level separation cannot be too small, due to cooling considerations: a temperature of 1 K implies energy fluctuations of 20 GHz. Temperatures of tens of millikelvin, achieved in dilution refrigerators, allow qubit operation at a ~5 GHz energy level separation. Qubit energy level separation is frequently adjusted by controlling a dedicated bias current line, providing a "knob" to fine-tune the qubit parameters. A single-qubit gate is achieved by a rotation in the Bloch sphere. Rotations between the different energy levels of a single qubit are induced by microwave pulses sent to an antenna or transmission line coupled to the qubit, with a frequency resonant with the energy separation between levels. Individual qubits may be addressed by a dedicated transmission line, or by a shared one if the other qubits are off resonance. The axis of rotation is set by quadrature amplitude modulation of the microwave pulse, while the pulse length determines the angle of rotation.[39] More formally (following the notation of [39]), for a driving signal of frequency ω_d, the driven qubit Hamiltonian in the rotating wave approximation is

H/ħ = ((ω − ω_d)/2) σ_z + (ℰx(t)/2) σ_x + (ℰy(t)/2) σ_y,

where ω is the qubit resonance and σ_x, σ_y are Pauli matrices. To implement a rotation about the X axis, one can set ℰy(t) = 0 and apply a microwave pulse at frequency ω_d = ω for time t_g.
The resulting transformation is

U = exp(−(i/2) ∫₀^t_g ℰx(t) dt · σ_x).

This is exactly the rotation operator R_X(θ) by angle θ = ∫₀^t_g ℰx(t) dt about the X axis in the Bloch sphere. A rotation about the Y axis can be implemented in a similar way. Exhibiting the two rotation operators is sufficient for universality, as every single-qubit unitary operator U may be presented as U = R_X(θ₁) R_Y(θ₂) R_X(θ₃) (up to a global phase, which is physically inconsequential) by a procedure known as the X−Y decomposition.[40] Setting ∫₀^t_g ℰx(t) dt = π results in the transformation R_X(π), which equals the Pauli X gate up to the global phase −i and is known as the NOT gate. The ability to couple qubits is essential for implementing two-qubit gates. Coupling two qubits can be achieved by connecting both to an intermediate electrical coupling circuit. The circuit may be a fixed element (such as a capacitor) or controllable (like the DC-SQUID). In the first case, decoupling the qubits while the gate is switched off is achieved by tuning the qubits out of resonance with one another, making the energy gaps between their computational states different.[41] This approach is inherently limited to nearest-neighbor coupling, since a physical electrical circuit must be laid out between the connected qubits. Notably, D-Wave Systems' nearest-neighbor coupling achieves a highly connected unit cell of 8 qubits in the Chimera graph configuration. Quantum algorithms typically require coupling between arbitrary qubits; consequently, multiple swap operations are necessary, limiting the length of the quantum computation possible before processor decoherence. Another method of coupling two or more qubits is by way of a quantum bus, by pairing qubits to this intermediate.
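The rotation operators and the X−Y decomposition above can be checked directly with 2×2 matrices. A minimal sketch (the specific angles in the decomposition example are arbitrary choices for illustration):

```python
import math

def RX(theta):
    """Bloch-sphere rotation about X: RX(theta) = exp(-i*theta*sigma_x/2)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]

def RY(theta):
    """Bloch-sphere rotation about Y: RY(theta) = exp(-i*theta*sigma_y/2)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A pi pulse about X gives the NOT gate up to the global phase -i:
U = RX(math.pi)   # [[0, -1j], [-1j, 0]] == -i * (Pauli X)

# X-Y decomposition: any single-qubit unitary as RX(a) RY(b) RX(c)
V = matmul(RX(0.3), matmul(RY(1.1), RX(-0.7)))
```

Since each factor is unitary, V is unitary for any choice of the three angles, which is the content of the decomposition claim.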
A quantum bus is often implemented as a microwave cavity modeled by a quantum harmonic oscillator. Coupled qubits may be brought in and out of resonance with the bus and with each other, eliminating the nearest-neighbor limitation. The formalism describing this coupling is cavity quantum electrodynamics, in which qubits are analogous to atoms interacting with an optical photon cavity, but at GHz rather than the THz-and-above regime of optical electromagnetic radiation. Resonant excitation exchange among these artificial atoms is potentially useful for the direct implementation of multi-qubit gates.[42] Following the dark-state manifold, the Khazali-Mølmer scheme[42] performs complex multi-qubit operations in a single step, providing a substantial shortcut to the conventional circuit model. One popular gating mechanism uses two qubits and a bus, each tuned to a different energy level separation. Applying microwave excitation to the first qubit, at a frequency resonant with the second qubit, causes a σ_x rotation of the second qubit. The rotation direction depends on the state of the first qubit, allowing the construction of a controlled phase gate.[43] Following the notation of [43], the drive Hamiltonian describing the system excited through the first qubit's driving line is formally written

H_D/ħ = A(t) cos(ω̃₂ t) (σ_x ⊗ I − (J/Δ₁₂) σ_z ⊗ σ_x + m₁₂ I ⊗ σ_x),

where A(t) is the shape of the microwave pulse in time, ω̃₂ is the resonance frequency of the second qubit, {I, σ_x, σ_y, σ_z} are the Pauli matrices, J is the coupling coefficient between the two qubits via the resonator, Δ₁₂ ≡ ω₁ − ω₂ is the qubit detuning, m₁₂ is the stray (unwanted) coupling between qubits, and ħ is the reduced Planck constant. The time integral over A(t) determines the angle of rotation.
Unwanted rotations from the first and third terms of the Hamiltonian can be compensated for with single-qubit operations. The remaining component, combined with single-qubit rotations, forms a basis for the su(4) Lie algebra. Higher levels (outside of the computational subspace) of a pair of coupled superconducting circuits can be used to induce a geometric phase on one of the computational states of the qubits, leading to an entangling conditional phase shift of the relevant qubit states. This effect has been implemented by flux-tuning the qubit spectra[44] and by using selective microwave driving.[45] Off-resonant driving can be used to induce a differential ac-Stark shift, allowing the implementation of all-microwave controlled-phase gates.[46] The Heisenberg model of interactions, written as

H_XXZ/ħ = Σ_{i,j} [ J_XY (σ_x^i σ_x^j + σ_y^i σ_y^j) + J_ZZ σ_z^i σ_z^j ],

serves as the basis for analog quantum simulation of spin systems and as the primitive for an expressive set of quantum gates, sometimes referred to as fermionic simulation (or fSim) gates. In superconducting circuits, this interaction model has been implemented using flux-tunable qubits with flux-tunable coupling,[47] allowing the demonstration of quantum supremacy.[48] It can also be realized in fixed-frequency qubits with fixed coupling using microwave drives.[49] The fSim gate family encompasses arbitrary XY and ZZ two-qubit unitaries, including the iSWAP, CZ, and SWAP gates (see Quantum logic gate). Architecture-specific readout, or measurement, mechanisms exist. Readout of a phase qubit is explained in the qubit archetypes table above. A flux qubit state is often read using an adjustable DC-SQUID magnetometer.
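The two-site XXZ Hamiltonian can be written out as an explicit 4×4 matrix from Kronecker products of Pauli matrices. A sketch (the values J_XY = 1, J_ZZ = 0.5 are arbitrary illustrative couplings in units of ħ):

```python
def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def H_xxz(Jxy, Jzz):
    """Two-site XXZ Hamiltonian Jxy*(XX + YY) + Jzz*ZZ, in units of hbar."""
    XX, YY, ZZ = kron(X, X), kron(Y, Y), kron(Z, Z)
    return [[Jxy * (XX[i][j] + YY[i][j]) + Jzz * ZZ[i][j]
             for j in range(4)] for i in range(4)]

H = H_xxz(Jxy=1.0, Jzz=0.5)
# In the basis |00>,|01>,|10>,|11>: the XY part exchanges |01> <-> |10>
# with amplitude 2*Jxy (the interaction that generates iSWAP), while the
# ZZ part shifts aligned vs anti-aligned states (generating CZ-type phases).
```

Note that XX + YY couples only |01⟩ and |10⟩ (the |00⟩–|11⟩ matrix elements of XX and YY cancel), which is why this interaction conserves excitation number and naturally produces the partial-iSWAP family of fSim gates.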
States may also be measured using an electrometer.[1] A more general readout scheme includes a coupling to a microwave resonator, whose resonance frequency is dispersively shifted by the qubit state.[50][51] Multi-level systems (qudits) can be read out using electron shelving.[52]

DiVincenzo's criteria is a list describing the requirements for a physical system to be capable of implementing a logical qubit, and these criteria are satisfied by the superconducting quantum computing implementation. Much of the current development effort in superconducting quantum computing aims to achieve interconnect, control, and readout in the third dimension with additional lithography layers. Although DiVincenzo's criteria as originally proposed consist of five criteria required for physically implementing a quantum computer, the more complete list consists of seven criteria, as it takes into account communication over a computer network capable of transmitting quantum information between computers, known as the "quantum internet". The first five criteria ensure successful quantum computing, while the final two allow for quantum communication. The final two criteria have been experimentally demonstrated by research performed by ETH with two superconducting qubits connected by a coaxial cable.[57]

One of the primary challenges of superconducting quantum computing is the extremely low temperature at which superconductivity must be maintained. Other basic challenges in superconducting qubit design are shaping the potential well and choosing particle mass such that the energy separation between two specific energy levels is unique, differing from all other interlevel energy separations in the system, since these two levels are used as the logical states of the qubit.
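The dispersive readout scheme mentioned above can be made concrete with a small numerical sketch. The coupling strength, detuning, and resonator frequency below are illustrative assumptions rather than values from this article, and the simple two-level dispersive approximation χ = g²/Δ is used (sign conventions for the state-dependent pull vary in the literature):

```python
import numpy as np

# Assumed, illustrative parameters for a qubit-resonator pair in the
# dispersive regime (|Delta| >> g):
g = 2 * np.pi * 100e6      # qubit-resonator coupling (rad/s), i.e. g/2pi = 100 MHz
delta = 2 * np.pi * 1.5e9  # qubit-resonator detuning (rad/s), i.e. 1.5 GHz
f_r = 7.0e9                # bare resonator frequency (Hz)

# Two-level dispersive shift: the resonator resonance is pulled by +/- chi
# depending on whether the qubit is in |0> or |1>.
chi = g**2 / delta         # rad/s
f_qubit_0 = f_r - chi / (2 * np.pi)
f_qubit_1 = f_r + chi / (2 * np.pi)

# Probing the resonator near f_r and detecting which of the two dressed
# frequencies responds therefore measures the qubit state.
```

With these assumed numbers the pull χ/2π is a few MHz, comfortably resolvable against typical resonator linewidths, which is why the shift can be read out by monitoring the transmitted microwave tone.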
Superconducting quantum computing must also mitigate quantum noise (disruptions of the system caused by its interaction with an environment) as well as leakage (information being lost to the surrounding environment). One way to reduce leakage is with parity measurements.[16] Another strategy is to use qubits with large anharmonicity.[26][27] Many current challenges faced by superconducting quantum computing lie in the field of microwave engineering.[50] As superconducting quantum computing approaches larger-scale devices, researchers face difficulties in qubit coherence, scalable calibration software, efficient determination of the fidelity of quantum states across an entire chip, and qubit and gate fidelity.[16] Moreover, superconducting quantum computing devices must be reliably reproducible at increasingly large scales such that they are compatible with these improvements.[16]

Journey of superconducting quantum computing: The focus began to shift onto superconducting qubits in the latter half of the 1990s, when quantum tunneling across Josephson junctions became apparent, leading to the realization that quantum computing could be achieved through superconducting qubits.[58] In 1999, Yasunobu Nakamura published a paper[59] exhibiting the initial design of a superconducting qubit, now known as the "charge qubit". This is the primary basis on which later designs were built. These initial qubits were limited by short coherence times and destructive measurement.
Further amendments to this initial breakthrough led to the invention of the phase qubit and the flux qubit, subsequently resulting in the transmon qubit, which is now widely used in superconducting quantum computing. The transmon qubit enhanced the original designs and further insulated the qubit from charge noise.[58] The field has seen significant advancements in recent history and has large potential for transforming computing.

Recent advances in Josephson junction–based QPUs: A recent paper by Mohebi and Mohseni provides additional insight into the engineering challenges and innovations necessary for advancing superconducting quantum processing units (QPUs).

Future of superconducting quantum computing: Industry leaders such as Google, IBM, and Baidu are using superconducting quantum computing and transmon qubits to advance the field. In August 2022, Baidu released its plans to build a fully integrated, top-to-bottom quantum computer incorporating superconducting qubits, with hardware, software, and applications fully integrated.[61] IBM has publicly released a roadmap for its quantum computers, which also incorporate superconducting transmon qubits. In 2016, Google implemented 16 qubits to demonstrate the Fermi-Hubbard model, and in another experiment used 17 qubits to optimize the Sherrington-Kirkpatrick model. Google also produced the Sycamore quantum computer, which performed in 200 seconds a task estimated to take 10,000 years on a classical computer.[64]
https://en.wikipedia.org/wiki/Superconducting_quantum_computing
IBM Quantum System One is the first circuit-based commercial quantum computer, introduced by IBM in January 2019.[1][2][3] This integrated quantum computing system is housed in an airtight borosilicate glass cube that maintains a controlled physical environment.[2][4] Each face of the cube is 9 feet (2.7 m) wide and tall.[2] A cylindrical protrusion from the center of the ceiling is a dilution refrigerator containing a 20-qubit transmon quantum processor.[1][5] It was tested for the first time in the summer of 2018, for two weeks, in Milan, Italy.

IBM Quantum System One was developed by IBM Research, with assistance from the Map Project Office and Universal Design Studio. CERN, ExxonMobil, Fermilab, Argonne National Laboratory and Lawrence Berkeley National Laboratory are among the clients signed up to access the system remotely.[6][7]

From April 6 to May 31, 2019, the Boston Museum of Science hosted an exhibit featuring a replica of the IBM Quantum System One.[8][9] On June 15, 2021, IBM deployed the first unit of Quantum System One in Germany at its headquarters in Ehningen.[10] On April 5, 2024, IBM unveiled a Quantum System One at the Rensselaer Polytechnic Institute, the first IBM quantum system on a university campus.[11]
https://en.wikipedia.org/wiki/IBM_Q_System_One
The Defense Advanced Research Projects Agency (DARPA) is a research and development agency of the United States Department of Defense responsible for the development of emerging technologies for use by the military.[3][4] Originally known as the Advanced Research Projects Agency (ARPA), the agency was created on February 7, 1958, by President Dwight D. Eisenhower in response to the Soviet launch of Sputnik 1 in 1957. By collaborating with academia, industry, and government partners, DARPA formulates and executes research and development projects to expand the frontiers of technology and science, often beyond immediate U.S. military requirements.[5] The organization's name first changed from its founding name, ARPA, to DARPA in March 1972, changed back to ARPA in February 1993, then reverted to DARPA in March 1996.[6]

The Economist has called DARPA "the agency that shaped the modern world", with technologies like "Moderna's COVID-19 vaccine...weather satellites, GPS, drones, stealth technology, voice interfaces, the personal computer and the internet on the list of innovations for which DARPA can claim at least partial credit".[7] Its track record of success has inspired governments around the world to launch similar research and development agencies.[7]

DARPA is independent of other military research and development and reports directly to senior Department of Defense management. DARPA comprises approximately 220 government employees in six technical offices, including nearly 100 program managers, who together oversee about 250 research and development programs.[8] Rob McHenry is the current acting director.[9]

The Advanced Research Projects Agency (ARPA) was suggested by the President's Scientific Advisory Committee to President Dwight D.
Eisenhower in a meeting called after the launch of Sputnik.[10] ARPA was formally authorized by President Eisenhower in 1958 for the purpose of forming and executing research and development projects to expand the frontiers of technology and science, able to reach far beyond immediate military requirements.[5] The two relevant acts are the Supplemental Military Construction Authorization (Air Force)[11] (Public Law 85-325) and Department of Defense Directive 5105.15, of February 1958. It was placed within the Office of the Secretary of Defense (OSD) and counted approximately 150 people.[12] Its creation was directly attributed to the launching of Sputnik and to the U.S. realization that the Soviet Union had developed the capacity to rapidly exploit military technology.

Initial funding of ARPA was $520 million.[13] ARPA's first director, Roy Johnson, left a $160,000 management job at General Electric for an $18,000 job at ARPA.[14][15] Herbert York from Lawrence Livermore National Laboratory was hired as his scientific assistant.[16] Johnson and York were both keen on space projects, but when NASA was established later in 1958, all space projects and most of ARPA's funding were transferred to it. Johnson resigned, and ARPA was repurposed to do "high-risk", "high-gain", "far out" basic research, a posture that was enthusiastically embraced by the nation's scientists and research universities.[17] ARPA's second director was Brigadier General Austin W. Betts, who resigned in early 1961 and was succeeded by Jack Ruina, who served until 1963.[18] Ruina, the first scientist to administer ARPA, managed to raise its budget to $250 million.[19] It was Ruina who hired J. C. R.
Licklider as the first administrator of the Information Processing Techniques Office, which played a vital role in the creation of ARPANET, the basis for the future Internet.[20]

Additionally, the political and defense communities recognized the need for a high-level Department of Defense organization to formulate and execute R&D projects that would expand the frontiers of technology beyond the immediate and specific requirements of the Military Services and their laboratories. In pursuit of this mission, DARPA has developed and transferred technology programs encompassing a wide range of scientific disciplines that address the full spectrum of national security needs.

From 1958 to 1965, ARPA's emphasis centered on major national issues, including space, ballistic missile defense, and nuclear test detection.[21] During 1960, all of its civilian space programs were transferred to the National Aeronautics and Space Administration (NASA) and the military space programs to the individual services.[22] This allowed ARPA to concentrate its efforts on Project Defender (defense against ballistic missiles), Project Vela (nuclear test detection), and Project AGILE (counterinsurgency R&D), and to begin work on computer processing, behavioral sciences, and materials sciences. The DEFENDER and AGILE programs formed the foundation of DARPA sensor, surveillance, and directed-energy R&D, particularly in the study of radar, infrared sensing, and x-ray/gamma-ray detection.

ARPA at this point (1959) played an early role in Transit (also called NavSat), a predecessor to the Global Positioning System (GPS).[23] "Fast-forward to 1959 when a joint effort between DARPA and the Johns Hopkins Applied Physics Laboratory began to fine-tune the early explorers' discoveries.
TRANSIT, sponsored by the Navy and developed under the leadership of Richard Kirschner at Johns Hopkins, was the first satellite positioning system."[24][25]

During the late 1960s, with the transfer of these mature programs to the Services, ARPA redefined its role and concentrated on a diverse set of relatively small, essentially exploratory research programs. The agency was renamed the Defense Advanced Research Projects Agency (DARPA) in 1972, and during the early 1970s it emphasized directed-energy programs, information processing, and tactical technologies.[citation needed]

Concerning information processing, DARPA made great progress, initially through its support of the development of time-sharing. All modern operating systems rely on concepts invented for the Multics system, developed by a cooperation among Bell Labs, General Electric and MIT, which DARPA supported by funding Project MAC at MIT with an initial two-million-dollar grant.[26]

DARPA supported the evolution of the ARPANET (the first wide-area packet-switching network), Packet Radio Network, Packet Satellite Network and, ultimately, the Internet, as well as research in the artificial intelligence fields of speech recognition and signal processing, including parts of Shakey the robot.[27] DARPA also supported the early development of both hypertext and hypermedia. DARPA funded one of the first two hypertext systems, Douglas Engelbart's NLS computer system, as well as The Mother of All Demos. DARPA later funded the development of the Aspen Movie Map, which is generally seen as the first hypermedia system and an important precursor of virtual reality.

The Mansfield Amendment of 1973 expressly limited appropriations for defense research (through ARPA/DARPA) to projects with direct military application. The resulting "brain drain" is credited with boosting the development of the fledgling personal computer industry. Some young computer scientists left the universities for startups and private research laboratories such as Xerox PARC.
Between 1976 and 1981, DARPA's major projects were dominated by air, land, sea, and space technology, tactical armor and anti-armor programs, infrared sensing for space-based surveillance, high-energy laser technology for space-based missile defense, antisubmarine warfare, advanced cruise missiles, advanced aircraft, and defense applications of advanced computing. Many of the successful programs were transitioned to the Services, such as the foundation technologies in automatic target recognition, space-based sensing, propulsion, and materials, which were transferred to the Strategic Defense Initiative Organization (SDIO), later known as the Ballistic Missile Defense Organization (BMDO), now titled the Missile Defense Agency (MDA).

During the 1980s, the attention of the agency was centered on information processing and aircraft-related programs, including the National Aerospace Plane (NASP), or Hypersonic Research Program. The Strategic Computing Program enabled DARPA to exploit advanced processing and networking technologies and to rebuild and strengthen relationships with universities after the Vietnam War. In addition, DARPA began to pursue new concepts for small, lightweight satellites (LIGHTSAT) and directed new programs regarding defense manufacturing, submarine technology, and armor/anti-armor.

In 1981, two engineers, Robert McGhee and Kenneth Waldron, started to develop the Adaptive Suspension Vehicle (ASV), nicknamed the "Walker", at the Ohio State University under a research contract from DARPA.[28] The vehicle was 17 feet long, 8 feet wide, and 10.5 feet high, and had six legs to support its three-ton aluminum body; it was designed to carry cargo over difficult terrain. However, DARPA lost interest in the ASV after problems with cold-weather tests.[29]

On February 4, 2004, the agency shut down its so-called "LifeLog Project".
The project's aim would have been "to gather in a single place just about everything an individual says, sees or does".[30]

On October 28, 2009, the agency broke ground on a new facility in Arlington County, Virginia, a few miles from The Pentagon.[31]

In fall 2011, DARPA hosted the 100-Year Starship Symposium with the aim of getting the public to start thinking seriously about interstellar travel.[32]

On June 5, 2016, NASA and DARPA announced plans to build new X-planes, with NASA setting out to create a whole series of X-planes over the following 10 years.[33]

Between 2014 and 2016, DARPA shepherded the first machine-to-machine computer security competition, the Cyber Grand Challenge (CGC), bringing a group of top-notch computer security experts to search for security vulnerabilities, exploit them, and create fixes that patch those vulnerabilities in a fully automated fashion.[34][35] It is one of the DARPA prize competitions intended to spur innovation.

In June 2018, DARPA leaders demonstrated a number of new technologies developed within the framework of the GXV-T program. The goal of this program is to create a lightly armored combat vehicle of modest dimensions which, through maneuverability and other tricks, can successfully resist modern anti-tank weapon systems.[36]

In September 2020, DARPA and the US Air Force announced that the Hypersonic Air-breathing Weapon Concept (HAWC) was ready for free-flight tests within the next year.[37]

Victoria Coleman became the director of DARPA in November 2020.[38]

In recent years, DARPA officials have contracted out core functions to corporations. For example, during fiscal year 2020, Chenega ran physical security on DARPA's premises,[39] System High Corp. carried out program security,[40] and Agile Defense ran unclassified IT services.[41] General Dynamics runs classified IT services.[42] Strategic Analysis Inc.
provided support services regarding engineering, science, mathematics, and front office and administrative work.[43]

DARPA has six technical offices that manage the agency's research portfolio, and two additional offices that manage special projects.[44][45] All offices report to the DARPA director. A 1991 reorganization created several offices which existed throughout the early 1990s,[53] and a 2010 reorganization merged two offices. The agency has had numerous directors since its founding.[56]

A list of DARPA's active and archived projects is available on the agency's website. Because of the agency's fast pace, programs constantly start and stop based on the needs of the U.S. government. Structured information about some of DARPA's contracts and projects is publicly available.[68]

DARPA is well known as a high-tech government agency, and as such has many appearances in popular fiction. Some realistic references to DARPA in fiction are as "ARPA" in Tom Swift and the Visitor from Planet X (DARPA consults on a technical threat),[266] in episodes of the television program The West Wing (the ARPA-DARPA distinction), the television program Numb3rs,[267] and the Netflix film Spectral.[268]
https://en.wikipedia.org/wiki/DARPA
ARPA-E, or the Advanced Research Projects Agency–Energy, is an agency within the United States Department of Energy tasked with funding the research and development of advanced energy technologies.[1] The goal of the agency is to improve U.S. economic prosperity, national security, and environmental well-being. ARPA-E typically funds short-term research projects with the potential for a transformative impact.[2] It is inspired by the Defense Advanced Research Projects Agency (DARPA).[3] The program directors at ARPA-E serve limited terms, in an effort to reduce bureaucracy and bias.[1] As of January 2023, the director is Evelyn Wang.[4]

ARPA-E was initially conceived by a report by the National Academies entitled Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future. The report described a need for the US to stimulate innovation and develop clean, affordable, and reliable energy.[5] ARPA-E was officially created by the America COMPETES Act, authored by Congressman Bart Gordon,[6] within the United States Department of Energy (DOE) in 2007, though without a budget. The initial budget of about $400 million was a part of the economic stimulus bill of February 2009.[7] In early January 2011, the America COMPETES Reauthorization Act of 2010 made additional changes to ARPA-E's structure; this structure is codified in Title 42, Chapter 149, Subchapter XVII, § 16538 of the United States Code. Among its main provisions, Section 16538 provides that ARPA-E shall achieve its goals through energy technology projects.

Like DARPA does for military technology, ARPA-E is intended to fund high-risk, high-reward research involving government labs, private industry, and universities that might not otherwise be pursued.[8] ARPA-E has four stated objectives. ARPA-E was created as part of the America COMPETES Act, signed by President George W. Bush in August 2007.
President Barack Obama announced the launch of ARPA-E on April 27, 2009, as part of an announcement about federal investment in research and development and science education. Soon after its launch, ARPA-E released its first Funding Opportunity Announcement, offering $151 million in total, with individual awards ranging from $500,000 to $9 million. Applicants submitted eight-page "concept papers" that outlined the technical concept; some were invited to submit full applications.[9]

Arun Majumdar, former deputy director of the Lawrence Berkeley National Laboratory, was appointed the first director of ARPA-E in September 2009, over six months after the organization was first funded.[10] U.S. Secretary of Energy Steven Chu presided over the inaugural ARPA-E Energy Innovation Summit on March 1–3, 2010, in Washington, D.C.[11]

2006: The National Academies released the "Rising Above the Gathering Storm" report.
August 9, 2007: President George W. Bush signed into law the America COMPETES Act, which codified many of the recommendations in the National Academies report, thus creating ARPA-E.
April 27, 2009: President Barack Obama allocated $400 million in funding to ARPA-E from the American Recovery and Reinvestment Act of 2009.
September 18, 2009: President Barack Obama nominated Arun Majumdar as Director of ARPA-E.
October 22, 2009: The Senate confirmed Arun Majumdar as ARPA-E's first Director.
October 26, 2009: The Department of Energy awarded $151 million in Recovery Act funds for 37 energy research projects under ARPA-E's first Funding Opportunity Announcement.
December 7, 2009: U.S. Secretary of Energy Steven Chu announced ARPA-E's second round of funding opportunities in the areas of "Electrofuels", "Innovative Materials & Processes for Advanced Carbon Capture Technologies (IMPACCT)", and "Batteries for Electrical Energy Storage in Transportation (BEEST)".
March 1–3, 2010: ARPA-E hosted the inaugural "Energy Innovation Summit", which attracted over 1,700 participants.
March 2, 2010: U.S.
Secretary of Energy Steven Chu announced ARPA-E's third round of funding opportunities in the areas of "Grid-Scale Rampable Intermittent Dispatchable Storage (GRIDS)", "Agile Delivery of Electrical Power Technology (ADEPT)", and "Building Energy Efficiency Through Innovative Thermodevices (BEET-IT)".
April 29, 2010: Vice President Joe Biden announced 37 awarded projects under ARPA-E's second funding opportunity.
July 12, 2010: The Department of Energy awarded $92 million for 42 research projects under ARPA-E's third funding opportunity.
December 8, 2014: Ellen Williams confirmed by the Senate as Director of ARPA-E.[12]
June 28, 2019: Lane Genatowski confirmed by the Senate as Director of ARPA-E.[13]
December 22, 2022: Evelyn Wang confirmed by the Senate as director of ARPA-E.

ARPA-E was created to fund energy technology projects that translate scientific discoveries and inventions into technological innovations, and to accelerate technological advances in high-risk areas that industry is not likely to pursue independently. This goal is similar to the work of the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE), which advances clean energy projects according to established roadmaps.[14] However, ARPA-E also funds advanced technology in other spaces, such as natural gas and grid technology.[15][16][17] ARPA-E does not fund incremental improvements to existing technologies or roadmaps established by existing DOE programs.[18]

ARPA-E programs are created through a process of debate surrounding the technical and scientific merits and challenges of potential research areas. Programs must satisfy both "technology push" (the technical merit of innovative platform technologies that can be applied to energy systems) and "market pull" (the potential market impact and cost-effectiveness of the technology). The program creation process begins with a "deep dive" in which an energy problem is explored to identify potential topics for program development.
ARPA-E Program Directors then hold technical workshops to gather input from experts in various disciplines about current and upcoming technologies. To date, ARPA-E has hosted or co-hosted 13 technical workshops. Following each workshop, the Program Director proposes a new program and defends it against a set of criteria that justifies its creation. The Program Director then refines the program, incorporating feedback, and seeks approval from the Director. If successful, a new ARPA-E program is created, and a funding opportunity announcement (FOA) is released soliciting project proposals.

The ARPA-E peer review process is designed to help drive program success. During proposal review, ARPA-E solicits external feedback from leading experts in a particular field. ARPA-E reviewers evaluate applications over several weeks and then convene a review panel. One notable facet of ARPA-E's evaluation process is the opportunity for the applicant to read reviewers' comments and provide a rebuttal that the agency reviews before making funding decisions. The applicant response period allows ARPA-E to avoid misunderstandings by asking clarifying questions that enable it to make informed decisions.[5]

The U.S. Department of Energy and ARPA-E awarded $151 million in American Recovery and Reinvestment Act funds on October 26, 2009, for 37 energy research projects. The funding supported renewable energy technologies for solar cells, wind turbines, geothermal drilling, biofuels, and biomass energy crops. The grants also supported energy efficiency technologies, including power electronics and engine-generators for advanced vehicles, devices for waste heat recovery, smart glass and control systems for smart buildings, light-emitting diodes (LEDs), reverse-osmosis membranes for water desalination, catalysts to split water into hydrogen and oxygen, improved fuel cell membranes, and more energy-dense magnetic materials for electronic components.
Six grants went to energy storage technologies, including an ultracapacitor, improved lithium-ion batteries, metal-air batteries that use ionic liquids, liquid sodium batteries, and liquid metal batteries.[19][20][21] Other awards went to projects that conducted research and development on a bioreactor with the potential to produce gasoline directly from sunlight and carbon dioxide, and on crystal growth technology to lower the cost of light-emitting diodes.[19][20][21]

U.S. Secretary of Energy Steven Chu announced a second round of ARPA-E funding opportunities on December 7, 2009.[22] ARPA-E solicited projects that focused on three critical areas: Biofuels from Electricity (Electrofuels), Batteries for Electrical Energy Storage in Transportation (BEEST), and Innovative Materials and Processes for Advanced Carbon Capture Technologies (IMPACCT). On April 29, 2010, Vice President Biden announced the 37 awardees that ARPA-E had selected from over 540 initial concept papers.[23] The awards ranged from around $500,000 to $6 million and involved a variety of national laboratories, universities, and companies.[24]

Unlike the first funding opportunity, the second designated project submissions by category. Of the selected projects, 14 focused on IMPACCT, 13 on Electrofuels, and 10 on BEEST. For example, Harvard Medical School submitted a project under Electrofuels entitled "Engineering a Bacterial Reverse Fuel Cell", which focuses on the development of a bacterium that can convert carbon dioxide into gasoline. MIT received an award under BEEST for a proposal entitled "Semi-Solid Rechargeable Fuel Battery", a concept for producing lighter, smaller, and cheaper vehicle batteries. IMPACCT projects included the GE Global Research Center's "CO2 Capture Process Using Phase-Changing Absorbents", which focuses on a liquid that turns solid when exposed to carbon dioxide.[23]

On March 2, 2010, at the inaugural ARPA-E Energy Innovation Summit, U.S.
Energy Secretary Steven Chu announced a third funding opportunity for ARPA-E projects. Like the second funding opportunity, ARPA-E solicited projects by category: Grid-Scale Rampable Intermittent Dispatchable Storage (GRIDS), Agile Delivery of Electrical Power Technology (ADEPT), and Building Energy Efficiency Through Innovative Thermodevices (BEET-IT).

GRIDS welcomed projects that focused on widespread deployment of cost-effective grid-scale energy storage in two specific areas: 1) proof-of-concept storage component projects focused on validating new, over-the-horizon electrical energy storage concepts, and 2) advanced system prototypes that address critical shortcomings of existing grid-scale energy storage technologies.

ADEPT focused on investing in materials for fundamental advances in soft magnetics, high-voltage switches, and reliable, high-density charge storage in three categories: 1) fully integrated, chip-scale power converters for applications including, but not limited to, compact, efficient drivers for solid-state lighting, distributed micro-inverters for photovoltaics, and single-chip power supplies for computers, 2) kilowatt-scale package-integrated power converters, enabling applications such as low-cost, efficient inverters for grid-tied photovoltaics and variable-speed motors, and 3) lightweight, solid-state, medium-voltage energy conversion for high-power applications such as solid-state electrical substations and wind turbine generators.
BEET-IT solicited projects regarding energy-efficient cooling technologies and air conditioners (AC) for buildings to save energy and reduce greenhouse gas emissions in the following areas: 1) cooling systems that use refrigerants with low global warming potential; 2) energy-efficient air conditioning (AC) systems for warm and humid climates with an increased coefficient of performance (COP); and 3) vapor-compression AC systems for hot climates with recirculating air loads with an increased COP.[25]

Secretary Chu announced the selection of 43 projects under GRIDS, ADEPT, and BEET-IT on July 12, 2010. The awards totaled $92 million and ranged from $400,000 to $5 million, comprising 14 projects in ADEPT, 17 in BEET-IT, and 12 in GRIDS. Examples of awarded projects include a "Soluble Acid Lead Flow Battery" that pumps chemicals through a battery cell when electricity is needed (GRIDS), "Silicon Carbide Power Modules for Grid Scale Power Conversion", which uses advanced transistors to make the electrical grid more flexible and controllable (ADEPT), and an "Absorption-Osmosis Cooling Cycle", a new air conditioning system that uses water as a refrigerant rather than chemicals (BEET-IT).[26]

ARPA-E's fourth round of funding was announced on April 20, 2011, and awarded projects in five technology areas: Plants Engineered To Replace Oil (PETRO), High Energy Advanced Thermal Storage (HEATS), Rare Earth Alternatives in Critical Technologies (REACT), Green Electricity Network Integration (GENI), and Solar Agile Delivery of Electrical Power Technology (Solar ADEPT). PETRO focused on projects with systems to create biofuels from domestic sources such as tobacco and pine trees for half their current cost. REACT funded early-stage technology alternatives that reduce or eliminate dependence on rare earth materials by developing substitutes in two key areas: electric vehicle motors and wind generators.
HEATS funded projects that promoted advancement in thermal energy storage technology. GENI focused on funding software and hardware that could reliably control the grid network. Solar ADEPT accepted projects that integrated power electronics into solar panels and solar farms to extract and deliver energy more efficiently. The awardees for the fourth funding opportunity were announced on September 29, 2011. The 60 projects received $156 million from the ARPA-E Fiscal Year 2011 budget. Examples of the awarded projects included a project that increases the production of turpentine, a natural liquid biofuel (PETRO); a project entitled "Manganese-Based Permanent Magnet" that reduces the cost of wind turbines and electric vehicles by developing a replacement for rare earth magnets based on an innovative composite using manganese material (REACT); a project entitled "HybriSol" that develops a heat battery to store energy from the sun (HEATS); a project that develops a new system that allows real-time, automated control over the transmission lines that make up the electric power grid (GENI); and a project that develops lightweight electronics to connect to photovoltaic solar panels to be installed on walls or rooftops.[27] Since 2010, ARPA-E has hosted the Energy Innovation Summit. The 10th Summit was held July 8–10, 2019 in Denver, Colorado, and the 11th Summit was held March 17–19, 2021 at the Gaylord Convention Center, near Washington, D.C.[28] ARPA-E has generated over 1,000 projects since its inception and attracted about $4.9 billion in private investment for 179 of these projects, with $2.6 billion invested in R&D by the US government. Published, peer-reviewed research articles are also a significant output, totaling 4,614. In addition, the program has generated 716 patents.[29]
https://en.wikipedia.org/wiki/Advanced_Research_Projects_Agency%E2%80%93Energy
Homeland Security Advanced Research Projects Agency (HSARPA) is a part of the Science and Technology Directorate at the United States Department of Homeland Security. Much like DARPA in the Department of Defense, HSARPA is tasked with advanced research projects to develop the technology needed to protect the US. Some of the chief beneficiaries of HSARPA are Customs and Border Protection and the Office of Intelligence and Analysis. HSARPA manages a broad portfolio of solicitations and proposals for the development of homeland security technology. HSARPA performs this function in part by awarding procurement contracts, grants, cooperative agreements, or other transactions for research or prototypes to public or private entities, businesses, federally funded research and development centers, and universities. HSARPA invests in programs offering the potential for revolutionary changes in technologies that promote homeland security. It also accelerates the prototyping and deployment of technologies intended to reduce homeland vulnerabilities. HSARPA is divided into five main divisions:[1] the Borders and Maritime Security Division,[2] the Chemical and Biological Defense Division,[3] the Cyber Security Division,[4] the Explosives Division,[5] and the Resilient Systems Division.
https://en.wikipedia.org/wiki/Homeland_Security_Advanced_Research_Projects_Agency
The Advanced Research Projects Agency for Health (ARPA-H) is an agency within the Department of Health and Human Services.[1] Its mission is to "make pivotal investments in break-through technologies and broadly applicable platforms, capabilities, resources, and solutions that have the potential to transform important areas of medicine and health for the benefit of all patients and that cannot readily be accomplished through traditional research or commercial activity."[2] ARPA-H was approved by Congress with the passing of H.R. 2471, the Consolidated Appropriations Act, 2022, and was signed into Public Law 117-103 by U.S. president Joe Biden on March 15, 2022.[3] Fifteen days later, Health and Human Services Secretary Xavier Becerra announced that the agency will have access to the resources of the National Institutes of Health, but will answer to the U.S. Secretary of Health and Human Services.[4] The agency initially has a $1 billion budget to be used before fiscal year 2025 (October 2024), and the Biden administration has requested much more funding from Congress. In December 2022, the Consolidated Appropriations Act, 2023 (Pub.L. 117–328) provided $1.5 billion for ARPA-H for fiscal year 2023. The Biden administration requested and received $2.5 billion for FY2024, and had spent $400 million in research grants by August 13, 2024.[5] In March 2023, ARPA-H announced one of its three headquarters locations would be in the Washington metropolitan area.[6][7] In September 2023, ARPA-H announced that a second hub would be located in Cambridge, Massachusetts, following a bid led by U.S.
representative Richard Neal from Massachusetts's 1st congressional district and University of Massachusetts System president Marty Meehan to have the agency locate a hub in the Greater Boston area.[8][9] The third, patient engagement-focused hub was established in Dallas, Texas.[10] The Defense Advanced Research Projects Agency (DARPA, formerly ARPA) has been the military's in-house innovator since 1958, a year after the USSR launched Sputnik. DARPA is widely known for creating ARPAnet, the predecessor of the internet, and has been instrumental in advancing hardened electronics, brain-computer interface technology, drones, and stealth technology. Inspired by the success of DARPA, in 2002 the Homeland Security Advanced Research Projects Agency (HSARPA) was created and in 2006 the Intelligence Advanced Research Projects Activity (IARPA) was created. This was followed by the Advanced Research Projects Agency–Energy (ARPA-E) in 2009 and the Advanced Research Projects Agency–Infrastructure (ARPA-I) in 2022. DARPA also inspired the Advanced Research and Invention Agency in the UK, and in 2021 the Biden administration proposed ARPA-C for climate research.[11] The Suzanne Wright Foundation proposed "HARPA" in 2017 to focus on pancreatic cancer and other challenging diseases.[12] A white paper published by former Obama White House staffers Michael Stebbins and Geoffrey Ling through the Day One Project proposed the creation of a new federal agency modeled on DARPA, but focused on health. That proposal was adopted by President Biden's campaign and was the model used for establishing ARPA-H.[13] In June 2021, noted biologists Francis S. Collins (then head of the NIH), Tara Schwetz, Lawrence Tabak, and Eric Lander penned an article in Science supporting the idea.[14] Dr. Collins became an important champion of the idea on Capitol Hill, and the legislation garnered numerous sponsors in the 117th Congress.
In September 2022, Renee Wegrzyn was appointed as the agency's inaugural director.[15][16][17] She was dismissed by the Trump administration in February 2025.[18] A White House white paper identifies a number of potential directions for technological development that could occur under the direction of ARPA-H, including cancer vaccines, pandemic preparedness and prevention technologies, less intrusive wearable blood glucose monitors, and patient-specific T-cell therapies.[19] Additionally, the proposal suggests that ARPA-H focus on platforms to reduce health disparities in maternal morbidity and mortality and improve how medications provided are taken. One of the first grants from the organization was part of its DIGIHEALS initiative, which funds innovative research that aims to protect the United States health care system against hostile online threats. Christian Dameff and Jeff Tully, medical doctors and medical cybersecurity researchers at the University of California San Diego School of Medicine, as well as cybersecurity expert Stefan Savage, were named investigators to the Healthcare Ransomware Resiliency and Response Program, or H-R3P, project.[20][21]
https://en.wikipedia.org/wiki/Advanced_Research_Projects_Agency_for_Health
The Infrastructure Investment and Jobs Act (IIJA), also known as the Bipartisan Infrastructure Law (BIL) (H.R. 3684), is a United States federal statute enacted by the 117th United States Congress and signed into law by President Joe Biden on November 15, 2021. It was introduced in the House as the INVEST in America Act and nicknamed the Bipartisan Infrastructure Bill. The act was initially a $547–715 billion infrastructure package that included provisions related to federal highway aid, transit, highway safety, motor carrier, research, hazardous materials and rail programs of the Department of Transportation.[1][2] After congressional negotiations, it was amended and renamed the Infrastructure Investment and Jobs Act to add funding for broadband access, clean water and electric grid renewal in addition to the transportation and road proposals of the original House bill. This amended version included approximately $1.2 trillion in spending, with $550 billion newly authorized spending on top of what Congress was planning to authorize regularly.[3][4] The amended bill was passed 69–30 by the Senate on August 10, 2021.
On November 5, it was passed 228–206 by the House, and ten days later was signed into law by President Biden.[5] On March 31, 2021,[6] President Joe Biden unveiled his $2.3 trillion American Jobs Plan (which, when combined with the American Families Plan, amounted to $4 trillion in infrastructure spending),[7] pitched by him as "a transformative effort to overhaul the nation's economy".[8] The detailed plan aimed to create millions of jobs, bolster labor unions, expand labor protections, and address climate change.[9][10] In mid-April 2021, Republican lawmakers offered a $568 billion counterproposal to the American Jobs Plan.[11] On May 9, Senate Minority Leader Mitch McConnell said it should cost no more than $800 billion.[12] On May 21, the administration reduced the price tag to $1.7 trillion, which was quickly rejected by Republicans.[13] A day later, a bipartisan group within the Senate Environment and Public Works Committee announced that they had reached a deal for $304 billion in U.S. highway funding.[14] This was approved unanimously by the committee on May 26.[15] On June 4, House Transportation and Infrastructure Committee Chair Peter DeFazio announced a $547 billion plan, called the INVEST in America Act, which would address parts of the American Jobs Plan.[16][a] On July 1, the House passed an amended $715 billion infrastructure bill focused on land transportation and water.[17] On May 27, Republican senator Shelley Moore Capito presented a $928 billion plan,[18][b][c] and on June 4, increased it by about $50 billion; this was quickly rejected by the Biden administration.[19] On June 8, the administration shifted its focus to a bipartisan group of 20 senators, which had been working on a package tentatively priced around $900 billion.[20][d] On June 10, a bipartisan group of 10 senators reached a deal costing $974 billion over five years, or about $1.2 trillion if stretched over eight years.[22] On June 16, the plan was endorsed by a bipartisan group of 21 senators.[23] On June 24,
the bipartisan group met with the president and reached a compromise deal costing $1.2 trillion over eight years, which focuses on physical infrastructure (notably roads, bridges, railways, water, sewage, broadband, electric vehicles). This was planned to be paid for through reinforced Internal Revenue Service (IRS) collection, unspent COVID-19 relief funds, and other sources.[24] By July 2021, the IRS portion of the funding had reportedly been scrapped.[25] Biden stipulated that a separate "human infrastructure" bill (notably child care, home care, and climate change) – later known as the Build Back Better Act – must also pass, whether through bipartisanship or reconciliation,[24] but later walked back this position.[26] House Speaker Nancy Pelosi similarly stated that the House would not vote on the physical infrastructure bill until the larger bill passed in the Senate,[27] despite the fact that reconciliation overrides much of the obstructive power of the filibuster.[27][28] White House officials stated on July 7 that legislative text was nearing completion.[29] On July 14, the Senate Energy and Natural Resources Committee advanced an energy bill expected to be included in the bipartisan package.[30] On July 21, Senate Majority Leader Charles Schumer put forward a "shell bill" for a vote to kick off debate in the Senate, intending to add the bipartisan text via an amendment.[31][e] On July 25, Republican senator Rob Portman stated that an agreement was "about 90%" complete, with mass transit being one remaining point of contention.[33] On July 30, Portman stated that this had been resolved.[34] On July 28, Senator Kyrsten Sinema stated that she did not support a reconciliation bill costing $3.5 trillion, breaking the stalemate and allowing the bipartisan bill to move forward.[35] That day, the Senate voted 67–32 to advance the bill,[36] and on July 30, voted 66–28 to proceed to its consideration.[37] The legislation text was completed and substituted into the bill on August 1.[38] On August 5,
Schumer moved to truncate debate on the legislation, setting up a procedural vote on August 7,[39] which passed 67–27.[40] Fifteen or more amendments were expected to receive votes through the weekend.[40] On August 10, the bill was passed by the Senate 69–30.[41] It sets aside $550 billion in new spending.[42] A procedural vote on a House rule concerning passing both bills passed along party lines on August 24.[43] In early August, nine moderate Democrats called for an immediate House vote on the bill, citing a desire not to lose the momentum from the Senate passage of the bill. They committed to voting against taking up the reconciliation resolution until there was a vote on the bipartisan infrastructure bill.[44][45] While both Biden and House Speaker Nancy Pelosi had reversed earlier positions to support passing the bipartisan bill separately,[26][46] progressives including Congressional Progressive Caucus chairwoman Pramila Jayapal and Senator Bernie Sanders maintained that it be utilized as leverage to pass the most expensive reconciliation bill possible.[47][48][49] The lack of a deal caused a late September House vote to be postponed.[49] On October 2, Pelosi set a new deadline of October 31.[50] By October 28, Jayapal and other progressive leaders indicated that they were willing to vote on the bill separately,[51] but Sanders and others opposed this.[52][53] On October 31, a majority of progressives signaled that they would support both bills.[54] Votes on both bills were considered on November 5, but the hesitation of several moderates to pass the reconciliation bill before it could be scored by the Congressional Budget Office made passing the bipartisan bill unlikely.[55] Negotiations between centrist and progressive Democrats concluded with the centrists committing to passing the Build Back Better Act.[56] The bill ultimately went to a vote, as did a rule to vote on the larger bill once it was scored, passing 228–206; 13 Republicans joined all but six Democrats (members of "the
Squad") in supporting the legislation.[57][58][59] The six Democrats who voted 'No' stated that their opposition was because the legislation had been decoupled from the social-safety-net provisions of the Build Back Better bill.[60][61] Biden signed the bill into law at a signing ceremony on November 15.[62] The following is the bill summary authored by the Congressional Research Service (CRS) for the INVEST in America Act, the original version which passed the House on July 1, 2021: The specific amounts in surface transportation spending were $343 billion for roads, highways, bridges and motor safety, $109 billion for transit, and $95 billion for rail.[16] Provisions of the bill incentivized prioritizing maintenance and repair spending over spending on new infrastructure, holistically planning for all modes of transport when considering how to connect job centers to housing (including collecting data on reductions in vehicle miles traveled through transit-oriented development), and lowering speed limits to increase road safety and encourage building complete streets.
The Senate version, and the final bill, de-emphasized these incentives.[2][64][65][66][67][68] The final version restores the Superfund excise tax on certain chemicals,[69] which expired in 1995.[70] According to NPR, the version which passed the Senate on July 28 was set to include: The law would also make the Minority Business Development Agency a permanent agency.[72] It authorizes the DOT to create an organization called the Advanced Research Projects Agency–Infrastructure (ARPA–I), with a broad remit over transportation research akin to DARPA, HSARPA, IARPA, ARPA-E, and ARPA-H,[73] with the first appropriations of $3.22 million being made in the Consolidated Appropriations Act, 2023.[74][75][76] Lastly, it broadens the powers of the Federal Permitting Improvement Steering Council to provide faster conflict resolution among agencies, speeding up infrastructure design approvals.[77] An October 2021 report written by the REPEAT Project, a partnership between the Evolved Energy Research firm and Princeton University's ZERO Lab, said the Infrastructure Investment and Jobs Act alone will make only a small reduction in emissions, but as they say:[78] We lack modeling capabilities to reflect the net effect of surface transportation investments in highways (which tend to increase on-road vehicle and freight miles traveled) and rail and public transit (which tend to reduce on-road vehicle and freight miles traveled). These significant programs are therefore not modeled in this analysis, an important limitation of our assessment of the Infrastructure Investment and Jobs Act. The Georgetown Climate Center tried to estimate how the $599 billion investment for surface transportation in the law can impact emissions from transportation. It created two scenarios: "high emissions" and "low emissions". In the first scenario, more of the money dedicated to highways goes to building new highways, while in the second, more goes to repairing existing highways.
The characteristics of the other spending areas do not differ much between the scenarios. The first scenario increases cumulative emissions over the years 2022–2040 by more than 200 million tons, while the second decreases them by around 250 million tons.[79] In August 2022, the Boston Consulting Group analyzed the Act and found $41 billion of it would be spent on energy projects germane to climate action, $18 billion on similarly germane transportation projects, $18 billion on "clean tech" intended to cut hard-to-abate emissions, $0 on manufacturing, and $34 billion on other climate action provisions.[80] The law includes the largest federal investment in public transit in history.[81] The law includes spending figures of $105 billion in public transport. It also spends $110 billion on fixing roads and bridges and includes measures for climate change mitigation and improving access for cyclists and pedestrians.[82] Increasing use of public transport and related transit-oriented development can reduce transportation emissions in human settlements by 78% and overall US emissions by 15%.[83] The law includes spending:[84] New or improved, affordable transportation options to increase safe mobility and connectivity for all, including for people with disabilities, through lower-carbon travel like walking, cycling, rolling, and transit that reduce greenhouse gas emissions and promote active travel.[91] $73 billion will be spent on overhauling the energy policy of the United States. The Boston Consulting Group projects $41 billion of the Act will be germane to climate action in energy.[80] $11 billion of the $73 billion amount will be invested in the electrical grid's adjustment to renewable energy, with some of the money going to new loans for electric power transmission lines and required studies for future transmission needs.[92][93][94] $6 billion of that $73 billion will go to domestic nuclear power.
Also of that $73 billion, the IIJA invests $45 billion in innovation and industrial policy for key emerging technologies in energy; $430 million[95]–$21 billion in new demonstration projects at the DOE; and nearly $24 billion in onshoring, supply chain resilience, and bolstering U.S.-held competitive advantages in energy; the latter amount will be divided into an $8.6 billion investment in carbon capture and storage, $3 billion in battery material reprocessing, $3 billion in battery recycling, $1 billion in rare-earth minerals stockpiling, and $8 billion in new research hubs for green hydrogen.[85] The DOE has imposed grant requirements on $7 billion of the IIJA's battery and transportation spending, which are meant to promote community benefits agreements, social justice, and formation of trade unions.[96] It created the $225 million Resilient and Efficient Codes Implementation program for cities, tribes and counties to revise building codes for electrical and heating work.[97] Finally, the law gives $4.7 billion to cap orphan wells abandoned by oil and gas companies.[86][87][88] The law invests a total of $65 billion in advancing the U.S. quest for broadband universal service. Of this $65 billion, the law invests $42.45 billion in a new infrastructure grant program by the National Telecommunications and Information Administration called the Broadband Equity, Access, and Deployment Program, with highest priority going to communities with Internet speeds below 25 Mbps downstream and 3 Mbps upstream.
$2 billion will go to the NTIA's Tribal Broadband Connectivity Program, $1 billion to a new middle mile infrastructure program,[98] $1.44 billion in formula grants to state and territorial digital equity plan implementation, $60 million in formula grants to new digital equity plan development, and $1.25 billion in discretionary grants to "specific types of political subdivisions to implement digital equity projects".[99][100] The law gives the USDA $5.5 billion of the $65 billion total to deliver broadband to rural communities smaller than 20,000 people, $5 million of which is obligated to utility cooperatives.[101][102] The law invests $14.2 billion of the total in the Federal Communications Commission's Affordable Connectivity Program, the successor to the American Rescue Plan's broadband subsidies. It gives a $30 monthly discount on internet services to qualifying low-income families ($75 on tribal lands), and provides a $100 discount on tablets, laptops and desktops for them.[103][104] The program ran out of funds on April 30, 2024.[105] The law also requires the FCC to return consumer broadband labels it developed in 2016 to statute, to revise its public comment process, and to issue rules and model policies for combating digital deployment discrimination with the United States Attorney General's cooperation, and requires the Government Accountability Office to deliver a report on updating broadband thresholds by November 2022.[106] To support safe drinking water programs, the law provides: For surface water programs, such as watershed management and pollution control, the law provides: The Act provides $8 billion for helping Western states deal with the Southwestern North American megadrought. Spending for many related projects is included under the category "Western Water Infrastructure".[110][111] Prior to the enactment of the infrastructure law in 2021, no dedicated federal bridge funding had existed since fiscal year 2013.
The law created two new programs specifically to fund bridge projects:[112] With $27.5 billion over five years, the BFP distributes funds to every state, the District of Columbia, and Puerto Rico based on a formula that accounts for each state's cost to replace or rehabilitate its poor or fair condition bridges. Each state is guaranteed a minimum of $45 million per year from this program. At least 15% of each state's funds must be spent on off-system bridges (i.e., public bridges that are not on federal-aid highways), and 3% is set aside each year for bridges on tribal lands. Off-system and tribal bridge projects may be funded with a 100% federal share (as opposed to the standard 80% federal share).[113] With $12.5 billion over five years, the BIP is a competitive grant program to replace, rehabilitate, preserve, or make resiliency improvements to bridges. Half of the funding is reserved for large bridge projects, which are defined as projects that cost over $100 million. Large projects are funded at a maximum 50% federal share, while other projects are funded at a maximum 80% federal share.[114] The infrastructure law is the largest investment in passenger rail since the 1971 creation of Amtrak (which under the law will receive $22 billion in advance appropriations and $19 billion in fully authorized funds).[115][116] It directly appropriated $66 billion for rail over a five-year period (including the Amtrak appropriations), of which at least $18 billion is designated for expanding passenger rail service to new corridors, and it authorized an additional $36 billion.[116] Most of this funding for new passenger rail lines is implemented through the Federal-State Partnership for Intercity Passenger Rail program, which will receive $36 billion in advance appropriations and $7.5 billion in fully authorized funds.[116] The Consolidated Rail Infrastructure and Safety Improvements program will receive $5 billion in advance appropriations and $5 billion in fully authorized funds,
while programs for grade separation replacing level crossings will receive $3 billion in advance appropriations and $2.5 billion in fully authorized funds, and the Restoration and Enhancement Grant program intended to revive discontinued passenger rail services will receive $250 million in advance appropriations and $250 million in fully authorized funds.[116] Per the law's requirements, at least $12 billion is available and $3.4–4.1 billion authorized for expanding service outside of the Northeast Corridor, and $24 billion is available and $3.4–4.1 billion authorized to partially rebuild the Corridor.[117] To help plan and guide the expansion of passenger rail service beyond the Northeast Corridor, the infrastructure law also created a $1.8 billion Corridor Identification and Development Program.[118] The law also expands eligibility for a potential $23 billion in transit funding to these corridors and changes the allocation methods for state government-supported passenger rail shorter than 750 miles, to encourage states to implement more such service. The law established and authorized $1.75 billion over five years for a new All Stations Accessibility Program (ASAP).[119] This program is designed to improve the accessibility of rail system stations that were built before the Americans with Disabilities Act of 1990 (ADA).
At the time of the infrastructure law's passage, over 900 transit stations were not fully ADA-compliant.[120] The law includes $1 billion over five years for Reconnecting Communities planning and construction grants intended to build marginalized community-recommended projects removing or capping highways and railroads, the first $185 million of which were awarded to 45 projects on February 28, 2023.[121] The program was later combined with the Neighborhood Equity and Access program from the Inflation Reduction Act for efficiency reasons, before the next 132 projects were given $3.3 billion in awards on March 13, 2024.[122] The Act creates the National Electric Vehicle Infrastructure (NEVI) program within the Department of Energy. It provides funding of up to $4.155 billion[123] to state governments for up to 80 percent of eligible project costs, to add substantial open-access electric vehicle (EV) charging infrastructure along major highway corridors.[124][125] The Infrastructure Investment and Jobs Act requires the National Highway Traffic Safety Administration (NHTSA) to develop a safety mechanism to prevent drunk driving, which causes about 10,000 deaths each year in the United States as of 2021. The mechanism will be rolled out in phases for retroactive fitting,[126][127] and will become mandatory for all new vehicles in 2027.[128] The technology, which is being developed by NHTSA in cooperation with the Automotive Coalition for Traffic Safety and Swedish automobile safety company Autoliv, consists of a breath-based and a touch-based sensor that stops the car if the driver is above the legal blood alcohol content, and will be open-sourced to automobile manufacturers.[129] Under the law, the United States Department of Transportation (DOT) will be required to develop regulations for a system that can detect distracted, fatigued, or impaired drivers.[126] The NHTSA has recommended implementing a camera-based warning system for the former, similar to a technology mandated by the European
Union in July 2022.[129] The law also requires the NHTSA's New Car Assessment Program to test collision avoidance systems in preparation for new federal regulations; new DOT reporting requirements for statistical data on crashes involving motorized scooters and electric bicycles; new federal regulations on headlamps; research directives on technology to protect pedestrians and cyclists, advanced driver-assistance systems, federal hood and bumper regulations, smart city infrastructure, and self-driving cars; and a new Federal Highway Administration (FHWA) office specializing in cybersecurity.[126] The infrastructure law created the Wildlife Crossings Pilot Program with $350 million in funding over five years. This is a competitive grant program that funds planning and construction projects that prevent wildlife-vehicle collisions and improve the connectivity of animal habitats.[130] The law also allocated $1 billion to create the National Culvert Removal, Replacement, and Restoration Grant program to improve the passage of anadromous fish such as salmon.[131] Biden's infrastructure advisor and the staffer in charge of implementing the law has been identified as Mitch Landrieu. Biden's National Security Advisor Jake Sullivan has been identified as the staffer in charge of ensuring the law does not conflict with American foreign policy interests.[132] To support the implementation of the Act, Biden issued Executive Order 14052, which establishes a task force comprising most of his Cabinet. Biden appointed Landrieu and then-United States National Economic Council chief Brian Deese as the task force co-chairs.[133][134][135] In May 2022, the Biden administration published a manual on the use of the law, aimed mainly at local authorities. The manual briefly describes the over 350 programs included in the law. Each description includes the aim of the program, its funding and possible recipients, its period of availability, and more.
The programs are grouped into four categories: "Transportation", "Climate, Energy and the Environment", "Broadband", and "Other Programs".[136] By the law's second anniversary in November 2023, around $400 billion from the law, about a third of all IIJA funding, had been allocated to more than 40,000 projects related to infrastructure, transport, and sustainability. By May 2024, the law's halfway mark, the numbers had increased to $454 billion (38 percent of the Act's funds) for more than 56,000 projects,[137] and by the third anniversary in November 2024, they had increased to $568 billion (47 percent) for 68,000 projects, leaving 53 percent of IIJA funds unallocated but showing the administration had been accelerating funding approvals.[138] Public attention has remained relatively low, due in part to slow implementation of projects.[139][140][141] The White House offers a "Map of Progress" which tracks all spending that resulted from the act.[142] According to the New Democrat-linked think tank Center for American Progress, the IIJA, the CHIPS and Science Act, and the Inflation Reduction Act have together catalyzed over 35,000 public and private investments.[143] Economists Noah Smith and Joseph Politano credited the three acts together for spurring booms in factory construction and utility jobs, as well as limiting geographic concentrations of key industries to ensure more dispersed job creation nationwide, though they raised issues of whether the three would serve to limit project delays and significantly increase labor productivity in the long term.[144][145] The Biden administration itself claimed that as of January 10, 2025, the IIJA, CaSA, and IRA together catalyzed $1 trillion in private investment (including $449 billion in electronics and semiconductors, $184 billion in electric vehicles and batteries, $215 billion in clean power, $93 billion in clean energy tech manufacturing and infrastructure, and $51 billion in heavy industry) and over $756.2 billion in public
infrastructure spending (including $99 billion in energy aside from tax credits in the IRA).[146] In September 2023, White House data revealed that 60 percent of the Act's energy and transmission funding awarded up to that point, totaling $12.31 billion, had gone to states that voted majority Republican in the 2020 election cycle. Of the Act's top ten recipients, seven states had voted majority Republican, with Wyoming ($1.95 billion) and Texas ($1.71 billion) in the lead. The largest single energy project to receive Act funds was a Generation IV reactor in Kemmerer, Wyoming, by the nuclear fission startup TerraPower.[147] In November 2022, the Biden administration announced it would furnish $550 million for the Energy Efficiency and Conservation Block Grant program for clean energy generators for low-income and minority communities, the first such appropriation since the Recovery Act in 2009.[148][149] The administration announced the competitive portion would award $8.8 million to 12 communities on October 12, 2023, with the next award applications due in April (later changed to October) 2024.[150][151] By June 28, 2024, the seventh tranche of funding had been awarded from the EECBG program, totaling about $150 million for 175 communities, with the latest round awarding $18.5 million to four states and 20 communities.[152] In April 2023, the Biden administration announced it would award $450 million from the Act to projects that built solar farms on abandoned coal mines.[153][154] Further support for coal communities followed.
In November 2023 the IIJA's Office of Manufacturing and Energy Supply Chains announced that $275 million in grants would go to seven projects in coal communities, creating 1,500 jobs and leveraging $600 million in private investment.[155] The next October it announced $428 million in grants for 14 projects in coal communities, creating 1,900 jobs and leveraging $500 million in private investment.[156] On July 12, 2023, the Biden administration announced it would award $90 million from the Act's Resilient and Efficient Codes Implementation program[97] to 27 cities and counties to update building energy codes.[157] On March 4, 2024, the DOE announced that $90 million more would be awarded from the program later that October.[158] On October 24, 2023, the administration announced that the first $3.46 billion in Grid Resilience and Innovation Partnerships grants, from the Act's $11 billion grid rebuilding authorization, would go to 58 projects in 44 states. A majority are categorized as smart grid projects and eight are categorized as pursuing grid innovation. The investment is the largest in the American grid since the Recovery Act 14 years earlier.
According to Energy Secretary Jennifer Granholm, the projects could enable 35 gigawatts of renewable energy to come online by 2030, $8 billion in investments to be catalyzed, and 400 microgrids to be built.[159][160] On August 6, 2024, the DOE announced the recipients of the next $2.2 billion in GRIP grants: eight grid innovation projects across 18 states, adding a total of 13 gigawatts of capacity to the grid and catalyzing $10 billion in investments.[161] On October 18, 2024, the DOE announced that nearly $2 billion more in GRIP grants would be awarded to 38 smaller projects in 42 states and the District of Columbia, altogether adding 7.5 gigawatts of capacity to the grid and catalyzing nearly $4.2 billion in investment.[162] On October 30, 2023, the DOE announced the results of a mandated triennial study that, for the first time in its history, included anticipation of future grid transmission needs; the Act had explicitly required this inclusion. The study found fewer infrastructure investments since 2015 and consistently high prices in the Rust Belt and California since 2018, and projected that a 20 to 128 percent increase in transmission would be needed within regions, while interregional transmission would need to increase by 25 to 412 percent. The DOE found the most potential in better connecting Texas to the Southwest region, the Mississippi Delta and Midwest regions to the Great Plains region, and New York to New England.[93][163] The DOE also announced the first three recipients of a new $2.5 billion loan program called the Transmission Facilitation Program, created to provide funding to help build up the interstate power grid.
They are a line between Quebec, New Hampshire, and Vermont; a line between Utah and Nevada; and a line between Arizona and New Mexico.[94][92] The following April 25, the TFP announced the selection of an extension of the One Nevada Transmission Line northward to Idaho.[164] The next October, the DOE announced that four projects, in Maine, Oklahoma, New Mexico, and between Texas and Mississippi, were being awarded a total of $1.5 billion under the TFP; the DOE also released its first ever National Transmission Planning Study to follow up on the Needs Study, forecasting that national transmission capacity would need to increase to 2.4 to 3.5 times the 2020 level by 2050 to keep costs low and facilitate the energy transition, with estimated cost savings ranging from $270 billion to $490 billion.[165] On November 16, 2023, the Biden administration announced the first recipients of $40.8 million in grants from a workforce training program the Act created, which will provide skills for industrial technology, the building trades, and energy auditing.[166][167] In December 2023 the DOE fulfilled the IIJA's requirement that the designation process for National Interest Electric Transmission Corridors be revised.[168] On January 17, 2024, more than $104 million was allocated to 31 projects which are expected to increase energy conservation and clean energy use in federal facilities and save $29 million in their first years.
The projects advance, among other technologies, heat recovery ventilation, heat pumps, building insulation, and solar thermal panels.[169] On February 13, the Biden administration announced that Chevron Corporation and Fervo Energy would receive $74 million under the law to begin demonstrating the efficacy of enhanced geothermal systems, at a site near The Geysers, California, for Chevron, and a site near Milford, Utah, for Fervo.[170] On February 27, the Department of Energy announced that under the Energy Improvements in Rural or Remote Areas program, 17 projects in rural areas across 20 states and 30 tribal communities had been approved to receive $366 million in grants to decarbonize and densify their grids. A majority of approved projects involved installation of solar panels, grid battery storage, and microgrids.[171] On March 21, the Biden administration announced that five projects in Arizona, Nevada, West Virginia, Kentucky, and Pennsylvania would receive $475 million from the Act to build solar and geothermal power plants and energy storage on current and former mine lands.[172] On March 25, 2024, the Biden administration announced the first 33 grant recipients of the Department of Energy's $6 billion Industrial Demonstrations Program to reduce embedded emissions in factories and materials processing, of which the Infrastructure Investment and Jobs Act funds $489 million. Cement and concrete industry projects received $1.5 billion in total, steelmaking projects received $1.5 billion, and chemical engineering and refinery projects $1.2 billion.
The Biden administration expects these projects to drive 1.4 million tons in carbon emissions cuts;[173] however, most of the grants had yet to be finalized by November 11.[174] On April 30, the Department of Energy announced 19 more recipients, across 12 states and 13 tribal communities, of $78 million in award grants from the Act's Energy Improvements in Remote or Rural Areas program, with a majority of projects involving solar power.[175] On May 13, 2024, the Federal Energy Regulatory Commission published Order No. 1977, clarifying a provision in the Act by stating that the Commission has 'backstop siting authority' in case a state agency neglects to hand out a construction permit for a new transmission project.[176] On September 5, 2024, the Energy Department announced the awarding of over $430 million in incentives to 293 existing hydroelectricity projects, under the Act's Section 40333.[177][178] On September 20, the DOE announced it would award $3 billion to, and leverage $13 billion in investments in, 25 battery manufacturing and supply chain projects, more than half of which had pledged Project Labor Agreements.
The projects were expected to create 12,000 new jobs across 14 states.[179] In December 2024, the DOE announced that the first three new NIETCs designated under the IIJA's revised process would move closer toward full eligibility for TFP funds: a corridor on the bed of Lake Erie between Ontario and Pennsylvania, a connector between Colorado, New Mexico, and Oklahoma, and a connector between the Dakotas.[180] Notably, the sponsor of the Kansas-Indiana Grain Belt Express requested that it be taken off the eligibility list because it had likely secured enough funding on its own.[181] The Biden administration awarded $7 billion of the $8 billion appropriation to seven hydrogen research hubs, based in California, eastern Washington, southeastern Pennsylvania, southeastern Texas, Illinois, Minnesota, and West Virginia and affecting projects there and in eight more states, on October 13, 2023. The remaining $1 billion will be used for demand-side economic policies to drive growth in hydrogen use.[182][183] Several criticisms of the hubs emerged. Jeff St. John, editor in chief of Canary Media, noted that while the Act mandates that the DOE create a clean hydrogen definitional standard (which as of October 2023 the DOE had not published), and while the DOE selected applicants who pledged community benefits agreements, the Act does not prescribe metrics or guidelines for measuring emissions from these hubs.[184] Researcher Hannah Story Brown of the watchdog group Revolving Door Project noted that the majority of hub projects announced are powered by fossil fuels, not renewable energy.[185] Staffers for California Governor Gavin Newsom requested that the Treasury Department exempt the state's hub from emissions restrictions, citing poor alignment with the state's plans for 100% renewable energy.[186] On the first anniversary of the October 2023 announcement, St.
John reported that the Californian, Washingtonian, and West Virginian hub collaboratives were the farthest along in working toward finalizing their funding, and that the DOE's Office of Clean Energy Demonstrations was optimistic, but also that all projects were lagging behind in transparency and community outreach, with several projects seeing corporate partners withdraw.[187] Jael Holzman of the outlet Heatmap News reported that, soon after, experts in energy markets pointed at a lack of coordination between the Hub program and the IRA's hydrogen tax credits, price increases for electrolyzers, and the historically low cost of natural gas as additional reasons for the withdrawal of investment in Hub projects.[188] Later in 2024, the DOE selected the hubs based in California, Washington, Illinois, Texas, and West Virginia for near-final deals that together would cost a total of $5.3 billion. The final two hubs, based in Minnesota and Pennsylvania, were not far behind in negotiations.[189] The Act appropriates $3.5 billion to a new Regional Direct Air Capture Hubs program as part of its $8.6 billion carbon capture and storage investment. In August 2023, the DOE selected two projects (leaving two more to be selected), together worth $1.2 billion; the projects together will remove 2 million metric tons of carbon dioxide and create 4,800 jobs.[190][191] In September 2024, the DOE announced it intended to fund up to $1.8 billion more in direct air capture projects, with the full solicitation released on December 17.[192][193] By April 2024, the Affordable Connectivity Program had seen 23 million households enroll in it.[105] The program ended in June 2024.
In May 2024, the Biden administration announced that $3 billion in funding from the law had been allotted to replace lead water pipes.[194] The bill contains $27 billion in funding for specific, concrete programs within the Federal Highway Administration that are already implemented to reduce greenhouse gas emissions from the transportation sector, all of which was allotted in November 2023. For example, $7.2 billion is allocated to the "Transportation Alternatives Set-Aside Program" (creating more possibilities for biking and walking), $6.4 billion to the "Carbon Reduction Program" (reducing emissions from highways), $69 million to the "Transit-Oriented Development Program" (enhancing transit-oriented development and improving land use), and more.[195] However, because states have wide discretion over use of funds from other highway programs under the Act, which leads states with fast population growth to invest more in highway expansion, the Act has been projected by Transportation for America to increase carbon emissions by 77 million metric tonnes by 2040 compared to a no-Act baseline.[196] On December 4, the Department of Energy released a proposed rule clarifying the definition of "foreign entities of concern" under the Act's car battery materials provisions, in line with the Inflation Reduction Act's Section 30D.[197] On December 8, the Biden administration announced it would award $8.2 billion from the Act's Federal-State Partnership for Intercity Passenger Rail Program to ten construction projects, including Brightline West, the Southeast High Speed Rail Corridor, the Keystone Corridor, California High-Speed Rail, the Downeaster and Empire Builder services, a partial rebuilding of Chicago Union Station, and a bridge replacement near Willow on the Alaska Railroad.
It also announced the first results of the Act's Corridor ID Program, with $34.5 million being distributed to 15 existing rail upgrades, 47 extensions of rail corridors, and 7 new high-speed rail studies.[198][199] The bill included $7.5 billion for electric vehicle charging. As of December 2024, 37 charging stations with a total of 226 spots for charging vehicles had been built.[200] On April 2, 2024, an award announcement was made for the transit-oriented development program, which was expanded under the Act.[201] In 2023 an agreement between seven states was reached, aiming to preserve the Colorado River water system from collapse due to poor management and climate change. The United States is heavily dependent on the river for power generation, drinking water, agriculture, wildlands restoration, and native cultural practices. Some states will reduce water use, receiving compensation for it (totaling $1.2 billion) from the federal government. Many other projects for preserving the river, such as water recycling and rainwater harvesting, are being advanced. The funding comes from the Infrastructure Investment and Jobs Act and the Inflation Reduction Act.[202][203] In February 2024, $157 million was allocated to 206 projects linked to ecosystem restoration. The projects are spread across the territory of the United States and are advanced in cooperation with states, tribes, nonprofits, and territories. More than half of them benefit underserved communities. The projects include cleaning up pollution, restoring Central U.S. grasslands including bison populations, protecting birds in Hawaii from extinction, stopping invasive species, restoring salmon populations in Alaska, restoring sagebrush steppes, and more.
On this occasion United States Secretary of the Interior Deb Haaland remarked, "Nature is our best ally in the fight against climate change."[204] The bill provides around $7 billion to the Federal Emergency Management Agency for helping communities adapt to different climate-related disasters such as hurricanes, droughts, and heat waves. In August 2023, $3 billion was allocated to different related projects, including 124 projects related to resilient infrastructure and communities (located in "38 states, one tribe and the District of Columbia") and 149 projects related to protection from flooding (located in "28 states and the District of Columbia"). Of the infrastructure projects, 64 use nature-based solutions. Some of the most vulnerable communities will receive help for free.[205] In November 2023, the Biden administration announced that $300 million from FEMA's new Swift Current Initiative created by the Act would go to helping communities impacted by floods recover and grow their resiliency.[206][207] It also announced that it would award "$50 million in project awards to improve the reliability of water resources and support ecosystem health in Western states, along with an additional $50 million funding opportunity for water conservation projects and hydropower upgrades."[206] In March 2024, $120 million was delivered to help indigenous peoples in the U.S. adapt to climate change. Of this amount, $26 million was allocated from the Infrastructure Investment and Jobs Act. The efforts will include planning, ecosystem management and restoration, planned relocation, and promotion and use of indigenous knowledge.[208][209] In January 2025, the incoming Trump administration froze selected IIJA grants.
However, that April, federal judge Mary McElroy ruled, in a case brought by Rhode Island conservation groups, that the IIJA grants had to be unfrozen, citing constitutionality concerns.[210] Around $1.1 billion was allocated for restoration of the Everglades ecosystems.[211] In March 2024, Marco Rubio, supported by a bipartisan group of lawmakers, demanded $725 million more, as rising water levels in Lake Okeechobee created additional problems.[212] In October 2023, $450 million (including $275 million from the bill) was delivered to clean the Milwaukee River estuary of polychlorinated biphenyls, heavy metals, and oil products. This pollution had negative effects on surrounding communities for a long time. This is the most funding ever distributed by a Great Lakes cleanup program.[213] Republican senators balked at Biden's tandem plan to pass both a bipartisan plan and a separate Democratic-supported reconciliation bill.[214] McConnell criticized Biden for "caving" to his own party by issuing an "ultimatum" that he would not sign the bipartisan bill without a separate reconciliation package.[215] After Biden walked back his comments, Republican senators restated their confidence in the bipartisan bill.[26] A Yahoo! News/YouGov poll conducted in late June found that 60% of Republican voters favored the plan.[216] On June 20, 2021, Senator Bernie Sanders stated that he would not support paying for the bill via a proposed gas tax or a surcharge on electric vehicles.[217] On June 28, 2021, the Sunrise Movement and several progressive representatives staged a protest at the White House in criticism of the size and scope of Biden's Civilian Climate Corps.
Several protesters were arrested for blocking White House entrances.[218] On July 6, the 58-member bipartisan House Problem Solvers Caucus stated their support for the bipartisan bill and called for an expeditious and independent House vote.[219] On July 21, a group of 65 former governors and mayors endorsed the plan.[220] Ahead of a procedural vote on August 7, former president Donald Trump attacked the bill and said he would support Republican primary challengers of senators who vote for it.[40] He reiterated his criticisms following the bill's passage by Congress.[221] Following the bill's passage by Congress in November, Trump criticized it as containing "only 11% for real Infrastructure", calling it "the Elect Democrats in 2022/24 Act", and attacked Republicans who had supported it, saying in particular that McConnell had lent "lifelines to those who are destroying" the country.[221] Various House Republicans also criticized the 13 Republican representatives who voted for the bill.[222] Lauren Boebert described them as "RINOS" (Republican in Name Only).[222] Mary Miller called them "spineless" and said they helped enact a "socialist takeover".[222] Marjorie Taylor Greene called them "traitors" and "American job & energy killers", who "are China-First and America-Last", because they "agree with Globalist Joe [Biden] that America must depend on China to drive" electric vehicles.[223] Gary Palmer was criticized for touting funding for the Birmingham Northern Beltline that he added to the bill, while neglecting to mention that he voted against the final bill.[224] Paul Gosar was also criticized for taking credit for the bill's funding for Kingman Airport despite voting against it.[225] Several Republican governors who condemned the bill, including Kristi Noem of South Dakota and Greg Gianforte of Montana, accepted the funding and directed it to various programs.[226] On June 22, the U.S.
Chamber of Commerce, Business Roundtable, and No Labels made a joint statement urging the president to consider a bipartisan bill.[227] The former two groups have lobbied for the plan not to raise corporate taxes, and to instead impose user fees and borrow from other federal funds.[227] According to an early August Harvard CAPS-Harris Poll survey, about 72% of voters support the bill.[228] On September 24, leaders from the U.S. Conference of Mayors, the National League of Cities, the National Urban League, and other Black American advocacy groups signaled their support for the bill.[72] On September 25, Peter J. Wallison authored an opinion piece for The Hill in which he argued that Republicans should try to pass the bipartisan bill to prevent it from being used as further leverage to pass the reconciliation bill.[229] Subsequently, Republican House leaders formally opposed the bipartisan bill.[47] "Historians, economists and engineers interviewed by The Associated Press welcomed Biden's efforts. But they stressed that $1 trillion was not nearly enough to overcome the government's failure for decades to maintain and upgrade the country's infrastructure."[230] The think tank Transportation for America praised the House version of the bill,[64] but heavily criticized the Senate version for its shortcomings on safety, climate resilience, long-term transit and rail funding, transit-oriented development, and maintenance spending, though it later noted that the final version that became law made small steps to address them.[66][65][67][68] The nuclear industry favored the legislation as it signaled continued federal government support.[231] Polling from Third Way and Impact Research released in July 2022 showed that only 24% of voters were aware the bill was signed into law, despite House Democrats holding over 1,000 events to promote it.[232] Reception to the drunk driver detection and distraction detection requirements has been mixed. Mothers Against Drunk Driving praised the
requirement as "the beginning of the end of drunk driving".[233] In contrast, the American Civil Liberties Union has expressed concern that the technology developed could pose a severe privacy risk to drivers if it collects or stores unnecessary data.[234] Writing for Vice, Aaron Gordon also argued that the technology is likely to have an unacceptably high false-positive rate, since existing ignition interlock devices that are sometimes installed after drunk driving convictions are prone to catastrophic failures.[235] In October 2023, the Natural Resources Defense Council criticized the IIJA's hydrogen hubs program for its lack of transparency, emphasizing the need for detailed technical reports, public hearings to thwart local NIMBYism and skepticism of hydrogen, and incorporation of environmental justice advocates into project leadership.[236]
Quantum programming is the process of designing or assembling sequences of instructions, called quantum circuits, using gates, switches, and operators to manipulate a quantum system for a desired outcome or result of a given experiment. Quantum circuit algorithms can be implemented on integrated circuits, conducted with instrumentation, or written in a programming language for use with a quantum computer or a quantum processor. With quantum processor based systems, quantum programming languages help express quantum algorithms using high-level constructs.[1] The field is deeply rooted in the open-source philosophy, and as a result most of the quantum software discussed in this article is freely available as open-source software.[2] Quantum computers, such as those based on the KLM protocol, a linear optical quantum computing (LOQC) model, use quantum algorithms (circuits) implemented with electronics, integrated circuits, instrumentation, sensors, and/or other physical means. Other circuits designed for experimentation related to quantum systems can be instrumentation and sensor based. Quantum instruction sets are used to turn higher-level algorithms into physical instructions that can be executed on quantum processors. Sometimes these instructions are specific to a given hardware platform, e.g. ion traps or superconducting qubits. Blackbird[3][4] is a quantum instruction set and intermediate representation used by Xanadu Quantum Technologies and Strawberry Fields. It is designed to represent continuous-variable quantum programs that can run on photonic quantum hardware. cQASM,[5] also known as common QASM, is a hardware-agnostic quantum assembly language which guarantees interoperability between all the quantum compilation and simulation tools. It was introduced by the QCA Lab at TU Delft. OpenQASM[6] is the intermediate representation introduced by IBM for use with Qiskit and the IBM Q Experience.
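To make the idea of an instruction set concrete, the following sketch in plain Python interprets a tiny gate list on a simulated statevector. It is a toy assumption written for illustration, not part of OpenQASM, Blackbird, or cQASM; only the mnemonics `h` and `cx` loosely mirror OpenQASM-style gate names. It runs the canonical two-qubit Bell circuit, a Hadamard followed by a controlled-NOT:

```python
import math

def run(program, n_qubits):
    """Interpret a tiny assembly-like gate list on an n-qubit statevector."""
    dim = 2 ** n_qubits
    state = [0j] * dim
    state[0] = 1 + 0j                      # start in |0...0>
    s = 1 / math.sqrt(2)
    for line in program:
        op, *args = line.split()
        if op == "h":                      # Hadamard on one qubit
            q = int(args[0])
            new = list(state)
            for i in range(dim):
                if not (i >> q) & 1:       # pair basis states differing in bit q
                    j = i | (1 << q)
                    a, b = state[i], state[j]
                    new[i], new[j] = s * (a + b), s * (a - b)
            state = new
        elif op == "cx":                   # controlled-NOT
            c, t = int(args[0]), int(args[1])
            new = list(state)
            for i in range(dim):
                if (i >> c) & 1:           # control bit set: flip target bit
                    new[i ^ (1 << t)] = state[i]
            state = new
        else:
            raise ValueError(f"unknown instruction: {op}")
    return state

# Bell circuit: H on qubit 0, then CNOT controlled on qubit 0.
bell = run(["h 0", "cx 0 1"], n_qubits=2)
# Amplitudes of |00> and |11> are both 1/sqrt(2); |01> and |10> are zero.
```

Real instruction sets of course target hardware rather than a list of complex amplitudes, but the mapping from textual instructions to state-transforming operations is the same in spirit.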
Quil is an instruction set architecture for quantum computing that first introduced a shared quantum/classical memory model. It was introduced by Robert Smith, Michael Curtis, and William Zeng in A Practical Quantum Instruction Set Architecture.[7] Many quantum algorithms (including quantum teleportation, quantum error correction, simulation,[8][9] and optimization algorithms[10]) require a shared memory architecture. Quantum software development kits provide collections of tools to create and manipulate quantum programs.[11] They also provide the means to simulate the quantum programs or prepare them to be run using cloud-based quantum devices and self-hosted quantum devices. The following software development kits can be used to run quantum circuits on prototype quantum devices, as well as on simulators. Cirq is an open source project developed by Google, which uses the Python programming language to create and manipulate quantum circuits. Programs written in Cirq can be run on IonQ, Pasqal,[12] Rigetti, and Alpine Quantum Technologies.[13] Classiq offers a cloud-based quantum IDE that uses a high-level quantum language, Qmod, to generate scalable and efficient quantum circuits with a hardware-aware synthesis engine, which can be deployed across a wide range of QPUs. The platform includes a large library of quantum algorithms. Forest is an open source project developed by Rigetti, which uses the Python programming language to create and manipulate quantum circuits. Results are obtained either using simulators or prototype quantum devices provided by Rigetti. As well as the ability to create programs using basic quantum operations, higher level algorithms are available within the Grove package.[14] Forest is based on the Quil instruction set. MindQuantum is a quantum computing framework based on MindSpore, focusing on the implementation of NISQ algorithms.[15][16][17] Ocean is an open source suite of tools developed by D-Wave.
Written mostly in the Python programming language, it enables users to formulate problems in the Ising model and Quadratic Unconstrained Binary Optimization (QUBO) formats. Results can be obtained by submitting to an online quantum computer in Leap, D-Wave's real-time Quantum Application Environment, customer-owned machines, or classical samplers. PennyLane is an open-source Python library developed by Xanadu Quantum Technologies for differentiable programming of quantum computers.[18][19][20][21] PennyLane provides users the ability to create models using TensorFlow, NumPy, or PyTorch, and connect them with quantum computer backends available from IBMQ, Google Quantum, Rigetti, Quantinuum,[22] and Alpine Quantum Technologies.[13][23] Perceval is an open-source project created by Quandela for designing photonic quantum circuits and developing quantum algorithms, based on Python. Simulations are run either on the user's own computer or on the cloud. Perceval is also used to connect to Quandela's cloud-based photonic quantum processor.[24][25] ProjectQ is an open source project developed at the Institute for Theoretical Physics at ETH, which uses the Python programming language to create and manipulate quantum circuits.[26] Results are obtained either using a simulator, or by sending jobs to IBM quantum devices. Qibo is an open source full-stack API for quantum simulation, quantum hardware control, and calibration developed by multiple research laboratories, including QRC, CQT, and INFN. Qibo is a modular framework which includes multiple backends for quantum simulation and hardware control.[27][28] This project aims at providing a platform-agnostic quantum hardware control framework with drivers for multiple instruments[29] and tools for quantum calibration, characterization, and validation.[30] This framework focuses on self-hosted quantum devices by simplifying the software development required in labs. Qiskit is an open source project developed by IBM.[31] Quantum circuits are created and manipulated using Python.
Results are obtained either using simulators that run on the user's own device, simulators provided by IBM, or prototype quantum devices provided by IBM. As well as the ability to create programs using basic quantum operations, higher level tools for algorithms and benchmarking are available within specialized packages.[32] Qiskit is based on the OpenQASM standard for representing quantum circuits. It also supports pulse-level control of quantum systems via the Qiskit Pulse standard.[33] Qrisp[34] is an open source project coordinated by the Eclipse Foundation[35] and developed in Python by Fraunhofer FOKUS.[36] Qrisp is a high-level programming language for creating and compiling quantum algorithms. Its structured programming model enables scalable development and maintenance. The expressive syntax is based on variables instead of qubits, with the QuantumVariable as core class, and functions instead of gates. Additional tools, such as a performant simulator and automatic uncomputation, complement the extensive framework. Furthermore, it is platform independent, since it offers alternative compilation of elementary functions down to the circuit level, based on device-specific gate sets. The Quantum Development Kit (QDK) is a project developed by Microsoft[37] as part of the .NET Framework. Quantum programs can be written and run within Visual Studio and VS Code using the quantum programming language Q#. Programs developed in the QDK can be run on Microsoft's Azure Quantum,[38] and on quantum computers from Quantinuum,[22] IonQ, and Pasqal.[12] Strawberry Fields is an open-source Python library developed by Xanadu Quantum Technologies for designing, simulating, and optimizing continuous-variable (CV) quantum optical circuits.[39][40] Three simulators are provided: one in the Fock basis, one using the Gaussian formulation of quantum optics,[41] and one using the TensorFlow machine learning library.
Strawberry Fields is also the library for executing programs on Xanadu's quantum photonic hardware.[42][43] t|ket⟩ is a quantum programming environment and optimizing compiler developed by Cambridge Quantum Computing that targets simulators and several quantum hardware back-ends, released in December 2018.[44] There are two main groups of quantum programming languages: imperative quantum programming languages and functional quantum programming languages. The most prominent representatives of the imperative languages are QCL,[45] LanQ,[46] and Q|SI>.[47] Ket[48] is an open-source embedded language designed to facilitate quantum programming, leveraging the familiar syntax and simplicity of Python. It serves as an integral component of the Ket Quantum Programming Platform,[49] seamlessly integrating with a Rust runtime library and a quantum simulator. Maintained by Quantuloop, the project emphasizes accessibility and versatility for researchers and developers. A typical introductory example in Ket is the preparation of a Bell state. The Logic of Quantum Programs (LQP) is a dynamic quantum logic, capable of expressing important features of quantum measurements and unitary evolutions of multi-partite states, and provides logical characterizations of various forms of entanglement. The logic has been used to specify and verify the correctness of various protocols in quantum computation.[50][51] Q Language is the second implemented imperative quantum programming language.[52] Q Language was implemented as an extension of the C++ programming language. It provides classes for basic quantum operations like QHadamard, QFourier, QNot, and QSwap, which are derived from the base class Qop. New operators can be defined using the C++ class mechanism. Quantum memory is represented by the class Qreg. The computation process is executed using a provided simulator. Noisy environments can be simulated using parameters of the simulator.
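The Bell state is a popular first program in these languages because its measurement statistics are easy to check: the two qubits always agree. The following plain-Python sketch, an illustration written for this article rather than Ket code (whose concrete API is not reproduced here), samples measurement outcomes from the Bell state's probability distribution:

```python
import random

# Probability distribution of the Bell state (|00> + |11>)/sqrt(2):
# the outcomes "00" and "11" each occur with probability 1/2,
# while "01" and "10" never occur.
BELL_PROBS = {"00": 0.5, "01": 0.0, "10": 0.0, "11": 0.5}

def sample_bell(shots, rng=None):
    """Draw joint measurement outcomes for both qubits."""
    rng = rng or random.Random()
    outcomes, probs = zip(*BELL_PROBS.items())
    return rng.choices(outcomes, weights=probs, k=shots)

# Tally 1000 simulated shots (seeded for reproducibility).
counts = {}
for outcome in sample_bell(shots=1000, rng=random.Random(7)):
    counts[outcome] = counts.get(outcome, 0) + 1
# Only the perfectly correlated outcomes "00" and "11" ever appear.
```

Whatever the language, a correct Bell-state program should reproduce exactly this correlation when run on a simulator or hardware backend.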
A language developed by Microsoft to be used with the Quantum Development Kit.[53] Quantum Computation Language (QCL) is one of the first implemented quantum programming languages.[54] The most important feature of QCL is the support for user-defined operators and functions. Its syntax resembles the syntax of the C programming language, and its classical data types are similar to primitive data types in C. One can combine classical code and quantum code in the same program. Quantum Guarded Command Language (qGCL) was defined by P. Zuliani in his PhD thesis. It is based on the Guarded Command Language created by Edsger Dijkstra. It can be described as a language for the specification of quantum programs. Quantum Macro Assembler (QMASM) is a low-level language specific to quantum annealers such as the D-Wave.[55] Quantum Modeling (Qmod) language is a high-level language that abstracts away gate-level qubit operations, providing a functional approach to the implementation of quantum algorithms on quantum registers. The language is part of the Classiq platform and can be used directly with its native syntax, through a Python SDK, or with a visual editor; all methods can take advantage of the large library of algorithms and the efficient circuit optimization. Q|SI> is a platform embedded in the .NET language supporting quantum programming in a quantum extension of the while-language.[47][56] This platform includes a compiler of the quantum while-language[57] and a chain of tools for the simulation of quantum computation, optimisation of quantum circuits, termination analysis of quantum programs,[58] and verification of quantum programs.[59][60] Quantum pseudocode proposed by E. Knill is the first formalized language for the description of quantum algorithms. It was introduced with, and tightly connected to, a model of quantum machine called the Quantum Random Access Machine (QRAM). Scaffold is a C-like language that compiles to QASM and OpenQASM.
It is built on top of the LLVM compiler infrastructure to perform optimizations on Scaffold code before generating a specified instruction set.[61][62] Silq is a high-level programming language for quantum computing with a strong static type system, developed at ETH Zürich.[63][64] Efforts are underway to develop functional programming languages for quantum computing. Functional programming languages are well-suited for reasoning about programs. Examples include Selinger's QPL,[65] and the Haskell-like language QML by Altenkirch and Grattage.[66][67] Higher-order quantum programming languages, based on lambda calculus, have been proposed by van Tonder,[68] Selinger and Valiron,[69] and by Arrighi and Dowek.[70] LIQUi|> (pronounced liquid) is a quantum simulation extension of the F# programming language.[71] It is currently being developed by the Quantum Architectures and Computation Group (QuArC),[72] part of the StationQ efforts at Microsoft Research. LIQUi|> seeks to allow theorists to experiment with quantum algorithm design before physical quantum computers are available for use.[73] It includes a programming language, optimization and scheduling algorithms, and quantum simulators. LIQUi|> can be used to translate a quantum algorithm written in the form of a high-level program into the low-level machine instructions for a quantum device.[74] QFC and QPL are two closely related quantum programming languages defined by Peter Selinger. They differ only in their syntax: QFC uses a flow-chart syntax, whereas QPL uses a textual syntax. These languages have classical control flow but can operate on quantum or classical data. Selinger gives a denotational semantics for these languages in a category of superoperators. QML is a Haskell-like quantum programming language by Altenkirch and Grattage.[75][66] Unlike Selinger's QPL, this language takes duplication, rather than discarding, of quantum information as a primitive operation.
Duplication in this context is understood to be the operation that maps |φ⟩ to |φ⟩ ⊗ |φ⟩, and is not to be confused with the impossible operation of cloning; the authors claim it is akin to how sharing is modeled in classical languages. QML also introduces both classical and quantum control operators, whereas most other languages rely on classical control. An operational semantics for QML is given in terms of quantum circuits, while a denotational semantics is presented in terms of superoperators, and these are shown to agree. Both the operational and denotational semantics have been implemented (classically) in Haskell.[76] Quantum lambda calculi are extensions of the classical lambda calculus introduced by Alonzo Church and Stephen Cole Kleene in the 1930s. The purpose of quantum lambda calculi is to extend quantum programming languages with a theory of higher-order functions. The first attempt to define a quantum lambda calculus was made by Philip Maymin in 1996.[77] His lambda-q calculus is powerful enough to express any quantum computation. However, this language can efficiently solve NP-complete problems, and therefore appears to be strictly stronger than the standard quantum computational models (such as the quantum Turing machine or the quantum circuit model). Therefore, Maymin's lambda-q calculus is probably not implementable on a physical device.[citation needed] In 2003, André van Tonder defined an extension of the lambda calculus suitable for proving correctness of quantum programs.
He also provided an implementation in the Scheme programming language.[78] In 2004, Selinger and Valiron defined a strongly typed lambda calculus for quantum computation with a type system based on linear logic.[79] Quipper was published in 2013.[80][81] It is implemented as an embedded language, using Haskell as the host language.[82] For this reason, quantum programs written in Quipper are written in Haskell using provided libraries. For example, the following code implements preparation of a superposition:
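The Quipper snippet referenced above appears to have been lost in extraction. What "preparation of a superposition" means, independent of language, is applying a Hadamard gate to |0⟩ to obtain the uniform superposition (|0⟩ + |1⟩)/√2. A minimal sketch of that effect (in plain Python on explicit amplitudes, not Quipper/Haskell):

```python
# Sketch of superposition preparation: applying a Hadamard gate to |0>
# yields (|0> + |1>)/sqrt(2), the effect the lost Quipper snippet
# was illustrating. This is a plain amplitude calculation, not Quipper.
import math

def hadamard(amp0, amp1):
    """Return the amplitudes after applying H to amp0|0> + amp1|1>."""
    s = 1 / math.sqrt(2)
    return s * (amp0 + amp1), s * (amp0 - amp1)

plus = hadamard(1.0, 0.0)   # start in |0>
print(plus)  # (0.707..., 0.707...): equal amplitude on |0> and |1>
```

Measuring the resulting state gives 0 or 1 with probability 1/2 each, since each amplitude squared is 1/2.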
https://en.wikipedia.org/wiki/Quantum_programming
This is a timeline of quantum computing. Stephen Wiesner invents conjugate coding.[1][a] 13 June – James L. Park (Washington State University, Pullman)'s paper is received by Foundations of Physics,[6] in which he describes the impossibility of disturbance in a quantum transition state, in the context of a disproof of quantum jumps in the concept of the atom described by Bohr.[7][8][b] At the first Conference on the Physics of Computation, held at the Massachusetts Institute of Technology (MIT) in May,[25] Paul Benioff and Richard Feynman give talks on quantum computing. Benioff's talk built on his earlier 1980 work showing that a computer can operate under the laws of quantum mechanics. The talk was titled "Quantum mechanical Hamiltonian models of discrete processes that erase their own histories: application to Turing machines".[26] In Feynman's talk, he observed that it appeared to be impossible to efficiently simulate the evolution of a quantum system on a classical computer, and he proposed a basic model for a quantum computer.[27] Feynman's conjecture of a quantum simulating computer, published in 1982,[d] understood as the claim that the reality of quantum mechanics, expressed as an effective quantum system, necessitates quantum computers,[28] is conventionally accepted as a beginning of quantum computing.[29][30] Charles Bennett and Gilles Brassard employ Wiesner's conjugate coding for distribution of cryptographic keys.[34] Artur Ekert at the University of Oxford proposes entanglement-based secure communication.[40] Daniel R. Simon, at Université de Montréal, Quebec, Canada, invents an oracle problem, Simon's problem, for which a quantum computer would be exponentially faster than a conventional computer. This algorithm introduces the main ideas which were then developed in Peter Shor's factorization algorithm.
https://en.wikipedia.org/wiki/Timeline_of_quantum_computing_and_communication
A complex system is a system composed of many components which may interact with each other.[1] Examples of complex systems are Earth's global climate, organisms, the human brain, infrastructure such as power grids, transportation or communication systems, complex software and electronic systems, social and economic organizations (like cities), an ecosystem, a living cell, and, ultimately, for some authors, the entire universe.[2][3][4] The behavior of a complex system is intrinsically difficult to model due to the dependencies, competitions, relationships, and other types of interactions between their parts or between a given system and its environment.[5] Systems that are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others.[6] Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of their independent area of research.
In many cases, it is useful to represent such a system as a network where the nodes represent the components and links represent their interactions. The term complex systems often refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment.[7] The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them. As an interdisciplinary domain, complex systems draw contributions from many different fields, such as the study of self-organization and critical phenomena from physics, of spontaneous order from the social sciences, chaos from mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, including statistical physics, information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics, psychology, and biology.
Complex systems can be: Complex adaptive systems are special cases of complex systems that are adaptive in that they have the capacity to change and learn from experience.[11] Examples of complex adaptive systems include the international trade markets, social insect and ant colonies, the biosphere and the ecosystem, the brain and the immune system, the cell and the developing embryo, cities, manufacturing businesses and any human social group-based endeavor in a cultural and social system such as political parties or communities.[12] A system is decomposable if the parts of the system (subsystems) are independent from each other; for example, the model of a perfect gas considers the relations among molecules negligible.[13] In a nearly decomposable system, the interactions between subsystems are weak but not negligible; this is often the case in social systems.[13] Conceptually, a system is nearly decomposable if the variables composing it can be separated into classes and subclasses, if these variables are independent for many functions but affect each other, and if the whole system is greater than the parts.[14] Complex systems may have the following features:[15] In 1948, Dr. Warren Weaver published an essay on "Science and Complexity",[31] exploring the diversity of problem types by contrasting problems of simplicity, disorganized complexity, and organized complexity. Weaver described these as "problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole."
While the explicit study of complex systems dates at least to the 1970s,[32] the first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984.[33][34] Early Santa Fe Institute participants included physics Nobel laureates Murray Gell-Mann and Philip Anderson, economics Nobel laureate Kenneth Arrow, and Manhattan Project scientists George Cowan and Herb Anderson.[35] Today, there are over 50 institutes and research centers focusing on complex systems.[citation needed] Since the late 1990s, the interest of mathematical physicists in researching economic phenomena has been on the rise. The proliferation of cross-disciplinary research with the application of solutions originated from the physics epistemology has entailed a gradual paradigm shift in the theoretical articulations and methodological approaches in economics, primarily in financial economics. The development has resulted in the emergence of a new branch of discipline, namely "econophysics", which is broadly defined as a cross-discipline that applies statistical physics methodologies, mostly based on complex systems theory and chaos theory, to economic analysis.[36] The 2021 Nobel Prize in Physics was awarded to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi for their work to understand complex systems. Their work was used to create more accurate computer models of the effect of global warming on the Earth's climate.[37] The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalization: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that each deal with separate issues. Engineering systems are often designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions. Jane Jacobs described cities as being a problem in organized complexity in 1961, citing Dr.
Weaver's 1948 essay.[38] As an example, she explains how an abundance of factors interplay into how various urban spaces lead to a diversity of interactions, and how changing those factors can change how the space is used, and how well the space supports the functions of the city. She further illustrates how cities have been severely damaged when approached as a problem in simplicity by replacing organized complexity with simple and predictable spaces, such as Le Corbusier's "Radiant City" and Ebenezer Howard's "Garden City". Since then, others have written at length on the complexity of cities.[39] Over the last decades, within the emerging field of complexity economics, new predictive tools have been developed to explain economic growth. Such is the case with the models built by the Santa Fe Institute in 1989 and the more recent economic complexity index (ECI), introduced by the MIT physicist Cesar A. Hidalgo and the Harvard economist Ricardo Hausmann. Recurrence quantification analysis has been employed to detect the characteristics of business cycles and economic development. To this end, Orlando et al.[40] developed the so-called recurrence quantification correlation index (RQCI) to test correlations of RQA on a sample signal and then investigated the application to business time series. The said index has been proven to detect hidden changes in time series. Further, Orlando et al.,[41] over an extensive dataset, showed that recurrence quantification analysis may help in anticipating transitions from laminar (i.e. regular) to turbulent (i.e. chaotic) phases, such as USA GDP in 1949, 1953, etc. Last but not least, it has been demonstrated that recurrence quantification analysis can detect differences between macroeconomic variables and highlight hidden features of economic dynamics.
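Recurrence quantification analysis, mentioned above, is built on the recurrence matrix of a time series: entry R[i][j] is 1 when states i and j are closer than a threshold. A minimal sketch of that matrix and of the recurrence rate, one of the basic RQA measures, follows; the series and the threshold are made-up illustrations, and the RQCI index from the cited papers is not reproduced here.

```python
# Illustrative sketch of the recurrence matrix underlying RQA:
# R[i][j] = 1 when |x_i - x_j| < eps, else 0. The sample series and
# eps are invented for illustration.

def recurrence_matrix(series, eps):
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

def recurrence_rate(R):
    """Fraction of recurrent points: a basic RQA measure."""
    n = len(R)
    return sum(sum(row) for row in R) / (n * n)

series = [0.1, 0.5, 0.12, 0.48, 0.11]
R = recurrence_matrix(series, eps=0.05)
print(recurrence_rate(R))  # → 0.52 (13 of 25 pairs, diagonal included)
```

Structures in the recurrence matrix (diagonal lines, blocks) are what RQA measures quantify to distinguish laminar from turbulent phases.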
Focusing on issues of student persistence with their studies, Forsman, Moll and Linder explore the "viability of using complexity science as a frame to extend methodological applications for physics education research", finding that "framing a social network analysis within a complexity science perspective offers a new and powerful applicability across a broad range of PER topics".[42] Healthcare systems are prime examples of complex systems, characterized by interactions among diverse stakeholders, such as patients, providers, policymakers, and researchers, across various sectors like health, government, community, and education. These systems demonstrate properties like non-linearity, emergence, adaptation, and feedback loops.[43] Complexity science in healthcare frames knowledge translation as a dynamic and interconnected network of processes (problem identification, knowledge creation, synthesis, implementation, and evaluation) rather than a linear or cyclical sequence. Such approaches emphasize the importance of understanding and leveraging the interactions within and between these processes and stakeholders to optimize the creation and movement of knowledge. By acknowledging the complex, adaptive nature of healthcare systems, complexity science advocates for continuous stakeholder engagement, transdisciplinary collaboration, and flexible strategies to effectively translate research into practice.[43] Complexity science has been applied to living organisms, and in particular to biological systems. Within the emerging field of fractal physiology, bodily signals, such as heart rate or brain activity, are characterized using entropy or fractal indices. The goal is often to assess the state and the health of the underlying system, and diagnose potential disorders and illnesses.[citation needed] Complex systems theory is related to chaos theory, which in turn has its origins more than a century ago in the work of the French mathematician Henri Poincaré.
Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order.[44] Chaotic systems remain deterministic, though their long-term behavior can be difficult to predict with any accuracy. With perfect knowledge of the initial conditions and of the relevant equations describing the chaotic system's behavior, one can theoretically make perfectly accurate predictions of the system, though in practice this is impossible to do with arbitrary accuracy. The emergence of complex systems theory shows a domain between deterministic order and randomness which is complex.[45] This is referred to as the "edge of chaos".[46] When one analyzes complex systems, sensitivity to initial conditions, for example, is not as important an issue as it is within chaos theory, where it prevails. As stated by Colander,[47] the study of complexity is the opposite of the study of chaos. Complexity is about how a huge number of extremely complicated and dynamic sets of relationships can generate some simple behavioral patterns, whereas chaotic behavior, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions.[45] For recent examples in economics and business, see Stoop et al.,[48] who discussed Android's market position; Orlando,[49] who explained corporate dynamics in terms of mutual synchronization and chaos regularization of bursts in a group of chaotically bursting cells; and Orlando et al.,[50] who modelled financial data (Financial Stress Index, swap and equity, emerging and developed, corporate and government, short and long maturity) with a low-dimensional deterministic model. Therefore, the main difference between chaotic systems and complex systems is their history.[51] Chaotic systems do not rely on their history as complex ones do.
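The sensitivity to initial conditions discussed above can be demonstrated with the logistic map, a standard one-dimensional chaotic system: a single deterministic non-linear rule produces orbits that diverge from arbitrarily close starting points. A minimal sketch (the map and parameter r = 4 are standard; the starting values are made-up examples):

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r * x * (1 - x), with r = 4 (a standard chaotic regime).
# Two starting points differing by one part in a billion decorrelate
# after a few dozen iterations.

def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)   # perturb the start by 1e-9
print(abs(a[1] - b[1]))          # still tiny after one step
print(abs(a[-1] - b[-1]))        # far larger than 1e-9 after 50 steps
```

The perturbation grows roughly exponentially (the map's Lyapunov exponent is positive), which is why long-term prediction fails in practice despite the system being deterministic.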
Chaotic behavior pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'.[clarification needed] On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events, which physicist Murray Gell-Mann called "an accumulation of frozen accidents".[52] In a sense, chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations. A complex system is usually composed of many components and their interactions. Such a system can be represented by a network where nodes represent the components and links represent their interactions.[53][54] For example, the Internet can be represented as a network composed of nodes (computers) and links (direct connections between computers). Other examples of complex networks include social networks, financial institution interdependencies,[55] airline networks,[56] and biological networks.
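The network representation described above (components as nodes, interactions as links) can be sketched with a plain adjacency list; the four-node graph here is made up purely for illustration:

```python
# Minimal sketch of a complex system as a network: components are nodes,
# interactions are undirected links, stored as an adjacency list.
# The graph itself is a made-up example.

network = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C"},
}

def degree(net, node):
    """Number of interactions a component takes part in."""
    return len(net[node])

print(degree(network, "C"))  # → 3: C interacts with A, B, and D
```

Node-level quantities such as degree are the starting point for the network measures (centrality, motifs, robustness) used to study systems like the Internet or airline networks.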
https://en.wikipedia.org/wiki/Complex_system
A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation; or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s.
The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts.[1] By 1943, most human computers were women.[2] The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)".
The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897", and that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine".[3] The name has remained, although modern computers are capable of many higher-level functions. Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a][4] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.[5] The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price.[6] It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century.[7] Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use.
The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century.[8] The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer[9][10] and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235.[11] Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe,[12] an early fixed-wired knowledge processing machine[13] with a gear train and gear-wheels,[14] c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen.
By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates.[15] In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators.[16] In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials,[17][18][19][20] which were published in 1901 by the Paris Academy of Sciences.[21] Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer.
Considered the "father of the computer",[22] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, designed to aid in navigational calculations, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables".[23] In 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.[24][25] The machine was about a century ahead of its time. All the parts for his machine had to be made by hand, which was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties, as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine.
The paper contains a design of a machine capable of calculating formulas like a^x(y−z)^2, for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic.[26][27][28] In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results,[29][30][31][32] demonstrating the feasibility of an electromechanical analytical engine.[33]

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[34] The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872.
The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson.[16] The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT.[35] By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed]

Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept that underlies all electronic digital computers.[36][37] By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries.[38]

Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer.[39]

In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer.[42][43] The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz.[44] Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard.
It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time.[45] The Z3 was not itself a universal computer but could be extended to be Turing complete.[46][47]

Zuse's next computer, the Z4, became the world's first commercial computer; after an initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich.[48] The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin.[48] The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe.[49]

Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.[34] In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[50] the first "automatic electronic digital computer".[51] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.[52]

During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications.
The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women.[53][54] To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[52] He spent eleven months from early February 1943 designing and building the first Colossus.[55] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[56] and attacked its first message on 5 February.[52]

Colossus was the world's first electronic digital programmable computer.[34] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.[57][58]

The ENIAC[59] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster and more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls".[60][61]

It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine.
It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[62]

The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper,[63] On Computable Numbers. Turing proposed a simple device that he called a "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[64] Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.

Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring it.[52] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper.
In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.[34]

The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[65] It was designed as a testbed for the Williams tube, the first random-access digital storage device.[66] Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer.[67] As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.[68] Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[69] In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951[70] and ran the world's first routine office computer job.
The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948.[71][72] From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications.[73]

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves.[74] Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer.
That distinction goes to the Harwell CADET of 1955,[75] built by the electronics division of the Atomic Energy Research Establishment at Harwell.[75][76]

The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960[77][78][79][80][81][82] and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses.[73] With its high scalability,[83] and much lower power consumption and higher density than bipolar junction transistors,[84] the MOSFET made it possible to build high-density integrated circuits.[85][86] In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. The MOSFET led to the microcomputer revolution,[87] and became the driving force behind the computer revolution.[88][89] The MOSFET is the most widely used transistor in computers,[90][91] and is the fundamental building block of digital electronics.[92]

The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by Geoffrey W.A. Dummer, a radar scientist working for the Royal Radar Establishment of the Ministry of Defence. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952.[93]

The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[94] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[95] In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ...
wherein all the components of the electronic circuit are completely integrated".[96][97] However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip.[98] Kilby's IC had external wire connections, which made it difficult to mass-produce.[99]

Noyce came up with his own idea of an integrated circuit half a year later than Kilby.[100] Noyce's invention was the first true monolithic IC chip.[101][99] His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide.[102][103][104][105][106][107]

Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors).[108] The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962.[109] General Microelectronics later introduced the first commercial MOS IC, in 1964,[110] developed by Robert Norman.[109] Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968.[111] The MOSFET has since become the most critical device component in modern ICs.[108]

The development of the MOS integrated circuit led to the invention of the microprocessor,[112][113] and heralded an explosion in the commercial and personal use of computers.
While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[114] designed and realized by Federico Faggin with his silicon-gate MOS IC technology,[112] along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b][116] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip.[86]

Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin.[117] They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power.

The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s.[118] The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s.
These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market.[119] These are powered by systems on a chip (SoCs), complete computers on a microchip the size of a coin.[117]

Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information."[124] According to this definition, any device that processes information qualifies as a computer.

The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware.

A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information, so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to output devices.
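The way on/off circuits arranged as logic gates can control one another may be illustrated in software. A minimal sketch in Python (a hypothetical model, not hardware: bits are modelled as 0/1 and every gate is composed from a single NAND primitive, mirroring how switching circuits compose):

```python
# Model a bit as 0 or 1 and build the basic logic gates from a
# single primitive, NAND, by composition.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):        # NOT is NAND of an input with itself
    return nand(a, a)

def and_(a, b):     # AND = NOT(NAND)
    return not_(nand(a, b))

def or_(a, b):      # OR via De Morgan: a OR b = NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

def xor(a, b):      # XOR composed from the gates above
    return and_(or_(a, b), nand(a, b))
```

Here the output of each gate feeds the inputs of others, which is exactly the sense in which "one or more of the circuits may control the state of one or more of the other circuits".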
The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are: The means through which a computer gives output are known as output devices. Some examples of output devices are:

The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f]

The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.

The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components.
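The fetch–execute cycle and the effect of writing to the program counter can be sketched as a toy interpreter. This is a Python illustration with invented instruction names, not any real CPU's instruction set:

```python
# A toy CPU loop: the program counter (pc) selects the next
# instruction; a jump simply stores a new value into pc.
def run(program, memory):
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        pc += 1                      # default: step to the next instruction
        if op == "ADD":              # add arg into memory cell 0
            memory[0] += arg
        elif op == "JUMP":           # overwrite the program counter
            pc = arg
        elif op == "JUMP_IF_ZERO":   # conditional branch on cell 0
            if memory[0] == 0:
                pc = arg
        elif op == "HALT":
            break
    return memory

# The conditional jump skips the "ADD 99", so cell 0 ends at 0.
mem = run([("ADD", 5), ("ADD", -5), ("JUMP_IF_ZERO", 4),
           ("ADD", 99), ("HALT", 0)], [0])
```

Because the program counter is just a variable, an instruction that writes to it changes which instruction executes next; that is all a "jump" is.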
Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor.

The ALU is capable of performing two classes of operations: arithmetic and logic.[125] The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic.

Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously.[126] Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease.
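The two classes of ALU operation, and the comparisons that yield Boolean truth values, can be sketched as a single dispatch function. This is a Python illustration of the idea, with an invented, minimal operation set:

```python
# A toy ALU: the control unit selects an operation code; the ALU
# applies it to two integer inputs and returns the result.
def alu(op, a, b=0):
    ops = {
        "ADD": lambda: a + b,        # arithmetic operations
        "SUB": lambda: a - b,
        "AND": lambda: a & b,        # bitwise logic operations
        "OR":  lambda: a | b,
        "XOR": lambda: a ^ b,
        "NOT": lambda: ~a,
        "GT":  lambda: int(a > b),   # comparisons return truth values
        "EQ":  lambda: int(a == b),
    }
    return ops[op]()
```

For example, `alu("GT", 64, 65)` answers the question "is 64 greater than 65?" with 0 (false), which the control unit could then use to decide a conditional jump.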
Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely.
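The byte ranges and two's complement encoding described above can be checked directly. A short Python sketch (the helper names are invented for illustration):

```python
# Store a signed integer in one byte using two's complement,
# then read it back: 8 bits cover either 0..255 or -128..+127.
def to_byte(n):
    assert -128 <= n <= 127
    return n & 0xFF                  # two's complement encoding in 8 bits

def from_byte(b):
    return b - 256 if b >= 128 else b

# Larger (non-negative) numbers span several consecutive bytes;
# here the least significant byte comes first (little-endian).
def to_bytes(n, width):
    return [(n >> (8 * i)) & 0xFF for i in range(width)]
```

For instance, −1 is stored as the byte value 255, and the number 1000 needs two bytes, 232 and 3, since 1000 = 3 × 256 + 232.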
In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. However, it is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speed is unnecessary.[g]

In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

I/O is the means by which a computer exchanges information with the outside world.[128] Devices that provide input or output to the computer are called peripherals.[129] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry.
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn.[130] One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.[131]

Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run simultaneously without unacceptable speed loss.
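The time-sharing behaviour described above, including the rule that a program waiting on I/O takes no slice, can be sketched as a round-robin scheduler. A simplified Python model (the task structure and field names are invented for illustration):

```python
# Round-robin time-sharing: each runnable program gets a time
# slice in turn; a program blocked on I/O is skipped entirely.
def schedule(tasks, slices):
    order = []
    for tick in range(slices):
        task = tasks[tick % len(tasks)]
        if task["waiting_for_io"]:
            continue             # no time slice until its event occurs
        order.append(task["name"])
    return order

tasks = [{"name": "editor",      "waiting_for_io": False},
         {"name": "printer_job", "waiting_for_io": True},
         {"name": "clock",       "waiting_for_io": False}]
```

Over six ticks the blocked printer job consumes no slices, so the two runnable programs share all the processor time between them.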
Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.

Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither can be realistically used on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware".

There are thousands of different programming languages—some intended for general purpose, others useful for only highly specialized applications.
The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book.
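The jump that "remembers" where it came from can be modelled with a small return-address stack. A Python sketch with invented instruction names (CALL pushes the return address, RETURN pops it):

```python
# Subroutine support in a toy machine: CALL remembers the address
# it jumped from on a stack; RETURN pops it so execution resumes
# at the instruction following the call.
def execute(program):
    pc, stack, trace = 0, [], []
    while pc < len(program):
        op, arg = program[pc]
        if op == "CALL":
            stack.append(pc + 1)     # remember the return address
            pc = arg
        elif op == "RETURN":
            pc = stack.pop()         # resume after the CALL
        elif op == "MARK":           # record that this point was reached
            trace.append(arg)
            pc += 1
        elif op == "HALT":
            break
    return trace

prog = [("CALL", 3), ("MARK", "after-call"), ("HALT", None),
        ("MARK", "inside-sub"), ("RETURN", None)]
```

Running `execute(prog)` visits the subroutine body first and only then the instruction following the call, exactly the behaviour the text describes.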
While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions; the article's example is written in the MIPS assembly language. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data.
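The MIPS listing referred to above is not reproduced in this text. As a stand-in, the following is a minimal sketch in Python of the same two ideas: a loop that sums the numbers 1 to 1,000, run on a toy machine whose instructions are stored as plain numbers in a list. The opcodes, register numbers, and instruction format are all invented for illustration; they do not correspond to any real CPU.

```python
# A toy stored-program machine: both opcodes and operands are numbers held
# in one list, illustrating that a program is itself just numeric data.
LOAD, ADD, INC, JLT, HALT = 0, 1, 2, 3, 4   # invented opcodes
SUM, I = 0, 1                               # register numbers

def run(program):
    regs = [0, 0]
    pc = 0                    # program counter: index of the next instruction
    while True:
        op = program[pc]
        if op == LOAD:        # LOAD reg, value
            regs[program[pc + 1]] = program[pc + 2]
            pc += 3
        elif op == ADD:       # ADD dst, src : dst += src
            regs[program[pc + 1]] += regs[program[pc + 2]]
            pc += 3
        elif op == INC:       # INC reg
            regs[program[pc + 1]] += 1
            pc += 2
        elif op == JLT:       # JLT reg, limit, target : jump back if reg < limit
            if regs[program[pc + 1]] < program[pc + 2]:
                pc = program[pc + 3]
            else:
                pc += 4
        elif op == HALT:
            return regs[SUM]

# The program: sum the numbers from 1 to 1,000, as in the article's example.
program = [
    LOAD, SUM, 0,
    LOAD, I, 1,
    ADD, SUM, I,       # loop body begins at index 6
    INC, I,
    JLT, I, 1001, 6,   # conditional jump: repeat while i < 1001
    HALT,
]

print(run(program))  # 500500
```

The conditional jump at index 11 is what the text calls a branch: it is the mechanism that lets a few instructions express thousands of additions.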
The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture.[133][134] In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
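The translation step an assembler performs can be sketched in a few lines. This is a minimal illustration, not a real assembler: the mnemonics match those named above, but the opcode numbers and the one-mnemonic-per-line source format are invented assumptions.

```python
# A toy assembler: translate mnemonic source lines such as "ADD 3 4"
# into flat lists of numbers (toy "machine code").
OPCODES = {"ADD": 0x01, "SUB": 0x02, "MULT": 0x03, "JUMP": 0x04}  # invented

def assemble(source):
    """Turn one-instruction-per-line mnemonic text into a list of numbers."""
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, *operands = line.split()
        machine_code.append(OPCODES[mnemonic])          # mnemonic -> opcode
        machine_code.extend(int(x) for x in operands)   # operands stay numeric
    return machine_code

program = """
ADD 3 4
JUMP 0
"""
print(assemble(program))  # [1, 3, 4, 4, 0]
```

A real assembler also resolves symbolic labels and emits a binary encoding specific to one CPU, but the core job is exactly this table lookup.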
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
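The compilation idea described above can be sketched in miniature: a "high-level" arithmetic expression is translated into instructions for a simple stack machine. The instruction names (PUSH, ADD, MULT, SUB) are invented for illustration; a real compiler would instead target the machine language of a specific CPU.

```python
# A toy "compiler": translate an arithmetic expression into stack-machine
# instructions, using Python's ast module to parse the source.
import ast

def compile_expr(expression):
    """Translate an arithmetic expression into stack-machine instructions."""
    ops = {ast.Add: "ADD", ast.Mult: "MULT", ast.Sub: "SUB"}

    def emit(node):
        if isinstance(node, ast.Constant):
            return [f"PUSH {node.value}"]
        if isinstance(node, ast.BinOp):
            # Compile both operands first, then the operation that combines them.
            return emit(node.left) + emit(node.right) + [ops[type(node.op)]]
        raise ValueError("unsupported construct")

    return emit(ast.parse(expression, mode="eval").body)

print(compile_expr("2 + 3 * 4"))
# ['PUSH 2', 'PUSH 3', 'PUSH 4', 'MULT', 'ADD']
```

Note how the output respects operator precedence: the multiplication is emitted before the addition, something the programmer never had to spell out in the high-level source.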
Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable.[135] As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered.[136] Large programs involving thousands of lines of code and more require formal software methodologies.[137] The task of developing large software systems presents a significant intellectual challenge.[138] Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult;[139] the academic and professional discipline of software engineering concentrates specifically on this challenge.[140] Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash.[141] Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer.
Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[142] Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[143] In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.[144] The technologies that made the Arpanet possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of computers. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s, computer networking became almost ubiquitous, due to the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL. The number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information.
"Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments. There is active research to make unconventional computers out of many promising new types of technology, such asoptical computers,DNA computers,neural computers, andquantum computers. Most computers are universal, and are able to calculate anycomputable function, and are limited only by their memory capacity and operating speed. However different designs of computers can give very different performance for particular problems; for example quantum computers can potentially break some modern encryption algorithms (byquantum factoring) very quickly. There are many types ofcomputer architectures: Of all theseabstract machines, a quantum computer holds the most promise for revolutionizing computing.[145]Logic gatesare a common abstraction which can apply to most of the abovedigitaloranalogparadigms. The ability to store and execute lists of instructions calledprogramsmakes computers extremely versatile, distinguishing them fromcalculators. TheChurch–Turing thesisis a mathematical statement of this versatility: any computer with aminimum capability (being Turing-complete)is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook,supercomputer,cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century,artificial intelligencesystems were predominantlysymbolic: they executed code that was explicitly programmed by software developers.[146]Machine learningmodels, however, have a set parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. 
The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs).[147] Some large language models are able to control computers or robots.[148][149] AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans.[150] As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
https://en.wikipedia.org/wiki/Computer
High-performance computing (HPC) is the use of supercomputers and computer clusters to solve advanced computation problems. HPC integrates systems administration (including network and security knowledge) and parallel programming into a multidisciplinary field that combines digital electronics, computer architecture, system software, programming languages, algorithms and computational techniques.[1] HPC technologies are the tools and systems used to implement and create high performance computing systems.[2] In recent years, HPC systems have shifted from supercomputing to computing clusters and grids.[1] Because of the need for networking in clusters and grids, high-performance computing technologies are being promoted by the use of a collapsed network backbone, because the collapsed backbone architecture is simple to troubleshoot and upgrades can be applied to a single router as opposed to multiple ones. HPC integrates with data analytics in AI engineering workflows to generate new data streams that increase simulation ability to answer the "what if" questions.[3] The term is most commonly associated with computing used for scientific research or computational science. A related term, high-performance technical computing (HPTC), generally refers to the engineering applications of cluster-based computing (such as computational fluid dynamics and the building and testing of virtual prototypes). HPC has also been applied to business uses such as data warehouses, line-of-business (LOB) applications, and transaction processing. High-performance computing (HPC) as a term arose after the term "supercomputing".[4] HPC is sometimes used as a synonym for supercomputing; but, in other contexts, "supercomputer" is used to refer to a more powerful subset of "high-performance computers", and the term "supercomputing" becomes a subset of "high-performance computing". The potential for confusion over the use of these terms is apparent.
Because most current applications are not designed for HPC technologies but are retrofitted, they are not designed or tested for scaling to more powerful processors or machines.[2] Since networking clusters and grids use multiple processors and computers, these scaling problems can cripple critical systems in future supercomputing systems. Therefore, either the existing tools do not address the needs of the high-performance computing community or the HPC community is unaware of these tools.[2] A few examples of commercial HPC technologies include: In government and research institutions, scientists simulate galaxy formation and evolution, fusion energy, and global warming, as well as work to create more accurate short- and long-term weather forecasts.[5] The world's tenth most powerful supercomputer in 2008, IBM Roadrunner (located at the United States Department of Energy's Los Alamos National Laboratory),[6] simulated the performance, safety, and reliability of nuclear weapons and certified their functionality.[7] TOP500 ranks the world's 500 fastest high-performance computers, as measured by the High Performance LINPACK (HPL) benchmark. Not all existing computers are ranked, either because they are ineligible (e.g., they cannot run the HPL benchmark) or because their owners have not submitted an HPL score (e.g., because they do not wish the size of their system to become public information, for defense reasons). In addition, the use of the single LINPACK benchmark is controversial, in that no single measure can test all aspects of a high-performance computer. To help overcome the limitations of the LINPACK test, the U.S. government commissioned one of its originators, Jack Dongarra of the University of Tennessee, to create a suite of benchmark tests that includes LINPACK and others, called the HPC Challenge benchmark suite.
This evolving suite has been used in some HPC procurements, but, because it is not reducible to a single number, it has been unable to overcome the publicity advantage of the less useful TOP500 LINPACK test. The TOP500 list is updated twice a year, once in June at the ISC European Supercomputing Conference and again at a US Supercomputing Conference in November. Many ideas for the new wave of grid computing were originally borrowed from HPC. Traditionally, HPC has involved an on-premises infrastructure, investing in supercomputers or computer clusters. Over the last decade, cloud computing has grown in popularity for offering computer resources in the commercial sector regardless of their investment capabilities.[8] Some characteristics like scalability and containerization also have raised interest in academia.[9] However, cloud security concerns such as data confidentiality are still considered when deciding between cloud or on-premises HPC resources.[8] Below is a list of the main HPCs by computing power, as reported in the Top500 list:[10]
https://en.wikipedia.org/wiki/High-performance_computing
Meta (from the Greek μετά, meta, meaning 'after' or 'beyond') is an adjective meaning 'more comprehensive' or 'transcending'.[1] In modern nomenclature, meta can also serve as a prefix meaning self-referential, as a field of study or endeavor (metatheory: theory about a theory; metamathematics: mathematical theories about mathematics; meta-axiomatics or meta-axiomaticity: axioms about axiomatic systems; metahumor: joking about the ways humor is expressed; etc.).[2] In Greek, the prefix meta- is generally less esoteric than in English; Greek meta- is equivalent to the Latin words post- or ad-. The use of the prefix in this sense occurs occasionally in scientific English terms derived from Greek. For example, the term Metatheria (the name for the clade of marsupial mammals) uses the prefix meta- in the sense that the Metatheria occur on the tree of life adjacent to the Theria (the placental mammals). In epistemology, and often in common use, the prefix meta- is used to mean 'about (its own category)'. For example, metadata is data about data (who has produced them, when, what format the data are in, and so on). In a database, metadata is also data about data stored in a data dictionary, describing information (data) about database tables such as the table name, table owner, details about columns, etc. – essentially describing the table. In psychology, metamemory refers to an individual's knowledge about whether or not they would remember something if they concentrated on recalling it. The modern sense of "an X about X" has given rise to concepts like "meta-cognition" (cognition about cognition), "meta-emotion" (emotion about emotion), "meta-discussion" (discussion about discussion), "meta-joke" (joke about jokes), and "metaprogramming" (writing programs about writing programs). In a rule-based system, a metarule is a rule governing the application of other rules.[3] "Metagaming", accordingly, refers to games about games. However, it has a different meaning depending on the context.
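The "metaprogramming" sense mentioned above — writing programs about writing programs — can be sketched concretely. This is a minimal illustration in Python; the generated function and its name are invented for the example.

```python
# A program that writes another program: the generated source text is
# itself compiled and executed, so code here is treated as ordinary data.
def make_adder_source(n):
    """Generate the source code of a function that adds n to its argument."""
    return f"def add_{n}(x):\n    return x + {n}\n"

source = make_adder_source(5)   # a string containing a small Python program
namespace = {}
exec(source, namespace)         # run the generated text as a program
add_5 = namespace["add_5"]      # retrieve the freshly created function

print(add_5(10))  # 15
```

This mirrors the "data about data" pattern of metadata: the program-writing program manipulates code exactly the way ordinary programs manipulate values.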
In role-playing games, this means that someone with a higher level of knowledge is playing; that is, the player incorporates factors that are outside the actual framework of the game – the player has knowledge that was not acquired through experiencing the game, but through external sources. This type of metagaming is often frowned upon in many role-playing game communities because it impairs game balance and equality of opportunity.[4] Metagaming can also refer to a game that is used to create or change the rules while playing a game. One can play this type of metagame and choose which rules apply during the game itself, potentially changing the level of difficulty. Such metagames include campaign role-playing games like Halo 3.[5] Complex card or board games, e.g. poker or chess, are also often referred to as metagames. According to Nigel Howard, this type of metagame is defined as a decision-making process that is derived from the analysis of possible outcomes in relation to external variables that change a problem.[6] Any subject can be said to have a metatheory, a theoretical consideration of its properties – such as its foundations, methods, form, and utility – on a higher level of abstraction. In linguistics, grammar is considered to be a metalanguage: a language operating on a higher level to describe properties of the plain language, and not itself. The prefix comes from the Greek preposition and prefix meta- (μετα-), from μετά,[7] which typically means "after", "beside", "with" or "among". Other meanings include "beyond", "adjacent" and "self", and it is also used in the forms μετ- before vowels and μεθ- "meth-" before aspirated vowels. The earliest form of the word "meta" is the Mycenaean Greek me-ta, written in Linear B syllabic script.[8] The Greek preposition is cognate with the Old English preposition mid "with", still found as a prefix in midwife. Its use in English is the result of back-formation from the word "metaphysics".
In origin, Metaphysics was just the title of one of the principal works of Aristotle; it was so named (by Andronicus of Rhodes) because in the customary ordering of the works of Aristotle it was the book following Physics; it thus meant nothing more than "[the book that comes] after [the book entitled] Physics". However, even Latin writers misinterpreted this as entailing that metaphysics constituted "the science of what is beyond the physical".[9] Nonetheless, Aristotle's Metaphysics enunciates considerations of a nature above physical reality, which one can examine through certain philosophy – for example, such a thing as an unmoved mover. The use of the prefix was later extended to other contexts, based on the understanding of metaphysics as meaning "the science of what is beyond the physical". The Oxford English Dictionary cites uses of the meta- prefix as "beyond, about" (such as meta-economics and meta-philosophy) going back to 1917. However, these formations are parallel to the original "metaphysics" and "metaphysical", that is, as a prefix to general nouns (fields of study) or adjectives. Going by the OED citations, it began being used with specific nouns in connection with mathematical logic sometime before 1929. (In 1920 David Hilbert proposed a research project in what was called "metamathematics".) A notable early citation is W. V. O. Quine's 1937 use of the word "metatheorem",[10] where meta- has the modern meaning of "an X about X". Douglas Hofstadter, in his 1979 book Gödel, Escher, Bach (and in the 1985 sequel, Metamagical Themas), popularized this meaning of the term.
The book, which deals with self-reference and strange loops, and touches on Quine and his work, was influential in many computer-related subcultures and may be responsible for the popularity of the prefix, for its use as a solo term, and for the many recent coinages which use it.[11] Hofstadter uses meta as a stand-alone word, as an adjective, and as a directional preposition ("going meta", a term he coins for the old rhetorical trick of taking a debate or analysis to another level of abstraction, as when somebody says "This debate isn't going anywhere"). This book may also be responsible for the association of "meta" with strange loops, as opposed to just abstraction. According to Hofstadter, it is about self-reference, which means a sentence, idea or formula refers to itself. The Merriam-Webster Dictionary describes it as "showing or suggesting an explicit awareness of itself or oneself as a member of its category: cleverly self-referential".[12] The sentence "This sentence contains thirty-six letters," and the sentence which embeds it, are examples of "metasentences" referencing themselves in this way. As maintained in the book Gödel, Escher, Bach, a strange loop is given if different logical statements or theories are put together in contradiction, thus distorting the meaning and generating logical paradoxes. One example is the liar paradox, a paradox in philosophy or logic that arises when a sentence claims its own falsehood (or untruth); for instance: "This sentence is not true."
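The metasentence quoted above makes a checkable claim about itself, which a few lines of code can verify. The convention assumed here is that only alphabetic characters count as letters, not spaces, hyphens, or punctuation.

```python
# Verify the self-referential claim of the quoted metasentence.
sentence = "This sentence contains thirty-six letters"
letter_count = sum(1 for ch in sentence if ch.isalpha())
print(letter_count)  # 36
```

The check succeeds only under that counting convention, which is itself a small illustration of why self-referential statements require precisely fixed rules of interpretation.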
Until the beginning of the 20th century, this kind of paradox was a considerable problem for a philosophical theory of truth. Alfred Tarski solved this difficulty by proving that such paradoxes do not exist with a consistent separation of object language and metalanguage.[13] "For every formalized language, a formally correct and factually applicable definition of the true statement can be constructed in the metalanguage with the sole help of expressions of a general-logical character, expressions of the language itself and of terms from the morphology of the language, but on the condition that the metalanguage is of a higher order than the language that is the subject of the investigation."[14] Metagaming is a general term describing an approach to playing a game as optimally as possible within its current rules. The shorthand meta has been backronymed as "Most Effective Tactics Available" to tersely explain the concept. In the world of competitive games, rule imprecisions and non-goal-oriented play are not commonplace. As a result, the extent of metagaming narrows down mostly to studying strategies of top players and exploiting commonly used strategies for an advantage.[15] Those may evolve as updates are released or new, better strategies are discovered by top players.[16] The opposite metagame of playing a relatively unknown strategy for surprise is often called off-meta.[15] This usage is particularly common in games that have large, organized play systems or tournament circuits. Some examples of this kind of environment are tournament scenes for tabletop or computer collectible card games like Magic: The Gathering, Gwent: The Witcher Card Game or Hearthstone, tabletop war-gaming such as Warhammer 40,000 and Flames of War, or team-based multiplayer online games such as Star Conflict, Dota 2, League of Legends, and Team Fortress 2. In some games, such as Heroes of the Storm, varied level design makes the battleground a significant factor in the metagame.[16]
https://en.wikipedia.org/wiki/Meta_(prefix)
Metaknowledge or meta-knowledge is knowledge about knowledge.[1] Some authors divide meta-knowledge into orders. Other authors call zero-order meta-knowledge first-order knowledge, and call first-order meta-knowledge second-order knowledge; meta-knowledge is also known as higher-order knowledge.[3] Meta-knowledge is a fundamental conceptual instrument in such research and scientific domains as knowledge engineering, knowledge management, and others dealing with the study of and operations on knowledge, seen as unified objects/entities, abstracted from local conceptualizations and terminologies. Examples of first-level individual meta-knowledge are methods of planning, modeling, tagging, learning and every modification of domain knowledge. Indeed, universal meta-knowledge frameworks have to be valid for the organization of meta-levels of individual meta-knowledge. Meta-knowledge may be automatically harvested from electronic publication archives, to reveal patterns in research, relationships between researchers and institutions and to identify contradictory results.[1]
https://en.wikipedia.org/wiki/Meta-knowledge
Metamathematics is the study of mathematics itself using mathematical methods. This study produces metatheories, which are mathematical theories about other mathematical theories. Emphasis on metamathematics (and perhaps the creation of the term itself) owes itself to David Hilbert's attempt to secure the foundations of mathematics in the early part of the 20th century. Metamathematics provides "a rigorous mathematical technique for investigating a great variety of foundation problems for mathematics and logic" (Kleene 1952, p. 59). An important feature of metamathematics is its emphasis on differentiating between reasoning from inside a system and from outside a system. An informal illustration of this is categorizing the proposition "2+2=4" as belonging to mathematics while categorizing the proposition "'2+2=4' is valid" as belonging to metamathematics. Metamathematical metatheorems about mathematics itself were originally differentiated from ordinary mathematical theorems in the 19th century to focus on what was then called the foundational crisis of mathematics. Richard's paradox (Richard 1905) concerning certain 'definitions' of real numbers in the English language is an example of the sort of contradictions that can easily occur if one fails to distinguish between mathematics and metamathematics. Something similar can be said around the well-known Russell's paradox (Does the set of all those sets that do not contain themselves contain itself?). Metamathematics was intimately connected to mathematical logic, so that the early histories of the two fields, during the late 19th and early 20th centuries, largely overlap. More recently, mathematical logic has often included the study of new pure mathematics, such as set theory, category theory, recursion theory and pure model theory. Serious metamathematical reflection began with the work of Gottlob Frege, especially his Begriffsschrift, published in 1879.
David Hilbert was the first to invoke the term "metamathematics" with regularity (see Hilbert's program), in the early 20th century. In his hands, it meant something akin to contemporary proof theory, in which finitary methods are used to study various axiomatized mathematical theories (Kleene 1952, p. 55). Other prominent figures in the field include Bertrand Russell, Thoralf Skolem, Emil Post, Alonzo Church, Alan Turing, Stephen Kleene, Willard Quine, Paul Benacerraf, Hilary Putnam, Gregory Chaitin, Alfred Tarski, Paul Cohen and Kurt Gödel. Today, metalogic and metamathematics broadly overlap, and both have been substantially subsumed by mathematical logic in academia. The discovery of hyperbolic geometry had important philosophical consequences for metamathematics. Before its discovery there was just one geometry and mathematics; the idea that another geometry existed was considered improbable. When Gauss discovered hyperbolic geometry, it is said that he did not publish anything about it out of fear of the "uproar of the Boeotians", which would ruin his status as princeps mathematicorum (Latin, "the Prince of Mathematicians").[1] The "uproar of the Boeotians" came and went, and gave an impetus to metamathematics and great improvements in mathematical rigour, analytical philosophy and logic. Begriffsschrift (German for, roughly, "concept-script") is a book on logic by Gottlob Frege, published in 1879, and the formal system set out in that book. Begriffsschrift is usually translated as concept writing or concept notation; the full title of the book identifies it as "a formula language, modeled on that of arithmetic, of pure thought". Frege's motivation for developing his formal approach to logic resembled Leibniz's motivation for his calculus ratiocinator (although, in his Foreword, Frege clearly denies that he reached this aim, and also that his main aim would be constructing an ideal language like Leibniz's, which Frege declares to be a quite hard and idealistic, though not impossible, task).
Frege went on to employ his logical calculus in his research on the foundations of mathematics, carried out over the next quarter century. Principia Mathematica, or "PM" as it is often abbreviated, was an attempt to describe a set of axioms and inference rules in symbolic logic from which all mathematical truths could in principle be proven. As such, this ambitious project is of great importance in the history of mathematics and philosophy,[2] being one of the foremost products of the belief that such an undertaking may be achievable. However, in 1931, Gödel's incompleteness theorem proved definitively that PM, and in fact any other attempt, could never achieve this goal; that is, for any set of axioms and inference rules proposed to encapsulate mathematics, there would in fact be some truths of mathematics which could not be deduced from them. One of the main inspirations and motivations for PM was the earlier work of Gottlob Frege on logic, which Russell discovered allowed for the construction of paradoxical sets. PM sought to avoid this problem by ruling out the unrestricted creation of arbitrary sets. This was achieved by replacing the notion of a general set with the notion of a hierarchy of sets of different 'types', a set of a certain type only allowed to contain sets of strictly lower types. Contemporary mathematics, however, avoids paradoxes such as Russell's in less unwieldy ways, such as the system of Zermelo–Fraenkel set theory. Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic. The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The two results are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible, giving a negative answer to Hilbert's second problem.
The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an "effective procedure" (e.g., a computer program, but it could be any sort of algorithm) is capable of proving all truths about the relations of the natural numbers (arithmetic). For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency. The T-schema or truth schema (not to be confused with 'Convention T') is used to give an inductive definition of truth which lies at the heart of any realisation of Alfred Tarski's semantic theory of truth. Some authors refer to it as the "Equivalence Schema", a synonym introduced by Michael Dummett.[3] The T-schema is often expressed in natural language, but it can be formalized in many-sorted predicate logic or modal logic; such a formalisation is called a T-theory. T-theories form the basis of much fundamental work in philosophical logic, where they are applied in several important controversies in analytic philosophy. As expressed in semi-natural language (where 'S' is the name of the sentence abbreviated to S): 'S' is true if and only if S. Example: 'snow is white' is true if and only if snow is white. The Entscheidungsproblem (German for 'decision problem') is a challenge posed by David Hilbert in 1928.[4] The Entscheidungsproblem asks for an algorithm that takes as input a statement of a first-order logic (possibly with a finite number of axioms beyond the usual axioms of first-order logic) and answers "Yes" or "No" according to whether the statement is universally valid, i.e., valid in every structure satisfying the axioms.
By the completeness theorem of first-order logic, a statement is universally valid if and only if it can be deduced from the axioms, so the Entscheidungsproblem can also be viewed as asking for an algorithm to decide whether a given statement is provable from the axioms using the rules of logic. In 1936, Alonzo Church and Alan Turing published independent papers[5] showing that a general solution to the Entscheidungsproblem is impossible, assuming that the intuitive notion of "effectively calculable" is captured by the functions computable by a Turing machine (or equivalently, by those expressible in the lambda calculus). This assumption is now known as the Church–Turing thesis.
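While the Entscheidungsproblem for full first-order logic is unsolvable, its propositional fragment is decidable by exhaustive truth-table evaluation. The following sketch is an illustration added here, not part of the original article; the tuple-based formula representation and the function names are invented for the example.

```python
from itertools import product

# A formula is a nested tuple: ("var", name), ("not", f),
# ("and", f, g), ("or", f, g), or ("implies", f, g).

def variables(f):
    """Collect the set of variable names occurring in a formula."""
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(sub) for sub in f[1:]))

def evaluate(f, env):
    """Evaluate a formula under a truth assignment (dict name -> bool)."""
    op = f[0]
    if op == "var":
        return env[f[1]]
    if op == "not":
        return not evaluate(f[1], env)
    if op == "and":
        return evaluate(f[1], env) and evaluate(f[2], env)
    if op == "or":
        return evaluate(f[1], env) or evaluate(f[2], env)
    if op == "implies":
        return (not evaluate(f[1], env)) or evaluate(f[2], env)
    raise ValueError(op)

def is_valid(f):
    """Answer "Yes"/"No": is f true under every truth assignment?"""
    vs = sorted(variables(f))
    return all(evaluate(f, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

p = ("var", "p")
excluded_middle = ("or", p, ("not", p))   # p or not-p: valid
```

Truth-table checking takes 2^n evaluations for n variables, but it always terminates; Church and Turing showed that no analogous terminating procedure exists once quantifiers over an infinite domain are allowed.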
https://en.wikipedia.org/wiki/Meta-mathematics
Middleware in the context of distributed applications is software that provides services beyond those provided by the operating system to enable the various components of a distributed system to communicate and manage data. Middleware supports and simplifies complex distributed applications. It includes web servers, application servers, messaging and similar tools that support application development and delivery. Middleware is especially integral to modern information technology based on XML, SOAP, Web services, and service-oriented architecture. Middleware often enables interoperability between applications that run on different operating systems, by supplying services so the application can exchange data in a standards-based way. Middleware sits "in the middle" between application software that may be working on different operating systems. It is similar to the middle layer of a three-tier single system architecture, except that it is stretched across multiple systems or applications. Examples include EAI software, telecommunications software, transaction monitors, and messaging-and-queueing software. The distinction between operating system and middleware functionality is, to some extent, arbitrary. While core kernel functionality can only be provided by the operating system itself, some functionality previously provided by separately sold middleware is now integrated in operating systems. A typical example is the TCP/IP stack for telecommunications, nowadays included in virtually every operating system. Middleware is defined as software that provides a link between separate software applications. It is sometimes referred to as plumbing because it connects two applications and passes data between them. Middleware allows data contained in one database to be accessed through another. This makes it particularly useful for enterprise application integration and data integration tasks.
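As a toy illustration of this "plumbing" role (a sketch invented for this article, not a real middleware product), a thin layer can expose one query interface over two applications that store their data differently, so that callers never see either backend's internals:

```python
# Two "applications" with incompatible data stores (toy stand-ins).
class PayrollApp:
    def __init__(self):
        self._rows = {"alice": 52000, "bob": 48000}   # dict keyed by name
    def lookup(self, name):
        return self._rows.get(name)

class SalesApp:
    def __init__(self):
        self._rows = [("alice", 7), ("carol", 12)]    # list of tuples
    def deals_for(self, name):
        return sum(n for who, n in self._rows if who == name)

# The middleware layer: callers use one interface and never see
# how (or where) each backend stores its data.
class EmployeeMiddleware:
    def __init__(self, payroll, sales):
        self._payroll = payroll
        self._sales = sales
    def report(self, name):
        return {"name": name,
                "salary": self._payroll.lookup(name),
                "deals": self._sales.deals_for(name)}

mw = EmployeeMiddleware(PayrollApp(), SalesApp())
```

Real middleware does the same mediation across machines, networks and operating systems rather than within one process, but the design idea, a uniform interface hiding heterogeneous backends, is the same.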
In more abstract terms, middleware is "the software layer that lies between the operating system and applications on each side of a distributed computing system in a network."[1] Middleware gained popularity in the 1980s as a solution to the problem of how to link newer applications to older legacy systems, although the term had been in use since 1968.[2] It also facilitated distributed processing, the connection of multiple applications to create a larger application, usually over a network. Middleware services, when compared to the operating system and network services, provide a more functional set of application programming interfaces for an application to build on. Middleware offers some unique technological advantages for business and industry. For example, traditional database systems are usually deployed in closed environments where users access the system only via a restricted network or intranet (e.g., an enterprise's internal network). With the phenomenal growth of the World Wide Web, users can access virtually any database for which they have proper access rights from anywhere in the world. Middleware addresses the problem of varying levels of interoperability among different database structures. Middleware facilitates transparent access to legacy database management systems (DBMSs) or applications via a web server without regard to database-specific characteristics.[3] Businesses frequently use middleware applications to link information from departmental databases, such as payroll, sales, and accounting, or databases housed in multiple geographic locations.[4] In the highly competitive healthcare community, laboratories make extensive use of middleware applications for data mining, laboratory information system (LIS) backup, and to combine systems during hospital mergers.
Middleware helps bridge the gap between separate LISs in a newly formed healthcare network following a hospital buyout.[5] Middleware can help software developers avoid having to write application programming interfaces (APIs) for every control program, by serving as an independent programming interface for their applications. For Future Internet network operation through traffic monitoring in multi-domain scenarios, using mediator tools (middleware) is a powerful help, since they allow operators, searchers and service providers to supervise quality of service and analyse eventual failures in telecommunication services.[6] The middleware stack comprises several components (CSMS, TV statistics and client applications). It is known as the software brains of OTT platforms, as it controls and interconnects all the components of the solution. The Content and Subscriber Management System (CSMS) is the central part of the solution, commonly referred to as an administration portal. Apart from being the main interface for operator personnel to administer the TV service (subscribers, content, packages, etc.), it also controls the majority of TV services and interacts with streaming, CDN and DRM servers to deliver live, VOD and recorded content to the end users. It also integrates with external systems for billing and provisioning, and with EPG and VOD content providers. Client applications authorize against the CSMS and communicate with it to provide the required TV services to end users on different devices.[7] Finally, e-commerce uses middleware to assist in handling rapid and secure transactions over many different types of computer environments.[8] In short, middleware has become a critical element across a broad range of industries, thanks to its ability to bring together resources across dissimilar networks or computing platforms. In 2004 members of the European Broadcasting Union (EBU) carried out a study of middleware with respect to system integration in broadcast environments.
This involved system design engineering experts from 10 major European broadcasters working over a 12-month period to understand the effect of predominantly software-based products on media production and broadcasting system design techniques. The resulting reports Tech 3300 and Tech 3300s were published and are freely available from the EBU web site.[9][10] Message-oriented middleware (MOM)[11] is middleware where transactions or event notifications are delivered between disparate systems or components by way of messages, often via an enterprise messaging system. With MOM, messages sent to the client are collected and stored until they are acted upon, while the client continues with other processing. Intelligent middleware (IMW)[13] provides real-time intelligence and event management through intelligent agents. The IMW manages the real-time processing of high-volume sensor signals and turns these signals into intelligent and actionable business information. The actionable information is then delivered in end-user power dashboards to individual users or is pushed to systems within or outside the enterprise. It is able to support various heterogeneous types of hardware and software and provides an API for interfacing with external systems. It should have a highly scalable, distributed architecture which embeds intelligence throughout the network to transform raw data systematically into actionable and relevant knowledge. It can also be packaged with tools to view and manage operations and build advanced network applications most effectively. Content-centric middleware offers a simple provider-consumer abstraction through which applications can issue requests for uniquely identified content, without worrying about where or how it is obtained.
Juno is one example, which allows applications to generate content requests associated with high-level delivery requirements.[14] The middleware then adapts the underlying delivery to access the content from sources that are best suited to matching the requirements. This is therefore similar to publish/subscribe middleware, as well as the content-centric networking paradigm. Policy appliance is a generic term referring to any form of middleware that manages policy rules. Policy appliances can mediate between data owners or producers, data aggregators, and data users. Among heterogeneous institutional systems or networks, they may be used to enforce, reconcile, and monitor agreed information management policies and laws across systems (or between jurisdictions) with divergent information policies or needs. Policy appliances can interact with smart data (data that carries with it contextually relevant terms for its own use), intelligent agents (queries that are self-credentialed, authenticating, or contextually adaptive), or context-aware applications to control information flows, protect security and confidentiality, and maintain privacy. Policy appliances support policy-based information management processes by enabling rules-based processing, selective disclosure, and accountability and oversight.[15] Examples of policy appliance technologies for rules-based processing include analytic filters, contextual search, semantic programs, labeling and wrapper tools, and DRM, among others; policy appliance technologies for selective disclosure include anonymization, content personalization, and subscription and publishing tools, among others; and policy appliance technologies for accountability and oversight include authentication, authorization, immutable and non-repudiable logging, and audit tools, among others. Other sources[citation needed] include additional classifications. IBM, Red Hat, Oracle Corporation and Microsoft are some of the vendors that provide middleware software.
Vendors such as Axway, SAP, TIBCO, Informatica, Objective Interface Systems, Pervasive, ScaleOut Software and webMethods were specifically founded to provide more niche middleware solutions. Groups such as the Apache Software Foundation, OpenSAF, the ObjectWeb Consortium (now OW2) and OASIS' AMQP encourage the development of open source middleware. The Microsoft .NET "Framework" architecture is essentially middleware, with typical middleware functions distributed between the various products, and with most inter-computer interaction handled by industry standards, open APIs or RAND software licences. Solace provides middleware in purpose-built hardware for implementations that need to operate at scale. StormMQ provides message-oriented middleware as a service.
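The store-and-forward behaviour of message-oriented middleware described above (messages are held until the consumer is ready, while the sender continues with other processing) can be sketched with an in-process queue; the broker class and topic names here are invented for illustration:

```python
import queue

class MessageBroker:
    """Toy MOM broker: producers post to named queues; consumers
    drain them later, so neither side blocks on the other."""
    def __init__(self):
        self._queues = {}

    def send(self, topic, message):
        # Sender returns immediately; the message is stored.
        self._queues.setdefault(topic, queue.Queue()).put(message)

    def receive(self, topic):
        # Consumer picks up stored messages whenever it is ready.
        q = self._queues.get(topic)
        return None if q is None or q.empty() else q.get()

broker = MessageBroker()
broker.send("orders", {"id": 1, "item": "widget"})
broker.send("orders", {"id": 2, "item": "gadget"})
# ... the producer carries on with other processing ...
first = broker.receive("orders")   # the consumer collects later, in order
```

A real enterprise messaging system adds persistence, delivery guarantees and network transport, but the decoupling of sender from receiver shown here is the core idea.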
https://en.wikipedia.org/wiki/Metacomputing_software
Metaprogramming is a computer programming technique in which computer programs have the ability to treat other programs as their data. It means that a program can be designed to read, generate, analyse, or transform other programs, and even modify itself, while running.[1][2] In some cases, this allows programmers to minimize the number of lines of code to express a solution, in turn reducing development time.[3] It also allows programs more flexibility to efficiently handle new situations without recompiling. Metaprogramming can be used to move computations from runtime to compile time, to generate code using compile-time computations, and to enable self-modifying code. The ability of a programming language to be its own metalanguage allows reflective programming, and is termed reflection.[4] Reflection is a valuable language feature to facilitate metaprogramming. Metaprogramming was popular in the 1970s and 1980s using list processing languages such as Lisp. Lisp machine hardware gained some notice in the 1980s, and enabled applications that could process code. They were often used for artificial intelligence applications. Metaprogramming enables developers to write programs and develop code that falls under the generic programming paradigm. Having the programming language itself as a first-class data type (as in Lisp, Prolog, SNOBOL, or Rebol) is also very useful; this is known as homoiconicity. Generic programming invokes a metaprogramming facility within a language by allowing one to write code without the concern of specifying data types, since they can be supplied as parameters when used. Metaprogramming usually works in one of three ways.[5] Lisp is probably the quintessential language with metaprogramming facilities, both because of its historical precedence and because of the simplicity and power of its metaprogramming. In Lisp metaprogramming, the unquote operator (typically a comma) introduces code that is evaluated at program definition time rather than at run time.
The metaprogramming language is thus identical to the host programming language, and existing Lisp routines can be directly reused for metaprogramming if desired. This approach has been implemented in other languages by incorporating an interpreter in the program, which works directly with the program's data. There are implementations of this kind for some common high-level languages, such as RemObjects' Pascal Script for Object Pascal. A simple example of a metaprogram is this POSIX shell script, which is an example of generative programming: This script (or program) generates a new 993-line program that prints out the numbers 1–992. This is only an illustration of how to use code to write more code; it is not the most efficient way to print out a list of numbers. Nonetheless, a programmer can write and execute this metaprogram in less than a minute, and will have generated over 1000 lines of code in that amount of time. A quine is a special kind of metaprogram that produces its own source code as its output. Quines are generally of recreational or theoretical interest only. Not all metaprogramming involves generative programming. If programs are modifiable at runtime, or if incremental compiling is available (such as in C#, Forth, Frink, Groovy, JavaScript, Lisp, Elixir, Lua, Nim, Perl, PHP, Python, Rebol, Ruby, Rust, R, SAS, Smalltalk, and Tcl), then techniques can be used to perform metaprogramming without generating source code. One style of generative approach is to employ domain-specific languages (DSLs). A fairly common example of using DSLs involves generative metaprogramming: lex and yacc, two tools used to generate lexical analysers and parsers, let the user describe the language using regular expressions and context-free grammars, and embed the complex algorithms required to efficiently parse the language. One usage of metaprogramming is to instrument programs in order to do dynamic program analysis.
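The same generative idea can be sketched in Python (an illustrative analogue added here, not taken from the original article): a metaprogram assembles the source of another program as a string, then compiles and executes it.

```python
import io
from contextlib import redirect_stdout

# Metaprogram: generate the source of a program that prints 1..5,
# one print statement per line, then compile and run it.
lines = [f"print({i})" for i in range(1, 6)]
generated_source = "\n".join(lines)

# Execute the generated program, capturing what it prints.
buffer = io.StringIO()
with redirect_stdout(buffer):
    exec(compile(generated_source, "<generated>", "exec"))

output = buffer.getvalue().split()
```

As with the shell example, this is only an illustration of code writing code, not an efficient way to print a list of numbers.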
Some argue that there is a steep learning curve to make complete use of metaprogramming features.[8] Since metaprogramming gives more flexibility and configurability at runtime, misuse or incorrect use of metaprogramming can result in unwarranted and unexpected errors that can be extremely difficult for an average developer to debug. It can introduce risks in the system and make it more vulnerable if not used with care. Common problems that can occur due to the wrong use of metaprogramming include the compiler being unable to identify missing configuration parameters, and invalid or incorrect data resulting in unknown exceptions or incorrect results.[9] Due to this, some believe[8] that only highly skilled developers should work on developing features which exercise metaprogramming in a language or platform, and that average developers must learn how to use these features as part of convention. The IBM/360 and derivatives had powerful macro assembler facilities that were often used to generate complete assembly language programs[citation needed] or sections of programs (for different operating systems, for instance). Macros provided with the CICS transaction processing system had assembler macros that generated COBOL statements as a pre-processing step. Other assemblers, such as MASM, also support macros. Metaclasses are provided by several programming languages. The use of dependent types allows proving that generated code is never invalid.[15] However, this approach is leading-edge and rarely found outside of research programming languages. The list of notable metaprogramming systems is maintained at List of program transformation systems.
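Python is one language that exposes metaclasses. The short sketch below (class names invented for illustration) shows a metaclass that intercepts class creation to register every subclass automatically, a common metaprogramming use:

```python
class PluginMeta(type):
    """Metaclass: its __new__ runs when a class is *defined*,
    not when it is instantiated."""
    registry = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:  # skip the abstract base class itself
            PluginMeta.registry[name] = cls
        return cls

class Plugin(metaclass=PluginMeta):
    pass

# Merely defining these classes registers them; no explicit call needed.
class CsvLoader(Plugin): pass
class JsonLoader(Plugin): pass
```

This is the kind of feature the paragraph above warns about: the registration happens invisibly at class-definition time, which is powerful but can surprise a reader who only sees the subclass definitions.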
https://en.wikipedia.org/wiki/Metaprogramming
A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2022, supercomputers have existed which can perform over 10^18 FLOPS, so-called exascale supercomputers.[3] For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13).[4][5] Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems.[6] Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.[7] Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). They have been essential in the field of cryptanalysis.[8] Supercomputers were introduced in the 1960s, and for several decades the fastest was made by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran more quickly than their more general-purpose contemporaries. Through the decade, increasing amounts of parallelism were added, with one to four processors being typical. In the 1970s, vector processors operating on large arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976.
Vector computers remained the dominant design into the 1990s. From then until today, massively parallel supercomputers with tens of thousands of off-the-shelf processors became the norm.[9][10] The U.S. has long been a leader in the supercomputer field, initially through Cray's nearly uninterrupted dominance, and later through a variety of technology companies. Japan made significant advancements in the field during the 1980s and 1990s, while China has become increasingly active in supercomputing in recent years. As of November 2024, Lawrence Livermore National Laboratory's El Capitan is the world's fastest supercomputer.[11] The US has five of the top 10; Italy has two; Japan, Finland, and Switzerland have one each.[12] In June 2018, all combined supercomputers on the TOP500 list broke the 1 exaFLOPS mark.[13] In 1960, UNIVAC built the Livermore Atomic Research Computer (LARC), today considered among the first supercomputers, for the US Navy Research and Development Center. It still used high-speed drum memory, rather than the newly emerging disk drive technology.[14] Also among the first supercomputers was the IBM 7030 Stretch. The IBM 7030 was built by IBM for the Los Alamos National Laboratory, which in 1955 had requested a computer 100 times faster than any existing computer. The IBM 7030 used transistors, magnetic core memory and pipelined instructions, prefetched data through a memory controller, and included pioneering random access disk drives. The IBM 7030 was completed in 1961 and, despite not meeting the challenge of a hundredfold increase in performance, it was purchased by the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for the IBM 7950 Harvest, a supercomputer built for cryptanalysis.[15] The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn.
He designed the Atlas to have memory space for up to a million words of 48 bits, but because magnetic storage with such a capacity was unaffordable, the actual core memory of the Atlas was only 16,000 words, with a drum providing memory for a further 96,000 words. The Atlas Supervisor swapped data in the form of pages between the magnetic core and the drum. The Atlas operating system also introduced time-sharing to supercomputing, so that more than one program could be executed on the supercomputer at any one time.[16] Atlas was a joint venture between Ferranti and Manchester University and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.[17] The CDC 6600, designed by Seymour Cray, was finished in 1964 and marked the transition from germanium to silicon transistors. Silicon transistors could run more quickly, and the overheating problem was solved by introducing refrigeration to the supercomputer design.[18] Thus, the CDC 6600 became the fastest computer in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed a supercomputer and defined the supercomputing market, when one hundred computers were sold at $8 million each.[19][20][21][22] Cray left CDC in 1972 to form his own company, Cray Research.[20] Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, which became one of the most successful supercomputers in history.[23][24] The Cray-2 was released in 1985. It had eight central processing units (CPUs) and liquid cooling, and the electronics coolant liquid Fluorinert was pumped through the supercomputer architecture. It reached 1.9 gigaFLOPS, making it the first supercomputer to break the gigaflop barrier.[25] The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV.
This machine was the first realized example of a true massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, in this concept the computer instead feeds separate parts of the data to entirely different processors and then recombines the results. The ILLIAC's design was finalized in 1966 with 256 processors and an offered speed of up to 1 GFLOPS, compared to the 1970s Cray-1's peak of 250 MFLOPS. However, development problems led to only 64 processors being built, and the system could never operate more quickly than about 200 MFLOPS while being much larger and more complex than the Cray. Another problem was that writing software for the system was difficult, and getting peak performance from it was a matter of serious effort. But the partial success of the ILLIAC IV was widely seen as pointing the way to the future of supercomputing. Cray argued against this, famously quipping that "If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?"[26] But by the early 1980s, several teams were working on parallel designs with thousands of processors, notably the Connection Machine (CM) that developed from research at MIT. The CM-1 used as many as 65,536 simplified custom microprocessors connected together in a network to share data. Several updated versions followed; the CM-5 supercomputer is a massively parallel processing computer capable of many billions of arithmetic operations per second.[27] In 1982, Osaka University's LINKS-1 Computer Graphics System used a massively parallel processing architecture, with 514 microprocessors, including 257 Zilog Z8001 control processors and 257 iAPX 86/20 floating-point processors.
It was mainly used for rendering realistic 3D computer graphics.[28] Fujitsu's VPP500 from 1992 is unusual since, to achieve higher speeds, its processors used GaAs, a material normally reserved for microwave applications due to its toxicity.[29] Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7 gigaFLOPS (GFLOPS) per processor.[30][31] The Hitachi SR2201 obtained a peak performance of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[32][33][34] The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.[35] Software development remained a problem, but the CM series sparked off considerable research into this issue. Similar designs using custom hardware were made by many companies, including the Evans & Sutherland ES-1, MasPar, nCUBE, Intel iPSC and the Goodyear MPP. But by the mid-1990s, general-purpose CPU performance had improved so much that a supercomputer could be built using them as the individual processing units, instead of using custom chips. By the turn of the 21st century, designs featuring tens of thousands of commodity CPUs were the norm, with later machines adding graphics processing units to the mix.[9][10] In 1998, David Bader developed the first Linux supercomputer using commodity parts.[36] While at the University of New Mexico, Bader sought to build a supercomputer running Linux using consumer off-the-shelf parts and a high-speed low-latency interconnection network. The prototype utilized an Alta Technologies "AltaCluster" of eight dual 333 MHz Intel Pentium II computers running a modified Linux kernel.
Bader ported a significant amount of software to provide Linux support for necessary components, as well as code from members of the National Computational Science Alliance (NCSA) to ensure interoperability, as none of it had been run on Linux previously.[37] Using the successful prototype design, he led the development of "RoadRunner", the first Linux supercomputer for open use by the national science and engineering community via the National Science Foundation's National Technology Grid. RoadRunner was put into production use in April 1999. At the time of its deployment, it was considered one of the 100 fastest supercomputers in the world.[37][38] Though Linux-based clusters using consumer-grade parts, such as Beowulf, existed prior to the development of Bader's prototype and RoadRunner, they lacked the scalability, bandwidth, and parallel computing capabilities to be considered "true" supercomputers.[37] Systems with a massive number of processors generally take one of two paths. In the grid computing approach, the processing power of many computers, organized as distributed, diverse administrative domains, is opportunistically used whenever a computer is available.[39] In another approach, many processors are used in proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect becomes very important, and modern supercomputers have used various approaches ranging from enhanced Infiniband systems to three-dimensional torus interconnects.[40][41] The use of multi-core processors combined with centralization is an emerging direction, e.g.
as in the Cyclops64 system.[42][43] As the price, performance and energy efficiency of general-purpose graphics processing units (GPGPUs) have improved, a number of petaFLOPS supercomputers such as Tianhe-I and Nebulae have started to rely on them.[44] However, other systems such as the K computer continue to use conventional processors such as SPARC-based designs, and the overall applicability of GPGPUs in general-purpose high-performance computing applications has been the subject of debate, in that while a GPGPU may be tuned to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent to tune the application to it.[45] However, GPUs are gaining ground, and in 2012 the Jaguar supercomputer was transformed into Titan by retrofitting CPUs with GPUs.[46][47][48] High-performance computers have an expected life cycle of about three years before requiring an upgrade.[49] The Gyoukou supercomputer is unique in that it uses both a massively parallel design and liquid immersion cooling. A number of special-purpose systems have been designed, dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom ASICs, allowing better price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers include Belle,[50] Deep Blue,[51] and Hydra[52] for playing chess, Gravity Pipe for astrophysics,[53] MDGRAPE-3 for protein structure prediction and molecular dynamics,[54] and Deep Crack for breaking the DES cipher.[55] Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers.[58][59][60] The large amount of heat generated by a system may also have other effects, e.g.
reducing the lifetime of other system components.[61] There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system, to air cooling with normal air conditioning temperatures.[62][63] A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts (MW) of electricity.[64] The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour or about $3.5 million per year. Heat management is a major issue in complex electronic devices and affects powerful computer systems in various ways.[65] The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue.[66][67][68] The packing of thousands of processors together inevitably generates significant amounts of heat density that need to be dealt with. The Cray-2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure.[62] However, the submerged liquid cooling approach was not practical for the multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company.[63] In the Blue Gene system, IBM deliberately used low-power processors to deal with heat density.[69] The IBM Power 775, released in 2011, has closely packed elements that require water cooling.[70] The IBM Aquasar system uses hot water cooling to achieve energy efficiency, the water being used to heat buildings as well.[71][72] The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt".
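The power-cost figures quoted above can be checked directly (a worked example added for illustration; the 4 MW load and $0.10/kWh tariff are the ones given in the text):

```python
# A 4 MW continuous load billed at $0.10 per kWh.
power_kw = 4_000          # 4 MW expressed in kilowatts
price_per_kwh = 0.10      # dollars per kilowatt-hour

cost_per_hour = power_kw * price_per_kwh    # 4000 kWh/h * $0.10 = $400/h
cost_per_year = cost_per_hour * 24 * 365    # $400 * 8760 h = $3,504,000
```

This matches the "about $3.5 million per year" figure in the passage.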
In 2008, Roadrunner by IBM operated at 376 MFLOPS/W.[73][74] In November 2010, the Blue Gene/Q reached 1,684 MFLOPS/W,[75][76] and in June 2011 the top two spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2,097 MFLOPS/W), with the DEGIMA cluster in Nagasaki placing third with 1,375 MFLOPS/W.[77] Because copper wires can transfer energy into a supercomputer with much higher power densities than forced air or circulating refrigerants can remove waste heat,[78] the ability of the cooling systems to remove waste heat is a limiting factor.[79][80] As of 2015, many existing supercomputers have more infrastructure capacity than the actual peak demand of the machine – designers generally conservatively design the power and cooling infrastructure to handle more than the theoretical peak electrical power consumed by the supercomputer. Designs for future supercomputers are power-limited – the thermal design power of the supercomputer as a whole, the amount that the power and cooling infrastructure can handle, is somewhat more than the expected normal power consumption, but less than the theoretical peak power consumption of the electronic hardware.[81] Since the end of the 20th century, supercomputer operating systems have undergone major transformations, based on the changes in supercomputer architecture.[82] While early operating systems were custom-tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems toward the adaptation of generic software such as Linux.[83] Since modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g.
using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger system such as a full Linux distribution on server and I/O nodes.[84][85][86] While in a traditional multi-user computer system job scheduling is, in effect, a tasking problem for processing and peripheral resources, in a massively parallel system the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with inevitable hardware failures when tens of thousands of processors are present.[87] Although most modern supercomputers use Linux-based operating systems, each manufacturer has its own specific Linux distribution, and no industry standard exists, partly because the differences in hardware architectures require changes to optimize the operating system to each hardware design.[82][88] The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Software tools for distributed processing include standard APIs such as MPI[90] and PVM, VTL, and open source software such as Beowulf. In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared-memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA or OpenCL. Moreover, it is quite difficult to debug and test parallel programs. Special techniques need to be used for testing and debugging such applications. Opportunistic supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks.
Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing performance scales. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamic simulations.[91] The fastest grid computing system is the volunteer computing project Folding@home (F@h). As of April 2020, F@h reported 2.5 exaFLOPS of x86 processing power. Of this, over 100 PFLOPS are contributed by clients running on various GPUs, and the rest from various CPU systems.[92] The Berkeley Open Infrastructure for Network Computing (BOINC) platform hosts a number of volunteer computing projects. As of February 2017, BOINC recorded a processing power of over 166 petaFLOPS through over 762 thousand active computers (hosts) on the network.[93] As of October 2016, Great Internet Mersenne Prime Search's (GIMPS) distributed Mersenne prime search achieved about 0.313 PFLOPS through over 1.3 million computers.[94] The PrimeNet server has supported GIMPS's grid computing approach, one of the earliest volunteer computing projects, since 1997. Quasi-opportunistic supercomputing is a form of distributed computing whereby the "super virtual computer" of many networked, geographically disperse computers performs computing tasks that demand huge processing power.[95] Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and the use of intelligence about the availability and reliability of individual systems within the supercomputing network.
However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through the implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault-tolerant message passing libraries and data pre-conditioning.[95] Cloud computing, with its recent rapid expansion and development, has grabbed the attention of high-performance computing (HPC) users and developers in recent years. Cloud computing attempts to provide HPC-as-a-service exactly like other forms of services available in the cloud such as software as a service, platform as a service, and infrastructure as a service. HPC users may benefit from the cloud in various respects, such as scalability and resources being on-demand, fast, and inexpensive. On the other hand, moving HPC applications to the cloud has a set of challenges too. Good examples of such challenges are virtualization overhead in the cloud, multi-tenancy of resources, and network latency issues. Much research is currently being done to overcome these challenges and make HPC in the cloud a more realistic possibility.[96][97][98][99] In 2016, Penguin Computing, Parallel Works, R-HPC, Amazon Web Services, Univa, Silicon Graphics International, Rescale, Sabalcore, and Gomput started to offer HPC cloud computing. The Penguin On Demand (POD) cloud is a bare-metal compute model to execute code, but each user is given a virtualized login node. POD computing nodes are connected via non-virtualized 10 Gbit/s Ethernet or QDR InfiniBand networks. User connectivity to the POD data center ranges from 50 Mbit/s to 1 Gbit/s.[100] Citing Amazon's EC2 Elastic Compute Cloud, Penguin Computing argues that virtualization of compute nodes is not suitable for HPC.
Penguin Computing has also criticized that HPC clouds may have allocated computing nodes to customers that are far apart, causing latency that impairs performance for some HPC applications.[101] Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g. a very complex weather simulation application.[102] Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve a few somewhat large problems or many small problems.[102] Architectures that lend themselves to supporting many users for routine everyday tasks may have a lot of capacity but are not typically considered supercomputers, given that they do not solve a single very complex problem.[102] In general, the speed of supercomputers is measured and benchmarked in FLOPS (floating-point operations per second), and not in terms of MIPS (million instructions per second), as is the case with general-purpose computers.[103] These measurements are commonly used with an SI prefix such as tera-, combined into the shorthand TFLOPS (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand PFLOPS (10^15 FLOPS, pronounced petaflops). Petascale supercomputers can process one quadrillion (10^15) (1,000 trillion) FLOPS. Exascale is computing performance in the exaFLOPS (EFLOPS) range. An EFLOPS is one quintillion (10^18) FLOPS (one million TFLOPS).
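The prefix arithmetic above can be made concrete with a small conversion table; the dictionary and function names below are illustrative, not standard library features.

```python
# SI prefixes used for FLOPS figures, as powers of ten
FLOPS_PREFIXES = {"tera": 10**12, "peta": 10**15, "exa": 10**18, "zetta": 10**21}

def convert(value, src, dst):
    """Convert a FLOPS figure from one SI prefix to another."""
    return value * FLOPS_PREFIXES[src] / FLOPS_PREFIXES[dst]

print(convert(1, "peta", "tera"))  # 1000.0 — one PFLOPS is a thousand TFLOPS
print(convert(1, "exa", "tera"))   # 1000000.0 — one EFLOPS is a million TFLOPS
```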
However, the performance of a supercomputer can be severely impacted by fluctuations brought on by factors such as system load, network traffic, and concurrent processes, as noted by Brehm and Bruhwiler (2015).[104] No single number can reflect the overall performance of a computer system, yet the goal of the Linpack benchmark is to approximate how fast the computer solves numerical problems, and it is widely used in the industry.[105] The FLOPS measurement is either quoted based on the theoretical floating-point performance of a processor (derived from the manufacturer's processor specifications and shown as "Rpeak" in the TOP500 lists), which is generally unachievable when running real workloads, or the achievable throughput, derived from the LINPACK benchmarks and shown as "Rmax" in the TOP500 list.[106] The LINPACK benchmark typically performs LU decomposition of a large matrix.[107] The LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which for example may require more memory bandwidth, better integer computing performance, or a high-performance I/O system to achieve high levels of performance.[105] Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time. This is a list of the computers which appeared at the top of the TOP500 list since June 1993,[108] and the "Peak speed" is given as the "Rmax" rating.
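The LU decomposition at the heart of LINPACK can be sketched in pure Python using Doolittle's method. This is only a toy illustration: it performs no pivoting (so it assumes all pivots are nonzero), whereas the real HPL benchmark uses partial pivoting and blocked, distributed kernels.

```python
def lu_decompose(a):
    """Doolittle LU factorization without pivoting: returns (L, U) with A = L*U.
    Assumes no zero pivot is encountered (toy version, no pivoting)."""
    n = len(a)
    lower = [[0.0] * n for _ in range(n)]
    upper = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):      # row i of U
            upper[i][j] = a[i][j] - sum(lower[i][k] * upper[k][j] for k in range(i))
        lower[i][i] = 1.0          # unit diagonal of L
        for j in range(i + 1, n):  # column i of L
            lower[j][i] = (a[j][i] - sum(lower[j][k] * upper[k][i] for k in range(i))) / upper[i][i]
    return lower, upper

A = [[4.0, 3.0, 0.0], [8.0, 10.0, 2.0], [0.0, 4.0, 10.0]]
L, U = lu_decompose(A)
# multiply L and U back together to check the factorization
recon = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
print(recon)  # [[4.0, 3.0, 0.0], [8.0, 10.0, 2.0], [0.0, 4.0, 10.0]]
```

Solving a dense system this way costs roughly 2n³/3 floating-point operations, which is why LINPACK scales so directly with FLOPS.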
In 2018, Lenovo became the world's largest provider of TOP500 supercomputers with 117 units produced.[109] Legend:[112] The stages of supercomputer application are summarized in the following table: The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.[121] Modern weather forecasting relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.[122] In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.[123] The Advanced Simulation and Computing Program currently uses supercomputers to maintain and simulate the United States nuclear stockpile.[124] In early 2020, COVID-19 was front and center in the world. Supercomputers used different simulations to find compounds that could potentially stop the spread. These computers run for tens of hours using many CPUs running in parallel to model different processes.[125][126][127] In the 2010s, China, the United States, the European Union, and others competed to be the first to create a 1 exaFLOP (10^18 or one quintillion FLOPS) supercomputer.[128] Erik P.
DeBenedictis of Sandia National Laboratories has theorized that a zettaFLOPS (10^21 or one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[129][130][131] Such systems might be built around 2030.[132] Many Monte Carlo simulations use the same algorithm to process a randomly generated data set; in particular, integro-differential equations describing physical transport processes: the random paths, collisions, and energy and momentum depositions of neutrons, photons, ions, electrons, etc. The next step for microprocessors may be into the third dimension; and, specializing to Monte Carlo, the many layers could be identical, simplifying the design and manufacture process.[133] The cost of operating high-performance supercomputers has risen, mainly due to increasing power consumption. In the mid-1990s a top 10 supercomputer required in the range of 100 kilowatts; in 2010 the top 10 supercomputers required between 1 and 2 megawatts.[134] A 2010 study commissioned by DARPA identified power consumption as the most pervasive challenge in achieving exascale computing.[135] At the time, a megawatt-year of energy consumption cost about 1 million dollars. Supercomputing facilities were constructed to efficiently remove the increasing amount of heat produced by modern multi-core central processing units. Based on the energy consumption of the Green 500 list of supercomputers between 2007 and 2011, a supercomputer with 1 exaFLOPS in 2011 would have required nearly 500 megawatts. Operating systems were developed for existing hardware to conserve energy whenever possible.[136] CPU cores not in use during the execution of a parallelized application were put into low-power states, producing energy savings for some supercomputing applications.[137] The increasing cost of operating supercomputers has been a driving factor in a trend toward bundling of resources through a distributed supercomputer infrastructure.
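The Monte Carlo pattern mentioned above, one algorithm applied over and over to randomly generated data, is easy to illustrate with the classic estimate of π; this is a toy stand-in for the transport simulations described in the text, and the sample count and seed are arbitrary choices.

```python
import random

def estimate_pi(samples, seed=0):
    """Monte Carlo estimate of pi: fraction of random points in the unit
    square that land inside the quarter circle, scaled by 4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / samples

print(estimate_pi(100_000))  # close to 3.14159; accuracy improves with more samples
```

Because every sample is independent, this kind of workload parallelizes almost perfectly, which is why it maps so well onto massively parallel (and potentially layered) hardware.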
National supercomputing centers first emerged in the US, followed by Germany and Japan. The European Union launched the Partnership for Advanced Computing in Europe (PRACE) with the aim of creating a persistent pan-European supercomputer infrastructure with services to support scientists across the European Union in porting, scaling and optimizing supercomputing applications.[134] Iceland built the world's first zero-emission supercomputer. Located at the Thor Data Center in Reykjavík, Iceland, this supercomputer relies on completely renewable sources for its power rather than fossil fuels. The colder climate also reduces the need for active cooling, making it one of the greenest facilities in the world of computers.[138] Funding supercomputer hardware also became increasingly difficult. In the mid-1990s a top 10 supercomputer cost about 10 million euros, while in 2010 the top 10 supercomputers required an investment of between 40 and 50 million euros.[134] In the 2000s national governments put in place different strategies to fund supercomputers. In the UK the national government funded supercomputers entirely, and high-performance computing was put under the control of a national funding agency. Germany developed a mixed funding model, pooling local state funding and federal funding.[134] Examples of supercomputers in fiction include HAL 9000, Multivac, The Machine Stops, GLaDOS, The Evitable Conflict, Vulcan's Hammer, Colossus, WOPR, AM, and Deep Thought. A supercomputer from Thinking Machines was mentioned as the supercomputer used to sequence the DNA extracted from preserved parasites in the Jurassic Park series.
https://en.wikipedia.org/wiki/Supercomputing
In computer science, computational intelligence (CI) refers to concepts, paradigms, algorithms and implementations of systems that are designed to show "intelligent" behavior in complex and changing environments.[1] These systems are aimed at mastering complex tasks in a wide variety of technical or commercial areas and offer solutions that recognize and interpret patterns, control processes, support decision-making or autonomously manoeuvre vehicles or robots in unknown environments, among other things.[2] These concepts and paradigms are characterized by the ability to learn or adapt to new situations, to generalize, to abstract, to discover and associate.[3] Nature-analog or nature-inspired methods play a key role, such as neuroevolution.[1] CI approaches primarily address those complex real-world problems for which mathematical or traditional modeling is not appropriate for various reasons: the processes cannot be described exactly with complete knowledge, the processes are too complex for mathematical reasoning, they contain some uncertainties during the process, such as unforeseen changes in the environment or in the process itself, or the processes are simply stochastic in nature. Thus, CI techniques are properly aimed at processes that are ill-defined, complex, nonlinear, time-varying and/or stochastic.[4] A recent definition of the IEEE Computational Intelligence Society describes CI as the theory, design, application and development of biologically and linguistically motivated computational paradigms. Traditionally the three main pillars of CI have been Neural Networks, Fuzzy Systems and Evolutionary Computation. ... CI is an evolving field and at present, in addition to the three main constituents, it encompasses computing paradigms like ambient intelligence, artificial life, cultural learning, artificial endocrine networks, social reasoning, and artificial hormone networks. ...
Over the last few years there has been an explosion of research on deep learning, in particular deep convolutional neural networks. Nowadays, deep learning has become the core method for artificial intelligence. In fact, some of the most successful AI systems are based on CI.[5] However, as CI is an emerging and developing field, there is no final definition of CI,[6][7][8] especially in terms of the list of concepts and paradigms that belong to it.[3][9][10] The general requirements for the development of an "intelligent system" are ultimately always the same, namely the simulation of intelligent thinking and action in a specific area of application. To do this, the knowledge about this area must be represented in a model so that it can be processed. The quality of the resulting system depends largely on how well the model was chosen in the development process. Sometimes data-driven methods are suitable for finding a good model, and sometimes logic-based knowledge representations deliver better results. Hybrid models are usually used in real applications.[2] According to current textbooks, the following methods and paradigms, which largely complement each other, can be regarded as parts of CI:[11][12][13][14][15][16][17] Artificial intelligence (AI) is used in the media, but also by some of the scientists involved, as a kind of umbrella term for the various techniques associated with it or with CI.[5][18] Craenen and Eiben state that attempts to define or at least describe CI can usually be assigned to one or more of the following groups: The relationship between CI and AI has been a frequently discussed topic during the development of CI.
While the above list implies that they are synonyms, the vast majority of AI/CI researchers working on the subject consider them to be distinct fields, where either[8][18] The view of the first of the above three points goes back to Zadeh, the founder of fuzzy set theory, who differentiated machine intelligence into hard and soft computing techniques, which are used in artificial intelligence on the one hand and computational intelligence on the other.[19][20] In hard computing (HC) and AI, inaccuracy and uncertainty are undesirable characteristics of a system, while soft computing (SC), and thus CI, focuses on dealing with these characteristics.[14] The adjacent figure illustrates these relationships and lists the most important CI techniques.[6] Another frequently mentioned distinguishing feature is the representation of information in symbolic form in AI and in sub-symbolic form in CI techniques.[17][21] Hard computing is a conventional computing method based on the principles of certainty and accuracy, and it is deterministic. It requires a precisely stated analytical model of the task to be processed and a prewritten program, i.e. a fixed set of instructions. The models used are based on Boolean logic (also called crisp logic[22]), where e.g. an element can either be a member of a set or not, and there is nothing in between. When applied to real-world tasks, systems based on HC result in specific control actions defined by a mathematical model or algorithm. If an unforeseen situation occurs that is not included in the model or algorithm used, the action will most likely fail.[23][24][25][26] Soft computing, on the other hand, is based on the fact that the human mind is capable of storing information and processing it in a goal-oriented way, even if it is imprecise and lacks certainty.[20] SC is based on the model of the human brain with probabilistic thinking, fuzzy logic and multi-valued logic.
Soft computing can process a wealth of data and perform a large number of computations, which may not be exact, in parallel. For hard problems for which no satisfying exact solutions based on HC are available, SC methods can be applied successfully. SC methods are usually stochastic in nature, i.e. they are randomly defined processes that can be analyzed statistically but not with precision. Up to now, the results of some CI methods, such as deep learning, cannot be verified, and it is also not clear what they are based on. This problem represents an important scientific issue for the future.[23][24][25][26] AI and CI are catchy terms,[18] but they are also so similar that they can be confused. The meaning of both terms has developed and changed over a long period of time,[27][28] with AI being used first.[3][9] Bezdek describes this impressively and concludes that such buzzwords are frequently used and hyped by the scientific community, science management and (science) journalism.[18] Not least because AI and biological intelligence are emotionally charged terms[3][18] and it is still difficult to find a generally accepted definition for the basic term intelligence.[3][10] In 1950, Alan Turing, one of the founding fathers of computer science, developed a test for computer intelligence known as the Turing test.[29] In this test, a person can ask questions via a keyboard and a monitor without knowing whether their counterpart is a human or a computer. A computer is considered intelligent if the interrogator cannot distinguish the computer from a human. This illustrates the discussion about intelligent computers at the beginning of the computer age.
The term Computational Intelligence was first used as the title of the journal of the same name in 1985[30][31] and later by the IEEE Neural Networks Council (NNC), which was founded in 1989 by a group of researchers interested in the development of biological and artificial neural networks.[32] On November 21, 2001, the NNC became the IEEE Neural Networks Society, which became the IEEE Computational Intelligence Society two years later by including new areas of interest such as fuzzy systems and evolutionary computation. The NNC helped organize the first IEEE World Congress on Computational Intelligence in Orlando, Florida in 1994.[32] At this conference the first clear definition of Computational Intelligence was introduced by Bezdek: A system is computationally intelligent when it: deals with only numerical (low-level) data, has pattern-recognition components, does not use knowledge in the AI sense; and additionally when it (begins to) exhibit (1) computational adaptivity; (2) computational fault tolerance; (3) speed approaching human-like turnaround and (4) error rates that approximate human performance.[33] Today, with machine learning and deep learning in particular utilizing a breadth of supervised, unsupervised, and reinforcement learning approaches, the CI landscape has been greatly enhanced with novel intelligent approaches. The main applications of Computational Intelligence include computer science, engineering, data analysis and bio-medicine. Unlike conventional Boolean logic, fuzzy logic is based on fuzzy sets. In both models, a property of an object is defined as belonging to a set; in fuzzy logic, however, the membership is not sharply defined by a yes/no distinction, but is graded. This is done using membership functions that assign a real number between 0 and 1 to each element as the degree of membership.
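Graded membership of this kind can be sketched with a triangular membership function, a common textbook choice; the "warm" and "cool" temperature sets below are invented for illustration, and fuzzy AND/OR are taken as the usual min/max operations.

```python
def triangular(x, left, peak, right):
    """Triangular membership function: 0 outside [left, right], 1 at the peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Hypothetical fuzzy sets over temperatures in degrees Celsius
warm = lambda t: triangular(t, 15.0, 22.0, 30.0)
cool = lambda t: triangular(t, 5.0, 12.0, 19.0)

print(warm(22.0))                   # 1.0  — fully a member of "warm"
print(warm(18.5))                   # 0.5  — partially warm
print(min(warm(18.5), cool(18.5)))  # fuzzy AND: degree of "warm and cool"
print(max(warm(18.5), cool(18.5)))  # fuzzy OR:  degree of "warm or cool"
```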
The new set operations introduced in this way define the operations of an associated logic calculus that allows the modeling of inference processes, i.e. logical reasoning.[34] Therefore, fuzzy logic is well suited for engineering decisions without clear certainties and uncertainties, or with imprecise data, as with natural language processing technologies,[35] but it doesn't have learning abilities.[36] This technique tends to apply to a wide range of domains such as control engineering,[37] image processing,[38] fuzzy data clustering[38][39] and decision making.[35] Fuzzy logic-based control systems can be found, for example, in household appliances such as washing machines, dishwashers and microwave ovens, or in motor vehicles in gear transmission and braking systems. This principle can also be encountered when using a video camera, as it helps to stabilize the image when the camera is held unsteadily. Other areas such as medical diagnostics, satellite controllers and business strategy selection are just a few more examples of today's applications of fuzzy logic.[35][40] An important field of CI is the development of artificial neural networks (ANNs) based on biological ones, which can be defined by three main components: the cell body, which processes the information; the axon, which is a device enabling signal conduction; and the synapse, which controls signals.[41][42] Therefore, ANNs are very well suited for distributed information processing systems, enabling processing and learning from experiential data.[43][44] ANNs aim to mimic cognitive processes of the human brain.
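A single artificial neuron of the kind just described — weighted inputs (synapses), a summing cell body, and a thresholded output — can be sketched as the classic perceptron; here it learns the logical AND function, with the learning rate, epoch count and training data being illustrative choices.

```python
def step(x):
    """Threshold activation of the neuron's output."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=1):
    """Train one neuron with the perceptron learning rule.
    samples: list of ((input1, input2), target) pairs."""
    w = [0, 0]  # synaptic weights
    b = 0       # bias of the cell body
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)  # weighted sum, then threshold
            err = target - out                      # error drives the weight update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
outputs = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
print(outputs)  # [0, 0, 0, 1] — the neuron has learned AND
```

A single neuron can only learn linearly separable functions like AND; layered networks of such units are what give ANNs their broader pattern-recognition power.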
The main advantages of this technology therefore include fault tolerance, pattern recognition even with noisy images, and the ability to learn.[41][44] Concerning their applications, neural networks can be classified into five groups: data analysis and classification, associative memory, data clustering or compression, generation of patterns, and control systems.[45][43][41] The numerous applications include, for example, the analysis and classification of medical data, including the creation of diagnoses, speech recognition, data mining, image processing, forecasting, robot control, credit approval, pattern recognition, face and fraud detection, and dealing with nonlinearities of a system in order to control it.[41][43][45] ANNs have the latter area of application and data clustering in common with fuzzy logic. Generative systems based on deep learning and convolutional neural networks, such as ChatGPT or DeepL, are a relatively new field of application. Evolutionary computation can be seen as a family of methods and algorithms for global optimization, which are usually based on a population of candidate solutions.
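As a minimal illustration of such a population-based optimizer, the following sketch uses truncation selection and Gaussian mutation to minimize the sphere function; the population size, mutation scale and target function are illustrative choices, not prescribed by any particular algorithm in the text.

```python
import random

def evolve(fitness, dim=3, pop_size=20, generations=100, sigma=0.3, seed=1):
    """Minimal evolutionary optimizer: keep the best half of the population
    (selection) and refill it with Gaussian-mutated copies (variation)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                # selection: best candidates first
        parents = pop[: pop_size // 2]
        children = [[g + rng.gauss(0, sigma) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children             # next generation
    return min(pop, key=fitness)

sphere = lambda x: sum(g * g for g in x)     # global minimum 0 at the origin
best = evolve(sphere)
print(sphere(best))  # a small value: the population has converged near the origin
```

Note how naturally this parallelizes: each child's fitness evaluation is independent, matching the text's point about distributing evaluations across cluster nodes.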
They are inspired by biological evolution and are often summarized as evolutionary algorithms.[46] These include genetic algorithms, evolution strategies, genetic programming and many others.[47] They are considered problem solvers for tasks not solvable by traditional mathematical methods[48] and are frequently used for optimization, including multi-objective optimization.[49] Since they work with a population of candidate solutions that are processed in parallel during an iteration, they can easily be distributed to different computer nodes of a cluster.[50] As often more than one offspring is generated per pairing, the evaluations of these offspring, which are usually the most time-consuming part of the optimization process, can also be performed in parallel.[51] In the course of optimization, the population learns about the structure of the search space and stores this information in the chromosomes of the solution candidates. After a run, this knowledge can be reused for similar tasks by adapting some of the "old" chromosomes and using them to seed a new population.[52][53] Swarm intelligence is based on the collective behavior of decentralized, self-organizing systems, typically consisting of a population of simple agents that interact locally with each other and with their environment.
Despite the absence of a centralized control structure that dictates how the individual agents should behave, local interactions between such agents often lead to the emergence of global behavior.[54][55][56] Among the recognized representatives of algorithms based on swarm intelligence are particle swarm optimization and ant colony optimization.[57] Both are metaheuristic optimization algorithms that can be used to (approximately) solve difficult numerical or complex combinatorial optimization tasks.[58][59][60] Since both methods, like the evolutionary algorithms, are based on a population and also on local interaction, they can be easily parallelized[61][62] and show comparable learning properties.[63][64] In complex application domains, Bayesian networks provide a means to efficiently store and evaluate uncertain knowledge. A Bayesian network is a probabilistic graphical model that represents a set of random variables and their conditional dependencies by a directed acyclic graph. The probabilistic representation makes it easy to draw conclusions based on new information. In addition, Bayesian networks are well suited for learning from data.[13] Their wide range of applications includes medical diagnostics, risk management, information retrieval, text analysis, e.g. for spam filters, credit rating of companies, and the operation of complex industrial processes.[65] Artificial immune systems are another group of population-based metaheuristic learning algorithms designed to solve clustering and optimization problems. These algorithms are inspired by the principles of theoretical immunology and the processes of the vertebrate immune system, and use the learning and memory properties of the immune system to solve a problem.
Operators similar to those known from evolutionary algorithms are used to clone and mutate artificial lymphocytes.[66][67] Artificial immune systems offer interesting capabilities such as adaptability, self-learning, and robustness that can be used for various tasks in data processing,[67] manufacturing systems,[68] system modeling and control, fault detection, or cybersecurity.[66] Still looking for a way of "reasoning" close to that of humans, learning theory is one of the main approaches of CI. In psychology, learning is the process of bringing together cognitive, emotional and environmental effects and experiences to acquire, enhance or change knowledge, skills, values and world views.[69][70][71] Learning theories then help to understand how these effects and experiences are processed, and help to make predictions based on previous experience.[72] Being one of the main elements of fuzzy logic, probabilistic methods, first introduced by Paul Erdős and Joel Spencer in 1974,[73][74] aim to evaluate the outcomes of a computational intelligence system, mostly defined by randomness.[75] Therefore, probabilistic methods bring out the possible solutions to a problem, based on prior knowledge. According to bibliometric studies, computational intelligence plays a key role in research.[76] All the major academic publishers accept manuscripts in which a combination of fuzzy logic, neural networks and evolutionary computation is discussed. On the other hand, computational intelligence is rarely available in university curricula.[77] The number of technical universities at which students can attend a course is limited. Only British Columbia, the Technical University of Dortmund (involved in the European fuzzy boom) and Georgia Southern University offer courses in this domain. The reason major universities ignore the topic is that they lack the resources.
The existing computer science courses are so complex that at the end of the semester there is no room for fuzzy logic.[78] Sometimes it is taught as a subproject in existing introduction courses, but in most cases the universities prefer courses about classical AI concepts based on Boolean logic, Turing machines and toy problems like blocks world. For a while now, with the rise of STEM education, the situation has changed a bit.[79] There are some efforts in which multidisciplinary approaches are preferred, allowing the student to understand complex adaptive systems.[80] These objectives are discussed only on a theoretical basis; the curriculum of real universities has not yet been adapted.
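The particle swarm optimization mentioned above can be sketched in a few lines of code. The following is a minimal illustration of the population-plus-local-interaction idea; the swarm size, coefficient values, and the sphere objective function are illustrative assumptions, not taken from any cited implementation.

```python
import random

def pso(f, dim=2, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f with a basic particle swarm (all parameters illustrative)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best-known position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia plus attraction toward personal and global bests:
                # the only "communication" is this local/global interaction.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(1)  # deterministic run for the example
best, best_val = pso(lambda p: sum(x * x for x in p))  # sphere function
```

Because every particle's update only reads its own memory and the shared best, the inner loop over particles can be parallelized straightforwardly, which matches the parallelization claim above.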
https://en.wikipedia.org/wiki/Computational_intelligence
DNA computing is an emerging branch of unconventional computing which uses DNA, biochemistry, and molecular biology hardware, instead of traditional electronic computing. Research and development in this area concerns theory, experiments, and applications of DNA computing. Although the field originally started with the demonstration of a computing application by Len Adleman in 1994, it has now been expanded to several other avenues such as the development of storage technologies,[1][2][3] nanoscale imaging modalities,[4][5][6] synthetic controllers and reaction networks,[7][8][9][10] etc. Leonard Adleman of the University of Southern California initially developed this field in 1994.[11] Adleman demonstrated a proof-of-concept use of DNA as a form of computation which solved the seven-point Hamiltonian path problem. Since the initial Adleman experiments, advances have occurred and various Turing machines have been proven to be constructible.[12][13] Since then the field has expanded into several avenues. In 1995, the idea for DNA-based memory was proposed by Eric Baum,[14] who conjectured that a vast amount of data can be stored in a tiny amount of DNA due to its ultra-high density. This expanded the horizon of DNA computing into the realm of memory technology, although the in vitro demonstrations were made almost a decade later. The field of DNA computing can be categorized as a sub-field of the broader DNA nanoscience field started by Ned Seeman about a decade before Len Adleman's demonstration.[15] Seeman's original idea in the 1980s was to build arbitrary structures using bottom-up DNA self-assembly for applications in crystallography. However, it morphed into the field of structural DNA self-assembly,[16][17][18] which as of 2020 is extremely sophisticated. Self-assembled structures from a few nanometers tall all the way up to several tens of micrometers in size have been demonstrated in 2018. In 1994, Prof.
Seeman's group demonstrated early DNA lattice structures using a small set of DNA components. While the demonstration by Adleman showed the possibility of DNA-based computers, the DNA design was trivial: as the number of nodes in a graph grows, the number of DNA components required in Adleman's implementation would grow exponentially. Therefore, computer scientists and biochemists started exploring tile-assembly, where the goal was to use a small set of DNA strands as tiles to perform arbitrary computations upon growth. Other avenues that were theoretically explored in the late 90's include DNA-based security and cryptography,[19] computational capacity of DNA systems,[20] DNA memories and disks,[21] and DNA-based robotics.[22] Before 2002, Lila Kari showed that the DNA operations performed by genetic recombination in some organisms are Turing complete.[23] In 2003, John Reif's group first demonstrated the idea of a DNA-based walker that traversed along a track, similar to a line-follower robot. They used molecular biology as a source of energy for the walker. Since this first demonstration, a wide variety of DNA-based walkers have been demonstrated. In 1994 Leonard Adleman presented the first prototype of a DNA computer. The TT-100 was a test tube filled with 100 microliters of a DNA solution. He managed to solve an instance of the directed Hamiltonian path problem.[24] In Adleman's experiment, the Hamiltonian path problem was implemented notationally as the "travelling salesman problem". For this purpose, different DNA fragments were created, each one of them representing a city that had to be visited. Every one of these fragments is capable of a linkage with the other fragments created. These DNA fragments were produced and mixed in a test tube. Within seconds, the small fragments form bigger ones, representing the different travel routes. Through a chemical reaction, the DNA fragments representing the longer routes were eliminated.
The remains are the solution to the problem; overall, the experiment lasted a week.[25] However, current technical limitations prevent the evaluation of the results. Therefore, the experiment isn't suitable for practical application, but it is nevertheless a proof of concept. The first results on such problems were obtained by Leonard Adleman. In 2002, J. Macdonald, D. Stefanović and M. Stojanović created a DNA computer able to play tic-tac-toe against a human player.[26] The calculator consists of nine bins corresponding to the nine squares of the game. Each bin contains a substrate and various combinations of DNA enzymes. The substrate itself is composed of a DNA strand onto which a fluorescent chemical group was grafted at one end and a repressor group at the other. Fluorescence is only active if the molecules of the substrate are cut in half. The DNA enzymes simulate logical functions. For example, such a DNA enzyme will unfold if two specific types of DNA strand are introduced, reproducing the logic function AND. By default, the computer is considered to have played first in the central square. The human player starts with eight different types of DNA strands corresponding to the eight remaining boxes that may be played. To play box number i, the human player pours into all bins the strands corresponding to input #i. These strands bind to certain DNA enzymes present in the bins, resulting, in one of these bins, in the deformation of the DNA enzymes, which bind to the substrate and cut it. The corresponding bin becomes fluorescent, indicating which box is being played by the DNA computer. The DNA enzymes are divided among the bins in such a way as to ensure that the best the human player can achieve is a draw, as in real tic-tac-toe. Kevin Cherry and Lulu Qian at Caltech developed a DNA-based artificial neural network that can recognize 100-bit hand-written digits.
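The recognition task can be sketched abstractly as a winner-take-all comparison of an input bit pattern against stored "memories", with concentrations playing the role of weights. The patterns and class labels below are invented for illustration; this is an abstract software model, not the actual molecular design.

```python
def winner_take_all(x, memories):
    """Score a 100-bit input against stored patterns by a weighted match
    count and report the best-scoring label (abstract winner-take-all)."""
    scores = {label: sum(xi * wi for xi, wi in zip(x, w))
              for label, w in memories.items()}
    return max(scores, key=scores.get)

# Two invented 100-bit "memories" standing in for idealized digit templates.
mem_a = [1 if i % 2 == 0 else 0 for i in range(100)]
mem_b = [1 if i % 2 == 1 else 0 for i in range(100)]
memories = {"A": mem_a, "B": mem_b}

# A noisy copy of pattern A: flip a few bits, as a degraded input would.
noisy = mem_a[:]
for i in (3, 10, 57):
    noisy[i] = 1 - noisy[i]

label = winner_take_all(noisy, memories)  # → "A"
```

The point of the model is that the input need not match any memory exactly; the class whose template overlaps it most wins, which is what makes the scheme tolerant of corrupted inputs.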
They achieved this by programming a computer in advance with the appropriate set of weights, represented by varying concentrations of weight molecules, which are later added to the test tube that holds the input DNA strands.[27][28] One of the challenges of DNA computing is its slow speed. While DNA is a biologically compatible substrate, i.e., it can be used at places where silicon technology cannot, its computational speed is still very slow. For example, the square-root circuit used as a benchmark in the field takes over 100 hours to complete.[29] While newer approaches with external enzyme sources are reporting faster and more compact circuits,[30] Chatterjee et al. demonstrated an interesting idea in the field to speed up computation through localized DNA circuits,[31] a concept being further explored by other groups.[32] This idea, while originally proposed in the field of computer architecture, has been adopted in this field as well. In computer architecture, it is very well known that if instructions are executed in sequence, having them loaded in the cache will inevitably lead to fast performance, also called the principle of locality. This is because with instructions in fast cache memory, there is no need to swap them in and out of main memory, which can be slow.[31] Similarly, in localized DNA computing, the DNA strands responsible for computation are fixed on a breadboard-like substrate, ensuring physical proximity of the computing gates. Such localized DNA computing techniques have been shown to potentially reduce the computation time by orders of magnitude.[31] Subsequent research on DNA computing has produced reversible DNA computing, bringing the technology one step closer to the silicon-based computing used in (for example) PCs. In particular, John Reif and his group at Duke University have proposed two different techniques to reuse the computing DNA complexes.
The first design uses dsDNA gates,[33] while the second design uses DNA hairpin complexes.[34] While both designs face some issues (such as reaction leaks), this appears to represent a significant breakthrough in the field of DNA computing. Some other groups have also attempted to address the gate reusability problem.[35][36] Using strand displacement reactions (SDRs), reversible proposals are presented in the "Synthesis Strategy of Reversible Circuits on DNA Computers" paper for implementing reversible gates and circuits on DNA computers by combining DNA computing and reversible computing techniques. This paper also proposes a universal reversible gate library (URGL) for synthesizing n-bit reversible circuits on DNA computers, with an average length and cost of the constructed circuits better than in previous methods.[37] There are multiple methods for building a computing device based on DNA, each with its own advantages and disadvantages. Most of these build the basic logic gates (AND, OR, NOT) associated with digital logic from a DNA basis. Some of the different bases include DNAzymes, deoxyoligonucleotides, enzymes, and toehold exchange. The most fundamental operation in DNA computing and molecular programming is the strand displacement mechanism, which can currently be performed in two ways. Besides simple strand displacement schemes, DNA computers have also been constructed using the concept of toehold exchange.[28] In this system, an input DNA strand binds to a sticky end, or toehold, on another DNA molecule, which allows it to displace another strand segment from the molecule. This allows the creation of modular logic components such as AND, OR, and NOT gates and signal amplifiers, which can be linked into arbitrarily large computers. This class of DNA computers does not require enzymes or any chemical capability of the DNA.[38] The full stack for DNA computing looks very similar to a traditional computer architecture.
At the highest level, a C-like general-purpose programming language is expressed using a set of chemical reaction networks (CRNs). This intermediate representation gets translated to a domain-level DNA design and then implemented using a set of DNA strands. In 2010, Erik Winfree's group showed that DNA can be used as a substrate to implement arbitrary chemical reactions. This opened the way to the design and synthesis of biochemical controllers, since the expressive power of CRNs is equivalent to that of a Turing machine.[7][8][9][10] Such controllers can potentially be used in vivo for applications such as preventing hormonal imbalance. Catalytic DNA (deoxyribozyme or DNAzyme) catalyzes a reaction when interacting with the appropriate input, such as a matching oligonucleotide. These DNAzymes are used to build logic gates analogous to digital logic in silicon; however, DNAzymes are limited to one-, two-, and three-input gates with no current implementation for evaluating statements in series. The DNAzyme logic gate changes its structure when it binds to a matching oligonucleotide, and the fluorogenic substrate it is bonded to is cleaved free. While other materials can be used, most models use a fluorescence-based substrate because it is very easy to detect, even at the single-molecule limit.[39] The amount of fluorescence can then be measured to tell whether or not a reaction took place. The DNAzyme that changes is then "used", and cannot initiate any more reactions. Because of this, these reactions take place in a device such as a continuous stirred-tank reactor, where old product is removed and new molecules added. Two commonly used DNAzymes are named E6 and 8-17.
These are popular because they allow cleaving of a substrate in any arbitrary location.[40] Stojanovic and MacDonald have used the E6 DNAzymes to build the MAYA I[41] and MAYA II[42] machines, respectively; Stojanovic has also demonstrated logic gates using the 8-17 DNAzyme.[43] While these DNAzymes have been demonstrated to be useful for constructing logic gates, they are limited by the need for a metal cofactor to function, such as Zn2+ or Mn2+, and thus are not useful in vivo.[39][44] A design called a stem loop, consisting of a single strand of DNA which has a loop at an end, is a dynamic structure that opens and closes when a piece of DNA bonds to the loop part. This effect has been exploited to create several logic gates. These logic gates have been used to create the computers MAYA I and MAYA II, which can play tic-tac-toe to some extent.[45] Enzyme-based DNA computers are usually of the form of a simple Turing machine; there is analogous hardware, in the form of an enzyme, and software, in the form of DNA.[46] Benenson, Shapiro and colleagues have demonstrated a DNA computer using the FokI enzyme[47] and expanded on their work by going on to show automata that diagnose and react to prostate cancer: under-expression of the genes PPAP2B and GSTP1 and over-expression of PIM1 and HPN.[48] Their automata evaluated the expression of each gene, one gene at a time, and on positive diagnosis then released a single-strand DNA molecule (ssDNA) that is an antisense for MDM2. MDM2 is a repressor of protein 53, which itself is a tumor suppressor.[49] On negative diagnosis it was decided to release a suppressor of the positive-diagnosis drug instead of doing nothing. A limitation of this implementation is that two separate automata are required, one to administer each drug. The entire process of evaluation until drug release took around an hour to complete. This method also requires transition molecules as well as the FokI enzyme to be present.
The requirement for the FokI enzyme limits application in vivo, at least for use in "cells of higher organisms".[50] It should also be pointed out that the 'software' molecules can be reused in this case. DNA nanotechnology has been applied to the related field of DNA computing. DNA tiles can be designed to contain multiple sticky ends with sequences chosen so that they act as Wang tiles. A DX array has been demonstrated whose assembly encodes an XOR operation; this allows the DNA array to implement a cellular automaton which generates a fractal called the Sierpinski gasket. This shows that computation can be incorporated into the assembly of DNA arrays, increasing its scope beyond simple periodic arrays.[51] DNA computing is a form of parallel computing in that it takes advantage of the many different molecules of DNA to try many different possibilities at once.[52] For certain specialized problems, DNA computers are faster and smaller than any other computer built so far. Furthermore, particular mathematical computations have been demonstrated to work on a DNA computer. DNA computing does not provide any new capabilities from the standpoint of computability theory, the study of which problems are computationally solvable using different models of computation. For example, if the space required for the solution of a problem grows exponentially with the size of the problem (EXPSPACE problems) on von Neumann machines, it still grows exponentially with the size of the problem on DNA machines. For very large EXPSPACE problems, the amount of DNA required is too large to be practical. A partnership between IBM and Caltech was established in 2009 aiming at "DNA chips" production.[53] A Caltech group is working on the manufacturing of these nucleic-acid-based integrated circuits.
One of these chips can compute whole square roots.[54] A compiler has been written in Perl.[55] The slow processing speed of a DNA computer (the response time is measured in minutes, hours or days, rather than milliseconds) is compensated by its potential to perform a large number of parallel computations. This allows the system to take a similar amount of time for a complex calculation as for a simple one, because millions or billions of molecules interact with each other simultaneously. However, it is much harder to analyze the answers given by a DNA computer than by a digital one.
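The XOR-based self-assembly described above can be illustrated in software: a one-dimensional cellular automaton in which each new cell is the XOR of its two neighbors grows the Sierpinski gasket row by row, just as each layer of DX tiles encodes the XOR of the layer beneath it. This is a software analogue only; the grid size and seed placement are chosen for display, not taken from the cited experiment.

```python
def xor_automaton(width=16, steps=8):
    """One-dimensional XOR cellular automaton: each cell in the next row
    is the XOR of its left and right neighbors, the computation that the
    self-assembling DX tile array encodes."""
    row = [0] * width
    row[width // 2] = 1            # single seed, like a nucleating tile
    rows = [row]
    for _ in range(steps - 1):
        prev = rows[-1]
        nxt = [prev[i - 1] ^ prev[(i + 1) % width] for i in range(width)]
        rows.append(nxt)
    return rows

# Rendering the rows as text makes the Sierpinski gasket visible.
picture = "\n".join("".join("#" if c else "." for c in row)
                    for row in xor_automaton())
```

Each assembled row depends only on the row before it, so the fractal emerges purely from the local attachment rule, which is the sense in which "computation is incorporated into the assembly".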
https://en.wikipedia.org/wiki/DNA_computing
Natural Computing is a scientific journal covering natural computing research. It has been published quarterly by Springer Verlag (Springer Netherlands) in print (ISSN 1567-7818) and online (ISSN 1572-9796) since 2002.[1] "Natural Computing refers to computational processes observed in nature, and human-designed computing inspired by nature ... molecular computing and quantum computing ... use of algorithms to consider evolution as a computational process, and neural networks in light of computational trends in brain research."[1] It includes 19 open access articles as of 19 June 2016[2] and has an impact factor of 1.310.[1]
https://en.wikipedia.org/wiki/Natural_Computing_(journal)
Synthetic biology (SynBio) is a multidisciplinary field of science that focuses on living systems and organisms. It applies engineering principles to develop new biological parts, devices, and systems or to redesign existing systems found in nature.[1] It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as biochemistry, biotechnology, biomaterials, material science/engineering, genetic engineering, molecular biology, molecular engineering, systems biology, membrane science, biophysics, chemical and biological engineering, electrical and computer engineering, control engineering and evolutionary biology. It includes designing and constructing biological modules, biological systems, and biological machines, or re-designing existing biological systems for useful purposes.[2] Additionally, it is the branch of science that focuses on engineering new abilities into existing organisms to redesign them for useful purposes.[3] In order to produce predictable and robust systems with novel functionalities that do not already exist in nature, it is also necessary to apply the engineering paradigm of systems design to biological systems. According to the European Commission, this possibly involves a molecular assembler based on biomolecular systems such as the ribosome.[4] 1910: First identifiable use of the term synthetic biology in Stéphane Leduc's publication Théorie physico-chimique de la vie et générations spontanées.[5] He also used this term in another publication, La Biologie Synthétique, in 1912.[6] 1944: Canadian-American scientist Oswald Avery shows that DNA is the material of which genes and chromosomes are made. This becomes the bedrock on which all subsequent genetic research is built.[7] 1953: Francis Crick and James Watson publish the structure of DNA in Nature. 1961: Jacob and Monod postulate cellular regulation by molecular networks from their study of the lac operon in E.
coli and envisioned the ability to assemble new systems from molecular components.[8] 1973: First molecular cloning and amplification of DNA in a plasmid is published in P.N.A.S. by Cohen, Boyer et al., constituting the dawn of synthetic biology.[9] 1978: Arber, Nathans and Smith win the Nobel Prize in Physiology or Medicine for the discovery of restriction enzymes, leading Szybalski to offer an editorial comment in the journal Gene: The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.[10] 1988: First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in Science by Mullis et al.[11] This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly. 2000: Two papers in Nature report synthetic biological circuits, a genetic toggle switch and a biological clock, by combining genes within E. coli cells.[12][13] 2003: The most widely used standardized DNA parts, BioBrick plasmids, are invented by Tom Knight.[14] These parts will become central to the International Genetically Engineered Machine (iGEM) competition founded at MIT in the following year. 2003: Researchers engineer an artemisinin precursor pathway in E. coli.[15] 2004: First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0), is held at MIT. 2005: Researchers develop a light-sensing circuit in E. coli.[16] Another group designs circuits capable of multicellular pattern formation.[17] 2006: Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.[18] 2010: Researchers publish in Science the first synthetic bacterial genome, called M.
mycoides JCVI-syn1.0.[19][20] The genome is made from chemically synthesized DNA using yeast recombination. 2011: Functional synthetic chromosome arms are engineered in yeast.[21] 2012: Charpentier and Doudna labs publish in Science the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage.[22] This technology greatly simplified and expanded eukaryotic gene editing. 2019: Scientists at ETH Zurich report the creation of the first bacterial genome, named Caulobacter ethensis-2.0, made entirely by a computer, although a related viable form of C. ethensis-2.0 does not yet exist.[23][24] 2019: Researchers report the production of a new synthetic (possibly artificial) form of viable life, a variant of the bacteria Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons, in order to encode 20 amino acids.[25][26] 2020: Scientists created the first xenobot, a programmable synthetic organism derived from frog cells and designed by AI.[27] Demis Hassabis and John M. Jumper presented an AI model called AlphaFold2. With its help, they have been able to predict the structure of virtually all of the 200 million proteins that researchers have identified. Since their breakthrough, AlphaFold2 has been used by more than two million people from 190 countries. Among a myriad of scientific applications, researchers can now better understand antibiotic resistance and create images of enzymes that can decompose plastic.[28][29] 2021: Scientists reported that xenobots are able to self-replicate by gathering loose cells in the environment and then forming new xenobots.[30] 2023: Advancements in RNA therapeutics, including vaccines, RNA circuits, and genetic modifications, have improved safety and efficiency in synthetic biology. RNA-based therapeutics are considered safer than DNA-based systems as they do not integrate into the host genome, reducing the risk of unintended genetic alterations.
Additionally, RNA-based systems, constructed from RNA devices and circuits, act more rapidly than DNA-based counterparts since they do not require transcription. These advancements have expanded the potential applications of RNA in gene therapy, personalized medicine, and vaccine development.[31] It is a field whose scope is expanding in terms of systems integration, engineered organisms, and practical findings.[1] Engineers view biology as technology (in other words, a given system includes biotechnology or its biological engineering).[32] Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goal of being able to design and build engineered live biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health, as well as advance fundamental knowledge of biological systems (see Biomedical engineering) and our environment.[33] Researchers and companies working in synthetic biology are using nature's power to solve issues in agriculture, manufacturing, and medicine.[3] Due to more powerful genetic engineering capabilities and decreased DNA synthesis and sequencing costs, the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.[34] Synthetic biology currently has no generally accepted definition. Here are a few examples: It is the science of emerging genetic and physical engineering to produce new (and, therefore, synthetic) life forms.
To develop organisms with novel or enhanced characteristics, this emerging field of study combines biology, engineering, and related disciplines' knowledge and techniques to design chemically synthesised DNA.[35][36] Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. Genetic engineering includes approaches to construct synthetic chromosomes or minimal organisms like Mycoplasma laboratorium. Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches shares a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.[37][38] Optimizing these exogenous pathways in unnatural systems takes iterative fine-tuning of the individual biomolecular components to select the highest concentrations of the desired product.[39] On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up, to provide engineered surrogates that are easier to comprehend, control and manipulate.[40] Re-writers draw inspiration from refactoring, a process sometimes used to improve computer software.
Bioengineering, synthetic genomics, protocell synthetic biology, unconventional molecular biology, and in silico techniques are the five categories of synthetic biology.[41] It is necessary to review the distinctions and analogies between the categories of synthetic biology for its social and ethical assessment, in order to distinguish between issues affecting the whole field and those particular to a specific one.[41] The subfield of bioengineering concentrates on creating novel metabolic and regulatory pathways, and is currently the one that draws the attention of most researchers and most of the funding. It is primarily motivated by the desire to establish biotechnology as a legitimate engineering discipline. When referring to this area of synthetic biology, the word "bioengineering" should not be confused with "traditional genetic engineering", which involves introducing a single transgene into the intended organism. Bioengineers adapted synthetic biology to provide a substantially more integrated perspective on how to alter organisms or metabolic systems.[41] A typical example of single-gene genetic engineering is the insertion of the human insulin gene into bacteria to create transgenic proteins. The creation of whole new signalling pathways, containing numerous genes and regulatory components (such as an oscillator circuit to initiate the periodic production of green fluorescent protein (GFP) in mammalian cells), is known as bioengineering as part of synthetic biology.[41] By utilising simplified and abstracted metabolic and regulatory modules, as well as other standardized parts that may be freely combined to create new pathways or organisms, bioengineering aims to create innovative biological systems.
In addition to creating infinite opportunities for novel applications, this strategy is anticipated to make bioengineering more predictable and controllable than traditional biotechnology.[41] The formation of organisms with a chemically manufactured (minimal) genome is another facet of synthetic biology that is highlighted by synthetic genomics. This area of synthetic biology has been made possible by ongoing advancements in DNA synthesis technology, which now makes it feasible to produce DNA molecules with thousands of base pairs at a reasonable cost. The goal is to combine these molecules into complete genomes and transplant them into living cells, replacing the host cell's genome and reprogramming its metabolism to perform different functions.[41] Scientists have previously demonstrated the potential of this approach by creating infectious viruses by synthesising the genomes of multiple viruses. These significant advances in science and technology triggered the initial public concerns concerning the risks associated with this technology.[41] A simple genome might also work as a "chassis genome" that could be enlarged quickly by including genes created for particular tasks. Such "chassis organisms" would be better suited for the insertion of new functions than wild organisms, since they would have specific insertion sites and fewer biological pathways that could potentially conflict with the new functionalities. Synthetic genomics strives to create organisms with novel "architectures," much like the bioengineering method. It adopts an integrative or holistic perspective of the organism. In this case, the objective is the creation of chassis genomes based on necessary genes and other required DNA sequences rather than the design of metabolic or regulatory pathways based on abstract criteria.[41] The in vitro generation of synthetic cells is the protocell branch of synthetic biology.
Lipid vesicles, which have all the necessary components to function as a complete system, can be used to create these artificial cells. In the end, these synthetic cells should meet the requirements for being deemed alive, namely the capacity for self-replication, self-maintenance, and evolution. The protocell technique has this as its end aim; however, there are intermediary steps that fall short of meeting all the criteria for a living cell. In order to carry out a specific function, these lipid vesicles contain cell extracts or more specific sets of biological macromolecules and complex structures, such as enzymes, nucleic acids, or ribosomes. For instance, liposomes may carry out particular polymerase chain reactions or synthesise a particular protein.[41] Protocell synthetic biology takes artificial life one step closer to reality by eventually synthesizing not only the genome but also every component of the cell in vitro, as opposed to the synthetic genomics approach, which relies on coercing a natural cell to carry out the instructions encoded by the introduced synthetic genome. Synthetic biologists in this field, more than in any of the other approaches, view their work as basic research into the conditions necessary for life to exist and into its origin. The protocell technique, however, also lends itself well to applications; similar to other synthetic biology byproducts, protocells could be employed for the manufacture of biopolymers and medicines.[41] The objective of the "unnatural molecular biology" strategy is to create new varieties of life that are based on a different kind of molecular biology, such as new types of nucleic acids or a new genetic code.
The creation of new types of nucleotides that can be built into unique nucleic acids could be accomplished by changing certain DNA or RNA constituents, such as the bases or the backbone sugars.[41] The normal genetic code is being altered by inserting quadruplet codons or changing some codons to encode new amino acids, which would subsequently permit the use of non-natural amino acids with unique features in protein production. For both approaches, adjusting the enzymatic machinery of the cell is a scientific and technological challenge.[41] A new sort of life would be formed by organisms with a genome built on synthetic nucleic acids or on a totally new coding system for synthetic amino acids. This new style of life would have some benefits but also some new dangers. On release into the environment, there would be no horizontal gene transfer or outcrossing of genes with natural species. Furthermore, these kinds of synthetic organisms might be created to require non-natural materials for protein or nucleic acid synthesis, rendering them unable to thrive in the wild if they accidentally escaped.[41] On the other hand, if these organisms ultimately were able to survive outside of controlled spaces, they might have a particular advantage over natural organisms, because they would be resistant to predatory living organisms or natural viruses, which could lead to an unmanaged spread of the synthetic organisms.[41] In silico synthetic biology and the various other strategies are interconnected. The development of complex designs, whether they are metabolic pathways, fundamental cellular processes, or chassis genomes, is one of the major difficulties faced by the four synthetic-biology approaches outlined above.
Because of this, synthetic biology has a robust in silico branch, similar to systems biology, that aims to create computational models for the design of common biological components or synthetic circuits, which are essentially simulations of synthetic organisms.[41] The practical application of simulations and models through bioengineering or other fields of synthetic biology is the long-term goal of in silico synthetic biology. Many of the computational simulations of synthetic organisms to date bear little to no direct analogy to living things. For this reason, in silico synthetic biology is regarded as a separate group in this article.[41] It is sensible to integrate the five areas under the umbrella of synthetic biology as one unified field of study. Even though they focus on various facets of life, such as metabolic regulation, essential elements, or biochemical makeup, these five strategies all work toward the same end: creating new types of living organisms. Additionally, the various strategies start from different methodological approaches, which accounts for the diversity of synthetic biology.[41] Synthetic biology is an interdisciplinary field that draws from and is inspired by many different scientific disciplines, not one single field or technique. Synthetic biologists all share the same underlying objective of designing and producing new forms of life, despite the fact that they may employ various methodologies, techniques, and research instruments.
Any evaluation of synthetic biology, whether it examines ethical, legal, or safety considerations, must take into account the fact that while some questions, risks, and issues are unique to each technique, in other circumstances synthetic biology as a whole must be considered.[41] Synthetic biology has traditionally been divided into four different engineering approaches: top down, parallel, orthogonal and bottom up.[42] One approach uses unnatural chemicals to replicate emergent behaviours from natural biology, with the goal of building artificial life. The other seeks interchangeable components from biological systems to assemble into systems that do not function naturally. In either case, a synthetic objective compels researchers to venture into new territory in order to engage and resolve issues that cannot be readily resolved by analysis. This drives new paradigms to arise in ways that analysis alone cannot. In addition to devices that oscillate, creep, and play tic-tac-toe, synthetic biology has produced diagnostic instruments that enhance the treatment of patients with infectious diseases.[43] Top-down engineering involves using metabolic and genetic engineering techniques to impart new functions to living cells.[44] By comparing universal genes and eliminating non-essential ones to create a basic genome, this method seeks to lessen the complexity of existing cells. These initiatives are founded on the hypothesis of a single genesis for cellular life, the so-called Last Universal Common Ancestor, which supports the existence of a universal minimal genome that gave rise to all living things. Recent studies, however, raise the possibility that the eukaryotic and prokaryotic cells that make up the tree of life may have evolved from a group of primordial cells rather than from a single cell.
As a result, even as the Holy Grail-like pursuit of the "minimal genome" has grown elusive, cutting out a number of non-essential functions impairs an organism's fitness and leads to "fragile" genomes.[42] The bottom-up approach involves creating new biological systems in vitro by bringing together 'non-living' biomolecular components,[45] often with the aim of constructing an artificial cell. Reproduction, replication, and assembly are three crucial self-organizational principles taken into account to accomplish this. Cells, which are made up of a container and a metabolism, are considered "hardware" in the definition of reproduction, whereas replication occurs when a system duplicates a perfect copy of itself, as in the case of DNA, which is considered "software." Assembly occurs when vesicles or containers aggregate, such as Oparin's coacervates, tiny droplets of organic molecules like lipids, or liposomes, membrane-like structures comprising phospholipids.[42] The study of protocells exists alongside other in vitro synthetic biology initiatives that seek to produce minimal cells, metabolic pathways, or "never-born proteins", as well as to mimic physiological functions including cell division and growth. Recently, a self-sustaining cell-free system that uses CO2 was engineered by integrating metabolism with gene expression in a bottom-up fashion.[46] This research, which is primarily basic, deserves proper recognition as synthetic biology research.[42] Parallel engineering is also known as bioengineering. The basic genetic code is the foundation for parallel engineering research, which uses conventional biomolecules like nucleic acids and the 20 amino acids to construct biological systems.
For a variety of applications in biocomputing, bioenergy, biofuels, bioremediation, optogenetics, and medicine, it involves the standardisation of DNA components and the engineering of switches, biosensors, genetic circuits, logic gates, and cellular communication operators. For directing the expression of two or more genes and/or proteins, the majority of these applications often rely on the use of one or more vectors (or plasmids). Plasmids are small, circular, double-stranded DNA units, primarily found in prokaryotic cells but occasionally also detected in eukaryotic cells, that can replicate independently of chromosomal DNA.[42] Orthogonal engineering is also known as perpendicular engineering. This strategy, also referred to as "chemical synthetic biology," principally seeks to alter or enlarge the genetic codes of living systems utilising artificial DNA bases and/or amino acids. This subfield is also connected to xenobiology, a newly developed field that combines systems chemistry, synthetic biology, exobiology, and research into the origins of life. In recent decades, researchers have created compounds that are structurally similar to the canonical DNA bases to see whether those "alien" or xeno nucleic acid (XNA) molecules could be employed as genetic information carriers. Similarly, noncanonical moieties have been used in place of the DNA sugar (deoxyribose).[42] To encode information beyond the 20 conventional amino acids of proteins, the genetic code can be altered or enlarged. One method involves incorporating a specified unnatural, noncanonical, or xeno amino acid (XAA) into one or more proteins at one or more precise places, using orthogonal enzymes and a transfer RNA adaptor from another organism.
By using "directed evolution," which entails repeated cycles of gene mutagenesis (generation of genotypic diversity), screening or selection (of a specific phenotypic trait), and amplification of a better variant for the next iterative round, orthogonal enzymes are produced. Numerous XAAs have been effectively incorporated into proteins in more complex creatures such as worms and flies, as well as in bacteria, yeast, and human cell lines. As a result of canonical DNA sequence changes, directed evolution also enables the development of orthogonal ribosomes, which make it easier to incorporate XAAs into proteins or to create "mirror life", i.e. biological systems containing biomolecules made up of enantiomers with different chiral orientations.[42] Several novel enabling technologies were critical to the success of synthetic biology. Concepts include standardization of biological parts and hierarchical abstraction to permit using those parts in synthetic systems.[47] DNA serves as the guide for how biological processes should function, like the score to a complex symphony of life. Our ability to comprehend and design biological systems has undergone significant changes as a result of developments over the previous few decades in both reading (sequencing) and writing (synthesis) of DNA sequences. These developments have produced ground-breaking techniques for designing, assembling, and modifying DNA-encoded genes, materials, circuits, and metabolic pathways, enabling an ever-increasing amount of control over biological systems and even entire organisms.[48] Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and computer-aided design (CAD).
Driven by dramatic decreases in the cost of oligonucleotide ("oligo") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level.[49] In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) Hepatitis C virus genome from chemically synthesized 60 to 80-mers.[50] In 2002, researchers at Stony Brook University succeeded in synthesizing the 7741 bp poliovirus genome from its published sequence, producing the second synthetic genome, in a project spanning two years.[51] In 2003, the 5386 bp genome of the bacteriophage Phi X 174 was assembled in about two weeks.[52] In 2006, the same team, at the J. Craig Venter Institute, constructed and patented a synthetic genome of a novel minimal bacterium, Mycoplasma laboratorium, and were working on getting it functioning in a living cell.[53][54][55] In 2007, it was reported that several companies were offering synthesis of genetic sequences up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks.[56] Oligonucleotides harvested from a photolithographic- or inkjet-manufactured DNA chip, combined with PCR and DNA mismatch error-correction, allow inexpensive large-scale changes of codons in genetic systems to improve gene expression or incorporate novel amino acids (see George M. Church's and Anthony Forster's synthetic cell projects).[57][58] This favors a synthesis-from-scratch approach. Additionally, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years".[59] While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks.[59] Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in biohacking.[60][61][62] DNA sequencing determines the order of nucleotide bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways.
First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.[63] Modularity is the ability of a system or component to operate without reference to its context.[64] The most used[65]: 22–23 standardized DNA parts are BioBrick plasmids, invented by Tom Knight in 2003.[14] BioBricks are stored at the Registry of Standard Biological Parts in Cambridge, Massachusetts. The BioBrick standard has been used by tens of thousands of students worldwide in the international Genetically Engineered Machine (iGEM) competition. BioBrick Assembly Standard 10 promotes modularity by allowing BioBrick coding sequences to be spliced out and exchanged using the restriction enzymes EcoRI or XbaI (BioBrick prefix) and SpeI and PstI (BioBrick suffix).[65]: 22–23 Sequence overlap between two genetic elements (genes or coding sequences), called overlapping genes, can prevent their individual manipulation.[66] To increase genome modularity, the practice of genome refactoring, or improving "the internal structure of an existing system for future use, while simultaneously maintaining external system function",[67] has been adopted across synthetic biology disciplines.[66] Some notable examples of refactoring include the nitrogen fixation cluster[68] and the type III secretion system,[69] along with bacteriophages T7[67] and ΦX174.[70] While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools exist to send proteins to specific regions of the cell and to link different proteins together.
The interaction strength between protein partners should be tunable from a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilience in harsh conditions). Interactions such as coiled coils,[71] SH3 domain-peptide binding[72] or SpyTag/SpyCatcher[73] offer such control. In addition, it is necessary to regulate protein-protein interactions in cells, for example with light (using light-oxygen-voltage-sensing domains) or with cell-permeable small molecules by chemically induced dimerization.[74] In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the module. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation.[75][76] Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell, and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in transcription, translation, regulation, and induction of gene regulatory networks.[77][78][79][80] Only extensive modelling can enable the exploration of dynamic gene expression in a form suitable for research and design, given the numerous species involved and the intricacy of their relationships. Dynamic simulations of the entire set of biomolecular interactions involved in regulation, transport, transcription, induction, and translation enable molecular-level detailing of designs.
This contrasts with modelling artificial networks a posteriori.[81] Microfluidics, in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyze and characterize them.[82][83] It is widely employed in screening assays.[84] Studies have considered the components of the DNA transcription mechanism. One desire of scientists creating synthetic biological circuits is to be able to control the transcription of synthetic DNA in unicellular organisms (prokaryotes) and in multicellular organisms (eukaryotes). One study tested the adjustability of synthetic transcription factors (sTFs) in areas of transcription output and cooperative ability among multiple transcription factor complexes.[85] Researchers were able to mutate functional regions called zinc fingers, the DNA-specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, which are the eukaryotic translation mechanisms.[85] Synthetic biology initiatives frequently aim to redesign organisms so that they can create a material, such as a drug or fuel, or acquire a new function, such as the ability to sense something in the environment. Examples of what researchers are creating using synthetic biology are described below. A biosensor refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon, such as the presence of heavy metals or toxins.
One such system is the Lux operon of Aliivibrio fischeri,[87] which codes for the enzyme that is the source of bacterial bioluminescence, and can be placed after a responsive promoter to express the luminescence genes in response to a specific environmental stimulus.[88] One such sensor consisted of a bioluminescent bacterial coating on a photosensitive computer chip to detect certain petroleum pollutants. When the bacteria sense the pollutant, they luminesce.[89] Another example of a similar mechanism is the detection of landmines by an engineered E. coli reporter strain capable of detecting TNT and its main degradation product DNT, and consequently producing a green fluorescent protein (GFP).[90] Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used for this purpose.[91] Biosensors could also be used to detect pathogenic signatures, such as those of SARS-CoV-2, and can be wearable.[92][93] For the purpose of detecting and reacting to various and transient environmental factors, cells have developed a wide range of regulatory circuits, ranging from transcriptional to post-translational. These circuits are made up of transducer modules that filter the signals and activate a biological response, as well as carefully designed sensing parts that bind analytes and regulate signal-detection thresholds.
Modularity and selectivity are programmed into biosensor circuits at the transcriptional, translational, and post-translational levels to achieve the delicate balancing of the two basic sensing modules.[94] However, not all synthetic nutrition products are animal food products; for instance, as of 2021, there are also products of synthetic coffee that are reported to be close to commercialization.[102][103][104] Similar fields of research and production based on synthetic biology can be used for the production of food and drink. Photosynthetic microbial cells have been used as a step toward synthetic production of spider silk.[109][110] A biological computer refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of logic gates in a number of organisms,[111] and demonstrated both analog and digital computation in living cells; for example, bacteria have been engineered to perform both analog and digital computation.[112][113] In 2007, researchers demonstrated a universal logic evaluator that operates in mammalian cells.[114] Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital computation to detect and kill human cancer cells in 2011.[115] In 2016, another group of researchers demonstrated that principles of computer engineering can be used to automate digital circuit design in bacterial cells.[116] In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells.[117] In 2019, researchers implemented a perceptron in biological systems, opening the way for machine learning in these systems.[118] Cells use interacting genes and proteins, which are called gene circuits, to implement diverse functions, such as responding to environmental signals, decision making, and communication.
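The transcriptional logic gates described above are commonly modeled with Hill functions, in which each promoter responds sigmoidally to its input. A minimal sketch in Python of a two-input AND gate (illustrative parameter values, not taken from any specific study):

```python
def hill_activation(x, k=1.0, n=2.0):
    """Fraction of maximal promoter activity at inducer concentration x
    (half-maximal at k, Hill coefficient / cooperativity n)."""
    return x**n / (k**n + x**n)

def and_gate_output(inducer_a, inducer_b, max_rate=100.0):
    """Toy transcriptional AND gate: reporter expression is appreciable
    only when both inducers are present at high concentration."""
    return max_rate * hill_activation(inducer_a) * hill_activation(inducer_b)

# Truth-table-like behaviour: output is high only when both inputs are high.
for a in (0.01, 10.0):
    for b in (0.01, 10.0):
        print(f"a={a:<5} b={b:<5} output={and_gate_output(a, b):.2f}")
```

Multiplying the two Hill terms mirrors a circuit where the output promoter requires both input signals, e.g. the two halves of a split activator.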
Three key components are involved: DNA, RNA and proteins. Synthetic biologists have designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels. Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering E. coli and yeast for commercial production of a precursor of the antimalarial drug artemisinin.[119] Entire organisms have yet to be created from scratch, although living cells can be transformed with new DNA. Several ways allow constructing synthetic DNA components and even entire synthetic genomes, but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or phenotypes while growing and thriving.[120] Cell transformation is used to create biological circuits, which can be manipulated to yield desired outputs.[12][13] By integrating synthetic biology with materials science, it would be possible to use cells as microscopic molecular foundries to produce materials whose properties are genetically encoded. Re-engineering has produced Curli fibers, the amyloid component of the extracellular material of biofilms, as a platform for programmable nanomaterials. These nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization.[121] Natural proteins can be engineered, for example by directed evolution, and novel protein structures that match or improve on the functionality of existing proteins can be produced.
One group generated a helix bundle that was capable of binding oxygen with properties similar to hemoglobin, yet did not bind carbon monoxide.[123] A similar protein structure was generated to support a variety of oxidoreductase activities,[124] while another formed a structurally and sequentially novel ATPase.[125] Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule clozapine N-oxide but were insensitive to the native ligand, acetylcholine; these receptors are known as DREADDs.[126] Novel functionality or protein specificity can also be engineered using computational approaches. One study used two different computational methods: a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100-fold specificity for production of longer-chain alcohols from sugar.[127] Another common investigation is expansion of the natural set of 20 amino acids. Excluding stop codons, 61 codons have been identified, but only 20 amino acids are generally coded in all organisms. Certain codons can be engineered to code for alternative amino acids, including nonstandard amino acids such as O-methyltyrosine, or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded nonsense suppressor tRNA/aminoacyl-tRNA synthetase pairs from other organisms, though in most cases substantial engineering is required.[128] Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids.
Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid.[129] For instance, several non-polar amino acids within a protein can all be replaced with a single non-polar amino acid.[130] One project demonstrated that an engineered version of chorismate mutase still had catalytic activity when only nine amino acids were used.[131] Researchers and companies practice synthetic biology to synthesize industrial enzymes with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost-effective.[132] The improvement of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentation chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".[133] Scientists can encode digital information onto a single strand of synthetic DNA. In 2012, George M. Church encoded one of his books about synthetic biology in DNA. The 5.3 Mb of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA.[134] A similar project encoded the complete sonnets of William Shakespeare in DNA.[135] More generally, algorithms such as NUPACK,[136] ViennaRNA,[137] the Ribosome Binding Site Calculator,[138] Cello,[139] and the Non-Repetitive Parts Calculator[140] enable the design of new genetic systems. Many technologies have been developed for incorporating unnatural nucleotides and amino acids into nucleic acids and proteins, both in vitro and in vivo.
For example, in May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA. By including the individual artificial nucleotides in the culture media, they were able to passage the bacteria 24 times; the bacteria did not generate mRNA or proteins able to use the artificial nucleotides.[141][142][143] Synthetic biology raised NASA's interest, as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth.[144][145][146] On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of crewed outposts with less dependence on Earth.[144] Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using techniques similar to those employed to increase resilience to certain environmental factors in agricultural crops.[147] One important topic in synthetic biology is synthetic life, which is concerned with hypothetical organisms created in vitro from biomolecules and/or chemical analogues thereof. Synthetic life experiments attempt to probe the origins of life, study some of the properties of life, or, more ambitiously, to recreate life from non-living (abiotic) components.
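The DNA data storage described earlier rests on mapping bits to bases. A toy sketch in Python of a hypothetical two-bits-per-base code (illustrative only; practical schemes, including Church's, add error correction and avoid problematic sequences such as long homopolymer runs):

```python
# Hypothetical 2-bits-per-base mapping, for illustration only.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Map every 2 bits of input to one nucleotide (4 bases per byte)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Invert the mapping: every 4 bases back to one byte."""
    bits = "".join(BITS_FOR_BASE[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(encode(b"Hi"))        # CAGACGGC
print(decode("CAGACGGC"))   # b'Hi'
```

At this density, the 5.3 Mb book mentioned above would occupy a few million bases, which is why such projects synthesize and sequence many short addressed fragments rather than one long strand.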
Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water.[149] In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.[149] A living "artificial cell" has been defined as a completely synthetic cell that can capture energy, maintain ion gradients, contain macromolecules, as well as store information and have the ability to mutate.[150] It has been claimed that this would be difficult,[150] although researchers have created contenders for such artificial cells.[151] A completely synthetic bacterial chromosome was produced in 2010 by Craig Venter, and his team introduced it into genomically emptied bacterial host cells.[19] The host cells were able to grow and replicate.[152][153] Mycoplasma laboratorium is the only living organism with a completely engineered genome. The first living organism with an 'artificial' expanded DNA code was presented in 2014; the team used E. coli that had its genome extracted and replaced with a chromosome with an expanded genetic code.
The nucleosides added are d5SICS and dNaM.[143] In May 2019, in a milestone effort, researchers reported the creation of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons, in order to encode 20 amino acids.[25][26] In 2017, the international Build-a-Cell large-scale open-source research collaboration for the construction of synthetic living cells was started,[154] followed by national synthetic cell organizations in several countries, including FabriCell,[155] MaxSynBio[156] and BaSyC.[157] The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative.[158] In 2023, researchers were able to create the first synthetically made human embryos derived from stem cells.[159] In therapeutics, synthetic biology has achieved significant advancements in altering and simplifying the therapeutic scope in a relatively short period of time. New therapeutic platforms, from the discovery of disease mechanisms and drug targets to the manufacture and delivery of small molecules, are made possible by the rational and model-guided design and construction of biological components.[64] Synthetic biology devices have been designed to act as therapies in therapeutic treatment. Completely engineered viruses and organisms can be controlled to target particular pathogens and diseased pathways. Thus, in two independent studies, researchers utilised genetically modified bacteriophages to fight antibiotic-resistant bacteria by giving them genetic features that specifically target and hinder bacterial defences against antibiotic activity.[160] In the therapy of cancer, since conventional medicines frequently target tumours and normal tissues indiscriminately, artificially created viruses and organisms that can identify and couple their therapeutic action to pathological signals may be helpful.
For example, p53 pathway activity in human cells was coupled to adenovirus replication to control how the viruses replicated.[160] Bacteria have long been used in cancer treatment. Bifidobacterium and Clostridium selectively colonize tumors and reduce their size.[161] Recently, synthetic biologists have reprogrammed bacteria to sense and respond to a particular cancer state. Most often, bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, peptides that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an affibody molecule that specifically targets human epidermal growth factor receptor 2[162] and a synthetic adhesin.[163] Another way is to allow bacteria to sense the tumor microenvironment, for example hypoxia, by building an AND logic gate into the bacteria.[164] The bacteria then release the target therapeutic molecules to the tumor through either lysis[165] or the bacterial secretion system.[166] Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used, as well as other strategies. The system can be made inducible by external signals; inducers include chemicals and electromagnetic or light waves. Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are Salmonella typhimurium, Escherichia coli, Bifidobacteria, Streptococcus, Lactobacillus, Listeria and Bacillus subtilis. Each of these species has its own properties and suitability for cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application. As an engineered yeast-based platform, synthetic biologists are developing genetically modified live yeast that can deliver therapeutic biologic medicines. When delivered orally, these live yeast act like micro-factories and make therapeutic molecules directly in the gastrointestinal tract.
Because yeast are eukaryotic, a key benefit is that they can be administered together with antibiotics. Probiotic yeast expressing the human P2Y2 purinergic receptor suppressed intestinal inflammation in mouse models of inflammatory bowel disease.[167] A live S. boulardii yeast delivering a tetra-specific anti-toxin that potently neutralizes Toxin A and Toxin B of Clostridioides difficile has been developed. This therapeutic anti-toxin is a fusion of four single-domain antibodies (nanobodies) that potently and broadly neutralize the two major virulence factors of C. difficile at the site of infection in preclinical models.[168] The first in-human clinical trial of engineered live yeast for the treatment of Clostridioides difficile infection is anticipated in 2024 and will be sponsored by the developer Fzata, Inc. The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on immunotherapies, mostly by engineering T cells. T cell receptors were engineered and 'trained' to detect cancer epitopes. Chimeric antigen receptors (CARs) are composed of a fragment of an antibody fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. Multiple second-generation CAR-based therapies have been approved by the FDA.[169] Gene switches were designed to enhance the safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects.[170] Other mechanisms can control the system more finely, stopping and reactivating it.[171][172] Since the number of T cells is important for therapy persistence and severity, the growth of T cells is also controlled to tune the effectiveness and safety of the therapeutics.[173] Although several mechanisms can improve safety and control, limitations include the difficulty of introducing large DNA circuits into cells and the risks associated with introducing foreign components, especially proteins, into cells.
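Gene switches like those above are often built as bistable circuits of two mutually repressing genes. A minimal sketch in Python of the classic two-repressor toggle switch model (illustrative parameter values, not tied to any particular therapeutic circuit):

```python
def toggle_switch(u, v, alpha=10.0, beta=2.0, dt=0.01, steps=5000):
    """Euler integration of a two-repressor toggle switch: each repressor
    (u, v) inhibits the other's synthesis with cooperativity beta, and
    both decay at unit rate."""
    for _ in range(steps):
        du = alpha / (1.0 + v**beta) - u
        dv = alpha / (1.0 + u**beta) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

# Bistability: the circuit "remembers" which repressor started higher,
# which is what makes such switches usable as settable on/off states.
u1, v1 = toggle_switch(u=5.0, v=0.1)   # settles in the u-high state
u2, v2 = toggle_switch(u=0.1, v=5.0)   # settles in the v-high state
assert u1 > v1 and v2 > u2
```

A transient pulse of an inducer that degrades one repressor can flip the circuit between its two stable states, which is the basic principle behind externally switchable therapy controls.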
The most popular biofuel is ethanol produced from corn or sugar cane, but this method of producing biofuels is troublesome and constrained by the high agricultural cost and inadequate fuel characteristics of ethanol. A substitute and potential source of renewable energy is microbes whose metabolic pathways have been altered to convert biomass into biofuels more efficiently. These techniques can be expected to succeed only if their production costs can be made to match, or even beat, those of present fuel production. Relatedly, there are several medicines whose expensive manufacturing procedures prevent them from having a larger therapeutic reach. The creation of new materials and the microbial manufacturing of biomaterials would both benefit substantially from novel synthetic biology tools.[160]

The clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) system is a powerful method of genome engineering in a range of organisms because of its simplicity, modularity, and scalability. In this technique, a guide RNA (gRNA) recruits the CRISPR nuclease Cas9 to a particular spot in the genome, causing a double-strand break. Several DNA repair processes, including homology-directed recombination and non-homologous end joining, can be used to accomplish the desired genome change (i.e., gene deletion or insertion). Additionally, dCas9 (dead Cas9 or nuclease-deficient Cas9), a Cas9 double mutant (H840A, D10A), has been used to control gene expression in bacteria or, when fused to activation or repression domains, in yeast.[174]

To build and develop biological systems, regulating components including promoters, ribosome-binding sites (RBSs), and terminators are crucial.
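The gRNA targeting step described above can be sketched in code: SpCas9 recognizes an 'NGG' PAM immediately downstream of the protospacer, so candidate 20-nt guides can be enumerated by scanning a sequence for PAM sites. The following is a minimal, hypothetical sketch (forward strand only; the function name is illustrative, and real guide design also scores off-target sites, GC content, and the reverse strand):

```python
import re

def find_guide_candidates(seq, guide_len=20):
    """Return (guide, PAM, PAM position) tuples for every 'NGG' PAM
    on the forward strand preceded by a full-length protospacer.
    Toy sketch: ignores the reverse strand and off-target scoring."""
    seq = seq.upper()
    candidates = []
    # Lookahead so overlapping PAM sites (e.g. in 'GGG' runs) are all found.
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = m.start(1)
        if pam_start >= guide_len:
            guide = seq[pam_start - guide_len:pam_start]
            candidates.append((guide, seq[pam_start:pam_start + 3], pam_start))
    return candidates
```

For example, `find_guide_candidates("A" * 20 + "AGG")` yields a single candidate whose guide is the 20 leading bases and whose PAM is `AGG`.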
Despite years of study, promoters and terminators of many varieties exist in large numbers for Escherichia coli, but for the well-researched model organism Saccharomyces cerevisiae, as well as for other organisms of interest, these tools remain quite scarce. Numerous techniques have been invented for the discovery and characterization of promoters and terminators in order to overcome this constraint, including genome mining, random mutagenesis, hybrid engineering, biophysical modelling, combinatorial design, and rational design.[174]

Synthetic biology has been used for organoids, which are lab-grown organs with applications in medical research and transplantation.[175] 3D bioprinting can be used to reconstruct tissue from various regions of the body. The precursor to the adoption of 3D printing in healthcare was a series of trials conducted by researchers at Boston Children's Hospital. The team built replacement urinary bladders by hand for seven patients by constructing scaffolds, then layering the scaffolds with cells from the patients and allowing them to grow. The trials were a success, as the patients remained in good health 7 years after implantation, which led a research fellow named Anthony Atala, MD, to search for ways to automate the process.[176] Patients with end-stage bladder disease can now be treated by using bio-engineered bladder tissues to rebuild the damaged organ.[177] This technology can also potentially be applied to bone, skin, cartilage and muscle tissue.[178] One long-term goal of 3D bioprinting technology is to reconstruct an entire organ and thereby reduce the shortage of organs for transplantation,[179] but there has been little success in bioprinting fully functional organs such as the liver, skin, meniscus or pancreas.[180][181][182] Unlike implantable stents, organs have complex shapes and are significantly harder to bioprint.
A bioprinted heart, for example, must meet not only structural requirements but also vascularization, mechanical-load, and electrical-signal-propagation requirements.[183] In 2022, the first success of a clinical trial for a 3D bioprinted transplant made from the patient's own cells, an external ear to treat microtia,[184] was reported.[185]

3D bioprinting contributes to significant advances in the medical field of tissue engineering by allowing research on innovative materials called biomaterials. Some of the most notable bioengineered substances are usually stronger than the average bodily materials, including soft tissue and bone. These constituents can act as future substitutes, even improvements, for the original body materials. In addition, the Defense Threat Reduction Agency aims to print mini organs such as hearts, livers, and lungs, with the potential to test new drugs more accurately and perhaps eliminate the need for testing in animals.[186] For bioprinted food like meat, see § Food and drink.

There is ongoing research and development into synthetic biology-based methods for inducing regeneration in humans, as well as the creation of transplantable artificial organs. Synthetic biology can be used for creating nanoparticles for drug delivery as well as for other purposes.[187] Complementing research and development seeks to create, and has created, synthetic cells that mimic the functions of biological cells.
Applications include medicine, such as designer nanoparticles that make blood cells eat away—from the inside out—portions of atherosclerotic plaque that cause heart attacks.[188][189][190] Synthetic micro-droplets for algal cells or synergistic algal-bacterial multicellular spheroid microbial reactors, for example, could be used to produce hydrogen as hydrogen-economy biotechnology.[191][192]

Mammalian designer cells are engineered by humans to behave in a specific way, such as an immune cell that expresses a synthetic receptor designed to combat a specific disease.[193][194] Electrogenetics is an application of synthetic biology that involves using electrical fields to stimulate a response in engineered cells.[195] Controlling the designer cells can be done with relative ease through common electronic devices, such as smartphones. Additionally, electrogenetics allows for devices that are much smaller and more compact than devices using other stimuli, through the use of microscopic electrodes.[195] One example of how electrogenetics is used to benefit public health is through stimulating designer cells that can produce and deliver therapeutics.[196] This was implemented in ElectroHEK cells, which contain voltage-gated calcium channels that are electrosensitive, meaning that the ion channel can be controlled by electrical conduction between electrodes and the ElectroHEK cells.[195] The expression level of the artificial gene that these ElectroHEK cells contained could be controlled by changing the voltage or the electrical pulse length.
Further studies have expanded on this robust system, one of which is a beta cell line system designed to control the release of insulin based on electric signals.[197]

The creation of new life and the tampering with existing life have raised ethical concerns in the field of synthetic biology and are actively being discussed.[198][199]

Common ethical questions include:

The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms.[201] Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.[202][198]

Ethical issues have surfaced for recombinant DNA and genetically modified organism (GMO) technologies, and extensive regulations of genetic engineering and pathogen research were already in place in many jurisdictions. Amy Gutmann, former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."[203]

One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is at a small scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies.[198] Many advocates express the great potential value—to agriculture, medicine, and academic knowledge, among other fields—of creating artificial life forms.
Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature's "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially encourage the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were released into nature, it could hamper biodiversity by out-competing natural species for resources (similar to how algal blooms kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to be capable of pain, sentience, and self-perception. There is an ongoing debate as to whether such life forms should be granted moral or legal rights, though no consensus exists as to how these rights would be administered or enforced.

Ethics and moral rationales that support certain applications of synthetic biology include their potential to mitigate substantial global problems: the detrimental environmental impacts of conventional agriculture (including meat production), animal welfare, food security, and human health,[204][205][206][207] as well as the potential reduction of human labor needs and, via therapies for diseases, the reduction of human suffering and prolonging of life.

What is most ethically appropriate when considering biosafety measures? How can the accidental introduction of synthetic life into the natural environment be avoided? Much ethical consideration and critical thought has been given to these questions. Biosafety not only refers to biological containment; it also refers to strides taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concern for biological safety or negative consequences for the environment.
It is argued that most synthetic technologies are benign and are incapable of flourishing in the outside world due to their "unnatural" characteristics, as there has yet to be an example of a transgenic microbe conferred with a fitness advantage in the wild. In general, existing hazard controls, risk assessment methodologies, and regulations developed for traditional genetically modified organisms (GMOs) are considered sufficient for synthetic organisms. "Extrinsic" biocontainment methods in a laboratory context include physical containment through biosafety cabinets and gloveboxes, as well as personal protective equipment. In an agricultural context, they include isolation distances and pollen barriers, similar to methods for biocontainment of GMOs. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or that prevent horizontal gene transfer to natural organisms. Examples of intrinsic biocontainment include auxotrophy, biological kill switches, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of xenobiological organisms with alternative biochemistry, for example using artificial xeno nucleic acids (XNA) instead of DNA.[208][209]

Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical and biosecurity issues, humanity must consider and plan how to deal with potentially harmful creations, and what kinds of ethical measures could be employed to deter nefarious biosynthetic technologies.
With the exception of regulating synthetic biology and biotechnology companies,[210][211] however, the issues are not seen as new, because they were raised during the earlier recombinant DNA and genetically modified organism (GMO) debates, and extensive regulations of genetic engineering and pathogen research are already in place in many jurisdictions.[212]

Additionally, the development of synthetic biology tools has made it easier for individuals with less education, training, and access to equipment to modify and use pathogenic organisms as bioweapons. This increases the threat of bioterrorism, especially as terrorist groups become aware of the significant social, economic, and political disruption caused by pandemics like COVID-19. As new techniques are developed in the field of synthetic biology, the risk of bioterrorism is likely to continue to grow.[213] Juan Zarate, who served as Deputy National Security Advisor for Combating Terrorism from 2005 to 2009, noted that "the severity and extreme disruption of a novel coronavirus will likely spur the imagination of the most creative and dangerous groups and individuals to reconsider bioterrorist attacks."[214]

The European Union-funded project SYNBIOSAFE[215] has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics, and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists.[216][217] The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the biohacking community of amateur biologists. Key ethical issues concerned the creation of new life forms. A subsequent report focused on biosecurity, especially the so-called dual-use challenge.
For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., smallpox).[218] The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.[219]

COSY, another European initiative, focuses on public perception and communication.[220][221][222] To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published SYNBIOSAFE, a 38-minute documentary film, in October 2009.[215]

The International Association Synthetic Biology has proposed self-regulation.[223] This proposes specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".[210]

In January 2009, the Alfred P. Sloan Foundation funded the Woodrow Wilson Center, the Hastings Center, and the J. Craig Venter Institute to examine the public perception, ethics and policy implications of synthetic biology.[224]

On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".[225]

After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology.[226] The commission convened a series of meetings and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies".
The commission stated that "while Venter's achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'".[227] It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education.[212]

Synthetic biology, as a major tool for biological advances, results in the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact".[228] The proliferation of such technology could also make the production of biological and chemical weapons available to a wider array of state and non-state actors.[229] These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public".[228]

On March 13, 2012, over 100 environmental and civil society groups, including Friends of the Earth, the International Center for Technology Assessment, and the ETC Group, issued the manifesto The Principles for the Oversight of Synthetic Biology. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established.
The groups specifically call for an outright ban on the use of synthetic biology on the human genome or human microbiome.[230][231] Richard Lewontin wrote that some of the safety tenets for oversight discussed in The Principles for the Oversight of Synthetic Biology are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".[232]

The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards.[233] The biosafety hazards are similar to those of existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may pose novel risks.[208] For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals.[234] Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms.[235][236] Synthetic biology is an example of a dual-use technology with the potential to be used in ways that could intentionally or unintentionally harm humans and/or damage the environment.
Often "scientists, their host institutions and funding bodies" consider whether the planned research could be misused and sometimes implement measures to reduce the likelihood of misuse.[237] Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences.[209][238]Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.[239][240]
https://en.wikipedia.org/wiki/Synthetic_biology
Linear optical quantum computing or linear optics quantum computation (LOQC), also photonic quantum computing (PQC), is a paradigm of quantum computation allowing (under certain conditions, described below) universal quantum computation. LOQC uses photons as information carriers, mainly uses linear optical elements, or optical instruments (including reciprocal mirrors and waveplates), to process quantum information, and uses photon detectors and quantum memories to detect and store quantum information.[1][2][3]

Although there are many other implementations for quantum information processing (QIP) and quantum computation, optical quantum systems are prominent candidates, since they link quantum computation and quantum communication in the same framework. In optical systems for quantum information processing, the unit of light in a given mode—or photon—is used to represent a qubit. Superpositions of quantum states can be easily represented, encrypted, transmitted and detected using photons. Besides, linear optical elements of optical systems may be the simplest building blocks to realize quantum operations and quantum gates. Each linear optical element equivalently applies a unitary transformation on a finite number of qubits. The system of finite linear optical elements constructs a network of linear optics, which can realize any quantum circuit diagram or quantum network based on the quantum circuit model. Quantum computing with continuous variables is also possible under the linear optics scheme.[4]

The universality of 1- and 2-qubit gates to implement arbitrary quantum computation has been proven.[5][6][7][8] Unitary matrix operations of size N×N (elements of U(N)) can be realized using only mirrors, beam splitters and phase shifters[9] (this is also a starting point of boson sampling and of computational-complexity analysis for LOQC).
It follows that each U(N) operator with N inputs and N outputs can be constructed via O(N²) linear optical elements. Based on these universality and complexity results, LOQC usually uses only mirrors, beam splitters, phase shifters and their combinations, such as Mach–Zehnder interferometers with phase shifts, to implement arbitrary quantum operators. If a non-deterministic scheme is used, this fact also implies that LOQC could be resource-inefficient in terms of the number of optical elements and time steps needed to implement a certain quantum gate or circuit, which is a major drawback of LOQC.

Operations via linear optical elements (beam splitters, mirrors and phase shifters, in this case) preserve the photon statistics of the input light. For example, a coherent (classical) light input produces a coherent light output; a superposition of quantum states at the input yields a quantum light state at the output.[3] For this reason, the single-photon-source case is usually used to analyze the effect of linear optical elements and operators; multi-photon cases can be treated through statistical transformations.

An intrinsic problem in using photons as information carriers is that photons hardly interact with each other. This potentially causes a scalability problem for LOQC, since nonlinear operations are hard to implement, which can increase the complexity of operators and hence the resources required to realize a given computational function. One way to solve this problem is to bring nonlinear devices into the quantum network. For instance, the Kerr effect can be applied in LOQC to make a single-photon controlled-NOT and other operations.[10][11] It was believed that adding nonlinearity to the linear optical network was sufficient to realize efficient quantum computation.[12] However, implementing nonlinear optical effects is a difficult task.
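As a concrete instance of building a tunable element from the fixed ones mentioned above, a Mach–Zehnder interferometer is two symmetric 50:50 beam splitters with a phase shifter on one internal arm; varying the internal phase φ steers a single photon between the two output modes with probabilities sin²(φ/2) and cos²(φ/2). A numerical sketch (using one common sign convention for the beam splitter; conventions vary between texts):

```python
import numpy as np

def bs50():
    """Symmetric 50:50 beam splitter (one common sign convention)."""
    c = 1 / np.sqrt(2)
    return np.array([[c, -1j * c], [-1j * c, c]])

def mzi(phi):
    """Mach-Zehnder interferometer: splitter, internal phase, splitter."""
    phase = np.diag([1.0, np.exp(1j * phi)])
    return bs50() @ phase @ bs50()

# Probability that a photon entering mode 0 exits mode 0, as phi varies:
for phi in (0.0, np.pi / 2, np.pi):
    p_stay = abs(mzi(phi)[0, 0]) ** 2
    print(round(p_stay, 3))   # sin^2(phi/2): 0.0, 0.5, 1.0
```

Since each factor is unitary, the composite `mzi(phi)` is unitary for every φ, illustrating how fixed linear elements compose into a continuously tunable single-qubit operation.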
In 2000, Knill, Laflamme and Milburn proved that it is possible to create universal quantum computers solely with linear optical tools.[2] Their work has become known as the "KLM scheme" or "KLM protocol", which uses linear optical elements, single-photon sources and photon detectors as resources to construct a quantum computation scheme involving only ancilla resources, quantum teleportations and error corrections. It offers another route to efficient quantum computation with linear optical systems, and implements nonlinear operations using only linear optical elements.[3]

At its root, the KLM scheme induces an effective interaction between photons by making projective measurements with photodetectors, which falls into the category of non-deterministic quantum computation. It is based on a nonlinear sign shift between two qubits that uses two ancilla photons and post-selection.[13] It also relies on the demonstrations that the probability of success of the quantum gates can be made close to one by using entangled states prepared non-deterministically and quantum teleportation with single-qubit operations.[14][15] Otherwise, without a high enough success rate for a single quantum gate unit, an exponential amount of computing resources may be required. The KLM scheme also relies on the fact that proper quantum coding can reduce the resources for obtaining accurately encoded qubits efficiently with respect to the accuracy achieved, and can make LOQC fault-tolerant against photon loss, detector inefficiency and phase decoherence. As a result, LOQC can be robustly implemented through the KLM scheme with a resource requirement low enough to suggest practical scalability, making it as promising a technology for QIP as other known implementations.
The more limited boson sampling model was suggested and analyzed by Aaronson and Arkhipov in 2010.[16] It is not believed to be universal,[16] but can still solve problems that are believed to be beyond the ability of classical computers, such as the boson sampling problem. On 3 December 2020, a team led by Chinese physicists Pan Jianwei (潘建伟) and Lu Chaoyang (陆朝阳) from the University of Science and Technology of China in Hefei, Anhui Province, submitted results to Science in which they solved a problem that is virtually unassailable by any classical computer, thereby demonstrating quantum supremacy of their photon-based quantum computer, called the Jiu Zhang quantum computer (九章量子计算机).[17] The boson sampling problem was solved in 200 seconds; they estimated that China's Sunway TaihuLight supercomputer would take 2.5 billion years to solve it, a speedup factor of around 10^14. Jiu Zhang was named in honor of China's oldest surviving mathematical text, The Nine Chapters on the Mathematical Art (Jiǔ zhāng suàn shù).[18]

DiVincenzo's criteria for quantum computation and QIP[19][20] state that a universal system for QIP should satisfy at least the following requirements:

As a result of using photons and linear optical circuits, LOQC systems can in general easily satisfy conditions 3, 6 and 7.[3] The following sections mainly focus on the implementations of quantum information preparation, readout, manipulation, scalability and error correction, in order to discuss the advantages and disadvantages of LOQC as a candidate for QIP.

A qubit is one of the fundamental QIP units.
A qubit state that can be represented by α|0⟩ + β|1⟩ is a superposition state which, if measured in the orthonormal basis {|0⟩, |1⟩}, has probability |α|² of being in the |0⟩ state and probability |β|² of being in the |1⟩ state, where |α|² + |β|² = 1 is the normalization condition. An optical mode is a distinguishable optical communication channel, which is usually labeled by subscripts of a quantum state. There are many ways to define distinguishable optical communication channels. For example, a set of modes could be different polarizations of light which can be picked out with linear optical elements, various frequencies, or a combination of the two cases above.

In the KLM protocol, each of the photons is usually in one of two modes, and the modes differ between the photons (the possibility that a mode is occupied by more than one photon is zero). This is not the case only during implementations of controlled quantum gates such as CNOT. When the state of the system is as described, the photons can be distinguished, since they are in different modes, and therefore a qubit state can be represented using a single photon in two modes, vertical (V) and horizontal (H): for example, |0⟩ ≡ |0,1⟩_VH and |1⟩ ≡ |1,0⟩_VH. It is common to refer to the states defined via occupation of modes as Fock states.

In boson sampling, photons are not distinguished, and therefore cannot directly represent the qubit state.
Instead, we represent the qubit state of the entire quantum system by using the Fock states of M modes which are occupied by N indistinguishable single photons (this is a (M+N−1 choose N)-level quantum system).

To prepare a desired multi-photon quantum state for LOQC, a single-photon state is first required. Therefore, non-linear optical elements, such as single-photon generators and some optical modules, will be employed. For example, optical parametric down-conversion can be used to conditionally generate the |1⟩ ≡ |1,0⟩_VH state in the vertical polarization channel at time t (subscripts are ignored for this single-qubit case). By using a conditional single-photon source, the output state is guaranteed, although this may require several attempts (depending on the success rate). A joint multi-qubit state can be prepared in a similar way. In general, an arbitrary quantum state can be generated for QIP with a proper set of photon sources.

To achieve universal quantum computing, LOQC should be capable of realizing a complete set of universal gates. This can be achieved in the KLM protocol but not in the boson sampling model. Ignoring error correction and other issues, the basic principle in implementations of elementary quantum gates using only mirrors, beam splitters and phase shifters is that by using these linear optical elements, one can construct any arbitrary 1-qubit unitary operation; in other words, those linear optical elements support a complete set of operators on any single qubit.

The unitary matrix associated with a beam splitter B_{θ,φ} is

U(B_{θ,φ}) = [[cos θ, −e^{iφ} sin θ], [e^{−iφ} sin θ, cos θ]],

where θ and φ are determined by the reflection amplitude r and the transmission amplitude t (the relationship will be given later for a simpler case).
For a symmetric beam splitter, which has a phase shift φ = π/2, under the unitary transformation conditions |t|² + |r|² = 1 and t*r + tr* = 0 one can show that

U(B_{θ,π/2}) = [[cos θ, −i sin θ], [−i sin θ, cos θ]],

which is a rotation of the single-qubit state about the x-axis by 2θ = 2cos⁻¹(|t|) in the Bloch sphere. A mirror is a special case where the reflectance is 1, so that the corresponding unitary operator is a rotation matrix given by

R(θ) = [[cos θ, −sin θ], [sin θ, cos θ]].

For most cases of mirrors used in QIP, the incident angle is θ = 45°. Similarly, a phase shifter operator P_φ is associated with a unitary operator described by U(P_φ) = e^{iφ}, or, if written in a 2-mode format,

U(P_φ) = [[1, 0], [0, e^{iφ}]],

which is equivalent to a rotation of −φ about the z-axis. Since any two SU(2) rotations along orthogonal rotation axes can generate arbitrary rotations in the Bloch sphere, one can use a set of symmetric beam splitters and mirrors to realize arbitrary SU(2) operators for QIP. The figures below are examples of implementing a Hadamard gate and a Pauli-X gate (NOT gate) by using beam splitters (illustrated as rectangles connecting two sets of crossing lines with parameters θ and φ) and mirrors (illustrated as rectangles connecting two sets of crossing lines with parameter R(θ)).

In the above figures, a qubit is encoded using two mode channels (horizontal lines): |0⟩ represents a photon in the top mode, and |1⟩ represents a photon in the bottom mode.
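The single-qubit constructions above can be checked numerically. Assuming the beam-splitter convention B_{θ,φ} = [[cos θ, −e^{iφ} sin θ], [e^{−iφ} sin θ, cos θ]] and a phase shifter acting as diag(1, e^{iφ}) on the second mode (a common convention; sign choices vary between texts), a symmetric 50:50 beam splitter sandwiched between two π/2 phase shifters reproduces the Hadamard gate exactly:

```python
import numpy as np

def beam_splitter(theta, phi):
    """B_{theta,phi}; one common sign convention among several."""
    return np.array([[np.cos(theta), -np.exp(1j * phi) * np.sin(theta)],
                     [np.exp(-1j * phi) * np.sin(theta), np.cos(theta)]])

def phase_shifter(phi):
    """Phase shift applied to the second (bottom) mode."""
    return np.diag([1.0, np.exp(1j * phi)])

B = beam_splitter(np.pi / 4, np.pi / 2)   # symmetric 50:50 beam splitter
P = phase_shifter(np.pi / 2)

H = P @ B @ P                             # phase, split, phase
target = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

assert np.allclose(B @ B.conj().T, np.eye(2))  # each element is unitary
assert np.allclose(H, target)                  # the Hadamard gate
```

With different θ and φ choices, the same three-element pattern yields any single-qubit rotation, in line with the SU(2) argument above.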
In reality, assembling such a large number (possibly on the order of 10⁴[21]) of beam splitters and phase shifters on an optical table is challenging and unrealistic. To make LOQC functional, useful and compact, one solution is to miniaturize all linear optical elements, photon sources and photon detectors, and to integrate them onto a chip. If a semiconductor platform is used, single-photon sources and photon detectors can be easily integrated. To separate modes, arrayed waveguide gratings (AWGs), which are commonly used as optical (de)multiplexers in wavelength-division multiplexing (WDM), have been integrated. In principle, beam splitters and other linear optical elements can also be miniaturized or replaced by equivalent nanophotonic elements. Some progress in these endeavors can be found in the literature, for example, Refs.[22][23][24] In 2013, the first integrated photonic circuit for quantum information processing was demonstrated using a photonic-crystal waveguide to realize the interaction between the guided field and atoms.[25] The advantage of the KLM protocol over the boson sampling model is that while the KLM protocol is a universal model, boson sampling is not believed to be universal. On the other hand, it seems that the scalability issues in boson sampling are more manageable than those in the KLM protocol. In boson sampling only a single measurement is allowed, a measurement of all the modes at the end of the computation. The only scalability problem in this model arises from the requirement that all the photons arrive at the photon detectors within a short-enough time interval and with close-enough frequencies.[16] In the KLM protocol, there are non-deterministic quantum gates, which are essential for the model to be universal. These rely on gate teleportation, where multiple probabilistic gates are prepared offline and additional measurements are performed mid-circuit. 
These two factors cause additional scalability problems in the KLM protocol. In the KLM protocol the desired initial state is one in which each of the photons is in one of two modes, and the possibility that a mode is occupied by more than one photon is zero. In boson sampling, however, the desired initial state is specific, requiring that the first N modes are each occupied by a single photon[16] (N is the number of photons and M ≥ N is the number of modes) and all the other modes are empty. Another, earlier model which relies on the representation of several qubits by a single photon is based on the work of C. Adami and N. J. Cerf.[1] By using both the location and the polarization of photons, a single photon in this model can represent several qubits; however, as a result, a CNOT gate can only be implemented between two qubits represented by the same photon. The figures below are examples of making an equivalent Hadamard gate and CNOT gate using beam splitters (illustrated as rectangles connecting two sets of crossing lines, with parameters θ and φ) and phase shifters (illustrated as rectangles on a line, with parameter φ). In the optical realization of the CNOT gate, the polarization and location are the control and target qubit, respectively.
https://en.wikipedia.org/wiki/Linear_optical_quantum_computing
In integrated circuits, optical interconnects refers to any system of transmitting signals from one part of an integrated circuit to another using light. Optical interconnects have been a topic of study due to the high latency and power consumption incurred by conventional metal interconnects in transmitting electrical signals over long distances, such as in interconnects classed as global interconnects. The International Technology Roadmap for Semiconductors (ITRS) has highlighted interconnect scaling as a problem for the semiconductor industry. In electrical interconnects, nonlinear signals (e.g. digital signals) are conventionally transmitted by copper wires, and these wires all have resistance and capacitance, which severely limit the rise time of signals when the dimensions of the wires are scaled down. Optical solutions are used to transmit signals over long distances, substituting for interconnections between dies within the integrated-circuit (IC) package. To control the optical signals inside the small IC package properly, microelectromechanical systems (MEMS) technology can be used to integrate the optical components (i.e. optical waveguides, optical fibers, lenses, mirrors, optical actuators, optical sensors, etc.) and the electronic parts together effectively. Conventional metal wires possess both resistance and capacitance, limiting the rise time of signals: bits of information will overlap with each other when the signal frequency is increased beyond a certain level.[1] Optical interconnections can provide benefits over conventional metal wires which include:[1] However, there are still many technical challenges in implementing dense optical interconnects to silicon CMOS chips. These challenges are listed below:[2]
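The RC limit on rise time can be illustrated with a back-of-the-envelope distributed-RC (Elmore) delay estimate. This is a sketch under stated assumptions: the 0.38·R·C prefactor is the usual distributed-line approximation, the capacitance per unit length is an assumed round number, and the function name is ours:

```python
def rc_delay(length_m, width_m, thickness_m,
             resistivity=1.7e-8,   # copper, ohm*m
             cap_per_m=2e-10):     # assumed ~0.2 fF/um wire capacitance
    """Distributed-RC (Elmore) delay estimate: 0.38 * R * C."""
    R = resistivity * length_m / (width_m * thickness_m)  # wire resistance
    C = cap_per_m * length_m                              # wire capacitance
    return 0.38 * R * C

# Both R and C scale linearly with length, so the delay grows quadratically:
# doubling the wire length quadruples the delay.
d1 = rc_delay(1e-3, 100e-9, 100e-9)
d2 = rc_delay(2e-3, 100e-9, 100e-9)
print(d2 / d1)  # 4.0
```

This quadratic scaling with length, and the 1/(W·H) growth of resistance as cross-sections shrink, is exactly why long global interconnects motivate optical alternatives.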
https://en.wikipedia.org/wiki/Optical_interconnect
A photonic crystal is an optical nanostructure in which the refractive index changes periodically. This affects the propagation of light in the same way that the structure of natural crystals gives rise to X-ray diffraction, and that the atomic lattices (crystal structure) of semiconductors affect their conduction of electrons. Photonic crystals occur in nature in the form of structural coloration and animal reflectors, and, as artificially produced, promise to be useful in a range of applications. Photonic crystals can be fabricated in one, two, or three dimensions. One-dimensional photonic crystals can be made of thin-film layers deposited on each other. Two-dimensional ones can be made by photolithography, or by drilling holes in a suitable substrate. Fabrication methods for three-dimensional ones include drilling under different angles, stacking multiple 2-D layers on top of each other, direct laser writing, or, for example, instigating self-assembly of spheres in a matrix and dissolving the spheres. Photonic crystals can, in principle, find uses wherever light must be manipulated. For example, dielectric mirrors are one-dimensional photonic crystals which can produce ultra-high-reflectivity mirrors at a specified wavelength. Two-dimensional photonic crystals called photonic-crystal fibers are used for fiber-optic communication, among other applications. Three-dimensional crystals may one day be used in optical computers, and could lead to more efficient photovoltaic cells.[3] Although the energy of light (and all electromagnetic radiation) is quantized in units called photons, the analysis of photonic crystals requires only classical physics. "Photonic" in the name is a reference to photonics, a modern designation for the study of light (optics) and optical engineering. 
Indeed, the first research into what we now call photonic crystals may have been as early as 1887, when the English physicist Lord Rayleigh experimented with periodic multi-layer dielectric stacks, showing they can effect a photonic band gap in one dimension. Research interest grew with work in 1987 by Eli Yablonovitch and Sajeev John on periodic optical structures with more than one dimension, now called photonic crystals. Photonic crystals are composed of periodic dielectric, metallo-dielectric, or even superconductor microstructures or nanostructures that affect electromagnetic wave propagation in the same way that the periodic potential in a semiconductor crystal affects the propagation of electrons, determining allowed and forbidden electronic energy bands. Photonic crystals contain regularly repeating regions of high and low refractive index. Light waves may propagate through this structure, or propagation may be disallowed, depending on their wavelength. Wavelengths that may propagate in a given direction are called modes, and the ranges of wavelengths which propagate are called bands. Disallowed bands of wavelengths are called photonic band gaps. This gives rise to distinct optical phenomena, such as inhibition of spontaneous emission,[4] high-reflecting omni-directional mirrors, and low-loss waveguiding. The band gap of photonic crystals can be understood as the destructive interference of multiple reflections of light propagating in the crystal at each interface between layers of high- and low-refractive-index regions, akin to the band gaps of electrons in solids. There are two strategies for opening up a complete photonic band gap. The first is to increase the refractive-index contrast, so that the band gap in each direction becomes wider; the second is to make the Brillouin zone more nearly spherical.[5] However, the former is limited by the available technologies and materials, and the latter is restricted by the crystallographic restriction theorem. 
For this reason, the photonic crystals with a complete band gap demonstrated to date have a face-centered-cubic lattice, whose Brillouin zone is the most nearly spherical, and are made of high-refractive-index semiconductor materials. Another approach is to exploit quasicrystalline structures, which are not subject to crystallographic limits. A complete photonic band gap was reported for low-index polymer quasicrystalline samples manufactured by 3D printing.[6] The periodicity of the photonic crystal structure must be around or greater than half the wavelength (in the medium) of the light waves in order for interference effects to be exhibited. Visible light ranges in wavelength from about 400 nm (violet) to about 700 nm (red), and the resulting wavelength inside a material is obtained by dividing that by the average index of refraction. The repeating regions of high and low dielectric constant must, therefore, be fabricated at this scale. In one dimension, this is routinely accomplished using the techniques of thin-film deposition. Photonic crystals have been studied in one form or another since 1887, but no one used the term photonic crystal until over 100 years later, after Eli Yablonovitch and Sajeev John published two milestone papers on photonic crystals in 1987.[4][7] The early history is well documented in the form of a story, when it was identified as one of the landmark developments in physics by the American Physical Society.[8] Before 1987, one-dimensional photonic crystals in the form of periodic multi-layer dielectric stacks (such as the Bragg mirror) were studied extensively. Lord Rayleigh started their study in 1887,[9] by showing that such systems have a one-dimensional photonic band gap, a spectral range of large reflectivity, known as a stop band. Today, such structures are used in a diverse range of applications, from reflective coatings to enhancing LED efficiency to highly reflective mirrors in certain laser cavities (see, for example, VCSEL). 
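The feature-size estimate above (half the wavelength in the medium) is a one-line calculation. A minimal sketch; the function name and the example average index of 1.75 are illustrative assumptions, not values from the text:

```python
def required_period_nm(vacuum_wavelength_nm, avg_index):
    """Minimum structural periodicity ~ half the wavelength *in the medium*:
    (lambda_vacuum / n_avg) / 2."""
    return (vacuum_wavelength_nm / avg_index) / 2

# For 550 nm green light in a structure with an assumed average index of 1.75,
# the repeating regions must be patterned at a ~157 nm scale.
print(round(required_period_nm(550, 1.75)))  # 157
```

This is why optical-scale photonic crystals demand nanofabrication, while the same physics can be tested at the centimetre scale with microwaves.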
The pass bands and stop bands in photonic crystals were first reduced to practice by Melvin M. Weiner,[10] who called those crystals "discrete phase-ordered media." Weiner achieved those results by extending Darwin's[11] dynamical theory for X-ray Bragg diffraction to arbitrary wavelengths, angles of incidence, and cases where the incident wavefront at a lattice plane is scattered appreciably in the forward-scattered direction. A detailed theoretical study of one-dimensional optical structures was performed by Vladimir P. Bykov,[12] who was the first to investigate the effect of a photonic band gap on the spontaneous emission from atoms and molecules embedded within the photonic structure. Bykov also speculated as to what could happen if two- or three-dimensional periodic optical structures were used.[13] The concept of three-dimensional photonic crystals was then discussed by Ohtaka in 1979,[14] who also developed a formalism for the calculation of the photonic band structure. However, these ideas did not take off until after the publication of two milestone papers in 1987 by Yablonovitch and John. Both these papers concerned high-dimensional periodic optical structures, i.e., photonic crystals. Yablonovitch's main goal was to engineer the photonic density of states to control the spontaneous emission of materials embedded in the photonic crystal. John's idea was to use photonic crystals to affect the localisation and control of light. After 1987, the number of research papers concerning photonic crystals began to grow exponentially. However, due to the difficulty of fabricating these structures at optical scales (see Fabrication challenges), early studies were either theoretical or in the microwave regime, where photonic crystals can be built on the more accessible centimetre scale. (This fact is due to a property of electromagnetic fields known as scale invariance. 
In essence, electromagnetic fields, as solutions to Maxwell's equations, have no natural length scale, so solutions for centimetre-scale structures at microwave frequencies are the same as for nanometre-scale structures at optical frequencies.) By 1991, Yablonovitch had demonstrated the first three-dimensional photonic band gap in the microwave regime.[5] The structure that Yablonovitch was able to produce involved drilling an array of holes in a transparent material, where the holes of each layer form an inverse diamond structure; today it is known as Yablonovite. In 1996, Thomas Krauss demonstrated a two-dimensional photonic crystal at optical wavelengths.[15] This opened the way to fabricating photonic crystals in semiconductor materials by borrowing methods from the semiconductor industry. Pavel Cheben demonstrated a new type of photonic crystal waveguide, the subwavelength grating (SWG) waveguide.[16][17] The SWG waveguide operates in the subwavelength region, away from the band gap. It allows the waveguide properties to be controlled directly by nanoscale engineering of the resulting metamaterial while mitigating wave-interference effects. This provided "a missing degree of freedom in photonics"[18] and resolved an important limitation in silicon photonics, namely its restricted set of available materials, insufficient to achieve complex optical on-chip functions.[19][20] Today, such techniques use photonic crystal slabs, which are two-dimensional photonic crystals "etched" into slabs of semiconductor. Total internal reflection confines light to the slab, and allows photonic crystal effects, such as engineering the photonic dispersion in the slab. Researchers around the world are looking for ways to use photonic crystal slabs in integrated computer chips, to improve the optical processing of communications, both on-chip and between chips.[citation needed] The autocloning fabrication technique, proposed for infrared and visible-range photonic crystals by Sato et al. 
in 2002, uses electron-beam lithography and dry etching: lithographically formed layers of periodic grooves are stacked by regulated sputter deposition and etching, resulting in "stationary corrugations" and periodicity. Titanium dioxide/silica and tantalum pentoxide/silica devices were produced, exploiting their dispersion characteristics and suitability for sputter deposition.[21] Such techniques have yet to mature into commercial applications, but two-dimensional photonic crystals are commercially used in photonic crystal fibres[22] (otherwise known as holey fibres, because of the air holes that run through them). Photonic crystal fibres were first developed by Philip Russell in 1998, and can be designed to possess enhanced properties over (normal) optical fibres. Study has proceeded more slowly in three-dimensional than in two-dimensional photonic crystals, because of the more difficult fabrication.[22] Three-dimensional photonic crystal fabrication had no inheritable semiconductor-industry techniques to draw on. Attempts have been made, however, to adapt some of the same techniques, and quite advanced examples have been demonstrated,[23] for example in the construction of "woodpile" structures built on a planar layer-by-layer basis. 
Another strand of research has tried to construct three-dimensional photonic structures from self-assembly, essentially letting a mixture of dielectric nanospheres settle from solution into three-dimensionally periodic structures that have photonic band gaps. Vasily Astratov's group from the Ioffe Institute realized in 1995 that natural and synthetic opals are photonic crystals with an incomplete band gap.[24] The first demonstration of an "inverse opal" structure with a complete photonic band gap came in 2000, from researchers at the University of Toronto and the Institute of Materials Science of Madrid (ICMM-CSIC), Spain.[25] The ever-expanding field of natural photonics, bioinspiration and biomimetics, the study of natural structures to better understand and use them in design, is also helping researchers in photonic crystals.[26][27][28][29] For example, in 2006 a naturally occurring photonic crystal was discovered in the scales of a Brazilian beetle.[30] Analogously, in 2012 a diamond crystal structure was found in a weevil[31][32] and a gyroid-type architecture in a butterfly.[33] More recently, gyroid photonic crystals have been found in the feather barbs of blue-winged leafbirds and are responsible for the birds' shimmery blue coloration.[34] Some publications suggest the feasibility of a complete photonic band gap in the visible range in photonic crystals with optically saturated media, which could be implemented by using laser light as an external optical pump.[35] The fabrication method depends on the number of dimensions in which the photonic band gap must exist. To produce a one-dimensional photonic crystal, thin-film layers of different dielectric constant may be periodically deposited on a surface, which leads to a band gap in a particular propagation direction (such as normal to the surface). A Bragg grating is an example of this type of photonic crystal. 
One-dimensional photonic crystals can include layers of non-linear optical materials in which the non-linear behaviour is accentuated due to field enhancement at wavelengths near a so-called degenerate band edge. This field enhancement (in terms of intensity) can reach N², where N is the total number of layers. However, by using layers which include an optically anisotropic material, it has been shown that the field enhancement can reach N⁴, which, in conjunction with non-linear optics, has potential applications such as the development of an all-optical switch.[36] A one-dimensional photonic crystal can be implemented using repeated alternating layers of a metamaterial and vacuum.[37] If the metamaterial is such that the relative permittivity and permeability follow the same wavelength dependence, then the photonic crystal behaves identically for TE and TM modes, that is, for both s and p polarizations of light incident at an angle. Recently, researchers fabricated a graphene-based Bragg grating (a one-dimensional photonic crystal) and demonstrated that it supports the excitation of surface electromagnetic waves in the periodic structure, using a 633 nm He-Ne laser as the light source.[38] In addition, a novel type of one-dimensional graphene-dielectric photonic crystal has been proposed. 
This structure can act as a far-IR filter and can support low-loss surface plasmons for waveguide and sensing applications.[39] 1D photonic crystals doped with bio-active metals (i.e. silver) have also been proposed as sensing devices for bacterial contaminants.[40] Similar planar 1D photonic crystals made of polymers have been used to detect volatile-organic-compound vapors in the atmosphere.[41][42] In addition to solid-phase photonic crystals, some liquid crystals with defined ordering can demonstrate photonic color.[43] For example, studies have shown that several liquid crystals with short- or long-range one-dimensional positional ordering can form photonic structures.[43] In two dimensions, holes may be drilled in a substrate that is transparent to the wavelength of radiation that the band gap is designed to block. Triangular and square lattices of holes have been successfully employed. The holey fiber or photonic crystal fiber can be made by taking cylindrical rods of glass in a hexagonal lattice and then heating and stretching them; the triangle-like air gaps between the glass rods become the holes that confine the modes. There are several structure types that have been constructed:[citation needed] Beyond the band gap, photonic crystals may exhibit another effect if the symmetry is partially broken by the creation of a nanosize cavity. Such a defect can guide or trap light, functioning as a nanophotonic resonator, and is characterized by the strong dielectric modulation in the photonic crystal.[50] For the waveguide, the propagation of light depends on the in-plane control provided by the photonic band gap and on the long confinement of light induced by the dielectric mismatch. For the light trap, the light is strongly confined in the cavity, resulting in further interactions with the materials. First, if we put a pulse of light inside the cavity, it will be delayed by nano- or picoseconds, in proportion to the quality factor of the cavity. 
Finally, if we put an emitter inside the cavity, the emitted light can also be enhanced significantly, or the resonant coupling can even undergo Rabi oscillation. This is related to cavity quantum electrodynamics, where the interactions are defined by the weak and strong coupling of the emitter and the cavity. The first studies of cavities in one-dimensional photonic slabs were usually in grating[51] or distributed-feedback structures.[52] Two-dimensional photonic crystal cavities[53][54][55] are useful for making efficient photonic devices in telecommunication applications, as they can provide very high quality factors, up to millions, with smaller-than-wavelength mode volume. For three-dimensional photonic crystal cavities, several methods have been developed, including a lithographic layer-by-layer approach,[56] surface ion-beam lithography,[57] and micromanipulation techniques.[58] All the photonic crystal cavities mentioned above tightly confine light and offer very useful functionality for integrated photonic circuits, but it is challenging to produce them in a manner that allows them to be easily relocated.[59] There is no full control over cavity creation, cavity location, or the emitter position relative to the maximum field of the cavity, and studies to solve these problems are ongoing. A movable cavity formed by a nanowire in a photonic crystal is one solution for tailoring this light-matter interaction.[60] Higher-dimensional photonic crystal fabrication faces two major challenges: One promising fabrication method for two-dimensionally periodic photonic crystals is the photonic-crystal fiber, such as a holey fiber. Using fiber-draw techniques developed for communications fiber, it meets these two requirements, and photonic crystal fibres are commercially available. Another promising method for developing two-dimensional photonic crystals is the so-called photonic crystal slab. 
These structures consist of a slab of material, such as silicon, that can be patterned using techniques from the semiconductor industry. Such chips offer the potential to combine photonic processing with electronic processing on a single chip. For three-dimensional photonic crystals, various techniques have been used, including photolithography and etching techniques similar to those used for integrated circuits.[23] Some of these techniques are already commercially available. To avoid the complex machinery of nanotechnological methods, some alternative approaches involve growing photonic crystals from colloidal crystals as self-assembled structures. Mass-scale 3D photonic crystal films and fibres can now be produced using a shear-assembly technique that stacks 200–300 nm colloidal polymer spheres into perfect films of fcc lattice. Because the particles have a softer transparent rubber coating, the films can be stretched and molded, tuning the photonic band gaps and producing striking structural color effects. The photonic band gap (PBG) is essentially the gap between the air line and the dielectric line in the dispersion relation of the PBG system. To design photonic crystal systems, it is essential to engineer the location and size of the band gap by computational modeling using any of the following methods: Essentially, these methods solve for the frequencies (normal modes) of the photonic crystal for each value of the propagation direction given by the wave vector, or vice versa. The various lines in the band structure correspond to the different cases of n, the band index. For an introduction to photonic band structure, see the books by K. Sakoda[65] and Joannopoulos.[50] The plane-wave expansion method can be used to calculate the band structure using an eigenvalue formulation of Maxwell's equations, solving for the eigenfrequencies for each propagation direction given by the wave vectors. It directly solves for the dispersion diagram. 
Electric field strength values can also be calculated over the spatial domain of the problem using the eigenvectors of the same problem. For example, the band structure of a 1D distributed Bragg reflector (DBR) with an air core interleaved with a dielectric material of relative permittivity 12.25, and a lattice-period-to-air-core-thickness ratio (d/a) of 0.8, can be solved using 101 plane waves over the first irreducible Brillouin zone. The inverse dispersion method also exploits plane-wave expansion, but formulates Maxwell's equations as an eigenproblem for the wave vector k while the frequency ω is treated as a parameter.[62] It thus solves the dispersion relation k(ω) instead of ω(k), as the plane-wave method does. The inverse dispersion method makes it possible to find complex values of the wave vector, e.g. in the band gap, which allows one to distinguish photonic crystals from metamaterials. Moreover, the method readily takes the frequency dispersion of the permittivity into account. To speed up calculation of the frequency band structure, the Reduced Bloch Mode Expansion (RBME) method can be used.[66] The RBME method applies "on top" of any of the primary expansion methods mentioned above. For large unit-cell models, the RBME method can reduce the time for computing the band structure by up to two orders of magnitude. Photonic crystals are attractive optical materials for controlling and manipulating the flow of light. One-dimensional photonic crystals are already in widespread use, in the form of thin-film optics, with applications from low- and high-reflection coatings on lenses and mirrors to colour-changing paints and inks.[67][68][47] Higher-dimensional photonic crystals are of great interest for both fundamental and applied research, and the two-dimensional ones are beginning to find commercial applications. 
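For the 1D case, the band structure can also be obtained without plane-wave expansion, from the standard transfer-matrix dispersion relation for a two-layer unit cell at normal incidence: cos(KΛ) = cos(k₁d₁)cos(k₂d₂) − ½(n₁/n₂ + n₂/n₁)sin(k₁d₁)sin(k₂d₂). Frequencies where the right-hand side exceeds 1 in magnitude admit no real Bloch wave vector K and lie in a gap. A sketch using the parameters quoted above (permittivity 12.25, i.e. n ≈ 3.5, and d/a = 0.8), not the article's actual plane-wave computation:

```python
import numpy as np

def bloch_cos(omega, n1, d1, n2, d2, c=1.0):
    """Right-hand side of the 1D photonic-crystal dispersion relation
    cos(K*Lambda) at normal incidence (transfer-matrix result)."""
    k1, k2 = n1 * omega / c, n2 * omega / c
    return (np.cos(k1 * d1) * np.cos(k2 * d2)
            - 0.5 * (n1 / n2 + n2 / n1) * np.sin(k1 * d1) * np.sin(k2 * d2))

# Dielectric n = sqrt(12.25) = 3.5 interleaved with air; air fraction d/a = 0.8.
n1, n2 = 3.5, 1.0
d1, d2 = 0.2, 0.8   # unit cell of period Lambda = 1 (normalized units)
omegas = np.linspace(1e-6, 4.0, 4000)
in_gap = np.abs(bloch_cos(omegas, n1, d1, n2, d2)) > 1.0  # no real Bloch vector
print(f"fraction of sampled frequencies inside gaps: {in_gap.mean():.2f}")
```

Inside a gap, K becomes complex and the field decays evanescently, which is the connection to the inverse dispersion method's complex-k solutions mentioned above.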
The first commercial products involving two-dimensionally periodic photonic crystals are already available in the form of photonic-crystal fibers, which use a microscale structure to confine light with radically different characteristics compared to conventional optical fiber, for applications in nonlinear devices and guiding exotic wavelengths. The three-dimensional counterparts are still far from commercialization, but may offer additional features, such as the optical nonlinearity required for the operation of optical transistors used in optical computers, once technological aspects such as manufacturability, and principal difficulties such as disorder, are under control.[69] SWG photonic crystal waveguides have facilitated new integrated photonic devices for controlling the transmission of light signals in photonic integrated circuits, including fibre-chip couplers, waveguide crossovers, wavelength and mode multiplexers, ultra-fast optical switches, athermal waveguides, biochemical sensors, polarization management circuits, broadband interference couplers, planar waveguide lenses, anisotropic waveguides, nanoantennas and optical phased arrays.[19][70][71] SWG nanophotonic couplers permit highly efficient and polarization-independent coupling between photonic chips and external devices.[17] They have been adopted for fibre-chip coupling in volume optoelectronic chip manufacturing.[72][73][74] These coupling interfaces are particularly important because every photonic chip needs to be optically connected with the external world, and the chips themselves appear in many established and emerging applications, such as 5G networks, data center interconnects, chip-to-chip interconnects, metro and long-haul telecommunication systems, and automotive navigation. In addition to the foregoing, photonic crystals have been proposed as platforms for the development of solar cells[75] and optical sensors,[76] including chemical sensors and biosensors.[77][78]
https://en.wikipedia.org/wiki/Photonic_crystal#Applications
A photonic integrated circuit (PIC) or integrated optical circuit is a microchip containing two or more photonic components that form a functioning circuit. This technology detects, generates, transports, and processes light. Photonic integrated circuits use photons (particles of light) as opposed to the electrons used by electronic integrated circuits. The major difference between the two is that a photonic integrated circuit provides functions for information signals imposed on optical wavelengths, typically in the visible spectrum or near-infrared (850–1650 nm). One of the most commercially utilized material platforms for photonic integrated circuits is indium phosphide (InP), which allows the integration of various optically active and passive functions on the same chip. Initial examples of photonic integrated circuits were simple two-section distributed Bragg reflector (DBR) lasers, consisting of two independently controlled device sections: a gain section and a DBR mirror section. Consequently, all modern monolithic tunable lasers, widely tunable lasers, externally modulated lasers and transmitters, integrated receivers, etc. are examples of photonic integrated circuits. As of 2012, devices integrate hundreds of functions onto a single chip.[1] Pioneering work in this arena was performed at Bell Laboratories. The most notable academic centers of excellence for photonic integrated circuits in InP are the University of California at Santa Barbara, USA, the Eindhoven University of Technology, and the University of Twente in the Netherlands. A 2005 development[2] showed that silicon, even though it is an indirect-bandgap material, can still be used to generate laser light via the Raman nonlinearity. Such lasers are not electrically driven but optically driven, and therefore still necessitate a further optical pump laser source. Photonics is the science behind the detection, generation, and manipulation of photons. 
According to quantum mechanics and the concept of wave-particle duality first proposed by Albert Einstein in 1905, light acts as both an electromagnetic wave and a particle. For example, total internal reflection in an optical fibre allows it to act as a waveguide. Integrated circuits using electrical components were first developed in the late 1940s and early 1950s, but it took until 1958 for them to become commercially available. When the laser and the laser diode were invented in the 1960s, the term "photonics" came into more common usage to describe the application of light to replace applications previously achieved through electronics. By the 1980s, photonics gained traction through its role in fibre-optic communication. At the start of the decade, an assistant in a new research group at Delft University of Technology, Meint Smit, started pioneering work in the field of integrated photonics. He is credited with inventing the arrayed waveguide grating (AWG), a core component of modern digital connections for the Internet and phones. Smit has received several awards, including an ERC Advanced Grant, a Rank Prize for Optoelectronics, and a LEOS Technical Achievement Award.[3] In October 2022, in an experiment held at the Technical University of Denmark in Copenhagen, a photonic chip transmitted 1.84 petabits per second of data over a fibre-optic cable more than 7.9 kilometres long. First, the data stream was split into 37 sections, each of which was sent down a separate core of the fibre-optic cable. 
Next, each of these channels was split into 223 parts corresponding to equidistant spikes of light across the spectrum.[4] Unlike electronic integration, where silicon is the dominant material, photonic integrated circuits have been fabricated from a variety of material systems, including electro-optic crystals such as lithium niobate, silica on silicon, silicon on insulator, various polymers, and semiconductor materials which are used to make semiconductor lasers, such as GaAs and InP. The different material systems are used because they each provide different advantages and limitations depending on the function to be integrated. For instance, silica (silicon dioxide) based PICs have very desirable properties for passive photonic circuits such as AWGs (see below) due to their comparatively low losses and low thermal sensitivity, GaAs- or InP-based PICs allow the direct integration of light sources, and silicon PICs enable co-integration of the photonics with transistor-based electronics.[5] The fabrication techniques are similar to those used in electronic integrated circuits, in which photolithography is used to pattern wafers for etching and material deposition. Unlike electronics, where the primary device is the transistor, there is no single dominant device. The range of devices required on a chip includes low-loss interconnect waveguides, power splitters, optical amplifiers, optical modulators, filters, lasers and detectors.
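For a rough sense of scale, dividing the aggregate rate of the 2022 Technical University of Denmark experiment described above across its 37 fibre cores and 223 spectral channels gives an approximate per-channel rate. This is a back-of-the-envelope sketch; the actual per-channel rates in the experiment varied.

```python
# Back-of-the-envelope: approximate per-channel data rate in the DTU experiment.
total_bits_per_s = 1.84e15          # 1.84 petabits per second (aggregate)
cores = 37                          # fibre cores (spatial channels)
channels_per_core = 223             # spectral channels per core

per_channel = total_bits_per_s / (cores * channels_per_core)
print(f"{per_channel / 1e9:.0f} Gbit/s per channel")  # roughly 223 Gbit/s
```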
These devices require a variety of different materials and fabrication techniques, making it difficult to realize all of them on a single chip.[citation needed] Newer techniques using resonant photonic interferometry are making way for UV LEDs to be used for optical computing requirements at much lower cost, leading the way to petahertz consumer electronics.[citation needed] The primary application for photonic integrated circuits is in the area of fiber-optic communication, though applications in other fields such as biomedical[6] and photonic computing are also possible. The arrayed waveguide gratings (AWGs), which are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) fiber-optic communication systems, are an example of a photonic integrated circuit which has replaced previous multiplexing schemes that utilized multiple discrete filter elements. Since separating optical modes is a need for quantum computing, this technology may be helpful in miniaturizing quantum computers (see linear optical quantum computing). Another example of a photonic integrated chip in wide use today in fiber-optic communication systems is the externally modulated laser (EML), which combines a distributed feedback laser diode with an electro-absorption modulator[7] on a single InP-based chip. As global data consumption rises and demand for faster networks continues to grow, the world needs to find more sustainable solutions to the energy crisis and climate change. At the same time, ever more innovative applications for sensor technology, such as Lidar in autonomous vehicles, appear on the market.[8] There is a need to keep pace with these technological challenges. The expansion of 5G data networks and data centres, safer autonomous vehicles, and more efficient food production cannot be sustainably met by electronic microchip technology alone.
However, combining electrical devices with integrated photonics provides a more energy-efficient way to increase the speed and capacity of data networks, reduce costs and meet an increasingly diverse range of needs across various industries. The primary application for PICs is in the area of fibre-optic communication. The arrayed waveguide gratings (AWGs), which are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) fibre-optic communication systems, are an example of a photonic integrated circuit.[9] Another example in fibre-optic communication systems is the externally modulated laser (EML), which combines a distributed feedback laser diode with an electro-absorption modulator. PICs can also increase bandwidth and data transfer speeds by deploying few-mode optical planar waveguides, especially if modes can be easily converted from conventional single-mode planar waveguides into few-mode waveguides, with the desired modes selectively excited. For example, a bidirectional spatial mode slicer and combiner[10] can be used to achieve the desired higher- or lower-order modes. Its principle of operation depends on cascading stages of V-shape and/or M-shape graded-index planar waveguides. Not only can PICs increase bandwidth and data transfer speeds, they can also reduce energy consumption in data centres, which spend a large proportion of their energy on cooling servers.[11] By using advanced biosensors and creating more affordable diagnostic biomedical instruments, integrated photonics opens the door to lab-on-a-chip (LOC) technology, cutting waiting times and taking diagnosis out of laboratories and into the hands of doctors and patients.
Based on an ultrasensitive photonic biosensor, SurfiX Diagnostics' diagnostics platform provides a variety of point-of-care tests.[12] Similarly, Amazec Photonics has developed a fibre-optic sensing technology with photonic chips which enables high-resolution temperature sensing (fractions of 0.1 millikelvin) without having to insert the temperature sensor into the body.[13] This way, medical specialists are able to measure both cardiac output and circulating blood volume from outside the body. Another example of optical sensor technology is EFI's "OptiGrip" device, which offers greater control over tissue feel for minimally invasive surgery. PICs can be applied in sensor systems, like Lidar (which stands for light detection and ranging), to monitor the surroundings of vehicles.[14] They can also be deployed for in-car connectivity through Li-Fi, which is similar to Wi-Fi but uses light. This technology facilitates communication between vehicles and urban infrastructure to improve driver safety. For example, some modern vehicles pick up traffic signs and remind the driver of the speed limit. In terms of engineering, fibre-optic sensors can be used to detect different quantities, such as pressure, temperature, vibrations, accelerations, and mechanical strain.[15] Sensing technology from PhotonFirst uses integrated photonics to measure things like shape changes in aeroplanes, electric vehicle battery temperature, and infrastructure strain. Sensors play a role in innovations in agriculture and the food industry in order to reduce wastage and detect diseases.[16] Light sensing technology powered by PICs can measure variables beyond the range of the human eye, allowing the food supply chain to detect disease, ripeness and nutrients in fruit and plants. It can also help food producers to determine soil quality and plant growth, as well as measure CO2 emissions.
A new, miniaturised near-infrared sensor, developed by MantiSpectra, is small enough to fit into a smartphone and can be used to analyse chemical compounds of products like milk and plastics.[17] In 2025, researchers at Columbia Engineering developed a 3D photonic-electronic chip that could significantly improve AI hardware. By combining light-based data movement with CMOS electronics, this chip addressed AI's energy and data-transfer bottlenecks, improving both efficiency and bandwidth. The breakthrough allowed for high-speed, energy-efficient data communication, enabling AI systems to process vast amounts of data with minimal power. With a bandwidth of 800 Gb/s and a density of 5.3 Tb/s/mm², this technology offered major advances for AI, autonomous vehicles, and high-performance computing.[18] The fabrication techniques are similar to those used in electronic integrated circuits, in which photolithography is used to pattern wafers for etching and material deposition. The platforms considered most versatile are indium phosphide (InP) and silicon photonics (SiPh). The term "silicon photonics" actually refers to the technology rather than the material: it combines high-density photonic integrated circuits (PICs) with complementary metal-oxide-semiconductor (CMOS) electronics fabrication. The most technologically mature and commercially used platform is silicon on insulator (SOI). By combining and configuring different chip types (including existing electronic chips) in a hybrid or heterogeneous integration, it is possible to leverage the strengths of each. Taking this complementary approach to integration addresses the demand for increasingly sophisticated, energy-efficient solutions. As of 2010, photonic integration was an active topic in U.S.
Defense contracts.[19][20] It was selected by the Optical Internetworking Forum for inclusion in 100 gigahertz optical networking standards.[21] A recent study presents a novel two-dimensional photonic crystal design for electro-reflective modulators, offering reduced size and enhanced efficiency compared to traditional bulky structures. This design achieves high optical transmission ratios with precise angle control, addressing critical challenges in miniaturizing optoelectronic devices for improved performance in PICs. In this structure, both lateral and vertical fabrication technologies are combined, introducing a novel approach that merges two-dimensional designs[22] with three-dimensional structures. This hybrid technique offers new possibilities for enhancing the functionality and integration of photonic components within photonic integrated circuits.[23]
https://en.wikipedia.org/wiki/Photonic_integrated_circuit
Photonic molecules are a form of matter in which photons bind together to form "molecules".[1][2][3] They were first predicted in 2007. Photonic molecules are formed when individual (massless) photons "interact with each other so strongly that they act as though they have mass".[4] In an alternative definition (which is not equivalent), photons confined to two or more coupled optical cavities also reproduce the physics of interacting atomic energy levels, and have been termed photonic molecules. Researchers drew analogies between the phenomenon and the fictional "lightsaber" from Star Wars.[4][5] Gaseous rubidium atoms were pumped into a vacuum chamber. The cloud was cooled using lasers to just a few degrees above absolute zero. Using weak laser pulses, small numbers of photons were fired into the cloud.[4] As the photons entered the cloud, their energy excited atoms along their path, causing them to lose speed. Inside the cloud medium, the photons dispersively coupled to strongly interacting atoms in highly excited Rydberg states. This caused the photons to behave as massive particles with strong mutual attraction (photon molecules). Eventually the photons exited the cloud together as normal photons (often entangled in pairs).[4] The effect is caused by a so-called Rydberg blockade, which, in the presence of one excited atom, prevents nearby atoms from being excited to the same degree. In this case, as two photons enter the atomic cloud, the first excites an atom, annihilating itself in the interaction, but the transmitted energy must move forward inside the excited atom before the second photon can excite nearby atoms. In effect the two photons push and pull each other through the cloud as their energy is passed from one atom to the next, forcing them to interact.
This photonic interaction is mediated by the electromagnetic interaction between photons and atoms.[4] The interaction of the photons suggests that the effect could be employed to build a system that can preserve quantum information and process it using quantum logic operations.[4] The system could also be useful in classical computing, given the much lower power required to manipulate photons than electrons.[4] It may be possible to arrange the photonic molecules within the medium in such a way that they form larger two-dimensional structures (similar to drawings).[4] The term photonic molecule has also been used since 1998 for an unrelated phenomenon involving electromagnetically interacting optical microcavities. The properties of quantized confined photon states in optical micro- and nanocavities are very similar to those of confined electron states in atoms.[6] Owing to this similarity, optical microcavities can be termed 'photonic atoms'. Taking this analogy even further, a cluster of several mutually coupled photonic atoms forms a photonic molecule.[7] When individual photonic atoms are brought into close proximity, their optical modes interact and give rise to a spectrum of hybridized super-modes of photonic molecules.[8] This is very similar to what happens when two isolated systems are coupled, like two hydrogen atomic orbitals coming together to form the bonding and antibonding orbitals of the hydrogen molecule, which are hybridized super-modes of the total coupled system. "A micrometer-sized piece of semiconductor can trap photons inside it in such a way that they act like electrons in an atom. Now the 21 September PRL describes a way to link two of these "photonic atoms" together.
The result of such a close relationship is a "photonic molecule," whose optical modes bear a strong resemblance to the electronic states of a diatomic molecule like hydrogen."[9] "Photonic molecules, named by analogy with chemical molecules, are clusters of closely located electromagnetically interacting microcavities or "photonic atoms"."[10] "Optically coupled microcavities have emerged as photonic structures with promising properties for investigation of fundamental science as well as for applications."[11] The first photonic realization of the two-level system of a photonic molecule was by Spreeuw et al.,[12] who used optical fibers to realize a ring resonator, although they did not use the term "photonic molecule". The two modes forming the molecule could then be the polarization modes of the ring or the clockwise and counterclockwise modes of the ring. This was followed by the demonstration of a lithographically fabricated photonic molecule, inspired by an analogy with a simple diatomic molecule.[13] However, other nature-inspired PM structures (such as 'photonic benzene') have been proposed and shown to support confined optical modes closely analogous to the ground-state molecular orbitals of their chemical counterparts.[14] Photonic molecules offer advantages over isolated photonic atoms in a variety of applications, including bio(chemical) sensing,[15][16] cavity optomechanics,[17][18] and microlasers.[19][20][21][22] Photonic molecules can also be used as quantum simulators of many-body physics and as building blocks of future optical quantum information processing networks.[23] In complete analogy, clusters of metal nanoparticles, which support confined surface plasmon states, have been termed 'plasmonic molecules'.[24][25][26][27][28] Finally, hybrid photonic-plasmonic (or opto-plasmonic)[29][30][31][32] and elastic molecules[33] have also been proposed and demonstrated.
https://en.wikipedia.org/wiki/Photonic_molecule
An optical transistor, also known as an optical switch or a light valve, is a device that switches or amplifies optical signals. Light occurring on an optical transistor's input changes the intensity of light emitted from the transistor's output, while output power is supplied by an additional optical source. Since the input signal intensity may be weaker than that of the source, an optical transistor amplifies the optical signal. The device is the optical analog of the electronic transistor that forms the basis of modern electronic devices. Optical transistors provide a means to control light using only light and have applications in optical computing and fiber-optic communication networks. Such technology has the potential to exceed the speed of electronics[citation needed], while consuming less power. The fastest demonstrated all-optical switching signal is 900 attoseconds (1 attosecond = 10^-18 second), which paves the way for the development of ultrafast optical transistors.[1] Since photons inherently do not interact with each other, an optical transistor must employ an operating medium to mediate interactions. This is done without converting optical to electronic signals as an intermediate step. Implementations using a variety of operating mediums have been proposed and experimentally demonstrated. However, their ability to compete with modern electronics is currently limited. Optical transistors could be used to improve the performance of fiber-optic communication networks. Although fiber-optic cables are used to transfer data, tasks such as signal routing are done electronically. This requires optical-electronic-optical conversion, which forms a bottleneck. In principle, all-optical digital signal processing and routing are achievable using optical transistors arranged into photonic integrated circuits.[2] The same devices could be used to create new types of optical amplifiers to compensate for signal attenuation along transmission lines.
A more elaborate application of optical transistors is the development of an optical digital computer in which signals are photonic (carried by light) rather than electronic (carried by wires). Further, optical transistors that operate using single photons could form an integral part of quantum information processing, where they can be used to selectively address individual units of quantum information, known as qubits. Optical transistors could in theory be impervious to the high radiation of space and extraterrestrial planets, unlike electronic transistors, which suffer from single-event upsets. The most commonly argued case for optical logic is that optical transistor switching times can be much faster than in conventional electronic transistors. This is due to the fact that the speed of light in an optical medium is typically much faster than the drift velocity of electrons in semiconductors. Optical transistors can be directly linked to fiber-optic cables, whereas electronics requires coupling via photodetectors and LEDs or lasers. The more natural integration of all-optical signal processors with fiber optics would reduce the complexity and delay in the routing and other processing of signals in optical communication networks. It remains questionable whether optical processing can reduce the energy required to switch a single transistor below that of electronic transistors. To realistically compete, transistors would need to require only a few tens of photons per operation. It is clear, however, that this is achievable in proposed single-photon transistors[3][4] for quantum information processing. Perhaps the most significant advantage of optical over electronic logic is reduced power consumption. This comes from the absence of capacitance in the connections between individual logic gates. In electronics, the transmission line needs to be charged to the signal voltage.
The capacitance of a transmission line is proportional to its length, and it exceeds the capacitance of the transistors in a logic gate when its length is equal to that of a single gate. The charging of transmission lines is one of the main energy losses in electronic logic. This loss is avoided in optical communication, where only enough energy to switch an optical transistor at the receiving end must be transmitted down a line. This fact has played a major role in the uptake of fiber optics for long-distance communication but is yet to be exploited at the microprocessor level. Besides the potential advantages of higher speed, lower power consumption and high compatibility with optical communication systems, optical transistors must satisfy a set of benchmarks before they can compete with electronics.[5] No single design has yet satisfied all these criteria while outperforming the speed and power consumption of state-of-the-art electronics. Several schemes have been proposed to implement all-optical transistors, and in many cases a proof of concept has been experimentally demonstrated.
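The charging loss described above follows from the stored energy of a capacitor, E = ½CV². The wire capacitance and voltage swing below are illustrative assumptions, not figures from the text, but they show why an optical link, which only needs to deliver enough photons to switch the receiver, can win:

```python
# Illustrative comparison: charging an electrical interconnect vs. sending photons.
# The capacitance and voltage values are assumed for illustration only.
C_per_mm = 0.2e-12        # ~0.2 pF per mm of on-chip wire (assumed)
length_mm = 1.0           # a 1 mm link (assumed)
V = 1.0                   # 1 V signal swing (assumed)

E_wire = 0.5 * (C_per_mm * length_mm) * V**2   # energy stored per transition
print(f"electrical: {E_wire * 1e15:.0f} fJ per transition")  # 100 fJ

# An optical receiver only needs enough photon energy to switch, e.g. ~100 photons.
h, c = 6.626e-34, 3.0e8
E_photon = h * c / 1.55e-6                     # photon energy at 1.55 um
print(f"100 photons: {100 * E_photon * 1e15:.3f} fJ")
```

Under these assumptions the optical budget is orders of magnitude below the wire-charging energy, which is the argument the text makes for long lines.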
https://en.wikipedia.org/wiki/Photonic_transistor
Programmable photonics is a subfield of photonics and optical computing that studies the development of photonic integrated circuits (PICs) for computation whose circuits can be altered at runtime to run different programs, rather than manufacturing each PIC for a specific program. Almost all modern electronic integrated circuits are programmable, and thus programmable photonics is an important step in making optical computing mainstream; a non-programmable electronic integrated circuit (analogous to a non-programmable PIC) would be, for example, an ASIC that can only run inference of a specific LLM. Programmable PICs most frequently alter their circuits at runtime by using electronics to manipulate the refractive index of specific regions/features in the chip via thermal changes, relying on the thermo-optic coefficient of the waveguide material.[1] The circuits themselves are usually formed by Mach–Zehnder interferometer arrays that can perform arbitrary linear operations, e.g. Fourier transforms;[2] other operations, such as logic gates, require nonlinear optics techniques like second-harmonic generation.[3]
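The building block behind such arrays, a 2×2 Mach–Zehnder interferometer with two tunable phase shifters, can be sketched as a unitary transfer matrix; meshes of these blocks compose larger linear operations. This is a minimal numerical sketch with an assumed (but standard) 50:50 coupler convention, not code for any particular photonic platform:

```python
import numpy as np

def mzi(theta, phi):
    """2x2 transfer matrix of a Mach-Zehnder interferometer: a phase shifter
    (phi) on one input arm, then two 50:50 couplers enclosing an internal
    phase shift (theta). Tuning theta and phi steers light between outputs."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 coupler (unitary)
    inner = np.diag([np.exp(1j * theta), 1.0])       # internal phase shifter
    outer = np.diag([np.exp(1j * phi), 1.0])         # input phase shifter
    return bs @ inner @ bs @ outer

U = mzi(0.7, 0.3)
# Every MZI transfer matrix is unitary, so optical power is conserved:
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
```

Cascading many such blocks in a triangular or rectangular mesh is how the arbitrary linear operations mentioned in the text are realized.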
https://en.wikipedia.org/wiki/Programmable_photonics
Silicon photonics is the study and application of photonic systems which use silicon as an optical medium.[1][2][3][4][5] The silicon is usually patterned with sub-micrometre precision into microphotonic components.[4] These operate in the infrared, most commonly at the 1.55 micrometre wavelength used by most fiber-optic telecommunication systems.[6] The silicon typically lies on top of a layer of silica in what (by analogy with a similar construction in microelectronics) is known as silicon on insulator (SOI).[4][5] Silicon photonic devices can be made using existing semiconductor fabrication techniques, and because silicon is already used as the substrate for most integrated circuits, it is possible to create hybrid devices in which the optical and electronic components are integrated onto a single microchip.[6] Consequently, silicon photonics is being actively researched by many electronics manufacturers including IBM and Intel, as well as by academic research groups, as a means of keeping on track with Moore's law by using optical interconnects to provide faster data transfer both between and within microchips.[7][8][9] The propagation of light through silicon devices is governed by a range of nonlinear optical phenomena including the Kerr effect, the Raman effect, two-photon absorption and interactions between photons and free charge carriers.[10] The presence of nonlinearity is of fundamental importance, as it enables light to interact with light,[11] thus permitting applications such as wavelength conversion and all-optical signal routing, in addition to the passive transmission of light.
Silicon waveguides are also of great academic interest due to their unique guiding properties; they can be used for communications, interconnects and biosensors,[12][13] and they offer the possibility of supporting exotic nonlinear optical phenomena such as soliton propagation.[14][15][16] In a typical optical link, data is first transferred from the electrical to the optical domain using an electro-optic modulator or a directly modulated laser. An electro-optic modulator can vary the intensity and/or the phase of the optical carrier. In silicon photonics, a common technique to achieve modulation is to vary the density of free charge carriers. Variations of electron and hole densities change the real and the imaginary part of the refractive index of silicon, as described by the empirical equations of Soref and Bennett.[17] Modulators can consist of both forward-biased PIN diodes, which generally generate large phase shifts but suffer from lower speeds,[18] and reverse-biased p–n junctions.[19] A prototype optical interconnect with microring modulators integrated with germanium detectors has been demonstrated.[20][21] Non-resonant modulators, such as Mach–Zehnder interferometers, have typical dimensions in the millimeter range and are usually used in telecom or datacom applications. Resonant devices, such as ring resonators, can have dimensions of only tens of micrometers, therefore occupying much smaller areas. In 2013, researchers demonstrated a resonant depletion modulator that can be fabricated using standard silicon-on-insulator complementary metal-oxide-semiconductor (SOI CMOS) manufacturing processes.[22] A similar device has also been demonstrated in bulk CMOS rather than in SOI.[23][24] On the receiver side, the optical signal is typically converted back to the electrical domain using a semiconductor photodetector.
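The carrier-density dependence mentioned above can be sketched numerically. The fit coefficients below are the widely quoted Soref–Bennett values at 1550 nm (carrier densities in cm⁻³); treat this as an illustration and consult the original paper for the exact fits and their validity range:

```python
# Plasma-dispersion effect in silicon at 1550 nm, using the commonly quoted
# Soref-Bennett fit coefficients (densities in cm^-3). Sketch only.
def delta_n(dNe, dNh):
    """Refractive index change from injected electrons (dNe) and holes (dNh)."""
    return -(8.8e-22 * dNe + 8.5e-18 * dNh**0.8)

def delta_alpha(dNe, dNh):
    """Free-carrier absorption change, in cm^-1."""
    return 8.5e-18 * dNe + 6.0e-18 * dNh

dN = 1e18  # an injection level typical of a forward-biased PIN modulator (assumed)
print(f"dn = {delta_n(dN, dN):.2e}")          # index decreases as carriers increase
print(f"dalpha = {delta_alpha(dN, dN):.1f} cm^-1")  # 14.5 cm^-1
```

The negative index change is what shifts the phase in a Mach–Zehnder or ring modulator, while the accompanying absorption change is the imaginary-part contribution the text refers to.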
The semiconductor used for carrier generation usually has a band gap smaller than the photon energy, and the most common choice is pure germanium.[25][26] Most detectors use a p–n junction for carrier extraction; however, detectors based on metal–semiconductor junctions (with germanium as the semiconductor) have been integrated into silicon waveguides as well.[27] More recently, silicon-germanium avalanche photodiodes capable of operating at 40 Gbit/s have been fabricated.[28][29] Complete transceivers have been commercialized in the form of active optical cables.[30] Optical communications are conveniently classified by the reach, or length, of their links. The majority of silicon photonic communications have so far been limited to telecom[31] and datacom applications,[32][33] where the reach is of several kilometers or several meters respectively. Silicon photonics, however, is expected to play a significant role in computercom as well, where optical links have a reach in the centimeter-to-meter range. In fact, progress in computer technology (and the continuation of Moore's law) is becoming increasingly dependent on faster data transfer between and within microchips.[34] Optical interconnects may provide a way forward, and silicon photonics may prove particularly useful, once integrated on standard silicon chips.[6][35][36] In 2006, Intel Senior Vice President (and future CEO) Pat Gelsinger stated that, "Today, optics is a niche technology. Tomorrow, it's the mainstream of every chip that we build."[8] In 2010 Intel demonstrated a 50 Gbit/s connection made with silicon photonics.[37] The first microprocessor with optical input/output (I/O) was demonstrated in December 2015 using an approach known as "zero-change" CMOS photonics.[38] This is known as fiber-to-the-processor.[39] This first demonstration was based on a 45 nm SOI node, and the bi-directional chip-to-chip link was operated at a rate of 2×2.5 Gbit/s.
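The detector band-gap condition noted above (the photon energy must exceed the semiconductor's band gap to create an electron-hole pair) can be checked with E = hc/λ. The germanium band gap used below (~0.66 eV at room temperature) is a standard textbook value, not a figure from the text:

```python
# Photon energy at the 1.55 um telecom wavelength vs. the germanium band gap.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
q = 1.602e-19    # J per eV

E_photon_eV = h * c / 1.55e-6 / q
print(f"photon energy: {E_photon_eV:.2f} eV")   # ~0.80 eV
print(E_photon_eV > 0.66)  # True: a 1.55 um photon can be absorbed in Ge
```

Silicon itself, with a band gap near 1.1 eV, cannot absorb these photons, which is why germanium is grown on silicon for the detectors described in the text.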
The total energy consumption of the link was calculated to be 16 pJ/bit and was dominated by the contribution of the off-chip laser. Some researchers believe an on-chip laser source is required.[40] Others think that it should remain off-chip because of thermal problems (quantum efficiency decreases with temperature, and computer chips are generally hot) and because of CMOS-compatibility issues. One such device is the hybrid silicon laser, in which the silicon is bonded to a different semiconductor (such as indium phosphide) as the lasing medium.[41] Other devices include the all-silicon Raman laser[42] and the all-silicon Brillouin laser,[43] wherein silicon serves as the lasing medium. In 2012, IBM announced that it had achieved optical components at the 90 nanometer scale that can be manufactured using standard techniques and incorporated into conventional chips.[7][44] In September 2013, Intel announced technology to transmit data at speeds of 100 gigabits per second along a cable approximately five millimeters in diameter, for connecting servers inside data centers. Conventional PCI-E data cables carry data at up to eight gigabits per second, while networking cables reach 40 Gbit/s. The latest version of the USB standard tops out at ten Gbit/s. The technology does not directly replace existing cables, in that it requires a separate circuit board to interconvert electrical and optical signals. Its advanced speed offers the potential of reducing the number of cables that connect blades on a rack, and even of separating processor, storage and memory into separate blades to allow more efficient cooling and dynamic configuration.[45] Graphene photodetectors have the potential to surpass germanium devices in several important aspects, although they remain about one order of magnitude behind current generation capacity, despite rapid improvement. Graphene devices can work at very high frequencies and could in principle reach higher bandwidths.
Graphene can absorb a broader range of wavelengths than germanium. That property could be exploited to transmit more data streams simultaneously in the same beam of light. Unlike germanium detectors, graphene photodetectors do not require an applied voltage, which could reduce energy needs. Finally, graphene detectors in principle permit a simpler and less expensive on-chip integration. However, graphene does not strongly absorb light. Pairing a silicon waveguide with a graphene sheet better routes light and maximizes interaction. The first such device was demonstrated in 2011. Manufacturing such devices using conventional manufacturing techniques has not been demonstrated.[46] Another application of silicon photonics is in signal routers for optical communication. Construction can be greatly simplified by fabricating the optical and electronic parts on the same chip, rather than having them spread across multiple components.[47] A wider aim is all-optical signal processing, whereby tasks which are conventionally performed by manipulating signals in electronic form are done directly in optical form.[3][48] An important example is all-optical switching, whereby the routing of optical signals is directly controlled by other optical signals.[49] Another example is all-optical wavelength conversion.[50] In 2013, a startup company named Compass-EOS, based in California and in Israel, was the first to present a commercial silicon-to-photonics router.[51] Silicon microphotonics can potentially increase the Internet's bandwidth capacity by providing micro-scale, ultra-low-power devices. Furthermore, the power consumption of data centers may be significantly reduced if this is successfully achieved. Researchers at Sandia,[52] Kotura, NTT, Fujitsu and various academic institutes have been attempting to prove this functionality.
A 2010 paper reported on a prototype 80 km, 12.5 Gbit/s transmission using microring silicon devices.[53] As of 2015, US startup company Magic Leap was working on a light-field chip using silicon photonics for the purpose of an augmented-reality display.[54] Silicon photonics has been used in artificial-intelligence inference processors that are more energy efficient than those using conventional transistors. This can be done using Mach–Zehnder interferometers (MZIs), which can be combined with nanoelectromechanical systems to modulate the light passing through them, by physically bending the MZI, which changes the phase of the light.[55][56][57] Silicon is transparent to infrared light with wavelengths above about 1.1 micrometres.[58] Silicon also has a very high refractive index, of about 3.5.[58] The tight optical confinement provided by this high index allows for microscopic optical waveguides, which may have cross-sectional dimensions of only a few hundred nanometers.[10] Single-mode propagation can be achieved,[10] thus (like single-mode optical fiber) eliminating the problem of modal dispersion. The strong dielectric boundary effects that result from this tight confinement substantially alter the optical dispersion relation. By selecting the waveguide geometry, it is possible to tailor the dispersion to have desired properties, which is of crucial importance to applications requiring ultrashort pulses.[10] In particular, the group velocity dispersion (that is, the extent to which group velocity varies with wavelength) can be closely controlled. In bulk silicon at 1.55 micrometres, the group velocity dispersion (GVD) is normal, in that pulses with longer wavelengths travel with higher group velocity than those with shorter wavelengths.
By selecting a suitable waveguide geometry, however, it is possible to reverse this and achieve anomalous GVD, in which pulses with shorter wavelengths travel faster.[59][60][61] Anomalous dispersion is significant, as it is a prerequisite for soliton propagation and modulational instability.[62] In order for the silicon photonic components to remain optically independent from the bulk silicon of the wafer on which they are fabricated, it is necessary to have a layer of intervening material. This is usually silica, which has a much lower refractive index (of about 1.44 in the wavelength region of interest[63]), and thus light at the silicon–silica interface will (like light at the silicon–air interface) undergo total internal reflection and remain in the silicon. This construct is known as silicon on insulator.[4][5] It is named after the technology of silicon on insulator in electronics, whereby components are built upon a layer of insulator in order to reduce parasitic capacitance and so improve performance.[64] Silicon photonics have also been built with silicon nitride as the material in the optical waveguides.[65][66] Silicon has a focusing Kerr nonlinearity, in that the refractive index increases with optical intensity.[10] This effect is not especially strong in bulk silicon, but it can be greatly enhanced by using a silicon waveguide to concentrate light into a very small cross-sectional area.[14] This allows nonlinear optical effects to be seen at low powers.
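The total internal reflection mentioned above follows directly from Snell's law: light in the high-index silicon core striking the cladding at an angle beyond the critical angle is fully reflected. A minimal sketch using the refractive indices quoted in the text (about 3.5 for silicon, 1.44 for silica):

```python
import math

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """Critical angle for total internal reflection at a core/cladding
    interface, measured from the normal (Snell's law: sin(theta_c) = n_clad/n_core)."""
    return math.degrees(math.asin(n_clad / n_core))

# Refractive indices near 1.55 um, as quoted in the text
theta_si_silica = critical_angle_deg(3.5, 1.44)  # silicon / silica
theta_si_air = critical_angle_deg(3.5, 1.0)      # silicon / air

print(f"Si/SiO2 critical angle: {theta_si_silica:.1f} deg")
print(f"Si/air  critical angle: {theta_si_air:.1f} deg")
```

Rays striking the interface more than roughly 24 degrees from the normal stay confined; this large index contrast is what permits sub-micrometre cores and tight waveguide bends.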
The nonlinearity can be enhanced further by using a slot waveguide, in which the high refractive index of the silicon is used to confine light into a central region filled with a strongly nonlinear polymer.[67] Kerr nonlinearity underlies a wide variety of optical phenomena.[62] One example is four-wave mixing, which has been applied in silicon to realise optical parametric amplification,[68] parametric wavelength conversion,[50] and frequency comb generation.[69][70] Kerr nonlinearity can also cause modulational instability, in which it reinforces deviations from an optical waveform, leading to the generation of spectral sidebands and the eventual breakup of the waveform into a train of pulses.[71] Another example (as described below) is soliton propagation. Silicon exhibits two-photon absorption (TPA), in which a pair of photons can act to excite an electron–hole pair.[10] This process is related to the Kerr effect, and by analogy with complex refractive index, can be thought of as the imaginary part of a complex Kerr nonlinearity.[10] At the 1.55 micrometre telecommunication wavelength, this imaginary part is approximately 10% of the real part.[72] The influence of TPA is highly disruptive, as it both wastes light and generates unwanted heat.[73] It can be mitigated, however, either by switching to longer wavelengths (at which the TPA-to-Kerr ratio drops),[74] or by using slot waveguides (in which the internal nonlinear material has a lower TPA-to-Kerr ratio).[67] Alternatively, the energy lost through TPA can be partially recovered (as is described below) by extracting it from the generated charge carriers.[75] The free charge carriers within silicon can both absorb photons and change its refractive index.[76] This is particularly significant at high intensities and for long durations, due to the carrier concentration being built up by TPA. The influence of free charge carriers is often (but not always) unwanted, and various means have been proposed to remove them.
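In degenerate four-wave mixing, two pump photons are converted into a signal photon and an idler photon, so energy conservation fixes the idler frequency at twice the pump frequency minus the signal frequency. A minimal sketch of this bookkeeping (the particular wavelengths below are illustrative choices, not taken from the text):

```python
def fwm_idler_wavelength_nm(pump_nm: float, signal_nm: float) -> float:
    """Idler wavelength from energy conservation in degenerate four-wave
    mixing: 2/lambda_pump = 1/lambda_signal + 1/lambda_idler
    (optical frequency is proportional to 1/wavelength)."""
    inv_idler = 2.0 / pump_nm - 1.0 / signal_nm
    return 1.0 / inv_idler

# Pump at 1550 nm, signal at 1545 nm: the idler appears on the
# opposite side of the pump, symmetric in frequency.
idler = fwm_idler_wavelength_nm(1550.0, 1545.0)
print(f"idler wavelength: {idler:.2f} nm")
```

This frequency symmetry between signal and idler is what makes four-wave mixing usable for the parametric wavelength conversion cited above.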
One such scheme is to implant the silicon with helium in order to enhance carrier recombination.[77] A suitable choice of geometry can also be used to reduce the carrier lifetime. Rib waveguides (in which the waveguides consist of thicker regions in a wider layer of silicon) enhance both the carrier recombination at the silica–silicon interface and the diffusion of carriers from the waveguide core.[78] A more advanced scheme for carrier removal is to integrate the waveguide into the intrinsic region of a PIN diode, which is reverse biased so that the carriers are attracted away from the waveguide core.[79] A more sophisticated scheme still is to use the diode as part of a circuit in which voltage and current are out of phase, thus allowing power to be extracted from the waveguide.[75] The source of this power is the light lost to two-photon absorption, and so by recovering some of it, the net loss (and the rate at which heat is generated) can be reduced. As is mentioned above, free charge carrier effects can also be used constructively, in order to modulate the light.[18][19][80] Second-order nonlinearities cannot exist in bulk silicon because of the centrosymmetry of its crystalline structure. By applying strain, however, the inversion symmetry of silicon can be broken. This can be obtained, for example, by depositing a silicon nitride layer on a thin silicon film.[81] Second-order nonlinear phenomena can be exploited for optical modulation, spontaneous parametric down-conversion, parametric amplification, ultra-fast optical signal processing and mid-infrared generation. Efficient nonlinear conversion, however, requires phase matching between the optical waves involved.
Second-order nonlinear waveguides based on strained silicon can achieve phase matching by dispersion engineering.[82] So far, however, experimental demonstrations are based only on designs which are not phase matched.[83] It has been shown that phase matching can also be obtained in silicon double slot waveguides coated with a highly nonlinear organic cladding[84] and in periodically strained silicon waveguides.[85] Silicon exhibits the Raman effect, in which a photon is exchanged for a photon with a slightly different energy, corresponding to an excitation or a relaxation of the material. Silicon's Raman transition is dominated by a single, very narrow frequency peak, which is problematic for broadband phenomena such as Raman amplification, but is beneficial for narrowband devices such as Raman lasers.[10] Early studies of Raman amplification and Raman lasers started at UCLA, leading to the demonstration of net-gain silicon Raman amplifiers and a silicon pulsed Raman laser with a fiber resonator (Optics Express, 2004). All-silicon Raman lasers were subsequently fabricated in 2005.[42] In the Raman effect, photons are red- or blue-shifted by optical phonons with a frequency of about 15 THz. However, silicon waveguides also support acoustic phonon excitations. The interaction of these acoustic phonons with light is called Brillouin scattering.
The frequencies and mode shapes of these acoustic phonons are dependent on the geometry and size of the silicon waveguides, making it possible to produce strong Brillouin scattering at frequencies ranging from a few MHz to tens of GHz.[86][87] Stimulated Brillouin scattering has been used to make narrowband optical amplifiers[88][89][90] as well as all-silicon Brillouin lasers.[43] The interaction between photons and acoustic phonons is also studied in the field of cavity optomechanics, although 3D optical cavities are not necessary to observe the interaction.[91] For instance, besides silicon waveguides, optomechanical coupling has also been demonstrated in fibers[92] and in chalcogenide waveguides.[93] The evolution of light through silicon waveguides can be approximated with a cubic nonlinear Schrödinger equation,[10] which is notable for admitting sech-like soliton solutions.[94] These optical solitons (which are also known in optical fiber) result from a balance between self-phase modulation (which causes the leading edge of the pulse to be redshifted and the trailing edge blueshifted) and anomalous group velocity dispersion.[62] Such solitons have been observed in silicon waveguides, by groups at the universities of Columbia,[14] Rochester,[15] and Bath.[16]
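The balance described above can be written down explicitly. In a standard textbook form of the cubic nonlinear Schrödinger equation for pulse propagation (the symbols are conventional, not taken from the article: $A$ is the slowly varying pulse envelope, $\beta_2$ the group velocity dispersion parameter, $\gamma$ the Kerr coefficient):

```latex
i\,\frac{\partial A}{\partial z}
  = \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial t^2} - \gamma\,|A|^2 A .
```

For anomalous dispersion ($\beta_2 < 0$) this admits the fundamental sech-shaped soliton

```latex
A(z,t) = \sqrt{P_0}\,\operatorname{sech}\!\left(\frac{t}{T_0}\right)
         \, e^{\,i \gamma P_0 z / 2},
\qquad P_0 = \frac{|\beta_2|}{\gamma\, T_0^{2}} ,
```

in which the chirp imposed by self-phase modulation exactly cancels that imposed by dispersion, so the pulse propagates without spreading.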
https://en.wikipedia.org/wiki/Silicon_photonics
Bayesian approaches to brain function investigate the capacity of the nervous system to operate in situations of uncertainty in a fashion that is close to the optimal prescribed by Bayesian statistics.[1][2] This term is used in behavioural sciences and neuroscience, and studies associated with this term often strive to explain the brain's cognitive abilities based on statistical principles. It is frequently assumed that the nervous system maintains internal probabilistic models that are updated by neural processing of sensory information using methods approximating those of Bayesian probability.[3][4] This field of study has its historical roots in numerous disciplines including machine learning, experimental psychology and Bayesian statistics. As early as the 1860s, with the work of Hermann Helmholtz in experimental psychology, the brain's ability to extract perceptual information from sensory data was modeled in terms of probabilistic estimation.[5][6] The basic idea is that the nervous system needs to organize sensory data into an accurate internal model of the outside world. Bayesian probability has been developed by many important contributors. Pierre-Simon Laplace, Thomas Bayes, Harold Jeffreys, Richard Cox and Edwin Jaynes developed mathematical techniques and procedures for treating probability as the degree of plausibility that could be assigned to a given supposition or hypothesis based on the available evidence.[7] In 1988 Edwin Jaynes presented a framework for using Bayesian probability to model mental processes.[8] It was thus realized early on that the Bayesian statistical framework holds the potential to lead to insights into the function of the nervous system.
This idea was taken up in research on unsupervised learning, in particular the analysis-by-synthesis approach, in branches of machine learning.[9][10] In 1983 Geoffrey Hinton and colleagues proposed that the brain could be seen as a machine making decisions based on the uncertainties of the outside world.[11] During the 1990s researchers including Peter Dayan, Geoffrey Hinton and Richard Zemel proposed that the brain represents knowledge of the world in terms of probabilities and made specific proposals for tractable neural processes that could manifest such a Helmholtz machine.[12][13][14] A wide range of studies interpret the results of psychophysical experiments in light of Bayesian perceptual models. Many aspects of human perceptual and motor behavior can be modeled with Bayesian statistics. This approach, with its emphasis on behavioral outcomes as the ultimate expressions of neural information processing, is also known for modeling sensory and motor decisions using Bayesian decision theory. Examples are the work of Landy,[15][16] Jacobs,[17][18] Jordan, Knill,[19][20] Kording and Wolpert,[21][22] and Goldreich.[23][24][25] Many theoretical studies ask how the nervous system could implement Bayesian algorithms. Examples are the work of Pouget, Zemel, Deneve, Latham, Hinton and Dayan. George and Hawkins published a paper that establishes a model of cortical information processing called hierarchical temporal memory that is based on a Bayesian network of Markov chains. They further map this mathematical model to the existing knowledge about the architecture of cortex and show how neurons could recognize patterns by hierarchical Bayesian inference.[26] A number of recent electrophysiological studies focus on the representation of probabilities in the nervous system. Examples are the work of Shadlen and Schultz.
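A standard worked example from this psychophysics literature is optimal cue combination: when two independent Gaussian cues (say, vision and touch) estimate the same quantity, the Bayes-optimal estimate is their precision-weighted average. A minimal sketch with invented numbers (the cue values and noise levels are illustrative, not taken from any of the cited studies):

```python
def combine_cues(mu1, var1, mu2, var2):
    """Bayes-optimal fusion of two independent Gaussian estimates:
    weight each cue by its precision (inverse variance)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# Hypothetical: vision says an object is 10 cm wide (variance 4),
# touch says 12 cm (variance 16). Vision is more reliable, so it
# dominates, and the fused estimate is more precise than either cue.
mu, var = combine_cues(10.0, 4.0, 12.0, 16.0)
print(mu, var)  # 10.4 3.2
```

Human subjects in several of the cited studies behave close to this precision-weighted prediction, which is the empirical basis for calling perception "Bayes-optimal".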
Predictive coding is a neurobiologically plausible scheme for inferring the causes of sensory input based on minimizing prediction error.[27] These schemes are related formally to Kalman filtering and other Bayesian update schemes. During the 1990s some researchers such as Geoffrey Hinton and Karl Friston began examining the concept of free energy as a calculably tractable measure of the discrepancy between actual features of the world and representations of those features captured by neural network models.[28] A synthesis has been attempted recently[29] by Karl Friston, in which the Bayesian brain emerges from a general principle of free energy minimisation.[30] In this framework, both action and perception are seen as a consequence of suppressing free energy, leading to perceptual[31] and active inference[32] and a more embodied (enactive) view of the Bayesian brain. Using variational Bayesian methods, it can be shown how internal models of the world are updated by sensory information to minimize free energy or the discrepancy between sensory input and predictions of that input. This can be cast (in neurobiologically plausible terms) as predictive coding or, more generally, Bayesian filtering. According to Friston:[33] "The free-energy considered here represents a bound on the surprise inherent in any exchange with the environment, under expectations encoded by its state or configuration. A system can minimise free energy by changing its configuration to change the way it samples the environment, or to change its expectations. These changes correspond to action and perception, respectively, and lead to an adaptive exchange with the environment that is characteristic of biological systems.
This treatment implies that the system's state and structure encode an implicit and probabilistic model of the environment."[33] This area of research was summarized in terms understandable by the layperson in a 2008 article in New Scientist that offered a unifying theory of brain function.[34] Friston makes the following claims about the explanatory power of the theory: "This model of brain function can explain a wide range of anatomical and physiological aspects of brain systems; for example, the hierarchical deployment of cortical areas, recurrent architectures using forward and backward connections and functional asymmetries in these connections. In terms of synaptic physiology, it predicts associative plasticity and, for dynamic models, spike-timing-dependent plasticity. In terms of electrophysiology it accounts for classical and extra-classical receptive field effects and long-latency or endogenous components of evoked cortical responses. It predicts the attenuation of responses encoding prediction error with perceptual learning and explains many phenomena like repetition suppression, mismatch negativity and the P300 in electroencephalography. In psychophysical terms, it accounts for the behavioural correlates of these physiological phenomena, e.g., priming and global precedence."[33] "It is fairly easy to show that both perceptual inference and learning rest on a minimisation of free energy or suppression of prediction error."[33]
https://en.wikipedia.org/wiki/Bayesian_approaches_to_brain_function
In physics and the philosophy of physics, quantum Bayesianism is a collection of related approaches to the interpretation of quantum mechanics, the most prominent of which is QBism (pronounced "cubism"). QBism is an interpretation that takes an agent's actions and experiences as the central concerns of the theory. QBism deals with common questions in the interpretation of quantum theory about the nature of wavefunction superposition, quantum measurement, and entanglement.[1][2] According to QBism, many, but not all, aspects of the quantum formalism are subjective in nature. For example, in this interpretation, a quantum state is not an element of reality; instead, it represents the degrees of belief an agent has about the possible outcomes of measurements. For this reason, some philosophers of science have deemed QBism a form of anti-realism.[3][4] The originators of the interpretation disagree with this characterization, proposing instead that the theory more properly aligns with a kind of realism they call "participatory realism", wherein reality consists of more than can be captured by any putative third-person account of it.[5][6] This interpretation is distinguished by its use of a subjective Bayesian account of probabilities to understand the quantum mechanical Born rule as a normative addition to good decision-making. Rooted in the prior work of Carlton Caves, Christopher Fuchs, and Rüdiger Schack during the early 2000s, QBism itself is primarily associated with Fuchs and Schack and has more recently been adopted by David Mermin.[7] QBism draws from the fields of quantum information and Bayesian probability and aims to eliminate the interpretational conundrums that have beset quantum theory.
The QBist interpretation is historically derivative of the views of the various physicists that are often grouped together as "the" Copenhagen interpretation,[8][9] but is itself distinct from them.[9][10] Theodor Hänsch has characterized QBism as sharpening those older views and making them more consistent.[11] More generally, any work that uses a Bayesian or personalist (a.k.a. "subjective") treatment of the probabilities that appear in quantum theory is also sometimes called quantum Bayesian. QBism, in particular, has been referred to as "the radical Bayesian interpretation".[12] In addition to presenting an interpretation of the existing mathematical structure of quantum theory, some QBists have advocated a research program of reconstructing quantum theory from basic physical principles whose QBist character is manifest. The ultimate goal of this research is to identify what aspects of the ontology of the physical world make quantum theory a good tool for agents to use.[13] However, the QBist interpretation itself, as described in § Core positions, does not depend on any particular reconstruction. E. T. Jaynes, a promoter of the use of Bayesian probability in statistical physics, once suggested that quantum theory is "[a] peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature—all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble".[15] QBism developed out of efforts to separate these parts using the tools of quantum information theory and personalist Bayesian probability theory. There are many interpretations of probability theory.
Broadly speaking, these interpretations fall into one of three categories: those which assert that a probability is an objective property of reality (the propensity school), those which assert that probability is an objective property of the measuring process (frequentists), and those which assert that a probability is a cognitive construct which an agent may use to quantify their ignorance or degree of belief in a proposition (Bayesians). QBism begins by asserting that all probabilities, even those appearing in quantum theory, are most properly viewed as members of the latter category. Specifically, QBism adopts a personalist Bayesian interpretation along the lines of Italian mathematician Bruno de Finetti[16] and English philosopher Frank Ramsey.[17][18] According to QBists, the advantages of adopting this view of probability are twofold. First, for QBists the role of quantum states, such as the wavefunctions of particles, is to efficiently encode probabilities; so quantum states are ultimately degrees of belief themselves. (If one considers any single measurement that is a minimal, informationally complete positive operator-valued measure (POVM), this is especially clear: a quantum state is mathematically equivalent to a single probability distribution, the distribution over the possible outcomes of that measurement.[19]) Regarding quantum states as degrees of belief implies that the event of a quantum state changing when a measurement occurs (the "collapse of the wave function") is simply the agent updating her beliefs in response to a new experience.[13] Second, it suggests that quantum mechanics can be thought of as a local theory, because the Einstein–Podolsky–Rosen (EPR) criterion of reality can be rejected.
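The parenthetical claim that a quantum state is mathematically equivalent to a probability distribution over the outcomes of a minimal informationally complete POVM can be checked numerically. The sketch below uses the standard qubit SIC-POVM built from tetrahedral Bloch vectors (a conventional construction, not specific to the cited reference): the four outcome probabilities determine the density matrix exactly.

```python
import numpy as np

# Pauli matrices and the four tetrahedral Bloch vectors of a qubit SIC
I = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

d = 2
projectors = [(I + b[0]*sx + b[1]*sy + b[2]*sz) / 2 for b in bloch]
povm = [P / d for P in projectors]  # SIC-POVM elements H_i = Pi_i / d

# An arbitrary mixed qubit state, via a Bloch vector inside the unit ball
r = np.array([0.3, 0.2, 0.4])
rho = (I + r[0]*sx + r[1]*sy + r[2]*sz) / 2

# Forward: state -> probability 4-vector
p = np.array([np.trace(rho @ H).real for H in povm])

# Backward: probabilities -> state, via rho = sum_i [(d+1) p_i - 1/d] Pi_i
rho_rec = sum(((d + 1) * p_i - 1.0 / d) * P for p_i, P in zip(p, projectors))

print(np.allclose(rho, rho_rec))  # True
```

The forward and backward maps are inverses of each other, so nothing in the density matrix is lost by trading it for the SIC probability vector.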
The EPR criterion states: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity."[20] Arguments that quantum mechanics should be considered a nonlocal theory depend upon this principle, but to a QBist, it is invalid, because a personalist Bayesian considers all probabilities, even those equal to unity, to be degrees of belief.[21][22] Therefore, while many interpretations of quantum theory conclude that quantum mechanics is a nonlocal theory, QBists do not.[23] Christopher Fuchs introduced the term "QBism" and outlined the interpretation in more or less its present form in 2010,[24] carrying further and demanding consistency of ideas broached earlier, notably in publications from 2002.[25][26] Several subsequent works have expanded and elaborated upon these foundations, notably a Reviews of Modern Physics article by Fuchs and Schack;[19] an American Journal of Physics article by Fuchs, Mermin, and Schack;[23] and Enrico Fermi Summer School[27] lecture notes by Fuchs and Stacey.[22] Prior to the 2010 article, the term "quantum Bayesianism" was used to describe the developments which have since led to QBism in its present form. However, as noted above, QBism subscribes to a particular kind of Bayesianism which does not suit everyone who might apply Bayesian reasoning to quantum theory (see, for example, § Other uses of Bayesian probability in quantum physics below). Consequently, Fuchs chose to call the interpretation "QBism", pronounced "cubism", preserving the Bayesian spirit via the CamelCase in the first two letters, but distancing it from Bayesianism more broadly.
As this neologism is a homophone of Cubism, the art movement, it has motivated conceptual comparisons between the two,[28] and media coverage of QBism has been illustrated with art by Picasso[7] and Gris.[29] However, QBism itself was not influenced or motivated by Cubism and has no lineage to a potential connection between Cubist art and Bohr's views on quantum theory.[30] According to QBism, quantum theory is a tool which an agent may use to help manage their expectations, more like probability theory than a conventional physical theory.[13] Quantum theory, QBism claims, is fundamentally a guide for decision making which has been shaped by some aspects of physical reality. Chief among the tenets of QBism are the following:[31] Reactions to the QBist interpretation have ranged from enthusiastic[13][28] to strongly negative.[32] Some who have criticized QBism claim that it fails to meet the goal of resolving paradoxes in quantum theory. Bacciagaluppi argues that QBism's treatment of measurement outcomes does not ultimately resolve the issue of nonlocality,[33] and Jaeger finds QBism's supposition that the interpretation of probability is key for the resolution to be unnatural and unconvincing.[12] Norsen[34] has accused QBism of solipsism, and Wallace[35] identifies QBism as an instance of instrumentalism; QBists have argued insistently that these characterizations are misunderstandings, and that QBism is neither solipsist nor instrumentalist.[17][36] A critical article by Nauenberg[32] in the American Journal of Physics prompted a reply by Fuchs, Mermin, and Schack.[37] Some assert that there may be inconsistencies; for example, Stairs argues that when a probability assignment equals one, it cannot be a degree of belief as QBists say.[38] Further, while also raising concerns about the treatment of probability-one assignments, Timpson suggests that QBism may result in a reduction of explanatory power as compared to other interpretations.[1] Fuchs and Schack replied to these concerns in a
later article.[39] Mermin advocated QBism in a 2012 Physics Today article,[2] which prompted considerable discussion. Several further critiques of QBism which arose in response to Mermin's article, and Mermin's replies to these comments, may be found in the Physics Today readers' forum.[40][41] Section 2 of the Stanford Encyclopedia of Philosophy entry on QBism also contains a summary of objections to the interpretation, and some replies.[42] Others are opposed to QBism on more general philosophical grounds; for example, Mohrhoff criticizes QBism from the standpoint of Kantian philosophy.[43] Certain authors find QBism internally self-consistent, but do not subscribe to the interpretation.[44] For example, Marchildon finds QBism well-defined in a way that, to him, many-worlds interpretations are not, but he ultimately prefers a Bohmian interpretation.[45] Similarly, Schlosshauer and Claringbold state that QBism is a consistent interpretation of quantum mechanics, but do not offer a verdict on whether it should be preferred.[46] In addition, some agree with most, but perhaps not all, of the core tenets of QBism; Barnum's position,[47] as well as Appleby's,[48] are examples.
Popularized or semi-popularized media coverage of QBism has appeared in New Scientist,[49] Scientific American,[50] Nature,[51] Science News,[52] the FQXi Community,[53] the Frankfurter Allgemeine Zeitung,[29] Quanta Magazine,[16] Aeon,[54] Discover,[55] Nautilus Quarterly,[56] and Big Think.[57] In 2018, two popular-science books about the interpretation of quantum mechanics, Ball's Beyond Weird and Ananthaswamy's Through Two Doors at Once, devoted sections to QBism.[58][59] Furthermore, Harvard University Press published a popularized treatment of the subject, QBism: The Future of Quantum Physics, in 2016.[13] The philosophy literature has also discussed QBism from the viewpoints of structural realism and of phenomenology.[60][61][62] Ballentine argues that "the initial assumption of QBism is not valid" because the inferential probability of Bayesian theory used by QBism is not applicable to quantum mechanics.[63] The views of many physicists (Bohr, Heisenberg, Rosenfeld, von Weizsäcker, Peres, etc.) are often grouped together as the "Copenhagen interpretation" of quantum mechanics. Several authors have deprecated this terminology, claiming that it is historically misleading and obscures differences between physicists that are as important as their similarities.[14][64] QBism shares many characteristics in common with the ideas often labeled as "the Copenhagen interpretation", but the differences are important; to conflate them or to regard QBism as a minor modification of the points of view of Bohr or Heisenberg, for instance, would be a substantial misrepresentation.[10][31] QBism takes probabilities to be personal judgments of the individual agent who is using quantum mechanics.
This contrasts with older Copenhagen-type views, which hold that probabilities are given by quantum states that are in turn fixed by objective facts about preparation procedures.[13][65] QBism considers a measurement to be any action that an agent takes to elicit a response from the world, and the outcome of that measurement to be the experience the world's response induces back on that agent. As a consequence, communication between agents is the only means by which different agents can attempt to compare their internal experiences. Most variants of the Copenhagen interpretation, however, hold that the outcomes of experiments are agent-independent pieces of reality for anyone to access.[10] QBism claims that these points on which it differs from previous Copenhagen-type interpretations resolve the obscurities that many critics have found in the latter, by changing the role that quantum theory plays (even though QBism does not yet provide a specific underlying ontology). Specifically, QBism posits that quantum theory is a normative tool which an agent may use to better navigate reality, rather than a set of mechanics governing it.[22][42] Approaches to quantum theory, like QBism,[66] which treat quantum states as expressions of information, knowledge, belief, or expectation are called "epistemic" interpretations.[6] These approaches differ from each other in what they consider quantum states to be information or expectations "about", as well as in the technical features of the mathematics they employ. Furthermore, not all authors who advocate views of this type propose an answer to the question of what the information represented in quantum states concerns. In the words of the paper that introduced the Spekkens toy model: if a quantum state is a state of knowledge, and it is not knowledge of local and noncontextual hidden variables, then what is it knowledge about? We do not at present have a good answer to this question.
We shall therefore remain completely agnostic about the nature of the reality to which the knowledge represented by quantum states pertains. This is not to say that the question is not important. Rather, we see the epistemic approach as an unfinished project, and this question as the central obstacle to its completion. Nonetheless, we argue that even in the absence of an answer to this question, a case can be made for the epistemic view. The key is that one can hope to identify phenomena that are characteristic of states of incomplete knowledge regardless of what this knowledge is about.[67] Leifer and Spekkens propose a way of treating quantum probabilities as Bayesian probabilities, thereby considering quantum states as epistemic, which they state is "closely aligned in its philosophical starting point" with QBism.[68] However, they remain deliberately agnostic about what physical properties or entities quantum states are information (or beliefs) about, as opposed to QBism, which offers an answer to that question.[68] Another approach, advocated by Bub and Pitowsky, argues that quantum states are information about propositions within event spaces that form non-Boolean lattices.[69] On occasion, the proposals of Bub and Pitowsky are also called "quantum Bayesianism".[70] Zeilinger and Brukner have also proposed an interpretation of quantum mechanics in which "information" is a fundamental concept, and in which quantum states are epistemic quantities.[71] Unlike QBism, the Brukner–Zeilinger interpretation treats some probabilities as objectively fixed. In the Brukner–Zeilinger interpretation, a quantum state represents the information that a hypothetical observer in possession of all possible data would have.
Put another way, a quantum state belongs in their interpretation to an optimally informed agent, whereas in QBism, any agent can formulate a state to encode her own expectations.[72] Despite this difference, in Cabello's classification, the proposals of Zeilinger and Brukner are also designated as "participatory realism", as QBism and the Copenhagen-type interpretations are.[6] Bayesian, or epistemic, interpretations of quantum probabilities were proposed in the early 1990s by Baez and Youssef.[73][74] R. F. Streater argued that "[t]he first quantum Bayesian was von Neumann", basing that claim on von Neumann's textbook The Mathematical Foundations of Quantum Mechanics.[75] Blake Stacey disagrees, arguing that the views expressed in that book on the nature of quantum states and the interpretation of probability are not compatible with QBism, or indeed, with any position that might be called quantum Bayesianism.[14] Comparisons have also been made between QBism and the relational quantum mechanics (RQM) espoused by Carlo Rovelli and others.[76][77] In both QBism and RQM, quantum states are not intrinsic properties of physical systems.[78] Both QBism and RQM deny the existence of an absolute, universal wavefunction. Furthermore, both QBism and RQM insist that quantum mechanics is a fundamentally local theory.[23][79] In addition, Rovelli, like several QBist authors, advocates reconstructing quantum theory from physical principles in order to bring clarity to the subject of quantum foundations.[80] (The QBist approaches to doing so are different from Rovelli's, and are described below.)
One important distinction between the two interpretations is their philosophy of probability: RQM does not adopt the Ramsey–de Finetti school of personalist Bayesianism.[6][17] Moreover, RQM does not insist that a measurement outcome is necessarily an agent's experience.[17] QBism should be distinguished from other applications of Bayesian inference in quantum physics, and from quantum analogues of Bayesian inference.[19][73] For example, some in the field of computer science have introduced a kind of quantum Bayesian network, which they argue could have applications in "medical diagnosis, monitoring of processes, and genetics".[81][82] Bayesian inference has also been applied in quantum theory for updating probability densities over quantum states,[83] and MaxEnt methods have been used in similar ways.[73][84] Bayesian methods for quantum state and process tomography are an active area of research.[85] Conceptual concerns about the interpretation of quantum mechanics and the meaning of probability have motivated technical work. A quantum version of the de Finetti theorem, introduced by Caves, Fuchs, and Schack (independently reproving a result found using different means by Størmer[86]) to provide a Bayesian understanding of the idea of an "unknown quantum state",[87][88] has found application elsewhere, in topics like quantum key distribution[89] and entanglement detection.[90] Adherents of several interpretations of quantum mechanics, QBism included, have been motivated to reconstruct quantum theory. The goal of these research efforts has been to identify a new set of axioms or postulates from which the mathematical structure of quantum theory can be derived, in the hope that with such a reformulation, the features of nature which made quantum theory the way it is might be more easily identified.[51][91] Although the core tenets of QBism do not demand such a reconstruction, some QBists, Fuchs[26] in particular, have argued that the task should be pursued.
One topic prominent in the reconstruction effort is the set of mathematical structures known as symmetric, informationally-complete, positive operator-valued measures (SIC-POVMs). QBist foundational research stimulated interest in these structures, which now have applications in quantum theory outside of foundational studies[92]and in pure mathematics.[93] The most extensively explored QBist reformulation of quantum theory involves the use of SIC-POVMs to rewrite quantum states (either pure ormixed) as a set of probabilities defined over the outcomes of a "Bureau of Standards" measurement.[94][95]That is, if one expresses adensity matrixas a probability distribution over the outcomes of a SIC-POVM experiment, one can reproduce all the statistical predictions implied by the density matrix from the SIC-POVM probabilities instead.[96]TheBorn rulethen takes the role of relating one valid probability distribution to another, rather than of deriving probabilities from something apparently more fundamental. Fuchs, Schack, and others have taken to calling this restatement of the Born rule theurgleichung,from the German for "primal equation" (seeUr-prefix), because of the central role it plays in their reconstruction of quantum theory.[19][97][98] The following discussion presumes some familiarity with the mathematics ofquantum informationtheory, and in particular, the modeling of measurement procedures byPOVMs. Consider a quantum system to which is associated ad{\textstyle d}-dimensionalHilbert space. If a set ofd2{\textstyle d^{2}}rank-1projectorsΠ^i{\displaystyle {\hat {\Pi }}_{i}}satisfyingtr⁡Π^iΠ^j=dδij+1d+1{\displaystyle \operatorname {tr} {\hat {\Pi }}_{i}{\hat {\Pi }}_{j}={\frac {d\delta _{ij}+1}{d+1}}}exists, then one may form a SIC-POVMH^i=1dΠ^i{\textstyle {\hat {H}}_{i}={\frac {1}{d}}{\hat {\Pi }}_{i}}. 
An arbitrary quantum stateρ^{\displaystyle {\hat {\rho }}}may be written as a linear combination of the SIC projectorsρ^=∑i=1d2[(d+1)P(Hi)−1d]Π^i,{\displaystyle {\hat {\rho }}=\sum _{i=1}^{d^{2}}\left[(d+1)P(H_{i})-{\frac {1}{d}}\right]{\hat {\Pi }}_{i},}whereP(Hi)=tr⁡ρ^H^i{\textstyle P(H_{i})=\operatorname {tr} {\hat {\rho }}{\hat {H}}_{i}}is the Born rule probability for obtaining SIC measurement outcomeHi{\displaystyle H_{i}}implied by the state assignmentρ^{\displaystyle {\hat {\rho }}}. We follow the convention that operators have hats while experiences (that is, measurement outcomes) do not. Now consider an arbitrary quantum measurement, denoted by the POVM{D^j}{\displaystyle \{{\hat {D}}_{j}\}}. The urgleichung is the expression obtained from forming the Born rule probabilities,Q(Dj)=tr⁡ρ^D^j{\textstyle Q(D_{j})=\operatorname {tr} {\hat {\rho }}{\hat {D}}_{j}}, for the outcomes of this quantum measurement,Q(Dj)=∑i=1d2[(d+1)P(Hi)−1d]P(Dj∣Hi),{\displaystyle Q(D_{j})=\sum _{i=1}^{d^{2}}\left[(d+1)P(H_{i})-{\frac {1}{d}}\right]P(D_{j}\mid H_{i}),}whereP(Dj∣Hi)≡tr⁡Π^iD^j{\displaystyle P(D_{j}\mid H_{i})\equiv \operatorname {tr} {\hat {\Pi }}_{i}{\hat {D}}_{j}}is the Born rule probability for obtaining outcomeDj{\displaystyle D_{j}}implied by the state assignmentΠ^i{\displaystyle {\hat {\Pi }}_{i}}. TheP(Dj∣Hi){\displaystyle P(D_{j}\mid H_{i})}term may be understood to be a conditional probability in a cascaded measurement scenario: Imagine that an agent plans to perform two measurements, first a SIC measurement and then the{Dj}{\displaystyle \{D_{j}\}}measurement. After obtaining an outcome from the SIC measurement, the agent will update her state assignment to a new quantum stateρ^′{\displaystyle {\hat {\rho }}'}before performing the second measurement. If she uses theLüdersrule[99]for state update and obtains outcomeHi{\displaystyle H_{i}}from the SIC measurement, thenρ^′=Π^i{\textstyle {\hat {\rho }}'={\hat {\Pi }}_{i}}. 
Thus the probability for obtaining outcomeDj{\displaystyle D_{j}}for the second measurement conditioned on obtaining outcomeHi{\displaystyle H_{i}}for the SIC measurement isP(Dj∣Hi){\displaystyle P(D_{j}\mid H_{i})}. Note that the urgleichung is structurally very similar to thelaw of total probability, which is the expressionP(Dj)=∑i=1d2P(Hi)P(Dj∣Hi).{\displaystyle P(D_{j})=\sum _{i=1}^{d^{2}}P(H_{i})P(D_{j}\mid H_{i}).}They functionally differ only by a dimension-dependentaffine transformationof the SIC probability vector. As QBism says that quantum theory is an empirically-motivated normative addition to probability theory, Fuchs and others find the appearance of a structure in quantum theory analogous to one in probability theory to be an indication that a reformulation featuring the urgleichung prominently may help to reveal the properties of nature which made quantum theory so successful.[19][22] The urgleichung does notreplacethe law of total probability. Rather, the urgleichung and the law of total probability apply in different scenarios becauseP(Dj){\displaystyle P(D_{j})}andQ(Dj){\displaystyle Q(D_{j})}refer to different situations.P(Dj){\displaystyle P(D_{j})}is the probability that an agent assigns for obtaining outcomeDj{\displaystyle D_{j}}on her second of two planned measurements, that is, for obtaining outcomeDj{\displaystyle D_{j}}after first making the SIC measurement and obtaining one of theHi{\displaystyle H_{i}}outcomes.Q(Dj){\displaystyle Q(D_{j})}, on the other hand, is the probability an agent assigns for obtaining outcomeDj{\displaystyle D_{j}}when she does not plan to first make the SIC measurement.The law of total probability is a consequence ofcoherencewithin the operational context of performing the two measurements as described. The urgleichung, in contrast, is a relation between different contexts which finds its justification in the predictive success of quantum physics. 
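As a concrete illustration (not from the source), the urgleichung can be checked numerically for a qubit using the standard tetrahedral SIC in dimension d = 2; the state, the second measurement {D_j}, and all variable names below are illustrative choices:

```python
import numpy as np

# d = 2 tetrahedral SIC: four rank-1 projectors whose Bloch vectors point to
# the vertices of a regular tetrahedron, so tr(Pi_i Pi_j) = (2*delta_ij + 1)/3.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
vecs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
Pi = [(I2 + n[0] * sx + n[1] * sy + n[2] * sz) / 2 for n in vecs]

d = 2
H = [P / d for P in Pi]                 # SIC-POVM elements H_i = Pi_i / d

# An arbitrary mixed state and an arbitrary second measurement {D_j}
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
D = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

P_H = np.array([np.trace(rho @ Hi).real for Hi in H])            # P(H_i)
P_cond = np.array([[np.trace(Pi_i @ Dj).real for Pi_i in Pi]     # P(D_j | H_i)
                   for Dj in D])

# Urgleichung: Q(D_j) = sum_i [(d+1) P(H_i) - 1/d] P(D_j | H_i)
Q_urgleichung = P_cond @ ((d + 1) * P_H - 1 / d)
Q_born = np.array([np.trace(rho @ Dj).real for Dj in D])         # direct Born rule
```

Running this, `Q_urgleichung` and `Q_born` agree to machine precision, and the SIC condition tr Π_iΠ_j = (2δ_ij + 1)/3 can be verified from the same projectors.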
The SIC representation of quantum states also provides a reformulation of quantum dynamics. Consider a quantum stateρ^{\displaystyle {\hat {\rho }}}with SIC representationP(Hi){\textstyle P(H_{i})}. The time evolution of this state is found by applying aunitary operatorU^{\displaystyle {\hat {U}}}to form the new stateU^ρ^U^†{\textstyle {\hat {U}}{\hat {\rho }}{\hat {U}}^{\dagger }}, which has the SIC representation Pt(Hi)=tr⁡[(U^ρ^U^†)H^i]=tr⁡[ρ^(U^†H^iU^)].{\displaystyle P_{t}(H_{i})=\operatorname {tr} \left[({\hat {U}}{\hat {\rho }}{\hat {U}}^{\dagger }){\hat {H}}_{i}\right]=\operatorname {tr} \left[{\hat {\rho }}({\hat {U}}^{\dagger }{\hat {H}}_{i}{\hat {U}})\right].} The second equality is written in theHeisenberg pictureof quantum dynamics, with respect to which the time evolution of a quantum system is captured by the probabilities associated with a rotated SIC measurement{Dj}={U^†H^jU^}{\textstyle \{D_{j}\}=\{{\hat {U}}^{\dagger }{\hat {H}}_{j}{\hat {U}}\}}of the original quantum stateρ^{\displaystyle {\hat {\rho }}}. Then theSchrödinger equationis completely captured in the urgleichung for this measurement:Pt(Hj)=∑i=1d2[(d+1)P(Hi)−1d]P(Dj∣Hi).{\displaystyle P_{t}(H_{j})=\sum _{i=1}^{d^{2}}\left[(d+1)P(H_{i})-{\frac {1}{d}}\right]P(D_{j}\mid H_{i}).}In these terms, the Schrödinger equation is an instance of the Born rule applied to the passing of time; an agent uses it to relate how she will gamble on informationally complete measurements potentially performed at different times. 
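The same check can be sketched for dynamics (again an illustrative Python fragment, with an arbitrary rotation standing in for the unitary): propagating the SIC probabilities through the urgleichung for the rotated SIC reproduces direct Schrödinger-picture evolution.

```python
import numpy as np

# Tetrahedral qubit SIC, as in the standard d = 2 example
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
vecs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
Pi = [(I2 + n[0] * sx + n[1] * sy + n[2] * sz) / 2 for n in vecs]
d = 2

rho = np.array([[0.8, 0.1], [0.1, 0.2]], dtype=complex)    # an arbitrary state
theta = 0.7                                                # an arbitrary angle
U = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],     # exp(-i theta sy / 2)
              [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

P = np.array([np.trace(rho @ Pi_i).real / d for Pi_i in Pi])     # P(H_i)

# Rotated SIC D_j = U^dag H_j U, giving conditionals P(D_j | H_i) = tr(Pi_i D_j)
P_cond = np.array([[np.trace(Pi[i] @ U.conj().T @ Pi[j] @ U).real / d
                    for i in range(4)] for j in range(4)])

P_t = P_cond @ ((d + 1) * P - 1 / d)     # urgleichung for the rotated SIC

# Direct Schroedinger-picture evolution for comparison
rho_t = U @ rho @ U.conj().T
P_t_direct = np.array([np.trace(rho_t @ Pi_j).real / d for Pi_j in Pi])
```

`P_t` and `P_t_direct` coincide, illustrating the claim that unitary time evolution is itself an instance of the urgleichung.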
Those QBists who find this approach promising are pursuing a complete reconstruction of quantum theory featuring the urgleichung as the key postulate.[97](The urgleichung has also been discussed in the context ofcategory theory.[100]) Comparisons between this approach and others not associated with QBism (or indeed with any particular interpretation) can be found in a book chapter by Fuchs and Stacey[101]and an article by Applebyet al.[97]As of 2017, alternative QBist reconstruction efforts are in the beginning stages.[102]
https://en.wikipedia.org/wiki/Quantum_Bayesianism
The current state ofquantum computing[1]is referred to as thenoisy intermediate-scale quantum(NISQ)era,[2][3]characterized by quantum processors containing up to 1,000qubitswhich are not advanced enough yet forfault-toleranceor large enough to achievequantum advantage.[4][5]These processors, which are sensitive to their environment (noisy) and prone toquantum decoherence, are not yet capable of continuousquantum error correction. This intermediate-scale is defined by thequantum volume, which is based on the moderate number of qubits andgatefidelity. The term NISQ was coined byJohn Preskillin 2018.[6][2] According toMicrosoft Azure Quantum's scheme, NISQ computation is considered level 1, the lowest of the quantum computing implementation levels.[7][8] In October 2023, the 1,000 qubit mark was passed for the first time by Atom Computing's 1,180 qubit quantum processor.[9]However, as of 2024, only two quantum processors have over 1,000 qubits, with sub-1,000 quantum processors still remaining the norm.[10] NISQ algorithmsarequantum algorithmsdesigned for quantum processors in the NISQ era. Common examples are thevariational quantum eigensolver(VQE) andquantum approximate optimization algorithm(QAOA), which use NISQ devices but offload some calculations to classical processors.[2]These algorithms have been successful inquantum chemistryand have potential applications in various fields including physics, materials science, data science, cryptography, biology, and finance.[2]However, due to noise during circuit execution, they often require error mitigation techniques.[11][5][12][13]These methods constitute a way of reducing the effect of noise by running a set of circuits and applying post-processing to the measured data. In contrast toquantum error correction, where errors are continuously detected and corrected during the run of the circuit, error mitigation can only use the outcome of the noisy circuits. 
The creation of a computer with tens of thousands of qubits and enough error correction would eventually end the NISQ era.[4]These beyond-NISQ devices would be able to, for example, implementShor's algorithmfor very large numbers and breakRSAencryption.[14] In April 2024, researchers at Microsoft announced a significant reduction in error rates that required only 4 logical qubits, suggesting that quantum computing at scale could be years away instead of decades.[15]
https://en.wikipedia.org/wiki/Noisy_intermediate-scale_quantum_era
Inquantum mechanics, notably inquantum information theory,fidelityquantifies the "closeness" between twodensity matrices. It expresses the probability that one state will pass a test to identify as the other. It is not ametricon the space of density matrices, but it can be used to define theBures metricon this space. The fidelity between two quantum statesρ{\displaystyle \rho }andσ{\displaystyle \sigma }, expressed asdensity matrices, is commonly defined as:[1][2] The square roots in this expression are well-defined because bothρ{\displaystyle \rho }andρσρ{\displaystyle {\sqrt {\rho }}\sigma {\sqrt {\rho }}}are positive semidefinite matrices, and thesquare root of a positive semidefinite matrixis defined via thespectral theorem. The Euclidean inner product from the classical definition is replaced by theHilbert–Schmidtinner product. As will be discussed in the following sections, this expression can be simplified in various cases of interest. In particular, for pure states,ρ=|ψρ⟩⟨ψρ|{\displaystyle \rho =|\psi _{\rho }\rangle \!\langle \psi _{\rho }|}andσ=|ψσ⟩⟨ψσ|{\displaystyle \sigma =|\psi _{\sigma }\rangle \!\langle \psi _{\sigma }|}, it equals:F(ρ,σ)=|⟨ψρ|ψσ⟩|2.{\displaystyle F(\rho ,\sigma )=|\langle \psi _{\rho }|\psi _{\sigma }\rangle |^{2}.}This tells us that the fidelity between pure states has a straightforward interpretation in terms of probability of finding the state|ψρ⟩{\displaystyle |\psi _{\rho }\rangle }when measuring|ψσ⟩{\displaystyle |\psi _{\sigma }\rangle }in a basis containing|ψρ⟩{\displaystyle |\psi _{\rho }\rangle }. Some authors use an alternative definitionF′:=F{\displaystyle F':={\sqrt {F}}}and call this quantity fidelity.[2]The definition ofF{\displaystyle F}however is more common.[3][4][5]To avoid confusion,F′{\displaystyle F'}could be called "square root fidelity". In any case it is advisable to clarify the adopted definition whenever the fidelity is employed. 
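The definition can be implemented directly; the following sketch (illustrative, pure numpy, assuming Hermitian positive semidefinite inputs; the helper name `psd_sqrtm` and the example states are my own choices) also checks the pure-state reduction:

```python
import numpy as np

def psd_sqrtm(A):
    """Matrix square root of a Hermitian positive semidefinite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = (tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2."""
    sr = psd_sqrtm(rho)
    return float(np.trace(psd_sqrtm(sr @ sigma @ sr)).real ** 2)

# For pure states the definition reduces to the squared overlap |<psi|phi>|^2
psi = np.array([1, 0], dtype=complex)
phi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
sigma = np.outer(phi, phi.conj())
# here fidelity(rho, sigma) = |<psi|phi>|^2 = 0.5 and fidelity(rho, rho) = 1
```

Under the alternative convention mentioned above, one would return the square root of this quantity instead.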
Given tworandom variablesX,Y{\displaystyle X,Y}with values(1,...,n){\displaystyle (1,...,n)}(categorical random variables) and probabilitiesp=(p1,p2,…,pn){\displaystyle p=(p_{1},p_{2},\ldots ,p_{n})}andq=(q1,q2,…,qn){\displaystyle q=(q_{1},q_{2},\ldots ,q_{n})}, the fidelity ofX{\displaystyle X}andY{\displaystyle Y}is defined to be the quantity The fidelity deals with themarginal distributionof the random variables. It says nothing about thejoint distributionof those variables. In other words, the fidelityF(X,Y){\displaystyle F(X,Y)}is the square of theinner productof(p1,…,pn){\displaystyle ({\sqrt {p_{1}}},\ldots ,{\sqrt {p_{n}}})}and(q1,…,qn){\displaystyle ({\sqrt {q_{1}}},\ldots ,{\sqrt {q_{n}}})}viewed as vectors inEuclidean space. Notice thatF(X,Y)=1{\displaystyle F(X,Y)=1}if and only ifp=q{\displaystyle p=q}. In general,0≤F(X,Y)≤1{\displaystyle 0\leq F(X,Y)\leq 1}. Themeasure∑ipiqi{\displaystyle \sum _{i}{\sqrt {p_{i}q_{i}}}}is known as theBhattacharyya coefficient. Given aclassicalmeasure of the distinguishability of twoprobability distributions, one can motivate a measure of distinguishability of two quantum states as follows: if an experimenter is attempting to determine whether aquantum stateis either of two possibilitiesρ{\displaystyle \rho }orσ{\displaystyle \sigma }, the most general possible measurement they can make on the state is aPOVM, which is described by a set ofHermitianpositive semidefiniteoperators{Fi}{\displaystyle \{F_{i}\}}. When measuring a stateρ{\displaystyle \rho }with this POVM, the i{\displaystyle i}-th outcome is found with probabilitypi=tr⁡(ρFi){\displaystyle p_{i}=\operatorname {tr} (\rho F_{i})}, and likewise with probabilityqi=tr⁡(σFi){\displaystyle q_{i}=\operatorname {tr} (\sigma F_{i})}forσ{\displaystyle \sigma }. 
The ability to distinguish betweenρ{\displaystyle \rho }andσ{\displaystyle \sigma }is then equivalent to the experimenter's ability to distinguish between the classical probability distributionsp{\displaystyle p}andq{\displaystyle q}. A natural question is then to ask which POVM makes the two distributions as distinguishable as possible, which in this context means to minimize the Bhattacharyya coefficient over the possible choices of POVM. Formally, we are thus led to define the fidelity between quantum states as: It was shown by Fuchs and Caves[6]that the minimization in this expression can be computed explicitly, with solution the projective POVM corresponding to measuring in the eigenbasis ofσ−1/2|σρ|σ−1/2{\displaystyle \sigma ^{-1/2}|{\sqrt {\sigma }}{\sqrt {\rho }}|\sigma ^{-1/2}}, and results in the common explicit expression for the fidelity asF(ρ,σ)=(tr⁡ρσρ)2.{\displaystyle F(\rho ,\sigma )=\left(\operatorname {tr} {\sqrt {{\sqrt {\rho }}\sigma {\sqrt {\rho }}}}\right)^{2}.} An equivalent expression for the fidelity between arbitrary states via thetrace normis: where theabsolute valueof an operator is here defined as|A|≡A†A{\displaystyle |A|\equiv {\sqrt {A^{\dagger }A}}}. Since thetraceof a matrix is equal to the sum of itseigenvalues, where theλj{\displaystyle \lambda _{j}}are the eigenvalues ofρσρ{\displaystyle {\sqrt {\rho }}\sigma {\sqrt {\rho }}}, which is positive semidefinite by construction and so the square roots of the eigenvalues are well defined. 
Because thecharacteristic polynomial of a product of two matricesis independent of the order, thespectrumof a matrix product is invariant under cyclic permutation, and so these eigenvalues can instead be calculated fromρσ{\displaystyle \rho \sigma }.[7]Reversing the trace property leads to If (at least) one of the two states is pure, for exampleρ=|ψρ⟩⟨ψρ|{\displaystyle \rho =|\psi _{\rho }\rangle \!\langle \psi _{\rho }|}, the fidelity simplifies toF(ρ,σ)=tr⁡(σρ)=⟨ψρ|σ|ψρ⟩.{\displaystyle F(\rho ,\sigma )=\operatorname {tr} (\sigma \rho )=\langle \psi _{\rho }|\sigma |\psi _{\rho }\rangle .}This follows observing that ifρ{\displaystyle \rho }is pure thenρ=ρ{\displaystyle {\sqrt {\rho }}=\rho }, and thusF(ρ,σ)=(tr⁡|ψρ⟩⟨ψρ|σ|ψρ⟩⟨ψρ|)2=⟨ψρ|σ|ψρ⟩(tr⁡|ψρ⟩⟨ψρ|)2=⟨ψρ|σ|ψρ⟩.{\displaystyle F(\rho ,\sigma )=\left(\operatorname {tr} {\sqrt {|\psi _{\rho }\rangle \langle \psi _{\rho }|\sigma |\psi _{\rho }\rangle \langle \psi _{\rho }|}}\right)^{2}=\langle \psi _{\rho }|\sigma |\psi _{\rho }\rangle \left(\operatorname {tr} {\sqrt {|\psi _{\rho }\rangle \langle \psi _{\rho }|}}\right)^{2}=\langle \psi _{\rho }|\sigma |\psi _{\rho }\rangle .} If both states are pure,ρ=|ψρ⟩⟨ψρ|{\displaystyle \rho =|\psi _{\rho }\rangle \!\langle \psi _{\rho }|}andσ=|ψσ⟩⟨ψσ|{\displaystyle \sigma =|\psi _{\sigma }\rangle \!\langle \psi _{\sigma }|}, then we get the even simpler expression:F(ρ,σ)=|⟨ψρ|ψσ⟩|2.{\displaystyle F(\rho ,\sigma )=|\langle \psi _{\rho }|\psi _{\sigma }\rangle |^{2}.} Some of the important properties of the quantum state fidelity are: Ifρ{\displaystyle \rho }andσ{\displaystyle \sigma }are bothqubitstates, the fidelity can be computed as[1][8] Qubit state means thatρ{\displaystyle \rho }andσ{\displaystyle \sigma }are represented by two-dimensional matrices. 
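Both shortcuts can be checked numerically. In standard notation the qubit closed form referenced above reads F(ρ,σ) = tr(ρσ) + 2√(det ρ · det σ); the following illustrative Python sketch (arbitrary states of my choosing, pure-numpy matrix square root) compares it, together with the spectrum of ρσ, against the defining expression:

```python
import numpy as np

def psd_sqrtm(A):
    """Matrix square root of a Hermitian PSD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

rho = np.array([[0.7, 0.1 + 0.2j], [0.1 - 0.2j, 0.3]])
sigma = np.array([[0.4, 0.1], [0.1, 0.6]], dtype=complex)

# Defining expression: F = (tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2
sr = psd_sqrtm(rho)
F_general = np.trace(psd_sqrtm(sr @ sigma @ sr)).real ** 2

# Spectral shortcut: same value from the eigenvalues of rho @ sigma, which are
# real and non-negative even though the product itself is not Hermitian
evals = np.linalg.eigvals(rho @ sigma)
F_spectral = np.sum(np.sqrt(np.clip(evals.real, 0, None))) ** 2

# Qubit closed form: F = tr(rho sigma) + 2 sqrt(det(rho) det(sigma))
F_qubit = (np.trace(rho @ sigma)
           + 2 * np.sqrt(np.linalg.det(rho) * np.linalg.det(sigma))).real
```

All three expressions agree, consistent with the identity (√λ₁ + √λ₂)² = tr M + 2√(det M) for the 2 × 2 case.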
This result follows by noticing thatM=ρσρ{\displaystyle M={\sqrt {\rho }}\sigma {\sqrt {\rho }}}is apositive semidefinite operator, hencetr⁡M=λ1+λ2{\displaystyle \operatorname {tr} {\sqrt {M}}={\sqrt {\lambda _{1}}}+{\sqrt {\lambda _{2}}}}, whereλ1{\displaystyle \lambda _{1}}andλ2{\displaystyle \lambda _{2}}are the (nonnegative) eigenvalues ofM{\displaystyle M}. Ifρ{\displaystyle \rho }(orσ{\displaystyle \sigma }) is pure, this result is simplified further toF(ρ,σ)=tr⁡(ρσ){\displaystyle F(\rho ,\sigma )=\operatorname {tr} (\rho \sigma )}sinceDet(ρ)=0{\displaystyle \mathrm {Det} (\rho )=0}for pure states. Direct calculation shows that the fidelity is preserved byunitary evolution, i.e.F(UρU†,UσU†)=F(ρ,σ){\displaystyle F(U\rho U^{\dagger },U\sigma U^{\dagger })=F(\rho ,\sigma )}for anyunitary operatorU{\displaystyle U}. Let{Ek}k{\displaystyle \{E_{k}\}_{k}}be an arbitrarypositive operator-valued measure(POVM); that is, a set of positive semidefinite operatorsEk{\displaystyle E_{k}}satisfying∑kEk=I{\displaystyle \sum _{k}E_{k}=I}. Then, for any pair of statesρ{\displaystyle \rho }andσ{\displaystyle \sigma }, we haveF(ρ,σ)≤∑ktr⁡(Ekρ)tr⁡(Ekσ)≡∑kpkqk,{\displaystyle {\sqrt {F(\rho ,\sigma )}}\leq \sum _{k}{\sqrt {\operatorname {tr} (E_{k}\rho )}}{\sqrt {\operatorname {tr} (E_{k}\sigma )}}\equiv \sum _{k}{\sqrt {p_{k}q_{k}}},}where in the last step we denoted withpk≡tr⁡(Ekρ){\displaystyle p_{k}\equiv \operatorname {tr} (E_{k}\rho )}andqk≡tr⁡(Ekσ){\displaystyle q_{k}\equiv \operatorname {tr} (E_{k}\sigma )}the probability distributions obtained by measuringρ,σ{\displaystyle \rho ,\ \sigma }with the POVM{Ek}k{\displaystyle \{E_{k}\}_{k}}. This shows that the square root of the fidelity between two quantum states is upper bounded by theBhattacharyya coefficientbetween the corresponding probability distributions in any possible POVM. 
Indeed, it is more generally true thatF(ρ,σ)=min{Ek}F(p,q),{\displaystyle F(\rho ,\sigma )=\min _{\{E_{k}\}}F({\boldsymbol {p}},{\boldsymbol {q}}),}whereF(p,q)≡(∑kpkqk)2{\displaystyle F({\boldsymbol {p}},{\boldsymbol {q}})\equiv \left(\sum _{k}{\sqrt {p_{k}q_{k}}}\right)^{2}}, and the minimum is taken over all possible POVMs. More specifically, one can prove that the minimum is achieved by the projective POVM corresponding to measuring in the eigenbasis of the operatorσ−1/2|σρ|σ−1/2{\displaystyle \sigma ^{-1/2}|{\sqrt {\sigma }}{\sqrt {\rho }}|\sigma ^{-1/2}}.[9] As was previously shown, the square root of the fidelity can be written asF(ρ,σ)=tr⁡|ρσ|,{\displaystyle {\sqrt {F(\rho ,\sigma )}}=\operatorname {tr} |{\sqrt {\rho }}{\sqrt {\sigma }}|,}which is equivalent to the existence of a unitary operatorU{\displaystyle U}such that F(ρ,σ)=tr⁡(ρσU).{\displaystyle {\sqrt {F(\rho ,\sigma )}}=\operatorname {tr} ({\sqrt {\rho }}{\sqrt {\sigma }}U).}Remembering that∑kEk=I{\displaystyle \sum _{k}E_{k}=I}holds true for any POVM, we can then writeF(ρ,σ)=tr⁡(ρσU)=∑ktr⁡(ρEkσU)=∑ktr⁡(ρEkEkσU)≤∑ktr⁡(Ekρ)tr⁡(Ekσ),{\displaystyle {\sqrt {F(\rho ,\sigma )}}=\operatorname {tr} ({\sqrt {\rho }}{\sqrt {\sigma }}U)=\sum _{k}\operatorname {tr} ({\sqrt {\rho }}E_{k}{\sqrt {\sigma }}U)=\sum _{k}\operatorname {tr} ({\sqrt {\rho }}{\sqrt {E_{k}}}{\sqrt {E_{k}}}{\sqrt {\sigma }}U)\leq \sum _{k}{\sqrt {\operatorname {tr} (E_{k}\rho )\operatorname {tr} (E_{k}\sigma )}},}where in the last step we used Cauchy-Schwarz inequality as in|tr⁡(A†B)|2≤tr⁡(A†A)tr⁡(B†B){\displaystyle |\operatorname {tr} (A^{\dagger }B)|^{2}\leq \operatorname {tr} (A^{\dagger }A)\operatorname {tr} (B^{\dagger }B)}. 
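The measurement bound just derived is easy to probe numerically; a minimal sketch (arbitrary states and an arbitrary projective POVM, all my own illustrative choices):

```python
import numpy as np

def psd_sqrtm(A):
    """Matrix square root of a Hermitian positive semidefinite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

rho = np.array([[0.65, 0.15], [0.15, 0.35]], dtype=complex)
sigma = np.array([[0.3, -0.1], [-0.1, 0.7]], dtype=complex)

sr = psd_sqrtm(rho)
sqrtF = np.trace(psd_sqrtm(sr @ sigma @ sr)).real      # sqrt(F(rho, sigma))

# A projective POVM: measurement in a rotated orthonormal basis
theta = 0.4
basis = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]], dtype=complex)
E = [np.outer(basis[:, k], basis[:, k].conj()) for k in range(2)]

p = np.array([np.trace(Ek @ rho).real for Ek in E])    # p_k = tr(E_k rho)
q = np.array([np.trace(Ek @ sigma).real for Ek in E])  # q_k = tr(E_k sigma)
bhatt = np.sum(np.sqrt(p * q))                         # Bhattacharyya coefficient
# sqrt(F) <= bhatt for any POVM, with equality at the optimal measurement
```

Sweeping `theta` (or using non-projective POVMs) never takes `bhatt` below `sqrtF`, in line with the minimization formula above.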
The fidelity between two states can be shown to never decrease when a non-selectivequantum operationE{\displaystyle {\mathcal {E}}}is applied to the states:[10]F(E(ρ),E(σ))≥F(ρ,σ),{\displaystyle F({\mathcal {E}}(\rho ),{\mathcal {E}}(\sigma ))\geq F(\rho ,\sigma ),}for any trace-preservingcompletely positive mapE{\displaystyle {\mathcal {E}}}. We can define thetrace distancebetween two matrices A and B in terms of thetrace normby When A and B are both density operators, this is a quantum generalization of thestatistical distance. This is relevant because the trace distance provides upper and lower bounds on the fidelity as quantified by theFuchs–van de Graaf inequalities,[11] Often the trace distance is easier to calculate or bound than the fidelity, so these relationships are quite useful. In the case that at least one of the states is apure stateΨ, the lower bound can be tightened. We saw that for two pure states, their fidelity coincides with the overlap. Uhlmann's theorem[12]generalizes this statement to mixed states, in terms of their purifications: TheoremLet ρ and σ be density matrices acting onCn. Let ρ1⁄2be the unique positive square root of ρ and |ψρ⟩=∑i=1n(ρ1/2|ei⟩)⊗|ei⟩∈Cn⊗Cn{\displaystyle |\psi _{\rho }\rangle =\sum _{i=1}^{n}(\rho ^{{1}/{2}}|e_{i}\rangle )\otimes |e_{i}\rangle \in \mathbb {C} ^{n}\otimes \mathbb {C} ^{n}} be apurificationof ρ (therefore{|ei⟩}{\displaystyle \textstyle \{|e_{i}\rangle \}}is an orthonormal basis), then the following equality holds: where|ψσ⟩{\displaystyle |\psi _{\sigma }\rangle }is a purification of σ. Therefore, in general, the fidelity is the maximum overlap between purifications. A simple proof can be sketched as follows. Let|Ω⟩{\displaystyle \textstyle |\Omega \rangle }denote the vector and σ1⁄2be the unique positive square root of σ. We see that, due to the unitary freedom insquare root factorizationsand choosingorthonormal bases, an arbitrary purification of σ is of the form whereVi's areunitary operators. 
Now we directly calculate But in general, for any square matrixAand unitaryU, it is true that |tr(AU)| ≤ tr((A*A)1⁄2). Furthermore, equality is achieved ifU*is the unitary operator in thepolar decompositionofA. From this follows directly Uhlmann's theorem. We will here provide an alternative, explicit way to prove Uhlmann's theorem. Let|ψρ⟩{\displaystyle |\psi _{\rho }\rangle }and|ψσ⟩{\displaystyle |\psi _{\sigma }\rangle }be purifications ofρ{\displaystyle \rho }andσ{\displaystyle \sigma }, respectively. To start, let us show that|⟨ψρ|ψσ⟩|≤tr⁡|ρσ|{\displaystyle |\langle \psi _{\rho }|\psi _{\sigma }\rangle |\leq \operatorname {tr} |{\sqrt {\rho }}{\sqrt {\sigma }}|}. The general form of the purifications of the states is:|ψρ⟩=∑kλk|λk⟩⊗|uk⟩,|ψσ⟩=∑kμk|μk⟩⊗|vk⟩,{\displaystyle {\begin{aligned}|\psi _{\rho }\rangle &=\sum _{k}{\sqrt {\lambda _{k}}}|\lambda _{k}\rangle \otimes |u_{k}\rangle ,\\|\psi _{\sigma }\rangle &=\sum _{k}{\sqrt {\mu _{k}}}|\mu _{k}\rangle \otimes |v_{k}\rangle ,\end{aligned}}}where|λk⟩,|μk⟩{\displaystyle |\lambda _{k}\rangle ,|\mu _{k}\rangle }are theeigenvectorsofρ,σ{\displaystyle \rho ,\ \sigma }, and{uk}k,{vk}k{\displaystyle \{u_{k}\}_{k},\{v_{k}\}_{k}}are arbitrary orthonormal bases. 
The overlap between the purifications is⟨ψρ|ψσ⟩=∑jkλjμk⟨λj|μk⟩⟨uj|vk⟩=tr⁡(ρσU),{\displaystyle \langle \psi _{\rho }|\psi _{\sigma }\rangle =\sum _{jk}{\sqrt {\lambda _{j}\mu _{k}}}\langle \lambda _{j}|\mu _{k}\rangle \,\langle u_{j}|v_{k}\rangle =\operatorname {tr} \left({\sqrt {\rho }}{\sqrt {\sigma }}U\right),}where the unitary matrixU{\displaystyle U}is defined asU=(∑k|μk⟩⟨uk|)(∑j|vj⟩⟨λj|).{\displaystyle U=\left(\sum _{k}|\mu _{k}\rangle \!\langle u_{k}|\right)\,\left(\sum _{j}|v_{j}\rangle \!\langle \lambda _{j}|\right).}The conclusion is now reached via using the inequality|tr⁡(AU)|≤tr⁡(A†A)≡tr⁡|A|{\displaystyle |\operatorname {tr} (AU)|\leq \operatorname {tr} ({\sqrt {A^{\dagger }A}})\equiv \operatorname {tr} |A|}:|⟨ψρ|ψσ⟩|=|tr⁡(ρσU)|≤tr⁡|ρσ|.{\displaystyle |\langle \psi _{\rho }|\psi _{\sigma }\rangle |=|\operatorname {tr} ({\sqrt {\rho }}{\sqrt {\sigma }}U)|\leq \operatorname {tr} |{\sqrt {\rho }}{\sqrt {\sigma }}|.}Note that this inequality is thetriangle inequalityapplied to the singular values of the matrix. Indeed, for a generic matrixA≡∑jsj(A)|aj⟩⟨bj|{\displaystyle A\equiv \sum _{j}s_{j}(A)|a_{j}\rangle \!\langle b_{j}|}and unitaryU=∑j|bj⟩⟨wj|{\displaystyle U=\sum _{j}|b_{j}\rangle \!\langle w_{j}|}, we have|tr⁡(AU)|=|tr⁡(∑jsj(A)|aj⟩⟨bj|∑k|bk⟩⟨wk|)|=|∑jsj(A)⟨wj|aj⟩|≤∑jsj(A)|⟨wj|aj⟩|≤∑jsj(A)=tr⁡|A|,{\displaystyle {\begin{aligned}|\operatorname {tr} (AU)|&=\left|\operatorname {tr} \left(\sum _{j}s_{j}(A)|a_{j}\rangle \!\langle b_{j}|\,\,\sum _{k}|b_{k}\rangle \!\langle w_{k}|\right)\right|\\&=\left|\sum _{j}s_{j}(A)\langle w_{j}|a_{j}\rangle \right|\\&\leq \sum _{j}s_{j}(A)\,|\langle w_{j}|a_{j}\rangle |\\&\leq \sum _{j}s_{j}(A)\\&=\operatorname {tr} |A|,\end{aligned}}}wheresj(A)≥0{\displaystyle s_{j}(A)\geq 0}are the (always real and non-negative)singular valuesofA{\displaystyle A}, as in thesingular value decomposition. 
The inequality is saturated and becomes an equality when⟨wj|aj⟩=1{\displaystyle \langle w_{j}|a_{j}\rangle =1}, that is, whenU=∑k|bk⟩⟨ak|,{\displaystyle U=\sum _{k}|b_{k}\rangle \!\langle a_{k}|,}and thusAU=AA†≡|A|{\displaystyle AU={\sqrt {AA^{\dagger }}}\equiv |A|}. The above shows that|⟨ψρ|ψσ⟩|=tr⁡|ρσ|{\displaystyle |\langle \psi _{\rho }|\psi _{\sigma }\rangle |=\operatorname {tr} |{\sqrt {\rho }}{\sqrt {\sigma }}|}when the purifications|ψρ⟩{\displaystyle |\psi _{\rho }\rangle }and|ψσ⟩{\displaystyle |\psi _{\sigma }\rangle }are such thatρσU=|ρσ|{\displaystyle {\sqrt {\rho }}{\sqrt {\sigma }}U=|{\sqrt {\rho }}{\sqrt {\sigma }}|}. Because this choice is possible regardless of the states, we can finally conclude thattr⁡|ρσ|=max|⟨ψρ|ψσ⟩|.{\displaystyle \operatorname {tr} |{\sqrt {\rho }}{\sqrt {\sigma }}|=\max |\langle \psi _{\rho }|\psi _{\sigma }\rangle |.} Uhlmann's theorem has some immediate consequences, and we can see that fidelity behaves almost like a metric. This can be formalized and made useful by definingθρσ=arccos⁡F(ρ,σ){\displaystyle \theta _{\rho \sigma }=\arccos {\sqrt {F(\rho ,\sigma )}}}as the angle between the statesρ{\displaystyle \rho }andσ{\displaystyle \sigma }. It follows from the above properties thatθρσ{\displaystyle \theta _{\rho \sigma }}is non-negative, symmetric in its inputs, and is equal to zero if and only ifρ=σ{\displaystyle \rho =\sigma }. Furthermore, it can be proved that it obeys the triangle inequality,[2]so this angle is a metric on the state space: theFubini–Study metric.[13]
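Uhlmann's theorem lends itself to a direct numerical illustration (a Python sketch; the states, the vec-based purification convention, and the random ancilla unitary are all illustrative assumptions): every purification pair has overlap at most tr|√ρ√σ|, and the polar-decomposition choice attains it.

```python
import numpy as np

def psd_sqrtm(A):
    """Matrix square root of a Hermitian positive semidefinite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
sigma = np.array([[0.5, 0.1j], [-0.1j, 0.5]], dtype=complex)
sr, ss = psd_sqrtm(rho), psd_sqrtm(sigma)

sqrtF = np.sum(np.linalg.svd(sr @ ss, compute_uv=False))  # tr|sqrt(rho) sqrt(sigma)|

# Purification with square root S and ancilla unitary W:
# |psi_W> = sum_i (S|e_i>) (x) (W|e_i>) = vec(S W^T) in row-major order.
def purification(S, W):
    return (S @ W.T).reshape(-1)

psi_rho = purification(sr, np.eye(2))       # a fixed purification of rho

# A random ancilla unitary gives overlap <= sqrtF ...
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
W_rand, _ = np.linalg.qr(X)
ov_rand = abs(np.vdot(psi_rho, purification(ss, W_rand)))

# ... and the polar-decomposition choice attains equality:
U, s, Vh = np.linalg.svd(sr @ ss)           # sr ss = U diag(s) Vh
W_opt = (Vh.conj().T @ U.conj().T).T        # makes tr(sr ss W_opt^T) = sum(s)
ov_opt = abs(np.vdot(psi_rho, purification(ss, W_opt)))
```

Here `ov_rand <= sqrtF` while `ov_opt` equals `sqrtF` to machine precision, mirroring the maximization in the theorem.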
https://en.wikipedia.org/wiki/Quantum_fidelity
Bell's theoremis a term encompassing a number of closely related results inphysics, all of which determine thatquantum mechanicsis incompatible withlocal hidden-variable theories, given some basic assumptions about the nature of measurement. The first such result was introduced byJohn Stewart Bellin 1964, building upon theEinstein–Podolsky–Rosen paradox, which had called attention to the phenomenon ofquantum entanglement. In the context of Bell's theorem, "local" refers to theprinciple of locality, the idea that aparticlecan only be influenced by its immediate surroundings, and that interactions mediated byphysical fieldscannot propagate faster than thespeed of light. "Hidden variables" are supposed properties of quantum particles that are not included in quantum theory but nevertheless affect the outcome of experiments. In the words of Bell, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local."[1] In his original paper,[2]Bell deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. Such a constraint would later be named aBell inequality. Bell then showed that quantum physics predicts correlations that violate thisinequality. Multiple variations on Bell's theorem were put forward in the years following his original paper, using different assumptions and obtaining different Bell (or "Bell-type") inequalities. The first rudimentary experiment designed to test Bell's theorem was performed in 1972 byJohn ClauserandStuart Freedman.[3]More advanced experiments, known collectively asBell tests, have been performed many times since. 
Often, these experiments have had the goal of "closing loopholes", that is, ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities; which is to say that the results of these experiments are incompatible with local hidden-variable theories.[4][5] The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and byphilosophers. While the significance of Bell's theorem is not in doubt, differentinterpretations of quantum mechanicsdisagree about what exactly it implies. There are many variations on the basic idea, some employing stronger mathematical assumptions than others.[6]Significantly, Bell-type theorems do not refer to any particular theory of local hidden variables, but instead show that quantum physics violates general assumptions behind classical pictures of nature. The original theorem proved by Bell in 1964 is not the most amenable to experiment, and it is convenient to introduce the genre of Bell-type inequalities with a later example.[7] Hypothetical charactersAlice and Bobstand in widely separated locations. Their colleague Victor prepares a pair of particles and sends one to Alice and the other to Bob. When Alice receives her particle, she chooses to perform one of two possible measurements (perhaps by flipping a coin to decide which). Denote these measurements byA0{\displaystyle A_{0}}andA1{\displaystyle A_{1}}. BothA0{\displaystyle A_{0}}andA1{\displaystyle A_{1}}arebinarymeasurements: the result ofA0{\displaystyle A_{0}}is either+1{\displaystyle +1}or−1{\displaystyle -1}, and likewise forA1{\displaystyle A_{1}}. When Bob receives his particle, he chooses one of two measurements,B0{\displaystyle B_{0}}andB1{\displaystyle B_{1}}, which are also both binary. 
Suppose that each measurement reveals a property that the particle already possessed. For instance, if Alice chooses to measureA0{\displaystyle A_{0}}and obtains the result+1{\displaystyle +1}, then the particle she received carried a value of+1{\displaystyle +1}for a propertya0{\displaystyle a_{0}}.[note 1]Consider the combinationa0b0+a0b1+a1b0−a1b1=(a0+a1)b0+(a0−a1)b1.{\displaystyle a_{0}b_{0}+a_{0}b_{1}+a_{1}b_{0}-a_{1}b_{1}=(a_{0}+a_{1})b_{0}+(a_{0}-a_{1})b_{1}\,.}Because botha0{\displaystyle a_{0}}anda1{\displaystyle a_{1}}take the values±1{\displaystyle \pm 1}, then eithera0=a1{\displaystyle a_{0}=a_{1}}ora0=−a1{\displaystyle a_{0}=-a_{1}}. In the former case, the quantity(a0−a1)b1{\displaystyle (a_{0}-a_{1})b_{1}}must equal 0, while in the latter case,(a0+a1)b0=0{\displaystyle (a_{0}+a_{1})b_{0}=0}. So, one of the terms on the right-hand side of the above expression will vanish, and the other will equal±2{\displaystyle \pm 2}. Consequently, if the experiment is repeated over many trials, with Victor preparing new pairs of particles, the absolute value of the average of the combinationa0b0+a0b1+a1b0−a1b1{\displaystyle a_{0}b_{0}+a_{0}b_{1}+a_{1}b_{0}-a_{1}b_{1}}across all the trials will be less than or equal to 2. Nosingletrial can measure this quantity, because Alice and Bob can only choose one measurement each, but on the assumption that the underlying properties exist, the average value of the sum is just the sum of the averages for each term. 
Using angle brackets to denote averages,

$$|\langle A_0 B_0\rangle + \langle A_0 B_1\rangle + \langle A_1 B_0\rangle - \langle A_1 B_1\rangle| \leq 2\,.$$

This is a Bell inequality, specifically, the CHSH inequality.[7]: 115 Its derivation here depends upon two assumptions: first, that the underlying physical properties $a_0, a_1, b_0,$ and $b_1$ exist independently of being observed or measured (sometimes called the assumption of realism); and second, that Alice's choice of action cannot influence Bob's result or vice versa (often called the assumption of locality).[7]: 117

Quantum mechanics can violate the CHSH inequality, as follows. Victor prepares a pair of qubits which he describes by the Bell state

$$|\psi\rangle = \frac{|0\rangle\otimes|1\rangle - |1\rangle\otimes|0\rangle}{\sqrt{2}},$$

where $|0\rangle$ and $|1\rangle$ are the eigenstates of one of the Pauli matrices,

$$\sigma_z = \begin{pmatrix}1&0\\0&-1\end{pmatrix}.$$

Victor then passes the first qubit to Alice and the second to Bob. Alice and Bob's choices of possible measurements are also defined in terms of the Pauli matrices.
Alice measures either of the two observables $\sigma_z$ and $\sigma_x$:

$$A_0 = \sigma_z, \quad A_1 = \sigma_x = \begin{pmatrix}0&1\\1&0\end{pmatrix};$$

and Bob measures either of the two observables

$$B_0 = -\frac{\sigma_x + \sigma_z}{\sqrt{2}}, \quad B_1 = \frac{\sigma_x - \sigma_z}{\sqrt{2}}.$$

Victor can calculate the quantum expectation values for pairs of these observables using the Born rule:

$$\langle A_0\otimes B_0\rangle = \frac{1}{\sqrt{2}},\ \langle A_0\otimes B_1\rangle = \frac{1}{\sqrt{2}},\ \langle A_1\otimes B_0\rangle = \frac{1}{\sqrt{2}},\ \langle A_1\otimes B_1\rangle = -\frac{1}{\sqrt{2}}\,.$$

While only one of these four measurements can be made in a single trial of the experiment, the sum

$$\langle A_0\otimes B_0\rangle + \langle A_0\otimes B_1\rangle + \langle A_1\otimes B_0\rangle - \langle A_1\otimes B_1\rangle = 2\sqrt{2}$$

gives the sum of the average values that Victor expects to find across multiple trials. This value exceeds the classical upper bound of 2 that was deduced from the hypothesis of local hidden variables.[7]: 116 The value $2\sqrt{2}$ is in fact the largest that quantum physics permits for this combination of expectation values, making it a Tsirelson bound.[10]: 140

The CHSH inequality can also be thought of as a game in which Alice and Bob try to coordinate their actions.[11][12] Victor prepares two bits, $x$ and $y$, independently and at random. He sends bit $x$ to Alice and bit $y$ to Bob.
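The Born-rule expectation values quoted above can be reproduced numerically. A short NumPy sketch (the array and function names are our own) builds the Bell state and the four observables and evaluates $\langle\psi| A \otimes B |\psi\rangle$:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])   # Pauli sigma_x
sz = np.array([[1., 0.], [0., -1.]])  # Pauli sigma_z

# Bell state (|0>|1> - |1>|0>)/sqrt(2)
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

A0, A1 = sz, sx
B0 = -(sx + sz) / np.sqrt(2)
B1 = (sx - sz) / np.sqrt(2)

def expval(A, B):
    """Born-rule expectation <psi| A (x) B |psi>."""
    return psi @ np.kron(A, B) @ psi

S = expval(A0, B0) + expval(A0, B1) + expval(A1, B0) - expval(A1, B1)
print(S)  # 2*sqrt(2), approximately 2.828, exceeding the classical bound of 2
```

Each of the four expectation values comes out to $\pm 1/\sqrt{2}$ as stated, and their CHSH combination saturates the Tsirelson bound $2\sqrt{2}$.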
Alice and Bob win if they return answer bits $a$ and $b$ to Victor satisfying

$$xy = a + b \mod 2\,.$$

Or, equivalently, Alice and Bob win if the logical AND of $x$ and $y$ is the logical XOR of $a$ and $b$. Alice and Bob can agree upon any strategy they desire before the game, but they cannot communicate once the game begins. In any theory based on local hidden variables, Alice and Bob's probability of winning is no greater than $3/4$, regardless of what strategy they agree upon beforehand. However, if they share an entangled quantum state, their probability of winning can be as large as

$$\frac{2+\sqrt{2}}{4} \approx 0.85\,.$$

Bell's 1964 paper shows that a very simple local hidden-variable model can in restricted circumstances reproduce the predictions of quantum mechanics, but then he demonstrates that, in general, such models give different predictions.[2][13]: 806 Bell considers a refinement by David Bohm of the Einstein–Podolsky–Rosen (EPR) thought experiment. In this scenario, a pair of particles are formed together in such a way that they are described by a spin singlet state (which is an example of an entangled state). The particles then move apart in opposite directions. Each particle is measured by a Stern–Gerlach device, a measuring instrument that can be oriented in different directions and that reports one of two possible outcomes, representable by $+1$ and $-1$.
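The classical bound of $3/4$ for the CHSH game described above can be verified by enumerating all deterministic strategies; shared randomness cannot help, since it only averages over deterministic strategies. A small Python sketch (names ours):

```python
from itertools import product

# A deterministic strategy for Alice is a pair (answer when x=0, answer when x=1);
# likewise for Bob. There are only 4 strategies per player.
strategies = list(product([0, 1], repeat=2))

def win_prob(alice, bob):
    # Victor's bits x, y are uniform, so the winning probability is the
    # fraction of the four (x, y) pairs satisfying a XOR b == x AND y.
    return sum((alice[x] ^ bob[y]) == (x & y)
               for x, y in product([0, 1], repeat=2)) / 4

best = max(win_prob(a, b) for a in strategies for b in strategies)
print(best)  # 0.75, i.e. the classical maximum of 3/4
```

For instance, the strategy "both always answer 0" already wins three of the four input pairs; the enumeration confirms no deterministic strategy does better.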
The configuration of each measuring instrument is represented by a unit vector, and the quantum-mechanical prediction for the correlation between two detectors with settings $\vec{a}$ and $\vec{b}$ is

$$P(\vec{a},\vec{b}) = -\vec{a}\cdot\vec{b}.$$

In particular, if the orientation of the two detectors is the same ($\vec{a} = \vec{b}$), then the outcome of one measurement is certain to be the negative of the outcome of the other, giving $P(\vec{a},\vec{a}) = -1$. And if the orientations of the two detectors are orthogonal ($\vec{a}\cdot\vec{b} = 0$), then the outcomes are uncorrelated, and $P(\vec{a},\vec{b}) = 0$. Bell proves by example that these special cases can be explained in terms of hidden variables, then proceeds to show that the full range of possibilities involving intermediate angles cannot.

Bell posited that a local hidden-variable model for these correlations would explain them in terms of an integral over the possible values of some hidden parameter $\lambda$:

$$P(\vec{a},\vec{b}) = \int d\lambda\, \rho(\lambda) A(\vec{a},\lambda) B(\vec{b},\lambda),$$

where $\rho(\lambda)$ is a probability density function. The two functions $A(\vec{a},\lambda)$ and $B(\vec{b},\lambda)$ provide the responses of the two detectors given the orientation vectors and the hidden variable:

$$A(\vec{a},\lambda) = \pm 1, \quad B(\vec{b},\lambda) = \pm 1.$$

Crucially, the outcome of detector $A$ does not depend upon $\vec{b}$, and likewise the outcome of $B$ does not depend upon $\vec{a}$, because the two detectors are physically separated.
Now we suppose that the experimenter has a choice of settings for the second detector: it can be set either to $\vec{b}$ or to $\vec{c}$. Bell proves that the difference in correlation between these two choices of detector setting must satisfy the inequality

$$|P(\vec{a},\vec{b}) - P(\vec{a},\vec{c})| \leq 1 + P(\vec{b},\vec{c}).$$

However, it is easy to find situations where quantum mechanics violates the Bell inequality.[14]: 425–426 For example, let the vectors $\vec{a}$ and $\vec{b}$ be orthogonal, and let $\vec{c}$ lie in their plane at a 45° angle from both of them. Then $P(\vec{a},\vec{b}) = 0$, while

$$P(\vec{a},\vec{c}) = P(\vec{b},\vec{c}) = -\frac{\sqrt{2}}{2},$$

but

$$\frac{\sqrt{2}}{2} \nleq 1 - \frac{\sqrt{2}}{2}.$$

Therefore, there is no local hidden-variable model that can reproduce the predictions of quantum mechanics for all choices of $\vec{a}$, $\vec{b}$, and $\vec{c}$. Experimental results contradict the classical curves and match the curve predicted by quantum mechanics as long as experimental shortcomings are accounted for.[6]

Bell's 1964 theorem requires the possibility of perfect anti-correlations: the ability to make a completely certain prediction about the result from the second detector, knowing the result from the first.[6] The theorem builds upon the "EPR criterion of reality", a concept introduced in the 1935 paper by Einstein, Podolsky, and Rosen.
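The numbers in the 45° example above follow directly from $P(\vec{a},\vec{b}) = -\vec{a}\cdot\vec{b}$ and can be checked in a few lines (vector names ours):

```python
import numpy as np

def P(u, v):
    """Quantum singlet correlation for detector settings u, v (unit vectors)."""
    return -np.dot(u, v)

a = np.array([1.0, 0.0])                # reference direction
b = np.array([0.0, 1.0])                # orthogonal to a
c = np.array([1.0, 1.0]) / np.sqrt(2)   # 45 degrees from both a and b

lhs = abs(P(a, b) - P(a, c))  # sqrt(2)/2, approximately 0.707
rhs = 1 + P(b, c)             # 1 - sqrt(2)/2, approximately 0.293
print(lhs > rhs)  # True: Bell's 1964 inequality is violated at these settings
```

The left-hand side exceeds the right-hand side by roughly 0.414, so no local hidden-variable assignment of the form $A(\vec{a},\lambda)B(\vec{b},\lambda)$ can reproduce these three correlations simultaneously.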
This paper posits: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity."[15] Bell noted that this applies when the two detectors are oriented in the same direction ($\vec{a} = \vec{b}$), and so the EPR criterion would imply that some element of reality must predetermine the measurement result. Because the quantum description of a particle does not include any such element, the quantum description would have to be incomplete. In other words, Bell's 1964 paper shows that, assuming locality, the EPR criterion implies hidden variables, and then he demonstrates that local hidden variables are incompatible with quantum mechanics.[16][17] Because experiments cannot achieve perfect correlations or anti-correlations in practice, Bell-type inequalities based on derivations that relax this assumption are tested instead.[6]

Daniel Greenberger, Michael A. Horne, and Anton Zeilinger presented a four-particle thought experiment in 1990, which David Mermin then simplified to use only three particles.[18][19] In this thought experiment, Victor generates a set of three spin-1/2 particles described by the quantum state

$$|\psi\rangle = \frac{1}{\sqrt{2}}(|000\rangle - |111\rangle)\,,$$

where, as above, $|0\rangle$ and $|1\rangle$ are the eigenvectors of the Pauli matrix $\sigma_z$. Victor then sends a particle each to Alice, Bob, and Charlie, who wait at widely separated locations. Alice measures either $\sigma_x$ or $\sigma_y$ on her particle, and so do Bob and Charlie. The result of each measurement is either $+1$ or $-1$.
Applying the Born rule to the three-qubit state $|\psi\rangle$, Victor predicts that whenever the three measurements include one $\sigma_x$ and two $\sigma_y$'s, the product of the outcomes will always be $+1$. This follows because $|\psi\rangle$ is an eigenvector of $\sigma_x\otimes\sigma_y\otimes\sigma_y$ with eigenvalue $+1$, and likewise for $\sigma_y\otimes\sigma_x\otimes\sigma_y$ and $\sigma_y\otimes\sigma_y\otimes\sigma_x$. Therefore, knowing Alice's result for a $\sigma_x$ measurement and Bob's result for a $\sigma_y$ measurement, Victor can predict with probability 1 what result Charlie will return for a $\sigma_y$ measurement. According to the EPR criterion of reality, there would be an "element of reality" corresponding to the outcome of a $\sigma_y$ measurement upon Charlie's qubit. Indeed, this same logic applies to both measurements and all three qubits. Per the EPR criterion of reality, then, each particle contains an "instruction set" that determines the outcome of a $\sigma_x$ or $\sigma_y$ measurement upon it. The set of all three particles would then be described by the instruction set

$$(a_x, a_y, b_x, b_y, c_x, c_y)\,,$$

with each entry being either $-1$ or $+1$, and each $\sigma_x$ or $\sigma_y$ measurement simply returning the appropriate value. If Alice, Bob, and Charlie all perform the $\sigma_x$ measurement, then the product of their results would be $a_x b_x c_x$.
This value can be deduced from

$$(a_x b_y c_y)(a_y b_x c_y)(a_y b_y c_x) = a_x b_x c_x\, a_y^2 b_y^2 c_y^2 = a_x b_x c_x\,,$$

because the square of either $-1$ or $+1$ is $1$. Each factor in parentheses equals $+1$, so $a_x b_x c_x = +1$, and the product of Alice, Bob, and Charlie's results will be $+1$ with probability unity. But this is inconsistent with quantum physics: Victor can predict using the state $|\psi\rangle$ that the measurement $\sigma_x\otimes\sigma_x\otimes\sigma_x$ will instead yield $-1$ with probability unity.

This thought experiment can also be recast as a traditional Bell inequality or, equivalently, as a nonlocal game in the same spirit as the CHSH game.[20] In it, Alice, Bob, and Charlie receive bits $x, y, z$ from Victor, promised to always have an even number of ones, that is, $x\oplus y\oplus z = 0$, and send him back bits $a, b, c$. They win the game if $a, b, c$ have an odd number of ones for all inputs except $x = y = z = 0$, when they need to have an even number of ones. That is, they win the game if and only if $a\oplus b\oplus c = x\lor y\lor z$. With local hidden variables the highest probability of victory they can have is 3/4, whereas using the quantum strategy above they win it with certainty. This is an example of quantum pseudo-telepathy.

In quantum theory, orthonormal bases for a Hilbert space represent measurements that can be performed upon a system having that Hilbert space.
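Both sides of the GHZ contradiction above can be checked numerically: every instruction set consistent with the three "one $\sigma_x$, two $\sigma_y$" constraints forces $a_x b_x c_x = +1$, while the Born rule applied to $|\psi\rangle$ gives $-1$. A Python sketch (state and helper names ours):

```python
import numpy as np
from itertools import product

# Classical side: enumerate all instruction sets obeying the three constraints;
# the product a_x * b_x * c_x is forced to +1 in every case.
classical = {
    ax * bx * cx
    for ax, ay, bx, by, cx, cy in product([-1, 1], repeat=6)
    if ax * by * cy == ay * bx * cy == ay * by * cx == 1
}
print(classical)  # {1}

# Quantum side: the GHZ-type state (|000> - |111>)/sqrt(2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
psi = np.zeros(8, dtype=complex)
psi[0], psi[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)

def expval(p1, p2, p3):
    """Born-rule expectation of a three-fold tensor product in the state psi."""
    return (psi.conj() @ np.kron(np.kron(p1, p2), p3) @ psi).real

print(expval(sx, sy, sy), expval(sy, sx, sy), expval(sy, sy, sx))  # 1.0 1.0 1.0
print(expval(sx, sx, sx))  # -1.0: the all-sigma_x product is certainly -1, not +1
```

The enumeration and the expectation values reproduce the contradiction exactly: the instruction-set picture demands $+1$ for the all-$\sigma_x$ product, while the quantum state guarantees $-1$.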
Each vector in a basis represents a possible outcome of that measurement.[note 2] Suppose that a hidden variable $\lambda$ exists, so that knowing the value of $\lambda$ would imply certainty about the outcome of any measurement. Given a value of $\lambda$, each measurement outcome – that is, each vector in the Hilbert space – is either impossible or guaranteed. A Kochen–Specker configuration is a finite set of vectors made of multiple interlocking bases, with the property that a vector in it will always be impossible when considered as belonging to one basis and guaranteed when taken as belonging to another. In other words, a Kochen–Specker configuration is an "uncolorable set" that demonstrates the inconsistency of assuming that a hidden variable $\lambda$ controls the measurement outcomes.[25]: 196–201

The Kochen–Specker type of argument, using configurations of interlocking bases, can be combined with the idea of measuring entangled pairs that underlies Bell-type inequalities. This was noted beginning in the 1970s by Kochen,[26] Heywood and Redhead,[27] Stairs,[28] and Brown and Svetlichny.[29] As EPR pointed out, obtaining a measurement outcome on one half of an entangled pair implies certainty about the outcome of a corresponding measurement on the other half. The "EPR criterion of reality" posits that because the second half of the pair was not disturbed, that certainty must be due to a physical property belonging to it.[30] In other words, by this criterion, a hidden variable $\lambda$ must exist within the second, as-yet unmeasured half of the pair. No contradiction arises if only one measurement on the first half is considered. However, if the observer has a choice of multiple possible measurements, and the vectors defining those measurements form a Kochen–Specker configuration, then some outcome on the second half will be simultaneously impossible and guaranteed.
This type of argument gained attention when an instance of it was advanced by John Conway and Simon Kochen under the name of the free will theorem.[31][32][33] The Conway–Kochen theorem uses a pair of entangled qutrits and a Kochen–Specker configuration discovered by Asher Peres.[34]

As Bell pointed out, some predictions of quantum mechanics can be replicated in local hidden-variable models, including special cases of correlations produced from entanglement. This topic has been studied systematically in the years since Bell's theorem. In 1989, Reinhard Werner introduced what are now called Werner states, joint quantum states for a pair of systems that yield EPR-type correlations but also admit a hidden-variable model.[35] Werner states are bipartite quantum states that are invariant under unitaries of symmetric tensor-product form:

$$\rho_{AB} = (U\otimes U)\rho_{AB}(U^\dagger\otimes U^\dagger).$$

In 2004, Robert Spekkens introduced a toy model that starts with the premise of local, discretized degrees of freedom and then imposes a "knowledge balance principle" that restricts how much an observer can know about those degrees of freedom, thereby making them into hidden variables. The allowed states of knowledge ("epistemic states") about the underlying variables ("ontic states") mimic some features of quantum states. Correlations in the toy model can emulate some aspects of entanglement, like monogamy, but by construction, the toy model can never violate a Bell inequality.[36][37]

The question of whether quantum mechanics can be "completed" by hidden variables dates to the early years of quantum theory. In his 1932 textbook on quantum mechanics, the Hungarian-born polymath John von Neumann presented what he claimed to be a proof that there could be no "hidden parameters".
The validity and definitiveness of von Neumann's proof were questioned by Hans Reichenbach, in more detail by Grete Hermann, and possibly in conversation though not in print by Albert Einstein.[note 3] (Simon Kochen and Ernst Specker rejected von Neumann's key assumption as early as 1961, but did not publish a criticism of it until 1967.[43])

Einstein argued persistently that quantum mechanics could not be a complete theory. His preferred argument relied on a principle of locality:

The EPR thought experiment is similar, also considering two separated systems A and B described by a joint wave function. However, the EPR paper adds the idea later known as the EPR criterion of reality, according to which the ability to predict with probability 1 the outcome of a measurement upon B implies the existence of an "element of reality" within B.[45]

In 1951, David Bohm proposed a variant of the EPR thought experiment in which the measurements have discrete ranges of possible outcomes, unlike the position and momentum measurements considered by EPR.[46] The year before, Chien-Shiung Wu and Irving Shaknov had successfully measured polarizations of photons produced in entangled pairs, thereby making the Bohm version of the EPR thought experiment practically feasible.[47]

By the late 1940s, the mathematician George Mackey had grown interested in the foundations of quantum physics, and in 1957 he drew up a list of postulates that he took to be a precise definition of quantum mechanics.[48] Mackey conjectured that one of the postulates was redundant, and shortly thereafter, Andrew M. Gleason proved that it was indeed deducible from the other postulates.[49][50] Gleason's theorem provided an argument that a broad class of hidden-variable theories are incompatible with quantum mechanics.[note 4] More specifically, Gleason's theorem rules out hidden-variable models that are "noncontextual".
Any hidden-variable model for quantum mechanics must, in order to avoid the implications of Gleason's theorem, involve hidden variables that are not properties belonging to the measured system alone but also dependent upon the external context in which the measurement is made. This type of dependence is often seen as contrived or undesirable; in some settings, it is inconsistent with special relativity.[13][52] The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays on which no such probability measure can be defined.[13][53]

Tsung-Dao Lee came close to deriving Bell's theorem in 1960. He considered events where two kaons were produced traveling in opposite directions, and came to the conclusion that hidden variables could not explain the correlations that could be obtained in such situations. However, complications arose due to the fact that kaons decay, and he did not go so far as to deduce a Bell-type inequality.[39]: 308

Bell chose to publish his theorem in a comparatively obscure journal because it did not require page charges, in fact paying the authors who published there at the time. Because the journal did not provide free reprints of articles for the authors to distribute, however, Bell had to spend the money he received to buy copies that he could send to other physicists.[54] While the articles printed in the journal themselves listed the publication's name simply as Physics, the covers carried the trilingual version Physics Physique Физика to reflect that it would print articles in English, French and Russian.[42]: 92–100, 289

Prior to proving his 1964 result, Bell also proved a result equivalent to the Kochen–Specker theorem (hence the latter is sometimes also known as the Bell–Kochen–Specker or Bell–KS theorem).
However, publication of this theorem was inadvertently delayed until 1966.[13][55] In that paper, Bell argued that because an explanation of quantum phenomena in terms of hidden variables would require nonlocality, the EPR paradox "is resolved in the way which Einstein would have liked least."[55]

In 1967, the unusual title Physics Physique Физика caught the attention of John Clauser, who then discovered Bell's paper and began to consider how to perform a Bell test in the laboratory.[56] Clauser and Stuart Freedman would go on to perform a Bell test in 1972.[57][58] This was only a limited test, because the choice of detector settings was made before the photons had left the source. In 1982, Alain Aspect and collaborators performed the first Bell test to remove this limitation.[59] This began a trend of progressively more stringent Bell tests. The GHZ thought experiment was implemented in practice, using entangled triplets of photons, in 2000.[60] By 2002, testing the CHSH inequality was feasible in undergraduate laboratory courses.[61]

In Bell tests, there may be problems of experimental design or set-up that affect the validity of the experimental findings. These problems are often referred to as "loopholes". The purpose of the experiment is to test whether nature can be described by local hidden-variable theory, which would contradict the predictions of quantum mechanics. The most prevalent loopholes in real experiments are the detection and locality loopholes.[62] The detection loophole is opened when a small fraction of the particles (usually photons) are detected in the experiment, making it possible to explain the data with local hidden variables by assuming that the detected particles are an unrepresentative sample. The locality loophole is opened when the detections are not done with a spacelike separation, making it possible for the result of one measurement to influence the other without contradicting relativity.
In some experiments there may be additional defects that make local-hidden-variable explanations of Bell test violations possible.[63]

Although both the locality and detection loopholes had been closed in different experiments, a long-standing challenge was to close both simultaneously in the same experiment. This was finally achieved in three experiments in 2015.[64][65][66][67][68] Regarding these results, Alain Aspect writes that "no experiment ... can be said to be totally loophole-free," but he says the experiments "remove the last doubts that we should renounce" local hidden variables, and refers to examples of remaining loopholes as being "far fetched" and "foreign to the usual way of reasoning in physics."[69]

These efforts to experimentally validate violations of the Bell inequalities would later result in Clauser, Aspect, and Anton Zeilinger being awarded the 2022 Nobel Prize in Physics.[70]

Reactions to Bell's theorem have been many and varied. Maximilian Schlosshauer, Johannes Kofler, and Zeilinger write that Bell inequalities provide "a wonderful example of how we can have a rigorous theoretical result tested by numerous experiments, and yet disagree about the implications."[71]

Copenhagen-type interpretations generally take the violation of Bell inequalities as grounds to reject the assumption often called counterfactual definiteness or "realism", which is not necessarily the same as abandoning realism in a broader philosophical sense.[72][73] For example, Roland Omnès argues for the rejection of hidden variables and concludes that "quantum mechanics is probably as realistic as any theory of its scope and maturity ever will be".[74]: 531 Likewise, Rudolf Peierls took the message of Bell's theorem to be that, because the premise of locality is physically reasonable, "hidden variables cannot be introduced without abandoning some of the results of quantum mechanics".[75][76] This is also the route taken by interpretations that descend from the Copenhagen tradition,
such as consistent histories (often advertised as "Copenhagen done right"),[77]: 2839 as well as QBism.[78]

The many-worlds interpretation, also known as the Everett interpretation, is dynamically local, meaning that it does not call for action at a distance,[79]: 17 and deterministic, because it consists of the unitary part of quantum mechanics without collapse. It can generate correlations that violate a Bell inequality because it violates an implicit assumption by Bell that measurements have a single outcome. In fact, Bell's theorem can be proven in the many-worlds framework from the assumption that a measurement has a single outcome. Therefore, a violation of a Bell inequality can be interpreted as a demonstration that measurements have multiple outcomes.[80]

The explanation it provides for the Bell correlations is that when Alice and Bob make their measurements, they split into local branches. From the point of view of each copy of Alice, there are multiple copies of Bob experiencing different results, so Bob cannot have a definite result, and the same is true from the point of view of each copy of Bob. They will obtain a mutually well-defined result only when their future light cones overlap. At this point we can say that the Bell correlation starts existing, but it was produced by a purely local mechanism. Therefore, the violation of a Bell inequality cannot be interpreted as a proof of non-locality.[79]: 28

Most advocates of the hidden-variables idea believe that experiments have ruled out local hidden variables.[note 5] They are ready to give up locality, explaining the violation of Bell's inequality by means of a non-local hidden-variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others.
One challenge for non-local hidden-variable theories is to explain why this instantaneous communication can exist at the level of the hidden variables, but cannot be used to send signals.[83] A 2007 experiment ruled out a large class of non-Bohmian non-local hidden-variable theories, though not Bohmian mechanics itself.[84]

The transactional interpretation, which postulates waves traveling both backwards and forwards in time, is likewise non-local.[85]

A necessary assumption to derive Bell's theorem is that the hidden variables are not correlated with the measurement settings. This assumption has been justified on the grounds that the experimenter has "free will" to choose the settings, and that such freedom is necessary to do science in the first place. A (hypothetical) theory in which the choice of measurement is necessarily correlated with the system being measured is known as superdeterministic.[62]

A few advocates of deterministic models have not given up on local hidden variables. For example, Gerard 't Hooft has argued that superdeterminism cannot be dismissed.[86]
https://en.wikipedia.org/wiki/Bell%27s_theorem
An interpretation of quantum mechanics is an attempt to explain how the mathematical theory of quantum mechanics might correspond to experienced reality. Quantum mechanics has held up to rigorous and extremely precise tests in an extraordinarily broad range of experiments. However, there exist a number of contending schools of thought over its interpretation. These views on interpretation differ on such fundamental questions as whether quantum mechanics is deterministic or stochastic, local or non-local, which elements of quantum mechanics can be considered real, and what the nature of measurement is, among other matters. While some variation of the Copenhagen interpretation is commonly presented in textbooks, many other interpretations have been developed. Despite nearly a century of debate and experiment, no consensus has been reached among physicists and philosophers of physics concerning which interpretation best "represents" reality.[1][2]

The definition of quantum theorists' terms, such as wave function and matrix mechanics, progressed through many stages. For instance, Erwin Schrödinger originally viewed the electron's wave function as its charge density smeared across space, but Max Born reinterpreted the absolute square value of the wave function as the electron's probability density distributed across space;[3]: 24–33 the Born rule, as it is now called, matched experiment, whereas Schrödinger's charge density view did not. The views of several early pioneers of quantum mechanics, such as Niels Bohr and Werner Heisenberg, are often grouped together as the "Copenhagen interpretation", though physicists and historians of physics have argued that this terminology obscures differences between the views so designated.[3][4] Copenhagen-type ideas were never universally embraced, and challenges to a perceived Copenhagen orthodoxy gained increasing attention in the 1950s with the pilot-wave interpretation of David Bohm and the many-worlds interpretation of Hugh Everett III.[3][5][6]

The physicist N.
David Mermin once quipped, "New interpretations appear every year. None ever disappear."[7] (Mermin also coined the saying "Shut up and calculate" to describe many physicists' attitude to quantum theory, a remark which is often misattributed to Richard Feynman.[8])

As a rough guide to the development of the mainstream view during the 1990s and 2000s, a "snapshot" of opinions was collected in a poll by Schlosshauer et al. at the "Quantum Physics and the Nature of Reality" conference of July 2011.[9] The authors reference a similarly informal poll carried out by Max Tegmark at the "Fundamental Problems in Quantum Theory" conference in August 1997. The main conclusion of the authors is that "the Copenhagen interpretation still reigns supreme", receiving the most votes in their poll (42%), besides the rise to mainstream notability of the many-worlds interpretations: "The Copenhagen interpretation still reigns supreme here, especially if we lump it together with intellectual offsprings such as information-based interpretations and the quantum Bayesian interpretation. In Tegmark's poll, the Everett interpretation received 17% of the vote, which is similar to the number of votes (18%) in our poll."

Some concepts originating from studies of interpretations have found more practical application in quantum information science.[10][11]

The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics principally attributed to Niels Bohr and Werner Heisenberg.
It is one of the oldest attitudes towards quantum mechanics, as features of it date to the development of quantum mechanics during 1925–1927, and it remains one of the most commonly taught.[15][16] There is no definitive historical statement of what is the Copenhagen interpretation, and there were in particular fundamental disagreements between the views of Bohr and Heisenberg.[17][18] For example, Heisenberg emphasized a sharp "cut" between the observer (or the instrument) and the system being observed,[19]: 133 while Bohr offered an interpretation that is independent of a subjective observer or measurement or collapse, which relies on an "irreversible" or effectively irreversible process that imparts the classical behavior of "observation" or "measurement".[20][21][22][23]

Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that certain pairs of complementary properties cannot all be observed or measured simultaneously. Moreover, properties only result from the act of "observing" or "measuring"; the theory avoids assuming definite values from unperformed experiments. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' mental arbitrariness.[24]: 85–90 The statistical interpretation of wavefunctions due to Max Born differs sharply from Schrödinger's original intent, which was to have a theory with continuous time evolution and in which wavefunctions directly described physical reality.[3]: 24–33[25]

The many-worlds interpretation is an interpretation of quantum mechanics in which a universal wavefunction obeys the same deterministic, reversible laws at all times; in particular there is no (indeterministic and irreversible) wavefunction collapse associated with measurement.
The phenomena associated with measurement are claimed to be explained by decoherence, which occurs when states interact with the environment. More precisely, the parts of the wavefunction describing observers become increasingly entangled with the parts of the wavefunction describing their experiments. Although all possible outcomes of experiments continue to lie in the wavefunction's support, the times at which they become correlated with observers effectively "split" the universe into mutually unobservable alternate histories.

Quantum informational approaches[26][27] have attracted growing support.[28][9] They subdivide into two kinds.[29]

The state is not an objective property of an individual system but is that information, obtained from a knowledge of how a system was prepared, which can be used for making predictions about future measurements. ... A quantum mechanical state being a summary of the observer's information about an individual physical system changes both by dynamical laws, and whenever the observer acquires new information about the system through the process of measurement. The existence of two laws for the evolution of the state vector ... becomes problematical only if it is believed that the state vector is an objective property of the system ... The "reduction of the wavepacket" does take place in the consciousness of the observer, not because of any unique physical process which takes place there, but only because the state is a construct of the observer and not an objective property of the physical system.[31]

The essential idea behind relational quantum mechanics, following the precedent of special relativity, is that different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, "collapsed" eigenstate, while to another observer at the same time, it may be in a superposition of two or more states.
Consequently, if quantum mechanics is to be a complete theory, relational quantum mechanics argues that the notion of "state" describes not the observed system itself, but the relationship, or correlation, between the system and its observer(s). The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system. However, it is held by relational quantum mechanics that this applies to all physical objects, whether or not they are conscious or macroscopic. Any "measurement event" is seen simply as an ordinary physical interaction, an establishment of the sort of correlation discussed above. Thus the physical content of the theory has to do not with objects themselves, but the relations between them.[32][33]

QBism, which originally stood for "quantum Bayesianism", is an interpretation of quantum mechanics that takes an agent's actions and experiences as the central concerns of the theory. This interpretation is distinguished by its use of a subjective Bayesian account of probabilities to understand the quantum mechanical Born rule as a normative addition to good decision-making. QBism draws from the fields of quantum information and Bayesian probability and aims to eliminate the interpretational conundrums that have beset quantum theory. QBism deals with common questions in the interpretation of quantum theory about the nature of wavefunction superposition, quantum measurement, and entanglement.[34][35] According to QBism, many, but not all, aspects of the quantum formalism are subjective in nature. For example, in this interpretation, a quantum state is not an element of reality; instead it represents the degrees of belief an agent has about the possible outcomes of measurements.
For this reason, some philosophers of science have deemed QBism a form of anti-realism.[36][37] The originators of the interpretation disagree with this characterization, proposing instead that the theory more properly aligns with a kind of realism they call "participatory realism", wherein reality consists of more than can be captured by any putative third-person account of it.[38][39]

The consistent histories interpretation generalizes the conventional Copenhagen interpretation and attempts to provide a natural interpretation of quantum cosmology. The theory is based on a consistency criterion that allows the history of a system to be described so that the probabilities for each history obey the additive rules of classical probability. It is claimed to be consistent with the Schrödinger equation. According to this interpretation, the purpose of a quantum-mechanical theory is to predict the relative probabilities of various alternative histories (for example, of a particle).

The ensemble interpretation, also called the statistical interpretation, can be viewed as a minimalist interpretation. That is, it claims to make the fewest assumptions associated with the standard mathematics. It takes the statistical interpretation of Born to the fullest extent. The interpretation states that the wave function does not apply to an individual system – for example, a single particle – but is an abstract statistical quantity that only applies to an ensemble (a vast multitude) of similarly prepared systems or particles. In the words of Einstein:

The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems.

The most prominent current advocate of the ensemble interpretation is Leslie E.
Ballentine, professor at Simon Fraser University, author of the textbook Quantum Mechanics: A Modern Development.

The de Broglie–Bohm theory of quantum mechanics (also known as the pilot wave theory) is a theory by Louis de Broglie, extended later by David Bohm to include measurements. Particles, which always have positions, are guided by the wavefunction. The wavefunction evolves according to the Schrödinger wave equation, and the wavefunction never collapses. The theory takes place in a single spacetime, is non-local, and is deterministic. The simultaneous determination of a particle's position and velocity is subject to the usual uncertainty principle constraint. The theory is considered to be a hidden-variable theory, and by embracing non-locality it satisfies Bell's inequality. The measurement problem is resolved, since the particles have definite positions at all times.[40] Collapse is explained as phenomenological.[41]

The transactional interpretation of quantum mechanics (TIQM) by John G. Cramer is an interpretation of quantum mechanics inspired by the Wheeler–Feynman absorber theory.[42] It describes the collapse of the wave function as resulting from a time-symmetric transaction between a possibility wave from the source to the receiver (the wave function) and a possibility wave from the receiver to the source (the complex conjugate of the wave function). This interpretation of quantum mechanics is unique in that it views not only the wave function as a real entity, but also the complex conjugate of the wave function, which appears in the Born rule for calculating the expected value for an observable, as real.
Eugene Wigner argued that human experimenter consciousness (or maybe even animal consciousness) was critical for the collapse of the wavefunction, but he later abandoned this interpretation after learning about quantum decoherence.[43][44] Some specific proposals for consciousness-caused wave-function collapse have been shown to be unfalsifiable, and, more broadly, reasonable assumptions about consciousness lead to the same conclusion.[45]

Quantum logic can be regarded as a kind of propositional logic suitable for understanding the apparent anomalies regarding quantum measurement, most notably those concerning composition of measurement operations of complementary variables. This research area and its name originated in the 1936 paper by Garrett Birkhoff and John von Neumann, who attempted to reconcile some of the apparent inconsistencies of classical Boolean logic with the facts related to measurement and observation in quantum mechanics.

Modal interpretations of quantum mechanics were first conceived of in 1972 by Bas van Fraassen, in his paper "A formal approach to the philosophy of science". Van Fraassen introduced a distinction between a dynamical state, which describes what might be true about a system and which always evolves according to the Schrödinger equation, and a value state, which indicates what is actually true about a system at a given time. The term "modal interpretation" is now used to describe a larger set of models that grew out of this approach. The Stanford Encyclopedia of Philosophy describes several versions, including proposals by Kochen, Dieks, Clifton, Dickson, and Bub.[46] According to Michel Bitbol, Schrödinger's views on how to interpret quantum mechanics progressed through as many as four stages, ending with a non-collapse view that in some respects resembles the interpretations of Everett and van Fraassen.
Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wavefunction as ontic and treating it as epistemic became interchangeable.[47]

Time-symmetric interpretations of quantum mechanics were first suggested by Walter Schottky in 1921.[48][49] Several theories have been proposed that modify the equations of quantum mechanics to be symmetric with respect to time reversal.[50][51][52][53][54][55] (See Wheeler–Feynman time-symmetric theory.) This creates retrocausality: events in the future can affect ones in the past, exactly as events in the past can affect ones in the future. In these theories, a single measurement cannot fully determine the state of a system (making them a type of hidden-variables theory), but given two measurements performed at different times, it is possible to calculate the exact state of the system at all intermediate times. The collapse of the wavefunction is therefore not a physical change to the system, just a change in our knowledge of it due to the second measurement. Similarly, they explain entanglement as not being a true physical state but just an illusion created by ignoring retrocausality. The point where two particles appear to "become entangled" is simply a point where each particle is being influenced by events that occur to the other particle in the future.

Not all advocates of time-symmetric causality favour modifying the unitary dynamics of standard quantum mechanics. Thus a leading exponent of the two-state vector formalism, Lev Vaidman, states that the two-state vector formalism dovetails well with Hugh Everett's many-worlds interpretation.[56]

As well as the mainstream interpretations discussed above, a number of other interpretations have been proposed that have not made a significant scientific impact for whatever reason.
These range from proposals by mainstream physicists to the more occult ideas of quantum mysticism. Some ideas are discussed in the context of interpreting quantum mechanics but are not necessarily regarded as interpretations themselves.

Quantum Darwinism is a theory meant to explain the emergence of the classical world from the quantum world as due to a process of Darwinian natural selection induced by the environment interacting with the quantum system, whereby the many possible quantum states are selected against in favor of a stable pointer state. It was proposed in 2003 by Wojciech Zurek and a group of collaborators including Ollivier, Poulin, Paz and Blume-Kohout. The development of the theory is due to the integration of a number of Zurek's research topics pursued over the course of twenty-five years, including pointer states, einselection and decoherence.

Objective-collapse theories differ from the Copenhagen interpretation by regarding both the wave function and the process of collapse as ontologically objective (meaning these exist and occur independent of the observer). In objective theories, collapse occurs either randomly ("spontaneous localization") or when some physical threshold is reached, with observers having no special role. Thus, objective-collapse theories are realistic, indeterministic, no-hidden-variables theories. Standard quantum mechanics does not specify any mechanism of collapse; quantum mechanics would need to be extended if objective collapse is correct. The requirement for an extension means that objective-collapse theories are alternatives to quantum mechanics rather than interpretations of it. Examples include

The most common interpretations are summarized in the table below. The values shown in the cells of the table are not without controversy, for the precise meanings of some of the concepts involved are unclear and, in fact, are themselves at the center of the controversy surrounding the given interpretation.
For another table comparing interpretations of quantum theory, see reference.[58]

No experimental evidence exists that distinguishes among these interpretations. To that extent, the physical theory stands, and is consistent with itself and with reality. Nevertheless, designing experiments that would test the various interpretations is the subject of active research. Most of these interpretations have variants. For example, it is difficult to get a precise definition of the Copenhagen interpretation as it was developed and argued by many people.

Although interpretational opinions are openly and widely discussed today, that was not always the case. A notable exponent of a tendency of silence was Paul Dirac, who once wrote: "The interpretation of quantum mechanics has been dealt with by many authors, and I do not want to discuss it here. I want to deal with more fundamental things."[67] This position is not uncommon among practitioners of quantum mechanics.[68] Similarly, Richard Feynman wrote many popularizations of quantum mechanics without ever publishing about interpretation issues like quantum measurement.[69] Others, like Nico van Kampen and Willis Lamb, have openly criticized non-orthodox interpretations of quantum mechanics.[70][71]

Almost all authors below are professional physicists.
https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics
In physics, quantum tunnelling, barrier penetration, or simply tunnelling is a quantum mechanical phenomenon in which an object such as an electron or atom passes through a potential energy barrier that, according to classical mechanics, should not be passable due to the object not having sufficient energy to pass or surmount the barrier.

Tunneling is a consequence of the wave nature of matter, where the quantum wave function describes the state of a particle or other physical system, and wave equations such as the Schrödinger equation describe their behavior. The probability of transmission of a wave packet through a barrier decreases exponentially with the barrier height, the barrier width, and the tunneling particle's mass, so tunneling is seen most prominently in low-mass particles such as electrons or protons tunneling through microscopically narrow barriers. Tunneling is readily detectable with barriers of thickness about 1–3 nm or smaller for electrons, and about 0.1 nm or smaller for heavier particles such as protons or hydrogen atoms.[1] Some sources describe the mere penetration of a wave function into the barrier, without transmission on the other side, as a tunneling effect, such as in tunneling into the walls of a finite potential well.[2][3]

Tunneling plays an essential role in physical phenomena such as nuclear fusion[4] and alpha radioactive decay of atomic nuclei. Tunneling applications include the tunnel diode,[5] quantum computing, flash memory, and the scanning tunneling microscope. Tunneling limits the minimum size of devices used in microelectronics because electrons tunnel readily through insulating layers and transistors that are thinner than about 1 nm.[6]

The effect was predicted in the early 20th century. Its acceptance as a general physical phenomenon came mid-century.[7]

Quantum tunnelling falls under the domain of quantum mechanics. To understand the phenomenon, particles attempting to travel across a potential barrier can be compared to a ball trying to roll over a hill.
Quantum mechanics and classical mechanics differ in their treatment of this scenario. Classical mechanics predicts that particles that do not have enough energy to classically surmount a barrier cannot reach the other side. Thus, a ball without sufficient energy to surmount the hill would roll back down. In quantum mechanics, a particle can, with a small probability, tunnel to the other side, thus crossing the barrier. The reason for this difference comes from treating matter as having properties of waves and particles.

The wave function of a physical system of particles specifies everything that can be known about the system.[8] Therefore, problems in quantum mechanics analyze the system's wave function. Using mathematical formulations, such as the Schrödinger equation, the time evolution of a known wave function can be deduced. The square of the absolute value of this wave function is directly related to the probability distribution of the particle positions, which describes the probability that the particles would be measured at those positions.

As shown in the animation, when a wave packet impinges on the barrier, most of it is reflected and some is transmitted through the barrier. The wave packet becomes more de-localized: it is now on both sides of the barrier and lower in maximum amplitude, but equal in integrated square-magnitude, meaning that the probability the particle is somewhere remains unity. The wider the barrier and the higher the barrier energy, the lower the probability of tunneling.

Some models of a tunneling barrier, such as the rectangular barriers shown, can be analysed and solved algebraically.[9]: 96 Most problems do not have an algebraic solution, so numerical solutions are used. "Semiclassical methods" offer approximate solutions that are easier to compute, such as the WKB approximation.

The Schrödinger equation was published in 1926.
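The rectangular barrier mentioned above is one of the few cases with a closed-form answer. As a minimal illustration, the plain-Python sketch below evaluates the standard transmission coefficient T = [1 + V0² sinh²(κa)/(4E(V0 − E))]⁻¹ with κ = √(2m(V0 − E))/ħ for a particle with energy E below a barrier of height V0 and width a; the particular energies and widths are illustrative assumptions, not values from the text.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def rectangular_barrier_transmission(energy_ev, v0_ev, width_m, mass=M_E):
    """Closed-form transmission coefficient for a rectangular barrier
    of height V0 and width a, for a particle with energy E < V0:
        T = 1 / (1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E))),
        kappa = sqrt(2 m (V0 - E)) / hbar.
    """
    e_j, v0_j = energy_ev * EV, v0_ev * EV
    if not 0.0 < e_j < v0_j:
        raise ValueError("expects 0 < E < V0")
    kappa = math.sqrt(2.0 * mass * (v0_j - e_j)) / HBAR
    sinh2 = math.sinh(kappa * width_m) ** 2
    return 1.0 / (1.0 + v0_j ** 2 * sinh2 / (4.0 * e_j * (v0_j - e_j)))

# A 0.5 eV electron meeting a 1 eV barrier (illustrative numbers):
t_narrow = rectangular_barrier_transmission(0.5, 1.0, 1e-9)  # 1 nm wide
t_wide = rectangular_barrier_transmission(0.5, 1.0, 2e-9)    # 2 nm wide
print(t_narrow, t_wide)
```

Doubling the width from 1 nm to 2 nm suppresses the transmission by roughly three orders of magnitude, the exponential width dependence described above.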
The first person to apply the Schrödinger equation to a problem that involved tunneling between two classically allowed regions through a potential barrier was Friedrich Hund, in a series of articles published in 1927. He studied the solutions of a double-well potential and discussed molecular spectra.[10] Leonid Mandelstam and Mikhail Leontovich discovered tunneling independently and published their results in 1928.[11]

In 1927, Lothar Nordheim, assisted by Ralph Fowler, published a paper that discussed thermionic emission and reflection of electrons from metals. He assumed a surface potential barrier that confines the electrons within the metal and showed that the electrons have a finite probability of tunneling through or reflecting from the surface barrier when their energies are close to the barrier energy. Classically, the electron would either transmit or reflect with 100% certainty, depending on its energy. In 1928 J. Robert Oppenheimer published two papers on field emission, i.e. the emission of electrons induced by strong electric fields. Nordheim and Fowler simplified Oppenheimer's derivation and found values for the emitted currents and work functions that agreed with experiments.[10]

A great success of the tunnelling theory was the mathematical explanation for alpha decay, which was developed in 1928 by George Gamow and independently by Ronald Gurney and Edward Condon.[12][13][14][15] The latter researchers simultaneously solved the Schrödinger equation for a model nuclear potential and derived a relationship between the half-life of the particle and the energy of emission that depended directly on the mathematical probability of tunneling. All three researchers were familiar with the works on field emission,[10] and Gamow was aware of Mandelstam and Leontovich's findings.[16]

In the early days of quantum theory, the term tunnel effect was not used, and the effect was instead referred to as penetration of, or leaking through, a barrier.
The German term wellenmechanische Tunneleffekt was used in 1931 by Walter Schottky.[10] The English term tunnel effect entered the language in 1932 when it was used by Yakov Frenkel in his textbook.[10]

In 1957 Leo Esaki demonstrated tunneling of electrons over a few-nanometer-wide barrier in a semiconductor structure and developed a diode based on the tunnel effect.[17] In 1960, following Esaki's work, Ivar Giaever showed experimentally that tunnelling also took place in superconductors. The tunnelling spectrum gave direct evidence of the superconducting energy gap. In 1962, Brian Josephson predicted the tunneling of superconducting Cooper pairs. Esaki, Giaever and Josephson shared the 1973 Nobel Prize in Physics for their works on quantum tunneling in solids.[18][7] In 1981, Gerd Binnig and Heinrich Rohrer developed a new type of microscope, called the scanning tunneling microscope, which is based on tunnelling and is used for imaging surfaces at the atomic level. Binnig and Rohrer were awarded the Nobel Prize in Physics in 1986 for their discovery.[19]

Tunnelling is the cause of some important macroscopic physical phenomena.

Tunnelling is a source of current leakage in very-large-scale integration (VLSI) electronics and results in a substantial power drain and heating effects that plague such devices. It is considered the lower limit on how small microelectronic device elements can be made.[20] Tunnelling is a fundamental technique used to program the floating gates of flash memory.

Cold emission of electrons is relevant to semiconductor and superconductor physics. It is similar to thermionic emission, where electrons randomly jump from the surface of a metal to follow a voltage bias because they statistically end up with more energy than the barrier, through random collisions with other particles.
When the electric field is very large, the barrier becomes thin enough for electrons to tunnel out of the atomic state, leading to a current that varies approximately exponentially with the electric field.[21] These materials are important for flash memory, vacuum tubes, and some electron microscopes.

A simple barrier can be created by separating two conductors with a very thin insulator. These are tunnel junctions, the study of which requires understanding quantum tunnelling.[22] Josephson junctions take advantage of quantum tunnelling and superconductivity to create the Josephson effect. This has applications in precision measurements of voltages and magnetic fields,[21] as well as the multijunction solar cell.

Diodes are electrical semiconductor devices that allow electric current flow in one direction more than the other. The device depends on a depletion layer between N-type and P-type semiconductors to serve its purpose. When these are heavily doped the depletion layer can be thin enough for tunnelling. When a small forward bias is applied, the current due to tunnelling is significant. This has a maximum at the point where the voltage bias is such that the energy levels of the p and n conduction bands are the same. As the voltage bias is increased, the two conduction bands no longer line up and the diode acts typically.[23]

Because the tunnelling current drops off rapidly, tunnel diodes can be created that have a range of voltages for which current decreases as voltage increases. This peculiar property is used in some applications, such as high-speed devices where the characteristic tunnelling probability changes as rapidly as the bias voltage.[23]

The resonant tunnelling diode makes use of quantum tunnelling in a very different manner to achieve a similar result. This diode has a resonant voltage for which a current favors a particular voltage, achieved by placing two thin layers with a high-energy conductance band near each other.
This creates a quantum potential well that has a discrete lowest energy level. When this energy level is higher than that of the electrons, no tunnelling occurs and the diode is in reverse bias. Once the two voltage energies align, the electrons flow like an open wire. As the voltage further increases, tunnelling becomes improbable and the diode acts like a normal diode again before a second energy level becomes noticeable.[24]

A European research project demonstrated field-effect transistors in which the gate (channel) is controlled via quantum tunnelling rather than by thermal injection, reducing gate voltage from ≈1 volt to 0.2 volts and reducing power consumption by up to 100×. If these transistors can be scaled up into VLSI chips, they would improve the performance per power of integrated circuits.[25][26]

While the Drude–Lorentz model of electrical conductivity makes excellent predictions about the nature of electrons conducting in metals, it can be furthered by using quantum tunnelling to explain the nature of the electron's collisions.[21] When a free electron wave packet encounters a long array of uniformly spaced barriers, the reflected part of the wave packet interferes uniformly with the transmitted one between all barriers so that 100% transmission becomes possible. The theory predicts that if positively charged nuclei form a perfectly rectangular array, electrons will tunnel through the metal as free electrons, leading to extremely high conductance, and that impurities in the metal will disrupt it.[21]

The scanning tunnelling microscope (STM), invented by Gerd Binnig and Heinrich Rohrer, may allow imaging of individual atoms on the surface of a material.[21] It operates by taking advantage of the relationship between quantum tunnelling and distance.
When the tip of the STM's needle is brought close to a conduction surface that has a voltage bias, measuring the current of electrons that are tunnelling between the needle and the surface reveals the distance between the needle and the surface. By using piezoelectric rods that change in size when voltage is applied, the height of the tip can be adjusted to keep the tunnelling current constant. The time-varying voltages that are applied to these rods can be recorded and used to image the surface of the conductor.[21] STMs are accurate to 0.001 nm, or about 1% of an atomic diameter.[24]

Quantum tunnelling is an essential phenomenon for nuclear fusion. The temperature in stellar cores is generally insufficient to allow atomic nuclei to overcome the Coulomb barrier and achieve thermonuclear fusion. Quantum tunnelling increases the probability of penetrating this barrier. Though this probability is still low, the extremely large number of nuclei in the core of a star is sufficient to sustain a steady fusion reaction.[27]

Radioactive decay is the process of emission of particles and energy from the unstable nucleus of an atom to form a stable product. This is done via the tunnelling of a particle out of the nucleus (an electron tunneling into the nucleus is electron capture). This was the first application of quantum tunnelling. Radioactive decay is a relevant issue for astrobiology, as this consequence of quantum tunnelling creates a constant energy source over a large time interval for environments outside the circumstellar habitable zone where insolation would not be possible (subsurface oceans) or effective.[27]

Quantum tunnelling may be one of the mechanisms of hypothetical proton decay.[28][29]

Chemical reactions in the interstellar medium occur at extremely low energies. Probably the most fundamental ion-molecule reaction involves hydrogen ions with hydrogen molecules.
The quantum mechanical tunnelling rate for the same reaction using the hydrogen isotope deuterium, D− + H2 → H− + HD, has been measured experimentally in an ion trap. The deuterium was placed in an ion trap and cooled. The trap was then filled with hydrogen. At the temperatures used in the experiment, the energy barrier for the reaction would not allow the reaction to succeed with classical dynamics alone. Quantum tunneling allowed reactions to happen in rare collisions. It was calculated from the experimental data that the reaction occurred in about one of every hundred billion collisions.[30]

In chemical kinetics, the substitution of a light isotope of an element with a heavier one typically results in a slower reaction rate. This is generally attributed to differences in the zero-point vibrational energies for chemical bonds containing the lighter and heavier isotopes and is generally modeled using transition state theory. However, in certain cases, large isotopic effects are observed that cannot be accounted for by a semi-classical treatment, and quantum tunnelling is required. R. P. Bell developed a modified treatment of Arrhenius kinetics that is commonly used to model this phenomenon.[31]

By including quantum tunnelling, the astrochemical syntheses of various molecules in interstellar clouds can be explained, such as the synthesis of molecular hydrogen, water (ice) and the prebiotically important formaldehyde.[27] Tunnelling of molecular hydrogen has been observed in the lab.[32]

Quantum tunnelling is among the central non-trivial quantum effects in quantum biology.[33] Here it is important both as electron tunnelling and proton tunnelling. Electron tunnelling is a key factor in many biochemical redox reactions (photosynthesis, cellular respiration) as well as in enzymatic catalysis. Proton tunnelling is a key factor in spontaneous DNA mutation.[27]

Spontaneous mutation occurs when normal DNA replication takes place after a particularly significant proton has tunnelled.[34] A hydrogen bond joins DNA base pairs.
Along a hydrogen bond there is a double-well potential, with the two wells separated by a potential energy barrier. It is believed that the double well potential is asymmetric, with one well deeper than the other, such that the proton normally rests in the deeper well. For a mutation to occur, the proton must have tunnelled into the shallower well. The proton's movement from its regular position is called a tautomeric transition. If DNA replication takes place in this state, the base pairing rule for DNA may be jeopardised, causing a mutation.[35] Per-Olov Löwdin was the first to develop this theory of spontaneous mutation within the double helix. Other instances of quantum tunnelling-induced mutations in biology are believed to be a cause of ageing and cancer.[36]

The time-independent Schrödinger equation for one particle in one dimension can be written as

{\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}\Psi (x)+V(x)\Psi (x)=E\Psi (x)}

or

{\displaystyle {\frac {d^{2}}{dx^{2}}}\Psi (x)={\frac {2m}{\hbar ^{2}}}\left(V(x)-E\right)\Psi (x)\equiv {\frac {2m}{\hbar ^{2}}}M(x)\Psi (x),}

where Ψ is the wavefunction, m is the mass of the particle, ħ is the reduced Planck constant, V(x) is the potential energy, E is the energy of the particle, and M(x) = V(x) − E is the height of the potential relative to the particle's energy.

The solutions of the Schrödinger equation take different forms for different values of x, depending on whether M(x) is positive or negative. When M(x) is constant and negative, the Schrödinger equation can be written in the form

{\displaystyle {\frac {d^{2}}{dx^{2}}}\Psi (x)={\frac {2m}{\hbar ^{2}}}M(x)\Psi (x)=-k^{2}\Psi (x),\qquad {\text{where}}\quad k^{2}=-{\frac {2m}{\hbar ^{2}}}M.}

The solutions of this equation represent travelling waves, with phase-constant +k or −k.
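These constant-M regimes can be made numerically concrete: in a classically allowed region (M < 0) the wavefunction oscillates with wavelength 2π/k, while inside a barrier (M > 0) it decays by a factor of e over the characteristic length 1/κ, with κ = √(2mM)/ħ of the same form as k. A minimal sketch in plain Python, using illustrative electron energies that are assumptions for demonstration:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def wavenumber(energy_ev, v_ev, mass=M_E):
    """Magnitude of k (allowed region, M < 0) or kappa (barrier, M > 0),
    in 1/m, computed from |M| = |V - E| as sqrt(2 m |M|) / hbar."""
    m_term = abs(v_ev - energy_ev) * EV  # |M(x)| = |V - E| in joules
    return math.sqrt(2.0 * mass * m_term) / HBAR

# 1 eV electron in a region of zero potential: oscillatory solution
# whose wavelength 2*pi/k is the de Broglie wavelength (~1.2 nm).
k = wavenumber(1.0, 0.0)
print("wavelength:", 2.0 * math.pi / k)

# The same electron inside a 2 eV barrier: evanescent solution that
# falls off by a factor of e over the decay length 1/kappa (~0.2 nm).
kappa = wavenumber(1.0, 2.0)
print("decay length:", 1.0 / kappa)
```

The sub-nanometre decay length is why tunnelling currents are only appreciable across barriers a few nanometres thick or less.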
Alternatively, if M(x) is constant and positive, then the Schrödinger equation can be written in the form

{\displaystyle {\frac {d^{2}}{dx^{2}}}\Psi (x)={\frac {2m}{\hbar ^{2}}}M(x)\Psi (x)={\kappa }^{2}\Psi (x),\qquad {\text{where}}\quad {\kappa }^{2}={\frac {2m}{\hbar ^{2}}}M.}

The solutions of this equation are rising and falling exponentials in the form of evanescent waves. When M(x) varies with position, the same difference in behaviour occurs, depending on whether M(x) is negative or positive. It follows that the sign of M(x) determines the nature of the medium, with negative M(x) corresponding to medium A and positive M(x) corresponding to medium B. It thus follows that evanescent wave coupling can occur if a region of positive M(x) is sandwiched between two regions of negative M(x), hence creating a potential barrier.

The mathematics of dealing with the situation where M(x) varies with x is difficult, except in special cases that usually do not correspond to physical reality. A full mathematical treatment appears in the 1965 monograph by Fröman and Fröman. Their ideas have not been incorporated into physics textbooks, but their corrections have little quantitative effect.

The wave function is expressed as the exponential of a function:

{\displaystyle \Psi (x)=e^{\Phi (x)},}

where

{\displaystyle \Phi ''(x)+\Phi '(x)^{2}={\frac {2m}{\hbar ^{2}}}\left(V(x)-E\right).}

{\displaystyle \Phi '(x)} is then separated into real and imaginary parts:

{\displaystyle \Phi '(x)=A(x)+iB(x),}

where A(x) and B(x) are real-valued functions.
Substituting the second equation into the first and using the fact that the imaginary part must vanish (which gives $B'(x) + 2A(x)B(x) = 0$) results in:
$$A'(x) + A(x)^2 - B(x)^2 = \frac{2m}{\hbar^2}\bigl(V(x)-E\bigr).$$
To solve this equation using the semiclassical approximation, each function must be expanded as a power series in $\hbar$. From the equations, the power series must start with at least an order of $\hbar^{-1}$ to satisfy the real part of the equation; for a good classical limit, starting with the highest possible power of the Planck constant is preferable, which leads to
$$A(x) = \frac{1}{\hbar}\sum_{k=0}^{\infty}\hbar^k A_k(x)$$
and
$$B(x) = \frac{1}{\hbar}\sum_{k=0}^{\infty}\hbar^k B_k(x),$$
with the following constraints on the lowest order terms:
$$A_0(x)^2 - B_0(x)^2 = 2m\bigl(V(x)-E\bigr)$$
and
$$A_0(x)B_0(x) = 0.$$

At this point two extreme cases can be considered.

Case 1: If the amplitude varies slowly as compared to the phase, then $A_0(x) = 0$ and
$$B_0(x) = \pm\sqrt{2m\bigl(E-V(x)\bigr)},$$
which corresponds to classical motion. Resolving the next order of expansion yields
$$\Psi(x) \approx C\,\frac{e^{i\int dx\,\sqrt{\frac{2m}{\hbar^2}\left(E-V(x)\right)}+\theta}}{\sqrt[4]{\frac{2m}{\hbar^2}\left(E-V(x)\right)}}.$$

Case 2: If the phase varies slowly as compared to the amplitude, then $B_0(x) = 0$ and
$$A_0(x) = \pm\sqrt{2m\bigl(V(x)-E\bigr)},$$
which corresponds to tunnelling.
Resolving the next order of the expansion yields
$$\Psi(x) \approx \frac{C_{+}e^{+\int dx\,\sqrt{\frac{2m}{\hbar^2}\left(V(x)-E\right)}} + C_{-}e^{-\int dx\,\sqrt{\frac{2m}{\hbar^2}\left(V(x)-E\right)}}}{\sqrt[4]{\frac{2m}{\hbar^2}\left(V(x)-E\right)}}.$$
In both cases it is apparent from the denominator that these approximate solutions break down near the classical turning points, where $E = V(x)$. Away from the potential hill, the particle acts like a free, oscillating wave; beneath the potential hill, the particle undergoes exponential changes in amplitude. By considering the behaviour at these limits and at the classical turning points, a global solution can be made.

To start, a classical turning point $x_1$ is chosen and $\frac{2m}{\hbar^2}\left(V(x)-E\right)$ is expanded in a power series about $x_1$:
$$\frac{2m}{\hbar^2}\bigl(V(x)-E\bigr) = v_1(x-x_1) + v_2(x-x_1)^2 + \cdots$$
Keeping only the first order term ensures linearity:
$$\frac{2m}{\hbar^2}\bigl(V(x)-E\bigr) = v_1(x-x_1).$$
Using this approximation, the equation near $x_1$ becomes a differential equation:
$$\frac{d^2}{dx^2}\Psi(x) = v_1(x-x_1)\Psi(x).$$
This can be solved using Airy functions as solutions:
$$\Psi(x) = C_A\,\mathrm{Ai}\!\left(\sqrt[3]{v_1}\,(x-x_1)\right) + C_B\,\mathrm{Bi}\!\left(\sqrt[3]{v_1}\,(x-x_1)\right).$$
Taking these solutions for all classical turning points, a global solution can be formed that links the limiting solutions. Given the two coefficients on one side of a classical turning point, the two coefficients on the other side can be determined by using this local solution to connect them.
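The two regimes on either side of a turning point can be seen by integrating the linearized equation $\Psi'' = v_1(x-x_1)\Psi$ directly. A minimal pure-Python sketch (arbitrary units; $v_1 = 1$, $x_1 = 0$, the domain, and the starting values are illustrative assumptions):

```python
# March the linearized turning-point equation psi'' = v1 (x - x1) psi
# with a central-difference scheme.  Its exact solutions are the Airy
# functions Ai and Bi; generically the solution oscillates on the
# classically allowed side (x < x1) and grows exponentially (Bi-like)
# on the forbidden side (x > x1).

v1, x1 = 1.0, 0.0
h = 0.001
xs = [-20.0 + h * i for i in range(int(25.0 / h) + 1)]   # x in [-20, 5]

def f(x):
    return v1 * (x - x1)

psi = [1.0, 1.0]                      # two starting values fix a solution
for i in range(1, len(xs) - 1):
    psi.append(2 * psi[i] - psi[i - 1] + h * h * f(xs[i]) * psi[i])

# Oscillatory (allowed) side: count zero crossings.
allowed = [p for x, p in zip(xs, psi) if x < x1]
zero_crossings = sum(1 for a, b in zip(allowed, allowed[1:]) if a * b < 0)

# Forbidden side: the generic solution grows rapidly past the turning point.
forbidden_end = abs(psi[-1])
```

The solution crosses zero many times for $x < x_1$ but grows by orders of magnitude for $x > x_1$, matching the Airy-function picture.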
Hence, the Airy function solutions will asymptote into sine, cosine and exponential functions in the proper limits. The relationships between $C, \theta$ and $C_{+}, C_{-}$ are
$$C_{+} = \tfrac{1}{2}C\cos\left(\theta - \tfrac{\pi}{4}\right)$$
and
$$C_{-} = -C\sin\left(\theta - \tfrac{\pi}{4}\right).$$
With the coefficients found, the global solution can be found. Therefore, the transmission coefficient for a particle tunnelling through a single potential barrier is
$$T(E) = e^{-2\int_{x_1}^{x_2} dx\,\sqrt{\frac{2m}{\hbar^2}\left[V(x)-E\right]}},$$
where $x_1, x_2$ are the two classical turning points for the potential barrier. For a rectangular barrier, this expression simplifies to:
$$T(E) = e^{-2\sqrt{\frac{2m}{\hbar^2}(V_0-E)}\,(x_2-x_1)}.$$

Some physicists have claimed that it is possible for spin-zero particles to travel faster than the speed of light when tunnelling.[7] This appears to violate the principle of causality, since a frame of reference then exists in which the particle arrives before it has left. In 1998, Francis E. Low briefly reviewed the phenomenon of zero-time tunnelling.[37] More recently, experimental tunnelling time data of phonons, photons, and electrons was published by Günter Nimtz.[38] Another experiment, overseen by A. M. Steinberg, seems to indicate that particles could tunnel at apparent speeds faster than light.[39][40]

Other physicists, such as Herbert Winful,[41] disputed these claims. Winful argued that the wave packet of a tunnelling particle propagates locally, so a particle cannot tunnel through the barrier non-locally. Winful also argued that the experiments purported to show non-local propagation have been misinterpreted. In particular, the group velocity of a wave packet does not measure its speed, but is related to the amount of time the wave packet is stored in the barrier.
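Returning to the rectangular-barrier formula above, a short Python sketch can put numbers to it. The constants are the standard SI values; the 1 eV barrier height, 0.5 eV electron energy, and sub-nanometre widths are illustrative choices, not taken from the text:

```python
import math

# WKB transmission coefficient for a rectangular barrier,
#   T(E) = exp(-2 * sqrt(2m(V0 - E)) / hbar * (x2 - x1)).
# Illustrative example: an electron tunnelling through a 1 eV barrier.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # 1 eV in J

def wkb_transmission(E_eV, V0_eV, width_m, mass=M_E):
    """WKB tunnelling probability; assumes E < V0 (tunnelling regime)."""
    kappa = math.sqrt(2 * mass * (V0_eV - E_eV) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

T_thin = wkb_transmission(0.5, 1.0, 0.5e-9)   # 0.5 nm wide barrier
T_thick = wkb_transmission(0.5, 1.0, 1.0e-9)  # 1.0 nm wide barrier
```

Doubling the barrier width squares the transmission coefficient, which is the exponential sensitivity to barrier width that tunnelling devices such as scanning tunnelling microscopes exploit.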
Moreover, if quantum tunneling is modeled with the relativistic Dirac equation, well-established mathematical theorems imply that the process is completely subluminal.[42][43]

The concept of quantum tunneling can be extended to situations where there exists a quantum transport between regions that are classically not connected, even if there is no associated potential barrier. This phenomenon is known as dynamical tunnelling.[44][45] The concept of dynamical tunnelling is particularly suited to address the problem of quantum tunnelling in high dimensions (d > 1). In the case of an integrable system, where bounded classical trajectories are confined onto tori in phase space, tunnelling can be understood as the quantum transport between semi-classical states built on two distinct but symmetric tori.[46]

In real life, most systems are not integrable and display various degrees of chaos. Classical dynamics is then said to be mixed, and the system phase space is typically composed of islands of regular orbits surrounded by a large sea of chaotic orbits. The existence of the chaotic sea, where transport is classically allowed, between the two symmetric tori then assists the quantum tunnelling between them. This phenomenon is referred to as chaos-assisted tunnelling[47] and is characterized by sharp resonances of the tunnelling rate when varying any system parameter. When $\hbar$ is small compared to the size of the regular islands, the fine structure of the classical phase space plays a key role in tunnelling. In particular, the two symmetric tori are coupled "via a succession of classically forbidden transitions across nonlinear resonances" surrounding the two islands.[48]

Several phenomena have the same behaviour as quantum tunnelling.
Two examples are evanescent wave coupling[49] (the application of Maxwell's wave equation to light) and the application of the non-dispersive wave equation from acoustics to "waves on strings".[citation needed] These effects are modeled similarly to the rectangular potential barrier. In these cases, there is one transmission medium through which the wave propagates that is the same or nearly the same throughout, and a second medium through which the wave travels differently. This can be described as a thin region of medium B between two regions of medium A. The analysis of a rectangular barrier by means of the Schrödinger equation can be adapted to these other effects provided that the wave equation has travelling wave solutions in medium A but real exponential solutions in medium B.

In optics, medium A is a vacuum while medium B is glass. In acoustics, medium A may be a liquid or gas and medium B a solid. For both cases, medium A is a region of space where the particle's total energy is greater than its potential energy and medium B is the potential barrier. These have an incoming wave and resultant waves in both directions. There can be more media and barriers, and the barriers need not be discrete. Approximations are useful in this case.

A classical wave-particle association was originally analyzed as analogous to quantum tunneling,[50] but subsequent analysis found a fluid dynamics cause related to the vertical momentum imparted to particles near the barrier.[51]
https://en.wikipedia.org/wiki/Quantum_tunneling
In quantum mechanics, the Renninger negative-result experiment is a thought experiment that illustrates some of the difficulties of understanding the nature of wave function collapse and measurement in quantum mechanics. The statement is that a particle need not be detected in order for a quantum measurement to occur, and that the lack of a particle detection can also constitute a measurement. The thought experiment was first posed in 1953 by Mauritius Renninger. The non-detection of a particle in one arm of an interferometer implies that the particle must be in the other arm. It can be understood to be a refinement of the paradox presented in the Mott problem.

The Mott problem concerns the paradox of reconciling the spherical wave function describing the emission of an alpha ray by a radioactive nucleus with the linear tracks seen in a cloud chamber. Formulated in 1927 by Albert Einstein and Max Born,[citation needed] it was resolved by a calculation done by Sir Nevill Francis Mott that showed that the correct quantum mechanical system must include the wave functions for the atoms in the cloud chamber as well as that for the alpha ray. The calculation showed that the resulting probability is non-zero only on straight lines raying out from the decayed atom; that is, once the measurement is performed, the wave function becomes non-vanishing only near the classical trajectory of the particle.

In Renninger's 1960 formulation, the cloud chamber is replaced by a pair of hemispherical particle detectors, completely surrounding a radioactive atom at the center that is about to decay by emitting an alpha ray. For the purposes of the thought experiment, the detectors are assumed to be 100% efficient, so that the emitted alpha ray is always detected. By consideration of the normal process of quantum measurement, it is clear that if one detector registers the decay, then the other will not: a single particle cannot be detected by both detectors.
The core observation is that the non-observation of a particle on one of the shells is just as good a measurement as detecting it on the other. The strength of the paradox can be heightened by considering the two hemispheres to be of different diameters, with the outer shell a good distance farther away. In this case, after the non-observation of the alpha ray on the inner shell, one is led to conclude that the (originally spherical) wave function has "collapsed" to a hemisphere shape, and (because the outer shell is distant) is still in the process of propagating to the outer shell, where it is guaranteed to eventually be detected.

In the standard quantum-mechanical formulation, the statement is that the wave function has partially collapsed and taken on a hemispherical shape. The full collapse of the wave function, down to a single point, does not occur until it interacts with the outer hemisphere. The conundrum of this thought experiment lies in the idea that the wave function interacted with the inner shell, causing a partial collapse of the wave function, without actually triggering any of the detectors on the inner shell. This illustrates that wave function collapse can occur even in the absence of particle detection.

There are a number of common objections to the standard interpretation of the experiment. Some of these objections, and standard rebuttals, are listed below.

It is sometimes noted that the time of the decay of the nucleus cannot be controlled, and that the finite half-life invalidates the result. This objection can be dispelled by sizing the hemispheres appropriately with regard to the half-life of the nucleus. The radii are chosen so that the more distant hemisphere is much farther away than the half-life of the decaying nucleus times the speed of the alpha ray.
To lend concreteness to the example, assume that the half-life of the decaying nucleus is 0.01 microsecond (most elementary particle decay half-lives are much shorter; most nuclear decay half-lives are much longer; some atomic electromagnetic excitations have a half-life about this long). If one were to wait 0.4 microseconds, then the probability that the particle will have decayed will be $1 - 2^{-40} \simeq 1 - 10^{-12}$; that is, the probability will be very, very close to one. The outer hemisphere is then placed at (speed of light) times (0.4 microseconds) away: that is, at about 120 meters away. The inner hemisphere is taken to be much closer, say at 1 meter. If, after (for example) 0.3 microseconds, one has not seen the decay product on the inner, closer hemisphere, one can conclude that the particle has decayed with almost absolute certainty, but is still in flight to the outer hemisphere. The paradox then concerns the correct description of the wave function in such a scenario.

Another common objection states that the decay particle was always travelling in a straight line, and that only the probability distribution is spherical. This, however, is a misinterpretation of the Mott problem, and is false. The wave function is truly spherical, and is not the incoherent superposition (mixed state) of a large number of plane waves. The distinction between mixed and pure states is illustrated more clearly in a different context, in the debate comparing the ideas behind local hidden variables and their refutation by means of the Bell inequalities.

A true quantum-mechanical wave would diffract from the inner hemisphere, leaving a diffraction pattern to be observed on the outer hemisphere. This is not really an objection, but rather an affirmation that a partial collapse of the wave function has occurred.
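The numbers in the half-life example above are easy to verify with a quick Python check:

```python
# Check of the quoted figures: a 0.01 microsecond half-life, a 0.4
# microsecond wait (40 half-lives), and an outer shell placed at the
# distance light travels in that time.  SI units throughout.

half_life = 0.01e-6            # s
wait = 0.4e-6                  # s
n_half_lives = wait / half_life            # 40 half-lives
p_not_decayed = 2 ** (-n_half_lives)       # ~ 1e-12
p_decayed = 1 - p_not_decayed              # very, very close to one

c = 299_792_458                # speed of light, m/s
outer_radius = c * wait        # ~ 120 m
```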
If a diffraction pattern were not observed, one would be forced to conclude that the particle had collapsed down to a ray, and stayed that way, as it passed the inner hemisphere; this is clearly at odds with standard quantum mechanics. Diffraction from the inner hemisphere is expected.

In another objection, it is noted that in real life, a decay product is either spin-1/2 (a fermion) or a photon (spin-1). This is taken to mean that the decay is not truly spherically symmetric, but rather has some other distribution, such as a p-wave. However, on closer examination, one sees this has no bearing on the spherical symmetry of the wave function. Even if the initial state were polarized, for example by placing it in a magnetic field, the non-spherical decay pattern would still be properly described by quantum mechanics.

The above formulation is inherently phrased in non-relativistic language, and it is noted that elementary particles have relativistic decay products. This objection only serves to confuse the issue. The experiment can be reformulated so that the decay product is slow-moving. At any rate, special relativity is not in conflict with quantum mechanics.

A final objection states that in real life, particle detectors are imperfect, and sometimes neither the detectors on the one hemisphere nor the other will go off. This argument only serves to confuse the issue, and has no bearing on the fundamental nature of the wave function.
https://en.wikipedia.org/wiki/Renninger_negative-result_experiment
Wheeler's delayed-choice experiment describes a family of thought experiments in quantum physics proposed by John Archibald Wheeler, with the most prominent among them appearing in 1978 and 1984.[1] These experiments illustrate the central point of quantum theory: "It is wrong to attribute a tangibility to the photon in all its travel from the point of entry to its last instant of flight."[2]: 184 

These experiments close a loophole in the traditional double-slit experiment demonstration that quantum behavior depends on the experimental arrangement: the possibility that a photon might adjust its behavior from particle to wave or vice versa. By altering the apparatus after the photon is supposed to be in "flight", the loophole is closed.[1] Cosmic versions of the delayed-choice experiment use photons emitted billions of years ago; the results are unchanged.[3] The concept of delayed choice has been productive of many revealing experiments.[1] New versions of the delayed-choice concept use quantum effects to control the "choices", leading to the delayed-choice quantum eraser.

Wheeler's delayed-choice experiment demonstrates that no particle-propagation model consistent with relativity explains quantum theory.[2]: 184 Like the double-slit experiment, Wheeler's concept has two equivalent paths between a source and detector. Like the which-way versions of the double-slit experiment, the experiment is run in two versions: one designed to detect wave interference and one designed to detect particles. The new ingredient in Wheeler's approach is a delayed choice between these two experiments: the decision to measure wave interference or particle path is delayed until just before the detection.
The goal is to ensure that any traveling particle or wave will have passed the area of two distinct paths in the quantum system before the choice of experiment is made.[4]: 967 

Wheeler's cosmic-scale thought experiment employs a quasar or other light source in a galaxy billions of light years away. Some of these quasars are known to be located behind a massive galaxy that acts as a gravitational lens, bending light rays pointing away from Earth back towards us. The result is two images of the source, one direct and one bent. Wheeler proposed to measure the interference between these two paths. Because the light observed in such an experiment was emitted and passed through the lens billions of years ago, no choice on Earth could alter the outcome of the experiment.[1]

Wheeler then plays the devil's advocate and suggests that perhaps for those experimental results to be obtained would mean that, at the instant astronomers inserted their beam splitter, photons that had left the quasar some millions of years ago retroactively decided to travel as waves, and that when the astronomers decided to pull their beam splitter out again, that decision was telegraphed back through time to photons that were leaving some millions of years plus some minutes in the past, so that photons retroactively decided to travel as particles.

Several ways of implementing Wheeler's basic idea have been made into real experiments, and they support the conclusion that Wheeler anticipated:[1] what is done at the exit port of the experimental device before the photon is detected will determine whether it displays interference phenomena or not.

A second kind of experiment resembles the ordinary double-slit experiment. The schematic diagram of this experiment shows that a lens on the far side of the double slits makes the path from each slit diverge slightly from the other after they cross each other fairly near to that lens.
The result is that the two wavefunctions for each photon will be in superposition within a fairly short distance from the double slits, and if a detection screen is provided within the region wherein the wavefunctions are in superposition, then interference patterns will be seen. There is no way by which any given photon could have been determined to have arrived from one or the other of the double slits. However, if the detection screen is removed, the wavefunctions on each path will superimpose on regions of lower and lower amplitudes, and their combined probability values will be much less than the unreinforced probability values at the center of each path. When telescopes are aimed to intercept the center of the two paths, there will be equal probabilities of nearly 50% that a photon will show up in one of them. When a photon is detected by telescope 1, researchers may associate that photon with the wavefunction that emerged from the lower slit. When one is detected in telescope 2, researchers may associate that photon with the wavefunction that emerged from the upper slit. The explanation that supports this interpretation of experimental results is that a photon has emerged from one of the slits, and that is the end of the matter. A photon must have started at the laser, passed through one of the slits, and arrived by a single straight-line path at the corresponding telescope.

The retrocausal explanation, which Wheeler does not accept, says that with the detection screen in place, interference must be manifested. For interference to be manifested, a light wave must have emerged from each of the two slits. Therefore, a single photon, upon coming into the double-slit diaphragm, must have "decided" that it needs to go through both slits to be able to interfere with itself on the detection screen.
For no interference to be manifested, a single photon coming into the double-slit diaphragm must have "decided" to go by only one slit, because that would make it show up at the camera in the appropriate single telescope. In this thought experiment the telescopes are always present, but the experiment can start with the detection screen present and then removed just after the photon leaves the double-slit diaphragm, or it can start with the detection screen absent and then inserted just after the photon leaves the diaphragm. Some theorists argue that inserting or removing the screen in the midst of the experiment can force a photon to retroactively decide to go through the double slits as a particle when it had previously transited them as a wave, or vice versa. Wheeler does not accept this interpretation.

The double-slit experiment, like the other six idealized experiments (microscope, split beam, tilt-teeth, radiation pattern, one-photon polarization, and polarization of paired photons), imposes a choice between complementary modes of observation. In each experiment we have found a way to delay that choice of type of phenomenon to be looked for up to the very final stage of development of the phenomenon, and it depends on whichever type of detection device we then fix upon. That delay makes no difference in the experimental predictions.
On this score everything we find was foreshadowed in that solitary and pregnant sentence of Bohr, "...it...can make no difference, as regards observable effects obtainable by a definite experimental arrangement, whether our plans for constructing or handling the instruments are fixed beforehand or whether we prefer to postpone the completion of our planning until a later moment when the particle is already on its way from one instrument to another."[8]

In Bohm's interpretation of quantum mechanics, the particle obeys classical mechanics except that its movement takes place under the additional influence of its quantum potential. A neutron, for example, has a definite trajectory and passes through one or the other of the two slits and not both, just as in the case of a classical particle. However, the quantum particle in Bohm's interpretation is inseparable from its associated field. That field provides the quantum properties. In the delayed-choice experiment, wave packets associated with the field propagate along both paths of the interferometer, but only one packet contains a particle. The field aspect of the neutron is responsible for the interference between the two paths. Changing the final setup at the detector to look for particle properties amounts to ignoring the field aspect of the neutron.[9][10][11]: 279 

The past is determined and stays what it was up to the moment $T_1$ when the experimental configuration for detecting it as a wave was changed to that of detecting a particle at the arrival time $T_2$. At $T_1$, when the experimental setup was changed, Bohm's quantum potential changes as needed, and the particle moves classically under the new quantum potential till $T_2$, when it is detected as a particle. Thus Bohmian mechanics restores the conventional view of the world and its past. The past is out there as an objective history unalterable retroactively by delayed choice.
The quantum potential contains information about the boundary conditions defining the system, and hence any change of the experimental setup is reflected in changes in the quantum potential, which determines the dynamics of the particle.[11]: 6.7.1 

John Wheeler's original discussion of the possibility of a delayed-choice quantum experiment appeared in an essay entitled "Law Without Law," published in a book he and Wojciech Hubert Zurek edited called Quantum Theory and Measurement, pp. 182–213. He introduced his remarks by reprising the argument between Albert Einstein, who wanted a comprehensible reality, and Niels Bohr, who thought that Einstein's concept of reality was too restricted. Wheeler indicates that Einstein and Bohr explored the consequences of the laboratory experiment that will be discussed below, one in which light can find its way from one corner of a rectangular array of semi-silvered and fully silvered mirrors to the other corner, and then can be made to reveal itself not only as having gone halfway around the perimeter by a single path and then exited, but also as having gone both ways around the perimeter and then to have "made a choice" as to whether to exit by one port or the other. Not only does this result hold for beams of light, but also for single photons of light. Wheeler remarked:

The experiment in the form of an interferometer, discussed by Einstein and Bohr, could theoretically be used to investigate whether a photon sometimes sets off along a single path, always follows two paths but sometimes only makes use of one, or whether something else would turn up. However, it was easier to say, "We will, during random runs of the experiment, insert the second half-silvered mirror just before the photon is timed to get there," than it was to figure out a way to make such a rapid substitution. The speed of light is just too fast to permit a mechanical device to do this job, at least within the confines of a laboratory.
Much ingenuity was needed to get around this problem. After several supporting experiments were published, Jacques et al. claimed that an experiment of theirs follows fully the original scheme proposed by Wheeler.[12][13] Their complicated experiment is based on the Mach–Zehnder interferometer, involving a triggered diamond N–V colour centre photon generator, polarization, and an electro-optical modulator acting as a switchable beam splitter. Measuring in a closed configuration showed interference, while measuring in an open configuration allowed the path of the particle to be determined, which made interference impossible.

The Wheeler version of the interferometer experiment could not be performed in a laboratory until recently because of the practical difficulty of inserting or removing the second beam splitter in the brief time interval between the photon's entering the first beam splitter and its arrival at the location provided for the second beam splitter. This realization of the experiment is done by extending the lengths of both paths by inserting long lengths of fiber optic cable, which makes the time interval involved with transits through the apparatus much longer. A high-speed switchable device on one path, composed of a high-voltage switch, a Pockels cell, and a Glan–Thompson prism, makes it possible to divert that path away from its ordinary destination so that the path effectively comes to a dead end. With the detour in operation, nothing can reach either detector by way of that path, so there can be no interference. With it switched off, the path resumes its ordinary mode of action and passes through the second beam splitter, making interference reappear.
This arrangement does not actually insert and remove the second beam splitter, but it does make it possible to switch from a state in which interference appears to a state in which interference cannot appear, and to do so in the interval between light entering the first beam splitter and light exiting the second beam splitter. If photons had "decided" to enter the first beam splitter as either waves or particles, they must have been directed to undo that decision and to go through the system in their other guise, and they must have done so without any physical process being relayed to the entering photons or the first beam splitter, because that kind of transmission would be too slow even at the speed of light.

Wheeler's interpretation of the physical results would be that in one configuration of the two experiments a single copy of the wavefunction of an entering photon is received, with 50% probability, at one or the other detector, and that under the other configuration two copies of the wavefunction, traveling over different paths, arrive at both detectors, are out of phase with each other, and therefore exhibit interference. In one detector the wavefunctions will be in phase with each other, and the result will be that the photon has 100% probability of showing up in that detector. In the other detector the wavefunctions will be 180° out of phase, will cancel each other exactly, and there will be a 0% probability of their related photons showing up in that detector.[14]

The cosmic experiment envisioned by Wheeler could be described either as analogous to the interferometer experiment or as analogous to a double-slit experiment. The important thing is that by a third kind of device, a massive stellar object acting as a gravitational lens, photons from a source can arrive by two pathways. Depending on how phase differences between wavefunction pairs are arranged, correspondingly different kinds of interference phenomena can be observed.
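The two interferometer configurations described above can be sketched as 2×2 transformations acting on the pair of path amplitudes. The symmetric 50/50 beam-splitter matrix $(1/\sqrt{2})\begin{pmatrix}1 & i\\ i & 1\end{pmatrix}$ is a conventional choice assumed here rather than taken from the text, and common mirror phases are ignored:

```python
import math

# Path amplitudes (a, b); a photon entering one port of a Mach-Zehnder
# interferometer.  "Closed" = both beam splitters present (interference);
# "open" = second beam splitter removed (which-path detection).

def beam_splitter(state):
    # Symmetric 50/50 beam splitter: (1/sqrt(2)) [[1, i], [i, 1]]
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + 1j * b), s * (1j * a + b))

def probabilities(state):
    return tuple(abs(amp) ** 2 for amp in state)

photon = (1 + 0j, 0 + 0j)     # photon enters by a single port

# Closed configuration: all detections end up in one output port
# (100% / 0%), the interference described above.
closed = probabilities(beam_splitter(beam_splitter(photon)))

# Open configuration: 50/50 between the two detectors.
open_config = probabilities(beam_splitter(photon))
```

The closed configuration gives probabilities (0, 1) and the open configuration gives (0.5, 0.5), matching the 100%/0% versus 50%/50% behaviour described in the text.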
Whether and how to merge the incoming wavefunctions can be controlled by experimenters. There are none of the phase differences introduced into the wavefunctions by the experimental apparatus as there are in the laboratory interferometer experiments, so despite there being no double-slit device near the light source, the cosmic experiment is closer to the double-slit experiment. However, Wheeler planned for the experiment to merge the incoming wavefunctions by use of a beam splitter.[15]

The main difficulty in performing this experiment is that the experimenter has no control over, or knowledge of, the length of each of the two paths from the distant quasar. Matching path lengths in time requires using a delay device along one path, and before that task could be done, it would be necessary to find a way to calculate the time delay. One suggestion for synchronizing inputs from the two ends of this cosmic experimental apparatus lies in the characteristics of quasars and the possibility of identifying identical events of some signal characteristic. Information from the Twin Quasars that Wheeler used as the basis of his speculation reaches Earth approximately 14 months apart.[16] Finding a way to keep a quantum of light in some kind of loop for over a year would not be easy.

Wheeler's version of the double-slit experiment is arranged so that the same photon that emerges from two slits can be detected in two ways. The first way lets the two paths come together, lets the two copies of the wavefunction overlap, and shows interference. The second way moves farther away from the photon source, to a position where the distance between the two copies of the wavefunction is too great to show interference effects.
The technical problem in the laboratory is how to insert a detector screen at a point appropriate to observe interference effects, or to remove that screen and reveal the photon detectors that can be restricted to receiving photons from the narrow regions of space where the slits are found. One way to accomplish that task would be to use the recently developed electrically switchable mirrors, and simply change the directions of the two paths from the slits by switching a mirror on or off. As of early 2014, no such experiment had been announced. The cosmic experiment described by Wheeler has other problems, but directing wavefunction copies to one place or another long after the photon involved has presumably "decided" whether to be a wave or a particle requires no great speed at all: one has about a billion years to get the job done. The cosmic version of the interferometer experiment could be adapted to function as a cosmic double-slit device, as indicated in the illustration.[17]: 66  The first real experiment to follow Wheeler's intention for a double-slit apparatus to be subjected to end-game determination of detection method is the one by Walborn et al.[18] Researchers with access to radio telescopes originally designed for SETI research have explicated the practical difficulties of conducting the interstellar Wheeler experiment.[19] Rather than mechanically activating a delay, newer versions of the delayed-choice experiment design two paths controlled by quantum effects; the overall experiment then creates a superposition of the two outcomes, particle behavior or wave behavior. This line of experimentation proved very difficult to carry out when it was first conceived.
Nevertheless, it has proven very valuable over the years, since it has led researchers to provide "increasingly sophisticated demonstrations of the wave–particle duality of single quanta".[20][21] As one experimenter explains, "Wave and particle behavior can coexist simultaneously."[22] A recent experiment by Manning et al. confirms the standard predictions of quantum mechanics with an atom of helium.[23] A macroscopic quantum delayed-choice experiment has been proposed: coherent coupling of two carbon nanotubes could be controlled by amplified single-phonon events.[24] Ma, Zeilinger et al. have summarized what can be known as a result of experiments that have arisen from Wheeler's proposals. They say: Our work demonstrates and confirms that whether the correlations between two entangled photons reveal welcher-weg ["which-way"] information or an interference pattern of one (system) photon depends on the choice of measurement on the other (environment) photon, even when all of the events on the two sides that can be space-like separated are space-like separated. The fact that it is possible to decide whether a wave or particle feature manifests itself long after—and even space-like separated from—the measurement teaches us that we should not have any naive realistic picture for interpreting quantum phenomena. Any explanation of what goes on in a specific individual observation of one photon has to take into account the whole experimental apparatus of the complete quantum state consisting of both photons, and it can only make sense after all information concerning complementary variables has been recorded. Our results demonstrate that the viewpoint that the system photon behaves either definitely as a wave or definitely as a particle would require faster-than-light communication.
Because this would be in strong tension with the special theory of relativity, we believe that such a viewpoint should be given up entirely.[25] The delayed-choice experiment concept began as a series of thought experiments in quantum physics, first proposed by Wheeler in 1978.[26][27] According to the complementarity principle, the 'particle-like' (having exact location) or 'wave-like' (having frequency or amplitude) properties of a photon can be measured, but not both at the same time. Which characteristic is measured depends on whether experimenters use a device intended to observe particles or to observe waves.[28] When this statement is applied very strictly, one could argue that by determining the detector type one could force the photon to become manifest only as a particle or only as a wave. Detection of a photon is generally a destructive process (see quantum nondemolition measurement for non-destructive measurements). For example, a photon can be detected as a consequence of being absorbed by an electron in a photomultiplier that accepts its energy, which is then used to trigger the cascade of events that produces a "click" from that device. In the case of the double-slit experiment, a photon appears as a highly localized point in space and time on a screen. The buildup of the photons on the screen gives an indication of whether the photon must have traveled through the slits as a wave or could have traveled as a particle. The photon is said to have traveled as a wave if the buildup results in the typical interference pattern of waves (see Double-slit experiment § Interference from individual particles for an animation showing the buildup). However, if one of the slits is closed, or two orthogonal polarizers are placed in front of the slits (making the photons passing through different slits distinguishable), then no interference pattern will appear, and the buildup can be explained as the result of the photon traveling as a particle.
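The two kinds of buildup described above can be illustrated numerically. The sketch below is a toy model under stated assumptions: a Gaussian envelope and a cos² fringe term stand in for the real diffraction pattern, and `fringe_freq` is an arbitrary constant, not derived from any actual slit geometry.

```python
import math
import random

def detection_probability(x, both_slits_open, fringe_freq=4 * math.pi):
    """Relative probability of a photon landing at screen position x
    (idealized far-field pattern, arbitrary units)."""
    envelope = math.exp(-x * x)  # Gaussian stand-in for the single-slit envelope
    if both_slits_open:
        # Two overlapping wavefunction copies: cos^2 interference fringes.
        return envelope * math.cos(fringe_freq * x) ** 2
    # One slit closed (or paths distinguishable): smooth particle-like buildup.
    return envelope

def simulate_buildup(n_photons, both_slits_open, seed=0):
    """Accumulate individual photon hits by rejection sampling."""
    rng = random.Random(seed)
    hits = []
    while len(hits) < n_photons:
        x = rng.uniform(-2.0, 2.0)
        if rng.random() < detection_probability(x, both_slits_open):
            hits.append(x)
    return hits
```

With both slits open, accumulated hits cluster into bright fringes and avoid the dark bands where the cos² term vanishes; with one slit closed, the same sampling produces a smooth, structureless pile-up.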
https://en.wikipedia.org/wiki/Wheeler%27s_delayed-choice_experiment
SC (formerly Supercomputing), the International Conference for High Performance Computing, Networking, Storage and Analysis, is the annual conference established in 1988 by the Association for Computing Machinery and the IEEE Computer Society. In 2019, about 13,950 people participated overall;[1] by 2022 attendance had rebounded to 11,830 both in-person and online.[2] The not-for-profit conference is run by a committee of approximately 600 volunteers who spend roughly three years organizing each conference. SC is sponsored by the Association for Computing Machinery and the IEEE Computer Society. From its formation through 2011, ACM sponsorship was managed through ACM's Special Interest Group on Computer Architecture (SIGARCH). Sponsors are listed on each proceedings page in the ACM DL (see, for example,[3]). Beginning in 2012,[4] ACM began the process of transitioning sponsorship from SIGARCH to the recently formed Special Interest Group on High Performance Computing (SIGHPC). This transition was completed after SC15,[5] and for SC16 ACM sponsorship was vested exclusively in SIGHPC (IEEE sponsorship remained unchanged).[6] The conference is governed by a steering committee that includes representatives of the sponsoring societies, the current conference general chair, the general chairs of the preceding two years, the general chairs of the next two conference years, and a number of elected members.[7] All steering committee members are volunteers, with the exception of the two representatives of the sponsoring societies, who are employees of those societies. The committee selects the conference general chair, approves each year's conference budget, and is responsible for setting policy and strategy for the conference. Although each conference committee introduces slight variations on the program each year, the core components of the conference remain largely unchanged from year to year.
The SC Technical Program is competitive, with an acceptance rate of around 20% for papers (see History). Traditionally, the program includes invited talks, panels, research papers, tutorials, workshops, posters, and Birds of a Feather (BoF) sessions.[8] Each year, SC hosts the following conference and sponsoring society awards:[9] In addition to the technical program, SC hosts a research exhibition each year that includes universities, state-sponsored computing research organizations (such as the Federal labs in the US), and vendors of HPC-related hardware and software from many countries around the world. There were 353 exhibitors at SC16 in Salt Lake City, UT.[13] SC's program for students has gone through a variety of changes and emphases over the years. Beginning with SC15,[14] the program is called "Students@SC", and is oriented toward undergraduate and graduate students in computing-related fields, and computing-oriented students in science and engineering. The program includes professional development programs, opportunities to learn from mentors, and engagement with SC's technical sessions. SCinet is SC's research network. Started in 1991, SCinet features emerging technologies for very high bandwidth, low latency wide area network communications, in addition to operational services necessary to provide conference attendees with connectivity to the commodity Internet and to many national research and engineering networks. Since its establishment in 1988,[3] and until 1995,[15] the full name of the conference was the "ACM/IEEE Supercomputing Conference" (sometimes: "ACM/IEEE Conference on Supercomputing"). The conference's abbreviated (and more commonly used) formal name was "Supercomputing 'XY", where XY denotes the last two digits of the year. In 1996, according to the archived front matter of the conference proceedings,[16] the full name was changed to the ACM/IEEE "International Conference on High Performance Computing and Communications".
The latter document further announced that, as of 1997, the conference would undergo a name change and would be called "SC97: High Performance Networking and Computing". The document explained that 1997 [will mark] the first use of "SC97" as the name of the annual conference you've known as "Supercomputing 'XY". This change reflects our growing attention to networking, distributed computing, data-intensive applications, and other emerging technologies that push the frontiers of communications and computing. A 1997 HPC Wire article discussed at length the reasoning, considerations, and concerns that accompanied the decision to change the name of the conference series from "Supercomputing 'XY" to "SC 'XY",[17] stating that It's official: the age of supercomputing has ended. At any rate, the word "supercomputing" has been excised from the title of the annual trade shows, sponsored by the IEEE and ACM, that have been known for almost ten years as "Supercomputing '(final two digits of year)". The next event, to be held in San Jose next November, has been redesignated "SC '97." Like Lewis Carroll's Cheshire Cat, "supercomputing" has faded steadily away until only the smile, nose, and whiskers remain. ... The loss is a real one. An enormous range of ordinary people had some idea, however vague, what "supercomputing" meant. No-caf, local alternatives like "SC" and "HPC" lack this authority. This is not a trivial issue. In these days of rapid change, passing technofancies, and information overload, a rose with the wrong name is just another thorn -- or forgotten immediately. After all, how can businessmen, ordinary consumers, and taxpayers be expected to pay money for something they can't comprehend? More important, will investors and grant-givers hand over money to support further R&D on something whose only identity is an arbitrary clump of capital letters?
Despite these concerns, the abbreviated name of the conference, "SC", is still used today, reminiscent of the abbreviation of the conference's original name—"Supercomputing Conference". The full name, in contrast, underwent several changes. Between 1997 and 2003,[18][19][20][21][22][23][24] the name "High Performance Networking and Computing" was specified in the front matter of the archived conference proceedings in some years (1997, 1998, 2000, 2002), whereas in other years it was omitted altogether in favor of the abbreviated name (1999, 2001, 2003). In 2004,[25] the stated front-matter full name was changed to "High Performance Computing, Networking and Storage Conference". In 2005,[26] this name was replaced in the front matter by the original name of the conference—"Supercomputing". Finally, in 2006,[27] the current full name, as used today, emerged: "The International Conference for High Performance Computing, Networking, Storage and Analysis". Despite all of the name variances in the proceedings through the years, the digital library of ACM, the co-sponsoring society, records the name of the conference as "The ACM/IEEE Conference on Supercomputing" from 1998 to 2008, when it changes to "The International Conference for High Performance Computing, Networking, Storage and Analysis". It is these two names that are used in the full citations to the conference proceedings provided in this article. The table below provides the location, name of the general chair, and acceptance statistics for each year of SC. Note that references for data in these tables apply to data preceding the reference to the left on the same row; for example, for SC17 the single reference substantiates all the information in that row, but for SC05 the source for the convention center and chair is different than the source for the acceptance statistics.
Originally slated to be held in Atlanta, GA, SC20 was converted to a fully virtual conference[28] due to the COVID-19 pandemic; the conference agenda spread across two weeks instead of the typical one week for an in-person conference. Over 7,440 attendees participated from 115 countries.[29] SC21 was held as a hybrid conference, with both in-person attendance in St. Louis, MO, and virtual attendance options available.[30] The following table details the keynote speakers during the history of the conference; as of SC23, 16.7% of the keynote speakers have been female, with a mix of speakers from corporate, academic, and national government organizations.
https://en.wikipedia.org/wiki/ACM/IEEE_Supercomputing_Conference
ACM SIGHPC is the Association for Computing Machinery's Special Interest Group on High Performance Computing, an international community of students, faculty, researchers, and practitioners working on research and in professional practice related to supercomputing, high-end computers, and cluster computing.[1] The organization co-sponsors international conferences related to high performance and scientific computing, including: SC, the International Conference for High Performance Computing, Networking, Storage and Analysis; the Platform for Advanced Scientific Computing (PASC) Conference; Practice and Experience in Advanced Research Computing (PEARC); and PPoPP, the Symposium on Principles and Practice of Parallel Programming.[2] ACM SIGHPC was founded on November 1, 2011, with the support of ACM SIGARCH.[3][4][5] The first chair was Cherri Pancake, who was also the 1999 ACM/IEEE Supercomputing Conference chair. During its formation, the SIG was led by a set of volunteer officers:[6] These officers were replaced by the first elected slate of officers in July 2013, with subsequent elections scheduled every three years. In addition to elected officers, the SIG is supported by a variety of appointed volunteer leaders who are responsible for membership coordination, creating the newsletter, and other duties needed to operate the SIG; the roles vary as the needs of the SIG change over time. These volunteer leaders are appointed by the SIG chair. ACM SIGHPC co-sponsors the following international conferences related to high performance computing: In addition, ACM SIGHPC supports the following conferences with in-cooperation status: SIGHPC offers a variety of travel grants to support student participation in its conferences, including a program in partnership with ACM-W that focuses specifically on participation by women.[17] SIGHPC also sponsors an Outstanding Doctoral Dissertation Award, given each year for the best doctoral dissertation completed in high performance computing.
This award is open to students studying anywhere in the world who have completed a PhD dissertation with HPC as a central research theme.[18] The SIG also sponsors the Emerging Woman Leader in Technical Computing Award. This award is offered once every two years and recognizes a woman in high performance and technical computing during the important middle years of her career, for her work in research, education, and/or practice over the five to fifteen years after completing her last academic degree.[19][20] The ACM SIGHPC Computational and Data Science Fellowships are part of a multi-year program to increase the diversity of students pursuing graduate degrees in data science and computational science. Specifically targeted at women or students from racial/ethnic backgrounds that have not traditionally participated in the computing field, the program is open to students pursuing degrees at institutions anywhere in the world.[21] The program began in 2015 with a five-year commitment of $1.5M in funds from Intel,[22] and continued beginning in 2020 under the sole sponsorship of the SIG. As of the announcement of the 2022 class, the fellowship has supported a total of 73 PhD and MS students.[23] Chapters are sub-organizations within ACM SIGs that may be organized by a geographic region or by a technical topic area. Chapter activities vary, but may include physical meetings, webinars, and workshops. SIGHPC currently supports the following chapters:[24]
https://en.wikipedia.org/wiki/ACM_SIGHPC
High-performance technical computing (HPTC) is the application of high performance computing (HPC) to technical, as opposed to business or scientific, problems (although the lines between the various disciplines are necessarily vague). HPTC often refers to the application of HPC to engineering problems and includes computational fluid dynamics, simulation, modeling, and seismic tomography (particularly in the petrochemical industry).
https://en.wikipedia.org/wiki/High-performance_technical_computing
Jungle computing is a form of high performance computing that distributes computational work across cluster, grid and cloud computing.[1][2] The increasing complexity of the high performance computing environment has provided a range of choices besides traditional supercomputers and clusters. Scientists can now use grid and cloud infrastructures, in a variety of combinations along with traditional supercomputers, all connected via fast networks. The emergence of many-core technologies such as GPUs, as well as supercomputers on chip, within these environments has added to the complexity. Thus, high-performance computing can now use multiple diverse platforms and systems simultaneously, giving rise to the term "computing jungle".[1]
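The idea of spreading work across several platform types at once can be sketched as a round-robin dispatcher. Everything below is hypothetical (the backend names and `run_on` are stand-ins for real cluster, grid, and cloud submission APIs), intended only to show the shape of such a scheduler:

```python
from concurrent.futures import ThreadPoolExecutor
import itertools

# Hypothetical stand-ins for heterogeneous back-ends; a real "jungle"
# deployment would wrap actual cluster/grid/cloud submission clients.
BACKENDS = ["cluster", "grid", "cloud"]

def run_on(backend, task):
    """Pretend to execute a task on the named platform."""
    return f"{task} done on {backend}"

def jungle_map(tasks):
    """Spread independent tasks across all available platforms at once."""
    with ThreadPoolExecutor(max_workers=len(BACKENDS)) as pool:
        assignments = zip(itertools.cycle(BACKENDS), tasks)
        futures = [pool.submit(run_on, b, t) for b, t in assignments]
        return [f.result() for f in futures]

print(jungle_map(["t0", "t1", "t2", "t3"]))
```

A real implementation would also weigh per-platform cost, data locality, and queue wait times when choosing a backend, which is where much of the research complexity lies.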
https://en.wikipedia.org/wiki/Jungle_computing
The Tesla Personal Supercomputer is a desktop computer (personal supercomputer) that is backed by Nvidia and built by various hardware vendors.[1] It is meant to be a demonstration of the capabilities of Nvidia's Tesla GPGPU brand; it utilizes Nvidia's CUDA parallel computing architecture and is powered by up to 2688 parallel processing cores per GPGPU,[2] which allow it to achieve speeds up to 250 times faster than standard PCs, according to Nvidia.
https://en.wikipedia.org/wiki/Nvidia_Tesla_Personal_Supercomputer