The GeoNetwork opensource (GNOS) project is a free and open-source (FOSS) cataloging application for spatially referenced resources; it is a catalog of location-oriented information.

== Outline ==
It is a standardized and decentralized spatial information management environment designed to enable access to geo-referenced databases, cartographic products and related metadata from a variety of sources, enhancing spatial information exchange and sharing between organizations and their audiences over the internet. Using the Z39.50 protocol, it both accesses remote catalogs and makes its own data available to other catalog services. As of 2007, the OGC Web Catalog Service was being implemented. Maps, including those derived from satellite imagery, are effective communication tools and play an important role in the work of decision makers (e.g., sustainable development planners and humanitarian and emergency managers), who need quick, reliable, up-to-date and user-friendly cartographic products as a basis for action and to better plan and monitor their activities; GIS experts, who need to exchange consistent and updated geographical data; and spatial analysts, who need multidisciplinary data to perform preliminary geographical analysis and make reliable forecasts.

== Deployment ==
The software has been deployed by various organizations, the first being FAO GeoNetwork and WFP VAM-SIE-GeoNetwork, both at their headquarters in Rome, Italy. Furthermore, the WHO, CGIAR, BRGM, ESA, FGDC and the Global Change Information and Research Centre (GCIRC) of China are working on GeoNetwork opensource implementations as their spatial information management capacity. It is used for several risk information systems, in particular in the Gambia. Several related tools are packaged with GeoNetwork, including GeoServer: GeoServer stores geographical data, while GeoNetwork catalogs collections of such data.
== See also ==
Comparison of GIS software
List of GIS software
List of open source software packages
Open Source Geospatial Foundation

== External links ==
Official website
Sourceforge project
GitHub repository
Wikipedia/GeoNetwork_opensource
In probability theory and statistics, two real-valued random variables, $X$ and $Y$, are said to be uncorrelated if their covariance, $\operatorname{cov}[X,Y]=\operatorname{E}[XY]-\operatorname{E}[X]\operatorname{E}[Y]$, is zero. If two variables are uncorrelated, there is no linear relationship between them. Uncorrelated random variables have a Pearson correlation coefficient, when it exists, of zero, except in the trivial case when either variable has zero variance (is a constant); in that case the correlation is undefined. In general, uncorrelatedness is not the same as orthogonality, except in the special case where at least one of the two random variables has an expected value of 0. In this case, the covariance is the expectation of the product, and $X$ and $Y$ are uncorrelated if and only if $\operatorname{E}[XY]=0$. If $X$ and $Y$ are independent, with finite second moments, then they are uncorrelated. However, not all uncorrelated variables are independent.

== Definition ==
=== Definition for two real random variables ===
Two random variables $X,Y$ are called uncorrelated if their covariance $\operatorname{Cov}[X,Y]=\operatorname{E}[(X-\operatorname{E}[X])(Y-\operatorname{E}[Y])]$ is zero.
Formally:
$$X,Y{\text{ uncorrelated}}\quad \iff \quad \operatorname{E}[XY]=\operatorname{E}[X]\operatorname{E}[Y]$$

=== Definition for two complex random variables ===
Two complex random variables $Z,W$ are called uncorrelated if both their covariance $\operatorname{K}_{ZW}=\operatorname{E}[(Z-\operatorname{E}[Z])\overline{(W-\operatorname{E}[W])}]$ and their pseudo-covariance $\operatorname{J}_{ZW}=\operatorname{E}[(Z-\operatorname{E}[Z])(W-\operatorname{E}[W])]$ are zero, i.e.
$$Z,W{\text{ uncorrelated}}\quad \iff \quad \operatorname{E}[Z\overline{W}]=\operatorname{E}[Z]\cdot \operatorname{E}[\overline{W}]{\text{ and }}\operatorname{E}[ZW]=\operatorname{E}[Z]\cdot \operatorname{E}[W]$$

=== Definition for more than two random variables ===
A set of two or more random variables $X_{1},\ldots ,X_{n}$ is called uncorrelated if each pair of them is uncorrelated. This is equivalent to the requirement that the off-diagonal elements of the autocovariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ of the random vector $\mathbf{X}=[X_{1}\ldots X_{n}]^{\mathrm{T}}$ are all zero.
The autocovariance matrix is defined as:
$$\operatorname{K}_{\mathbf{X}\mathbf{X}}=\operatorname{cov}[\mathbf{X},\mathbf{X}]=\operatorname{E}[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{\mathrm{T}}]=\operatorname{E}[\mathbf{X}\mathbf{X}^{\mathrm{T}}]-\operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{\mathrm{T}}$$

== Examples of dependence without correlation ==
=== Example 1 ===
Let $X$ be a random variable that takes the value 0 with probability 1/2 and the value 1 with probability 1/2. Let $Y$ be a random variable, independent of $X$, that takes the value −1 with probability 1/2 and the value 1 with probability 1/2. Let $U$ be the random variable $U=XY$. The claim is that $U$ and $X$ have zero covariance (and thus are uncorrelated), but are not independent.
Proof: Taking into account that
$$\operatorname{E}[U]=\operatorname{E}[XY]=\operatorname{E}[X]\operatorname{E}[Y]=\operatorname{E}[X]\cdot 0=0,$$
where the second equality holds because $X$ and $Y$ are independent, one gets
$$\begin{aligned}\operatorname{cov}[U,X]&=\operatorname{E}[(U-\operatorname{E}[U])(X-\operatorname{E}[X])]=\operatorname{E}[U(X-{\tfrac{1}{2}})]\\&=\operatorname{E}[X^{2}Y-{\tfrac{1}{2}}XY]=\operatorname{E}[(X^{2}-{\tfrac{1}{2}}X)Y]=\operatorname{E}[X^{2}-{\tfrac{1}{2}}X]\operatorname{E}[Y]=0.\end{aligned}$$
Therefore, $U$ and $X$ are uncorrelated. Independence of $U$ and $X$ would mean that, for all $a$ and $b$, $\Pr(U=a\mid X=b)=\Pr(U=a)$. This is not true; in particular, it fails for $a=1$ and $b=0$:
$$\Pr(U=1\mid X=0)=\Pr(XY=1\mid X=0)=0,$$
$$\Pr(U=1)=\Pr(XY=1)=1/4.$$
Thus $\Pr(U=1\mid X=0)\neq \Pr(U=1)$, so $U$ and $X$ are not independent. Q.E.D.
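The claim in Example 1 can also be checked by exhaustive enumeration of the four equally likely outcomes; a minimal Python sketch (variable names are ours), using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# The four equally likely outcomes of (X, Y) from Example 1:
# X in {0, 1}, Y in {-1, 1}, each pair with probability 1/4.
quarter = Fraction(1, 4)
outcomes = [(x, y, quarter) for x, y in product([0, 1], [-1, 1])]

# U = X * Y; compute E[U], E[X], E[UX] exactly.
E_U = sum(p * x * y for x, y, p in outcomes)
E_X = sum(p * x for x, y, p in outcomes)
E_UX = sum(p * (x * y) * x for x, y, p in outcomes)
cov_UX = E_UX - E_U * E_X  # covariance of U and X

# Dependence: compare Pr(U = 1 | X = 0) with Pr(U = 1).
pr_U1 = sum(p for x, y, p in outcomes if x * y == 1)
pr_X0 = sum(p for x, y, p in outcomes if x == 0)
pr_U1_given_X0 = sum(p for x, y, p in outcomes if x == 0 and x * y == 1) / pr_X0

print(cov_UX)                  # 0: uncorrelated
print(pr_U1, pr_U1_given_X0)   # 1/4 vs 0: not independent
```

The zero covariance together with $\Pr(U=1\mid X=0)\neq\Pr(U=1)$ reproduces exactly the conclusion of the proof above.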
=== Example 2 ===
If $X$ is a continuous random variable uniformly distributed on $[-1,1]$ and $Y=X^{2}$, then $X$ and $Y$ are uncorrelated even though $X$ determines $Y$ and a particular value of $Y$ can be produced by only one or two values of $X$. The marginal densities are
$$f_{X}(t)={\tfrac{1}{2}}I_{[-1,1]}(t);\qquad f_{Y}(t)={\frac{1}{2{\sqrt{t}}}}I_{]0,1]}(t).$$
On the other hand, the joint density $f_{X,Y}$ is 0 on the triangle defined by $0<X<Y<1$, although $f_{X}\times f_{Y}$ is not null on this domain. Therefore $f_{X,Y}(X,Y)\neq f_{X}(X)\times f_{Y}(Y)$ and the variables are not independent. Moreover,
$$E[X]={\frac{1-1}{4}}=0;\qquad E[Y]={\frac{1^{3}-(-1)^{3}}{3\times 2}}={\frac{1}{3}}$$
$$\operatorname{Cov}[X,Y]=E\left[(X-E[X])(Y-E[Y])\right]=E\left[X^{3}-{\frac{X}{3}}\right]={\frac{1^{4}-(-1)^{4}}{4\times 2}}=0.$$
Therefore the variables are uncorrelated.

== When uncorrelatedness implies independence ==
There are cases in which uncorrelatedness does imply independence. One of these is the case in which both random variables are two-valued (so each can be linearly transformed to have a Bernoulli distribution). Further, two jointly normally distributed random variables are independent if they are uncorrelated, although this does not hold for variables whose marginal distributions are normal and uncorrelated but whose joint distribution is not jointly normal (see Normally distributed and uncorrelated does not imply independent).
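The moment computations in Example 2 above reduce to integrating powers of $x$ against the uniform density on $[-1,1]$; a small sketch verifying them with exact arithmetic (the helper `moment` is ours):

```python
from fractions import Fraction

def moment(k):
    """E[X^k] for X uniform on [-1, 1]: the integral of x^k * (1/2) over [-1, 1].

    Antiderivative x^(k+1)/(k+1), evaluated between -1 and 1, times density 1/2.
    """
    return (Fraction(1, k + 1) - Fraction((-1) ** (k + 1), k + 1)) / 2

E_X = moment(1)                   # odd moment: 0
E_Y = moment(2)                   # E[X^2] = 1/3
cov_XY = moment(3) - E_X * E_Y    # Cov(X, X^2) = E[X^3] - E[X] E[X^2]

print(E_X, E_Y, cov_XY)
```

All odd moments vanish by symmetry, which is why $X$ and $Y=X^{2}$ come out uncorrelated despite being dependent.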
== Generalizations ==
=== Uncorrelated random vectors ===
Two random vectors $\mathbf{X}=(X_{1},\ldots ,X_{m})^{\mathrm{T}}$ and $\mathbf{Y}=(Y_{1},\ldots ,Y_{n})^{\mathrm{T}}$ are called uncorrelated if
$$\operatorname{E}[\mathbf{X}\mathbf{Y}^{\mathrm{T}}]=\operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^{\mathrm{T}}.$$
They are uncorrelated if and only if their cross-covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{Y}}$ is zero. Two complex random vectors $\mathbf{Z}$ and $\mathbf{W}$ are called uncorrelated if their cross-covariance matrix and their pseudo-cross-covariance matrix are both zero, i.e. if
$$\operatorname{K}_{\mathbf{Z}\mathbf{W}}=\operatorname{J}_{\mathbf{Z}\mathbf{W}}=0,$$
where $\operatorname{K}_{\mathbf{Z}\mathbf{W}}=\operatorname{E}[(\mathbf{Z}-\operatorname{E}[\mathbf{Z}])(\mathbf{W}-\operatorname{E}[\mathbf{W}])^{\mathrm{H}}]$ and $\operatorname{J}_{\mathbf{Z}\mathbf{W}}=\operatorname{E}[(\mathbf{Z}-\operatorname{E}[\mathbf{Z}])(\mathbf{W}-\operatorname{E}[\mathbf{W}])^{\mathrm{T}}]$.

=== Uncorrelated stochastic processes ===
Two stochastic processes $\left\{X_{t}\right\}$ and $\left\{Y_{t}\right\}$ are called uncorrelated if their cross-covariance
$$\operatorname{K}_{\mathbf{X}\mathbf{Y}}(t_{1},t_{2})=\operatorname{E}\left[\left(X(t_{1})-\mu_{X}(t_{1})\right)\left(Y(t_{2})-\mu_{Y}(t_{2})\right)\right]$$
is zero for all times.
Formally:
$$\left\{X_{t}\right\},\left\{Y_{t}\right\}{\text{ uncorrelated}}\quad :\iff \quad \forall t_{1},t_{2}\colon \operatorname{K}_{\mathbf{X}\mathbf{Y}}(t_{1},t_{2})=0.$$

== See also ==
Correlation and dependence
Binomial distribution: Covariance between two binomials
Uncorrelated Volume Element

== Further reading ==
Galen R. Shorack, Probability for Statisticians, Springer (2000). ISBN 0-387-98953-6
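The matrix definitions above (autocovariance $\operatorname{K}_{\mathbf{X}\mathbf{X}}=\operatorname{E}[\mathbf{X}\mathbf{X}^{\mathrm{T}}]-\operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{\mathrm{T}}$ and its zero off-diagonal criterion) can be illustrated numerically. A minimal sketch with a construction of ours in the spirit of Example 1 (a two-component vector whose components are uncorrelated but dependent):

```python
import numpy as np

# X = (X1, X2): X1 is a fair coin in {0, 1}; X2 = X1 * S, with S an
# independent fair sign in {-1, 1}. Each of the four states has probability 1/4.
states = []  # list of (probability, realization of X)
for x1 in (0, 1):
    for s in (-1, 1):
        states.append((0.25, np.array([x1, x1 * s], dtype=float)))

# K_XX = E[X X^T] - E[X] E[X]^T, computed by exact enumeration.
EX = sum(p * x for p, x in states)
EXXT = sum(p * np.outer(x, x) for p, x in states)
K = EXXT - np.outer(EX, EX)

print(K)  # off-diagonal entries are zero: the components are uncorrelated
```

The off-diagonal entries of `K` vanish even though $X_2$ is a function of $X_1$ and $S$, mirroring the dependence-without-correlation examples above.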
Wikipedia/Uncorrelatedness_(probability_theory)
In mathematics, Vojta's conjecture is a conjecture introduced by Paul Vojta (1987) about heights of points on algebraic varieties over number fields. The conjecture was motivated by an analogy between Diophantine approximation and Nevanlinna theory (value distribution theory) in complex analysis. It implies many other conjectures in Diophantine approximation, Diophantine equations, arithmetic geometry, and mathematical logic.

== Statement of the conjecture ==
Let $F$ be a number field, let $X/F$ be a non-singular algebraic variety, let $D$ be an effective divisor on $X$ with at worst normal crossings, let $H$ be an ample divisor on $X$, and let $K_{X}$ be a canonical divisor on $X$. Choose Weil height functions $h_{H}$ and $h_{K_{X}}$ and, for each absolute value $v$ on $F$, a local height function $\lambda_{D,v}$. Fix a finite set of absolute values $S$ of $F$, and let $\epsilon >0$. Then there is a constant $C$ and a non-empty Zariski open set $U\subseteq X$, depending on all of the above choices, such that
$$\sum_{v\in S}\lambda_{D,v}(P)+h_{K_{X}}(P)\leq \epsilon h_{H}(P)+C\quad {\hbox{for all }}P\in U(F).$$
Examples: Let $X=\mathbb{P}^{N}$. Then $K_{X}\sim -(N+1)H$, so Vojta's conjecture reads
$$\sum_{v\in S}\lambda_{D,v}(P)\leq (N+1+\epsilon)h_{H}(P)+C$$
for all $P\in U(F)$.
Let $X$ be a variety with trivial canonical bundle, for example an abelian variety, a K3 surface or a Calabi–Yau variety. Vojta's conjecture predicts that if $D$ is an effective ample normal crossings divisor, then the $S$-integral points on the affine variety $X\setminus D$ are not Zariski dense. For abelian varieties, this was conjectured by Lang and proven by Faltings (1991). Let $X$ be a variety of general type, i.e., $K_{X}$ is ample on some non-empty Zariski open subset of $X$. Then taking $S=\emptyset$, Vojta's conjecture predicts that $X(F)$ is not Zariski dense in $X$. This last statement for varieties of general type is the Bombieri–Lang conjecture.

== Generalizations ==
There are generalizations in which $P$ is allowed to vary over $X({\overline{F}})$, and there is an additional term in the upper bound that depends on the discriminant of the field extension $F(P)/F$. There are generalizations in which the non-archimedean local heights $\lambda_{D,v}$ are replaced by truncated local heights, which are local heights in which multiplicities are ignored. These versions of Vojta's conjecture provide natural higher-dimensional analogues of the abc conjecture.

== References ==
Vojta, Paul (1987). Diophantine Approximations and Value Distribution Theory. Lecture Notes in Mathematics. Vol. 1239. Berlin, New York: Springer-Verlag. doi:10.1007/BFb0072989. ISBN 978-3-540-17551-3. MR 0883451. Zbl 0609.14011.
Faltings, Gerd (1991). "Diophantine approximation on abelian varieties". Annals of Mathematics. 133 (3): 549–576. doi:10.2307/2944319. JSTOR 2944319. MR 1109353.
Wikipedia/Vojta's_conjecture
Inter-universal Teichmüller theory (IUT or IUTT) is the name given by mathematician Shinichi Mochizuki to a theory he developed in the 2000s, following his earlier work in arithmetic geometry. According to Mochizuki, it is "an arithmetic version of Teichmüller theory for number fields equipped with an elliptic curve". The theory was made public in a series of four preprints posted in 2012 to his website. The most striking claimed application of the theory is to provide a proof for various outstanding conjectures in number theory, in particular the abc conjecture. Mochizuki and a few other mathematicians claim that the theory indeed yields such a proof, but this has so far not been accepted by the mathematical community.

== History ==
The theory was developed entirely by Mochizuki up to 2012, and the last parts were written up in a series of four preprints. Mochizuki made his work public in August 2012 with none of the fanfare that typically accompanies major advances, posting the papers only to his institution's preprint server and his website, and making no announcement to colleagues. Soon after, the papers were picked up by Akio Tamagawa and Ivan Fesenko, and the mathematical community at large was made aware of the claims to have proven the abc conjecture. The reception of the claim was at first enthusiastic, though number theorists were baffled by the original language introduced and used by Mochizuki. Workshops on IUT were held at the Research Institute for Mathematical Sciences (RIMS) in March 2015, in Beijing in July 2015, in Oxford in December 2015 and again at RIMS in July 2016. The last two events attracted more than 100 participants, and presentations from these workshops are available online. However, these did not lead to broader understanding of Mochizuki's ideas, and the status of his claimed proof was not changed by these events.
In 2017, a number of mathematicians who had examined Mochizuki's argument in detail pointed to a specific point which they could not understand, near the end of the proof of Corollary 3.12 in the third of the four papers. In March 2018, Peter Scholze and Jakob Stix visited Kyoto University for five days of discussions with Mochizuki and Yuichiro Hoshi; while this did not resolve the differences, it brought into focus where the difficulties lay. It also resulted in the publication of reports of the discussion by both sides:
In May 2018, Scholze and Stix wrote a 10-page report, updated in September 2018, detailing the (previously identified) gap in Corollary 3.12 in the proof, describing it as "so severe that in [their] opinion small modifications will not rescue the proof strategy", and concluding that Mochizuki's preprint cannot claim a proof of abc.
In September 2018, Mochizuki wrote a 41-page summary of his view of the discussions and his conclusions about which aspects of his theory he considers misunderstood. In particular he names: "re-initialization" of (mathematical) objects, making their previous "history" inaccessible; "labels" for different "versions" of objects; and the emphasis on the types ("species") of objects.
In July and October 2018, Mochizuki wrote 8- and 5-page reactions to the May and September versions of the Scholze–Stix report, maintaining that the gap is the result of their simplifications and that there is no gap in his theory.
Mochizuki published his work in a series of four journal papers in 2021 in Publications of the Research Institute for Mathematical Sciences, Kyoto University, a journal for which he is editor-in-chief. In a review of these papers in zbMATH, Peter Scholze wrote that his concerns from 2017 and 2018 "have not been addressed in the published version".
Other authors have pointed to the unresolved dispute between Mochizuki and Scholze over the correctness of this work as an instance in which the peer review process of mathematical journal publication has failed in its usual function of convincing the mathematical community as a whole of the validity of a result.

== Mathematical significance ==
=== Scope of the theory ===
Inter-universal Teichmüller theory is a continuation of Mochizuki's previous work in arithmetic geometry. This work, which has been peer-reviewed and well received by the mathematical community, includes major contributions to anabelian geometry and the development of p-adic Teichmüller theory, Hodge–Arakelov theory and Frobenioid categories. It was developed with the explicitly stated aim of getting a deeper understanding of abc and related conjectures. In the geometric setting, analogues of certain ideas of IUT appear in Bogomolov's proof of the geometric Szpiro inequality. The key prerequisite for IUT is Mochizuki's mono-anabelian geometry and its reconstruction results, which make it possible to retrieve various scheme-theoretic objects associated to a hyperbolic curve over a number field from the knowledge of its fundamental group, or of certain Galois groups. IUT applies algorithmic results of mono-anabelian geometry to reconstruct relevant schemes after applying arithmetic deformations to them; a key role is played by three rigidities established in Mochizuki's étale theta theory. Roughly speaking, arithmetic deformations change the multiplication of a given ring, and the task is to measure how much the addition is changed. Infrastructure for deformation procedures is provided by certain links between so-called Hodge theaters, such as a theta-link and a log-link. These Hodge theaters use two main symmetries of IUT: multiplicative arithmetic and additive geometric.
On one hand, Hodge theaters generalize such classical objects in number theory as the adeles and ideles in relation to their global elements. On the other hand, they generalize certain structures appearing in Mochizuki's previous Hodge–Arakelov theory. The links between theaters are not compatible with ring or scheme structures, and are performed outside conventional arithmetic geometry. However, they are compatible with certain group structures, and absolute Galois groups as well as certain types of topological groups play a fundamental role in IUT. Considerations of multiradiality, a generalization of functoriality, imply that three mild indeterminacies have to be introduced.

=== Consequences in number theory ===
The main claimed application of IUT is to various conjectures in number theory, among them the abc conjecture, but also more geometric conjectures such as Szpiro's conjecture on elliptic curves and Vojta's conjecture for curves. The first step is to translate arithmetic information on these objects to the setting of Frobenioid categories. It is claimed that extra structure on this side allows one to deduce statements which translate back into the claimed results. One issue with Mochizuki's arguments, which he acknowledges, is that it does not seem possible to get intermediate results in his claimed proof of the abc conjecture using IUT. In other words, there is no smaller subset of his arguments more easily amenable to analysis by outside experts which would yield a new result in Diophantine geometry. Vesselin Dimitrov extracted from Mochizuki's arguments a proof of a quantitative result on abc, which could in principle give a refutation of the proof.
== External links ==
Shinichi Mochizuki (1995–2018), Papers of Shinichi Mochizuki
Shinichi Mochizuki (2014), A panoramic overview of inter-universal Teichmüller theory
Yuichiro Hoshi; Go Yamashita (2015), RIMS Joint Research Workshop: On the verification and further development of inter-universal Teichmuller theory
Ivan Fesenko (2015), Arithmetic deformation theory via arithmetic fundamental groups and nonarchimedean theta functions, notes on the work of Shinichi Mochizuki
Yuichiro Hoshi (2015), Introduction to inter-universal Teichmüller theory, a survey in Japanese
Wikipedia/Inter-universal_Teichmuller_theory
The abc conjecture (also known as the Oesterlé–Masser conjecture) is a conjecture in number theory that arose out of a discussion between Joseph Oesterlé and David Masser in 1985. It is stated in terms of three positive integers $a,b$ and $c$ (hence the name) that are relatively prime and satisfy $a+b=c$. The conjecture essentially states that the product of the distinct prime factors of $abc$ is usually not much smaller than $c$. A number of famous conjectures and theorems in number theory would follow immediately from the abc conjecture or its versions. Mathematician Dorian Goldfeld described the abc conjecture as "the most important unsolved problem in Diophantine analysis". The abc conjecture originated as the outcome of attempts by Oesterlé and Masser to understand the Szpiro conjecture about elliptic curves, which involves more geometric structures in its statement than the abc conjecture. The abc conjecture was shown to be equivalent to the modified Szpiro conjecture. Various attempts to prove the abc conjecture have been made, but none has gained broad acceptance. Shinichi Mochizuki claimed to have a proof in 2012, but the conjecture is still regarded as unproven by the mainstream mathematical community.

== Formulations ==
Before stating the conjecture, the notion of the radical of an integer must be introduced: for a positive integer $n$, the radical of $n$, denoted $\operatorname{rad}(n)$, is the product of the distinct prime factors of $n$.
For example, rad(16) = rad(2^4) = rad(2) = 2, rad(17) = 17, rad(18) = rad(2·3^2) = 2·3 = 6, and rad(1000000) = rad(2^6·5^6) = 2·5 = 10. If a, b, and c are coprime positive integers such that a + b = c, it turns out that "usually" c < rad(abc). The abc conjecture deals with the exceptions. Specifically, it states that: for every real number ε > 0, there exist only finitely many triples (a, b, c) of coprime positive integers with a + b = c such that c > rad(abc)^(1+ε). An equivalent formulation is: for every real number ε > 0, there exists a constant K_ε such that c < K_ε · rad(abc)^(1+ε) holds for every triple (a, b, c) of coprime positive integers with a + b = c. Equivalently (using the little o notation): for all triples (a, b, c) of coprime positive integers with a + b = c, c < rad(abc)^(1+o(1)) as c → ∞. A fourth equivalent formulation of the conjecture involves the quality q(a, b, c) of the triple (a, b, c), which is defined as q(a, b, c) = log(c) / log(rad(abc)). For example: q(4, 127, 131) = log(131) / log(rad(4·127·131)) = log(131) / log(2·127·131) ≈ 0.46820, and q(3, 125, 128) = log(128) / log(rad(3·125·128)) = log(128) / log(30) ≈ 1.426565. A typical triple (a, b, c) of coprime positive integers with a + b = c will have c < rad(abc), i.e. q(a, b, c) < 1. Triples with q > 1 such as in the second example are rather special; they consist of numbers divisible by high powers of small prime numbers. The fourth formulation is: for every real number ε > 0, there exist only finitely many triples (a, b, c) of coprime positive integers with a + b = c such that q(a, b, c) > 1 + ε. Whereas it is known that there are infinitely many triples (a, b, c) of coprime positive integers with a + b = c such that q(a, b, c) > 1, the conjecture predicts that only finitely many of those have q > 1.01 or q > 1.001 or even q > 1.0001, etc. In particular, if the conjecture is true, then there must exist a triple (a, b, c) that achieves the maximal possible quality q(a, b, c). == Examples of triples with small radical == The condition that ε > 0 is necessary, as there exist infinitely many triples a, b, c with c > rad(abc). For example, let a = 1, b = 2^(6n) − 1, and c = 2^(6n). The integer b is divisible by 9: since 2^6 = 64 ≡ 1 (mod 9), also 2^(6n) ≡ 1 (mod 9). Using this fact, the following calculation is made: rad(abc) = rad(a) rad(b) rad(c) = 2 rad(b) ≤ 2 · (b/3) < (2/3) c, where the middle inequality holds because 9 divides b while the squarefree number rad(b) contains only a single factor of 3. By replacing the exponent 6n with other exponents forcing b to have larger square factors, the ratio between the radical and c can be made arbitrarily small.
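One standard family with c > rad(abc) takes a = 1, b = 2^(6n) − 1, c = 2^(6n); because 9 divides b, the radical stays below (2/3)c. A numerical check of this claim (illustrative sketch; the `rad` helper is trial division and not from any library):

```python
def rad(n: int) -> int:
    # product of the distinct prime factors of n, by trial division
    product, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            product *= p
            while n % p == 0:
                n //= p
        p += 1
    return product * n if n > 1 else product

for n in range(1, 6):
    c = 2 ** (6 * n)
    b = c - 1                 # a = 1, so a + b = c
    assert b % 9 == 0         # 2^6 = 64 ≡ 1 (mod 9), hence 9 | 2^(6n) − 1
    r = rad(1 * b * c)
    assert 3 * r < 2 * c      # rad(abc) = 2·rad(b) ≤ 2b/3 < 2c/3, so r < c
    print(n, r, c)
```

For n = 1 this gives the triple (1, 63, 64) with rad(1·63·64) = 42 < 64.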
Specifically, let p > 2 be a prime and consider a = 1, b = 2^(p(p−1)) − 1, and c = 2^(p(p−1)). Then b is divisible by p^2. This follows from Fermat's little theorem, which shows that, for p > 2, 2^(p−1) = pk + 1 for some integer k. Raising both sides to the power of p then shows that 2^(p(p−1)) = p^2(...) + 1, so p^2 divides b. And now with a similar calculation as above, the following results: rad(abc) = 2 rad(b) ≤ 2 · (b/p) < (2/p) c. A list of the highest-quality triples (triples with a particularly small radical relative to c) is given below; the highest quality, 1.6299, was found by Eric Reyssat (Lando & Zvonkin 2004, p. 137) for a = 2, b = 3^10 · 109 = 6436341, c = 23^5 = 6436343. == Some consequences == The abc conjecture has a large number of consequences. These include both known results (some of which have been proven separately only since the conjecture has been stated) and conjectures for which it gives a conditional proof. The consequences include: Roth's theorem on Diophantine approximation of algebraic numbers. The Mordell conjecture (already proven in general by Gerd Faltings). As equivalent, Vojta's conjecture in dimension 1. The Erdős–Woods conjecture allowing for a finite number of counterexamples. The existence of infinitely many non-Wieferich primes in every base b > 1. The weak form of Marshall Hall's conjecture on the separation between squares and cubes of integers. Fermat's Last Theorem has a famously difficult proof by Andrew Wiles. However, it follows easily, at least for n ≥ 6, from an effective form of a weak version of the abc conjecture. The abc conjecture says the lim sup of the set of all qualities (defined above) is 1, which implies the much weaker assertion that there is a finite upper bound for qualities. The conjecture that 2 is such an upper bound suffices for a very short proof of Fermat's Last Theorem for n ≥ 6. The Fermat–Catalan conjecture, a generalization of Fermat's Last Theorem concerning powers that are sums of powers.
The L-function L(s, χ_d) formed with the Legendre symbol has no Siegel zero, given a uniform version of the abc conjecture in number fields, not just the abc conjecture as formulated above for rational integers. A polynomial P(x) has only finitely many perfect powers for all integers x if P has at least three simple zeros. A generalization of Tijdeman's theorem concerning the number of solutions of y^m = x^n + k (Tijdeman's theorem answers the case k = 1), and Pillai's conjecture (1931) concerning the number of solutions of Ay^m = Bx^n + k. As equivalent, the Granville–Langevin conjecture, that if f is a square-free binary form of degree n > 2, then for every real β > 2 there is a constant C(f, β) such that for all coprime integers x, y, the radical of f(x, y) exceeds C · max{|x|, |y|}^(n−β). All the polynomials (x^n − 1)/(x − 1) take infinitely many square-free values. As equivalent, the modified Szpiro conjecture, which would yield a bound of rad(abc)^(1.2+ε). Dąbrowski (1996) has shown that the abc conjecture implies that the Diophantine equation n! + A = k^2 has only finitely many solutions for any given integer A. There are ~c_f N positive integers n ≤ N for which f(n)/B' is square-free, with c_f > 0 a positive constant defined as: The Beal conjecture, a generalization of Fermat's Last Theorem proposing that if A, B, C, x, y, and z are positive integers with A^x + B^y = C^z and x, y, z > 2, then A, B, and C have a common prime factor. The abc conjecture would imply that there are only finitely many counterexamples. Lang's conjecture, a lower bound for the height of a non-torsion rational point of an elliptic curve. A negative solution to the Erdős–Ulam problem on dense sets of Euclidean points with rational distances. An effective version of Siegel's theorem about integral points on algebraic curves. == Theoretical results == The abc conjecture implies that c can be bounded above by a near-linear function of the radical of abc. Bounds are known that are exponential.
Specifically, the following bounds have been proven: In these bounds, K1 and K3 are constants that do not depend on a, b, or c, and K2 is a constant that depends on ε (in an effectively computable way) but not on a, b, or c. The bounds apply to any triple for which c > 2. There are also theoretical results that provide a lower bound on the best possible form of the abc conjecture. In particular, Stewart & Tijdeman (1986) showed that there are infinitely many triples (a, b, c) of coprime integers with a + b = c and for all k < 4. The constant k was improved to k = 6.068 by van Frankenhuysen (2000). == Computational results == In 2006, the Mathematics Department of Leiden University in the Netherlands, together with the Dutch Kennislink science institute, launched the ABC@Home project, a grid computing system which aims to discover additional triples a, b, c with rad(abc) < c. Although no finite set of examples or counterexamples can resolve the abc conjecture, it is hoped that patterns in the triples discovered by this project will lead to insights about the conjecture and about number theory more generally. As of May 2014, ABC@Home had found 23.8 million triples. Note: the quality q(a, b, c) of the triple (a, b, c) is defined above. == Refined forms, generalizations and related statements == The abc conjecture is an integer analogue of the Mason–Stothers theorem for polynomials. A strengthening, proposed by Baker (1998), states that in the abc conjecture one can replace rad(abc) by ε^(−ω) rad(abc), where ω is the total number of distinct primes dividing a, b and c. Andrew Granville noticed that the minimum of the function (ε^(−ω) rad(abc))^(1+ε) over ε > 0 occurs when ε = ω / log(rad(abc)).
This inspired Baker (2004) to propose a sharper form of the abc conjecture, namely c < κ rad(abc) (log rad(abc))^ω / ω!, with κ an absolute constant. After some computational experiments he found that a value of 6/5 was admissible for κ. This version is called the "explicit abc conjecture". Baker (1998) also describes related conjectures of Andrew Granville that would give upper bounds on c of the form where Ω(n) is the total number of prime factors of n, and where Θ(n) is the number of integers up to n divisible only by primes dividing n. Robert, Stewart & Tenenbaum (2014) proposed a more precise inequality based on Robert & Tenenbaum (2013). Let k = rad(abc). They conjectured there is a constant C1 such that holds whereas there is a constant C2 such that holds infinitely often. Browkin & Brzeziński (1994) formulated the n conjecture, a version of the abc conjecture involving n > 2 integers. == Claimed proofs == Lucien Szpiro proposed a solution in 2007, but it was found to be incorrect shortly afterwards. Since August 2012, Shinichi Mochizuki has claimed a proof of Szpiro's conjecture and therefore the abc conjecture. He released a series of four preprints developing a new theory he called inter-universal Teichmüller theory (IUTT), which he then applied to prove the abc conjecture. The papers have not been widely accepted by the mathematical community as providing a proof of abc. This is not only because of their length and the difficulty of understanding them, but also because at least one specific point in the argument has been identified as a gap by some other experts. Although a few mathematicians have vouched for the correctness of the proof and have attempted to communicate their understanding via workshops on IUTT, they have failed to convince the number theory community at large. In March 2018, Peter Scholze and Jakob Stix visited Kyoto for discussions with Mochizuki.
While they did not resolve the differences, they brought them into clearer focus. Scholze and Stix wrote a report asserting and explaining an error in the logic of the proof and claiming that the resulting gap was "so severe that ... small modifications will not rescue the proof strategy"; Mochizuki claimed that they misunderstood vital aspects of the theory and made invalid simplifications. On April 3, 2020, two mathematicians from the Kyoto research institute where Mochizuki works announced that his claimed proof would be published in Publications of the Research Institute for Mathematical Sciences, the institute's journal. Mochizuki is chief editor of the journal but recused himself from the review of the paper. The announcement was received with skepticism by Kiran Kedlaya and Edward Frenkel, as well as being described by Nature as "unlikely to move many researchers over to Mochizuki's camp". In March 2021, Mochizuki's proof was published in that journal. == See also == List of unsolved problems in mathematics == Notes == == References == == Sources == == External links == ABC@home Distributed computing project called ABC@Home. Easy as ABC: Easy to follow, detailed explanation by Brian Hayes. Weisstein, Eric W. "abc Conjecture". MathWorld. Abderrahmane Nitaj's ABC conjecture home page Bart de Smit's ABC Triples webpage http://www.math.columbia.edu/~goldfeld/ABC-Conjecture.pdf The ABC's of Number Theory by Noam D. Elkies Questions about Number by Barry Mazur Philosophy behind Mochizuki's work on the ABC conjecture on MathOverflow ABC Conjecture Polymath project wiki page linking to various sources of commentary on Mochizuki's papers. abc Conjecture Numberphile video News about IUT by Mochizuki
Wikipedia/ABC_conjecture
Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic keys for both the encryption of plaintext and the decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link. The requirement that both parties have access to the secret key is one of the main drawbacks of symmetric-key encryption, in comparison to public-key encryption (also known as asymmetric-key encryption). However, symmetric-key encryption algorithms are usually better for bulk encryption. With the exception of the one-time pad, they have a smaller key size, which means less storage space and faster transmission. Due to this, asymmetric-key encryption is often used to exchange the secret key for symmetric-key encryption. == Types == Symmetric-key encryption can use either stream ciphers or block ciphers. Stream ciphers encrypt the digits (typically bytes), or letters (in substitution ciphers), of a message one at a time. An example is ChaCha20. Substitution ciphers are well-known ciphers, but they can be easily decrypted using a frequency table. Block ciphers take a number of bits and encrypt them as a single unit, padding the plaintext so that its length is a multiple of the block size. The Advanced Encryption Standard (AES) algorithm, approved by NIST in December 2001, uses 128-bit blocks. == Implementations == Examples of popular symmetric-key algorithms include Twofish, Serpent, AES (Rijndael), Camellia, Salsa20, ChaCha20, Blowfish, CAST5, Kuznyechik, RC4, DES, 3DES, Skipjack, Safer, and IDEA. == Use as a cryptographic primitive == Symmetric ciphers are commonly used to achieve other cryptographic primitives than just encryption. Encrypting a message does not guarantee that it will remain unchanged while encrypted.
Hence, often a message authentication code is added to a ciphertext to ensure that changes to the ciphertext will be noted by the receiver. Message authentication codes can be constructed from an AEAD cipher (e.g. AES-GCM). However, symmetric ciphers cannot be used for non-repudiation purposes except by involving additional parties. See the ISO/IEC 13888-2 standard. Another application is to build hash functions from block ciphers. See one-way compression function for descriptions of several such methods. == Construction of symmetric ciphers == Many modern block ciphers are based on a construction proposed by Horst Feistel. Feistel's construction makes it possible to build invertible functions from other functions that are themselves not invertible. == Security of symmetric ciphers == Symmetric ciphers have historically been susceptible to known-plaintext attacks, chosen-plaintext attacks, differential cryptanalysis and linear cryptanalysis. Careful construction of the functions for each round can greatly reduce the chances of a successful attack. It is also possible to increase the key length or the number of rounds in the encryption process to better protect against attack. This, however, tends to increase the processing power required and decrease the speed at which the process runs, due to the number of operations the system needs to perform. Most modern symmetric-key algorithms appear to be resistant to attacks by quantum computers. Quantum computers would dramatically increase the speed at which these ciphers can be broken; notably, Grover's algorithm would take roughly the square root of the time traditionally required for a brute-force attack, although these vulnerabilities can be compensated for by doubling the key length. For example, a 128-bit AES cipher would not be secure against such an attack, as it would reduce the time required to test all possible keys from over 10 quintillion years to about six months.
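The key-search arithmetic behind these figures can be sketched; the assumed rate of 10^12 key tests per second is an illustrative choice made here to reproduce the rough numbers above, not a statement about any real hardware:

```python
# Assumed: one trillion key tests per second (illustrative only).
RATE = 10 ** 12
SECONDS_PER_YEAR = 365 * 24 * 3600

# Classical brute force on a 128-bit key: 2^128 candidate keys.
classical_years = 2 ** 128 / RATE / SECONDS_PER_YEAR

# Grover's algorithm needs only about sqrt(2^128) = 2^64 evaluations.
grover_days = 2 ** 64 / RATE / (24 * 3600)

print(f"classical search: ~{classical_years:.1e} years")  # on the order of 10^19 ("over 10 quintillion")
print(f"Grover search:    ~{grover_days:.0f} days")       # a few hundred days, roughly half a year

# Doubling the key length restores the margin: sqrt(2^256) = 2^128, so a
# Grover attack on a 256-bit key costs what classical search costs on 128 bits.
assert (2 ** 128) ** 2 == 2 ** 256
```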
By contrast, it would still take a quantum computer the same amount of time to decode a 256-bit AES cipher as it would take a conventional computer to decode a 128-bit AES cipher. For this reason, AES-256 is believed to be "quantum resistant". == Key management == == Key establishment == Symmetric-key algorithms require both the sender and the recipient of a message to have the same secret key. All early cryptographic systems required either the sender or the recipient to somehow receive a copy of that secret key over a physically secure channel. Nearly all modern cryptographic systems still use symmetric-key algorithms internally to encrypt the bulk of the messages, but they eliminate the need for a physically secure channel by using Diffie–Hellman key exchange or some other public-key protocol to securely come to agreement on a fresh new secret key for each session/conversation (forward secrecy). == Key generation == When used with asymmetric ciphers for key transfer, pseudorandom key generators are nearly always used to generate the symmetric cipher session keys. However, lack of randomness in those generators or in their initialization vectors is disastrous and has led to cryptanalytic breaks in the past. Therefore, it is essential that an implementation use a source of high entropy for its initialization. == Reciprocal cipher == A reciprocal cipher is a cipher where, just as one enters the plaintext into the cryptography system to get the ciphertext, one could enter the ciphertext into the same place in the system to get the plaintext. A reciprocal cipher is also sometimes referred to as a self-reciprocal cipher. Practically all mechanical cipher machines implement a reciprocal cipher, a mathematical involution on each typed-in letter. Instead of designing two kinds of machines, one for encrypting and one for decrypting, all the machines can be identical and can be set up (keyed) the same way.
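The reciprocal property is easiest to see with an XOR cipher: applying the same keyed transformation twice returns the plaintext. A toy sketch (a repeating XOR key is not a secure cipher; it only illustrates the involution):

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key. XOR is an involution, so the
    # very same function both encrypts and decrypts with the same key.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

message = b"attack at dawn"
key = b"secret"                                  # toy key, for illustration only
ciphertext = xor_cipher(message, key)
assert ciphertext != message
assert xor_cipher(ciphertext, key) == message    # same setup in both directions
```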
Examples of reciprocal ciphers include: Atbash Beaufort cipher Enigma machine Marie Antoinette and Axel von Fersen communicated with a self-reciprocal cipher. The Porta polyalphabetic cipher is self-reciprocal. Purple cipher RC4 ROT13 XOR cipher Vatsyayana cipher The majority of all modern ciphers can be classified as either a stream cipher, most of which use a reciprocal XOR cipher combiner, or a block cipher, most of which use a Feistel cipher or Lai–Massey scheme with a reciprocal transformation in each round. == Notes == == References ==
Wikipedia/Symmetric_key_algorithm
The Annual ACM Symposium on Theory of Computing (STOC) is an academic conference in the field of theoretical computer science. STOC has been organized annually since 1969, typically in May or June; the conference is sponsored by the Association for Computing Machinery special interest group SIGACT. The acceptance rate of STOC, averaged over 1970 to 2012, is 31%, with a rate of 29% in 2012. As Fich (1996) writes, STOC and its annual IEEE counterpart FOCS (the Symposium on Foundations of Computer Science) are considered the two top conferences in theoretical computer science, considered broadly: they "are forums for some of the best work throughout theory of computing that promote breadth among theory of computing researchers and help to keep the community together." Johnson (1984) includes regular attendance at STOC and FOCS as one of several defining characteristics of theoretical computer scientists. == Awards == The Gödel Prize for outstanding papers in theoretical computer science is presented alternately at STOC and at the International Colloquium on Automata, Languages and Programming (ICALP); the Knuth Prize for outstanding contributions to the foundations of computer science is presented alternately at STOC and at FOCS. Since 2003, STOC has presented one or more Best Paper Awards to recognize papers of the highest quality at the conference. In addition, the Danny Lewin Best Student Paper Award is awarded to the author(s) of the best student-only-authored paper at STOC. The award is named in honor of Daniel M. Lewin, an American-Israeli mathematician and entrepreneur who co-founded the Internet company Akamai Technologies and was one of the first victims of the September 11 attacks. == History == STOC was first organised on 5–7 May 1969, in Marina del Rey, California, United States. The conference chairman was Patrick C. Fischer, and the program committee consisted of Michael A. Harrison, Robert W. Floyd, Juris Hartmanis, Richard M. Karp, Albert R.
Meyer, and Jeffrey D. Ullman. Early seminal papers in STOC include Cook (1971), which introduced the concept of NP-completeness (see also Cook–Levin theorem). == Location == STOC was organised in Canada in 1992, 1994, 2002, 2008, and 2017, in Greece in 2001, as a virtual/online conference in 2020 and 2021, and in Italy in 2022; all other meetings in 1969–2023 have been held in the United States. STOC was part of the Federated Computing Research Conference (FCRC) in 1993, 1996, 1999, 2003, 2007, 2011, 2015, 2019, and 2023. == Invited speakers == 2004 Éva Tardos (2004), "Network games", Proceedings of the thirty-sixth annual ACM symposium on Theory of computing - STOC '04, pp. 341–342, doi:10.1145/1007352.1007356, ISBN 978-1581138528, S2CID 18249534 Avi Wigderson (2004), "Depth through breadth, or why should we attend talks in other areas?", Proceedings of the thirty-sixth annual ACM symposium on Theory of computing - STOC '04, p. 579, doi:10.1145/1007352.1007359, ISBN 978-1581138528, S2CID 27563516 2005 Lance Fortnow (2005), "Beyond NP: the work and legacy of Larry Stockmeyer", Proceedings of the thirty-seventh annual ACM symposium on Theory of computing - STOC '05, p. 120, doi:10.1145/1060590.1060609, ISBN 978-1581139600, S2CID 16558679 2006 Prabhakar Raghavan (2006), "The changing face of web search: algorithms, auctions and advertising", Proceedings of the thirty-eighth annual ACM symposium on Theory of computing - STOC '06, p. 129, doi:10.1145/1132516.1132535, ISBN 978-1595931344, S2CID 19222958 Russell Impagliazzo (2006), "Can every randomized algorithm be derandomized?", Proceedings of the thirty-eighth annual ACM symposium on Theory of computing - STOC '06, pp. 373–374, doi:10.1145/1132516.1132571, ISBN 978-1595931344, S2CID 22433370 2007 Nancy Lynch (2007), "Distributed computing theory: algorithms, impossibility results, models, and proofs", Proceedings of the thirty-ninth annual ACM symposium on Theory of computing - STOC '07, p.
247, doi:10.1145/1250790.1250826, ISBN 9781595936318, S2CID 22140755 2008 Jennifer Rexford (2008), "Rethinking internet routing", Proceedings of the fortieth annual ACM symposium on Theory of computing - STOC 08, pp. 55–56, doi:10.1145/1374376.1374386, ISBN 9781605580470, S2CID 10958242 David Haussler (2008), "Computing how we became human", Proceedings of the fortieth annual ACM symposium on Theory of computing - STOC 08, pp. 639–640, doi:10.1145/1374376.1374468, ISBN 9781605580470, S2CID 30452365 Ryan O'Donnell (2008), "Some topics in analysis of boolean functions", Proceedings of the fortieth annual ACM symposium on Theory of computing - STOC 08, pp. 569–578, doi:10.1145/1374376.1374458, ISBN 9781605580470, S2CID 1241681 2009 Shafi Goldwasser (2009), "Athena lecture: Controlling Access to Programs?", Proceedings of the 41st annual ACM symposium on Symposium on theory of computing - STOC '09, pp. 167–168, doi:10.1145/1536414.1536416, ISBN 9781605585062 2010 David S. Johnson (2010), "Approximation Algorithms in Theory and Practice" (Knuth Prize Lecture) 2011 Leslie G. Valiant (2011), "The Extent and Limitations of Mechanistic Explanations of Nature" (2010 ACM Turing Award Lecture) Ravi Kannan (2011), "Algorithms: Recent Highlights and Challenges" (2011 Knuth Prize Lecture) David A. 
Ferrucci (2011), "IBM's Watson/DeepQA" (FCRC Plenary Talk) Luiz Andre Barroso (2011), "Warehouse-Scale Computing: Entering the Teenage Decade" (FCRC Plenary Talk) 2013 Gary Miller (2013), Knuth Prize Lecture Prabhakar Raghavan (2013), Plenary talk 2014 Thomas Rothvoss (2014), "The matching polytope has exponential extension complexity" Shafi Goldwasser (2014), "The Cryptographic Lens" (Turing Award Lecture) video Silvio Micali (2014), "Proofs according to Silvio" (Turing Award Lecture) video 2015 Michael Stonebraker (2015), Turing Award Lecture video Andrew Yao (2015), FCRC Keynote Lecture László Babai (2015), Knuth Prize Lecture Olivier Temam (2015), FCRC Keynote Lecture 2016 Santosh Vempala (2016), "The Interplay of Sampling and Optimization in High Dimension" (Invited Talk) Timothy Chan (2016), "Computational Geometry, from Low to High Dimensions" (Invited Talk) 2017 Avi Wigderson (2017), "On the Nature and Future of ToC" (Keynote Talk) Orna Kupferman (2017), "Examining classical graph-theory problems from the viewpoint of formal-verification methods" (Keynote Talk) Oded Goldreich (2017), Knuth Prize Lecture == See also == Conferences in theoretical computer science. List of computer science conferences contains other academic conferences in computer science. List of computer science awards == Notes == == References == Cook, Stephen (1971), "The complexity of theorem proving procedures" (PDF), Proc. STOC 1971, pp. 151–158, doi:10.1145/800157.805047, S2CID 7573663. Fich, Faith (1996), "Infrastructure issues related to theory of computing research", ACM Computing Surveys, 28 (4es): 217–es, doi:10.1145/242224.242502, S2CID 195706843. Johnson, D. S. (1984), "The genealogy of theoretical computer science: a preliminary report", ACM SIGACT News, 16 (2): 36–49, doi:10.1145/1008959.1008960, S2CID 26789249. == External links == Official website STOC proceedings information in DBLP. STOC proceedings in the ACM Digital Library.
Citation Statistics for FOCS/STOC/SODA, Piotr Indyk and Suresh Venkatasubramanian, July 2007.
Wikipedia/Symposium_on_the_Theory_of_Computing
Cryptography, or cryptology (from Ancient Greek: κρυπτός, romanized: kryptós "hidden, secret"; and γράφειν graphein, "to write", or -λογία -logia, "study", respectively), is the practice and study of techniques for secure communication in the presence of adversarial behavior. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others. Core concepts related to information security (data confidentiality, data integrity, authentication, and non-repudiation) are also central to cryptography. Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications. Cryptography prior to the modern age was effectively synonymous with encryption, converting readable information (plaintext) to unintelligible nonsense text (ciphertext), which can only be read by reversing the process (decryption). The sender of an encrypted (coded) message shares the decryption (decoding) technique only with the intended recipients to preclude access from adversaries. The cryptography literature often uses the names "Alice" (or "A") for the sender, "Bob" (or "B") for the intended recipient, and "Eve" (or "E") for the eavesdropping adversary. Since the development of rotor cipher machines in World War I and the advent of computers in World War II, cryptography methods have become increasingly complex and their applications more varied. Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in actual practice by any adversary. 
While it is theoretically possible to break into a well-designed system, it is infeasible in actual practice to do so. Such schemes, if well designed, are therefore termed "computationally secure". Theoretical advances (e.g., improvements in integer factorization algorithms) and faster computing technology require these designs to be continually reevaluated and, if necessary, adapted. Information-theoretically secure schemes that provably cannot be broken even with unlimited computing power, such as the one-time pad, are much more difficult to use in practice than the best theoretically breakable but computationally secure schemes. The growth of cryptographic technology has raised a number of legal issues in the Information Age. Cryptography's potential for use as a tool for espionage and sedition has led many governments to classify it as a weapon and to limit or even prohibit its use and export. In some jurisdictions where the use of cryptography is legal, laws permit investigators to compel the disclosure of encryption keys for documents relevant to an investigation. Cryptography also plays a major role in digital rights management and copyright infringement disputes with regard to digital media. == Terminology == The first use of the term "cryptograph" (as opposed to "cryptogram") dates back to the 19th century—originating from "The Gold-Bug", a story by Edgar Allan Poe. Until modern times, cryptography referred almost exclusively to "encryption", which is the process of converting ordinary information (called plaintext) into an unintelligible form (called ciphertext). Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that carry out the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key". 
The key is a secret (ideally known only to the communicants), usually a string of characters (ideally short so it can be remembered by the user), which is needed to decrypt the ciphertext. In formal mathematical terms, a "cryptosystem" is the ordered list of elements of finite possible plaintexts, finite possible cyphertexts, finite possible keys, and the encryption and decryption algorithms that correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks. There are two main types of cryptosystems: symmetric and asymmetric. In symmetric systems, the only ones known until the 1970s, the same secret key encrypts and decrypts a message. Data manipulation in symmetric systems is significantly faster than in asymmetric systems. Asymmetric systems use a "public key" to encrypt a message and a related "private key" to decrypt it. The advantage of asymmetric systems is that the public key can be freely published, allowing parties to establish secure communication without having a shared secret key. In practice, asymmetric systems are used to first exchange a secret key, and then secure communication proceeds via a more efficient symmetric system using that key. Examples of asymmetric systems include Diffie–Hellman key exchange, RSA (Rivest–Shamir–Adleman), ECC (Elliptic Curve Cryptography), and Post-quantum cryptography. Secure symmetric algorithms include the commonly used AES (Advanced Encryption Standard) which replaced the older DES (Data Encryption Standard). 
Insecure symmetric algorithms include children's language tangling schemes such as Pig Latin or other cant, and all historical cryptographic schemes, however seriously intended, prior to the invention of the one-time pad early in the 20th century. In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code word (for example, "wallaby" replaces "attack at dawn"). A cypher, in contrast, is a scheme for changing or substituting an element below such a level (a letter, a syllable, or a pair of letters, etc.) to produce a cyphertext. Cryptanalysis is the term used for the study of methods for obtaining the meaning of encrypted information without access to the key normally required to do so; i.e., it is the study of how to "crack" encryption algorithms or their implementations. Some use the terms "cryptography" and "cryptology" interchangeably in English, while others (including US military practice generally) use "cryptography" to refer specifically to the use and practice of cryptographic techniques and "cryptology" to refer to the combined study of cryptography and cryptanalysis. English is more flexible than several other languages in which "cryptology" (done by cryptologists) is always used in the second sense above. RFC 2828 advises that steganography is sometimes included in cryptology. The study of characteristics of languages that have some application in cryptography or cryptology (e.g. frequency data, letter combinations, universal patterns, etc.) is called cryptolinguistics. Cryptolinguistics is especially used in military intelligence applications for deciphering foreign communications.
== History == Before the modern era, cryptography focused on message confidentiality (i.e., encryption)—conversion of messages from a comprehensible form into an incomprehensible one and back again at the other end, rendering it unreadable by interceptors or eavesdroppers without secret knowledge (namely the key needed for decryption of that message). Encryption attempted to ensure secrecy in communications, such as those of spies, military leaders, and diplomats. In recent decades, the field has expanded beyond confidentiality concerns to include techniques for message integrity checking, sender/receiver identity authentication, digital signatures, interactive proofs and secure computation, among others. === Classic cryptography === The main classical cipher types are transposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in the Latin alphabet). Simple versions of either have never offered much confidentiality from enterprising opponents. An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter some fixed number of positions further down the alphabet. Suetonius reports that Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an example of an early Hebrew cipher. The earliest known use of cryptography is some carved ciphertext on stone in Egypt (c. 1900 BCE), but this may have been done for the amusement of literate observers rather than as a way of concealing information. The Greeks of Classical times are said to have known of ciphers (e.g., the scytale transposition cipher claimed to have been used by the Spartan military). 
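The Caesar cipher described above is simple enough to state as a short Python sketch (a toy illustration, not a usable cipher):

```python
import string

def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions down the alphabet."""
    alphabet = string.ascii_lowercase
    shifted = alphabet[shift % 26:] + alphabet[:shift % 26]
    table = str.maketrans(alphabet + alphabet.upper(),
                          shifted + shifted.upper())
    return text.translate(table)  # non-letters pass through unchanged

ct = caesar("attack at dawn", 3)   # encrypt with Caesar's shift of three
pt = caesar(ct, -3)                # decrypt by shifting back
```

With a shift of three, 'attack at dawn' becomes 'dwwdfn dw gdzq'; shifting by minus three reverses the substitution.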
Steganography (i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times. An early example, from Herodotus, was a message tattooed on a slave's shaved head and concealed under the regrown hair. Other steganography methods involve 'hiding in plain sight,' such as using a music cipher to disguise an encrypted message within a regular piece of sheet music. More modern examples of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information. In India, the 2000-year-old Kama Sutra of Vātsyāyana speaks of two different kinds of ciphers called Kautiliyam and Mulavediya. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones. In Sassanid Persia, there were two secret scripts, according to the Muslim author Ibn al-Nadim: the šāh-dabīrīya (literally "King's script") which was used for official correspondence, and the rāz-saharīya which was used to communicate secret messages with other countries. David Kahn notes in The Codebreakers that modern cryptology originated among the Arabs, the first people to systematically document cryptanalytic methods. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels. Ciphertexts produced by a classical cipher (and some modern ciphers) will reveal statistical information about the plaintext, and that information can often be used to break the cipher. After the discovery of frequency analysis, nearly all such ciphers could be broken by an informed attacker. Such classical ciphers still enjoy popularity today, though mostly as puzzles (see cryptogram). 
The Arab mathematician and polymath Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (A Manuscript on Deciphering Cryptographic Messages), which described the first known use of frequency analysis cryptanalysis techniques. Language letter frequencies may offer little help for some extended historical encryption techniques such as homophonic ciphers, which tend to flatten the frequency distribution. For those ciphers, language letter group (or n-gram) frequencies may provide an attack. Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of the polyalphabetic cipher, most clearly by Leon Battista Alberti around the year 1467, though there is some indication that it was already known to Al-Kindi. Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automatic cipher device, a wheel that implemented a partial realization of his invention. In the Vigenère cipher, a polyalphabetic cipher, encryption uses a key word, which controls letter substitution depending on which letter of the key word is used. In the mid-19th century Charles Babbage showed that the Vigenère cipher was vulnerable to Kasiski examination, but this was first published about ten years later by Friedrich Kasiski. Although frequency analysis can be a powerful and general technique against many ciphers, encryption has still often been effective in practice, as many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. 
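Frequency analysis, as described above, begins by tallying letter counts in the ciphertext; a minimal Python sketch:

```python
from collections import Counter

def letter_frequencies(text: str) -> Counter:
    """Count letter occurrences, the raw material of frequency analysis."""
    return Counter(c for c in text.lower() if c.isalpha())

# Caesar shift of 4 applied to "this is a serious message".
# In a simple substitution cipher, the most frequent ciphertext letter
# likely stands for a common plaintext letter.
ciphertext = "xlmw mw e wivmsyw qiwweki"
freq = letter_frequencies(ciphertext)
most_common_letter, count = freq.most_common(1)[0]
```

Here the most frequent ciphertext letter is 'w', which in this short sample corresponds to plaintext 's'; on longer texts the counts converge toward the language's known letter distribution.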
It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is neither a sensible nor a practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs's Principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim—'the enemy knows the system'. Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher. In medieval times, other aids were invented such as the cipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's own cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's wheel cypher (not publicly known, and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption devices were invented early in the 20th century, and several patented, among them rotor machines—famously including the Enigma machine used by the German government and military from the late 1920s and during World War II. The ciphers implemented by better quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI. === Early computer-era cryptography === Cryptanalysis of the new mechanical ciphering devices proved to be both difficult and laborious. 
In the United Kingdom, cryptanalytic efforts at Bletchley Park during WWII spurred the development of more efficient means for carrying out repetitive tasks, such as military code breaking (decryption). This culminated in the development of the Colossus, the world's first fully electronic, digital, programmable computer, which assisted in the decryption of ciphers generated by the German Army's Lorenz SZ40/42 machine. Extensive open academic research into cryptography is relatively recent, beginning in the mid-1970s. In the early 1970s IBM personnel designed the Data Encryption Standard (DES) algorithm that became the first federal government cryptography standard in the United States. In 1976 Whitfield Diffie and Martin Hellman published the Diffie–Hellman key exchange algorithm. In 1977 the RSA algorithm was published in Martin Gardner's Scientific American column. Since then, cryptography has become a widely used tool in communications, computer networks, and computer security generally. Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. There are very few cryptosystems that are proven to be unconditionally secure. The one-time pad is one, and was proven to be so by Claude Shannon. There are a few important algorithms that have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA is secure, and some other systems, but even so, proof of unbreakability is unavailable since the underlying mathematical problem remains open. In practice, these are widely used, and are believed unbreakable in practice by most competent observers. There are systems similar to RSA, such as one by Michael O. 
Rabin that is provably secure provided factoring n = pq is impossible; it is, however, quite unusable in practice. The discrete logarithm problem is the basis for believing some other cryptosystems are secure, and again, there are related, less practical systems that are provably secure relative to the solvability or insolvability of the discrete log problem. As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so recommended key lengths have been advancing accordingly. The potential impact of quantum computing is already being considered by some cryptographic system designers developing post-quantum cryptography. The announced imminence of small implementations of these machines may be making the need for preemptive caution rather more than merely speculative. == Modern cryptography == Claude Shannon's two papers, his 1948 paper on information theory, and especially his 1949 paper on cryptography, laid the foundations of modern cryptography and provided a mathematical basis for future cryptography. His 1949 paper has been noted as having provided a "solid theoretical basis for cryptography and for cryptanalysis", and as having turned cryptography from an "art to a science". As a result of his contributions and work, he has been described as the "founding father of modern cryptography". Prior to the early 20th century, cryptography was mainly concerned with linguistic and lexicographic patterns. Since then cryptography has broadened in scope, and now makes extensive use of mathematical subdisciplines, including information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics. 
Cryptography is also a branch of engineering, but an unusual one since it deals with active, intelligent, and malevolent opposition; other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems and quantum physics. Just as the development of digital computers and electronics helped in cryptanalysis, it made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers which only encrypted written language texts; this was new and significant. Computer use has thus supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be characterized by their operation on binary bit sequences (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. However, computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible. === Symmetric-key cryptography === Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976. Symmetric key ciphers are implemented as either block ciphers or stream ciphers. 
A block cipher enciphers input in blocks of plaintext as opposed to individual characters, the input form used by a stream cipher. The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block cipher designs that have been designated cryptography standards by the US government (though DES's designation was finally withdrawn after the AES was adopted). Despite its deprecation as an official standard, DES (especially its still-approved and much more secure triple-DES variant) remains quite popular; it is used across a wide range of applications, from ATM encryption to e-mail privacy and secure remote access. Many other block ciphers have been designed and released, with considerable variation in quality. Many, even some designed by capable practitioners, have been thoroughly broken, such as FEAL. Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. In a stream cipher, the output stream is created based on a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material. RC4 is a widely used stream cipher. Block ciphers can be used as stream ciphers by generating blocks of a keystream (in place of a pseudorandom number generator) and applying an XOR operation to each bit of the plaintext with each bit of the keystream. Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value upon receipt; this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort. Cryptographic hash functions are a third type of cryptographic algorithm. They take a message of any length as input, and output a short, fixed-length hash, which can be used in (for example) a digital signature. 
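The construction described above, generating a keystream from the key and XOR-ing it with the plaintext, can be sketched in Python; hashing a key together with a counter stands in for running a real block cipher in counter mode (an illustrative assumption, not a vetted design):

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Build a keystream by hashing key||counter blocks
    (a stand-in for a block cipher run in counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def stream_encrypt(key: bytes, data: bytes) -> bytes:
    ks = keystream(key, len(data))
    # XOR each plaintext byte with the corresponding keystream byte.
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"stream ciphers combine a keystream with the plaintext"
ct = stream_encrypt(b"shared secret", msg)
pt = stream_encrypt(b"shared secret", ct)  # the same operation decrypts
```

As with any XOR keystream scheme, encryption and decryption are identical, and reusing the same keystream for two messages would be catastrophic, which is why real counter-mode designs never repeat a key and counter pair.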
For good hash functions, an attacker cannot find two messages that produce the same hash. MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, although attacks against reduced-round variants had been reported as of 2011; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit." Thus, a hash function design competition was meant to select a new U.S. national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when the NIST announced that Keccak would be the new SHA-3 hash algorithm. Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security. === Public-key cryptography === Symmetric-key cryptosystems use the same key for encryption and decryption of a message, although a message or group of messages can have a different key than others. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps for each ciphertext exchanged as well. The number of keys required increases as the square of the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret. 
In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion of public-key (also, more generally, called asymmetric key) cryptography in which two different but mathematically related keys are used—a public key and a private key. A public key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair. The historian David Kahn described public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance". In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption system, the public key is used for encryption, while the private or secret key is used for decryption. While Diffie and Hellman could not find such a system, they showed that public-key cryptography was indeed possible by presenting the Diffie–Hellman key exchange protocol, a solution that is now widely used in secure communications to allow two parties to secretly agree on a shared encryption key. The X.509 standard defines the most commonly used format for public key certificates. Diffie and Hellman's publication sparked widespread academic efforts in finding a practical public-key encryption system. This race was finally won in 1978 by Ronald Rivest, Adi Shamir, and Len Adleman, whose solution has since become known as the RSA algorithm. The Diffie–Hellman and RSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Other asymmetric-key algorithms include the Cramer–Shoup cryptosystem, ElGamal encryption, and various elliptic curve techniques. 
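The Diffie–Hellman key exchange described above can be sketched with toy numbers (a tiny prime for illustration only; real deployments use very large, carefully chosen parameters):

```python
# Public parameters: a prime modulus and a generator (toy-sized here).
p, g = 23, 5

a = 6                    # Alice's private key, kept secret
b = 15                   # Bob's private key, kept secret

A = pow(g, a, p)         # Alice publishes g^a mod p
B = pow(g, b, p)         # Bob publishes g^b mod p

alice_shared = pow(B, a, p)   # Alice computes (g^b)^a mod p
bob_shared = pow(A, b, p)     # Bob computes (g^a)^b mod p
```

Both parties arrive at the same shared value even though only the public quantities p, g, A, and B ever cross the wire; an eavesdropper would have to solve a discrete logarithm to recover either private exponent.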
A document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at GCHQ had anticipated several academic developments. Reportedly, around 1970, James H. Ellis had conceived the principles of asymmetric key cryptography. In 1973, Clifford Cocks invented a solution that was very similar in design rationale to RSA. In 1974, Malcolm J. Williamson is claimed to have developed the Diffie–Hellman key exchange. Public-key cryptography is also used for implementing digital signature schemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic of being easy for a user to produce, but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs, etc.). Public-key algorithms are most often based on the computational complexity of "hard" problems, often from number theory. For example, the hardness of RSA is related to the integer factorization problem, while Diffie–Hellman and DSA are related to the discrete logarithm problem. The security of elliptic curve cryptography is based on number theoretic problems involving elliptic curves. 
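The two-algorithm structure of a signature scheme described above, signing with a private key and verifying with the matching public key, can be sketched with textbook RSA and tiny primes (illustrative only; real RSA uses enormous primes and padding schemes):

```python
# Toy textbook-RSA signing and verification.
p, q = 61, 53
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (modular inverse)

message = 65                         # a message (or its hash) encoded as a number < n
signature = pow(message, d, n)       # sign with the private key
verified = pow(signature, e, n) == message  # verify with the public key
```

Only the holder of d can produce a signature that the public pair (n, e) validates, which is what makes the signature hard to forge yet easy to check.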
Because of the difficulty of the underlying problems, most public-key algorithms involve operations such as modular multiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. As a result, public-key cryptosystems are commonly hybrid cryptosystems, in which a fast high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed. === Cryptographic hash functions === Cryptographic hash functions are functions that take a variable-length input and return a fixed-length output, which can be used in, for example, a digital signature. For a hash function to be secure, it must be difficult to compute two inputs that hash to the same value (collision resistance) and to compute an input that hashes to a given output (preimage resistance). 
=== Cryptanalysis === The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion. It is a common misconception that every encryption method can be broken. In connection with his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is unbreakable, provided the key material is truly random, never reused, kept secret from all possible attackers, and of equal or greater length than the message. Most ciphers, apart from the one-time pad, can be broken with enough computational effort by brute force attack, but the amount of effort needed may be exponentially dependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such proof has been found to date, the one-time pad remains the only theoretically unbreakable cipher. Although well-implemented one-time-pad encryption cannot be broken, traffic analysis is still possible. There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways. A common distinction turns on what Eve (an attacker) knows and what capabilities are available. 
In a ciphertext-only attack, Eve has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, Eve has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext attack, Eve may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example is gardening, used by the British during WWII. In a chosen-ciphertext attack, Eve may be able to choose ciphertexts and learn their corresponding plaintexts. Finally, in a man-in-the-middle attack, Eve gets in between Alice (the sender) and Bob (the recipient), accesses and modifies the traffic and then forwards it to the recipient. Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of the protocols involved). Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts (with their corresponding ciphertexts) and approximately 2^43 DES operations. This is a considerable improvement over brute force attacks. Public-key algorithms are based on the computational difficulty of various problems. The most famous of these are the difficulty of integer factorization of semiprimes and the difficulty of calculating discrete logarithms, both of which are not yet proven to be solvable in polynomial time (P) using only a classical Turing-complete computer. 
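The brute-force idea above, trying keys until a plausible plaintext appears, is easy to see on the Caesar cipher, whose 26-key space is trivially small next to DES's 2^55 (a toy sketch):

```python
import string

def caesar_decrypt(text: str, shift: int) -> str:
    alphabet = string.ascii_lowercase
    shifted = alphabet[shift % 26:] + alphabet[:shift % 26]
    return text.translate(str.maketrans(shifted, alphabet))

ciphertext = "dwwdfn dw gdzq"
# Brute force: try every key in the tiny 26-key space and keep
# the candidate containing a likely English word (a "crib").
candidates = {s: caesar_decrypt(ciphertext, s) for s in range(26)}
key = next(s for s, pt in candidates.items() if "attack" in pt)
```

Exhausting 26 keys takes microseconds; the same strategy against a 56-bit or 128-bit keyspace illustrates why "work factor" rather than impossibility is the practical measure of a cipher's strength.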
Much public-key cryptanalysis concerns designing algorithms in P that can solve these problems, or using other technologies, such as quantum computers. For instance, the best-known algorithms for solving the elliptic curve-based version of discrete logarithm are much more time-consuming than the best-known algorithms for factoring, at least for problems of more or less equivalent size. Thus, to achieve an equivalent strength of encryption, techniques that depend upon the difficulty of factoring large composite numbers, such as the RSA cryptosystem, require larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves have become popular since their invention in the mid-1980s. While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on actual use of the algorithms in real devices, and are called side-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or PIN character, they may be able to use a timing attack to break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known as traffic analysis and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting too short keys, will make any system vulnerable, regardless of other virtues. Social engineering and other attacks against humans (e.g., bribery, extortion, blackmail, espionage, rubber-hose cryptanalysis or torture) are often employed because they are far more cost-effective, and can be carried out in far less time, than pure cryptanalysis. 
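The timing side channel described above is why libraries provide constant-time comparison functions; a Python sketch contrasting a naive check with `hmac.compare_digest`:

```python
import hmac

def check_token_naive(supplied: str, stored: str) -> bool:
    # == may return as soon as the first differing character is found,
    # leaking timing information about how much of the secret matched.
    return supplied == stored

def check_token_safe(supplied: str, stored: str) -> bool:
    # hmac.compare_digest takes time independent of where the inputs
    # differ, defeating this particular side channel.
    return hmac.compare_digest(supplied.encode(), stored.encode())

stored = "s3cr3t-token"
```

Both functions return the same boolean result; the difference an attacker cares about is only observable in how long the naive version takes as successive prefixes are guessed correctly.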
=== Cryptographic primitives === Much of the theoretical work in cryptography concerns cryptographic primitives—algorithms with basic cryptographic properties—and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographic primitives and cryptosystems is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions, one-way functions, etc. === Cryptosystems === One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, or cryptosystem. Cryptosystems (e.g., El-Gamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. As the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protected backup data). Such cryptosystems are sometimes called cryptographic protocols. Some widely known cryptosystems include RSA, Schnorr signature, ElGamal encryption, and Pretty Good Privacy (PGP). 
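The composition of primitives into higher-level tools can be illustrated with HMAC, which builds a message authentication scheme from a hash primitive (here SHA-256, via Python's standard library):

```python
import hmac, hashlib

# A MAC (a higher-level tool) built from a hash primitive via HMAC.
key = b"shared secret key"
msg = b"wire transfer: 100 to account 42"
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key: bytes, msg: bytes, tag: str) -> bool:
    # Recompute the tag and compare in constant time.
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

The hash function alone guarantees nothing about who produced a digest; keying it through the HMAC construction yields the higher-level property that only a holder of the key can produce a valid tag.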
More complex cryptosystems include electronic cash systems, signcryption systems, etc. Some more 'theoretical' cryptosystems include interactive proof systems (like zero-knowledge proofs) and systems for secret sharing. === Lightweight cryptography === Lightweight cryptography (LWC) concerns cryptographic algorithms developed for a strictly constrained environment. The growth of the Internet of Things (IoT) has spurred research into the development of lightweight algorithms that are better suited for such environments. An IoT environment imposes strict constraints on power consumption, processing power, and security. Algorithms such as PRESENT, AES, and SPECK are examples of the many LWC algorithms that have been developed to achieve the standard set by the National Institute of Standards and Technology. == Applications == Cryptography is widely used on the internet to help protect user data and prevent eavesdropping. To ensure secrecy during transmission, many systems use private key cryptography to protect transmitted information. With public-key systems, one can maintain secrecy without a master key or a large number of keys. Some software, however, such as BitLocker and VeraCrypt, does not primarily use public–private key cryptography; VeraCrypt, for example, uses a password hash to generate the single private key, although it can also be configured to run in public–private key mode. The open-source encryption library OpenSSL, written in C, provides free encryption software and tools. The most commonly used encryption cipher suite is AES, as it has hardware acceleration on all x86-based processors that have AES-NI. A close contender is ChaCha20-Poly1305, which combines the ChaCha20 stream cipher with the Poly1305 message authentication code; it is commonly used on mobile devices because their ARM-based processors generally lack the AES-NI instruction set extension. === Cybersecurity === Cryptography can be used to secure communications by encrypting them. Websites use encryption via HTTPS. 
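Password-based key derivation of the kind mentioned above, turning a password into a symmetric key, can be sketched with PBKDF2 from Python's standard library (illustrative parameters; not any particular product's actual scheme):

```python
import hashlib, os

def derive_key(password: str, salt: bytes, length: int = 32) -> bytes:
    # A salted, deliberately slow hash turns a password into key material;
    # the iteration count here is illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                               100_000, dklen=length)

salt = os.urandom(16)                               # random per-user salt
key = derive_key("correct horse battery staple", salt)
same = derive_key("correct horse battery staple", salt)
```

The same password and salt always derive the same key, while the salt and the high iteration count make precomputed dictionary attacks far more expensive.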
"End-to-end" encryption, where only sender and receiver can read messages, is implemented for email in Pretty Good Privacy and for secure messaging in general in WhatsApp, Signal and Telegram. Operating systems use encryption to keep passwords secret, conceal parts of the system, and ensure that software updates are truly from the system maker. Instead of storing plaintext passwords, computer systems store hashes thereof; then, when a user logs in, the system passes the given password through a cryptographic hash function and compares it to the hashed value on file. In this manner, neither the system nor an attacker has at any point access to the password in plaintext. Encryption is sometimes applied to one's entire drive. For example, University College London has implemented BitLocker (a program by Microsoft) to render drive data opaque without users logging in. === Cryptocurrencies and cryptoeconomics === Cryptographic techniques enable cryptocurrency technologies, such as distributed ledger technologies (e.g., blockchains), which underpin cryptoeconomics applications such as decentralized finance (DeFi). Key cryptographic techniques that enable cryptocurrencies and cryptoeconomics include, but are not limited to: cryptographic keys, cryptographic hash functions, asymmetric (public key) encryption, multi-factor authentication (MFA), end-to-end encryption (E2EE), and zero-knowledge proofs (ZKP). == Legal issues == === Prohibitions === Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters.
Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible. In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China and Iran, a license is still required to use cryptography. Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam. In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography. One particularly important issue has been the export of cryptography and cryptographic software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States Munitions List. Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe. === Export controls === In the 1990s, there were several challenges to US export regulation of cryptography. 
After the source code for Philip Zimmermann's Pretty Good Privacy (PGP) encryption program found its way onto the Internet in June 1991, a complaint by RSA Security (then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and the FBI, though no charges were ever filed. Daniel J. Bernstein, then a graduate student at UC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free speech grounds. The 1995 case Bernstein v. United States ultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected as free speech by the United States Constitution. In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled. Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000; there are no longer very many restrictions on key sizes in US-exported mass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourced web browsers such as Firefox or Internet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., via Transport Layer Security). The Mozilla Thunderbird and Microsoft Outlook E-mail client programs similarly can transmit and receive emails via TLS, and can send and receive email encrypted with S/MIME. Many Internet users do not realize that their basic application software contains such extensive cryptosystems. 
These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally do not find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible. === NSA involvement === Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy. The NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography. DES was designed to be resistant to differential cryptanalysis, a powerful and general cryptanalytic technique known to the NSA and IBM. According to Steven Levy, IBM discovered differential cryptanalysis but kept the technique secret at the NSA's request; it became publicly known only when Eli Biham and Adi Shamir rediscovered and announced it in the late 1980s. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have. Another instance of the NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (called Skipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak to assist its intelligence efforts. The whole initiative was also criticized for its violation of Kerckhoffs's principle, as the scheme included a special escrow key held by the government for use by law enforcement (i.e., wiretapping).
=== Digital rights management === Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use of copyrighted material that is widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. President Bill Clinton signed the Digital Millennium Copyright Act (DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (whether then known or later discovered); specifically, those that could be used to circumvent DRM technological schemes. This had a noticeable impact on the cryptography research community, since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including in the EU Copyright Directive. Similar restrictions are called for by treaties signed by World Intellectual Property Organization member states. The United States Department of Justice and FBI have not enforced the DMCA as rigorously as some had feared, but the law nonetheless remains controversial. Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into an Intel security design for fear of prosecution under the DMCA. Cryptologist Bruce Schneier has argued that the DMCA encourages vendor lock-in while inhibiting actual measures toward cyber-security. Both Alan Cox (longtime Linux kernel developer) and Edward Felten (and some of his students at Princeton) have encountered problems related to the Act. Dmitry Sklyarov was arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible for Blu-ray and HD DVD content scrambling were discovered and released onto the Internet.
In both cases, the Motion Picture Association of America sent out numerous DMCA takedown notices, and there was a massive Internet backlash triggered by the perceived impact of such notices on fair use and free speech. === Forced disclosure of encryption keys === In the United Kingdom, the Regulation of Investigatory Powers Act gives UK police the power to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence, or up to five years in cases involving national security. Successful prosecutions have occurred under the Act; the first, in 2009, resulted in a term of 13 months' imprisonment. Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation. In the United States, the federal criminal case of United States v. Fricosu addressed whether a search warrant can compel a person to reveal an encryption passphrase or password. The Electronic Frontier Foundation (EFF) argued that this is a violation of the protection from self-incrimination given by the Fifth Amendment. In 2012, the court ruled that under the All Writs Act, the defendant was required to produce an unencrypted hard drive for the court. In many jurisdictions, the legal status of forced disclosure remains unclear. The 2016 FBI–Apple encryption dispute concerned the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected. As a potential countermeasure to forced disclosure, some cryptographic software supports plausible deniability, where the encrypted data is indistinguishable from unused random data (such as that of a drive which has been securely wiped).
== See also == Collision attack Comparison of cryptography libraries Cryptovirology – Securing and encrypting virology Crypto Wars – Attempts to limit access to strong cryptography Encyclopedia of Cryptography and Security – Book by Technische Universiteit Eindhoven Global surveillance – Mass surveillance across national borders Indistinguishability obfuscation – Type of cryptographic software obfuscation Information theory – Scientific study of digital information Outline of cryptography List of cryptographers – A list of notable cryptographers List of multiple discoveries List of unsolved problems in computer science – List of unsolved computational problems Pre-shared key – Method to set encryption keys Secure cryptoprocessor Strong cryptography – Term applied to cryptographic systems that are highly resistant to cryptanalysis Syllabical and Steganographical Table – Eighteenth-century work believed to be the first cryptography chart World Wide Web Consortium's Web Cryptography API – World Wide Web Consortium cryptography standard == References == == Further reading == == External links == The dictionary definition of cryptography at Wiktionary Media related to Cryptography at Wikimedia Commons Cryptography on In Our Time at the BBC Crypto Glossary and Dictionary of Technical Cryptography Archived 4 July 2022 at the Wayback Machine A Course in Cryptography by Raphael Pass & Abhi Shelat – offered at Cornell in the form of lecture notes. For more on the use of cryptographic elements in fiction, see: Dooley, John F. (23 August 2012). "Cryptology in Fiction". Archived from the original on 29 July 2020. Retrieved 20 February 2015. The George Fabyan Collection at the Library of Congress has early editions of works of seventeenth-century English literature and publications relating to cryptography.
Wikipedia/Cryptographic
The YAK is a public-key authenticated key-agreement protocol, proposed by Feng Hao in 2010. It is claimed to be the simplest authenticated key exchange protocol among the related schemes, including MQV, HMQV, the Station-to-Station protocol, SSL/TLS, etc. The authentication is based on public key pairs. As with other protocols, YAK normally requires a public key infrastructure to distribute authentic public keys to the communicating parties. The security of YAK is disputed (see below and the talk page). == Description == Two parties, Alice and Bob, agree on a group $G$ with generator $g$ of prime order $q$ in which the discrete logarithm problem is hard. Typically a Schnorr group is used. In general, YAK can use any prime-order group that is suitable for public key cryptography, including elliptic curve cryptography. Let $g^a$ be Alice's long-term public key and $g^b$ be Bob's. The protocol executes in one round: Alice selects $x \in_R [0, q-1]$ and sends out $g^x$ together with a zero-knowledge proof (using, for example, the Schnorr non-interactive zero-knowledge proof as described in RFC 8235) of the exponent $x$. Similarly, Bob selects $y \in_R [0, q-1]$ and sends out $g^y$ together with a zero-knowledge proof of the exponent $y$. Here, the notation $\in_R$ denotes an element selected randomly with uniform probability. The above communication can be completed in one round as neither party depends on the other. When it finishes, Alice and Bob verify the received zero-knowledge proofs. Alice then computes $K = (g^y g^b)^{x+a} = g^{(x+a)(y+b)}$.
Similarly, Bob computes $K = (g^x g^a)^{y+b} = g^{(x+a)(y+b)}$. With the same keying material $K$, Alice and Bob can derive a session key using a cryptographic hash function: $\kappa = H(K)$. == Security properties == The use of well-established zero-knowledge proof primitives such as Schnorr's scheme greatly simplifies the security proofs. Given that the underlying zero-knowledge proof primitive is secure, the YAK protocol aims to satisfy the following properties. Private key security – An attacker cannot learn the user's static private key even if he is able to learn all session-specific secrets in any compromised session. Forward secrecy – Session keys that were securely established in past uncorrupted sessions will remain incomputable in the future even when both users' static private keys are disclosed. Session key security – An attacker cannot compute the session key if he impersonates a user but has no access to the user's private key. The security claims in the original YAK paper are based on the computational Diffie–Hellman assumption in the random oracle model. == Cryptanalysis == In 2015, Toorani stated that "the YAK protocol lacks joint key control and perfect forward secrecy attributes and is vulnerable to some attacks including unknown key-share and key-replication attacks", a conclusion that Hao disputes. In 2020, Mohammad stated that the YAK protocol cannot withstand a known-key security attack, which leads to a new key compromise impersonation attack in which an adversary is allowed to reveal both the shared static secret key between two parties and the ephemeral private key of the initiator. The author also proposed an improved protocol to remedy these attacks and the previous attacks mentioned by Toorani on the YAK protocol; the proposed protocol uses a verification mechanism that provides entity authentication and key confirmation.
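The one-round key computation described above can be checked numerically. The sketch below uses a toy Schnorr group ($p = 283$, $q = 47$, $g = 64$) chosen purely for illustration, and omits the zero-knowledge proofs that the real protocol requires:

```python
import hashlib, random

# Toy Schnorr group: q = 47 divides p - 1 = 282, and g = 2^6 = 64
# has order q modulo p. Real deployments use far larger groups.
p, q, g = 283, 47, 64
a, b = 5, 11                  # Alice's and Bob's long-term private keys
x = random.randrange(1, q)    # Alice's ephemeral secret
y = random.randrange(1, q)    # Bob's ephemeral secret

# Each side multiplies the peer's ephemeral and long-term public keys,
# then raises the product to the sum of its own two secrets.
K_alice = pow(pow(g, y, p) * pow(g, b, p), x + a, p)
K_bob = pow(pow(g, x, p) * pow(g, a, p), y + b, p)
assert K_alice == K_bob == pow(g, (x + a) * (y + b), p)

# kappa = H(K): derive the session key with a hash function.
session_key = hashlib.sha256(str(K_alice).encode()).digest()
```

Both sides arrive at $g^{(x+a)(y+b)}$ because multiplying $g^y$ by $g^b$ (and $g^x$ by $g^a$) adds the exponents, and the outer exponentiation multiplies them.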
The author showed that the proposed protocol is secure in the proposed formal security model under the gap Diffie–Hellman assumption and the random oracle assumption. Moreover, the security of the proposed protocol and the attacks on the YAK protocol were verified by the Scyther tool. Mohammad's paper is discussed on the talk page. == References ==
Wikipedia/YAK_(cryptography)
Computer Science and Artificial Intelligence Laboratory (CSAIL) is a research institute at the Massachusetts Institute of Technology (MIT) formed by the 2003 merger of the Laboratory for Computer Science (LCS) and the Artificial Intelligence Laboratory (AI Lab). Housed within the Ray and Maria Stata Center, CSAIL is the largest on-campus laboratory as measured by research scope and membership. It is part of the Schwarzman College of Computing but is also overseen by the MIT Vice President of Research. == Research activities == CSAIL's research activities are organized around a number of semi-autonomous research groups, each of which is headed by one or more professors or research scientists. These groups are divided up into seven general areas of research: Artificial intelligence Computational biology Graphics and vision Language and learning Theory of computation Robotics Systems (includes computer architecture, databases, distributed systems, networks and networked systems, operating systems, programming methodology, and software engineering, among others) == History == Computing research at MIT began with Vannevar Bush's research into a differential analyzer and Claude Shannon's electronic Boolean algebra in the 1930s, the wartime MIT Radiation Laboratory, the post-war Project Whirlwind and Research Laboratory of Electronics (RLE), and MIT Lincoln Laboratory's SAGE in the early 1950s. At MIT, research in the field of artificial intelligence began in the late 1950s. === Project MAC === On July 1, 1963, Project MAC (the Project on Mathematics and Computation, later backronymed to Multiple Access Computer, Machine Aided Cognitions, or Man and Computer) was launched with a $2 million grant from the Defense Advanced Research Projects Agency (DARPA). Project MAC's original director was Robert Fano of MIT's Research Laboratory of Electronics (RLE).
Fano decided to call MAC a "project" rather than a "laboratory" for reasons of internal MIT politics – if MAC had been called a laboratory, then it would have been more difficult to raid other MIT departments for research staff. The program manager responsible for the DARPA grant was J. C. R. Licklider, who had previously been at MIT conducting research in RLE, and would later succeed Fano as director of Project MAC. Project MAC would become famous for groundbreaking research in operating systems, artificial intelligence, and the theory of computation. Its contemporaries included Project Genie at Berkeley, the Stanford Artificial Intelligence Laboratory, and (somewhat later) University of Southern California's (USC's) Information Sciences Institute. An "AI Group" including Marvin Minsky (the director), John McCarthy (inventor of Lisp), and a talented community of computer programmers was incorporated into Project MAC. Its members were interested principally in the problems of vision, mechanical motion and manipulation, and language, which they viewed as the keys to more intelligent machines. In the 1960s and 1970s the AI Group developed a time-sharing operating system called Incompatible Timesharing System (ITS) which ran on PDP-6 and later PDP-10 computers. The early Project MAC community included Fano, Minsky, Licklider, Fernando J. Corbató, and a community of computer programmers and enthusiasts among others who drew their inspiration from former colleague John McCarthy. These founders envisioned the creation of a computer utility whose computational power would be as reliable as an electric utility. To this end, Corbató brought the first computer time-sharing system, Compatible Time-Sharing System (CTSS), with him from the MIT Computation Center, using the DARPA funding to purchase an IBM 7094 for research use.
One of the early focuses of Project MAC would be the development of a successor to CTSS, Multics, which was to be the first high availability computer system, developed as a part of an industry consortium including General Electric and Bell Laboratories. In 1966, Scientific American featured Project MAC in the September thematic issue devoted to computer science, that was later published in book form. At the time, the system was described as having approximately 100 TTY terminals, mostly on campus but with a few in private homes. Only 30 users could be logged in at the same time. The project enlisted students in various classes to use the terminals simultaneously in problem solving, simulations, and multi-terminal communications as tests for the multi-access computing software being developed. === AI Lab and LCS === In the late 1960s, Minsky's artificial intelligence group was seeking more space, and was unable to get satisfaction from project director Licklider. Minsky found that although Project MAC as a single entity could not get the additional space he wanted, he could split off to form his own laboratory and then be entitled to more office space. As a result, the MIT AI Lab was formed in 1970, and many of Minsky's AI colleagues left Project MAC to join him in the new laboratory, while most of the remaining members went on to form the Laboratory for Computer Science. Talented programmers such as Richard Stallman, who used TECO to develop EMACS, flourished in the AI Lab during this time. Those researchers who did not join the smaller AI Lab formed the Laboratory for Computer Science and continued their research into operating systems, programming languages, distributed systems, and the theory of computation. Two professors, Hal Abelson and Gerald Jay Sussman, chose to remain neutral — their group was referred to variously as Switzerland and Project MAC for the next 30 years. 
Among much else, the AI Lab led to the invention of Lisp machines and their attempted commercialization by two companies in the 1980s: Symbolics and Lisp Machines Inc. This divided the AI Lab into "camps", which resulted in many of the talented programmers being hired away. The incident inspired Richard Stallman's later work on the GNU Project. "Nobody had envisioned that the AI lab's hacker group would be wiped out, but it was." ... "That is the basis for the free software movement — the experience I had, the life that I've lived at the MIT AI lab — to be working on human knowledge, and not be standing in the way of anybody's further using and further disseminating human knowledge". === CSAIL === On the fortieth anniversary of Project MAC's establishment, July 1, 2003, LCS was merged with the AI Lab to form the MIT Computer Science and Artificial Intelligence Laboratory, or CSAIL. This merger created the largest laboratory (over 600 personnel) on the MIT campus and was regarded as a reuniting of the diversified elements of Project MAC. In 2018, CSAIL launched a five-year collaboration program with iFlytek, a company sanctioned the following year for allegedly using its technology for surveillance and human rights abuses in Xinjiang. In October 2019, MIT announced that it would review its partnerships with sanctioned firms such as iFlytek and SenseTime. In April 2020, the agreement with iFlytek was terminated. CSAIL moved from the School of Engineering to the newly formed Schwarzman College of Computing by February 2020. == Offices == From 1963 to 2004, Project MAC, LCS, the AI Lab, and CSAIL had their offices at 545 Technology Square, taking over more and more floors of the building over the years. In 2004, CSAIL moved to the new Ray and Maria Stata Center, which was built specifically to house it and other departments.
== Outreach activities == The IMARA (from the Swahili word for "power") group sponsors a variety of outreach programs that bridge the global digital divide. Its aim is to find and implement long-term, sustainable solutions which will increase the availability of educational technology and resources to domestic and international communities. These projects are run under the aegis of CSAIL and staffed by MIT volunteers who provide training and install and donate computer setups in greater Boston, Massachusetts, Kenya, Native American tribal reservations in the American Southwest such as the Navajo Nation, the Middle East, and the Fiji Islands. The CommuniTech project strives to empower under-served communities through sustainable technology and education. It does this through the MIT Used Computer Factory (UCF), which provides refurbished computers to under-served families, and through Families Accessing Computer Technology (FACT) classes, which train those families to become familiar and comfortable with computer technology. == Notable researchers == (Including members and alumni of CSAIL's predecessor laboratories) MacArthur Fellows Tim Berners-Lee, Erik Demaine, Dina Katabi, Daniela L. Rus, Regina Barzilay, Peter Shor, Richard Stallman, and Joshua Tenenbaum Turing Award recipients Leonard M. Adleman, Fernando J. Corbató, Shafi Goldwasser, Butler W. Lampson, John McCarthy, Silvio Micali, Marvin Minsky, Ronald L. Rivest, Adi Shamir, Barbara Liskov, and Michael Stonebraker IJCAI Computers and Thought Award recipients Terry Winograd, Patrick Winston, David Marr, Gerald Jay Sussman, Rodney Brooks Rolf Nevanlinna Prize recipients Madhu Sudan, Peter Shor, Constantinos Daskalakis Gödel Prize recipients Shafi Goldwasser (two-time recipient), Silvio Micali, Maurice Herlihy, Charles Rackoff, Johan Håstad, Peter Shor, and Madhu Sudan Grace Murray Hopper Award recipients Robert Metcalfe, Shafi Goldwasser, Guy L. Steele, Jr., Richard Stallman, and W.
Daniel Hillis Textbook authors Harold Abelson and Gerald Jay Sussman, Richard Stallman, Thomas H. Cormen, Charles E. Leiserson, Patrick Winston, Ronald L. Rivest, Barbara Liskov, John Guttag, Jerome H. Saltzer, Frans Kaashoek, Clifford Stein, and Nancy Lynch David D. Clark, former chief protocol architect for the Internet; co-author with Jerome H. Saltzer (also a CSAIL member) and David P. Reed of the influential paper "End-to-End Arguments in Systems Design" Eric Grimson, expert on computer vision and its applications to medicine, appointed Chancellor of MIT March 2011 Bob Frankston, co-developer of VisiCalc, the first computer spreadsheet Seymour Papert, inventor of the Logo programming language Joseph Weizenbaum, creator of the ELIZA computer-simulated therapist === Notable alumni === Robert Metcalfe, who later invented Ethernet at Xerox PARC and later founded 3Com Marc Raibert, who created the robot company Boston Dynamics Drew Houston, co-founder of Dropbox Colin Angle and Helen Greiner who, with previous CSAIL director Rodney Brooks, founded iRobot Jeremy Wertheimer, who developed ITA Software used by travel websites like Kayak and Orbitz Max Krohn, co-founder of OkCupid == Directors == Directors of Project MAC Robert Fano, 1963–1968 J. C. R. Licklider, 1968–1971 Edward Fredkin, 1971–1974 Michael Dertouzos, 1974–1975 Directors of the Artificial Intelligence Laboratory Marvin Minsky, 1970–1972 Patrick Winston, 1972–1997 Rodney Brooks, 1997–2003 Directors of the Laboratory for Computer Science Michael Dertouzos, 1975–2001 Victor Zue, 2001–2003 Directors of CSAIL Rodney Brooks, 2003–2007 Victor Zue, 2007–2011 Anant Agarwal, 2011–2012 Daniela L. Rus, 2012– == CSAIL Alliances == CSAIL Alliances is the industry connection arm of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). 
CSAIL Alliances offers companies programs to connect with the research, faculty, students, and startups of CSAIL by providing organizations with opportunities to learn about the research, engage with students, explore collaborations with researchers, and join research initiatives such as FinTech at CSAIL, MIT Future of Data, and Machine Learning Applications. == See also == == References == == Further reading == "A Marriage of Convenience: The Founding of the MIT Artificial Intelligence Laboratory" (PDF), Chious et al. — includes important information on the Incompatible Timesharing System Weizenbaum. Rebel at Work: a documentary film with and about Joseph Weizenbaum Garfinkel, Simson (1999). Abelson, Hal (ed.). Architects of the Information Society: Thirty-Five Years of the Laboratory for Computer Science at MIT. Cambridge, Massachusetts: MIT Press. ISBN 0-262-07196-7. == External links == Official website of CSAIL, successor of the AI Lab
Wikipedia/Laboratory_for_Computer_Science
IEEE Transactions on Information Theory is a monthly peer-reviewed scientific journal published by the IEEE Information Theory Society. It covers information theory and the mathematics of communications. It was established in 1953 as IRE Transactions on Information Theory. The editor-in-chief is Venugopal V. Veeravalli (University of Illinois Urbana-Champaign). As of 2007, the journal allows the posting of preprints on arXiv. According to Jack van Lint, it is the leading research journal in the whole field of coding theory. A 2006 study using the PageRank network analysis algorithm found that, among hundreds of computer science-related journals, IEEE Transactions on Information Theory had the highest ranking and was thus deemed the most prestigious. ACM Computing Surveys, with the highest impact factor, was deemed the most popular. == References == == External links == Official website List of past editors-in-chief
Wikipedia/IEEE_Transactions_on_Information_Theory
Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic keys for both the encryption of plaintext and the decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link. The requirement that both parties have access to the secret key is one of the main drawbacks of symmetric-key encryption, in comparison to public-key encryption (also known as asymmetric-key encryption). However, symmetric-key encryption algorithms are usually better for bulk encryption. With the exception of the one-time pad, they have a smaller key size, which means less storage space and faster transmission. Due to this, asymmetric-key encryption is often used to exchange the secret key for symmetric-key encryption. == Types == Symmetric-key encryption can use either stream ciphers or block ciphers. Stream ciphers encrypt the digits (typically bytes), or letters (in substitution ciphers), of a message one at a time. An example is ChaCha20. Substitution ciphers are historically well-known ciphers, but they can be easily broken using frequency analysis. Block ciphers take a number of bits and encrypt them in a single unit, padding the plaintext to achieve a multiple of the block size. The Advanced Encryption Standard (AES) algorithm, approved by NIST in December 2001, uses 128-bit blocks. == Implementations == Examples of popular symmetric-key algorithms include Twofish, Serpent, AES (Rijndael), Camellia, Salsa20, ChaCha20, Blowfish, CAST5, Kuznyechik, RC4, DES, 3DES, Skipjack, Safer, and IDEA. == Use as a cryptographic primitive == Symmetric ciphers are commonly used to achieve cryptographic primitives other than just encryption. Encrypting a message does not guarantee that it will remain unchanged while encrypted.
Hence, often a message authentication code is added to a ciphertext to ensure that changes to the ciphertext will be noted by the receiver. Message authentication codes can be constructed from an AEAD cipher (e.g. AES-GCM). However, symmetric ciphers cannot be used for non-repudiation purposes except by involving additional parties. See the ISO/IEC 13888-2 standard. Another application is to build hash functions from block ciphers. See one-way compression function for descriptions of several such methods. == Construction of symmetric ciphers == Many modern block ciphers are based on a construction proposed by Horst Feistel. Feistel's construction makes it possible to build invertible functions from other functions that are themselves not invertible. == Security of symmetric ciphers == Symmetric ciphers have historically been susceptible to known-plaintext attacks, chosen-plaintext attacks, differential cryptanalysis and linear cryptanalysis. Careful construction of the functions for each round can greatly reduce the chances of a successful attack. It is also possible to increase the key length or the number of rounds in the encryption process to better protect against attack. This, however, tends to increase the processing power required and decrease the speed at which the process runs due to the amount of operations the system needs to perform. Most modern symmetric-key algorithms appear to be resistant to attack by quantum computers. Quantum computers would substantially increase the speed at which these ciphers can be broken: notably, Grover's algorithm reduces a brute-force attack to roughly the square root of the time classically required, although this vulnerability can be compensated for by doubling the key length. For example, a 128-bit AES cipher would not be secure against such an attack, as it would reduce the time required to test all possible iterations from over 10 quintillion years to about six months.
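The square-root speedup and the key-doubling remedy can be checked with a few lines of integer arithmetic (a back-of-the-envelope sketch, not a cryptanalysis):

```python
import math

# Grover's algorithm searches an unstructured space of N keys in ~sqrt(N) steps.
classical_steps = 2 ** 128          # brute force over all 128-bit keys
grover_steps = math.isqrt(classical_steps)

assert grover_steps == 2 ** 64      # effective strength drops to 64 bits

# Doubling the key length restores the original security margin:
assert math.isqrt(2 ** 256) == 2 ** 128
```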
By contrast, it would still take a quantum computer the same amount of time to decode a 256-bit AES cipher as it would a conventional computer to decode a 128-bit AES cipher. For this reason, AES-256 is believed to be "quantum resistant". == Key management == == Key establishment == Symmetric-key algorithms require both the sender and the recipient of a message to have the same secret key. All early cryptographic systems required either the sender or the recipient to somehow receive a copy of that secret key over a physically secure channel. Nearly all modern cryptographic systems still use symmetric-key algorithms internally to encrypt the bulk of the messages, but they eliminate the need for a physically secure channel by using Diffie–Hellman key exchange or some other public-key protocol to securely come to agreement on a fresh new secret key for each session/conversation (forward secrecy). == Key generation == When used with asymmetric ciphers for key transfer, pseudorandom key generators are nearly always used to generate the symmetric cipher session keys. However, lack of randomness in those generators or in their initialization vectors is disastrous and has led to cryptanalytic breaks in the past. Therefore, it is essential that an implementation use a source of high entropy for its initialization. == Reciprocal cipher == A reciprocal cipher is a cipher where, just as one enters the plaintext into the cryptography system to get the ciphertext, one could enter the ciphertext into the same place in the system to get the plaintext. A reciprocal cipher is also sometimes referred to as a self-reciprocal cipher. Practically all mechanical cipher machines implement a reciprocal cipher, a mathematical involution on each typed-in letter. Instead of designing two kinds of machines, one for encrypting and one for decrypting, all the machines can be identical and can be set up (keyed) the same way.
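The reciprocal property (applying the same keyed operation twice recovers the plaintext) can be illustrated with a toy XOR cipher; this sketch is purely illustrative and is not a secure cipher:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key. XOR is an involution,
    # so the very same function both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = b"secret"

ciphertext = xor_cipher(plaintext, key)
# Feeding the ciphertext back through the same keyed operation
# returns the original message, as with any reciprocal cipher.
assert xor_cipher(ciphertext, key) == plaintext
```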
Examples of reciprocal ciphers include: Atbash Beaufort cipher Enigma machine Marie Antoinette and Axel von Fersen communicated with a self-reciprocal cipher. The Porta polyalphabetic cipher is self-reciprocal. Purple cipher RC4 ROT13 XOR cipher Vatsyayana cipher The majority of all modern ciphers can be classified as either a stream cipher, most of which use a reciprocal XOR cipher combiner, or a block cipher, most of which use a Feistel cipher or Lai–Massey scheme with a reciprocal transformation in each round. == Notes == == References ==
Wikipedia/Symmetric-key_cryptography
Key exchange (also key establishment) is a method in cryptography by which cryptographic keys are exchanged between two parties, allowing use of a cryptographic algorithm. If the sender and receiver wish to exchange encrypted messages, each must be equipped to encrypt messages to be sent and decrypt messages received. The nature of the equipping they require depends on the encryption technique they might use. If they use a code, both will require a copy of the same codebook. If they use a cipher, they will need appropriate keys. If the cipher is a symmetric key cipher, both will need a copy of the same key. If it is an asymmetric key cipher with the public/private key property, both will need the other's public key. == Channel of exchange == Key exchange is done either in-band or out-of-band. == The key exchange problem == The key exchange problem describes ways to exchange whatever keys or other information are needed for establishing a secure communication channel so that no one else can obtain a copy. Historically, before the invention of public-key cryptography (asymmetric cryptography), symmetric-key cryptography utilized a single key to encrypt and decrypt messages. For two parties to communicate confidentially, they must first exchange the secret key so that each party is able to encrypt messages before sending, and decrypt received ones. This process is known as the key exchange. The overarching problem with symmetric cryptography, or single-key cryptography, is that it requires a secret key to be communicated through trusted couriers, diplomatic bags, or some other secure communication channel. If two parties cannot establish a secure initial key exchange, they will not be able to communicate securely without the risk of messages being intercepted and decrypted by a third party who acquired the key during the initial key exchange.
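The problem just described is what public-key techniques were invented to solve. A toy Diffie–Hellman round trip (with tiny, insecure parameters, purely for illustration; real deployments use primes of thousands of bits or elliptic-curve groups) shows how two parties can agree on a shared secret over an open channel:

```python
# Toy Diffie–Hellman over a small prime field (illustrative only).
p = 23  # public prime modulus
g = 5   # public generator

a = 6   # Alice's secret exponent, never transmitted
b = 15  # Bob's secret exponent, never transmitted

A = pow(g, a, p)  # Alice sends A over the open channel
B = pow(g, b, p)  # Bob sends B over the open channel

# Each side combines the other's public value with its own secret;
# both arrive at g^(a*b) mod p.
alice_shared = pow(B, a, p)
bob_shared = pow(A, b, p)
assert alice_shared == bob_shared
```

An eavesdropper sees only p, g, A, and B; recovering the shared value from those is the discrete logarithm problem, which is believed to be hard for properly sized groups.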
Public-key cryptography uses a two-key system, consisting of the public and the private keys, where messages are encrypted with one key and decrypted with another. It depends on the selected cryptographic algorithm which key—public or private—is used for encrypting messages, and which for decrypting. For example, in RSA, the private key is used for decrypting messages, while in the Digital Signature Algorithm (DSA), the private key is used for authenticating them. The public key can be sent over non-secure channels or shared in public; the private key is only available to its owner. In the scheme known as the Diffie–Hellman key exchange, the encryption key can be openly communicated, as it poses no risk to the confidentiality of encrypted messages. One party sends the key to the other party, who can then encrypt messages using the key and send back the ciphertext. Only the decryption key—in this case, the private key—can decrypt that message. At no time during the Diffie–Hellman key exchange is any sensitive information at risk of compromise, as opposed to symmetric key exchange. === Identification === In principle, the only remaining problem was to be sure (or at least confident) that a public key actually belonged to its supposed owner. Because it is possible to 'spoof' another's identity in any of several ways, this is not a trivial or easily solved problem, particularly when the two users involved have never met and know nothing about each other. === Diffie–Hellman key exchange === In 1976, Whitfield Diffie and Martin Hellman published a cryptographic protocol called the Diffie–Hellman key exchange (D–H) based on concepts developed by Hellman's PhD student Ralph Merkle. The protocol enables users to securely exchange secret keys even if an opponent is monitoring that communication channel. The D–H key exchange protocol, however, does not by itself address authentication (i.e.
the problem of being sure of the actual identity of the person or 'entity' at the other end of the communication channel). Authentication is crucial when an opponent can both monitor and alter messages within the communication channel (known as man-in-the-middle or MITM attacks) and was addressed in the fourth section of the paper. === Public key infrastructure === Public key infrastructures (PKIs) have been proposed as a workaround for the problem of identity authentication. In their most usual implementation, each user applies to a “certificate authority” (CA), trusted by all parties, for a digital certificate which serves for other users as a non-tamperable authentication of identity. The infrastructure is safe, unless the CA itself is compromised. If it is, however, many PKIs provide a way to revoke certificates so other users will not trust them. Revoked certificates are usually put in certificate revocation lists which any certificate can be matched against. Several countries and other jurisdictions have passed legislation or issued regulations encouraging PKIs by giving (more or less) legal effect to these digital certificates (see digital signature). Many commercial firms, as well as a few government departments, have established such certificate authorities. This does nothing to solve the problem though, as the trustworthiness of the CA itself is still not guaranteed for any particular individual. It is a form of argument from authority fallacy. For actual trustworthiness, personal verification that the certificate belongs to the CA and establishment of trust in the CA are required. This is usually not possible. There are known cases where authoritarian governments proposed establishing so-called “national CAs” whose certificates would be mandatory to install on citizens’ devices and, once installed and trusted, could be used for monitoring, intercepting, modifying, or blocking the encrypted internet traffic.
For those new to such things, these arrangements are best thought of as electronic notary endorsements that “this public key belongs to this user”. As with notary endorsements, there can be mistakes or misunderstandings in such vouchings. Additionally, the notary itself can be untrusted. There have been several high-profile public failures by assorted certificate authorities. === Web of trust === At the other end of the conceptual range is the web of trust system, which avoids central Certificate Authorities entirely. Each user is responsible for getting a certificate from another user before using that certificate to communicate with that user. PGP and GPG (an implementation of the OpenPGP Internet Standard) employ just such a web of trust mechanism. === Password-authenticated key agreement === Password-authenticated key agreement algorithms can perform a cryptographic key exchange utilizing knowledge of a user's password. === Quantum key exchange === Quantum key distribution exploits certain properties of quantum physics to ensure its security. It relies on the fact that observations (or measurements) of a quantum state introduce perturbations in that state. Over many systems, these perturbations are detectable as noise by the receiver, making it possible to detect man-in-the-middle attacks. Besides the correctness and completeness of quantum mechanics, the protocol assumes the availability of an authenticated channel between Alice and Bob. == See also == Key (cryptography) Key management Diffie–Hellman key exchange Elliptic-curve Diffie–Hellman Forward secrecy == References == The possibility of Non-Secret digital encryption J. H. Ellis, January 1970. Non-Secret Encryption Using a Finite Field MJ Williamson, January 21, 1974. Thoughts on Cheaper Non-Secret Encryption MJ Williamson, August 10, 1976. New Directions in Cryptography W. Diffie and M. E. Hellman, IEEE Transactions on Information Theory, vol. IT-22, Nov. 1976, pp: 644–654.
Cryptographic apparatus and method Martin E. Hellman, Bailey W. Diffie, and Ralph C. Merkle, U.S. Patent #4,200,770, 29 April 1980 The First Ten Years of Public-Key Cryptography Whitfield Diffie, Proceedings of the IEEE, vol. 76, no. 5, May 1988, pp: 560–577 (1.9MB PDF file) Menezes, Alfred; van Oorschot, Paul; Vanstone, Scott (1997). Handbook of Applied Cryptography Boca Raton, Florida: CRC Press. ISBN 0-8493-8523-7. (Available online) Singh, Simon (1999) The Code Book: the evolution of secrecy from Mary Queen of Scots to quantum cryptography New York: Doubleday ISBN 0-385-49531-5
Wikipedia/Key-exchange_algorithm
A key in cryptography is a piece of information, usually a string of numbers or letters that is stored in a file, which, when processed through a cryptographic algorithm, can encode or decode cryptographic data. Depending on the method used, the key can be of different sizes and varieties, but in all cases, the strength of the encryption relies on the security of the key being maintained. A key's security strength is dependent on its algorithm, the size of the key, the generation of the key, and the process of key exchange. == Scope == The key is what is used to encrypt data from plaintext to ciphertext. There are different methods for utilizing keys and encryption. === Symmetric cryptography === Symmetric cryptography refers to the practice of the same key being used for both encryption and decryption. === Asymmetric cryptography === Asymmetric cryptography has separate keys for encrypting and decrypting. These keys are known as the public and private keys, respectively. == Purpose == Since the key protects the confidentiality and integrity of the system, it is important that it be kept secret from unauthorized parties. With public key cryptography, only the private key must be kept secret, but with symmetric cryptography, it is important to maintain the confidentiality of the key. Kerckhoffs's principle states that the entire security of the cryptographic system relies on the secrecy of the key. == Key sizes == Key size is the number of bits in the key defined by the algorithm. This size defines the upper bound of the cryptographic algorithm's security. The larger the key size, the longer it will take before the key is compromised by a brute force attack. Since perfect secrecy is not feasible for key algorithms, researchers are now more focused on computational security. In the past, keys were required to be a minimum of 40 bits in length; however, as technology advanced, these keys were being broken more and more quickly.
In response, the required sizes of symmetric keys were increased. Currently, 2048-bit RSA is commonly used, which is sufficient for current systems. However, current RSA key sizes would all be cracked quickly with a powerful quantum computer. "The keys used in public key cryptography have some mathematical structure. For example, public keys used in the RSA system are the product of two prime numbers. Thus public key systems require longer key lengths than symmetric systems for an equivalent level of security. 3072 bits is the suggested key length for systems based on factoring and integer discrete logarithms which aim to have security equivalent to a 128 bit symmetric cipher." == Key generation == To prevent a key from being guessed, keys need to be generated randomly and contain sufficient entropy. The problem of how to safely generate random keys is difficult and has been addressed in many ways by various cryptographic systems. A key can directly be generated by using the output of a Random Bit Generator (RBG), a system that generates a sequence of unpredictable and unbiased bits. An RBG can be used to directly produce either a symmetric key or the random output for an asymmetric key pair generation. Alternatively, a key can also be indirectly created during a key-agreement transaction, from another key or from a password. Some operating systems include tools for "collecting" entropy from the timing of unpredictable operations such as disk drive head movements. For the production of small amounts of keying material, ordinary dice provide a good source of high-quality randomness. == Establishment scheme == The security of a key is dependent on how a key is exchanged between parties. Establishing a secured communication channel is necessary so that outsiders cannot obtain the key. A key establishment scheme (or key exchange) is used to transfer an encryption key among entities.
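The random-bit-generator approach to key generation described above can be sketched with Python's standard `secrets` module (an illustrative sketch; `secrets` draws its randomness from the operating system's entropy source):

```python
import secrets

# Generate a 256-bit (32-byte) symmetric key from the OS entropy source.
key = secrets.token_bytes(32)
assert len(key) == 32

# A hex encoding is convenient for storage or transport in text protocols:
# 32 random bytes become 64 hexadecimal characters.
key_hex = secrets.token_hex(32)
assert len(key_hex) == 64
```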
Key agreement and key transport are the two types of key exchange scheme used to exchange keys remotely between entities. In a key agreement scheme, a secret key, which is used between the sender and the receiver to encrypt and decrypt information, is set up to be sent indirectly. All parties exchange information (the shared secret) that permits each party to derive the secret key material. In a key transport scheme, encrypted keying material that is chosen by the sender is transported to the receiver. Either symmetric key or asymmetric key techniques can be used in both schemes. The Diffie–Hellman key exchange and Rivest–Shamir–Adleman (RSA) are the two most widely used key exchange algorithms. In 1976, Whitfield Diffie and Martin Hellman constructed the Diffie–Hellman algorithm, which was the first public key algorithm. The Diffie–Hellman key exchange protocol allows key exchange over an insecure channel by electronically generating a shared key between two parties. On the other hand, RSA is a form of the asymmetric key system which consists of three steps: key generation, encryption, and decryption. Key confirmation delivers an assurance between the key confirmation recipient and provider that the shared keying materials are correct and established. The National Institute of Standards and Technology recommends that key confirmation be integrated into a key establishment scheme to validate its implementations. == Management == Key management concerns the generation, establishment, storage, usage and replacement of cryptographic keys. A key management system (KMS) typically includes three steps of establishing, storing and using keys. The base of security for the generation, storage, distribution, use and destruction of keys depends on successful key management protocols. == Key vs password == A password is a memorized series of characters including letters, digits, and other special symbols that are used to verify identity.
It is often produced by a human user or password management software to protect personal and sensitive information or to generate cryptographic keys. Passwords are often created to be memorized by users and may contain non-random information such as dictionary words. On the other hand, a key can help strengthen password protection by implementing a cryptographic algorithm that is difficult to guess, or it can replace the password altogether. A key is generated based on random or pseudo-random data and can often be unreadable to humans. A password is less safe than a cryptographic key due to its low entropy, lack of randomness, and human-readable form. However, the password may be the only secret data that is accessible to the cryptographic algorithm for information security in some applications such as securing information in storage devices. Thus, a deterministic algorithm called a key derivation function (KDF) uses a password to generate the secure cryptographic keying material to compensate for the password's weakness. Various methods such as adding a salt or key stretching may be used in the generation. == See also == == References ==
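The KDF approach just described can be sketched with PBKDF2 from Python's standard `hashlib`; the password, salt size, and iteration count below are illustrative assumptions, not recommendations:

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)          # random per-password salt
iterations = 600_000           # illustrative work factor (key stretching)

# Derive a 32-byte key from the low-entropy password.
key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
assert len(key) == 32

# The same password, salt and parameters always derive the same key,
# so the key never needs to be stored alongside the data it protects.
again = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
assert key == again
```

The salt defeats precomputed-table attacks and the iteration count slows brute-force guessing, which together compensate for the password's weakness as described above.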
Wikipedia/Cryptographic_key
In mathematics, Kähler differentials provide an adaptation of differential forms to arbitrary commutative rings or schemes. The notion was introduced by Erich Kähler in the 1930s. It was adopted as standard in commutative algebra and algebraic geometry somewhat later, once the need was felt to adapt methods from calculus and geometry over the complex numbers to contexts where such methods are not available. == Definition == Let R and S be commutative rings and $\varphi : R \to S$ be a ring homomorphism. An important example is for R a field and S a unital algebra over R (such as the coordinate ring of an affine variety). Kähler differentials formalize the observation that the derivatives of polynomials are again polynomial. In this sense, differentiation is a notion which can be expressed in purely algebraic terms. This observation can be turned into a definition of the module $\Omega_{S/R}$ of differentials in different, but equivalent ways. === Definition using derivations === An R-linear derivation on S is an R-module homomorphism $d : S \to M$ to an S-module M satisfying the Leibniz rule $d(fg) = f\,dg + g\,df$ (it automatically follows from this definition that the image of R is in the kernel of d). The module of Kähler differentials is defined as the S-module $\Omega_{S/R}$ for which there is a universal derivation $d : S \to \Omega_{S/R}$. As with other universal properties, this means that d is the best possible derivation in the sense that any other derivation may be obtained from it by composition with an S-module homomorphism. In other words, the composition with d provides, for every S-module M, an S-module isomorphism $\operatorname{Hom}_S(\Omega_{S/R}, M) \xrightarrow{\;\cong\;} \operatorname{Der}_R(S, M)$. One construction of $\Omega_{S/R}$ and d proceeds by constructing a free S-module with one formal generator ds for each s in S, and imposing the relations dr = 0, d(s + t) = ds + dt, d(st) = s dt + t ds, for all r in R and all s and t in S. The universal derivation sends s to ds. The relations imply that the universal derivation is a homomorphism of R-modules. === Definition using the augmentation ideal === Another construction proceeds by letting I be the ideal in the tensor product $S \otimes_R S$ defined as the kernel of the multiplication map $S \otimes_R S \to S$, $\sum s_i \otimes t_i \mapsto \sum s_i \cdot t_i$. Then the module of Kähler differentials of S can be equivalently defined by $\Omega_{S/R} = I/I^2$, and the universal derivation is the homomorphism d defined by $ds = 1 \otimes s - s \otimes 1$. This construction is equivalent to the previous one because I is the kernel of the projection $S \otimes_R S \to S \otimes_R R$, $\sum s_i \otimes t_i \mapsto \sum s_i \cdot t_i \otimes 1$. Thus we have $S \otimes_R S \equiv I \oplus S \otimes_R R$. Then $S \otimes_R S / S \otimes_R R$ may be identified with I by the map induced by the complementary projection $\sum s_i \otimes t_i \mapsto \sum s_i \otimes t_i - \sum s_i \cdot t_i \otimes 1$. This identifies I with the S-module generated by the formal generators ds for s in S, subject to d being a homomorphism of R-modules which sends each element of R to zero.
Taking the quotient by $I^2$ precisely imposes the Leibniz rule. == Examples and basic facts == For any commutative ring R, the Kähler differentials of the polynomial ring $S = R[t_1, \dots, t_n]$ are a free S-module of rank n generated by the differentials of the variables: $\Omega^1_{R[t_1,\dots,t_n]/R} = \bigoplus_{i=1}^n R[t_1, \dots, t_n]\,dt_i$. Kähler differentials are compatible with extension of scalars, in the sense that for a second R-algebra R′ and $S' = S \otimes_R R'$, there is an isomorphism $\Omega_{S/R} \otimes_S S' \cong \Omega_{S'/R'}$. As a particular case of this, Kähler differentials are compatible with localizations, meaning that if W is a multiplicative set in S, then there is an isomorphism $W^{-1}\Omega_{S/R} \cong \Omega_{W^{-1}S/R}$. Given two ring homomorphisms $R \to S \to T$, there is a short exact sequence of T-modules $\Omega_{S/R} \otimes_S T \to \Omega_{T/R} \to \Omega_{T/S} \to 0$. If $T = S/I$ for some ideal I, the term $\Omega_{T/S}$ vanishes and the sequence can be continued at the left as follows: $I/I^2 \xrightarrow{[f] \mapsto df \otimes 1} \Omega_{S/R} \otimes_S T \to \Omega_{T/R} \to 0$. A generalization of these two short exact sequences is provided by the cotangent complex. The latter sequence and the above computation for the polynomial ring allow the computation of the Kähler differentials of finitely generated R-algebras $T = R[t_1, \ldots, t_n]/(f_1, \ldots, f_m)$.
Briefly, these are generated by the differentials of the variables and have relations coming from the differentials of the equations. For example, for a single polynomial in a single variable, $\Omega_{(R[t]/(f))/R} \cong (R[t]\,dt \otimes R[t]/(f))/(df) \cong R[t]/(f, df/dt)\,dt$. == Kähler differentials for schemes == Because Kähler differentials are compatible with localization, they may be constructed on a general scheme by performing either of the two definitions above on affine open subschemes and gluing. However, the second definition has a geometric interpretation that globalizes immediately. In this interpretation, I represents the ideal defining the diagonal in the fiber product of Spec(S) with itself over Spec(S) → Spec(R). This construction therefore has a more geometric flavor, in the sense that the notion of first infinitesimal neighbourhood of the diagonal is thereby captured, via functions vanishing modulo functions vanishing at least to second order (see cotangent space for related notions). Moreover, it extends to a general morphism of schemes $f : X \to Y$ by setting $\mathcal{I}$ to be the ideal of the diagonal in the fiber product $X \times_Y X$. The cotangent sheaf $\Omega_{X/Y} = \mathcal{I}/\mathcal{I}^2$, together with the derivation $d : \mathcal{O}_X \to \Omega_{X/Y}$ defined analogously to before, is universal among $f^{-1}\mathcal{O}_Y$-linear derivations of $\mathcal{O}_X$-modules. If U is an open affine subscheme of X whose image in Y is contained in an open affine subscheme V, then the cotangent sheaf restricts to a sheaf on U which is similarly universal.
It is therefore the sheaf associated to the module of Kähler differentials for the rings underlying U and V. Similar to the commutative algebra case, there exist exact sequences associated to morphisms of schemes. Given morphisms f : X → Y {\displaystyle f:X\to Y} and g : Y → Z {\displaystyle g:Y\to Z} of schemes there is an exact sequence of sheaves on X {\displaystyle X} f ∗ Ω Y / Z → Ω X / Z → Ω X / Y → 0 {\displaystyle f^{*}\Omega _{Y/Z}\to \Omega _{X/Z}\to \Omega _{X/Y}\to 0} Also, if X ⊂ Y {\displaystyle X\subset Y} is a closed subscheme given by the ideal sheaf I {\displaystyle {\mathcal {I}}} , then Ω X / Y = 0 {\displaystyle \Omega _{X/Y}=0} and there is an exact sequence of sheaves on X {\displaystyle X} I / I 2 → Ω Y / Z | X → Ω X / Z → 0 {\displaystyle {\mathcal {I}}/{\mathcal {I}}^{2}\to \Omega _{Y/Z}|_{X}\to \Omega _{X/Z}\to 0} === Examples === ==== Finite separable field extensions ==== If K / k {\displaystyle K/k} is a finite field extension, then Ω K / k 1 = 0 {\displaystyle \Omega _{K/k}^{1}=0} if and only if K / k {\displaystyle K/k} is separable. Consequently, if K / k {\displaystyle K/k} is a finite separable field extension and π : Y → Spec ⁡ ( K ) {\displaystyle \pi :Y\to \operatorname {Spec} (K)} is a smooth variety (or scheme), then the relative cotangent sequence π ∗ Ω K / k 1 → Ω Y / k 1 → Ω Y / K 1 → 0 {\displaystyle \pi ^{*}\Omega _{K/k}^{1}\to \Omega _{Y/k}^{1}\to \Omega _{Y/K}^{1}\to 0} proves Ω Y / k 1 ≅ Ω Y / K 1 {\displaystyle \Omega _{Y/k}^{1}\cong \Omega _{Y/K}^{1}} . ==== Cotangent modules of a projective variety ==== Given a projective scheme X ∈ Sch ⁡ / k {\displaystyle X\in \operatorname {Sch} /\mathbb {k} } , its cotangent sheaf can be computed from the sheafification of the cotangent module on the underlying graded algebra. 
For example, consider the complex curve Proj ⁡ ( C [ x , y , z ] ( x n + y n − z n ) ) = Proj ⁡ ( R ) {\displaystyle \operatorname {Proj} \left({\frac {\mathbb {C} [x,y,z]}{(x^{n}+y^{n}-z^{n})}}\right)=\operatorname {Proj} (R)} then we can compute the cotangent module as Ω R / C = R ⋅ d x ⊕ R ⋅ d y ⊕ R ⋅ d z n x n − 1 d x + n y n − 1 d y − n z n − 1 d z {\displaystyle \Omega _{R/\mathbb {C} }={\frac {R\cdot dx\oplus R\cdot dy\oplus R\cdot dz}{nx^{n-1}dx+ny^{n-1}dy-nz^{n-1}dz}}} Then, Ω X / C = Ω R / C ~ {\displaystyle \Omega _{X/\mathbb {C} }={\widetilde {\Omega _{R/\mathbb {C} }}}} ==== Morphisms of schemes ==== Consider the morphism X = Spec ⁡ ( C [ t , x , y ] ( x y − t ) ) = Spec ⁡ ( R ) → Spec ⁡ ( C [ t ] ) = Y {\displaystyle X=\operatorname {Spec} \left({\frac {\mathbb {C} [t,x,y]}{(xy-t)}}\right)=\operatorname {Spec} (R)\to \operatorname {Spec} (\mathbb {C} [t])=Y} in Sch ⁡ / C {\displaystyle \operatorname {Sch} /\mathbb {C} } . Then, using the first sequence we see that R ⋅ d t ~ → R ⋅ d t ⊕ R ⋅ d x ⊕ R ⋅ d y y d x + x d y − d t ~ → Ω X / Y → 0 {\displaystyle {\widetilde {R\cdot dt}}\to {\widetilde {\frac {R\cdot dt\oplus R\cdot dx\oplus R\cdot dy}{ydx+xdy-dt}}}\to \Omega _{X/Y}\to 0} hence Ω X / Y = R ⋅ d x ⊕ R ⋅ d y y d x + x d y ~ {\displaystyle \Omega _{X/Y}={\widetilde {\frac {R\cdot dx\oplus R\cdot dy}{ydx+xdy}}}} == Higher differential forms and algebraic de Rham cohomology == === de Rham complex === As before, fix a map X → Y {\displaystyle X\to Y} . Differential forms of higher degree are defined as the exterior powers (over O X {\displaystyle {\mathcal {O}}_{X}} ), Ω X / Y n := ⋀ n Ω X / Y . 
{\displaystyle \Omega _{X/Y}^{n}:=\bigwedge ^{n}\Omega _{X/Y}.} The derivation O X → Ω X / Y {\displaystyle {\mathcal {O}}_{X}\to \Omega _{X/Y}} extends in a natural way to a sequence of maps 0 → O X → d Ω X / Y 1 → d Ω X / Y 2 → d ⋯ {\displaystyle 0\to {\mathcal {O}}_{X}{\xrightarrow {d}}\Omega _{X/Y}^{1}{\xrightarrow {d}}\Omega _{X/Y}^{2}{\xrightarrow {d}}\cdots } satisfying d ∘ d = 0. {\displaystyle d\circ d=0.} This is a cochain complex known as the de Rham complex. The de Rham complex enjoys an additional multiplicative structure, the wedge product Ω X / Y n ⊗ Ω X / Y m → Ω X / Y n + m . {\displaystyle \Omega _{X/Y}^{n}\otimes \Omega _{X/Y}^{m}\to \Omega _{X/Y}^{n+m}.} This turns the de Rham complex into a commutative differential graded algebra. It also has a coalgebra structure inherited from the one on the exterior algebra. === de Rham cohomology === The hypercohomology of the de Rham complex of sheaves is called the algebraic de Rham cohomology of X over Y and is denoted by H dR n ( X / Y ) {\displaystyle H_{\text{dR}}^{n}(X/Y)} or just H dR n ( X ) {\displaystyle H_{\text{dR}}^{n}(X)} if Y is clear from the context. (In many situations, Y is the spectrum of a field of characteristic zero.) Algebraic de Rham cohomology was introduced by Grothendieck (1966a). It is closely related to crystalline cohomology. As is familiar from coherent cohomology of other quasi-coherent sheaves, the computation of de Rham cohomology is simplified when X = Spec S and Y = Spec R are affine schemes. In this case, because affine schemes have no higher cohomology, H dR n ( X / Y ) {\displaystyle H_{\text{dR}}^{n}(X/Y)} can be computed as the cohomology of the complex of abelian groups 0 → S → d Ω S / R 1 → d Ω S / R 2 → d ⋯ {\displaystyle 0\to S{\xrightarrow {d}}\Omega _{S/R}^{1}{\xrightarrow {d}}\Omega _{S/R}^{2}{\xrightarrow {d}}\cdots } which is, termwise, the global sections of the sheaves Ω X / Y r {\displaystyle \Omega _{X/Y}^{r}} . 
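On an affine piece, the identity d ∘ d = 0 in the lowest degree is just the equality of mixed partial derivatives. A minimal sympy check in two variables (the choice of f is arbitrary):

```python
from sympy import symbols, diff, expand

x, y = symbols('x y')
f = x**3 * y + y**2 - 5 * x

# df = f_x dx + f_y dy; applying d again gives (d(f_y)/dx - d(f_x)/dy) dx∧dy,
# which vanishes by symmetry of mixed partials -- this is d∘d = 0 in degree 0.
a, b = diff(f, x), diff(f, y)
ddf = expand(diff(b, x) - diff(a, y))
print(ddf)  # 0
```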
To take a very particular example, suppose that X = Spec ⁡ Q [ x , x − 1 ] {\displaystyle X=\operatorname {Spec} \mathbb {Q} \left[x,x^{-1}\right]} is the multiplicative group over Q . {\displaystyle \mathbb {Q} .} Because this is an affine scheme, hypercohomology reduces to ordinary cohomology. The algebraic de Rham complex is Q [ x , x − 1 ] → d Q [ x , x − 1 ] d x . {\displaystyle \mathbb {Q} [x,x^{-1}]{\xrightarrow {d}}\mathbb {Q} [x,x^{-1}]\,dx.} The differential d obeys the usual rules of calculus, meaning d ( x n ) = n x n − 1 d x . {\displaystyle d(x^{n})=nx^{n-1}\,dx.} The kernel and cokernel compute algebraic de Rham cohomology, so H dR 0 ( X ) = Q H dR 1 ( X ) = Q ⋅ x − 1 d x {\displaystyle {\begin{aligned}H_{\text{dR}}^{0}(X)&=\mathbb {Q} \\H_{\text{dR}}^{1}(X)&=\mathbb {Q} \cdot x^{-1}dx\end{aligned}}} and all other algebraic de Rham cohomology groups are zero. By way of comparison, the algebraic de Rham cohomology groups of Y = Spec ⁡ F p [ x , x − 1 ] {\displaystyle Y=\operatorname {Spec} \mathbb {F} _{p}\left[x,x^{-1}\right]} are much larger, namely, H dR 0 ( Y ) = ⨁ k ∈ Z F p ⋅ x k p H dR 1 ( Y ) = ⨁ k ∈ Z F p ⋅ x k p − 1 d x {\displaystyle {\begin{aligned}H_{\text{dR}}^{0}(Y)&=\bigoplus _{k\in \mathbb {Z} }\mathbb {F} _{p}\cdot x^{kp}\\H_{\text{dR}}^{1}(Y)&=\bigoplus _{k\in \mathbb {Z} }\mathbb {F} _{p}\cdot x^{kp-1}\,dx\end{aligned}}} Since the Betti numbers of these cohomology groups are not what is expected, crystalline cohomology was developed to remedy this issue; it defines a Weil cohomology theory over finite fields. 
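Both computations can be replayed mechanically by encoding a Laurent polynomial as a dict mapping exponents to coefficients (the function names d and d_mod_p below are ours). Over Q every monomial x^m dx with m ≠ −1 is exact while x^(−1) dx is not; mod p, every exponent divisible by p is killed by the differential, producing the extra classes noted above:

```python
from fractions import Fraction

def d(f):
    # d sends c*x^e to (e*c)*x^(e-1) dx; a 1-form g dx is stored
    # the same way as the Laurent polynomial g.
    return {e - 1: e * c for e, c in f.items() if e * c != 0}

# H^0 over Q: the kernel of d is exactly the constants.
assert d({0: Fraction(3)}) == {}

# Over Q, x^m dx is exact for every m != -1, being d(x^(m+1)/(m+1)) ...
for m in (-3, -2, 0, 1, 5):
    assert d({m + 1: Fraction(1, m + 1)}) == {m: Fraction(1)}
# ... but x^(-1) dx is not: d(c*x^e) has coefficient e*c in degree e - 1,
# and e*c = 0 forces e = 0 over Q.  Hence H^1 = Q * x^(-1) dx.

# Mod p the picture changes: d(x^(kp)) = kp * x^(kp-1) = 0, so each x^(kp)
# adds a class to H^0 and each x^(kp-1) dx adds a class to H^1.
p = 5
def d_mod_p(f):
    return {e - 1: (e * c) % p for e, c in f.items() if (e * c) % p != 0}

assert d_mod_p({p: 1}) == {}       # x^5 is closed over F_5
assert d_mod_p({2 * p: 3}) == {}   # so is 3*x^10
```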
=== Grothendieck's comparison theorem === If X is a smooth complex algebraic variety, there is a natural comparison map of complexes of sheaves Ω X / C ∙ ( − ) → Ω X an ∙ ( ( − ) an ) {\displaystyle \Omega _{X/\mathbb {C} }^{\bullet }(-)\to \Omega _{X^{\text{an}}}^{\bullet }((-)^{\text{an}})} between the algebraic de Rham complex and the smooth de Rham complex defined in terms of (complex-valued) differential forms on X an {\displaystyle X^{\text{an}}} , the complex manifold associated to X. Here, ( − ) an {\textstyle (-)^{\text{an}}} denotes the complex analytification functor. This map is far from being an isomorphism. Nonetheless, Grothendieck (1966a) showed that the comparison map induces an isomorphism H dR ∗ ( X / C ) ≅ H dR ∗ ( X an ) {\displaystyle H_{\text{dR}}^{\ast }(X/\mathbb {C} )\cong H_{\text{dR}}^{\ast }(X^{\text{an}})} from algebraic to smooth de Rham cohomology (and thus to singular cohomology H sing ∗ ( X an ; C ) {\textstyle H_{\text{sing}}^{*}(X^{\text{an}};\mathbb {C} )} by de Rham's theorem). In particular, if X is a smooth affine algebraic variety embedded in C n {\textstyle \mathbb {C} ^{n}} , then the inclusion of the subcomplex of algebraic differential forms into that of all smooth forms on X is a quasi-isomorphism. For example, if X = { ( w , z ) ∈ C 2 : w z = 1 } {\displaystyle X=\{(w,z)\in \mathbb {C} ^{2}:wz=1\}} , then as shown above, the computation of algebraic de Rham cohomology gives explicit generators { 1 , z − 1 d z } {\textstyle \{1,z^{-1}dz\}} for H dR 0 ( X / C ) {\displaystyle H_{\text{dR}}^{0}(X/\mathbb {C} )} and H dR 1 ( X / C ) {\displaystyle H_{\text{dR}}^{1}(X/\mathbb {C} )} , respectively, while all other cohomology groups vanish. Since X is homotopy equivalent to a circle, this is as predicted by Grothendieck's theorem. 
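The comparison can even be probed numerically: pairing the algebraic generator z^(−1) dz with the generator of H_1 of X^an (the unit circle) yields the period 2πi, matching the singular cohomology of the circle. A crude midpoint-rule approximation of the contour integral:

```python
import cmath

# Approximate the contour integral of dz/z over the unit circle by a
# polygonal midpoint rule; it converges to 2*pi*i, the period pairing
# of the algebraic class z^{-1} dz with the circle.
N = 10_000
total = 0j
for k in range(N):
    z0 = cmath.exp(2j * cmath.pi * k / N)
    z1 = cmath.exp(2j * cmath.pi * (k + 1) / N)
    total += (z1 - z0) / ((z0 + z1) / 2)

print(total)  # close to 2*pi*i = 0 + 6.283185...j
```

The same number 2πi reappears below in the section on periods, where it is the prototypical example.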
Counterexamples in the singular case can be found with non-Du Bois singularities such as the graded ring k [ x , y ] / ( y 2 − x 3 ) {\displaystyle k[x,y]/(y^{2}-x^{3})} where deg ⁡ ( y ) = 3 {\displaystyle \deg(y)=3} and deg ⁡ ( x ) = 2 {\displaystyle \deg(x)=2} . Other counterexamples can be found in algebraic plane curves with isolated singularities whose Milnor and Tjurina numbers are not equal. A proof of Grothendieck's theorem using the concept of a mixed Weil cohomology theory was given by Cisinski & Déglise (2013). == Applications == === Canonical divisor === If X is a smooth variety over a field k, then Ω X / k {\displaystyle \Omega _{X/k}} is a vector bundle (i.e., a locally free O X {\displaystyle {\mathcal {O}}_{X}} -module) of rank equal to the dimension of X. This implies, in particular, that ω X / k := ⋀ dim ⁡ X Ω X / k {\displaystyle \omega _{X/k}:=\bigwedge ^{\dim X}\Omega _{X/k}} is a line bundle or, equivalently, a divisor. It is referred to as the canonical divisor. The canonical divisor is, as it turns out, a dualizing complex and therefore appears in various important theorems in algebraic geometry such as Serre duality or Verdier duality. === Classification of algebraic curves === The geometric genus of a smooth algebraic variety X of dimension d over a field k is defined as the dimension g := dim ⁡ H 0 ( X , Ω X / k d ) . {\displaystyle g:=\dim H^{0}(X,\Omega _{X/k}^{d}).} For curves, this purely algebraic definition agrees with the topological definition (for k = C {\displaystyle k=\mathbb {C} } ) as the "number of handles" of the Riemann surface associated to X. There is a rather sharp trichotomy of geometric and arithmetic properties depending on the genus of a curve, for g being 0 (rational curves), 1 (elliptic curves), and greater than 1 (hyperbolic Riemann surfaces, including hyperelliptic curves), respectively.
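For smooth plane curves this trichotomy can be indexed by the degree: the classical genus–degree formula (quoted here without proof) gives g = (d − 1)(d − 2)/2 for a smooth plane curve of degree d, so degrees 1 and 2 are rational, degree 3 is elliptic, and degree ≥ 4 lands in the hyperbolic range:

```python
def plane_curve_genus(d):
    # Genus-degree formula for a smooth plane curve of degree d.
    return (d - 1) * (d - 2) // 2

for d in range(1, 6):
    print(d, plane_curve_genus(d))
# d = 1, 2 give g = 0 (rational curves), d = 3 gives g = 1 (elliptic
# curves), and d >= 4 gives g > 1.
```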
=== Tangent bundle and Riemann–Roch theorem === The tangent bundle of a smooth variety X is, by definition, the dual of the cotangent sheaf Ω X / k {\displaystyle \Omega _{X/k}} . The Riemann–Roch theorem and its far-reaching generalization, the Grothendieck–Riemann–Roch theorem, contain as a crucial ingredient the Todd class of the tangent bundle. === Unramified and smooth morphisms === The sheaf of differentials is related to various algebro-geometric notions. A morphism f : X → Y {\displaystyle f:X\to Y} of schemes is unramified if and only if Ω X / Y {\displaystyle \Omega _{X/Y}} is zero. A special case of this assertion is that for a field k, K := k [ t ] / f {\displaystyle K:=k[t]/f} is separable over k iff Ω K / k = 0 {\displaystyle \Omega _{K/k}=0} , which can also be read off the above computation. A morphism f of finite type is a smooth morphism if it is flat and if Ω X / Y {\displaystyle \Omega _{X/Y}} is a locally free O X {\displaystyle {\mathcal {O}}_{X}} -module of appropriate rank. The computation of Ω R [ t 1 , … , t n ] / R {\displaystyle \Omega _{R[t_{1},\ldots ,t_{n}]/R}} above shows that the projection from affine space A R n → Spec ⁡ ( R ) {\displaystyle \mathbb {A} _{R}^{n}\to \operatorname {Spec} (R)} is smooth. === Periods === Periods are, broadly speaking, integrals of certain arithmetically defined differential forms. The simplest example of a period is 2 π i {\displaystyle 2\pi i} , which arises as ∫ S 1 d z z = 2 π i . {\displaystyle \int _{S^{1}}{\frac {dz}{z}}=2\pi i.} Algebraic de Rham cohomology is used to construct periods as follows: For an algebraic variety X defined over Q , {\displaystyle \mathbb {Q} ,} the above-mentioned compatibility with base-change yields a natural isomorphism H dR n ( X / Q ) ⊗ Q C = H dR n ( X ⊗ Q C / C ) . 
{\displaystyle H_{\text{dR}}^{n}(X/\mathbb {Q} )\otimes _{\mathbb {Q} }\mathbb {C} =H_{\text{dR}}^{n}(X\otimes _{\mathbb {Q} }\mathbb {C} /\mathbb {C} ).} On the other hand, the right hand cohomology group is isomorphic to de Rham cohomology of the complex manifold X an {\displaystyle X^{\text{an}}} associated to X, denoted here H dR n ( X an ) . {\displaystyle H_{\text{dR}}^{n}(X^{\text{an}}).} Yet another classical result, de Rham's theorem, asserts an isomorphism of the latter cohomology group with singular cohomology (or sheaf cohomology) with complex coefficients, H n ( X an , C ) {\displaystyle H^{n}(X^{\text{an}},\mathbb {C} )} , which by the universal coefficient theorem is in its turn isomorphic to H n ( X an , Q ) ⊗ Q C . {\displaystyle H^{n}(X^{\text{an}},\mathbb {Q} )\otimes _{\mathbb {Q} }\mathbb {C} .} Composing these isomorphisms yields two rational vector spaces which, after tensoring with C {\displaystyle \mathbb {C} } become isomorphic. Choosing bases of these rational subspaces (also called lattices), the determinant of the base-change matrix is a complex number, well defined up to multiplication by a rational number. Such numbers are periods. === Algebraic number theory === In algebraic number theory, Kähler differentials may be used to study the ramification in an extension of algebraic number fields. If L / K is a finite extension with rings of integers R and S respectively then the different ideal δL / K, which encodes the ramification data, is the annihilator of the R-module ΩR/S: δ L / K = { x ∈ R : x d y = 0 for all y ∈ R } . {\displaystyle \delta _{L/K}=\{x\in R:x\,dy=0{\text{ for all }}y\in R\}.} == Related notions == Hochschild homology is a homology theory for associative rings that turns out to be closely related to Kähler differentials. 
This is because of the Hochschild–Kostant–Rosenberg theorem, which states that the Hochschild homology H H ∙ ( R ) {\displaystyle HH_{\bullet }(R)} of the coordinate ring R of a smooth affine variety is isomorphic to the de Rham complex Ω R / k ∙ {\displaystyle \Omega _{R/k}^{\bullet }} for k {\displaystyle k} a field of characteristic 0 {\displaystyle 0} . A derived enhancement of this theorem states that the Hochschild homology of a differential graded algebra is isomorphic to the derived de Rham complex. The de Rham–Witt complex is, in very rough terms, an enhancement of the de Rham complex for the ring of Witt vectors. == Notes == == References == Cisinski, Denis-Charles; Déglise, Frédéric (2013), "Mixed Weil cohomologies", Advances in Mathematics, 230 (1): 55–130, arXiv:0712.3291, doi:10.1016/j.aim.2011.10.021 Grothendieck, Alexander (1966a), "On the de Rham cohomology of algebraic varieties", Publications Mathématiques de l'IHÉS, 29 (29): 95–103, doi:10.1007/BF02684807, ISSN 0073-8301, MR 0199194, S2CID 123434721 (letter to Michael Atiyah, October 14, 1963) Grothendieck, Alexander (1966b), Letter to John Tate (PDF) Grothendieck, Alexander (1968), "Crystals and the de Rham cohomology of schemes" (PDF), in Giraud, Jean; Grothendieck, Alexander; Kleiman, Steven L.; et al. (eds.), Dix Exposés sur la Cohomologie des Schémas, Advanced studies in pure mathematics, vol. 3, Amsterdam: North-Holland, pp. 306–358, MR 0269663 Johnson, James (1969), "Kähler differentials and differential algebra", Annals of Mathematics, 89 (1): 92–98, doi:10.2307/1970810, JSTOR 1970810, Zbl 0179.34302 Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157 Matsumura, Hideyuki (1986), Commutative ring theory, Cambridge University Press Neukirch, Jürgen (1999), Algebraische Zahlentheorie, Grundlehren der mathematischen Wissenschaften, vol.
322, Berlin: Springer-Verlag, ISBN 978-3-540-65399-8, MR 1697859, Zbl 0956.11021 Rosenlicht, M. (1976), "On Liouville's theory of elementary functions" (PDF), Pacific Journal of Mathematics, 65 (2): 485–492, doi:10.2140/pjm.1976.65.485, Zbl 0318.12107 Fu, Guofeng; Halás, Miroslav; Li, Ziming (2011), "Some remarks on Kähler differentials and ordinary differentials in nonlinear control systems", Systems and Control Letters, 60: 699–703, doi:10.1016/j.sysconle.2011.05.006 == External links == Notes on p-adic algebraic de-Rham cohomology - gives many computations over characteristic 0 as motivation A thread devoted to the relation on algebraic and analytic differential forms Differentials (Stacks project)
Wikipedia/Algebraic_de_Rham_cohomology
In mathematics, Hodge–Arakelov theory of elliptic curves is an analogue of classical and p-adic Hodge theory for elliptic curves carried out in the framework of Arakelov theory. It was introduced by Mochizuki (1999). It bears the name of two mathematicians, Suren Arakelov and W. V. D. Hodge. The main comparison in his theory remains unpublished as of 2019. Mochizuki's main comparison theorem in Hodge–Arakelov theory states (roughly) that the space of polynomial functions of degree less than d on the universal extension of a smooth elliptic curve in characteristic 0 is naturally isomorphic (via restriction) to the d2-dimensional space of functions on the d-torsion points. It is called a 'comparison theorem' as it is an analogue for Arakelov theory of comparison theorems in cohomology relating de Rham cohomology to singular cohomology of complex varieties or étale cohomology of p-adic varieties. In Mochizuki (1999) and Mochizuki (2002a) he pointed out that the arithmetic Kodaira–Spencer map and the Gauss–Manin connection may give important hints toward Vojta's conjecture, the abc conjecture, and related problems; in 2012, he published his inter-universal Teichmüller theory, which does not use Hodge–Arakelov theory directly but instead relies on the theory of Frobenioids, anabelioids, and mono-anabelian geometry. == See also == Hodge theory Arakelov theory P-adic Hodge theory Inter-universal Teichmüller theory == References == Mochizuki, Shinichi (1999), The Hodge-Arakelov theory of elliptic curves: global discretization of local Hodge theories (PDF), Preprint No. 1255/1256, Res. Inst. Math. Sci., Kyoto Univ., Kyoto Mochizuki, Shinichi (2002a), "A survey of the Hodge-Arakelov theory of elliptic curves. I", in Fried, Michael D.; Ihara, Yasutaka (eds.), Arithmetic fundamental groups and noncommutative algebra (Berkeley, CA, 1999) (PDF), Proc. Sympos. Pure Math., vol. 70, Providence, R.I.: American Mathematical Society, pp.
533–569, ISBN 978-0-8218-2036-0, MR 1935421 Mochizuki, Shinichi (2002b), "A survey of the Hodge-Arakelov theory of elliptic curves. II", Algebraic geometry 2000, Azumino (Hotaka) (PDF), Adv. Stud. Pure Math., vol. 36, Tokyo: Math. Soc. Japan, pp. 81–114, ISBN 978-4-931469-20-4, MR 1971513
Wikipedia/Hodge-Arakelov_theory
In mathematical analysis, a metric space M is called complete (or a Cauchy space) if every Cauchy sequence of points in M has a limit that is also in M. Intuitively, a space is complete if there are no "points missing" from it (inside or at the boundary). For instance, the set of rational numbers is not complete, because e.g. 2 {\displaystyle {\sqrt {2}}} is "missing" from it, even though one can construct a Cauchy sequence of rational numbers that converges to it (see further examples below). It is always possible to "fill all the holes", leading to the completion of a given space, as explained below. == Definition == Cauchy sequence A sequence x 1 , x 2 , x 3 , … {\displaystyle x_{1},x_{2},x_{3},\ldots } of elements of a metric space ( X , d ) {\displaystyle (X,d)} is called Cauchy if for every positive real number r > 0 {\displaystyle r>0} there is a positive integer N {\displaystyle N} such that for all positive integers m , n > N , {\displaystyle m,n>N,} d ( x m , x n ) < r . {\displaystyle d(x_{m},x_{n})<r.} Complete space A metric space ( X , d ) {\displaystyle (X,d)} is complete if any of the following equivalent conditions are satisfied: Every Cauchy sequence in X {\displaystyle X} converges in X {\displaystyle X} (that is, has a limit that is also in X {\displaystyle X} ). Every decreasing sequence of non-empty closed subsets of X , {\displaystyle X,} with diameters tending to 0, has a non-empty intersection: if F n {\displaystyle F_{n}} is closed and non-empty, F n + 1 ⊆ F n {\displaystyle F_{n+1}\subseteq F_{n}} for every n , {\displaystyle n,} and diam ⁡ ( F n ) → 0 , {\displaystyle \operatorname {diam} \left(F_{n}\right)\to 0,} then there is a unique point x ∈ X {\displaystyle x\in X} common to all sets F n . {\displaystyle F_{n}.} == Examples == The space Q {\displaystyle \mathbb {Q} } of rational numbers, with the standard metric given by the absolute value of the difference, is not complete.
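This incompleteness can be watched in exact arithmetic: the recursion examined in the next paragraph, x_{n+1} = x_n/2 + 1/x_n, stays inside the rationals, yet x_n² → 2, so the only candidate limit is the missing point √2. A sketch with Python's Fraction:

```python
from fractions import Fraction

# Newton's iteration for sqrt(2), run entirely inside Q.
x = Fraction(1)
iterates = [x]
for _ in range(6):
    x = x / 2 + 1 / x
    iterates.append(x)

print(iterates[3])                   # 577/408, already within 1e-5 of sqrt(2)
print(float(iterates[-1] ** 2 - 2))  # essentially 0: the would-be limit squares to 2
# Successive iterates cluster together (the sequence is Cauchy), but the
# limit, sqrt(2), is not a rational number.
```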
Consider for instance the sequence defined by x 1 = 1 {\displaystyle x_{1}=1\;} and x n + 1 = x n 2 + 1 x n . {\displaystyle \;x_{n+1}={\frac {x_{n}}{2}}+{\frac {1}{x_{n}}}.} This is a Cauchy sequence of rational numbers, but it does not converge towards any rational limit: If the sequence did have a limit x , {\displaystyle x,} then by solving x = x 2 + 1 x {\displaystyle x={\frac {x}{2}}+{\frac {1}{x}}} necessarily x 2 = 2 , {\displaystyle x^{2}=2,} yet no rational number has this property. However, considered as a sequence of real numbers, it does converge to the irrational number 2 {\displaystyle {\sqrt {2}}} . The open interval (0,1), again with the absolute difference metric, is not complete either. The sequence defined by x n = 1 n {\displaystyle x_{n}={\tfrac {1}{n}}} is Cauchy, but does not have a limit in the given space. However the closed interval [0,1] is complete; for example the given sequence does have a limit in this interval, namely zero. The space R {\displaystyle \mathbb {R} } of real numbers and the space C {\displaystyle \mathbb {C} } of complex numbers (with the metric given by the absolute difference) are complete, and so is Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , with the usual distance metric. In contrast, infinite-dimensional normed vector spaces may or may not be complete; those that are complete are Banach spaces. The space C[a, b] of continuous real-valued functions on a closed and bounded interval is a Banach space, and so a complete metric space, with respect to the supremum norm. However, the supremum norm does not give a norm on the space C(a, b) of continuous functions on (a, b), for it may contain unbounded functions. Instead, with the topology of compact convergence, C(a, b) can be given the structure of a Fréchet space: a locally convex topological vector space whose topology can be induced by a complete translation-invariant metric. The space Qp of p-adic numbers is complete for any prime number p . 
{\displaystyle p.} This space completes Q with the p-adic metric in the same way that R completes Q with the usual metric. If S {\displaystyle S} is an arbitrary set, then the set SN of all sequences in S {\displaystyle S} becomes a complete metric space if we define the distance between the sequences ( x n ) {\displaystyle \left(x_{n}\right)} and ( y n ) {\displaystyle \left(y_{n}\right)} to be 1 N {\displaystyle {\tfrac {1}{N}}} where N {\displaystyle N} is the smallest index for which x N {\displaystyle x_{N}} is distinct from y N {\displaystyle y_{N}} or 0 {\displaystyle 0} if there is no such index. This space is homeomorphic to the product of a countable number of copies of the discrete space S . {\displaystyle S.} Riemannian manifolds which are complete are called geodesic manifolds; completeness follows from the Hopf–Rinow theorem. == Some theorems == Every compact metric space is complete, though complete spaces need not be compact. In fact, a metric space is compact if and only if it is complete and totally bounded. This is a generalization of the Heine–Borel theorem, which states that any closed and bounded subspace S {\displaystyle S} of Rn is compact and therefore complete. Let ( X , d ) {\displaystyle (X,d)} be a complete metric space. If A ⊆ X {\displaystyle A\subseteq X} is a closed set, then A {\displaystyle A} is also complete. Let ( X , d ) {\displaystyle (X,d)} be a metric space. If A ⊆ X {\displaystyle A\subseteq X} is a complete subspace, then A {\displaystyle A} is also closed. If X {\displaystyle X} is a set and M {\displaystyle M} is a complete metric space, then the set B ( X , M ) {\displaystyle B(X,M)} of all bounded functions f from X to M {\displaystyle M} is a complete metric space. 
Here we define the distance in B ( X , M ) {\displaystyle B(X,M)} in terms of the distance in M {\displaystyle M} with the supremum norm d ( f , g ) ≡ sup { d [ f ( x ) , g ( x ) ] : x ∈ X } {\displaystyle d(f,g)\equiv \sup\{d[f(x),g(x)]:x\in X\}} If X {\displaystyle X} is a topological space and M {\displaystyle M} is a complete metric space, then the set C b ( X , M ) {\displaystyle C_{b}(X,M)} consisting of all continuous bounded functions f : X → M {\displaystyle f:X\to M} is a closed subspace of B ( X , M ) {\displaystyle B(X,M)} and hence also complete. The Baire category theorem says that every complete metric space is a Baire space. That is, the union of countably many nowhere dense subsets of the space has empty interior. The Banach fixed-point theorem states that a contraction mapping on a complete metric space admits a fixed point. The fixed-point theorem is often used to prove the inverse function theorem on complete metric spaces such as Banach spaces. == Completion == For any metric space M, it is possible to construct a complete metric space M′ (which is also denoted as M ¯ {\displaystyle {\overline {M}}} ), which contains M as a dense subspace. It has the following universal property: if N is any complete metric space and f is any uniformly continuous function from M to N, then there exists a unique uniformly continuous function f′ from M′ to N that extends f. The space M' is determined up to isometry by this property (among all complete metric spaces isometrically containing M), and is called the completion of M. The completion of M can be constructed as a set of equivalence classes of Cauchy sequences in M. 
For any two Cauchy sequences x ∙ = ( x n ) {\displaystyle x_{\bullet }=\left(x_{n}\right)} and y ∙ = ( y n ) {\displaystyle y_{\bullet }=\left(y_{n}\right)} in M, we may define their distance as d ( x ∙ , y ∙ ) = lim n d ( x n , y n ) {\displaystyle d\left(x_{\bullet },y_{\bullet }\right)=\lim _{n}d\left(x_{n},y_{n}\right)} (This limit exists because the real numbers are complete.) This is only a pseudometric, not yet a metric, since two different Cauchy sequences may have the distance 0. But "having distance 0" is an equivalence relation on the set of all Cauchy sequences, and the set of equivalence classes is a metric space, the completion of M. The original space is embedded in this space via the identification of an element x of M with the equivalence class of sequences in M converging to x (i.e., the equivalence class containing the sequence with constant value x). This defines an isometry onto a dense subspace, as required. Notice, however, that this construction makes explicit use of the completeness of the real numbers, so completion of the rational numbers needs a slightly different treatment. Cantor's construction of the real numbers is similar to the above construction; the real numbers are the completion of the rational numbers using the ordinary absolute value to measure distances. The additional subtlety to contend with is that it is not logically permissible to use the completeness of the real numbers in their own construction. Nevertheless, equivalence classes of Cauchy sequences are defined as above, and the set of equivalence classes is easily shown to be a field that has the rational numbers as a subfield. This field is complete, admits a natural total ordering, and is the unique totally ordered complete field (up to isomorphism). It is defined as the field of real numbers (see also Construction of the real numbers for more details).
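The pseudometric can be observed converging: the Newton iterates for √2 and the continued-fraction convergents of [1; 2, 2, 2, …] are two quite different Cauchy sequences of rationals with lim d(x_n, y_n) = 0, so they define the same equivalence class, i.e. the same point of the completion. A sketch in exact rational arithmetic (the generator names newton and convergents are ours):

```python
from fractions import Fraction
from itertools import islice

def newton():
    # Cauchy sequence #1: Newton's iteration for sqrt(2).
    x = Fraction(1)
    while True:
        x = x / 2 + 1 / x
        yield x

def convergents():
    # Cauchy sequence #2: continued-fraction convergents 1, 3/2, 7/5, 17/12, ...
    p_prev, q_prev, p, q = 1, 1, 3, 2
    while True:
        yield Fraction(p_prev, q_prev)
        p_prev, p = p, 2 * p + p_prev
        q_prev, q = q, 2 * q + q_prev

xs = list(islice(newton(), 8))
ys = list(islice(convergents(), 8))
diffs = [abs(a - b) for a, b in zip(xs, ys)]
print(float(diffs[0]), float(diffs[-1]))  # 0.5 -> ~2e-06: the distance tends to 0
```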
One way to visualize this identification with the real numbers as usually viewed is that the equivalence class consisting of those Cauchy sequences of rational numbers that "ought" to have a given real limit is identified with that real number. The truncations of the decimal expansion give just one choice of Cauchy sequence in the relevant equivalence class. For a prime p , {\displaystyle p,} the p-adic numbers arise by completing the rational numbers with respect to a different metric. If the earlier completion procedure is applied to a normed vector space, the result is a Banach space containing the original space as a dense subspace, and if it is applied to an inner product space, the result is a Hilbert space containing the original space as a dense subspace. == Topologically complete spaces == Completeness is a property of the metric and not of the topology, meaning that a complete metric space can be homeomorphic to a non-complete one. An example is given by the real numbers, which are complete but homeomorphic to the open interval (0,1), which is not complete. In topology one considers completely metrizable spaces, spaces for which there exists at least one complete metric inducing the given topology. Completely metrizable spaces can be characterized as those spaces that can be written as an intersection of countably many open subsets of some complete metric space. Since the conclusion of the Baire category theorem is purely topological, it applies to these spaces as well. Completely metrizable spaces are often called topologically complete. However, the latter term is somewhat arbitrary since metric is not the most general structure on a topological space for which one can talk about completeness (see the section Alternatives and generalizations). Indeed, some authors use the term topologically complete for a wider class of topological spaces, the completely uniformizable spaces. 
A topological space homeomorphic to a separable complete metric space is called a Polish space. == Alternatives and generalizations == Since Cauchy sequences can also be defined in general topological groups, an alternative to relying on a metric structure for defining completeness and constructing the completion of a space is to use a group structure. This is most often seen in the context of topological vector spaces, but requires only the existence of a continuous "subtraction" operation. In this setting, the distance between two points x {\displaystyle x} and y {\displaystyle y} is gauged not by a real number ε {\displaystyle \varepsilon } via the metric d {\displaystyle d} in the comparison d ( x , y ) < ε , {\displaystyle d(x,y)<\varepsilon ,} but by an open neighbourhood N {\displaystyle N} of 0 {\displaystyle 0} via subtraction in the comparison x − y ∈ N . {\displaystyle x-y\in N.} A common generalisation of these definitions can be found in the context of a uniform space, where an entourage is a set of all pairs of points that are at no more than a particular "distance" from each other. It is also possible to replace Cauchy sequences in the definition of completeness by Cauchy nets or Cauchy filters. If every Cauchy net (or equivalently every Cauchy filter) has a limit in X , {\displaystyle X,} then X {\displaystyle X} is called complete. One can furthermore construct a completion for an arbitrary uniform space similar to the completion of metric spaces. The most general situation in which Cauchy nets apply is Cauchy spaces; these too have a notion of completeness and completion just like uniform spaces. == See also == Cauchy space – Concept in general topology and analysis Completion (algebra) – In algebra, completion w.r.t. 
powers of an ideal Complete uniform space – Topological space with a notion of uniform properties Complete topological vector space – A TVS where points that get progressively closer to each other will always converge to a point Ekeland's variational principle – Theorem asserting that nearly optimal solutions exist for some optimization problems Knaster–Tarski theorem – Theorem in order and lattice theory == Notes == == References == Kelley, John L. (1975). General Topology. Springer. ISBN 0-387-90125-6. Kreyszig, Erwin (1978). Introductory Functional Analysis with Applications. New York: Wiley. ISBN 0-471-03729-X. Lang, Serge. Real and Functional Analysis. ISBN 0-387-94001-4. Meise, Reinhold; Vogt, Dietmar (1997). Introduction to Functional Analysis. Ramanujan, M.S. (trans.). Oxford: Clarendon Press; New York: Oxford University Press. ISBN 0-19-851485-9.
Wikipedia/Complete_(topology)
In mathematics, p-adic Teichmüller theory describes the "uniformization" of p-adic curves and their moduli, generalizing the usual Teichmüller theory that describes the uniformization of Riemann surfaces and their moduli. It was introduced and developed by Shinichi Mochizuki (1996, 1999). The first problem is to reformulate the Fuchsian uniformization of a complex Riemann surface (an isomorphism from the upper half plane to a universal covering space of the surface) in a way that makes sense for p-adic curves. The existence of a Fuchsian uniformization is equivalent to the existence of a canonical indigenous bundle over the Riemann surface: the unique indigenous bundle that is invariant under complex conjugation and whose monodromy representation is quasi-Fuchsian. For p-adic curves, the analogue of complex conjugation is the Frobenius endomorphism, and the analogue of the quasi-Fuchsian condition is an integrality condition on the indigenous line bundle. So in p-adic Teichmüller theory, the p-adic analogue of the Fuchsian uniformization of Teichmüller theory is the study of integral Frobenius-invariant indigenous bundles. == See also == Inter-universal Teichmüller theory Anabelian geometry Nilcurve == References == Mochizuki, Shinichi (1996), "A theory of ordinary p-adic curves", Kyoto University. Research Institute for Mathematical Sciences. Publications, 32 (6): 957–1152, doi:10.2977/prims/1195145686, hdl:2433/59800, ISSN 0034-5318, MR 1437328 Mochizuki, Shinichi (1999), Foundations of p-adic Teichmüller theory, AMS/IP Studies in Advanced Mathematics, vol. 11, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-1190-0, MR 1700772 Mochizuki, Shinichi (2002), Berthelot, Pierre; Fontaine, Jean-Marc; Illusie, Luc; Kato, Kazuya; Rapoport, Michael (eds.), "Cohomologies p-adiques et applications arithmétiques, I.", Astérisque (278): 1–49, ISSN 0303-1179, MR 1922823
Wikipedia/P-adic_Teichmüller_theory
In telecommunications and computing, bit rate (bitrate or as a variable R) is the number of bits that are conveyed or processed per unit of time. The bit rate is expressed in the unit bit per second (symbol: bit/s), often in conjunction with an SI prefix such as kilo (1 kbit/s = 1,000 bit/s), mega (1 Mbit/s = 1,000 kbit/s), giga (1 Gbit/s = 1,000 Mbit/s) or tera (1 Tbit/s = 1,000 Gbit/s). The non-standard abbreviation bps is often used to replace the standard symbol bit/s, so that, for example, 1 Mbps is used to mean one million bits per second. In most computing and digital communication environments, one byte per second (symbol: B/s) corresponds to 8 bit/s (1 byte = 8 bits). However, if stop bits, start bits, and parity bits need to be factored in, a higher number of bits per second will be required to achieve a throughput of the same number of bytes. == Prefixes == When quantifying large or small bit rates, SI prefixes (also known as metric prefixes or decimal prefixes) are used, thus: Binary prefixes are sometimes used for bit rates. The International Standard (IEC 80000-13) specifies different symbols for binary and decimal (SI) prefixes (e.g., 1 KiB/s = 1024 B/s = 8192 bit/s, and 1 MiB/s = 1024 KiB/s). == In data communications == === Gross bit rate === In digital communication systems, the physical layer gross bitrate, raw bitrate, data signaling rate, gross data transfer rate or uncoded transmission rate (sometimes written as a variable Rb or fb) is the total number of physically transferred bits per second over a communication link, including useful data as well as protocol overhead. In the case of serial communications, the gross bit rate is related to the bit transmission time T b {\displaystyle T_{\text{b}}} as: R b = 1 T b , {\displaystyle R_{\text{b}}={1 \over T_{\text{b}}},} The gross bit rate is related to the symbol rate or modulation rate, which is expressed in bauds or symbols per second.
However, the gross bit rate and the baud value are equal only when there are only two levels per symbol, representing 0 and 1, meaning that each symbol of a data transmission system carries exactly one bit of data; for example, this is not the case for modern modulation systems used in modems and LAN equipment. For most line codes and modulation methods: symbol rate ≤ gross bit rate {\displaystyle {\text{symbol rate}}\leq {\text{gross bit rate}}} More specifically, a line code (or baseband transmission scheme) representing the data using pulse-amplitude modulation with 2 N {\displaystyle 2^{N}} different voltage levels, can transfer N {\displaystyle N} bits per pulse. A digital modulation method (or passband transmission scheme) using 2 N {\displaystyle 2^{N}} different symbols, for example 2 N {\displaystyle 2^{N}} amplitudes, phases or frequencies, can transfer N {\displaystyle N} bits per symbol. This results in: gross bit rate = symbol rate × N {\displaystyle {\text{gross bit rate}}={\text{symbol rate}}\times N} An exception from the above is some self-synchronizing line codes, for example Manchester coding and return-to-zero (RTZ) coding, where each bit is represented by two pulses (signal states), resulting in: gross bit rate = symbol rate/2 {\displaystyle {\text{gross bit rate = symbol rate/2}}} A theoretical upper bound for the symbol rate in baud, symbols/s or pulses/s for a certain spectral bandwidth in hertz is given by the Nyquist law: symbol rate ≤ Nyquist rate = 2 × bandwidth {\displaystyle {\text{symbol rate}}\leq {\text{Nyquist rate}}=2\times {\text{bandwidth}}} In practice this upper bound can only be approached for line coding schemes and for so-called vestigial sideband digital modulation. 
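The relations above can be checked numerically. A minimal sketch with hypothetical function names and an illustrative 16-QAM example (the example figures are not drawn from the text):

```python
import math

def gross_bit_rate(symbol_rate, levels):
    """Gross bit rate of a line code or modulation using `levels` = 2**N symbols."""
    n = math.log2(levels)          # bits carried per symbol
    return symbol_rate * n

def nyquist_symbol_rate(bandwidth_hz):
    """Theoretical upper bound on the symbol rate for a given spectral bandwidth."""
    return 2 * bandwidth_hz

# Illustrative 16-QAM link: 2,400 baud x 4 bits/symbol = 9,600 bit/s,
# within the Nyquist bound of 6,000 baud for a 3 kHz channel.
rate = gross_bit_rate(2400, 16)
limit = nyquist_symbol_rate(3000)
```

With only two levels per symbol (`levels=2`), `gross_bit_rate` reduces to the symbol rate, matching the statement that bit rate and baud value coincide only for binary signalling.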
Most other digital carrier-modulated schemes, for example ASK, PSK, QAM and OFDM, can be characterized as double sideband modulation, resulting in the following relation: symbol rate ≤ bandwidth {\displaystyle {\text{symbol rate}}\leq {\text{bandwidth}}} In the case of parallel communication, the gross bit rate is given by ∑ i = 1 n log 2 ⁡ M i T i {\displaystyle \sum _{i=1}^{n}{\frac {\log _{2}{M_{i}}}{T_{i}}}} where n is the number of parallel channels, Mi is the number of symbols or levels of the modulation in the ith channel, and Ti is the symbol duration time, expressed in seconds, for the ith channel. === Information rate === The physical layer net bitrate, information rate, useful bit rate, payload rate, net data transfer rate, coded transmission rate, effective data rate or wire speed (informal language) of a digital communication channel is the capacity excluding the physical layer protocol overhead, for example time division multiplex (TDM) framing bits, redundant forward error correction (FEC) codes, equalizer training symbols and other channel coding. Error-correcting codes are common especially in wireless communication systems, broadband modem standards and modern copper-based high-speed LANs. The physical layer net bitrate is the data rate measured at a reference point in the interface between the data link layer and physical layer, and may consequently include data link and higher layer overhead. In modems and wireless systems, link adaptation (automatic adaptation of the data rate and the modulation and/or error coding scheme to the signal quality) is often applied. In that context, the term peak bitrate denotes the net bitrate of the fastest and least robust transmission mode, used for example when the distance is very short between sender and receiver. Some operating systems and network equipment may detect the "connection speed" (informal language) of a network access technology or communication device, implying the current net bit rate.
The term line rate in some textbooks is defined as gross bit rate, in others as net bit rate. The relationship between the gross bit rate and net bit rate is affected by the FEC code rate according to the following. net bit rate ≤ gross bit rate × code rate The connection speed of a technology that involves forward error correction typically refers to the physical layer net bit rate in accordance with the above definition. For example, the net bitrate (and thus the "connection speed") of an IEEE 802.11a wireless network is the net bit rate of between 6 and 54 Mbit/s, while the gross bit rate is between 12 and 72 Mbit/s inclusive of error-correcting codes. The net bit rate of ISDN2 Basic Rate Interface (2 B-channels + 1 D-channel) of 64+64+16 = 144 kbit/s also refers to the payload data rates, while the D channel signalling rate is 16 kbit/s. The net bit rate of the Ethernet 100BASE-TX physical layer standard is 100 Mbit/s, while the gross bitrate is 125 Mbit/s, due to the 4B5B (four bit over five bit) encoding. In this case, the gross bit rate is equal to the symbol rate or pulse rate of 125 megabaud, due to the NRZI line code. In communications technologies without forward error correction and other physical layer protocol overhead, there is no distinction between gross bit rate and physical layer net bit rate. For example, the net as well as gross bit rate of Ethernet 10BASE-T is 10 Mbit/s. Due to the Manchester line code, each bit is represented by two pulses, resulting in a pulse rate of 20 megabaud. The "connection speed" of a V.92 voiceband modem typically refers to the gross bit rate, since there is no additional error-correction code. It can be up to 56,000 bit/s downstream and 48,000 bit/s upstream. A lower bit rate may be chosen during the connection establishment phase due to adaptive modulation – slower but more robust modulation schemes are chosen in case of poor signal-to-noise ratio. 
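The figures above are consistent with the relation net bit rate ≤ gross bit rate × code rate. A small sketch (hypothetical helper name, exact arithmetic via fractions) checking the 100BASE-TX and 802.11a cases:

```python
from fractions import Fraction

def net_bit_rate(gross, code_rate):
    """Physical-layer net bit rate given a line-code or FEC code rate."""
    return gross * code_rate

# 100BASE-TX: 125 Mbit/s gross with 4B5B coding (4 data bits per 5 line bits).
fast_ethernet = net_bit_rate(125_000_000, Fraction(4, 5))   # 100 Mbit/s
# 802.11a top mode: 72 Mbit/s gross with rate-3/4 error-correction coding.
wifi_a = net_bit_rate(72_000_000, Fraction(3, 4))           # 54 Mbit/s
```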
Due to data compression, the actual data transmission rate or throughput (see below) may be higher. The channel capacity, also known as the Shannon capacity, is a theoretical upper bound for the maximum net bitrate, exclusive of forward error correction coding, that is possible without bit errors for a certain physical analog node-to-node communication link. net bit rate ≤ channel capacity The channel capacity is proportional to the analog bandwidth in hertz. This proportionality is called Hartley's law. Consequently, the net bit rate is sometimes called digital bandwidth capacity in bit/s. === Network throughput === The term throughput, essentially the same thing as digital bandwidth consumption, denotes the achieved average useful bit rate in a computer network over a logical or physical communication link or through a network node, typically measured at a reference point above the data link layer. This implies that the throughput often excludes data link layer protocol overhead. The throughput is affected by the traffic load from the data source in question, as well as from other sources sharing the same network resources. See also measuring network throughput. === Goodput (data transfer rate) === Goodput or data transfer rate refers to the achieved average net bit rate that is delivered to the application layer, exclusive of all protocol overhead, data packets retransmissions, etc. For example, in the case of file transfer, the goodput corresponds to the achieved file transfer rate. The file transfer rate in bit/s can be calculated as the file size (in bytes) divided by the file transfer time (in seconds) and multiplied by eight. As an example, the goodput or data transfer rate of a V.92 voiceband modem is affected by the modem physical layer and data link layer protocols. It is sometimes higher than the physical layer data rate due to V.44 data compression, and sometimes lower due to bit-errors and automatic repeat request retransmissions. 
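The file transfer rate formula above (size in bytes divided by transfer time, multiplied by eight) can be sketched as follows; the function name and example figures are illustrative:

```python
def file_transfer_rate(file_size_bytes, transfer_seconds):
    """Goodput of a file transfer in bit/s: bytes / seconds, times eight."""
    return file_size_bytes * 8 / transfer_seconds

# Illustrative figures: a 3,000,000-byte file transferred in 120 seconds.
rate = file_transfer_rate(3_000_000, 120)   # 200,000 bit/s = 200 kbit/s
```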
If no data compression is provided by the network equipment or protocols, we have the following relation: goodput ≤ throughput ≤ maximum throughput ≤ net bit rate for a certain communication path. === Progress trends === These are examples of physical layer net bit rates in proposed communication standard interfaces and devices: == Multimedia == In digital multimedia, bit rate represents the amount of information, or detail, that is stored per unit of time of a recording. The bitrate depends on several factors: The original material may be sampled at different frequencies. The samples may use different numbers of bits. The data may be encoded by different schemes. The information may be digitally compressed by different algorithms or to different degrees. Generally, choices are made about the above factors in order to achieve the desired trade-off between minimizing the bitrate and maximizing the quality of the material when it is played. If lossy data compression is used on audio or visual data, differences from the original signal will be introduced; if the compression is substantial, or lossy data is decompressed and recompressed, this may become noticeable in the form of compression artifacts. Whether these affect the perceived quality, and if so how much, depends on the compression scheme, encoder power, the characteristics of the input data, the listener's perceptions, the listener's familiarity with artifacts, and the listening or viewing environment. The encoding bit rate of a multimedia file is its size in bytes divided by the playback time of the recording (in seconds), multiplied by eight. For real-time streaming multimedia, the encoding bit rate is the goodput that is required to avoid playback interruption. The term average bitrate is used in case of variable bitrate multimedia source coding schemes. In this context, the peak bit rate is the maximum number of bits required for any short-term block of compressed data. 
A theoretical lower bound for the encoding bit rate for lossless data compression is the source information rate, also known as the entropy rate. The bitrates in this section are approximately the minimum that the average listener in a typical listening or viewing environment, when using the best available compression, would perceive as not significantly worse than the reference standard. === Audio === ==== CD-DA ==== Compact Disc Digital Audio (CD-DA) uses 44,100 samples per second, each with a bit depth of 16, a format sometimes abbreviated as "16bit / 44.1kHz". CD-DA is also stereo, using a left and right channel, so the amount of audio data per second is double that of mono, where only a single channel is used. The bit rate of PCM audio data can be calculated with the following formula: bit rate = sample rate × bit depth × channels {\displaystyle {\text{bit rate}}={\text{sample rate}}\times {\text{bit depth}}\times {\text{channels}}} For example, the bit rate of a CD-DA recording (44.1 kHz sampling rate, 16 bits per sample and two channels) can be calculated as follows: 44,100 × 16 × 2 = 1,411,200 bit/s = 1,411.2 kbit/s {\displaystyle 44,100\times 16\times 2=1,411,200\ {\text{bit/s}}=1,411.2\ {\text{kbit/s}}} The cumulative size of a length of PCM audio data (excluding a file header or other metadata) can be calculated using the following formula: size in bits = sample rate × bit depth × channels × time .
{\displaystyle {\text{size in bits}}={\text{sample rate}}\times {\text{bit depth}}\times {\text{channels}}\times {\text{time}}.} The cumulative size in bytes can be found by dividing the file size in bits by the number of bits in a byte, which is eight: size in bytes = size in bits 8 {\displaystyle {\text{size in bytes}}={\frac {\text{size in bits}}{8}}} Therefore, 80 minutes (4,800 seconds) of CD-DA data requires 846,720,000 bytes of storage: 44 , 100 × 16 × 2 × 4 , 800 8 = 846 , 720 , 000 bytes ≈ 847 MB ≈ 807.5 MiB {\displaystyle {\frac {44,100\times 16\times 2\times 4,800}{8}}=846,720,000\ {\text{bytes}}\approx 847\ {\text{MB}}\approx 807.5\ {\text{MiB}}} where MiB is mebibytes with binary prefix Mi, meaning 220 = 1,048,576. ==== MP3 ==== The MP3 audio format provides lossy data compression. Audio quality improves with increasing bitrate: 32 kbit/s – generally acceptable only for speech 96 kbit/s – generally used for speech or low-quality streaming 128 or 160 kbit/s – mid-range bitrate quality 192 kbit/s – medium quality bitrate 256 kbit/s – a commonly used high-quality bitrate 320 kbit/s – highest level supported by the MP3 standard ==== Other audio ==== 700 bit/s – lowest bitrate open-source speech codec Codec2, but Codec2 sounds much better at 1.2 kbit/s 800 bit/s – minimum necessary for recognizable speech, using the special-purpose FS-1015 speech codecs 2.15 kbit/s – minimum bitrate available through the open-source Speex codec 6 kbit/s – minimum bitrate available through the open-source Opus codec 8 kbit/s – telephone quality using speech codecs 32–500 kbit/s – lossy audio as used in Ogg Vorbis 256 kbit/s – Digital Audio Broadcasting (DAB) MP2 bit rate required to achieve a high quality signal 292 kbit/s – Sony Adaptive Transform Acoustic Coding (ATRAC) for use on the MiniDisc Format 400 kbit/s–1,411 kbit/s – lossless audio as used in formats such as Free Lossless Audio Codec, WavPack, or Monkey's Audio to compress CD audio 1,411.2 kbit/s – Linear PCM 
sound format of CD-DA 5,644.8 kbit/s – DSD, which is a trademarked implementation of PDM sound format used on Super Audio CD. 6.144 Mbit/s – E-AC-3 (Dolby Digital Plus), an enhanced coding system based on the AC-3 codec 9.6 Mbit/s – DVD-Audio, a digital format for delivering high-fidelity audio content on a DVD. DVD-Audio is not intended to be a video delivery format and is not the same as video DVDs containing concert films or music videos. These discs cannot be played on a standard DVD player without the DVD-Audio logo. 18 Mbit/s – advanced lossless audio codec based on Meridian Lossless Packing (MLP) === Video === 16 kbit/s – videophone quality (minimum necessary for a consumer-acceptable "talking head" picture using various video compression schemes) 128–384 kbit/s – business-oriented videoconferencing quality using video compression 400 kbit/s YouTube 240p videos (using H.264) 750 kbit/s YouTube 360p videos (using H.264) 1 Mbit/s YouTube 480p videos (using H.264) 1.15 Mbit/s max – VCD quality (using MPEG1 compression) 2.5 Mbit/s YouTube 720p videos (using H.264) 3.5 Mbit/s typ – Standard-definition television quality (with bit-rate reduction from MPEG-2 compression) 3.8 Mbit/s YouTube 720p60 (60 FPS) videos (using H.264) 4.5 Mbit/s YouTube 1080p videos (using H.264) 6.8 Mbit/s YouTube 1080p60 (60 FPS) videos (using H.264) 9.8 Mbit/s max – DVD (using MPEG2 compression) 8 to 15 Mbit/s typ – HDTV quality (with bit-rate reduction from MPEG-4 AVC compression) 19 Mbit/s approximate – HDV 720p (using MPEG2 compression) 24 Mbit/s max – AVCHD (using MPEG4 AVC compression) 25 Mbit/s approximate – HDV 1080i (using MPEG2 compression) 29.4 Mbit/s max – HD DVD 40 Mbit/s max – 1080p Blu-ray Disc (using MPEG2, MPEG4 AVC or VC-1 compression) 250 Mbit/s max – DCP (using JPEG 2000 compression) 1.4 Gbit/s – 10-bit 4:4:4 uncompressed 1080p at 24 FPS === Notes === For technical reasons (hardware/software protocols, overheads, encoding schemes, etc.)
the actual bit rates used by some of the devices compared above may be significantly higher than listed. For example, telephone circuits using μ-law or A-law companding (pulse-code modulation) yield 64 kbit/s. == See also == == References == == External links == Live Video Streaming Bitrate Calculator Calculate bitrate for video and live streams DVD-HQ bit rate calculator Calculate bit rate for various types of digital video media. Maximum PC - Do Higher MP3 Bit Rates Pay Off? Valid8 Data Rate Calculator
Wikipedia/Bitrate
Throughput of a network can be measured using various tools available on different platforms. This page explains the theory behind what these tools set out to measure and the issues regarding these measurements. == Reasons for measuring throughput in networks == People are often concerned about measuring the maximum data throughput in bits per second of a communications link or network access. A typical method of performing a measurement is to transfer a 'large' file from one system to another system and measure the time required to complete the transfer or copy of the file. The throughput is then calculated by dividing the file size by the time to get the throughput in megabits, kilobits, or bits per second. Unfortunately, such an exercise will often yield the goodput, which is less than the maximum theoretical data throughput, leading people to believe that their communications link is not operating correctly. In fact, many overheads beyond the raw transmission rate limit throughput, including latency, the TCP receive window size and system limitations, so the calculated goodput does not reflect the maximum achievable throughput. == Theory: Short Summary == The maximum TCP throughput can be bounded as follows: Throughput ≤ RWIN/RTT {\displaystyle \mathrm {Throughput} \leq {\frac {\mathrm {RWIN} }{\mathrm {RTT} }}\,\!} where RWIN is the TCP receive window and RTT is the round-trip time for the path. The maximum TCP window size in the absence of the TCP window scale option is 65,535 bytes. Example: max throughput = 65,535 bytes / 0.220 s = 297,886.36 B/s × 8 = 2.383 Mbit/s. Over a single TCP connection between those endpoints, the tested throughput will be restricted to 2.383 Mbit/s even if the contracted bandwidth is greater. == Bandwidth test software == Bandwidth test software is used to determine the maximum bandwidth of a network or internet connection.
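The worked example for the window-size bound (Throughput ≤ RWIN / RTT) can be reproduced in a few lines; the function name is a hypothetical sketch:

```python
def max_tcp_throughput(rwin_bytes, rtt_seconds):
    """Upper bound on single-connection TCP throughput, in bit/s."""
    return rwin_bytes * 8 / rtt_seconds

# 65,535-byte window (no window scaling), 220 ms round-trip time:
bound = max_tcp_throughput(65_535, 0.220)   # ~2.38 Mbit/s
```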
It is typically undertaken by attempting to download or upload the maximum amount of data in a certain period of time, or a certain amount of data in the minimum amount of time. For this reason, bandwidth tests can delay internet transmissions through the internet connection as they are undertaken, and can cause inflated data charges. == Nomenclature == The throughput of communications links is measured in bits per second (bit/s), kilobits per second (kbit/s), megabits per second (Mbit/s) and gigabits per second (Gbit/s). In this application, kilo, mega and giga are the standard S.I. prefixes indicating multiplication by 1,000 (kilo), 1,000,000 (mega), and 1,000,000,000 (giga). File sizes are typically measured in bytes — kilobytes, megabytes, and gigabytes being usual, where a byte is eight bits. In modern textbooks one kilobyte is defined as 1,000 bytes, one megabyte as 1,000,000 bytes, etc., in accordance with the 1998 International Electrotechnical Commission (IEC) standard. However, the convention adopted by Windows systems is to define 1 kilobyte as 1,024 (or 2^10) bytes, which is equal to 1 kibibyte. Similarly, a file size of "1 megabyte" is 1,024 × 1,024 bytes (equal to 1 mebibyte), and "1 gigabyte" is 1,024 × 1,024 × 1,024 bytes (1 gibibyte). === Confusing and inconsistent use of suffixes === It is usual for people to abbreviate commonly used expressions. For file sizes, it is usual for someone to say that they have a '64 k' file (meaning 64 kilobytes), or a '100 meg' file (meaning 100 megabytes). When talking about circuit bit rates, people will interchangeably use the terms throughput, bandwidth and speed, and refer to a circuit as being a '64 k' circuit, or a '2 meg' circuit — meaning 64 kbit/s or 2 Mbit/s (see also the List of connection bandwidths). However, a '64 k' circuit will not transmit a '64 k' file in one second. This may not be obvious to those unfamiliar with telecommunications and computing, so misunderstandings sometimes arise.
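The '64 k' ambiguity can be quantified with a short sketch (hypothetical function name), mixing the file-size convention (1 kB = 1,024 bytes) with the circuit convention (1 kbit/s = 1,000 bit/s):

```python
def transfer_seconds(file_kilobytes, circuit_kbits):
    """Minimum time to push a file of file_kilobytes (1 kB = 1,024 bytes,
    the file-size convention) through a circuit of circuit_kbits
    (1 kbit/s = 1,000 bit/s, the telecom convention)."""
    bits_to_send = file_kilobytes * 1024 * 8
    return bits_to_send / (circuit_kbits * 1000)

t = transfer_seconds(64, 64)   # 8.192 s, not 1 s
```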
In actuality, a 64 kilobyte file is 64 × 1,024 × 8 bits in size and the 64 k circuit will transmit bits at a rate of 64 × 1,000 bit/s, so the amount of time taken to transmit a 64 kilobyte file over the 64 k circuit will be at least (64 × 1,024 × 8)/(64 × 1,000) seconds, which works out to be 8.192 seconds. == Compression == Some equipment can improve matters by compressing the data as it is sent. This is a feature of most analog modems and of several popular operating systems. If the 64 k file can be shrunk by compression, the time taken to transmit can be reduced. This can be done invisibly to the user, so a highly compressible file may be transmitted considerably faster than expected. As this 'invisible' compression cannot easily be disabled, it therefore follows that when measuring throughput by using files and timing the time to transmit, one should use files that cannot be compressed. Typically, this is done using a file of random data, which becomes harder to compress the closer to truly random it is. Assuming your data cannot be compressed, the 8.192 seconds to transmit a 64 kilobyte file over a 64 kilobit/s communications link is a theoretical minimum time which will not be achieved in practice. This is due to the effect of overheads which are used to format the data in an agreed manner so that both ends of a connection have a consistent view of the data. There are at least two issues that aren't immediately obvious for transmitting compressed files: The throughput of the network itself isn't improved by compression. From the end-to-end (server to client) perspective compression does improve throughput. That's because information content for the same amount of transmission is increased through compression of files. Compressing files at the server and client takes more processor resources at both the ends. The server has to use its processor to compress the files, if they aren't already done. The client has to decompress the files upon receipt. 
This can be considered an expense (for the server and client) for the benefit of increased end-to-end throughput (although the throughput hasn't changed for the network itself). == Overheads and data formats == A common communications link used by many people is the asynchronous start-stop, or just "asynchronous", serial link. If you have an external modem attached to your home or office computer, the chances are that the connection is over an asynchronous serial connection. Its advantage is that it is simple — it can be implemented using only three wires: Send, Receive and Signal Ground (or Signal Common). In an RS-232 interface, an idle connection has a continuous negative voltage applied. A 'zero' bit is represented as a positive voltage difference with respect to the Signal Ground and a 'one' bit is a negative voltage with respect to signal ground, thus indistinguishable from the idle state. This means you need to know when a 'one' bit starts to distinguish it from idle. This is done by agreeing in advance how fast data will be transmitted over a link, then using a start bit to signal the start of a byte — this start bit will be a 'zero' bit. Stop bits are 'one' bits, i.e. negative voltage. Actually, more things will have been agreed in advance — the speed of bit transmission, the number of bits per character, the parity and the number of stop bits (signifying the end of a character). So a designation of 9600-8-E-2 would be 9,600 bits per second, with eight bits per character, even parity and two stop bits. A common set-up of an asynchronous serial connection would be 9600-8-N-1 (9,600 bit/s, 8 bits per character, no parity and 1 stop bit) - a total of 10 bits transmitted to send one 8 bit character (one start bit, the 8 bits making up the byte transmitted and one stop bit).
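The per-character framing cost can be sketched as follows (hypothetical function name); it counts the start bit plus whatever parity and stop bits the link is configured with:

```python
def async_byte_rate(bit_rate, data_bits=8, parity_bits=0, stop_bits=1):
    """Characters (bytes) per second on an asynchronous serial link,
    counting one start bit plus the framing bits around each character."""
    bits_per_char = 1 + data_bits + parity_bits + stop_bits
    return bit_rate / bits_per_char

n81 = async_byte_rate(9600)                              # 8-N-1: 960 byte/s
e82 = async_byte_rate(9600, parity_bits=1, stop_bits=2)  # 8-E-2: 800 byte/s
fast = async_byte_rate(230_400)                          # 23,040 byte/s
```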
This is an overhead of 20%, so a 9,600 bit/s asynchronous serial link will not transmit data at 9600/8 bytes per second (1200 byte/s) but actually, in this case 9600/10 bytes per second (960 byte/s), which is considerably slower than expected. It can get worse. If parity is specified and we use 2 stop bits, the overhead for carrying one 8 bit character is 4 bits (one start bit, one parity bit and two stop bits) - or 50%! In this case a 9600 bit/s connection will carry 9600/12 byte/s (800 byte/s). Asynchronous serial interfaces commonly support bit transmission speeds of up to 230.4 kbit/s. If it is set up to have no parity and one stop bit, this means the byte transmission rate is 23.04 kbyte/s. The advantage of the asynchronous serial connection is its simplicity. One disadvantage is its low efficiency in carrying data. This can be overcome by using a synchronous interface. In this type of interface, a clock signal is added on a separate wire, and the bits are transmitted in synchrony with the clock — the interface no longer has to look for the start and stop bits of each individual character — however, it is necessary to have a mechanism to ensure the sending and receiving clocks are kept in synchrony, so data is divided up into frames of multiple characters separated by known delimiters. There are three common coding schemes for framed communications: HDLC, PPP, and Ethernet. === HDLC === When using HDLC, rather than each byte having a start, optional parity, and one or two stop bits, the bytes are gathered together into a frame. The start and end of the frame are signalled by the 'flag', and error detection is carried out by the frame check sequence. If the frame has a maximum sized address of 32 bits, a maximum sized control part of 16 bits and a maximum sized frame check sequence of 16 bits, the overhead per frame could be as high as 64 bits. If each frame carried but a single byte, the data throughput efficiency would be extremely low.
However, the bytes are normally gathered together, so that even with a maximal overhead of 64 bits, frames carrying more than 24 bytes are more efficient than asynchronous serial connections. As frames can vary in size because they can have different numbers of bytes being carried as data, the overhead of an HDLC connection is not fixed. === PPP === The "point-to-point protocol" (PPP) is defined by the Internet Request For Comment documents RFC 1570, RFC 1661 and RFC 1662. With respect to the framing of packets, PPP is quite similar to HDLC, but supports both bit-oriented as well as byte-oriented ("octet-stuffed") methods of delimiting frames while maintaining data transparency. === Ethernet === Ethernet is a "local area network" (LAN) technology, which is also framed. The way the frame is electrically defined on a connection between two systems differs from the typical wide-area networking technologies that use HDLC or PPP, but these details are not important for throughput calculations. Ethernet is a shared medium, so that it is not guaranteed that only the two systems that are transferring a file between themselves will have exclusive access to the connection. If several systems are attempting to communicate simultaneously, the throughput between any pair can be substantially lower than the nominal bandwidth available. === Other low-level protocols === Dedicated point-to-point links are not the only option for many connections between systems. Frame Relay, ATM, and MPLS based services can also be used. When calculating or estimating data throughputs, the details of the frame/cell/packet format and the technology's detailed implementation need to be understood. ==== Frame Relay ==== Frame Relay uses a modified HDLC format to define the frame format that carries data. ==== ATM ==== Asynchronous Transfer Mode (ATM) uses a radically different method of carrying data.
Rather than using variable length frames or packets, data is carried in fixed size cells. Each cell is 53 bytes long, with the first 5 bytes defined as the header, and the following 48 bytes as payload. Data networking commonly requires packets of data that are larger than 48 bytes, so there is a defined adaptation process that specifies how larger packets of data should be divided up in a standard manner to be carried by the smaller cells. This process varies according to the data carried, so in ATM nomenclature, there are different ATM Adaptation Layers. The process defined for most data is named ATM Adaptation Layer No. 5 or AAL5. Understanding throughput on ATM links requires a knowledge of which ATM adaptation layer has been used for the data being carried. ==== MPLS ==== Multiprotocol Label Switching (MPLS) adds a standard tag or header known as a 'label' to existing packets of data. In certain situations it is possible to use MPLS in a 'stacked' manner, so that labels are added to packets that have already been labelled. Connections between MPLS systems can also be 'native', with no underlying transport protocol, or MPLS labelled packets can be carried inside frame relay or HDLC packets as payloads. Correct throughput calculations need to take such configurations into account. For example, a data packet could have two MPLS labels attached via 'label-stacking', then be placed as payload inside an HDLC frame. This generates more overhead that has to be taken into account than a single MPLS label attached to a packet which is then sent 'natively', with no underlying protocol, to a receiving system. == Higher-level protocols == Few systems transfer files and data by simply copying the contents of the file into the 'Data' field of HDLC or PPP frames — another protocol layer is used to format the data inside the 'Data' field of the HDLC or PPP frame. The most commonly used such protocol is Internet Protocol (IP), defined by RFC 791. This imposes its own overheads.
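The MPLS stacking comparison above can be quantified with a small sketch. It assumes the standard 4-byte MPLS label and reuses the maximal 64-bit (8-byte) HDLC overhead from the HDLC section, ignoring flag bytes as that section does:

```python
MPLS_LABEL_BYTES = 4      # one MPLS label is 32 bits
HDLC_OVERHEAD_BYTES = 8   # 32-bit address + 16-bit control + 16-bit FCS

def overhead_bytes(labels, framing_bytes=0):
    """Per-packet overhead for a stack of MPLS labels plus optional framing."""
    return labels * MPLS_LABEL_BYTES + framing_bytes

def efficiency(payload_bytes, oh_bytes):
    """Fraction of transmitted bytes that are payload."""
    return payload_bytes / (payload_bytes + oh_bytes)

native_single = overhead_bytes(1)                          # 4 bytes
stacked_in_hdlc = overhead_bytes(2, HDLC_OVERHEAD_BYTES)   # 16 bytes

print(efficiency(1500, native_single))    # ~0.997 for a 1500-byte packet
print(efficiency(1500, stacked_in_hdlc))  # ~0.989 for the same packet
```

The difference per packet is small, but as with the earlier framing examples it is the per-packet overhead, not the nominal line rate, that determines delivered throughput.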
Again, few systems simply copy the contents of files into IP packets, but use yet another protocol that manages the connection between two systems — TCP (Transmission Control Protocol), defined by RFC 793. This adds its own overhead. Finally, another protocol layer manages the actual data transfer process. A commonly used protocol for this is the "file transfer protocol" (FTP). == See also == Asymptotic bandwidth Network traffic measurement Traffic generation model == References == == External links == Lawrence Berkeley National Laboratory paper on measuring available bandwidth
Wikipedia/Measuring_network_throughput
In digital transmission, the number of bit errors is the number of received bits of a data stream over a communication channel that have been altered due to noise, interference, distortion or bit synchronization errors. The bit error rate (BER) is the number of bit errors per unit time. The bit error ratio (also BER) is the number of bit errors divided by the total number of transferred bits during a studied time interval. Bit error ratio is a unitless performance measure, often expressed as a percentage. The bit error probability pe is the expected value of the bit error ratio. The bit error ratio can be considered as an approximate estimate of the bit error probability. This estimate is accurate for a long time interval and a high number of bit errors. == Example == As an example, assume this transmitted bit sequence: 1 1 0 0 0 1 0 1 1 and the following received bit sequence: 0 1 0 1 0 1 0 0 1. The number of bit errors (in the first, fourth and eighth bits) is, in this case, 3. The BER is 3 incorrect bits divided by 9 transferred bits, resulting in a BER of 0.333 or 33.3%. == Packet error ratio == The packet error ratio (PER) is the number of incorrectly received data packets divided by the total number of received packets. A packet is declared incorrect if at least one bit is erroneous. The expectation value of the PER is denoted packet error probability pp, which for a data packet length of N bits can be expressed as p p = 1 − ( 1 − p e ) N = 1 − e N ln ⁡ ( 1 − p e ) {\displaystyle p_{p}=1-(1-p_{e})^{N}=1-e^{N\ln(1-p_{e})}} , assuming that the bit errors are independent of each other. For small bit error probabilities and large data packets, this is approximately p p ≈ p e N . {\displaystyle p_{p}\approx p_{e}N.} Similar measurements can be carried out for the transmission of frames, blocks, or symbols.
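The worked example and the PER formula above can be reproduced directly; a short sketch:

```python
def bit_error_ratio(sent, received):
    """BER over a studied interval: errored bits / transferred bits."""
    assert len(sent) == len(received)
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

sent     = [1, 1, 0, 0, 0, 1, 0, 1, 1]
received = [0, 1, 0, 1, 0, 1, 0, 0, 1]
print(bit_error_ratio(sent, received))  # 3/9 = 0.333...

def packet_error_probability(pe, n_bits):
    """p_p = 1 - (1 - p_e)^N, assuming independent bit errors."""
    return 1 - (1 - pe) ** n_bits

# For small pe and large N this approaches the pe * N approximation:
print(packet_error_probability(1e-6, 12000))  # ~0.0119
print(1e-6 * 12000)                           # 0.012
```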
The above expression can be rearranged to express the corresponding BER (pe) as a function of the PER (pp) and the data packet length N in bits: p e = 1 − ( 1 − p p ) N {\displaystyle p_{e}=1-{\sqrt[{N}]{(1-p_{p})}}} == Factors affecting the BER == In a communication system, the receiver side BER may be affected by transmission channel noise, interference, distortion, bit synchronization problems, attenuation, wireless multipath fading, etc. The BER may be improved by choosing a strong signal (unless this causes cross-talk and more bit errors), by choosing a slow and robust modulation scheme or line coding scheme, and by applying channel coding schemes such as redundant forward error correction codes. The transmission BER is the number of detected bits that are incorrect before error correction, divided by the total number of transferred bits (including redundant error codes). The information BER, approximately equal to the decoding error probability, is the number of decoded bits that remain incorrect after the error correction, divided by the total number of decoded bits (the useful information). Normally the transmission BER is larger than the information BER. The information BER is affected by the strength of the forward error correction code. == Analysis of the BER == The BER may be evaluated using stochastic (Monte Carlo) computer simulations. If a simple transmission channel model and data source model are assumed, the BER may also be calculated analytically. An example of such a data source model is the Bernoulli source. Examples of simple channel models used in information theory are: Binary symmetric channel (used in analysis of decoding error probability in case of non-bursty bit errors on the transmission channel) Additive white Gaussian noise (AWGN) channel without fading. A worst-case scenario is a completely random channel, where noise totally dominates over the useful signal.
This results in a transmission BER of 50% (provided that a Bernoulli binary data source and a binary symmetrical channel are assumed, see below). In a noisy channel, the BER is often expressed as a function of the normalized carrier-to-noise ratio measure denoted Eb/N0 (energy per bit to noise power spectral density ratio), or Es/N0 (energy per modulation symbol to noise spectral density). For example, in the case of BPSK modulation and AWGN channel, the BER as a function of the Eb/N0 is given by: BER = Q ( 2 E b / N 0 ) {\displaystyle \operatorname {BER} =Q({\sqrt {2E_{b}/N_{0}}})} , where Q ( x ) := 1 2 π ∫ x ∞ e − t 2 / 2 d t {\displaystyle Q(x):={\frac {1}{\sqrt {2\pi }}}\int _{x}^{\infty }e^{-t^{2}/2}dt} . BER curves are usually plotted to describe the performance of a digital communication system. In optical communication, BER(dB) vs. Received Power (dBm) is usually used, while in wireless communication, BER(dB) vs. SNR(dB) is used. Measuring the bit error ratio helps people choose the appropriate forward error correction codes. Since most such codes correct only bit-flips, but not bit-insertions or bit-deletions, the Hamming distance metric is the appropriate way to measure the number of bit errors. Many FEC coders also continuously measure the current BER. A more general way of measuring the number of bit errors is the Levenshtein distance. The Levenshtein distance measurement is more appropriate for measuring raw channel performance before frame synchronization, and when using error correction codes designed to correct bit-insertions and bit-deletions, such as Marker Codes and Watermark Codes. == Mathematical draft == The BER is the likelihood of a bit misinterpretation due to electrical noise w ( t ) {\displaystyle w(t)} . Considering a bipolar NRZ transmission, we have x 1 ( t ) = A + w ( t ) {\displaystyle x_{1}(t)=A+w(t)} for a "1" and x 0 ( t ) = − A + w ( t ) {\displaystyle x_{0}(t)=-A+w(t)} for a "0".
Each of x 1 ( t ) {\displaystyle x_{1}(t)} and x 0 ( t ) {\displaystyle x_{0}(t)} has a period of T {\displaystyle T} . Knowing that the noise has a bilateral spectral density N 0 2 {\displaystyle {\frac {N_{0}}{2}}} , x 1 ( t ) {\displaystyle x_{1}(t)} is N ( A , N 0 2 T ) {\displaystyle {\mathcal {N}}\left(A,{\frac {N_{0}}{2T}}\right)} and x 0 ( t ) {\displaystyle x_{0}(t)} is N ( − A , N 0 2 T ) {\displaystyle {\mathcal {N}}\left(-A,{\frac {N_{0}}{2T}}\right)} . Returning to BER, we have the likelihood of a bit misinterpretation p e = p ( 0 | 1 ) p 1 + p ( 1 | 0 ) p 0 {\displaystyle p_{e}=p(0|1)p_{1}+p(1|0)p_{0}} , with p ( 1 | 0 ) = 0.5 erfc ⁡ ( A + λ N 0 / T ) {\displaystyle p(1|0)=0.5\,\operatorname {erfc} \left({\frac {A+\lambda }{\sqrt {N_{0}/T}}}\right)} and p ( 0 | 1 ) = 0.5 erfc ⁡ ( A − λ N 0 / T ) {\displaystyle p(0|1)=0.5\,\operatorname {erfc} \left({\frac {A-\lambda }{\sqrt {N_{0}/T}}}\right)} , where λ {\displaystyle \lambda } is the threshold of decision, set to 0 when p 1 = p 0 = 0.5 {\displaystyle p_{1}=p_{0}=0.5} . We can use the average energy of the signal E = A 2 T {\displaystyle E=A^{2}T} to find the final expression: p e = 0.5 erfc ⁡ ( E N 0 ) . {\displaystyle p_{e}=0.5\,\operatorname {erfc} \left({\sqrt {\frac {E}{N_{0}}}}\right).} == Bit error rate test == BERT or bit error rate test is a testing method for digital communication circuits that uses predetermined stress patterns consisting of a sequence of logical ones and zeros generated by a test pattern generator. A BERT typically consists of a test pattern generator and a receiver that can be set to the same pattern. They can be used in pairs, with one at either end of a transmission link, or singly at one end with a loopback at the remote end. BERTs are typically stand-alone specialised instruments, but can be personal computer–based. In use, the number of errors, if any, are counted and presented as a ratio such as 1 in 1,000,000, or 1 in 1e06.
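The closed-form result of the draft above can be checked with a Monte Carlo run. The sketch below assumes illustrative parameters A = T = N0 = 1 (so E/N0 = 1), with each filtered bit sample drawn from N(±A, N0/(2T)) as derived above:

```python
import math
import random

def simulated_ber(A=1.0, N0=1.0, T=1.0, n_bits=200_000, seed=1):
    """Monte Carlo estimate of the bipolar NRZ error probability.
    Decision threshold is 0 for equiprobable bits, as in the derivation."""
    rng = random.Random(seed)
    sigma = math.sqrt(N0 / (2 * T))   # std. dev. of the filtered sample
    errors = 0
    for _ in range(n_bits):
        bit = rng.random() < 0.5      # equiprobable "1" / "0"
        level = A if bit else -A
        sample = level + rng.gauss(0.0, sigma)
        if (sample > 0) != bit:       # misinterpretation
            errors += 1
    return errors / n_bits

theory = 0.5 * math.erfc(math.sqrt(1.0 / 1.0))  # p_e = 0.5 erfc(sqrt(E/N0))
print(theory)           # ~0.0786
print(simulated_ber())  # close to the theoretical value
```

With 200,000 trials the empirical ratio typically lands within a few tenths of a percent of the analytic p_e, illustrating the earlier remark that the BER estimate is accurate only over many bits.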
=== Common types of BERT stress patterns ===
PRBS (pseudorandom binary sequence) – A pseudorandom binary sequencer of N bits. These pattern sequences are used to measure jitter and eye mask of TX-Data in electrical and optical data links.
QRSS (quasi random signal source) – A pseudorandom binary sequencer which generates every combination of a 20-bit word, repeats every 1,048,575 words, and suppresses consecutive zeros to no more than 14. It contains high-density sequences, low-density sequences, and sequences that change from low to high and vice versa. This pattern is also the standard pattern used to measure jitter.
3 in 24 – Pattern contains the longest string of consecutive zeros (15) with the lowest ones density (12.5%). This pattern simultaneously stresses minimum ones density and the maximum number of consecutive zeros. The D4 frame format of 3 in 24 may cause a D4 yellow alarm for frame circuits depending on the alignment of one bits to a frame.
1:7 – Also referred to as 1 in 8. It has only a single one in an eight-bit repeating sequence. This pattern stresses the minimum ones density of 12.5% and should be used when testing facilities set for B8ZS coding as the 3 in 24 pattern increases to 29.5% when converted to B8ZS.
Min/max – Pattern rapidly changes from low density to high density. Most useful when stressing the repeater's ALBO feature.
All ones (or mark) – A pattern composed of ones only. This pattern causes the repeater to consume the maximum amount of power. If DC to the repeater is regulated properly, the repeater will have no trouble transmitting the long ones sequence. This pattern should be used when measuring span power regulation. An unframed all ones pattern is used to indicate an AIS (also known as a blue alarm).
All zeros – A pattern composed of zeros only. It is effective in finding equipment misoptioned for AMI, such as fiber/radio multiplex low-speed inputs.
Alternating 0s and 1s – A pattern composed of alternating ones and zeroes.
2 in 8 – Pattern contains a maximum of four consecutive zeros. It will not invoke a B8ZS sequence because eight consecutive zeros are required to cause a B8ZS substitution. The pattern is effective in finding equipment misoptioned for B8ZS.
Bridgetap – Bridge taps within a span can be detected by employing a number of test patterns with a variety of ones and zeros densities. This test generates 21 test patterns and runs for 15 minutes. If a signal error occurs, the span may have one or more bridge taps. This pattern is only effective for T1 spans that transmit the signal raw. Modulation used in HDSL spans negates the bridgetap patterns' ability to uncover bridge taps.
Multipat – This test generates five commonly used test patterns to allow DS1 span testing without having to select each test pattern individually. Patterns are: all ones, 1:7, 2 in 8, 3 in 24, and QRSS.
T1-DALY and 55 OCTET – Each of these patterns contains fifty-five (55) eight-bit octets of data in a sequence that changes rapidly between low and high density. These patterns are used primarily to stress the ALBO and equalizer circuitry but they will also stress timing recovery. 55 OCTET has fifteen (15) consecutive zeroes and can only be used unframed without violating ones density requirements. For framed signals, the T1-DALY pattern should be used. Both patterns will force a B8ZS code in circuits optioned for B8ZS.
== Bit error rate tester ==
A bit error rate tester (BERT), also known as a "bit error ratio tester" or bit error rate test solution (BERTs) is electronic test equipment used to test the quality of signal transmission of single components or complete systems.
The main building blocks of a BERT are:
Pattern generator, which transmits a defined test pattern to the DUT or test system
Error detector connected to the DUT or test system, to count the errors generated by the DUT or test system
Clock signal generator to synchronize the pattern generator and the error detector
Digital communication analyser (optional), to display the transmitted or received signal
Electrical-optical converter and optical-electrical converter for testing optical communication signals
== See also == Burst error Error correction code Errored second Pseudo bit error ratio Viterbi Error Rate == References == This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22. (in support of MIL-STD-188). == External links == QPSK BER for AWGN channel – online experiment
Wikipedia/Bit_error_rate
The NASA Deep Space Network (DSN) is a worldwide network of spacecraft communication ground segment facilities, located in the United States (California), Spain (Madrid), and Australia (Canberra), that supports NASA's interplanetary spacecraft missions. It also performs radio and radar astronomy observations for the exploration of the Solar System and the universe, and supports selected Earth-orbiting missions. DSN is part of the NASA Jet Propulsion Laboratory (JPL). == General information == DSN currently consists of three deep-space communications facilities located such that a distant spacecraft is always in view of at least one station. They are: the Goldstone Deep Space Communications Complex (35°25′36″N 116°53′24″W) about 60 kilometres (37 mi) north of Barstow, California. For details of Goldstone's contribution to the early days of space probe tracking, see Project Space Track; the Madrid Deep Space Communications Complex (40°25′53″N 4°14′53″W), 60 kilometres (37 mi) west of Madrid, Spain; and the Canberra Deep Space Communication Complex (CDSCC) in the Australian Capital Territory (35°24′05″S 148°58′54″E), 40 kilometres (25 mi) southwest of Canberra, Australia near the Tidbinbilla Nature Reserve. Each facility is situated in semi-mountainous, bowl-shaped terrain to help shield against radio frequency interference. The strategic placement of the stations permits constant observation of spacecraft as the Earth rotates, which helps to make the DSN the largest and most sensitive scientific telecommunications system in the world. The DSN supports NASA's contribution to the scientific investigation of the Solar System: It provides a two-way communications link that guides and controls various NASA uncrewed interplanetary space probes, and brings back the images and new scientific information these probes collect. All DSN antennas are steerable, high-gain, parabolic reflector antennas. 
The antennas and data delivery systems make it possible to:
acquire telemetry data from spacecraft
transmit commands to spacecraft
upload software modifications to spacecraft
track spacecraft position and velocity
perform Very Long Baseline Interferometry observations
measure variations in radio waves for radio science experiments
gather science data
monitor and control the performance of the network
Other countries and organizations also run deep space networks. The DSN operates according to the standards of the Consultative Committee for Space Data Systems, as do most other deep space networks, and hence the DSN is able to inter-operate with the networks of other space agencies. These include the Soviet Deep Space Network, the Chinese Deep Space Network, the Indian Deep Space Network, the Japanese Deep Space Network, and the ESTRACK of the European Space Agency. These agencies often cooperate for better mission coverage. In particular, DSN has a cross-support agreement with ESA that allows mutual use of both networks for more effectiveness and reduced risk. In addition, radio astronomy facilities, such as the Parkes Observatory, the Green Bank Telescope, and the Very Large Array, are sometimes used to supplement the antennas of the DSN. === Operations control center === The antennas at all three DSN Complexes communicate directly with the Deep Space Operations Center (also known as Deep Space Network operations control center) located at the JPL facilities in Pasadena, California. In the early years, the operations control center did not have a permanent facility. It was a provisional setup with numerous desks and phones installed in a large room near the computers used to calculate orbits. In July 1961, NASA started the construction of the permanent facility, Space Flight Operations Facility (SFOF). The facility was completed in October 1963 and dedicated on May 14, 1964.
In the initial setup of the SFOF, there were 31 consoles, 100 closed-circuit television cameras, and more than 200 television displays to support Ranger 6 to Ranger 9 and Mariner 4. Currently, the operations center personnel at SFOF monitor and direct operations, and oversee the quality of spacecraft telemetry and navigation data delivered to network users. In addition to the DSN complexes and the operations center, a ground communications facility provides communications that link the three complexes to the operations center at JPL, to space flight control centers in the United States and overseas, and to scientists around the world. == Deep space == Tracking vehicles in deep space is quite different from tracking missions in low Earth orbit (LEO). Deep space missions are visible for long periods of time from a large portion of the Earth's surface, and so require few stations (the DSN has only three main sites). These few stations, however, require huge antennas, ultra-sensitive receivers, and powerful transmitters in order to transmit and receive over the vast distances involved. Deep space is defined in several different ways. According to a 1975 NASA report, the DSN was designed to communicate with "spacecraft traveling approximately 16,000 km (10,000 miles) from Earth to the farthest planets of the solar system." JPL diagrams state that at an altitude of 30,000 km (19,000 mi), a spacecraft is always in the field of view of one of the tracking stations. The International Telecommunication Union, which sets aside various frequency bands for deep space and near Earth use, defines "deep space" to start at a distance of 2 million km (1.2 million mi) from the Earth's surface. === Frequency bands === The NASA Deep Space Network can both send and receive in all of the ITU deep space bands - S-band (2 GHz), X-band (8 GHz), and Ka-band (32 GHz). 
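Gain for a fixed-size dish rises with the square of frequency, which is why usage has migrated up through these bands. A rough sketch using the standard parabolic-aperture gain relation G = η(πD/λ)²; the 34 m diameter matches the DSN's smaller dishes, while the ~60% aperture efficiency is an illustrative assumption:

```python
import math

C = 3.0e8  # speed of light, m/s

def dish_gain_dbi(freq_hz, diameter_m=34.0, efficiency=0.6):
    """Ideal parabolic-dish gain G = eta * (pi * D / lambda)^2, in dBi."""
    wavelength = C / freq_hz
    gain = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain)

for name, freq in [("S", 2e9), ("X", 8e9), ("Ka", 32e9)]:
    print(f"{name}-band: {dish_gain_dbi(freq):.1f} dBi")
# Each 4x step in frequency buys ~12 dB of gain for the same dish,
# at the cost of tighter pointing and surface-accuracy requirements.
```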
Frequency usage has in general moved upward over the life of the DSN, as higher frequencies have higher gain for the same size antenna, and the deep space bands are wider, so more data can be returned. However, higher frequencies also need more accurate pointing (on the spacecraft) and more precise antenna surfaces (on Earth), so improvements in both spacecraft and the DSN were required to move to higher bands. Early missions used S-band for both uplink and downlink. Viking (1975) had X-band as an experiment, and Voyager (1977) was the first to use it operationally. Similarly, Mars Observer (1992) carried a Ka-band experiment, Mars Reconnaissance Orbiter (2005) had a Ka-band demo, and Kepler (2009) was the first mission to use Ka-band as the primary downlink. However, not all space missions can use these bands. The Moon, the Earth–Moon Lagrange points, and the Earth–Sun Lagrangian points L1 and L2 are all closer than 2 million km from Earth, so they are considered near space and cannot use the ITU's deep space bands. Missions at these locations that need high data rates must therefore use the "near space" K band (27 GHz). Since NASA has several such missions (such as the James Webb Space Telescope and the Lunar Reconnaissance Orbiter), they have enhanced the Deep Space Network to receive (but not transmit) at these frequencies as well. The DSN is also pursuing optical deep space communication, offering greater communication speeds at the cost of susceptibility to weather and the need for extremely precise pointing of the spacecraft. This technology is working in prototype form. == History == The forerunner of the DSN was established in January 1958, when JPL, then under contract to the U.S. Army, deployed portable radio tracking stations in Nigeria, Singapore, and California to receive telemetry and plot the orbit of the Army-launched Explorer 1, the first successful U.S. satellite.
NASA was officially established on October 1, 1958, to consolidate the separately developing space-exploration programs of the US Army, US Navy, and US Air Force into one civilian organization. On December 3, 1958, JPL was transferred from the US Army to NASA and given responsibility for the design and execution of lunar and planetary exploration programs using remotely controlled spacecraft. Shortly after the transfer, NASA established the concept of the Deep Space Network as a separately managed and operated communications system that would accommodate all deep space missions, thereby avoiding the need for each flight project to acquire and operate its own specialized space communications network. The DSN was given responsibility for its own research, development, and operation in support of all of its users. Under this concept, it has become a world leader in the development of low-noise receivers; large parabolic-dish antennas; tracking, telemetry, and command systems; digital signal processing; and deep space navigation. The Deep Space Network was formally established on Christmas Eve 1963; it has remained in continuous operation in one capacity or another ever since. The largest antennas of the DSN are often called on during spacecraft emergencies. Almost all spacecraft are designed so normal operation can be conducted on the smaller (and more economical) antennas of the DSN, but during an emergency the use of the largest antennas is crucial. This is because a troubled spacecraft may be forced to use less than its normal transmitter power, attitude control problems may preclude the use of high-gain antennas, and recovering every bit of telemetry is critical to assessing the health of the spacecraft and planning the recovery.
The most famous example is the Apollo 13 mission, where limited battery power and inability to use the spacecraft's high-gain antennas reduced signal levels below the capability of the Manned Space Flight Network, and the use of the biggest DSN antennas (and the Australian Parkes Observatory radio telescope) was critical to saving the lives of the astronauts. While Apollo was also a US mission, DSN provides this emergency service to other space agencies as well, in a spirit of inter-agency and international cooperation. For example, the recovery of the Solar and Heliospheric Observatory (SOHO) mission of the European Space Agency (ESA) would not have been possible without the use of the largest DSN facilities. === DSN and the Apollo program === Although normally tasked with tracking uncrewed spacecraft, the Deep Space Network (DSN) also contributed to the communication and tracking of Apollo missions to the Moon, although primary responsibility was held by the Manned Space Flight Network (MSFN). The DSN designed the MSFN stations for lunar communication and provided a second antenna at each MSFN site (the MSFN sites were near the DSN sites for just this reason). Two antennas at each site were needed both for redundancy and because the beam widths of the large antennas needed were too small to encompass both the lunar orbiter and the lander at the same time. DSN also supplied some larger antennas as needed, in particular for television broadcasts from the Moon, and emergency communications such as Apollo 13. Excerpt from a NASA report describing how the DSN and MSFN cooperated for Apollo: Another critical step in the evolution of the Apollo Network came in 1965 with the advent of the DSN Wing concept. Originally, the participation of DSN 26-m antennas during an Apollo Mission was to be limited to a backup role. This was one reason why the MSFN 26-m sites were collocated with the DSN sites at Goldstone, Madrid, and Canberra. 
However, the presence of two, well-separated spacecraft during lunar operations stimulated the rethinking of the tracking and communication problem. One thought was to add a dual S-band RF system to each of the three 26-m MSFN antennas, leaving the nearby DSN 26-m antennas still in a backup role. Calculations showed, though, that a 26-m antenna pattern centered on the landed Lunar Module would suffer a 9-to-12 dB loss at the lunar horizon, making tracking and data acquisition of the orbiting Command Service Module difficult, perhaps impossible. It made sense to use both the MSFN and DSN antennas simultaneously during the all-important lunar operations. JPL was naturally reluctant to compromise the objectives of its many uncrewed spacecraft by turning three of its DSN stations over to the MSFN for long periods. How could the goals of both Apollo and deep space exploration be achieved without building a third 26-m antenna at each of the three sites or undercutting planetary science missions? The solution came in early 1965 at a meeting at NASA Headquarters, when Eberhardt Rechtin suggested what is now known as the "wing concept". The wing approach involves constructing a new section or "wing" to the main building at each of the three involved DSN sites. The wing would include a MSFN control room and the necessary interface equipment to accomplish the following:
Permit tracking and two-way data transfer with either spacecraft during lunar operations.
Permit tracking and two-way data transfer with the combined spacecraft during the flight to the Moon.
Provide backup for the collocated MSFN site passive track (spacecraft to ground RF links) of the Apollo spacecraft during trans-lunar and trans-earth phases.
With this arrangement, the DSN station could be quickly switched from a deep-space mission to Apollo and back again. GSFC personnel would operate the MSFN equipment completely independently of DSN personnel.
Deep space missions would not be compromised nearly as much as if the entire station's equipment and personnel were turned over to Apollo for several weeks. The details of this cooperation and operation are available in a two-volume technical report from JPL. == Management == The network is a NASA facility and is managed and operated for NASA by JPL, which is part of the California Institute of Technology (Caltech). The Interplanetary Network Directorate (IND) manages the program within JPL and is charged with its development and operation. The IND is considered to be JPL's focal point for all matters relating to telecommunications, interplanetary navigation, information systems, information technology, computing, software engineering, and other relevant technologies. While the IND is best known for its duties relating to the Deep Space Network, the organization also maintains the JPL Advanced Multi-Mission Operations System (AMMOS) and JPL's Institutional Computing and Information Services (ICIS). The facilities in Spain and Australia are jointly owned and operated in conjunction with the host governments' scientific institutions. In Australia, "the Commonwealth Scientific and Industrial Research Organisation (CSIRO), an Australian Commonwealth Government Statutory Authority, established the CSIRO Astronomy and Space Science Division to manage the day-to-day operations, engineering, and maintenance activities of the Canberra Deep Space Communications Complex". Most of the staff at Tidbinbilla are Australian government employees; the land and buildings are owned by the Australian government; NASA provides the bulk of the funding, owns the movable property (such as dishes and electronic equipment) which it has paid for, and gets to decide where to point the dishes. Similarly, in Spain, "Ingenieria de Sistemas para la Defensa de España S.A.
(ISDEFE), a wholly owned subsidiary of the Instituto Nacional de Técnica Aeroespacial (INTA) and a part of the Spanish Department of Defense, operates and maintains the Madrid Deep Space Communications Complex (Madrid)". Peraton (formerly Harris Corporation) is under contract to JPL for the DSN's operations and maintenance. Peraton has responsibility for managing the Goldstone complex, operating the DSOC, and for DSN operations, mission planning, operations engineering, and logistics. == Antennas == Each complex consists of at least four deep space terminals equipped with ultra-sensitive receiving systems and large parabolic-dish antennas. These are:
Three or more 34-meter (112 ft) beam waveguide antennas (BWG)
One 70-meter (230 ft) antenna.
Five of the 34-meter (112 ft) beam waveguide antennas were added to the system in the late 1990s. Three were located at Goldstone, and one each at Canberra and Madrid. A second 34-meter (112 ft) beam waveguide antenna (the network's sixth) was completed at the Madrid complex in 2004. In order to meet the current and future needs of deep space communication services, a number of new Deep Space Station antennas had to be built at the existing Deep Space Network sites. At the Canberra Deep Space Communication Complex the first of these was completed in October 2014 (DSS35), with a second becoming operational in October 2016 (DSS36). A new 34 meter dish (DSS53) became operational at the Madrid complex in February 2022. The 70 meter antennas are aging and more difficult to maintain than the modern BWG antennas. Therefore, in 2012 NASA announced a plan to decommission all three of them and replace them with arrayed 34-meter BWG antennas. Each of these new antennas would be upgraded to have X-band uplink capabilities and both X- and Ka-band downlink capabilities. However, by 2021, NASA decided instead to do a complete refurbishment of all 70 meter antennas, requiring taking them offline for months at a time.
These refurbished antennas were expected to serve for decades to come. == Current signal processing capabilities == The general capabilities of the DSN have not substantially changed since the beginning of the Voyager Interstellar Mission in the early 1990s. However, many advancements in digital signal processing, arraying and error correction have been adopted by the DSN. The ability to array several antennas was incorporated to improve the data returned from the Voyager 2 Neptune encounter, and was used extensively for the Galileo mission, when the spacecraft's high-gain antenna failed to deploy and, as a result, Galileo was forced to operate solely off its low-gain antennas. Since the Galileo mission, the DSN array can link the 70-meter (230 ft) dish antenna at the Deep Space Network complex in Goldstone, California, with an identical antenna located in Australia, in addition to two 34-meter (112 ft) antennas at the Canberra complex. The California and Australia sites were used concurrently to pick up communications with Galileo. Arraying of antennas within the three DSN locations is also used. For example, a 70-meter (230 ft) dish antenna can be arrayed with a 34-meter dish. For especially vital missions, like Voyager 2, non-DSN facilities normally used for radio astronomy can be added to the array. In particular, the Canberra 70-meter (230 ft) dish can be arrayed with the Parkes Radio Telescope in Australia; and the Goldstone 70-meter dish can be arrayed with the Very Large Array of antennas in New Mexico. Also, two or more 34-meter (112 ft) dishes at one DSN location are commonly arrayed together. All the stations are remotely operated from a centralized Signal Processing Center at each complex. These Centers house the electronic subsystems that point and control the antennas, receive and process the telemetry data, transmit commands, and generate the spacecraft navigation data. 
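The signal-to-noise benefit of arraying comes from the combined collecting area of the dishes. The sketch below is an idealized back-of-the-envelope model: it assumes perfectly coherent combining and identical system noise temperatures, both simplifications that a real link budget would not make.

```python
import math

def dish_area(diameter_m):
    """Physical aperture area of a parabolic dish, in m^2."""
    return math.pi * (diameter_m / 2) ** 2

def array_gain_db(diameters_m, reference_m):
    """Idealized SNR gain (dB) of coherently arraying several dishes,
    relative to a single reference dish. Under the simplifying
    assumptions above, received power scales with total collecting
    area, so the gain is just an area ratio."""
    total = sum(dish_area(d) for d in diameters_m)
    return 10 * math.log10(total / dish_area(reference_m))

# Arraying a 70 m dish with one 34 m dish, versus the 70 m alone:
gain = array_gain_db([70, 34], reference_m=70)
print(f"{gain:.2f} dB")  # → 0.92 dB
```

Under this idealized model, adding one 34-meter dish to a 70-meter dish buys under 1 dB, which is why several 34-meter BWG antennas must be arrayed to stand in for a 70-meter aperture.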
Once the data are processed at the complexes, they are transmitted to JPL for further processing and for distribution to science teams over a modern communications network. Especially at Mars, there are often many spacecraft within the beam width of an antenna. For operational efficiency, a single antenna can receive signals from multiple spacecraft at the same time. This capability is called Multiple Spacecraft Per Aperture, or MSPA. Currently, the DSN can receive up to four spacecraft signals at the same time (MSPA-4). However, apertures cannot currently be shared for uplink. When two or more high-power carriers are used simultaneously, very high order intermodulation products fall in the receiver bands, causing interference to the much (25 orders of magnitude) weaker received signals. Therefore, only one spacecraft at a time can get an uplink, though up to four can be received. == Network limitations and challenges == There are a number of limitations to the current DSN, and a number of challenges going forward. Most of these are outlined in an audit of the Deep Space Network performed by NASA's Office of Inspector General. Its main conclusions are that: the DSN is oversubscribed, leading to mission impacts and scheduling challenges; capacity limitations leading to mission impacts are expected to increase with the onset of crewed Artemis missions; capacity limitations, a lack of readily available backups, and a laborious process present challenges to scheduling time on the DSN; upgrades to the network are behind schedule and more costly than planned; and there are challenges with international partners and project oversight. Other problems have been noted as well: The Deep Space Network nodes are all on Earth. Therefore, data transmission rates from/to spacecraft and space probes are severely constrained due to the distances from Earth. 
For now it can connect with the Mars orbiters in the Mars Relay Network for faster and more flexible communications with spacecraft and landers on Mars. Adding dedicated communication satellites elsewhere in space, to handle multiparty, multi-mission use, such as the canceled Mars Telecommunications Orbiter, would increase flexibility towards some sort of Interplanetary Internet. The need to support "legacy" missions that have remained operational beyond their original lifetimes but are still returning scientific data. Programs such as Voyager have been operating long past their original mission termination date. They also need some of the largest antennas. Replacing major components can cause problems as it can leave an antenna out of service for months at a time. The older 70 m antennas are reaching the end of their lives, and at some point will need to be replaced. NASA has so far extended their lives through major refurbishment. The leading candidate for 70 m replacement had been an array of smaller dishes, but more recently the decision was taken to expand the provision of 34-meter (112 ft) BWG antennas at each complex to a total of 4. All the 34-meter HEF antennas have been replaced. Because of capacity limits on the DSN, new spacecraft intended for missions beyond geocentric orbits are being equipped to use the beacon mode service, which allows such missions to operate without the DSN most of the time. In addition, NASA is creating a network of Lunar Exploration Ground Sites to offload much of the lunar and Artemis mission needs from the DSN. == DSN and radio science == The DSN forms one portion of the radio sciences experiment included on most deep space missions, where radio links between spacecraft and Earth are used to investigate planetary science, space physics and fundamental physics. 
The experiments include radio occultations, gravity field determination and celestial mechanics, bistatic scattering, Doppler wind experiments, solar corona characterization, and tests of fundamental physics. For example, the Deep Space Network forms one component of the gravity science experiment on Juno. This includes special communication hardware on Juno and uses its communication system. The DSN radiates a Ka-band uplink, which is picked up by Juno's Ka-band communication system, processed by a special communication box called KaTS, and then sent back to the DSN. This allows the velocity of the spacecraft over time to be determined with a level of precision that allows a more accurate determination of Jupiter's gravity field. Another radio science experiment is REX on the New Horizons spacecraft to Pluto–Charon. REX received a signal from Earth as it was occulted by Pluto, to take various measurements of that system of bodies. == See also == == Sources == This article incorporates public domain material from Corliss, William R. (June 1974). NASA Technical Report CR 140390, Histories of the Space Tracking and Data Acquisition Network (STADAN), the Manned Space Flight Network (MSFN), and the NASA Communications Network (NASCOM) (PDF). NASA. hdl:2060/19750002909. Archived (PDF) from the original on 2022-03-03. == References == Notes The Sun-orbiting Ulysses spacecraft's extended mission terminated on June 30, 2009. The extension permitted a third flyby over the Sun's poles in 2007–2008. The two Voyager spacecraft continue to operate, with some loss in subsystem redundancy, but retain the capability of returning science data from a full complement of VIM science instruments. Both spacecraft also have adequate electrical power and attitude control propellant to continue operating until around 2020, when the available electrical power will no longer support science instrument operation. 
At this time, science data return and spacecraft operations will cease. The Deep Space Positioning System (DSPS) is being developed. == External links and further reading == JPL DSN – official site. DSN Now, NASA, live status of antennas and spacecraft at all three facilities.
Wikipedia/Deep_Space_Network
Delay-tolerant networking (DTN) is an approach to computer network architecture that seeks to address the technical issues in heterogeneous networks that may lack continuous network connectivity. Examples of such networks are those operating in mobile or extreme terrestrial environments, or planned networks in space. Recently, the term disruption-tolerant networking has gained currency in the United States due to support from DARPA, which has funded many DTN projects. Disruption may occur because of the limits of wireless radio range, sparsity of mobile nodes, energy resources, attack, and noise. == History == In the 1970s, spurred by the decreasing size of computers, researchers began developing technology for routing between non-fixed locations of computers. While the field of ad hoc routing was inactive throughout the 1980s, the widespread use of wireless protocols reinvigorated the field in the 1990s as mobile ad hoc networking (MANET) and vehicular ad hoc networking became areas of increasing interest. Concurrently with (but separate from) the MANET activities, DARPA had funded NASA, MITRE and others to develop a proposal for the Interplanetary Internet (IPN). Internet pioneer Vint Cerf and others developed the initial IPN architecture, relating to the necessity of networking technologies that can cope with the significant delays and packet corruption of deep-space communications. In 2002, Kevin Fall started to adapt some of the ideas in the IPN design to terrestrial networks and coined the term delay-tolerant networking and the DTN acronym. A paper published in 2003 SIGCOMM conference gives the motivation for DTNs. The mid-2000s brought about increased interest in DTNs, including a growing number of academic conferences on delay and disruption-tolerant networking, and growing interest in combining work from sensor networks and MANETs with the work on DTN. 
This field saw many optimizations on classic ad hoc and delay-tolerant networking algorithms and began to examine factors such as security, reliability, verifiability, and other areas of research that are well understood in traditional computer networking. == Routing == The ability to transport, or route, data from a source to a destination is a fundamental ability all communication networks must have. Delay and disruption-tolerant networks (DTNs) are characterized by their lack of connectivity, resulting in a lack of instantaneous end-to-end paths. In these challenging environments, popular ad hoc routing protocols such as AODV and DSR fail to establish routes, because they try to first establish a complete route and only then forward the actual data. When instantaneous end-to-end paths are difficult or impossible to establish, routing protocols must instead take a "store and forward" approach, where data is incrementally moved and stored throughout the network in the hope that it will eventually reach its destination. A common technique used to maximize the probability of a message being successfully transferred is to replicate many copies of the message in the hope that one will succeed in reaching its destination. This is feasible only on networks with large amounts of local storage and internode bandwidth relative to the expected traffic. In many common problem spaces, this inefficiency is outweighed by the increased delivery probability and shortened delivery times made possible by taking maximum advantage of available unscheduled forwarding opportunities. In others, where available storage and internode throughput opportunities are more tightly constrained, a more discriminating algorithm is required. 
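The replication strategy just described, often called epidemic routing, can be sketched in a few lines. The contact schedule and node names below are invented for the example; a real DTN derives contacts from mobility traces or orbital mechanics.

```python
# A minimal sketch of epidemic ("replicate on contact") DTN routing.
def epidemic_deliver(contacts, source, destination):
    """contacts: time-ordered list of (node_a, node_b) encounters.
    Every node stores and forwards: on each encounter, any carried
    copy of the message is replicated to the peer. Returns the index
    of the contact at which the destination first receives a copy,
    or None if the message is never delivered."""
    carriers = {source}           # nodes currently holding a copy
    for i, (a, b) in enumerate(contacts):
        if a in carriers or b in carriers:
            carriers |= {a, b}    # replicate to both sides of the contact
        if destination in carriers:
            return i
    return None

# No direct A-D contact ever occurs, yet the message still arrives:
# it is stored on B and forwarded later during the B-D encounter.
schedule = [("A", "B"), ("C", "E"), ("B", "D")]
print(epidemic_deliver(schedule, "A", "D"))  # → 2
```

Note that delivery relies entirely on storage at intermediate nodes; no end-to-end path ever exists at a single instant, which is exactly the situation where AODV- or DSR-style protocols give up.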
== Other concerns == === Bundle protocols === In efforts to provide a shared framework for algorithm and application development in DTNs, RFC 4838 and 5050 were published in 2007 to define a common abstraction to software running on disrupted networks. Commonly known as the Bundle Protocol, this protocol defines a series of contiguous data blocks as a bundle—where each bundle contains enough semantic information to allow the application to make progress where an individual block may not. Bundles are routed in a store and forward manner between participating nodes over varied network transport technologies (including both IP and non-IP based transports). The transport layers carrying the bundles across their local networks are called bundle convergence layers. The bundle architecture therefore operates as an overlay network, providing a new naming architecture based on Endpoint Identifiers (EIDs) and coarse-grained class of service offerings. Protocols using bundling must leverage application-level preferences for sending bundles across a network. Due to the store and forward nature of delay-tolerant protocols, routing solutions for delay-tolerant networks can benefit from exposure to application-layer information. For example, network scheduling can be influenced if application data must be received in its entirety, quickly, or without variation in packet delay. Bundle protocols collect application data into bundles that can be sent across heterogeneous network configurations with high-level service guarantees. The service guarantees are generally set by the application level, and the RFC 5050 Bundle Protocol specification includes "bulk", "normal", and "expedited" markings. In October 2014 the Internet Engineering Task Force (IETF) instantiated a Delay Tolerant Networking working group to review and revise the protocol specified in RFC 5050. 
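The abstractions above (EID-addressed bundles carrying class-of-service markings, held in storage until a forwarding opportunity appears) can be illustrated with a minimal sketch. This is not the RFC 5050 wire format, and the EIDs are invented for the example.

```python
# A simplified illustration of the bundle abstraction: bundles are
# addressed by Endpoint Identifiers (EIDs) and carry the RFC 5050
# class-of-service markings; a node stores them until a contact
# appears, then forwards highest-priority bundles first.
from dataclasses import dataclass

PRIORITY = {"bulk": 0, "normal": 1, "expedited": 2}

@dataclass
class Bundle:
    source_eid: str      # e.g. "dtn://node-a/app" (EID invented here)
    dest_eid: str
    payload: bytes
    cos: str = "normal"  # "bulk" | "normal" | "expedited"

class BundleStore:
    """Store-and-forward buffer on one node."""
    def __init__(self):
        self._stored = []

    def store(self, bundle):
        self._stored.append(bundle)

    def next_for_contact(self):
        """Release the highest-priority stored bundle, or None."""
        if not self._stored:
            return None
        best = max(self._stored, key=lambda b: PRIORITY[b.cos])
        self._stored.remove(best)
        return best

store = BundleStore()
store.store(Bundle("dtn://a/app", "dtn://b/app", b"telemetry", cos="bulk"))
store.store(Bundle("dtn://a/app", "dtn://b/app", b"alert", cos="expedited"))
print(store.next_for_contact().payload)  # → b'alert'
```

A real implementation adds the pieces the sketch omits: block-level encoding, custody transfer, lifetimes, and convergence-layer adapters for each underlying transport.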
The Bundle Protocol for CCSDS is a profile of RFC 5050 specifically addressing the Bundle Protocol's utility for data communication in space missions. As of January 2022, the IETF published the following RFCs related to BPv7: RFC 9171, 9172, 9173, 9174. In January 2025, RFC 9713 was published, which updates RFC 9171. === Security issues === Addressing security issues has been a major focus of the bundle protocol. Possible attacks take the form of nodes behaving as a "black hole" or a "flooder". Security concerns for delay-tolerant networks vary depending on the environment and application, though authentication and privacy are often critical. These security guarantees are difficult to establish in a network without continuous bi-directional end-to-end paths between devices, because the network hinders complicated cryptographic protocols, hinders key exchange, and each device must identify other intermittently visible devices. Solutions have typically been adapted from mobile ad hoc network and distributed security research, such as the use of distributed certificate authorities and PKI schemes. Original solutions from the delay-tolerant research community include: 1) the use of identity-based encryption, which allows nodes to receive information encrypted with their public identifier; and 2) the use of tamper-evident tables with a gossiping protocol. == Implementations == There are a number of implementations of the Bundle Protocol: === BPv6 (RFC 5050, Bundle Protocol for CCSDS) === The main implementations of BPv6 are listed below. A number of other implementations exist. High-rate DTN—C++17-based; performance-optimized DTN; runs directly on Linux and Windows. NASA Interplanetary Overlay Network (ION)—Written in C; designed to run on a wide variety of platforms; conforms to restrictions for space flight software (e.g. no dynamic memory allocation). 
IBR-DTN—C++-based; runs on routers with OpenWRT; also contains Java applications (router and user apps) for use on Android. DTN2—C++-based; designed to be a reference / learning / teaching implementation of the Bundle Protocol. DTN Marshal Enterprise (DTNME)—C++-based; enterprise solution; designed as an operational DTN implementation. Currently used in ISS operations. DTNME is a single implementation supporting both BPv6 and BPv7. === BPv7 (Internet Research Task Force RFC) === The draft of BPv7 lists the following implementations. High-rate DTN—C++17-based; performance-optimized DTN; runs directly on Linux and Windows. μPCN—C; built upon the POSIX API as well as FreeRTOS and intended to run on low-cost micro satellites. PyDTN—Python; developed by X-works during the IETF 101 Hackathon. Terra—Java; developed in the context of terrestrial DTN. dtn7-go—Go; implementation focused on easy extensibility and suitable for research. dtn7-rs—Rust; intended for environments with limited resources and performance requirements. NASA Interplanetary Overlay Network (ION)—C; intended to be usable in embedded environments including spacecraft flight computers. DTN Marshal Enterprise (DTNME)—C++-based; enterprise solution; designed as an operational DTN implementation. Currently used in ISS operations. DTNME is a single implementation supporting both BPv6 and BPv7. NASA BPLib—C; a Bundle Protocol library and associated applications by Goddard Space Flight Center. Intended for general use, particularly in space flight applications, integration with cFS (core Flight System), and other applications where store-and-forward capabilities are needed. It will be used for the first time on the PACE mission. == Research efforts == Various research efforts are currently investigating the issues involved with DTN: The Delay-Tolerant Networking Research Group. 
The Technology and Infrastructure for Developing Regions project at UC Berkeley The Bytewalla research project at the Royal Institute of Technology, KTH The KioskNet research project at the University of Waterloo. The DieselNet research project at the University of Massachusetts Amherst. The ResiliNets Research Initiative at the University of Kansas and Lancaster University. The Haggle EU research project. The Space Internetworking Center EU/FP7 project at the Democritus University of Thrace. The N4C EU/FP7 research project. The WNaN DARPA project. The EMMA and OPTRACOM projects at TU Braunschweig The DTN at Helsinki University of Technology. The SARAH project, funded by the French National Research Agency (ANR). The development of the DoDWAN platform at Université Bretagne Sud. The CROWD project, funded by the French National Research Agency (ANR). The PodNet project at KTH Stockholm and ETH Zurich. Some research efforts look at DTN for the Interplanetary Internet by examining use of the Bundle Protocol in space: The Saratoga project at the University of Surrey, which was the first to test the bundle protocol in space on the UK-DMC Disaster Monitoring Constellation satellite in 2008. NASA JPL's Deep Impact Networking (DINET) Experiment on board the Deep Impact/EPOXI spacecraft. BioServe Space Technologies, one of the first payload developers to adopt the DTN technology, has utilized their CGBA (Commercial Generic Bioprocessing Apparatus) payloads on board the ISS, which provide computational/communications platforms, to implement the DTN protocol. NASA, ESA Use Experimental Interplanetary Internet to Test Robot From International Space Station == See also == Logistical Networking Message switching Store and forward == References ==
Wikipedia/Delay-tolerant_networking
In computer networks, network traffic measurement is the process of measuring the amount and type of traffic on a particular network. This is especially important with regard to effective bandwidth management. == Techniques == Network performance could be measured using either active or passive techniques. Active techniques (e.g. Iperf) are more intrusive but are arguably more accurate. Passive techniques have less network overhead and hence can run in the background to be used to trigger network management actions. == Measurement studies == A range of studies have been performed from various points on the Internet. The AMS-IX (Amsterdam Internet Exchange) is one of the world's largest Internet exchanges. It produces a constant supply of simple Internet statistics. There are also numerous academic studies that have produced a range of measurement studies on frame size distributions, TCP/UDP ratios and TCP/IP options. == Tools == Various software tools are available to measure network traffic. Some tools measure traffic by sniffing and others use SNMP, WMI or other local agents to measure bandwidth use on individual machines and routers. However, the latter generally do not detect the type of traffic, nor do they work for machines which are not running the necessary agent software, such as rogue machines on the network, or machines for which no compatible agent is available. In the latter case, inline appliances are preferred. These would generally 'sit' between the LAN and the LAN's exit point, generally the WAN or Internet router, and all packets leaving and entering the network would go through them. In most cases the appliance would operate as a bridge on the network so that it is undetectable by users. 
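Whatever the capture method, passive measurement ultimately reduces to aggregating observed packet records into counters and rates. The sketch below uses synthetic packet records in place of a real capture source such as a sniffer or flow export.

```python
# A sketch of passive traffic accounting: aggregate packet records
# into per-host byte counts and an average bit rate. The records
# here are synthetic; a real deployment would obtain them from a
# packet sniffer or a flow-export mechanism.
from collections import defaultdict

def account(packets):
    """packets: iterable of (timestamp_s, src_ip, dst_ip, length_bytes).
    Returns (bytes per source IP, average bit/s over the trace)."""
    per_src = defaultdict(int)
    t_min = t_max = None
    total = 0
    for ts, src, _dst, length in packets:
        per_src[src] += length
        total += length
        t_min = ts if t_min is None else min(t_min, ts)
        t_max = ts if t_max is None else max(t_max, ts)
    duration = (t_max - t_min) if t_min is not None and t_max > t_min else 1.0
    return dict(per_src), 8 * total / duration

trace = [(0.0, "10.0.0.1", "10.0.0.9", 1500),
         (0.5, "10.0.0.2", "10.0.0.9", 400),
         (1.0, "10.0.0.1", "10.0.0.9", 1500)]
per_src, bps = account(trace)
print(per_src)  # → {'10.0.0.1': 3000, '10.0.0.2': 400}
print(bps)      # → 27200.0 (3400 bytes over 1 s)
```

Per-source byte counts like these are what drive the top-talker reports, quotas, and excessive-usage alarms described in the features list below.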
Some tools used for SNMP monitoring are Tivoli Netcool/Proviso by IBM, CA Performance Management by CA Technologies, and SolarWinds. === Functions and features === Measurement tools generally have these functions and features: a user interface (web, graphical, or console); real-time traffic graphs; reporting of network activity against pre-configured traffic matching rules, showing local IP address, remote IP address, port number or protocol, and logged-in user name; bandwidth quotas; support for traffic shaping or rate limiting (overlapping with network traffic control); support for website blocking and content filtering; and alarms to notify the administrator of excessive usage (by IP address or in total). == See also == IP Flow Information Export and NetFlow Measuring network throughput Network management Network monitoring Network scheduler Network simulation Packet sniffer Performance management Token bucket == References ==
Wikipedia/Network_traffic_measurement
Pollard's rho algorithm for logarithms is an algorithm introduced by John Pollard in 1978 to solve the discrete logarithm problem, analogous to Pollard's rho algorithm to solve the integer factorization problem. The goal is to compute γ such that α^γ = β, where β belongs to a cyclic group G generated by α. The algorithm computes integers a, b, A, and B such that α^a β^b = α^A β^B. If the underlying group is cyclic of order n, then by substituting β = α^γ and noting that two powers are equal if and only if the exponents are equivalent modulo the order of the base, in this case modulo n, we get that γ is one of the solutions of the equation (B − b)γ ≡ (a − A) (mod n). Solutions to this equation are easily obtained using the extended Euclidean algorithm. To find the needed a, b, A, and B the algorithm uses Floyd's cycle-finding algorithm to find a cycle in the sequence x_i = α^(a_i) β^(b_i), where the function f : x_i ↦ x_(i+1) is assumed to be random-looking and thus is likely to enter into a loop of approximate length √(πn/8) after √(πn/8) steps. 
One way to define such a function is to use the following rules: partition G into three disjoint subsets S0, S1, and S2 of approximately equal size using a hash function. If x_i is in S0 then double both a and b; if x_i ∈ S1 then increment a; if x_i ∈ S2 then increment b. == Algorithm == Let G be a cyclic group of order n, and given α, β ∈ G and a partition G = S0 ∪ S1 ∪ S2, let f : G → G be the map

f(x) = βx if x ∈ S0;  x² if x ∈ S1;  αx if x ∈ S2

and define maps g : G × Z → Z and h : G × Z → Z by

g(x, k) = k if x ∈ S0;  2k (mod n) if x ∈ S1;  k + 1 (mod n) if x ∈ S2
h(x, k) = k + 1 (mod n) if x ∈ S0;  2k (mod n) if x ∈ S1;  k if x ∈ S2

input: a: a generator of G; b: an element of G
output: an integer x such that a^x = b, or failure

Initialise i ← 0, a0 ← 0, b0 ← 0, x0 ← 1 ∈ G
loop
  i ← i + 1
  x_i ← f(x_(i−1)), a_i ← g(x_(i−1), a_(i−1)), b_i ← h(x_(i−1), b_(i−1))
  x_(2i−1) ← f(x_(2i−2)), a_(2i−1) ← g(x_(2i−2), a_(2i−2)), b_(2i−1) ← h(x_(2i−2), b_(2i−2))
  x_(2i) ← f(x_(2i−1)), a_(2i) ← g(x_(2i−1), a_(2i−1)), b_(2i) ← h(x_(2i−1), b_(2i−1))
while x_i ≠ x_(2i)
r ← b_i − b_(2i)
if r = 0 return failure
return r⁻¹(a_(2i) − a_i) mod n

== Example == Consider, for example, the group generated by 2 modulo N = 1019 (the order of the group is n = 1018; 2 generates the group of units modulo 1019). Taking β = 5 and partitioning by the residue of x modulo 3, a straightforward implementation produces the following results (edited):

 i     x    a    b     X    A    B
----------------------------------
 1     2    1    0    10    1    1
 2    10    1    1   100    2    2
 3    20    2    1  1000    3    3
 4   100    2    2   425    8    6
 5   200    3    2   436   16   14
 6  1000    3    3   284   17   15
 7   981    4    3   986   17   17
 8   425    8    6   194   17   19
..................................
48   224  680  376    86  299  412
49   101  680  377   860  300  413
50   505  680  378   101  300  415
51  1010  681  378  1010  301  416

That is, 2^681 · 5^378 = 1010 = 2^301 · 5^416 (mod 1019), and so (416 − 378)γ ≡ 681 − 301 (mod 1018), for which γ1 = 10 is a solution as expected. As n = 1018 is not prime, there is another solution γ2 = 519, for which 2^519 = 1014 = −5 (mod 1019) holds. == Complexity == The running time is approximately O(√n). If used together with the Pohlig–Hellman algorithm, the running time of the combined algorithm is O(√p), where p is the largest prime factor of n. == References == Pollard, J. M. (1978). "Monte Carlo methods for index computation (mod p)". Mathematics of Computation. 32 (143): 918–924. doi:10.2307/2006496. JSTOR 2006496. Menezes, Alfred J.; van Oorschot, Paul C.; Vanstone, Scott A. (2001). "Chapter 3" (PDF). Handbook of Applied Cryptography.
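The worked example above can be reproduced with a short program. In the sketch below (Python), the partition S0/S1/S2 is chosen by the residue of x modulo 3; that choice is an inference from the printed table rather than something stated explicitly in the text.

```python
# A sketch reproducing the worked example: alpha = 2, beta = 5,
# N = 1019 (prime modulus), group order n = 1018, using Floyd's
# cycle-finding. The partition is by x mod 3 (inferred from the table).

def step(x, a, b, alpha, beta, N, n):
    """One application of f, updating the exponents a, b alongside x."""
    if x % 3 == 0:   # squaring branch: doubles both exponents
        return x * x % N, a * 2 % n, b * 2 % n
    if x % 3 == 1:   # multiply by alpha: increments a
        return x * alpha % N, (a + 1) % n, b
    return x * beta % N, a, (b + 1) % n   # multiply by beta: increments b

def pollard_rho_relation(alpha, beta, N, n):
    """Run the rho iteration until the tortoise and hare collide;
    return (a, b, A, B) with alpha^a beta^b = alpha^A beta^B (mod N),
    or None on the degenerate failure case b = B (mod n)."""
    x, a, b = 1, 0, 0        # tortoise
    X, A, B = 1, 0, 0        # hare: advances twice per iteration
    while True:
        x, a, b = step(x, a, b, alpha, beta, N, n)
        for _ in range(2):
            X, A, B = step(X, A, B, alpha, beta, N, n)
        if x == X:
            break
    if (b - B) % n == 0:
        return None          # failure: retry with a different start
    return a, b, A, B

a, b, A, B = pollard_rho_relation(2, 5, 1019, 1018)
print(a, b, A, B)  # → 681 378 301 416, matching the table's last row
```

The returned relation gives (B − b)γ ≡ a − A (mod 1018), i.e. 38γ ≡ 380, which γ = 10 solves; indeed 2^10 = 1024 ≡ 5 (mod 1019).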
Wikipedia/Pollard's_rho_algorithm_for_logarithms
In mathematics, the Sato–Tate conjecture is a statistical statement about the family of elliptic curves Ep obtained from an elliptic curve E over the rational numbers by reduction modulo almost all prime numbers p. Mikio Sato and John Tate independently posed the conjecture around 1960. If Np denotes the number of points on the elliptic curve Ep defined over the finite field with p elements, the conjecture gives an answer to the distribution of the second-order term for Np. By Hasse's theorem on elliptic curves, Np/p = 1 + O(1/√p) as p → ∞, and the point of the conjecture is to predict how the O-term varies. The original conjecture and its generalization to all totally real fields was proved by Laurent Clozel, Michael Harris, Nicholas Shepherd-Barron, and Richard Taylor under mild assumptions in 2008, and completed by Thomas Barnet-Lamb, David Geraghty, Harris, and Taylor in 2011. Several generalizations to other algebraic varieties and fields are open. == Statement == Let E be an elliptic curve defined over the rational numbers without complex multiplication. For a prime number p, define θp as the solution to the equation

p + 1 − Np = 2√p cos θp  (0 ≤ θp ≤ π).

Then, for every two real numbers α and β for which 0 ≤ α < β ≤ π,

lim_(N→∞) #{p ≤ N : α ≤ θp ≤ β} / #{p ≤ N} = (2/π) ∫_α^β sin²θ dθ = (1/π)(β − α + sin α cos α − sin β cos β).

== Details == By Hasse's theorem on elliptic curves, the ratio ((p + 1) − Np) / (2√p) = ap / (2√p) is between −1 and 1. Thus it can be expressed as cos θ for an angle θ; in geometric terms there are two eigenvalues accounting for the remainder and with the denominator as given they are complex conjugate and of absolute value 1. The Sato–Tate conjecture, when E does not have complex multiplication, states that the probability measure of θ is proportional to sin²θ dθ. This is due to Mikio Sato and John Tate (independently, and around 1960, published somewhat later). == Proof == In 2008, Clozel, Harris, Shepherd-Barron, and Taylor published a proof of the Sato–Tate conjecture for elliptic curves over totally real fields satisfying a certain condition (having multiplicative reduction at some prime) in a series of three joint papers. Further results are conditional on improved forms of the Arthur–Selberg trace formula. Harris has a conditional proof of a result for the product of two elliptic curves (not isogenous) following from such a hypothetical trace formula. 
In 2011, Barnet-Lamb, Geraghty, Harris, and Taylor proved a generalized version of the Sato–Tate conjecture for an arbitrary non-CM holomorphic modular form of weight greater than or equal to two, by improving the potential modularity results of previous papers. The prior issues involved with the trace formula were solved by Michael Harris and Sug Woo Shin. In 2015, Richard Taylor was awarded the Breakthrough Prize in Mathematics "for numerous breakthrough results in (...) the Sato–Tate conjecture." == Generalisations == There are generalisations, involving the distribution of Frobenius elements in Galois groups involved in the Galois representations on étale cohomology. In particular there is a conjectural theory for curves of genus n > 1. Under the random matrix model developed by Nick Katz and Peter Sarnak, there is a conjectural correspondence between (unitarized) characteristic polynomials of Frobenius elements and conjugacy classes in the compact Lie group USp(2n) = Sp(n). The Haar measure on USp(2n) then gives the conjectured distribution, and the classical case is USp(2) = SU(2). == Refinements == There are also more refined statements. The Lang–Trotter conjecture (1976) of Serge Lang and Hale Trotter states the asymptotic number of primes p with a given value of ap, the trace of Frobenius that appears in the formula. For the typical case (no complex multiplication, trace ≠ 0) their formula states that the number of p up to X is asymptotically c√X / log X with a specified constant c. Neal Koblitz (1988) provided detailed conjectures for the case of a prime number q of points on Ep, motivated by elliptic curve cryptography. In 1999, Chantal David and Francesco Pappalardi proved an averaged version of the Lang–Trotter conjecture. 
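The statement can be explored numerically. The sketch below brute-force counts points on the non-CM curve y² = x³ + x + 1 (chosen here purely as an example) over F_p for small primes of good reduction, recovers a_p = p + 1 − Np, and checks the Hasse bound that makes θ_p well defined.

```python
import math

def a_p(p, a=1, b=1):
    """Trace of Frobenius a_p = p + 1 - N_p for y^2 = x^3 + a*x + b
    over F_p, by brute-force point counting."""
    n_points = 1  # the point at infinity
    squares = {(y * y) % p for y in range(p)}
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        if rhs == 0:
            n_points += 1          # y = 0: one point
        elif rhs in squares:
            n_points += 2          # y = ±sqrt(rhs): two points
    return p + 1 - n_points

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

# The discriminant of y^2 = x^3 + x + 1 is -496 = -16 * 31, so the
# bad primes are 2 and 31; skip them.
thetas = []
for p in primes_up_to(200):
    if p in (2, 31):
        continue
    ap = a_p(p)
    assert abs(ap) <= 2 * math.sqrt(p)   # Hasse bound
    thetas.append(math.acos(ap / (2 * math.sqrt(p))))
print(len(thetas))  # → 44 good primes up to 200
```

With far more primes, a histogram of these θ_p values would be expected to approach the (2/π) sin²θ density of the conjecture; with 44 samples only the Hasse bound and the rough shape are visible.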
== See also == Wigner semicircle distribution == References == == External links == Report on Barry Mazur giving context Michael Harris notes, with statement (PDF) La Conjecture de Sato–Tate [d'après Clozel, Harris, Shepherd-Barron, Taylor], Bourbaki seminar June 2007 by Henri Carayol (PDF) Video introducing Elliptic curves and its relation to Sato-Tate conjecture, Imperial College London, 2014 (Last 15 minutes)
Wikipedia/Sato-Tate_conjecture
In computational number theory, the index calculus algorithm is a probabilistic algorithm for computing discrete logarithms. Dedicated to the discrete logarithm in ( Z / q Z ) ∗ {\displaystyle (\mathbb {Z} /q\mathbb {Z} )^{*}} where q {\displaystyle q} is a prime, index calculus leads to a family of algorithms adapted to finite fields and to some families of elliptic curves. The algorithm collects relations among the discrete logarithms of small primes, computes them by a linear algebra procedure and finally expresses the desired discrete logarithm with respect to the discrete logarithms of small primes. == Description == Roughly speaking, the discrete log problem asks us to find an x such that g x ≡ h ( mod n ) {\displaystyle g^{x}\equiv h{\pmod {n}}} , where g, h, and the modulus n are given. The algorithm (described in detail below) applies to the group ( Z / q Z ) ∗ {\displaystyle (\mathbb {Z} /q\mathbb {Z} )^{*}} where q is prime. It requires a factor base as input. This factor base is usually chosen to be the number −1 and the first r primes starting with 2. From the point of view of efficiency, we want this factor base to be small, but in order to solve the discrete log for a large group we require the factor base to be (relatively) large. In practical implementations of the algorithm, those conflicting objectives are compromised one way or another. The algorithm is performed in three stages. The first two stages depend only on the generator g and prime modulus q, and find the discrete logarithms of a factor base of r small primes. The third stage finds the discrete log of the desired number h in terms of the discrete logs of the factor base. The first stage consists of searching for a set of r linearly independent relations between the factor base and power of the generator g. Each relation contributes one equation to a system of linear equations in r unknowns, namely the discrete logarithms of the r primes in the factor base. 
This stage is embarrassingly parallel and easy to divide among many computers. The second stage solves the system of linear equations to compute the discrete logs of the factor base. A system of hundreds of thousands or millions of equations is a significant computation requiring large amounts of memory, and it is not embarrassingly parallel, so a supercomputer is typically used. This was considered a minor step compared to the others for smaller discrete log computations. However, larger discrete logarithm records were made possible only by shifting the work away from the linear algebra and onto the sieve (i.e., increasing the number of equations while reducing the number of variables). The third stage searches for a power s of the generator g which, when multiplied by the argument h, may be factored in terms of the factor base: g^s h = (−1)^f0 2^f1 3^f2 ··· p_r^f_r. Finally, in an operation too simple to really be called a fourth stage, the results of the second and third stages can be rearranged by simple algebraic manipulation to work out the desired discrete logarithm x = f_0 log_g(−1) + f_1 log_g 2 + f_2 log_g 3 + ··· + f_r log_g p_r − s. The first and third stages are both embarrassingly parallel, and in fact the third stage does not depend on the results of the first two stages, so it may be done in parallel with them. The choice of the factor base size r is critical, and the details are too intricate to explain here. The larger the factor base, the easier it is to find relations in stage 1, and the easier it is to complete stage 3, but the more relations you need before you can proceed to stage 2, and the more difficult stage 2 is. The relative availability of computers suitable for the different types of computation required for stages 1 and 2 is also important.
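The first (relation-collection) stage described above can be sketched in Python. This toy version uses trial division as the smoothness test and parameters chosen only for illustration; it omits the −1 element of the factor base, the linear-independence filtering, and the later linear-algebra stage.

```python
from math import prod

def factor_over_base(value, base):
    """Exponent vector of value over the factor base, or None if value is not smooth."""
    exps = []
    for p in base:
        e = 0
        while value % p == 0:
            value //= p
            e += 1
        exps.append(e)
    return exps if value == 1 else None

def collect_relations(g, q, base, count):
    """Stage 1: find exponents k such that g^k mod q is smooth over the base."""
    relations = []
    k = 0
    while len(relations) < count:
        k += 1
        exps = factor_over_base(pow(g, k, q), base)
        if exps is not None:
            relations.append((k, exps))
    return relations

q, g = 227, 2                 # toy prime modulus and base element (illustrative choice)
base = [2, 3, 5, 7, 11]
rels = collect_relations(g, q, base, 6)
for k, exps in rels:
    # each relation is an identity g^k ≡ 2^e1 * 3^e2 * ... (mod q)
    assert pow(g, k, q) == prod(p ** e for p, e in zip(base, exps)) % q
```

Each collected relation contributes one linear equation in the unknown logarithms of the base primes; a real implementation would also discard linearly dependent relations before the matrix step.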
=== Applications in other groups === The lack of the notion of prime elements in the group of points on elliptic curves makes it impossible to find an efficient factor base to run the index calculus method as presented here in these groups. Therefore, this algorithm is incapable of solving discrete logarithms efficiently in elliptic curve groups. However: For special kinds of curves (so-called supersingular elliptic curves) there are specialized algorithms for solving the problem faster than with generic methods. While the use of these special curves can easily be avoided, in 2009 it was proven that for certain fields the discrete logarithm problem in the group of points on general elliptic curves over these fields can be solved faster than with generic methods. The algorithms are indeed adaptations of the index calculus method. == The algorithm == Input: Discrete logarithm generator g {\displaystyle g} , modulus q {\displaystyle q} and argument h {\displaystyle h} . Factor base { − 1 , 2 , 3 , 5 , 7 , 11 , … , p r } {\displaystyle \{-1,2,3,5,7,11,\ldots ,p_{r}\}} , of length r + 1 {\displaystyle r+1} . Output: x {\displaystyle x} such that g x = h mod q {\displaystyle g^{x}=h\mod q} . relations ← empty_list for k = 1 , 2 , … {\displaystyle k=1,2,\ldots } Using an integer factorization algorithm optimized for smooth numbers, try to factor g k mod q {\displaystyle g^{k}{\bmod {q}}} (Euclidean residue) using the factor base, i.e.
find e i {\displaystyle e_{i}} 's such that g k mod q = ( − 1 ) e 0 2 e 1 3 e 2 ⋯ p r e r {\displaystyle g^{k}{\bmod {q}}=(-1)^{e_{0}}2^{e_{1}}3^{e_{2}}\cdots p_{r}^{e_{r}}} Each time a factorization is found: Store k {\displaystyle k} and the computed e i {\displaystyle e_{i}} 's as a vector ( e 0 , e 1 , e 2 , … , e r , k ) {\displaystyle (e_{0},e_{1},e_{2},\ldots ,e_{r},k)} (this is called a relation) If this relation is linearly independent of the other relations: Add it to the list of relations If there are at least r + 1 {\displaystyle r+1} relations, exit loop Form a matrix whose rows are the relations Obtain the reduced echelon form of the matrix The first element in the last column is the discrete log of − 1 {\displaystyle -1} and the second element is the discrete log of 2 {\displaystyle 2} and so on for s = 1 , 2 , … {\displaystyle s=1,2,\ldots } Try to factor g s h mod q = ( − 1 ) f 0 2 f 1 3 f 2 ⋯ p r f r {\displaystyle g^{s}h{\bmod {q}}=(-1)^{f_{0}}2^{f_{1}}3^{f_{2}}\cdots p_{r}^{f_{r}}} over the factor base When a factorization is found: Output x = f 0 log g ⁡ ( − 1 ) + f 1 log g ⁡ 2 + ⋯ + f r log g ⁡ p r − s . {\displaystyle x=f_{0}\log _{g}(-1)+f_{1}\log _{g}2+\cdots +f_{r}\log _{g}p_{r}-s.} == Complexity == Assuming an optimal selection of the factor base, the expected running time (using L-notation) of the index-calculus algorithm can be stated as L n [ 1 / 2 , 2 + o ( 1 ) ] {\displaystyle L_{n}[1/2,{\sqrt {2}}+o(1)]} . == History == The basic idea of the algorithm is due to Western and Miller (1968), which ultimately relies on ideas from Kraitchik (1922). The first practical implementations followed the 1976 introduction of the Diffie-Hellman cryptosystem which relies on the discrete logarithm. Merkle's Stanford University dissertation (1979) was credited by Pohlig (1977) and Hellman and Reyneri (1983), who also made improvements to the implementation. Adleman optimized the algorithm and presented it in the present form.
== The Index Calculus family == Index Calculus inspired a large family of algorithms. In finite fields F q {\displaystyle \mathbb {F} _{q}} with q = p n {\displaystyle q=p^{n}} for some prime p {\displaystyle p} , the state-of-the-art algorithms are the Number Field Sieve for Discrete Logarithms, L q [ 1 / 3 , 64 / 9 3 ] {\textstyle L_{q}\left[1/3,{\sqrt[{3}]{64/9}}\,\right]} , when p {\displaystyle p} is large compared to q {\displaystyle q} , the function field sieve, L q [ 1 / 3 , 32 / 9 3 ] {\textstyle L_{q}\left[1/3,{\sqrt[{3}]{32/9}}\,\right]} , and Joux, L q [ 1 / 4 + ε , c ] {\displaystyle L_{q}\left[1/4+\varepsilon ,c\right]} for c > 0 {\displaystyle c>0} , when p {\displaystyle p} is small compared to q {\displaystyle q} and the Number Field Sieve in High Degree, L q [ 1 / 3 , c ] {\displaystyle L_{q}[1/3,c]} for c > 0 {\displaystyle c>0} when p {\displaystyle p} is middle-sized. Discrete logarithm in some families of elliptic curves can be solved in time L q [ 1 / 3 , c ] {\displaystyle L_{q}\left[1/3,c\right]} for c > 0 {\displaystyle c>0} , but the general case remains exponential. == External links == Discrete logarithms in finite fields and their cryptographic significance, by Andrew Odlyzko Discrete Logarithm Problem, by Chris Studholme, including the June 21, 2002 paper "The Discrete Log Problem". A. Menezes; P. van Oorschot; S. Vanstone (1997). Handbook of Applied Cryptography. CRC Press. pp. 107–109. ISBN 0-8493-8523-7. == Notes ==
Wikipedia/Index_calculus_algorithm
The Tonelli–Shanks algorithm (referred to by Shanks as the RESSOL algorithm) is used in modular arithmetic to solve for r in a congruence of the form r2 ≡ n (mod p), where p is a prime: that is, to find a square root of n modulo p. Tonelli–Shanks cannot be used for composite moduli: finding square roots modulo composite numbers is a computational problem equivalent to integer factorization. An equivalent, but slightly more redundant version of this algorithm was developed by Alberto Tonelli in 1891. The version discussed here was developed independently by Daniel Shanks in 1973, who explained: My tardiness in learning of these historical references was because I had lent Volume 1 of Dickson's History to a friend and it was never returned. According to Dickson, Tonelli's algorithm can take square roots of x modulo prime powers pλ apart from primes. == Core ideas == Given a non-zero n {\displaystyle n} and a prime p > 2 {\displaystyle p>2} (which will always be odd), Euler's criterion tells us that n {\displaystyle n} has a square root (i.e., n {\displaystyle n} is a quadratic residue) if and only if: n p − 1 2 ≡ 1 ( mod p ) {\displaystyle n^{\frac {p-1}{2}}\equiv 1{\pmod {p}}} . In contrast, if a number z {\displaystyle z} has no square root (is a non-residue), Euler's criterion tells us that: z p − 1 2 ≡ − 1 ( mod p ) {\displaystyle z^{\frac {p-1}{2}}\equiv -1{\pmod {p}}} . It is not hard to find such z {\displaystyle z} , because half of the integers between 1 and p − 1 {\displaystyle p-1} have this property. So we assume that we have access to such a non-residue. By (normally) dividing by 2 repeatedly, we can write p − 1 {\displaystyle p-1} as Q 2 S {\displaystyle Q2^{S}} , where Q {\displaystyle Q} is odd. Note that if we try R ≡ n Q + 1 2 ( mod p ) {\displaystyle R\equiv n^{\frac {Q+1}{2}}{\pmod {p}}} , then R 2 ≡ n Q + 1 = ( n ) ( n Q ) ( mod p ) {\displaystyle R^{2}\equiv n^{Q+1}=(n)(n^{Q}){\pmod {p}}} . 
If t ≡ n Q ≡ 1 ( mod p ) {\displaystyle t\equiv n^{Q}\equiv 1{\pmod {p}}} , then R {\displaystyle R} is a square root of n {\displaystyle n} . Otherwise, for M = S {\displaystyle M=S} , we have R {\displaystyle R} and t {\displaystyle t} satisfying: R 2 ≡ n t ( mod p ) {\displaystyle R^{2}\equiv nt{\pmod {p}}} ; and t {\displaystyle t} is a 2 M − 1 {\displaystyle 2^{M-1}} -th root of 1 (because t 2 M − 1 = t 2 S − 1 ≡ n Q 2 S − 1 = n p − 1 2 {\displaystyle t^{2^{M-1}}=t^{2^{S-1}}\equiv n^{Q2^{S-1}}=n^{\frac {p-1}{2}}} ). If, given a choice of R {\displaystyle R} and t {\displaystyle t} for a particular M {\displaystyle M} satisfying the above (where R {\displaystyle R} is not a square root of n {\displaystyle n} ), we can easily calculate another R {\displaystyle R} and t {\displaystyle t} for M − 1 {\displaystyle M-1} such that the above relations hold, then we can repeat this until t {\displaystyle t} becomes a 2 0 {\displaystyle 2^{0}} -th root of 1, i.e., t = 1 {\displaystyle t=1} . At that point R {\displaystyle R} is a square root of n {\displaystyle n} . We can check whether t {\displaystyle t} is a 2 M − 2 {\displaystyle 2^{M-2}} -th root of 1 by squaring it M − 2 {\displaystyle M-2} times and check whether it is 1. If it is, then we do not need to do anything, as the same choice of R {\displaystyle R} and t {\displaystyle t} works. But if it is not, t 2 M − 2 {\displaystyle t^{2^{M-2}}} must be -1 (because squaring it gives 1, and there can only be two square roots 1 and -1 of 1 modulo p {\displaystyle p} ). To find a new pair of R {\displaystyle R} and t {\displaystyle t} , we can multiply R {\displaystyle R} by a factor b {\displaystyle b} , to be determined. Then t {\displaystyle t} must be multiplied by a factor b 2 {\displaystyle b^{2}} to keep R 2 ≡ n t ( mod p ) {\displaystyle R^{2}\equiv nt{\pmod {p}}} . 
So, when t 2 M − 2 {\displaystyle t^{2^{M-2}}} is -1, we need to find a factor b 2 {\displaystyle b^{2}} so that t b 2 {\displaystyle tb^{2}} is a 2 M − 2 {\displaystyle 2^{M-2}} -th root of 1, or equivalently b 2 {\displaystyle b^{2}} is a 2 M − 2 {\displaystyle 2^{M-2}} -th root of -1. The trick here is to make use of z {\displaystyle z} , the known non-residue. Euler's criterion applied to z {\displaystyle z} , as shown above, says that z Q {\displaystyle z^{Q}} is a 2 S − 1 {\displaystyle 2^{S-1}} -th root of -1. So by squaring z Q {\displaystyle z^{Q}} repeatedly, we have access to a sequence of 2 i {\displaystyle 2^{i}} -th roots of -1. We can select the right one to serve as b {\displaystyle b} . With a little bit of variable maintenance and trivial case compression, the algorithm below emerges naturally. == The algorithm == Operations and comparisons on elements of the multiplicative group of integers modulo p Z / p Z {\displaystyle \mathbb {Z} /p\mathbb {Z} } are implicitly mod p. Inputs: p, a prime n, an element of Z / p Z {\displaystyle \mathbb {Z} /p\mathbb {Z} } such that solutions to the congruence r2 = n exist; when this is so we say that n is a quadratic residue mod p.
Outputs: r in Z / p Z {\displaystyle \mathbb {Z} /p\mathbb {Z} } such that r2 = n Algorithm: By factoring out powers of 2, find Q and S such that p − 1 = Q 2 S {\displaystyle p-1=Q2^{S}} with Q odd Search for a z in Z / p Z {\displaystyle \mathbb {Z} /p\mathbb {Z} } which is a quadratic non-residue Half of the elements in the set will be quadratic non-residues Candidates can be tested with Euler's criterion or by finding the Jacobi symbol Let M ← S c ← z Q t ← n Q R ← n Q + 1 2 {\displaystyle {\begin{aligned}M&\leftarrow S\\c&\leftarrow z^{Q}\\t&\leftarrow n^{Q}\\R&\leftarrow n^{\frac {Q+1}{2}}\end{aligned}}} Loop: If t = 0, return r = 0 If t = 1, return r = R Otherwise, use repeated squaring to find the least i, 0 < i < M, such that t 2 i = 1 {\displaystyle t^{2^{i}}=1} Let b ← c 2 M − i − 1 {\displaystyle b\leftarrow c^{2^{M-i-1}}} , and set M ← i c ← b 2 t ← t b 2 R ← R b {\displaystyle {\begin{aligned}M&\leftarrow i\\c&\leftarrow b^{2}\\t&\leftarrow tb^{2}\\R&\leftarrow Rb\end{aligned}}} Once you have solved the congruence with r the second solution is − r ( mod p ) {\displaystyle -r{\pmod {p}}} . If the least i such that t 2 i = 1 {\displaystyle t^{2^{i}}=1} is M, then no solution to the congruence exists, i.e. n is not a quadratic residue. This is most useful when p ≡ 1 (mod 4). For primes such that p ≡ 3 (mod 4), this problem has possible solutions r = ± n p + 1 4 ( mod p ) {\displaystyle r=\pm n^{\frac {p+1}{4}}{\pmod {p}}} . If these satisfy r 2 ≡ n ( mod p ) {\displaystyle r^{2}\equiv n{\pmod {p}}} , they are the only solutions. If not, r 2 ≡ − n ( mod p ) {\displaystyle r^{2}\equiv -n{\pmod {p}}} , n is a quadratic non-residue, and there are no solutions. 
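The algorithm above is short enough to transcribe directly. A Python sketch (assuming the caller only passes a quadratic residue n for an odd prime p, and finding the non-residue z by sequential search with Euler's criterion):

```python
def tonelli_shanks(n, p):
    """Return r with r*r ≡ n (mod p), for odd prime p and quadratic residue n."""
    # factor out powers of 2: p - 1 = Q * 2^S with Q odd
    Q, S = p - 1, 0
    while Q % 2 == 0:
        Q //= 2
        S += 1
    # find a quadratic non-residue z via Euler's criterion
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    M, c, t, R = S, pow(z, Q, p), pow(n, Q, p), pow(n, (Q + 1) // 2, p)
    while t != 1:
        # least i, 0 < i < M, with t^(2^i) = 1, found by repeated squaring
        i, sq = 0, t
        while sq != 1:
            sq = sq * sq % p
            i += 1
        b = pow(c, 1 << (M - i - 1), p)
        M, c, t, R = i, b * b % p, t * b * b % p, R * b % p
    return R
```

On the worked example later in the article (r² ≡ 5 mod 41) this sequential search also picks z = 3 and returns r = 28, matching the hand computation; the other root is then −r mod p.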
== Proof == We can show that at the start of each iteration of the loop the following loop invariants hold: c 2 M − 1 = − 1 {\displaystyle c^{2^{M-1}}=-1} t 2 M − 1 = 1 {\displaystyle t^{2^{M-1}}=1} R 2 = t n {\displaystyle R^{2}=tn} Initially: c 2 M − 1 = z Q 2 S − 1 = z p − 1 2 = − 1 {\displaystyle c^{2^{M-1}}=z^{Q2^{S-1}}=z^{\frac {p-1}{2}}=-1} (since z is a quadratic nonresidue, per Euler's criterion) t 2 M − 1 = n Q 2 S − 1 = n p − 1 2 = 1 {\displaystyle t^{2^{M-1}}=n^{Q2^{S-1}}=n^{\frac {p-1}{2}}=1} (since n is a quadratic residue) R 2 = n Q + 1 = t n {\displaystyle R^{2}=n^{Q+1}=tn} At each iteration, with M' , c' , t' , R' the new values replacing M, c, t, R: c ′ 2 M ′ − 1 = ( b 2 ) 2 i − 1 = c 2 M − i 2 i − 1 = c 2 M − 1 = − 1 {\displaystyle c'^{2^{M'-1}}=(b^{2})^{2^{i-1}}=c^{2^{M-i}2^{i-1}}=c^{2^{M-1}}=-1} t ′ 2 M ′ − 1 = ( t b 2 ) 2 i − 1 = t 2 i − 1 b 2 i = − 1 ⋅ − 1 = 1 {\displaystyle t'^{2^{M'-1}}=(tb^{2})^{2^{i-1}}=t^{2^{i-1}}b^{2^{i}}=-1\cdot -1=1} t 2 i − 1 = − 1 {\displaystyle t^{2^{i-1}}=-1} since we have that t 2 i = 1 {\displaystyle t^{2^{i}}=1} but t 2 i − 1 ≠ 1 {\displaystyle t^{2^{i-1}}\neq 1} (i is the least value such that t 2 i = 1 {\displaystyle t^{2^{i}}=1} ) b 2 i = c 2 M − i − 1 2 i = c 2 M − 1 = − 1 {\displaystyle b^{2^{i}}=c^{2^{M-i-1}2^{i}}=c^{2^{M-1}}=-1} R ′ 2 = R 2 b 2 = t n b 2 = t ′ n {\displaystyle R'^{2}=R^{2}b^{2}=tnb^{2}=t'n} From t 2 M − 1 = 1 {\displaystyle t^{2^{M-1}}=1} and the test against t = 1 at the start of the loop, we see that we will always find an i in 0 < i < M such that t 2 i = 1 {\displaystyle t^{2^{i}}=1} . M is strictly smaller on each iteration, and thus the algorithm is guaranteed to halt. When we hit the condition t = 1 and halt, the last loop invariant implies that R2 = n. 
=== Order of t === We can alternately express the loop invariants using the order of the elements: ord ⁡ ( c ) = 2 M {\displaystyle \operatorname {ord} (c)=2^{M}} ord ⁡ ( t ) | 2 M − 1 {\displaystyle \operatorname {ord} (t)|2^{M-1}} R 2 = t n {\displaystyle R^{2}=tn} as before Each step of the algorithm moves t into a smaller subgroup by measuring the exact order of t and multiplying it by an element of the same order. == Example == Solving the congruence r2 ≡ 5 (mod 41). 41 is prime as required and 41 ≡ 1 (mod 4). 5 is a quadratic residue by Euler's criterion: 5 41 − 1 2 = 5 20 = 1 {\displaystyle 5^{\frac {41-1}{2}}=5^{20}=1} (as before, operations in ( Z / 41 Z ) × {\displaystyle (\mathbb {Z} /41\mathbb {Z} )^{\times }} are implicitly mod 41). p − 1 = 40 = 5 ⋅ 2 3 {\displaystyle p-1=40=5\cdot 2^{3}} so Q ← 5 {\displaystyle Q\leftarrow 5} , S ← 3 {\displaystyle S\leftarrow 3} Find a value for z: 2 41 − 1 2 = 1 {\displaystyle 2^{\frac {41-1}{2}}=1} , so 2 is a quadratic residue by Euler's criterion. 
3 41 − 1 2 = 40 = − 1 {\displaystyle 3^{\frac {41-1}{2}}=40=-1} , so 3 is a quadratic nonresidue: set z ← 3 {\displaystyle z\leftarrow 3} Set M ← S = 3 {\displaystyle M\leftarrow S=3} c ← z Q = 3 5 = 38 {\displaystyle c\leftarrow z^{Q}=3^{5}=38} t ← n Q = 5 5 = 9 {\displaystyle t\leftarrow n^{Q}=5^{5}=9} R ← n Q + 1 2 = 5 5 + 1 2 = 2 {\displaystyle R\leftarrow n^{\frac {Q+1}{2}}=5^{\frac {5+1}{2}}=2} Loop: First iteration: t ≠ 1 {\displaystyle t\neq 1} , so we're not finished t 2 1 = 40 {\displaystyle t^{2^{1}}=40} , t 2 2 = 1 {\displaystyle t^{2^{2}}=1} so i ← 2 {\displaystyle i\leftarrow 2} b ← c 2 M − i − 1 = 38 2 3 − 2 − 1 = 38 {\displaystyle b\leftarrow c^{2^{M-i-1}}=38^{2^{3-2-1}}=38} M ← i = 2 {\displaystyle M\leftarrow i=2} c ← b 2 = 38 2 = 9 {\displaystyle c\leftarrow b^{2}=38^{2}=9} t ← t b 2 = 9 ⋅ 9 = 40 {\displaystyle t\leftarrow tb^{2}=9\cdot 9=40} R ← R b = 2 ⋅ 38 = 35 {\displaystyle R\leftarrow Rb=2\cdot 38=35} Second iteration: t ≠ 1 {\displaystyle t\neq 1} , so we're still not finished t 2 1 = 1 {\displaystyle t^{2^{1}}=1} so i ← 1 {\displaystyle i\leftarrow 1} b ← c 2 M − i − 1 = 9 2 2 − 1 − 1 = 9 {\displaystyle b\leftarrow c^{2^{M-i-1}}=9^{2^{2-1-1}}=9} M ← i = 1 {\displaystyle M\leftarrow i=1} c ← b 2 = 9 2 = 40 {\displaystyle c\leftarrow b^{2}=9^{2}=40} t ← t b 2 = 40 ⋅ 40 = 1 {\displaystyle t\leftarrow tb^{2}=40\cdot 40=1} R ← R b = 35 ⋅ 9 = 28 {\displaystyle R\leftarrow Rb=35\cdot 9=28} Third iteration: t = 1 {\displaystyle t=1} , and we are finished; return r = R = 28 {\displaystyle r=R=28} Indeed, 282 ≡ 5 (mod 41) and (−28)2 ≡ 132 ≡ 5 (mod 41). So the algorithm yields the two solutions to our congruence. 
== Speed of the algorithm == The Tonelli–Shanks algorithm requires (on average over all possible input (quadratic residues and quadratic nonresidues)) 2 m + 2 k + S ( S − 1 ) 4 + 1 2 S − 1 − 9 {\displaystyle 2m+2k+{\frac {S(S-1)}{4}}+{\frac {1}{2^{S-1}}}-9} modular multiplications, where m {\displaystyle m} is the number of digits in the binary representation of p {\displaystyle p} and k {\displaystyle k} is the number of ones in the binary representation of p {\displaystyle p} . If the required quadratic nonresidue z {\displaystyle z} is to be found by checking if a randomly taken number y {\displaystyle y} is a quadratic nonresidue, it requires (on average) 2 {\displaystyle 2} computations of the Legendre symbol. The average of two computations of the Legendre symbol is explained as follows: y {\displaystyle y} is a quadratic residue with chance p + 1 2 p = 1 + 1 p 2 {\displaystyle {\tfrac {\tfrac {p+1}{2}}{p}}={\tfrac {1+{\tfrac {1}{p}}}{2}}} , which is smaller than 1 {\displaystyle 1} but ≥ 1 2 {\displaystyle \geq {\tfrac {1}{2}}} , so we will on average need to check whether a candidate y {\displaystyle y} is a quadratic residue two times. This shows essentially that the Tonelli–Shanks algorithm works very well if the modulus p {\displaystyle p} is random, that is, if S {\displaystyle S} is not particularly large with respect to the number of digits in the binary representation of p {\displaystyle p} . As written above, Cipolla's algorithm works better than Tonelli–Shanks if (and only if) S ( S − 1 ) > 8 m + 20 {\displaystyle S(S-1)>8m+20} . However, if one instead uses Sutherland's algorithm to perform the discrete logarithm computation in the 2-Sylow subgroup of F p ∗ {\displaystyle \mathbb {F} _{p}^{\ast }} , one may replace S ( S − 1 ) {\displaystyle S(S-1)} with an expression that is asymptotically bounded by O ( S log ⁡ S / log ⁡ log ⁡ S ) {\displaystyle O(S\log S/\log \log S)} .
Explicitly, one computes e {\displaystyle e} such that c e ≡ n Q {\displaystyle c^{e}\equiv n^{Q}} and then R ≡ c − e / 2 n ( Q + 1 ) / 2 {\displaystyle R\equiv c^{-e/2}n^{(Q+1)/2}} satisfies R 2 ≡ n {\displaystyle R^{2}\equiv n} (note that e {\displaystyle e} is a multiple of 2 because n {\displaystyle n} is a quadratic residue). The algorithm requires us to find a quadratic nonresidue z {\displaystyle z} . There is no known deterministic algorithm that runs in polynomial time for finding such a z {\displaystyle z} . However, if the generalized Riemann hypothesis is true, there exists a quadratic nonresidue z < 2 ln 2 ⁡ p {\displaystyle z<2\ln ^{2}{p}} , making it possible to check every z {\displaystyle z} up to that limit and find a suitable z {\displaystyle z} within polynomial time. Keep in mind, however, that this is a worst-case scenario; in general, z {\displaystyle z} is found in on average 2 trials as stated above. == Uses == The Tonelli–Shanks algorithm can (naturally) be used for any process in which square roots modulo a prime are necessary. For example, it can be used for finding points on elliptic curves. It is also useful for the computations in the Rabin cryptosystem and in the sieving step of the quadratic sieve. == Generalizations == Tonelli–Shanks can be generalized to any cyclic group (instead of ( Z / p Z ) × {\displaystyle (\mathbb {Z} /p\mathbb {Z} )^{\times }} ) and to kth roots for arbitrary integer k, in particular to taking the kth root of an element of a finite field. If many square-roots must be done in the same cyclic group and S is not too large, a table of square-roots of the elements of 2-power order can be prepared in advance and the algorithm simplified and sped up as follows. Factor out powers of 2 from p − 1, defining Q and S as: p − 1 = Q 2 S {\displaystyle p-1=Q2^{S}} with Q odd. 
Let R ← n Q + 1 2 , t ← n Q ≡ R 2 / n {\displaystyle R\leftarrow n^{\frac {Q+1}{2}},t\leftarrow n^{Q}\equiv R^{2}/n} Find b {\displaystyle b} from the table such that b 2 ≡ t {\displaystyle b^{2}\equiv t} and set R ≡ R / b {\displaystyle R\equiv R/b} return R. === Tonelli's algorithm will work on mod p λ {\displaystyle p^{\lambda }} === According to Dickson's "Theory of Numbers" A. Tonelli gave an explicit formula for the roots of x 2 = c ( mod p λ ) {\displaystyle x^{2}=c{\pmod {p^{\lambda }}}} The Dickson reference shows the following formula for the square root of x 2 mod p λ {\displaystyle x^{2}{\bmod {p^{\lambda }}}} : when p = 4 ⋅ 7 + 1 {\displaystyle p=4\cdot 7+1} , or s = 2 {\displaystyle s=2} (s must be 2 for this equation) and a = 7 {\displaystyle a=7} such that 29 = 2 2 ⋅ 7 + 1 {\displaystyle 29=2^{2}\cdot 7+1} for x 2 mod p λ ≡ c {\displaystyle x^{2}{\bmod {p^{\lambda }}}\equiv c} then x mod p λ ≡ ± ( c a + 3 ) β ⋅ c ( β + 1 ) / 2 {\displaystyle x{\bmod {p^{\lambda }}}\equiv \pm (c^{a}+3)^{\beta }\cdot c^{(\beta +1)/2}} where β ≡ a ⋅ p λ − 1 {\displaystyle \beta \equiv a\cdot p^{\lambda -1}} Noting that 23 2 mod 29 3 ≡ 529 {\displaystyle 23^{2}{\bmod {29^{3}}}\equiv 529} and noting that β = 7 ⋅ 29 2 {\displaystyle \beta =7\cdot 29^{2}} then ( 529 7 + 3 ) 7 ⋅ 29 2 ⋅ 529 ( 7 ⋅ 29 2 + 1 ) / 2 mod 29 3 ≡ 24366 ≡ − 23 {\displaystyle (529^{7}+3)^{7\cdot 29^{2}}\cdot 529^{(7\cdot 29^{2}+1)/2}{\bmod {29^{3}}}\equiv 24366\equiv -23} To take another example: 2333 2 mod 29 3 ≡ 4142 {\displaystyle 2333^{2}{\bmod {29^{3}}}\equiv 4142} and ( 4142 7 + 3 ) 7 ⋅ 29 2 ⋅ 4142 ( 7 ⋅ 29 2 + 1 ) / 2 mod 29 3 ≡ 2333 {\displaystyle (4142^{7}+3)^{7\cdot 29^{2}}\cdot 4142^{(7\cdot 29^{2}+1)/2}{\bmod {29^{3}}}\equiv 2333} Dickson also attributes the following equation to Tonelli: X mod p λ ≡ x p λ − 1 ⋅ c ( p λ − 2 p λ − 1 + 1 ) / 2 {\displaystyle X{\bmod {p^{\lambda }}}\equiv x^{p^{\lambda -1}}\cdot c^{(p^{\lambda }-2p^{\lambda -1}+1)/2}} where X 2 mod p λ ≡ c {\displaystyle
X^{2}{\bmod {p^{\lambda }}}\equiv c} and x 2 mod p ≡ c {\displaystyle x^{2}{\bmod {p}}\equiv c} ; Using p = 23 {\displaystyle p=23} and using the modulus of p 3 {\displaystyle p^{3}} the math follows: 1115 2 mod 23 3 = 2191 {\displaystyle 1115^{2}{\bmod {23^{3}}}=2191} First, find the modular square root mod p {\displaystyle p} which can be done by the regular Tonelli algorithm for one or the other roots: 1115 2 mod 23 ≡ 6 {\displaystyle 1115^{2}{\bmod {23}}\equiv 6} and thus 6 mod 23 ≡ 11 {\displaystyle {\sqrt {6}}{\bmod {23}}\equiv 11} And applying Tonelli's equation (see above): 11 23 2 ⋅ 2191 ( 23 3 − 2 ⋅ 23 2 + 1 ) / 2 mod 23 3 ≡ 1115 {\displaystyle 11^{23^{2}}\cdot 2191^{(23^{3}-2\cdot 23^{2}+1)/2}{\bmod {23^{3}}}\equiv 1115} Dickson's reference clearly shows that Tonelli's algorithm works on moduli of p λ {\displaystyle p^{\lambda }} . == Notes == == References == Ivan Niven; Herbert S. Zuckerman; Hugh L. Montgomery (1991). An Introduction to the Theory of Numbers (5th ed.). Wiley. pp. 110–115. ISBN 0-471-62546-9. Daniel Shanks. Five Number Theoretic Algorithms. Proceedings of the Second Manitoba Conference on Numerical Mathematics. Pp. 51–70. 1973. Alberto Tonelli, Bemerkung über die Auflösung quadratischer Congruenzen. Nachrichten von der Königlichen Gesellschaft der Wissenschaften und der Georg-Augusts-Universität zu Göttingen. Pp. 344–346. 1891. Gagan Tara Nanda - Mathematics 115: The RESSOL Algorithm Gonzalo Tornaria
Wikipedia/Tonelli–Shanks_algorithm
In computational number theory, Cornacchia's algorithm is an algorithm for solving the Diophantine equation x 2 + d y 2 = m {\displaystyle x^{2}+dy^{2}=m} , where 1 ≤ d < m {\displaystyle 1\leq d<m} and d and m are coprime. The algorithm was described in 1908 by Giuseppe Cornacchia. == The algorithm == First, find any solution to r 0 2 ≡ − d ( mod m ) {\displaystyle r_{0}^{2}\equiv -d{\pmod {m}}} (for example, with a modular square-root algorithm such as Tonelli–Shanks); if no such r 0 {\displaystyle r_{0}} exists, there can be no primitive solution to the original equation. Without loss of generality, we can assume that r0 ≤ ⁠m/2⁠ (if not, then replace r0 with m - r0, which will still be a root of -d). Then use the Euclidean algorithm to find r 1 ≡ m ( mod r 0 ) {\displaystyle r_{1}\equiv m{\pmod {r_{0}}}} , r 2 ≡ r 0 ( mod r 1 ) {\displaystyle r_{2}\equiv r_{0}{\pmod {r_{1}}}} and so on; stop when r k < m {\displaystyle r_{k}<{\sqrt {m}}} . If s = m − r k 2 d {\displaystyle s={\sqrt {\tfrac {m-r_{k}^{2}}{d}}}} is an integer, then the solution is x = r k , y = s {\displaystyle x=r_{k},y=s} ; otherwise try another root of -d until either a solution is found or all roots have been exhausted, in which case there is no primitive solution. To find non-primitive solutions (x, y) where gcd(x, y) = g ≠ 1, note that the existence of such a solution implies that g2 divides m (and equivalently, that if m is square-free, then all solutions are primitive). Thus the above algorithm can be used to search for a primitive solution (u, v) to u2 + dv2 = ⁠m/g2⁠. If such a solution is found, then (gu, gv) will be a solution to the original equation. == Example == Solve the equation x 2 + 6 y 2 = 103 {\displaystyle x^{2}+6y^{2}=103} . A square root of −6 (mod 103) is 32, and 103 ≡ 7 (mod 32); since 7 2 < 103 {\displaystyle 7^{2}<103} and 103 − 7 2 6 = 3 {\displaystyle {\sqrt {\tfrac {103-7^{2}}{6}}}=3} , there is a solution x = 7, y = 3. == References == == External links == Basilla, J. M. (2004).
"On the solutions of x 2 + d y 2 = m {\displaystyle x^{2}+dy^{2}=m} " (PDF). Proc. Japan Acad. 80(A): 40–41. doi:10.3792/pjaa.80.40.
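The whole procedure above can be sketched in Python. Here the square root of −d modulo m is found by brute-force search, which is only sensible for toy inputs like the worked example; a real implementation would use a modular square-root algorithm such as Tonelli–Shanks.

```python
from math import isqrt

def cornacchia(d, m):
    """Primitive solution (x, y) of x^2 + d*y^2 = m with 1 <= d < m, or None."""
    for r in range(1, m // 2 + 1):        # candidate roots of -d mod m with r <= m/2
        if (r * r + d) % m != 0:
            continue
        a, b = m, r
        while b * b >= m:                 # Euclidean remainder steps until r_k < sqrt(m)
            a, b = b, a % b
        rest = m - b * b
        if rest % d == 0:
            s = isqrt(rest // d)
            if d * s * s == rest:
                return b, s
    return None

print(cornacchia(6, 103))   # -> (7, 3), the article's example
```

For the example, the search finds the root r0 = 32 of −6 mod 103, the Euclidean steps stop at 7 < √103, and (103 − 7²)/6 = 9 is a perfect square, giving (7, 3).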
Wikipedia/Cornacchia's_algorithm
Pollard's rho algorithm is an algorithm for integer factorization. It was invented by John Pollard in 1975. It uses only a small amount of space, and its expected running time is proportional to the square root of the smallest prime factor of the composite number being factorized. == Core ideas == The algorithm is used to factorize a number n = p q {\displaystyle n=pq} , where p {\displaystyle p} is a non-trivial factor. A polynomial modulo n {\displaystyle n} , called g ( x ) {\displaystyle g(x)} (e.g., g ( x ) = ( x 2 + 1 ) mod n {\displaystyle g(x)=(x^{2}+1){\bmod {n}}} ), is used to generate a pseudorandom sequence. It is important to note that g ( x ) {\displaystyle g(x)} must be a polynomial. A starting value, say 2, is chosen, and the sequence continues as x 1 = g ( 2 ) {\displaystyle x_{1}=g(2)} , x 2 = g ( g ( 2 ) ) {\displaystyle x_{2}=g(g(2))} , x 3 = g ( g ( g ( 2 ) ) ) {\displaystyle x_{3}=g(g(g(2)))} , etc. The sequence is related to another sequence { x k mod p } {\displaystyle \{x_{k}{\bmod {p}}\}} . Since p {\displaystyle p} is not known beforehand, this sequence cannot be explicitly computed in the algorithm. Yet in it lies the core idea of the algorithm. Because the number of possible values for these sequences is finite, both the { x k } {\displaystyle \{x_{k}\}} sequence, which is mod n {\displaystyle n} , and { x k mod p } {\displaystyle \{x_{k}{\bmod {p}}\}} sequence will eventually repeat, even though these values are unknown. If the sequences were to behave like random numbers, the birthday paradox implies that the number of x k {\displaystyle x_{k}} before a repetition occurs would be expected to be O ( N ) {\displaystyle O({\sqrt {N}})} , where N {\displaystyle N} is the number of possible values. So the sequence { x k mod p } {\displaystyle \{x_{k}{\bmod {p}}\}} will likely repeat much earlier than the sequence { x k } {\displaystyle \{x_{k}\}} . 
When one has found a k 1 , k 2 {\displaystyle k_{1},k_{2}} such that x k 1 ≠ x k 2 {\displaystyle x_{k_{1}}\neq x_{k_{2}}} but x k 1 ≡ x k 2 mod p {\displaystyle x_{k_{1}}\equiv x_{k_{2}}{\bmod {p}}} , the number | x k 1 − x k 2 | {\displaystyle |x_{k_{1}}-x_{k_{2}}|} is a multiple of p {\displaystyle p} , so a non-trivial divisor has been found. Once a sequence has a repeated value, the sequence will cycle, because each value depends only on the one before it. This structure of eventual cycling gives rise to the name "rho algorithm", owing to similarity to the shape of the Greek letter ρ when the values x 1 mod p {\displaystyle x_{1}{\bmod {p}}} , x 2 mod p {\displaystyle x_{2}{\bmod {p}}} , etc. are represented as nodes in a directed graph. This is detected by Floyd's cycle-finding algorithm: two nodes i {\displaystyle i} and j {\displaystyle j} (i.e., x i {\displaystyle x_{i}} and x j {\displaystyle x_{j}} ) are kept. In each step, one moves to the next node in the sequence and the other moves forward by two nodes. After that, it is checked whether gcd ( x i − x j , n ) ≠ 1 {\displaystyle \gcd(x_{i}-x_{j},n)\neq 1} . If it is not 1, then this implies that there is a repetition in the { x k mod p } {\displaystyle \{x_{k}{\bmod {p}}\}} sequence (i.e. x i mod p = x j mod p ) {\displaystyle x_{i}{\bmod {p}}=x_{j}{\bmod {p}})} . This works because if the x i mod p {\displaystyle x_{i}{\bmod {p}}} is the same as x j mod p {\displaystyle x_{j}{\bmod {p}}} , the difference between x i {\displaystyle x_{i}} and x j {\displaystyle x_{j}} is necessarily a multiple of p {\displaystyle p} . Although this always happens eventually, the resulting greatest common divisor (GCD) is a divisor of n {\displaystyle n} other than 1. This may be n {\displaystyle n} itself, since the two sequences might repeat at the same time. In this (uncommon) case the algorithm fails, it can be repeated with a different parameter. 
== Algorithm == The algorithm takes as its inputs n, the integer to be factored; and g ( x ) {\displaystyle g(x)} , a polynomial in x computed modulo n. In the original algorithm, g ( x ) = ( x 2 − 1 ) mod n {\displaystyle g(x)=(x^{2}-1){\bmod {n}}} , but nowadays it is more common to use g ( x ) = ( x 2 + 1 ) mod n {\displaystyle g(x)=(x^{2}+1){\bmod {n}}} . The output is either a non-trivial factor of n, or failure. It performs the following steps: Pseudocode for Pollard's rho algorithm:

    x ← 2 // starting value
    y ← x
    d ← 1
    while d = 1:
        x ← g(x)
        y ← g(g(y))
        d ← gcd(|x - y|, n)
    if d = n:
        return failure
    else:
        return d

Here x and y correspond to x i {\displaystyle x_{i}} and x j {\displaystyle x_{j}} in the previous section. Note that this algorithm may fail to find a nontrivial factor even when n is composite. In that case, the method can be tried again, using a starting value of x other than 2 ( 0 ≤ x < n {\displaystyle 0\leq x<n} ) or a different g ( x ) {\displaystyle g(x)} , g ( x ) = ( x 2 + b ) mod n {\displaystyle g(x)=(x^{2}+b){\bmod {n}}} , with 1 ≤ b < n − 2 {\displaystyle 1\leq b<n-2} . == Example factorization == Let n = 8051 {\displaystyle n=8051} and g ( x ) = ( x 2 + 1 ) mod 8051 {\displaystyle g(x)=(x^{2}+1){\bmod {8051}}} . Now 97 is a non-trivial factor of 8051. Starting values other than x = y = 2 may give the cofactor (83) instead of 97. One extra iteration is shown above to make it clear that y moves twice as fast as x. Note that even after a repetition, the GCD can return to 1. == Variants == In 1980, Richard Brent published a faster variant of the rho algorithm. He used the same core ideas as Pollard but a different method of cycle detection, replacing Floyd's cycle-finding algorithm with the related Brent's cycle-finding method. CLRS gives a heuristic analysis and failure conditions (the trivial divisor n {\displaystyle n} is found). A further improvement was made by Pollard and Brent.
They observed that if gcd ( a , n ) > 1 {\displaystyle \gcd(a,n)>1} , then also gcd ( a b , n ) > 1 {\displaystyle \gcd(ab,n)>1} for any positive integer b {\displaystyle b} . In particular, instead of computing gcd ( | x − y | , n ) {\displaystyle \gcd(|x-y|,n)} at every step, it suffices to define z {\displaystyle z} as the product of 100 consecutive | x − y | {\displaystyle |x-y|} terms modulo n {\displaystyle n} , and then compute a single gcd ( z , n ) {\displaystyle \gcd(z,n)} . A major speed-up results, as 100 gcd steps are replaced with 99 multiplications modulo n {\displaystyle n} and a single gcd. Occasionally it may cause the algorithm to fail by introducing a repeated factor, for instance when n {\displaystyle n} is a square. But it then suffices to go back to the previous gcd term, where gcd ( z , n ) = 1 {\displaystyle \gcd(z,n)=1} , and use the regular ρ algorithm from there. == Application == The algorithm is very fast for numbers with small factors, but slower in cases where all factors are large. The ρ algorithm's most remarkable success was the 1980 factorization of the Fermat number F8 = 1238926361552897 × 93461639715357977769163558199606896584051237541638188580280321. The ρ algorithm was a good choice for F8 because the prime factor p = 1238926361552897 is much smaller than the other factor. The factorization took 2 hours on a UNIVAC 1100/42. == Example: factoring n = 10403 = 101 · 103 == The following table shows numbers produced by the algorithm, starting with x = 2 {\displaystyle x=2} and using the polynomial g ( x ) = ( x 2 + 1 ) mod 10403 {\displaystyle g(x)=(x^{2}+1){\bmod {10403}}} . The third and fourth columns of the table contain additional information not known by the algorithm. They are included to show how the algorithm works. The first repetition modulo 101 is 97, which occurs in step 17. The repetition is not detected until step 23, when x ≡ y ( mod 101 ) {\displaystyle x\equiv y{\pmod {101}}} .
This causes gcd ( x − y , n ) = gcd ( 2799 − 9970 , n ) {\displaystyle \gcd(x-y,n)=\gcd(2799-9970,n)} to be p = 101 {\displaystyle p=101} , and a factor is found. == Complexity == If the pseudorandom number x = g ( x ) {\displaystyle x=g(x)} occurring in the Pollard ρ algorithm were an actual random number, it would follow that success would be achieved half the time, by the birthday paradox in O ( p ) ≤ O ( n 1 / 4 ) {\displaystyle O({\sqrt {p}})\leq O(n^{1/4})} iterations. It is believed that the same analysis applies as well to the actual rho algorithm, but this is a heuristic claim, and rigorous analysis of the algorithm remains open. == See also == Pollard's rho algorithm for logarithms Pollard's kangaroo algorithm == Notes == == References == == Further reading == Bai, Shi; Brent, Richard P. (January 2008). "On the Efficiency of Pollard's Rho Method for Discrete Logarithms". Conferences in Research and Practice in Information Technology, Vol. 77. The Australasian Theory Symposium (CATS2008). Wollongong. pp. 125–131. Describes the improvements available from different iteration functions and cycle-finding algorithms. Katz, Jonathan; Lindell, Yehuda (2007). "Chapter 8". Introduction to Modern Cryptography. CRC Press. Samuel S. Wagstaff, Jr. (2013). The Joy of Factoring. Providence, RI: American Mathematical Society. pp. 135–138. ISBN 978-1-4704-1048-3. == External links == Comprehensive article on Pollard's Rho algorithm aimed at an introductory-level audience Weisstein, Eric W. "Pollard rho Factorization Method". MathWorld. Java Implementation About Pollard rho
Wikipedia/Pollard's_rho_algorithm
In number theory, Dixon's factorization method (also Dixon's random squares method or Dixon's algorithm) is a general-purpose integer factorization algorithm; it is the prototypical factor base method. Unlike other factor base methods, its run-time bound comes with a rigorous proof that does not rely on conjectures about the smoothness properties of the values taken by a polynomial. The algorithm was designed by John D. Dixon, a mathematician at Carleton University, and was published in 1981. == Basic idea == Dixon's method is based on finding a congruence of squares modulo the integer N that is to be factored. Fermat's factorization method finds such a congruence by selecting random or pseudo-random x values and hoping that the integer x2 mod N is a perfect square (in the integers): x 2 ≡ y 2 ( mod N ) , x ≢ ± y ( mod N ) . {\displaystyle x^{2}\equiv y^{2}\quad ({\hbox{mod }}N),\qquad x\not \equiv \pm y\quad ({\hbox{mod }}N).} For example, if N = 84923, then (starting at 292, the first number greater than √N, and counting up) one finds that 505² mod 84923 is 256, the square of 16. So (505 − 16)(505 + 16) ≡ 0 mod 84923. Computing the greatest common divisor of 505 − 16 and N using Euclid's algorithm gives 163, which is a factor of N. In practice, selecting random x values will take an impractically long time to find a congruence of squares, since there are only √N squares less than N. Dixon's method replaces the condition "is the square of an integer" with the much weaker one "has only small prime factors"; for example, there are 292 squares smaller than 84923; 662 numbers smaller than 84923 whose prime factors are only 2, 3, 5 or 7; and 4767 whose prime factors are all less than 30. (Such numbers are called B-smooth with respect to some bound B.)
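The arithmetic of this example is easy to check mechanically (a verification sketch only):

```python
from math import gcd

N = 84923
w = 505 * 505 % N
print(w)                  # 256, the square of 16
print(gcd(505 - 16, N))   # 163, a factor of N
print(gcd(505 + 16, N))   # 521, the cofactor (163 * 521 = 84923)
```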
If there are many numbers a 1 … a n {\displaystyle a_{1}\ldots a_{n}} whose squares can be factorized as a i 2 mod N = ∏ j = 1 m b j e i j {\displaystyle a_{i}^{2}\mod N=\prod _{j=1}^{m}b_{j}^{e_{ij}}} for a fixed set b 1 … b m {\displaystyle b_{1}\ldots b_{m}} of small primes, linear algebra modulo 2 on the matrix e i j {\displaystyle e_{ij}} will give a subset of the a i {\displaystyle a_{i}} whose squares combine to a product of small primes to an even power — that is, a subset of the a i {\displaystyle a_{i}} whose squares multiply to the square of a (hopefully different) number mod N. == Method == Suppose the composite number N is being factored. A bound B is chosen, and the factor base P is identified: the set of all primes less than or equal to B. Next, positive integers z are sought such that z2 mod N is B-smooth. Therefore, we can write, for suitable exponents ai, z 2 mod N = ∏ p i ∈ P p i a i {\displaystyle z^{2}{\text{ mod }}N=\prod _{p_{i}\in P}p_{i}^{a_{i}}} When enough of these relations have been generated (it is generally sufficient that the number of relations be a few more than the size of P), the methods of linear algebra, such as Gaussian elimination, can be used to multiply together these various relations in such a way that the exponents of the primes on the right-hand side are all even: z 1 2 z 2 2 ⋯ z k 2 ≡ ∏ p i ∈ P p i a i , 1 + a i , 2 + ⋯ + a i , k ( mod N ) ( where a i , 1 + a i , 2 + ⋯ + a i , k ≡ 0 ( mod 2 ) ) {\displaystyle {z_{1}^{2}z_{2}^{2}\cdots z_{k}^{2}\equiv \prod _{p_{i}\in P}p_{i}^{a_{i,1}+a_{i,2}+\cdots +a_{i,k}}\ {\pmod {N}}\quad ({\text{where }}a_{i,1}+a_{i,2}+\cdots +a_{i,k}\equiv 0{\pmod {2}})}} This yields a congruence of squares of the form a2 ≡ b2 (mod N), which can be turned into a factorization of N, N = gcd(a + b, N) × (N/gcd(a + b, N)). This factorization might turn out to be trivial (i.e.
N = N × 1), which can only happen if a ≡ ±b (mod N), in which case another try must be made with a different combination of relations; but if a nontrivial pair of factors of N is reached, the algorithm terminates. == Pseudocode == This section is taken directly from Dixon (1981). Dixon's algorithm Initialization. Let L be a list of integers in the range [1, n], and let P = {p1, ..., ph} be the list of the h primes ≤ v. Let B and Z be initially empty lists (Z will be indexed by B). Step 1. If L is empty, exit (algorithm unsuccessful). Otherwise, take the first term z from L, remove it from L, and proceed to Step 2. Step 2. Compute w as the least positive remainder of z2 mod n. Factor w as: w = w ′ ∏ i p i a i {\displaystyle w=w'\prod _{i}p_{i}^{a_{i}}} where w′ has no factor in P. If w′ = 1, proceed to Step 3; otherwise, return to Step 1. Step 3. Let a ← (a1, ..., ah). Add a to B and z to Z. If B has at most h elements, return to Step 1; otherwise, proceed to Step 4. Step 4. Find the first vector c in B that is linearly dependent (mod 2) on earlier vectors in B. Remove c from B and z c {\displaystyle z_{c}} from Z. Compute coefficients f b {\displaystyle f_{b}} such that: c ≡ ∑ b ∈ B f b b ( mod 2 ) {\displaystyle \mathbf {c} \equiv \sum _{b\in B}f_{b}\mathbf {b} {\pmod {2}}} Define: d = ( d 1 , … , d n ) ← 1 2 ( c + ∑ f b b ) {\displaystyle \mathbf {d} =(d_{1},\dots ,d_{n})\gets {\frac {1}{2}}\left(\mathbf {c} +\sum f_{b}\mathbf {b} \right)} Proceed to Step 5. Step 5. Compute: x ← z c ∏ b z b f b , y ← ∏ i p i d i {\displaystyle x\gets z_{c}\prod _{b}z_{b}^{f_{b}},\quad y\gets \prod _{i}p_{i}^{d_{i}}} so that: x 2 ≡ ∏ i p i 2 d i = y 2 mod n . {\displaystyle x^{2}\equiv \prod _{i}p_{i}^{2d_{i}}=y^{2}\mod n.} If x ≡ y {\displaystyle x\equiv y} or x ≡ − y ( mod n ) {\displaystyle x\equiv -y{\pmod {n}}} , return to Step 1. Otherwise, return: gcd ( n , x + y ) {\displaystyle \gcd(n,x+y)} which provides a nontrivial factor of n, and terminate successfully. 
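A compact Python rendering of the method is sketched below. It is an illustration, not Dixon's published procedure: the linear-algebra step is brute-forced by trying every subset of the collected relations, which is adequate only for a toy factor base such as the one used in the worked example that follows:

```python
from itertools import combinations
from math import gcd, isqrt

def smooth_exponents(w, primes):
    """Factor w over the factor base; return the exponent vector,
    or None if a cofactor outside the base remains."""
    if w == 0:
        return None
    exps = []
    for p in primes:
        e = 0
        while w % p == 0:
            w //= p
            e += 1
        exps.append(e)
    return exps if w == 1 else None

def dixon(n, primes=(2, 3, 5, 7)):
    relations = []  # pairs (z, exponent vector of z^2 mod n)
    for z in range(isqrt(n) + 1, n):  # z <= sqrt(n) gives only trivial squares
        exps = smooth_exponents(z * z % n, primes)
        if exps is None:
            continue
        relations.append((z, exps))
        # brute-force stand-in for Gaussian elimination over GF(2):
        # try every subset whose combined exponent vector is all even
        for r in range(1, len(relations) + 1):
            for subset in combinations(relations, r):
                total = [sum(col) for col in zip(*(e for _, e in subset))]
                if any(t % 2 for t in total):
                    continue
                x = 1
                for zi, _ in subset:
                    x = x * zi % n
                y = 1
                for p, t in zip(primes, total):
                    y = y * pow(p, t // 2, n) % n
                if x != y and x != (n - y) % n:
                    return gcd(x + y, n)  # non-trivial factor
    return None

d = dixon(84923)
print(d)  # 163 or 521
```

Real implementations replace the subset search with Gaussian elimination or block Lanczos over GF(2), as the number of relations grows with the factor base.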
== Step-by-step example: factorizing n = 84923 using Dixon's algorithm == This example is adapted from the LeetArxiv Substack, with credit to the original author. Initialization: Define a list of numbers L, ranging from 1 to 84923: L = { 1 , … , 84923 } {\displaystyle L=\{1,\dots ,84923\}} Define a value v, which is the smoothness bound: v = 7 {\displaystyle v=7} Define a list P containing all the prime numbers less than or equal to v: P = { 2 , 3 , 5 , 7 } {\displaystyle P=\{2,3,5,7\}} Define B and Z, two empty lists. B is a list of powers, while Z is a list of accepted integers: B = [ ] {\displaystyle B=[]} Z = [ ] {\displaystyle Z=[]} Step 1: Iterating z {\displaystyle z} values Write a for loop that indexes the list L {\displaystyle L} . Each element in L {\displaystyle L} is indexed as z {\displaystyle z} . The for loop exits at the end of the list. Step 2: Computing z 2 mod n {\displaystyle z^{2}\mod n} and v-smooth Prime Factorization To proceed, compute z 2 mod 84923 {\displaystyle z^{2}\mod 84923} for each z, then express the result as a prime factorization. 1 2 mod 84923 ≡ 1 mod 84923 = 2 0 ⋅ 3 0 ⋅ 5 0 ⋅ 7 0 mod 84923 {\displaystyle 1^{2}\mod 84923\equiv 1\mod 84923=2^{0}\cdot 3^{0}\cdot 5^{0}\cdot 7^{0}\mod 84923} ⋮ {\displaystyle \vdots } 513 2 mod 84923 = 8400 mod 84923 = 2 4 ⋅ 3 1 ⋅ 5 2 ⋅ 7 1 mod 84923 {\displaystyle 513^{2}\mod 84923=8400\mod 84923=2^{4}\cdot 3^{1}\cdot 5^{2}\cdot 7^{1}\mod 84923} ⋮ {\displaystyle \vdots } 537 2 mod 84923 = 33600 mod 84923 = 2 6 ⋅ 3 1 ⋅ 5 2 ⋅ 7 1 mod 84923 {\displaystyle 537^{2}\mod 84923=33600\mod 84923=2^{6}\cdot 3^{1}\cdot 5^{2}\cdot 7^{1}\mod 84923} 538 2 mod 84923 = 34675 mod 84923 = 5 2 ⋅ 19 1 ⋅ 73 1 mod 84923 {\displaystyle 538^{2}\mod 84923=34675\mod 84923=5^{2}\cdot 19^{1}\cdot 73^{1}\mod 84923} This step continues for all values of z in the range.
Step 3: If z 2 mod 84923 {\displaystyle z^{2}\mod 84923} is 7-smooth, then append its powers to list B {\displaystyle B} and append z {\displaystyle z} to list Z {\displaystyle Z} . Z = { 1 , 513 , 537 } {\displaystyle Z=\{1,513,537\}} B = { [ 0 , 0 , 0 , 0 ] , [ 4 , 1 , 2 , 1 ] , [ 6 , 1 , 2 , 1 ] } {\displaystyle B=\{[0,0,0,0],[4,1,2,1],[6,1,2,1]\}} Step 4: This step is split into two parts. Part 1: Find B {\displaystyle B} modulo 2. B = ( 0 0 0 0 4 1 2 1 6 1 2 1 ) mod 2 ≡ B = ( 0 0 0 0 0 1 0 1 0 1 0 1 ) {\displaystyle B={\begin{pmatrix}0&0&0&0\\4&1&2&1\\6&1&2&1\end{pmatrix}}\mod 2\equiv B={\begin{pmatrix}0&0&0&0\\0&1&0&1\\0&1&0&1\end{pmatrix}}} Part 2: Check if any row combinations of B {\displaystyle B} sum to even numbers. For example, summing Row 2 {\displaystyle 2} and Row 3 {\displaystyle 3} gives us a vector of even numbers. R 2 = { 0 , 1 , 0 , 1 } {\displaystyle R_{2}=\{0,1,0,1\}} and R 3 = { 0 , 1 , 0 , 1 } {\displaystyle R_{3}=\{0,1,0,1\}} then R 2 + R 3 = { 0 , 1 , 0 , 1 } + { 0 , 1 , 0 , 1 } {\displaystyle R_{2}+R_{3}=\{0,1,0,1\}+\{0,1,0,1\}} R 2 + R 3 = { 0 , 2 , 0 , 2 } {\displaystyle R_{2}+R_{3}=\{0,2,0,2\}} . Step 5: This step is split into four parts. Part 1. (Finding x): Multiply the corresponding z {\displaystyle z} values for the rows found in Step 4. Then find the square root. This gives us x {\displaystyle x} . For Row 2, we had 2 4 ∗ 3 1 ∗ 5 2 ∗ 7 1 {\displaystyle 2^{4}*3^{1}*5^{2}*7^{1}} . For Row 3, we had 2 6 ∗ 3 1 ∗ 5 2 ∗ 7 1 {\displaystyle 2^{6}*3^{1}*5^{2}*7^{1}} .
Thus, we find x {\displaystyle x} : ( 513 ⋅ 537 ) 2 mod 84923 = y 2 where x 2 mod 84923 = ( 513 ⋅ 537 ) 2 mod 84923 thus x = ( 513 ⋅ 537 ) mod 84923 so x = 275481 mod 84923 Finally x = 20712 mod 84923 {\displaystyle {\begin{array}{ll}(513\cdot 537)^{2}\mod 84923=y^{2}\\\\{\text{where}}\quad x^{2}\mod 84923=(513\cdot 537)^{2}\mod 84923\\\\{\text{thus}}\quad x=(513\cdot 537)\mod 84923\\\\{\text{so}}\quad x=275481\mod 84923\\\\{\text{Finally}}\quad x=20712\mod 84923\\\end{array}}} Part 2. (Finding y): Multiply the corresponding smooth factorizations for the rows found in Step 4. Then find the square root. This gives us y {\displaystyle y} . y 2 = ( 2 4 ⋅ 3 1 ⋅ 5 2 ⋅ 7 1 ) × ( 2 6 ⋅ 3 1 ⋅ 5 2 ⋅ 7 1 ) By the multiplication law of exponents, y 2 = 2 ( 4 + 6 ) ⋅ 3 ( 1 + 1 ) ⋅ 5 ( 2 + 2 ) ⋅ 7 ( 1 + 1 ) Thus, y 2 = 2 10 ⋅ 3 2 ⋅ 5 4 ⋅ 7 2 Taking square roots on both sides gives y = 2 5 ⋅ 3 1 ⋅ 5 2 ⋅ 7 1 Therefore, y = 32 × 3 × 25 × 7 Finally, y = 16800 {\displaystyle {\begin{array}{ll}y^{2}=(2^{4}\cdot 3^{1}\cdot 5^{2}\cdot 7^{1})\times (2^{6}\cdot 3^{1}\cdot 5^{2}\cdot 7^{1})\\\\{\text{By the multiplication law of exponents,}}\\y^{2}=2^{(4+6)}\cdot 3^{(1+1)}\cdot 5^{(2+2)}\cdot 7^{(1+1)}\\\\{\text{Thus,}}\\y^{2}=2^{10}\cdot 3^{2}\cdot 5^{4}\cdot 7^{2}\\\\{\text{Taking square roots on both sides gives}}\\y=2^{5}\cdot 3^{1}\cdot 5^{2}\cdot 7^{1}\\\\{\text{Therefore,}}\\y=32\times 3\times 25\times 7\\\\{\text{Finally,}}\\y=16800\end{array}}} Part 3. (Find x + y and x - y) where x = 20712 and y = 16800. x + y = 20712 + 16800 = 37512 {\displaystyle x+y=20712+16800=37512} x − y = 20712 − 16800 = 3912 {\displaystyle x-y=20712-16800=3912} Part 4. Compute GCD(x+y, n) and GCD(x-y, n), where n = 84923, x+y = 37512 and x-y = 3912 gcd ( 37512 , 84923 ) = 521 gcd ( 3912 , 84923 ) = 163 {\displaystyle {\begin{array}{ll}\gcd(37512,84923)=521\\\gcd(3912,84923)=163\end{array}}} Quick check shows 84923 = 521 × 163 {\displaystyle 84923=521\times 163} .
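The numbers in this worked example can be confirmed mechanically (a verification sketch):

```python
from math import gcd

n = 84923
x = 513 * 537 % n        # 20712
y = 2**5 * 3 * 5**2 * 7  # 16800
print(x * x % n == y * y % n)  # True: a congruence of squares
print(gcd(x + y, n))           # 521
print(gcd(x - y, n))           # 163
print(521 * 163 == n)          # True
```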
== Optimizations == The quadratic sieve is an optimization of Dixon's method. It selects values of x close to the square root of N such that x2 modulo N is small, thereby greatly increasing the chance of obtaining a smooth number. Other ways to optimize Dixon's method include using a better algorithm to solve the matrix equation, taking advantage of the sparsity of the matrix: a number z cannot have more than log 2 ⁡ z {\displaystyle \log _{2}z} factors, so each row of the matrix is almost all zeros. In practice, the block Lanczos algorithm is often used. Also, the size of the factor base must be chosen carefully: if it is too small, it will be difficult to find numbers that factorize completely over it, and if it is too large, more relations will have to be collected. A more sophisticated analysis, using the approximation that a number has all its prime factors less than N 1 / a {\displaystyle N^{1/a}} with probability about a − a {\displaystyle a^{-a}} (an approximation to the Dickman–de Bruijn function), indicates that choosing too small a factor base is much worse than too large, and that the ideal factor base size is some power of exp ⁡ ( log ⁡ N log ⁡ log ⁡ N ) {\displaystyle \exp \left({\sqrt {\log N\log \log N}}\right)} . The optimal complexity of Dixon's method is O ( exp ⁡ ( 2 2 log ⁡ n log ⁡ log ⁡ n ) ) {\displaystyle O\left(\exp \left(2{\sqrt {2}}{\sqrt {\log n\log \log n}}\right)\right)} in big-O notation, or L n [ 1 / 2 , 2 2 ] {\displaystyle L_{n}[1/2,2{\sqrt {2}}]} in L-notation. == References ==
Wikipedia/Dixon's_factorization_method
The Korkine–Zolotarev (KZ) lattice basis reduction algorithm or Hermite–Korkine–Zolotarev (HKZ) algorithm is a lattice reduction algorithm. For lattices in R n {\displaystyle \mathbb {R} ^{n}} it yields a lattice basis with orthogonality defect at most n n {\displaystyle n^{n}} , unlike the 2 n 2 / 2 {\displaystyle 2^{n^{2}/2}} bound of the LLL reduction. KZ has exponential complexity versus the polynomial complexity of the LLL reduction algorithm; however, it may still be preferred for solving multiple closest vector problems (CVPs) in the same lattice, where it can be more efficient. == History == The definition of a KZ-reduced basis was given by Aleksandr Korkin and Yegor Ivanovich Zolotarev in 1877, a strengthened version of Hermite reduction. The first algorithm for constructing a KZ-reduced basis was given in 1983 by Kannan. The block Korkine–Zolotarev (BKZ) algorithm was introduced in 1987. == Definition == A KZ-reduced basis for a lattice is defined as follows: Given a basis B = { b 1 , b 2 , … , b n } , {\displaystyle \mathbf {B} =\{\mathbf {b} _{1},\mathbf {b} _{2},\dots ,\mathbf {b} _{n}\},} define its Gram–Schmidt process orthogonal basis B ∗ = { b 1 ∗ , b 2 ∗ , … , b n ∗ } , {\displaystyle \mathbf {B} ^{*}=\{\mathbf {b} _{1}^{*},\mathbf {b} _{2}^{*},\dots ,\mathbf {b} _{n}^{*}\},} and the Gram–Schmidt coefficients μ i , j = ⟨ b i , b j ∗ ⟩ ⟨ b j ∗ , b j ∗ ⟩ {\displaystyle \mu _{i,j}={\frac {\langle \mathbf {b} _{i},\mathbf {b} _{j}^{*}\rangle }{\langle \mathbf {b} _{j}^{*},\mathbf {b} _{j}^{*}\rangle }}} , for any 1 ≤ j < i ≤ n {\displaystyle 1\leq j<i\leq n} .
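The Gram–Schmidt data just defined can be computed directly. The sketch below (plain Python written for this definition; the function names are choices made here) returns the orthogonalized vectors and the coefficients μ_{i,j}, which is enough to test the size-reduction condition |μ_{i,j}| ≤ 1/2:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Return the Gram-Schmidt vectors b*_i and the coefficients mu[i][j]
    used in the KZ-reduction definition."""
    ortho, mu = [], []
    for i, b in enumerate(basis):
        mu.append([0.0] * len(basis))
        v = [float(t) for t in b]
        for j in range(i):
            mu[i][j] = dot(b, ortho[j]) / dot(ortho[j], ortho[j])
            v = [vi - mu[i][j] * oj for vi, oj in zip(v, ortho[j])]
        ortho.append(v)
    return ortho, mu

def is_size_reduced(basis):
    """Check the condition |mu_{i,j}| <= 1/2 for all j < i."""
    _, mu = gram_schmidt(basis)
    return all(abs(mu[i][j]) <= 0.5
               for i in range(len(basis)) for j in range(i))

print(is_size_reduced([(1, 0), (1, 2)]))  # False: mu_{2,1} = 1
print(is_size_reduced([(1, 0), (0, 2)]))  # True
```

The first KZ condition (each projected b*_i is a shortest vector in the projected lattice) requires solving shortest vector problems and is not attempted here.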
Also define projection functions π i ( x ) = ∑ j ≥ i ⟨ x , b j ∗ ⟩ ⟨ b j ∗ , b j ∗ ⟩ b j ∗ {\displaystyle \pi _{i}(\mathbf {x} )=\sum _{j\geq i}{\frac {\langle \mathbf {x} ,\mathbf {b} _{j}^{*}\rangle }{\langle \mathbf {b} _{j}^{*},\mathbf {b} _{j}^{*}\rangle }}\mathbf {b} _{j}^{*}} which project x {\displaystyle \mathbf {x} } orthogonally onto the span of b i ∗ , ⋯ , b n ∗ {\displaystyle \mathbf {b} _{i}^{*},\cdots ,\mathbf {b} _{n}^{*}} . Then the basis B {\displaystyle B} is KZ-reduced if the following holds: b i ∗ {\displaystyle \mathbf {b} _{i}^{*}} is the shortest nonzero vector in π i ( L ( B ) ) {\displaystyle \pi _{i}({\mathcal {L}}(\mathbf {B} ))} For all j < i {\displaystyle j<i} , | μ i , j | ≤ 1 / 2 {\displaystyle \left|\mu _{i,j}\right|\leq 1/2} Note that the first condition can be reformulated recursively as stating that b 1 {\displaystyle \mathbf {b} _{1}} is a shortest vector in the lattice, and { π 2 ( b 2 ) , ⋯ π 2 ( b n ) } {\displaystyle \{\pi _{2}(\mathbf {b} _{2}),\cdots \pi _{2}(\mathbf {b} _{n})\}} is a KZ-reduced basis for the lattice π 2 ( L ( B ) ) {\displaystyle \pi _{2}({\mathcal {L}}(\mathbf {B} ))} . Also note that the second condition guarantees that the reduced basis is length-reduced (adding an integer multiple of one basis vector to another will not decrease its length); the same condition is used in the LLL reduction. == Notes == == References == Korkine, A.; Zolotareff, G. (1877). "Sur les formes quadratiques positives". Mathematische Annalen. 11 (2): 242–292. doi:10.1007/BF01442667. S2CID 121803621. Lyu, Shanxiang; Ling, Cong (2017). "Boosted KZ and LLL Algorithms". IEEE Transactions on Signal Processing. 65 (18): 4784–4796. arXiv:1703.03303. Bibcode:2017ITSP...65.4784L. doi:10.1109/TSP.2017.2708020. S2CID 16832357. Wen, Jinming; Chang, Xiao-Wen (2018). "On the KZ Reduction". arXiv:1702.08152. {{cite journal}}: Cite journal requires |journal= (help) Micciancio, Daniele; Goldwasser, Shafi (2002). Complexity of Lattice Problems. 
pp. 131–136. doi:10.1007/978-1-4615-0897-7. ISBN 978-1-4613-5293-8. Zhang, Wen; Qiao, Sanzheng; Wei, Yimin (2012). "HKZ and Minkowski Reduction Algorithms for Lattice-Reduction-Aided MIMO Detection" (PDF). IEEE Transactions on Signal Processing. 60 (11): 5963. Bibcode:2012ITSP...60.5963Z. doi:10.1109/TSP.2012.2210708. S2CID 5962834.
Wikipedia/Korkine–Zolotarev_lattice_basis_reduction_algorithm
In mathematics, a conjecture is a conclusion or a proposition that is proffered on a tentative basis without proof. Some conjectures, such as the Riemann hypothesis or Fermat's conjecture (now a theorem, proven in 1995 by Andrew Wiles), have shaped much of mathematical history as new areas of mathematics are developed in order to prove them. == Resolution of conjectures == === Proof === Formal mathematics is based on provable truth. In mathematics, any number of cases supporting a universally quantified conjecture, no matter how large, is insufficient for establishing the conjecture's veracity, since a single counterexample could immediately bring down the conjecture. Mathematical journals sometimes publish the minor results of research teams having extended the search for a counterexample farther than previously done. For instance, the Collatz conjecture, which concerns whether or not certain sequences of integers terminate, has been tested for all integers up to 1.2 × 10¹² (1.2 trillion). However, the failure to find a counterexample after extensive search does not constitute a proof that the conjecture is true—because the conjecture might be false but with a very large minimal counterexample. Nevertheless, mathematicians often regard a conjecture as strongly supported by evidence even though not yet proved. That evidence may be of various kinds, such as verification of consequences of it or strong interconnections with known results. A conjecture is considered proven only when it has been shown that it is logically impossible for it to be false. There are various methods of doing so; see methods of mathematical proof for more details. One method of proof, applicable when there are only a finite number of cases that could lead to counterexamples, is known as "brute force": in this approach, all possible cases are considered and shown not to give counterexamples.
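For a conjecture like Collatz, the brute-force testing mentioned above is straightforward to sketch (illustrative code; the range checked here is tiny compared to the actual published searches):

```python
def collatz_reaches_one(n, max_steps=10**6):
    """Follow the Collatz iteration from n; return the number of steps
    taken to reach 1, or None if the step cap is hit."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
        if steps > max_steps:
            return None
    return steps

# no counterexample below 10000 (real searches go far beyond this)
assert all(collatz_reaches_one(n) is not None for n in range(1, 10000))
print(collatz_reaches_one(27))  # 111 steps, a famously long small case
```

As the text notes, no finite check of this kind can prove the conjecture; it can only fail to find a counterexample.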
On some occasions, the number of cases is quite large, in which case a brute-force proof may require as a practical matter the use of a computer algorithm to check all the cases. For example, the validity of the 1976 and 1997 brute-force proofs of the four color theorem by computer was initially doubted, but was eventually confirmed in 2005 by theorem-proving software. When a conjecture has been proven, it is no longer a conjecture but a theorem. Many important theorems were once conjectures, such as the Geometrization theorem (which resolved the Poincaré conjecture), Fermat's Last Theorem, and others. === Disproof === Conjectures disproven through counterexample are sometimes referred to as false conjectures (cf. the Pólya conjecture and Euler's sum of powers conjecture). In the case of the latter, the first counterexample found for the n=4 case involved numbers in the millions, although it has been subsequently found that the minimal counterexample is actually smaller. === Independent conjectures === Not every conjecture ends up being proven true or false. The continuum hypothesis, which tries to ascertain the relative cardinality of certain infinite sets, was eventually shown to be independent from the generally accepted set of Zermelo–Fraenkel axioms of set theory. It is therefore possible to adopt this statement, or its negation, as a new axiom in a consistent manner (much as Euclid's parallel postulate can be taken either as true or false in an axiomatic system for geometry). In this case, if a proof uses this statement, researchers will often look for a new proof that does not require the hypothesis (in the same way that it is desirable that statements in Euclidean geometry be proved using only the axioms of neutral geometry, i.e. without the parallel postulate).
The one major exception to this in practice is the axiom of choice, as the majority of researchers usually do not worry whether a result requires it—unless they are studying this axiom in particular. == Conditional proofs == Sometimes, a conjecture is called a hypothesis when it is used frequently and repeatedly as an assumption in proofs of other results. For example, the Riemann hypothesis is a conjecture from number theory that — amongst other things — makes predictions about the distribution of prime numbers. Few number theorists doubt that the Riemann hypothesis is true. In fact, in anticipation of its eventual proof, some have even proceeded to develop further proofs which are contingent on the truth of this conjecture. These are called conditional proofs: the conjectures assumed appear in the hypotheses of the theorem, for the time being. These "proofs", however, would fall apart if it turned out that the hypothesis was false, so there is considerable interest in verifying the truth or falsity of conjectures of this type. == Important examples == === Fermat's Last Theorem === In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} can satisfy the equation a n + b n = c n {\displaystyle a^{n}+b^{n}=c^{n}} for any integer value of n {\displaystyle n} greater than two. This theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of Arithmetica, where he claimed that he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles, and formally published in 1995, after 358 years of effort by mathematicians. The unsolved problem stimulated the development of algebraic number theory in the 19th century, and the proof of the modularity theorem in the 20th century. 
It is among the most notable theorems in the history of mathematics, and prior to its proof it was in the Guinness Book of World Records for "most difficult mathematical problems". === Four color theorem === In mathematics, the four color theorem, or the four color map theorem, states that given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colors are required to color the regions of the map—so that no two adjacent regions have the same color. Two regions are called adjacent if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico, which only share a point that also belongs to Arizona and Colorado, are not. Möbius mentioned the problem in his lectures as early as 1840. The conjecture was first proposed on October 23, 1852 when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century; however, proving that four colors suffice turned out to be significantly harder. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852. The four color theorem was ultimately proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proved using a computer. Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem (i.e., if they did appear, one could make a smaller counter-example). Appel and Haken used a special-purpose computer program to confirm that each of these maps had this property. 
Additionally, any map that could potentially be a counterexample must have a portion that looks like one of these 1,936 maps. Showing this with hundreds of pages of hand analysis, Appel and Haken concluded that no smallest counterexample exists because any such counterexample must contain, yet cannot contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by all mathematicians, because the computer-assisted proof was infeasible for a human to check by hand. However, the proof has since then gained wider acceptance, although doubts still remain. === Hauptvermutung === The Hauptvermutung (German for main conjecture) of geometric topology is the conjecture that any two triangulations of a triangulable space have a common refinement, a single triangulation that is a subdivision of both of them. It was originally formulated in 1908 by Steinitz and Tietze. This conjecture is now known to be false. The non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion. The manifold version is true in dimensions m ≤ 3. The cases m = 2 and 3 were proved by Tibor Radó and Edwin E. Moise in the 1920s and 1950s, respectively. === Weil conjectures === In mathematics, the Weil conjectures were some highly influential proposals by André Weil (1949) on the generating functions (known as local zeta-functions) derived from counting the number of points on algebraic varieties over finite fields. A variety V over a finite field with q elements has a finite number of rational points, as well as points over every finite field with qk elements containing that field. The generating function has coefficients derived from the numbers Nk of points over the (essentially unique) field with qk elements. Weil conjectured that such zeta-functions should be rational functions, should satisfy a form of functional equation, and should have their zeroes in restricted places.
The last two parts were quite consciously modeled on the Riemann zeta function and Riemann hypothesis. The rationality was proved by Dwork (1960), the functional equation by Grothendieck (1965), and the analogue of the Riemann hypothesis was proved by Deligne (1974). === Poincaré conjecture === In mathematics, the Poincaré conjecture is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. The conjecture states that: Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere. An equivalent form of the conjecture involves a coarser form of equivalence than homeomorphism called homotopy equivalence: if a 3-manifold is homotopy equivalent to the 3-sphere, then it is necessarily homeomorphic to it. Originally conjectured by Henri Poincaré in 1904, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time. After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attempt to solve the problem. Hamilton later introduced a modification of the standard Ricci flow, called Ricci flow with surgery to systematically excise singular regions as they develop, in a controlled way, but was unable to prove this method "converged" in three dimensions. Perelman completed this portion of the proof. Several teams of mathematicians have verified that Perelman's proof is correct. 
The Poincaré conjecture, before being proven, was one of the most important open questions in topology. === Riemann hypothesis === In mathematics, the Riemann hypothesis, proposed by Bernhard Riemann (1859), is a conjecture that the non-trivial zeros of the Riemann zeta function all have real part 1/2. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields. The Riemann hypothesis implies results about the distribution of prime numbers. Along with suitable generalizations, some mathematicians consider it the most important unresolved problem in pure mathematics. The Riemann hypothesis, along with the Goldbach conjecture, is part of Hilbert's eighth problem in David Hilbert's list of 23 unsolved problems; it is also one of the Clay Mathematics Institute Millennium Prize Problems. === P versus NP problem === The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer; it is widely conjectured that the answer is no. It was essentially first mentioned in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time. The precise statement of the P=NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" and is considered by many to be the most important open problem in the field. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution. 
=== Other conjectures === Goldbach's conjecture The twin prime conjecture The Collatz conjecture The Manin conjecture The Maldacena conjecture The Euler conjecture, proposed by Euler in the 18th century but for which counterexamples for a number of exponents (starting with n=4) were found beginning in the mid 20th century The Hardy-Littlewood conjectures are a pair of conjectures concerning the distribution of prime numbers, the first of which expands upon the aforementioned twin prime conjecture. Neither one has either been proven or disproven, but it has been proven that both cannot simultaneously be true (i.e., at least one must be false). It has not been proven which one is false, but it is widely believed that the first conjecture is true and the second one is false. The Langlands program is a far-reaching web of these ideas of 'unifying conjectures' that link different subfields of mathematics (e.g. between number theory and representation theory of Lie groups). Some of these conjectures have since been proved. == In other sciences == Karl Popper pioneered the use of the term "conjecture" in scientific philosophy. Conjecture is related to hypothesis, which in science refers to a testable conjecture. == See also == Bold hypothesis Futures studies Hypotheticals List of conjectures Ramanujan machine == References == === Works cited === Deligne, Pierre (1974), "La conjecture de Weil. I", Publications Mathématiques de l'IHÉS, 43 (43): 273–307, doi:10.1007/BF02684373, ISSN 1618-1913, MR 0340258, S2CID 123139343 Dwork, Bernard (1960), "On the rationality of the zeta function of an algebraic variety", American Journal of Mathematics, 82 (3), American Journal of Mathematics, Vol. 82, No. 3: 631–648, doi:10.2307/2372974, ISSN 0002-9327, JSTOR 2372974, MR 0140494 Grothendieck, Alexander (1995) [1965], "Formule de Lefschetz et rationalité des fonctions L", Séminaire Bourbaki, vol. 9, Paris: Société Mathématique de France, pp. 
41–55, MR 1608788 == External links == Media related to Conjectures at Wikimedia Commons Open Problem Garden Unsolved Problems web site
Wikipedia/Conjectures
The Karatsuba algorithm is a fast multiplication algorithm for integers. It was discovered by Anatoly Karatsuba in 1960 and published in 1962. It is a divide-and-conquer algorithm that reduces the multiplication of two n-digit numbers to three multiplications of n/2-digit numbers and, by repeating this reduction, to at most n log 2 ⁡ 3 ≈ n 1.58 {\displaystyle n^{\log _{2}3}\approx n^{1.58}} single-digit multiplications. It is therefore asymptotically faster than the traditional algorithm, which performs n 2 {\displaystyle n^{2}} single-digit products. The Karatsuba algorithm was the first multiplication algorithm asymptotically faster than the quadratic "grade school" algorithm. The Toom–Cook algorithm (1963) is a faster generalization of Karatsuba's method, and the Schönhage–Strassen algorithm (1971) is even faster, for sufficiently large n. == History == The standard procedure for multiplication of two n-digit numbers requires a number of elementary operations proportional to n 2 {\displaystyle n^{2}\,\!} , or O ( n 2 ) {\displaystyle O(n^{2})\,\!} in big-O notation. Andrey Kolmogorov conjectured that the traditional algorithm was asymptotically optimal, meaning that any algorithm for that task would require Ω ( n 2 ) {\displaystyle \Omega (n^{2})\,\!} elementary operations. In 1960, Kolmogorov organized a seminar on mathematical problems in cybernetics at the Moscow State University, where he stated the Ω ( n 2 ) {\displaystyle \Omega (n^{2})\,\!} conjecture and other problems in the complexity of computation. Within a week, Karatsuba, then a 23-year-old student, found an algorithm that multiplies two n-digit numbers in O ( n log 2 ⁡ 3 ) {\displaystyle O(n^{\log _{2}3})} elementary steps, thus disproving the conjecture. Kolmogorov was very excited about the discovery; he communicated it at the next meeting of the seminar, which was then terminated. 
Kolmogorov gave some lectures on the Karatsuba result at conferences all over the world (see, for example, "Proceedings of the International Congress of Mathematicians 1962", pp. 351–356, and also "6 Lectures delivered at the International Congress of Mathematicians in Stockholm, 1962") and published the method in 1962, in the Proceedings of the USSR Academy of Sciences. The article had been written by Kolmogorov and contained two results on multiplication, Karatsuba's algorithm and a separate result by Yuri Ofman; it listed "A. Karatsuba and Yu. Ofman" as the authors. Karatsuba only became aware of the paper when he received the reprints from the publisher. == Algorithm == === Basic step === The basic principle of Karatsuba's algorithm is divide-and-conquer, using a formula that allows one to compute the product of two large numbers x {\displaystyle x} and y {\displaystyle y} using three multiplications of smaller numbers, each with about half as many digits as x {\displaystyle x} or y {\displaystyle y} , plus some additions and digit shifts. This basic step is, in fact, a generalization of a similar complex multiplication algorithm, where the imaginary unit i is replaced by a power of the base. Let x {\displaystyle x} and y {\displaystyle y} be represented as n {\displaystyle n} -digit strings in some base B {\displaystyle B} . For any positive integer m {\displaystyle m} less than n {\displaystyle n} , one can write the two given numbers as x = x 1 B m + x 0 , {\displaystyle x=x_{1}B^{m}+x_{0},} y = y 1 B m + y 0 , {\displaystyle y=y_{1}B^{m}+y_{0},} where x 0 {\displaystyle x_{0}} and y 0 {\displaystyle y_{0}} are less than B m {\displaystyle B^{m}} . 
The product is then x y = ( x 1 B m + x 0 ) ( y 1 B m + y 0 ) = x 1 y 1 B 2 m + ( x 1 y 0 + x 0 y 1 ) B m + x 0 y 0 = z 2 B 2 m + z 1 B m + z 0 , {\displaystyle {\begin{aligned}xy&=(x_{1}B^{m}+x_{0})(y_{1}B^{m}+y_{0})\\&=x_{1}y_{1}B^{2m}+(x_{1}y_{0}+x_{0}y_{1})B^{m}+x_{0}y_{0}\\&=z_{2}B^{2m}+z_{1}B^{m}+z_{0},\\\end{aligned}}} where z 2 = x 1 y 1 , {\displaystyle z_{2}=x_{1}y_{1},} z 1 = x 1 y 0 + x 0 y 1 , {\displaystyle z_{1}=x_{1}y_{0}+x_{0}y_{1},} z 0 = x 0 y 0 . {\displaystyle z_{0}=x_{0}y_{0}.} These formulae require four multiplications and were known to Charles Babbage. Karatsuba observed that x y {\displaystyle xy} can be computed in only three multiplications, at the cost of a few extra additions. With z 0 {\displaystyle z_{0}} and z 2 {\displaystyle z_{2}} as before and z 3 = ( x 1 + x 0 ) ( y 1 + y 0 ) , {\displaystyle z_{3}=(x_{1}+x_{0})(y_{1}+y_{0}),} one can observe that z 1 = x 1 y 0 + x 0 y 1 = ( x 1 + x 0 ) ( y 0 + y 1 ) − x 1 y 1 − x 0 y 0 = z 3 − z 2 − z 0 . {\displaystyle {\begin{aligned}z_{1}&=x_{1}y_{0}+x_{0}y_{1}\\&=(x_{1}+x_{0})(y_{0}+y_{1})-x_{1}y_{1}-x_{0}y_{0}\\&=z_{3}-z_{2}-z_{0}.\\\end{aligned}}} Thus only three multiplications are required for computing z 0 , z 1 {\displaystyle z_{0},z_{1}} and z 2 . {\displaystyle z_{2}.} === Example === To compute the product of 12345 and 6789, where B = 10, choose m = 3. We use m right shifts for decomposing the input operands using the resulting base (B^m = 1000), as:

12345 = 12 · 1000 + 345
6789 = 6 · 1000 + 789

Only three multiplications, which operate on smaller integers, are used to compute three partial results:

z2 = 12 × 6 = 72
z0 = 345 × 789 = 272205
z1 = (12 + 345) × (6 + 789) − z2 − z0 = 357 × 795 − 72 − 272205 = 283815 − 72 − 272205 = 11538

We get the result by just adding these three partial results, shifted accordingly (and then taking carries into account by decomposing these three inputs in base 1000 as for the input operands): result = z2 · (B^m)^2 + z1 · (B^m)^1 + z0 · (B^m)^0, i.e.
result = 72 · 1000^2 + 11538 · 1000 + 272205 = 83810205. Note that the intermediate third multiplication operates on an input domain which is less than two times larger than for the two first multiplications, its output domain is less than four times larger, and base-1000 carries computed from the first two multiplications must be taken into account when computing these two subtractions. === Recursive application === If n is four or more, the three multiplications in Karatsuba's basic step involve operands with fewer than n digits. Therefore, those products can be computed by recursive calls of the Karatsuba algorithm. The recursion can be applied until the numbers are so small that they can (or must) be computed directly. In a computer with a full 32-bit by 32-bit multiplier, for example, one could choose B = 2^31 and store each digit as a separate 32-bit binary word. Then the sums x1 + x0 and y1 + y0 will not need an extra binary word for storing the carry-over digit (as in carry-save adder), and the Karatsuba recursion can be applied until the numbers to multiply are only one digit long. === Time complexity analysis === Karatsuba's basic step works for any base B and any m, but the recursive algorithm is most efficient when m is equal to n/2, rounded up. In particular, if n is 2^k, for some integer k, and the recursion stops only when n is 1, then the number of single-digit multiplications is 3^k, which is n^c where c = log_2 3. Since one can extend any inputs with zero digits until their length is a power of two, it follows that the number of elementary multiplications, for any n, is at most 3 ⌈ log 2 ⁡ n ⌉ ≤ 3 n log 2 ⁡ 3 {\displaystyle 3^{\lceil \log _{2}n\rceil }\leq 3n^{\log _{2}3}\,\!} . Since the additions, subtractions, and digit shifts (multiplications by powers of B) in Karatsuba's basic step take time proportional to n, their cost becomes negligible as n increases.
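The recursive application described above can be sketched in Python. This is an illustrative implementation with a single-digit base case; production code would switch to the hardware multiplier below a much larger threshold:

```python
def karatsuba(x, y):
    """Multiply two nonnegative integers using Karatsuba's
    three-multiplication recursion (base B = 10, m = half the
    shorter digit count)."""
    if x < 10 or y < 10:              # base case: a single-digit operand
        return x * y
    m = min(len(str(x)), len(str(y))) // 2
    b_m = 10 ** m                     # the shifted base B^m
    x1, x0 = divmod(x, b_m)           # x = x1 * B^m + x0
    y1, y0 = divmod(y, b_m)           # y = y1 * B^m + y0
    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    z3 = karatsuba(x1 + x0, y1 + y0)  # the third (and last) multiplication
    z1 = z3 - z2 - z0
    return z2 * b_m ** 2 + z1 * b_m + z0
```

For the worked example, `karatsuba(12345, 6789)` returns 83810205, matching the longhand result.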
More precisely, if T(n) denotes the total number of elementary operations that the algorithm performs when multiplying two n-digit numbers, then T ( n ) = 3 T ( ⌈ n / 2 ⌉ ) + c n + d {\displaystyle T(n)=3T(\lceil n/2\rceil )+cn+d} for some constants c and d. For this recurrence relation, the master theorem for divide-and-conquer recurrences gives the asymptotic bound T ( n ) = Θ ( n log 2 ⁡ 3 ) {\displaystyle T(n)=\Theta (n^{\log _{2}3})\,\!} . It follows that, for sufficiently large n, Karatsuba's algorithm will perform fewer shifts and single-digit additions than longhand multiplication, even though its basic step uses more additions and shifts than the straightforward formula. For small values of n, however, the extra shift and add operations may make it run slower than the longhand method. == Implementation == Here is the pseudocode for this algorithm, using numbers represented in base ten. For the binary representation of integers, it suffices to replace everywhere 10 by 2. The second argument of the split_at function specifies the number of digits to extract from the right: for example, split_at("12345", 3) will extract the 3 final digits, giving: high="12", low="345".

function karatsuba(num1, num2)
    if (num1 < 10 or num2 < 10)
        return num1 × num2 /* fall back to traditional multiplication */
    /* Calculates the size of the numbers. */
    m = max(size_base10(num1), size_base10(num2))
    m2 = floor(m / 2)
    /* Split the digit sequences in the middle. */
    high1, low1 = split_at(num1, m2)
    high2, low2 = split_at(num2, m2)
    /* 3 recursive calls made to numbers approximately half the size. */
    z0 = karatsuba(low1, low2)
    z1 = karatsuba(low1 + high1, low2 + high2)
    z2 = karatsuba(high1, high2)
    return (z2 × 10^(m2 × 2)) + ((z1 − z2 − z0) × 10^m2) + z0

An issue that occurs during implementation is that the above computation of ( x 1 + x 0 ) {\displaystyle (x_{1}+x_{0})} and ( y 1 + y 0 ) {\displaystyle (y_{1}+y_{0})} for z 1 {\displaystyle z_{1}} may result in overflow (it will produce a result in the range B m ≤ result < 2 B m {\displaystyle B^{m}\leq {\text{result}}<2B^{m}} ), which requires a multiplier having one extra bit. This can be avoided by noting that z 1 = ( x 0 − x 1 ) ( y 1 − y 0 ) + z 2 + z 0 . {\displaystyle z_{1}=(x_{0}-x_{1})(y_{1}-y_{0})+z_{2}+z_{0}.} This computation of ( x 0 − x 1 ) {\displaystyle (x_{0}-x_{1})} and ( y 1 − y 0 ) {\displaystyle (y_{1}-y_{0})} will produce a result in the range − B m < result < B m {\displaystyle -B^{m}<{\text{result}}<B^{m}} .
This method may produce negative numbers, which require one extra bit to encode signedness, and would still require one extra bit for the multiplier. However, one way to avoid this is to record the sign and then use the absolute values of ( x 0 − x 1 ) {\displaystyle (x_{0}-x_{1})} and ( y 1 − y 0 ) {\displaystyle (y_{1}-y_{0})} to perform an unsigned multiplication, after which the product is negated exactly when the signs of the two differences differ. Another advantage is that even though ( x 0 − x 1 ) ( y 1 − y 0 ) {\displaystyle (x_{0}-x_{1})(y_{1}-y_{0})} may be negative, the final computation of z 1 {\displaystyle z_{1}} only involves additions. == References == == External links == Karatsuba's Algorithm for Polynomial Multiplication Weisstein, Eric W. "Karatsuba Multiplication". MathWorld. Bernstein, D. J., "Multidigit multiplication for mathematicians". Covers Karatsuba and many other multiplication algorithms.
Wikipedia/Karatsuba_algorithm
Lehmer's GCD algorithm, named after Derrick Henry Lehmer, is a fast GCD algorithm, an improvement on the simpler but slower Euclidean algorithm. It is mainly used for big integers that have a representation as a string of digits relative to some chosen numeral system base, say β = 1000 or β = 2^32. == Algorithm == Lehmer noted that most of the quotients from each step of the division part of the standard algorithm are small. (For example, Knuth observed that the quotients 1, 2, and 3 comprise 67.7% of all quotients.) Those small quotients can be identified from only a few leading digits. Thus the algorithm starts by splitting off those leading digits and computing the sequence of quotients as long as it is correct. Say we want to obtain the GCD of the two integers a and b. Let a ≥ b. If b contains only one digit (in the chosen base, say β = 1000 or β = 2^32), use some other method, such as the Euclidean algorithm, to obtain the result. If a and b differ in the length of digits, perform a division so that a and b are equal in length, with length equal to m. Outer loop: Iterate until one of a or b is zero: Decrease m by one. Let x be the leading (most significant) digit in a, x = a div β^m, and y the leading digit in b, y = b div β^m. Initialize a 2 by 3 matrix [ A B x C D y ] {\displaystyle \textstyle {\begin{bmatrix}A&B&x\\C&D&y\end{bmatrix}}} to an extended identity matrix [ 1 0 x 0 1 y ] , {\displaystyle \textstyle {\begin{bmatrix}1&0&x\\0&1&y\end{bmatrix}},} and perform the Euclidean algorithm simultaneously on the pairs (x + A, y + C) and (x + B, y + D), until the quotients differ. That is, iterate as an inner loop: Compute the quotients w1 of the long divisions of (x + A) by (y + C) and w2 of (x + B) by (y + D) respectively. Also let w be the (not computed) quotient from the current long division in the chain of long divisions of the Euclidean algorithm. If w1 ≠ w2, then break out of the inner iteration. Else set w to w1 (or w2).
Replace the current matrix [ A B x C D y ] {\displaystyle \textstyle {\begin{bmatrix}A&B&x\\C&D&y\end{bmatrix}}} with the matrix product [ 0 1 1 − w ] ⋅ [ A B x C D y ] = [ C D y A − w C B − w D x − w y ] {\displaystyle \textstyle {\begin{bmatrix}0&1\\1&-w\end{bmatrix}}\cdot {\begin{bmatrix}A&B&x\\C&D&y\end{bmatrix}}={\begin{bmatrix}C&D&y\\A-wC&B-wD&x-wy\end{bmatrix}}} according to the matrix formulation of the extended euclidean algorithm. If B ≠ 0, go to the start of the inner loop. If B = 0, we have reached a deadlock; perform a normal step of the euclidean algorithm with a and b, and restart the outer loop. Set a to aA + bB and b to Ca + Db (again simultaneously). This applies the steps of the euclidean algorithm that were performed on the leading digits in compressed form to the long integers a and b. If b ≠ 0 go to the start of the outer loop. == References == Kapil Paranjape, Lehmer's Algorithm
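The outer and inner loops described above can be sketched in Python. This sketch follows Knuth's classical formulation of Lehmer's method; the 32-bit word size, the bit-shift extraction of the leading digits, and the final plain-Euclid phase are implementation choices made here, not prescribed by the description:

```python
def lehmer_gcd(a, b, word_bits=32):
    """Sketch of Lehmer's GCD: simulate the Euclidean quotient sequence
    on single-word leading digits, then apply the accumulated matrix
    to the long operands in one batch."""
    a, b = abs(a), abs(b)
    if a < b:
        a, b = b, a
    beta = 1 << word_bits
    while b >= beta:
        shift = a.bit_length() - word_bits   # same shift for both operands
        x, y = a >> shift, b >> shift        # single-word leading digits
        A, B, C, D = 1, 0, 0, 1              # extended identity matrix
        while y + C != 0 and y + D != 0:
            w1 = (x + A) // (y + C)          # lower/upper bounds on the
            w2 = (x + B) // (y + D)          # true quotient w
            if w1 != w2:                     # simulated quotients disagree
                break
            w = w1
            A, C = C, A - w * C              # one step of extended Euclid
            B, D = D, B - w * D
            x, y = y, x - w * y
        if B == 0:                           # no progress: one long division
            a, b = b, a % b
        else:                                # apply all batched steps at once
            a, b = A * a + B * b, C * a + D * b
    while b:                                 # finish with plain Euclid
        a, b = b, a % b
    return a
```

The batched update `a, b = A*a + B*b, C*a + D*b` is where the speedup comes from: many quotient steps cost only two long multiplications each instead of a long division.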
Wikipedia/Lehmer's_GCD_algorithm
Trial division is the most laborious but easiest to understand of the integer factorization algorithms. The essential idea behind trial division is to test whether an integer n, the integer to be factored, can be divided by each number in turn that is less than or equal to the square root of n. For example, to find the prime factors of n = 70, one can try to divide 70 by successive primes: first, 70 / 2 = 35; next, neither 2 nor 3 evenly divides 35; finally, 35 / 5 = 7, and 7 is itself prime. So 70 = 2 × 5 × 7. Trial division was first described by Fibonacci in his book Liber Abaci (1202). == Method == Given an integer n (the integer to be factored), trial division consists of systematically testing whether n is divisible by any smaller number. Clearly, it is only worthwhile to test candidate factors less than n, and in order from two upwards, because an arbitrary n is more likely to be divisible by two than by three, and so on. With this ordering, there is no point in testing for divisibility by four if the number has already been determined not divisible by two, and so on for three and any multiple of three, etc. Therefore, the effort can be reduced by selecting only prime numbers as candidate factors. Furthermore, the trial factors need go no further than n {\displaystyle \scriptstyle {\sqrt {n}}} because, if n is divisible by some number p, then n = p × q, and if q were smaller than p, n would have been detected earlier as being divisible by q or by a prime factor of q. A definite bound on the prime factors is possible. Suppose Pi is the i-th prime, so that P1 = 2, P2 = 3, P3 = 5, etc. Then the last prime number worth testing as a possible factor of n is the Pi for which the square of Pi+1 exceeds n; equality here would mean that Pi+1 is a factor. Thus, testing with 2, 3, and 5 suffices up to n = 48, not just 25, because the square of the next prime is 49, and below n = 25 just 2 and 3 are sufficient.
Should the square root of n be an integer, then it is a factor and n is a perfect square. The trial division algorithm in pseudocode:

algorithm trial-division is
    input: Integer n to be factored
    output: List F of prime factors of n
    P ← set of all primes ≤ √n
    F ← empty list of factors
    for each prime p in P do
        while n mod p is 0
            Add factor p to list F
            n ← n/p
    if n > 1 (the remaining cofactor is prime)
        Add factor n to list F

Determining the primes less than or equal to n {\displaystyle {\sqrt {n}}} is not a trivial task as n gets larger, so the simplest computer programs to factor a number just try successive integers, prime and composite, from 2 to n {\displaystyle {\sqrt {n}}} as possible factors. == Speed == In the worst case, trial division is a laborious algorithm. For a base-2 n digit number a, if it starts from two and works up only to the square root of a, the algorithm requires π ( 2 n / 2 ) ≈ 2 n / 2 ( n 2 ) ln ⁡ 2 {\displaystyle \pi (2^{n/2})\approx {2^{n/2} \over \left({\frac {n}{2}}\right)\ln 2}} trial divisions, where π ( x ) {\displaystyle \pi (x)} denotes the prime-counting function, the number of primes less than x. This does not take into account the overhead of primality testing to obtain the prime numbers as candidate factors. A useful table need not be large: P(3512) = 32749, the last prime that fits into a sixteen-bit signed integer, and P(6542) = 65521 for unsigned sixteen-bit integers. That would suffice to test primality for numbers up to 65537^2 = 4,295,098,369. Preparing such a table (usually via the Sieve of Eratosthenes) would only be worthwhile if many numbers were to be tested. If instead a variant is used without primality testing, but simply dividing by every odd number less than the square root of the base-2 n digit number a, prime or not, it can take up to about: 2 n / 2 {\displaystyle 2^{n/2}} In both cases, the required time grows exponentially with the digits of the number.
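A Python version of the procedure, trying successive integers as the text suggests rather than precomputed primes (the final step adds any remaining cofactor, which is necessarily prime):

```python
def trial_division(n):
    """Return the prime factors of n (n >= 2) in nondecreasing order,
    dividing by successive integers up to the square root of the
    remaining cofactor."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:       # d divides n: record it and divide out
            factors.append(d)
            n //= d
        d += 1                  # composite d can no longer divide n here
    if n > 1:                   # leftover cofactor is prime
        factors.append(n)
    return factors
```

For the running example, `trial_division(70)` returns `[2, 5, 7]`.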
Even so, this is a quite satisfactory method, considering that even the best-known algorithms have exponential time growth. For a chosen uniformly at random from integers of a given length, there is a 50% chance that 2 is a factor of a and a 33% chance that 3 is a factor of a, and so on. It can be shown that 88% of all positive integers have a factor under 100 and that 92% have a factor under 1000. Thus, when confronted by an arbitrary large a, it is worthwhile to check for divisibility by the small primes, since for a = 1000 {\displaystyle a=1000} the base-2 digit count is only n = 10 {\displaystyle n=10} . However, many-digit numbers that do not have factors in the small primes can require days or months to factor with trial division. In such cases other methods are used such as the quadratic sieve and the general number field sieve (GNFS). Because these methods also have superpolynomial time growth a practical limit of n digits is reached very quickly. For this reason, in public key cryptography, values for a are chosen to have large prime factors of similar size so that they cannot be factored by any publicly known method in a useful time period on any available computer system or computer cluster such as supercomputers and computer grids. The largest cryptography-grade number that has been factored is RSA-250, a 250-digit number, using the GNFS and resources of several supercomputers. The running time was 2700 core years. == References == Childs, Lindsay N. (2009). A concrete introduction to higher algebra. Undergraduate Texts in Mathematics (3rd ed.). New York, NY: Springer-Verlag. ISBN 978-0-387-74527-5. Zbl 1165.00002. Crandall, Richard; Pomerance, Carl (2005). Prime numbers. A computational perspective (2nd ed.). New York, NY: Springer-Verlag. ISBN 0-387-25282-7. Zbl 1088.11001. == External links == Wikiversity offers a lesson on prime factorization using trial division with Python. Fast JavaScript Prime Factor Calculator using trial division.
Can handle numbers up to about 2^53. Trial Division in Java, C and JavaScript (in Portuguese)
Wikipedia/Trial_division
In number theory, Berlekamp's root finding algorithm, also called the Berlekamp–Rabin algorithm, is a probabilistic method for finding roots of polynomials over the field F p {\displaystyle \mathbb {F} _{p}} with p {\displaystyle p} elements. The method was discovered by Elwyn Berlekamp in 1970 as an auxiliary to the algorithm for polynomial factorization over finite fields. The algorithm was later modified by Rabin for arbitrary finite fields in 1979. The method was also independently discovered before Berlekamp by other researchers. == History == The method was proposed by Elwyn Berlekamp in his 1970 work on polynomial factorization over finite fields. His original work lacked a formal correctness proof and was later refined and modified for arbitrary finite fields by Michael Rabin. In 1986 René Peralta proposed a similar algorithm for finding square roots in F p {\displaystyle \mathbb {F} _{p}} . In 2000 Peralta's method was generalized for cubic equations. == Statement of problem == Let p {\displaystyle p} be an odd prime number. Consider the polynomial f ( x ) = a 0 + a 1 x + ⋯ + a n x n {\textstyle f(x)=a_{0}+a_{1}x+\cdots +a_{n}x^{n}} over the field F p ≃ Z / p Z {\displaystyle \mathbb {F} _{p}\simeq \mathbb {Z} /p\mathbb {Z} } of remainders modulo p {\displaystyle p} . The algorithm should find all λ {\displaystyle \lambda } in F p {\displaystyle \mathbb {F} _{p}} such that f ( λ ) = 0 {\textstyle f(\lambda )=0} in F p {\displaystyle \mathbb {F} _{p}} . == Algorithm == === Randomization === Let f ( x ) = ( x − λ 1 ) ( x − λ 2 ) ⋯ ( x − λ n ) {\textstyle f(x)=(x-\lambda _{1})(x-\lambda _{2})\cdots (x-\lambda _{n})} . Finding all roots of this polynomial is equivalent to finding its factorization into linear factors. To find such a factorization it is sufficient to split the polynomial into any two non-trivial divisors and factorize them recursively.
To do this, consider the polynomial f z ( x ) = f ( x − z ) = ( x − λ 1 − z ) ( x − λ 2 − z ) ⋯ ( x − λ n − z ) {\textstyle f_{z}(x)=f(x-z)=(x-\lambda _{1}-z)(x-\lambda _{2}-z)\cdots (x-\lambda _{n}-z)} where z {\displaystyle z} is some element of F p {\displaystyle \mathbb {F} _{p}} . If one can represent this polynomial as the product f z ( x ) = p 0 ( x ) p 1 ( x ) {\displaystyle f_{z}(x)=p_{0}(x)p_{1}(x)} then in terms of the initial polynomial it means that f ( x ) = p 0 ( x + z ) p 1 ( x + z ) {\displaystyle f(x)=p_{0}(x+z)p_{1}(x+z)} , which provides the needed factorization of f ( x ) {\displaystyle f(x)} . === Classification of F p {\displaystyle \mathbb {F} _{p}} elements === Due to Euler's criterion, for every monomial ( x − λ ) {\displaystyle (x-\lambda )} exactly one of the following properties holds: The monomial is equal to x {\displaystyle x} if λ = 0 {\displaystyle \lambda =0} , The monomial divides g 0 ( x ) = ( x ( p − 1 ) / 2 − 1 ) {\textstyle g_{0}(x)=(x^{(p-1)/2}-1)} if λ {\displaystyle \lambda } is a quadratic residue modulo p {\displaystyle p} , The monomial divides g 1 ( x ) = ( x ( p − 1 ) / 2 + 1 ) {\textstyle g_{1}(x)=(x^{(p-1)/2}+1)} if λ {\displaystyle \lambda } is a quadratic non-residue modulo p {\displaystyle p} . Thus if f z ( x ) {\displaystyle f_{z}(x)} is not divisible by x {\displaystyle x} , which may be checked separately, then f z ( x ) {\displaystyle f_{z}(x)} is equal to the product of the greatest common divisors gcd ( f z ( x ) ; g 0 ( x ) ) {\displaystyle \gcd(f_{z}(x);g_{0}(x))} and gcd ( f z ( x ) ; g 1 ( x ) ) {\displaystyle \gcd(f_{z}(x);g_{1}(x))} .
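For a small prime, this classification can be verified exhaustively. The following snippet (with p = 11 chosen purely for illustration) evaluates Euler's criterion for every nonzero element:

```python
# Classify every nonzero element of F_p by Euler's criterion:
# x^((p-1)/2) is 1 mod p for quadratic residues and p-1 (i.e. -1)
# for quadratic non-residues.
p = 11
residues = [x for x in range(1, p) if pow(x, (p - 1) // 2, p) == 1]
non_residues = [x for x in range(1, p) if pow(x, (p - 1) // 2, p) == p - 1]
print(residues)       # [1, 3, 4, 5, 9] — exactly the squares mod 11
print(non_residues)   # [2, 6, 7, 8, 10]
```

Every nonzero element falls into exactly one of the two lists, matching the trichotomy above once λ = 0 is set aside.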
=== Berlekamp's method === The property above leads to the following algorithm: Explicitly calculate coefficients of f z ( x ) = f ( x − z ) {\displaystyle f_{z}(x)=f(x-z)} , Calculate remainders of x , x 2 , x 2 2 , x 2 3 , x 2 4 , … , x 2 ⌊ log 2 ⁡ p ⌋ {\textstyle x,x^{2},x^{2^{2}},x^{2^{3}},x^{2^{4}},\ldots ,x^{2^{\lfloor \log _{2}p\rfloor }}} modulo f z ( x ) {\displaystyle f_{z}(x)} by squaring the current polynomial and taking remainder modulo f z ( x ) {\displaystyle f_{z}(x)} , Using exponentiation by squaring and polynomials calculated on the previous steps calculate the remainder of x ( p − 1 ) / 2 {\textstyle x^{(p-1)/2}} modulo f z ( x ) {\textstyle f_{z}(x)} , If x ( p − 1 ) / 2 ≢ ± 1 ( mod f z ( x ) ) {\textstyle x^{(p-1)/2}\not \equiv \pm 1{\pmod {f_{z}(x)}}} then the gcd {\displaystyle \gcd } computations mentioned above provide a non-trivial factorization of f z ( x ) {\displaystyle f_{z}(x)} , Otherwise all roots of f z ( x ) {\displaystyle f_{z}(x)} are either residues or non-residues simultaneously and one has to choose another z {\displaystyle z} . If f ( x ) {\displaystyle f(x)} is divisible by some non-linear primitive polynomial g ( x ) {\displaystyle g(x)} over F p {\displaystyle \mathbb {F} _{p}} then when calculating gcd {\displaystyle \gcd } with g 0 ( x ) {\displaystyle g_{0}(x)} and g 1 ( x ) {\displaystyle g_{1}(x)} one will obtain a non-trivial factorization of f z ( x ) / g z ( x ) {\displaystyle f_{z}(x)/g_{z}(x)} , thus the algorithm allows one to find all roots of arbitrary polynomials over F p {\displaystyle \mathbb {F} _{p}} . === Modular square root === Consider the equation x 2 ≡ a ( mod p ) {\textstyle x^{2}\equiv a{\pmod {p}}} having elements β {\displaystyle \beta } and − β {\displaystyle -\beta } as its roots. Solving this equation is equivalent to factoring the polynomial f ( x ) = x 2 − a = ( x − β ) ( x + β ) {\textstyle f(x)=x^{2}-a=(x-\beta )(x+\beta )} over F p {\displaystyle \mathbb {F} _{p}} .
In this particular case it is sufficient to calculate only gcd ( f z ( x ) ; g 0 ( x ) ) {\displaystyle \gcd(f_{z}(x);g_{0}(x))} . For this polynomial exactly one of the following properties will hold: The GCD is equal to 1 {\displaystyle 1} which means that z + β {\displaystyle z+\beta } and z − β {\displaystyle z-\beta } are both quadratic non-residues, The GCD is equal to f z ( x ) {\displaystyle f_{z}(x)} which means that both numbers are quadratic residues, The GCD is equal to ( x − t ) {\displaystyle (x-t)} which means that exactly one of these numbers is a quadratic residue. In the third case the GCD is equal to either ( x − z − β ) {\displaystyle (x-z-\beta )} or ( x − z + β ) {\displaystyle (x-z+\beta )} . This allows one to write the solution as β = ( t − z ) ( mod p ) {\textstyle \beta =(t-z){\pmod {p}}} . === Example === Assume we need to solve the equation x 2 ≡ 5 ( mod 11 ) {\textstyle x^{2}\equiv 5{\pmod {11}}} . For this we need to factorize f ( x ) = x 2 − 5 = ( x − β ) ( x + β ) {\displaystyle f(x)=x^{2}-5=(x-\beta )(x+\beta )} . Consider some possible values of z {\displaystyle z} : Let z = 3 {\displaystyle z=3} . Then f z ( x ) = ( x − 3 ) 2 − 5 = x 2 − 6 x + 4 {\displaystyle f_{z}(x)=(x-3)^{2}-5=x^{2}-6x+4} , thus gcd ( x 2 − 6 x + 4 ; x 5 − 1 ) = 1 {\displaystyle \gcd(x^{2}-6x+4;x^{5}-1)=1} . Both numbers 3 ± β {\displaystyle 3\pm \beta } are quadratic non-residues, so we need to take some other z {\displaystyle z} . Let z = 2 {\displaystyle z=2} . Then f z ( x ) = ( x − 2 ) 2 − 5 = x 2 − 4 x − 1 {\displaystyle f_{z}(x)=(x-2)^{2}-5=x^{2}-4x-1} , thus gcd ( x 2 − 4 x − 1 ; x 5 − 1 ) ≡ x − 9 ( mod 11 ) {\textstyle \gcd(x^{2}-4x-1;x^{5}-1)\equiv x-9{\pmod {11}}} . From this it follows that x − 9 = x − 2 − β {\textstyle x-9=x-2-\beta } , so β ≡ 7 ( mod 11 ) {\displaystyle \beta \equiv 7{\pmod {11}}} and − β ≡ − 7 ≡ 4 ( mod 11 ) {\textstyle -\beta \equiv -7\equiv 4{\pmod {11}}} .
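The shift-and-gcd procedure for the square root case can be sketched in Python. This is an illustrative implementation, not code from the article: the function name, the retry loop with a final verification, and the representation of linear polynomials as coefficient pairs are choices made here. Linear polynomials u·x + v are multiplied modulo the monic quadratic f_z using the reduction x² ≡ −b·x − c:

```python
import random

def berlekamp_sqrt(a, p):
    """Sketch of Berlekamp–Rabin square roots mod an odd prime p.
    Requires a to be a quadratic residue (checked by Euler's criterion)."""
    a %= p
    if a == 0:
        return 0
    if pow(a, (p - 1) // 2, p) != 1:
        raise ValueError("a is not a quadratic residue mod p")
    e = (p - 1) // 2
    while True:
        z = random.randrange(p)
        # f_z(x) = f(x - z) = (x - z)^2 - a = x^2 + b*x + c, monic
        b, c = (-2 * z) % p, (z * z - a) % p

        def mulmod(p1, p2):
            # (u1*x + v1)(u2*x + v2) reduced by x^2 ≡ -b*x - c (mod f_z)
            u1, v1 = p1
            u2, v2 = p2
            s = u1 * u2 % p
            return ((u1 * v2 + u2 * v1 - s * b) % p,
                    (v1 * v2 - s * c) % p)

        # x^((p-1)/2) mod f_z by binary exponentiation on (u, v) pairs
        result, base, k = (0, 1), (1, 0), e
        while k:
            if k & 1:
                result = mulmod(result, base)
            base = mulmod(base, base)
            k >>= 1
        u, v = result
        if u == 0:
            continue          # both roots on the same side: pick another z
        # gcd(f_z, x^((p-1)/2) - 1) = x - t, with t the root of u*x + (v - 1)
        t = (1 - v) * pow(u, p - 2, p) % p
        beta = (t - z) % p    # shift back: the root of f_z is t = z + beta
        if beta * beta % p == a:
            return beta       # verification guards against degenerate z
```

For the worked example, `berlekamp_sqrt(5, 11)` returns 7 or 4, the two square roots found above.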
A manual check shows that, indeed, 7 2 ≡ 49 ≡ 5 ( mod 11 ) {\textstyle 7^{2}\equiv 49\equiv 5{\pmod {11}}} and 4 2 ≡ 16 ≡ 5 ( mod 11 ) {\textstyle 4^{2}\equiv 16\equiv 5{\pmod {11}}} . == Correctness proof == The algorithm finds a factorization of f z ( x ) {\displaystyle f_{z}(x)} in all cases except for those where all numbers z + λ 1 , z + λ 2 , … , z + λ n {\displaystyle z+\lambda _{1},z+\lambda _{2},\ldots ,z+\lambda _{n}} are quadratic residues or non-residues simultaneously. According to the theory of cyclotomy, the probability of such an event for the case when λ 1 , … , λ n {\displaystyle \lambda _{1},\ldots ,\lambda _{n}} are all residues or non-residues simultaneously (that is, when z = 0 {\displaystyle z=0} would fail) may be estimated as 2 − k {\displaystyle 2^{-k}} where k {\displaystyle k} is the number of distinct values in λ 1 , … , λ n {\displaystyle \lambda _{1},\ldots ,\lambda _{n}} . In this way even for the worst case of k = 1 {\displaystyle k=1} and f ( x ) = ( x − λ ) n {\displaystyle f(x)=(x-\lambda )^{n}} , the probability of error may be estimated as 1 / 2 {\displaystyle 1/2} , and for the modular square root case the error probability is at most 1 / 4 {\displaystyle 1/4} . == Complexity == Let a polynomial have degree n {\displaystyle n} . We derive the algorithm's complexity as follows: Due to the binomial theorem ( x − z ) k = ∑ i = 0 k ( k i ) ( − z ) k − i x i {\textstyle (x-z)^{k}=\sum \limits _{i=0}^{k}{\binom {k}{i}}(-z)^{k-i}x^{i}} , we may transition from f ( x ) {\displaystyle f(x)} to f ( x − z ) {\displaystyle f(x-z)} in O ( n 2 ) {\displaystyle O(n^{2})} time. Polynomial multiplication and taking remainder of one polynomial modulo another one may be done in O ( n 2 ) {\textstyle O(n^{2})} , thus the calculation of x 2 k mod f z ( x ) {\textstyle x^{2^{k}}{\bmod {f}}_{z}(x)} is done in O ( n 2 log ⁡ p ) {\textstyle O(n^{2}\log p)} . Binary exponentiation works in O ( n 2 log ⁡ p ) {\displaystyle O(n^{2}\log p)} .
Taking the gcd {\displaystyle \gcd } of two polynomials via the Euclidean algorithm works in O ( n 2 ) {\displaystyle O(n^{2})} . Thus the whole procedure may be done in O ( n 2 log ⁡ p ) {\displaystyle O(n^{2}\log p)} . Using the fast Fourier transform and the Half-GCD algorithm, the algorithm's complexity may be improved to O ( n log ⁡ n log ⁡ p n ) {\displaystyle O(n\log n\log pn)} . For the modular square root case, the degree is n = 2 {\displaystyle n=2} , thus the whole complexity of the algorithm in this case is bounded by O ( log ⁡ p ) {\displaystyle O(\log p)} per iteration. == References ==
Wikipedia/Berlekamp–Rabin_algorithm
The binary GCD algorithm, also known as Stein's algorithm or the binary Euclidean algorithm, is an algorithm that computes the greatest common divisor (GCD) of two nonnegative integers. Stein's algorithm uses simpler arithmetic operations than the conventional Euclidean algorithm; it replaces division with arithmetic shifts, comparisons, and subtraction. Although the algorithm in its contemporary form was first published by the physicist and programmer Josef Stein in 1967, it was known by the 2nd century BCE, in ancient China. == Algorithm == The algorithm finds the GCD of two nonnegative numbers u {\displaystyle u} and v {\displaystyle v} by repeatedly applying these identities: gcd ( u , 0 ) = u {\displaystyle \gcd(u,0)=u} : everything divides zero, and u {\displaystyle u} is the largest number that divides u {\displaystyle u} . gcd ( 2 u , 2 v ) = 2 ⋅ gcd ( u , v ) {\displaystyle \gcd(2u,2v)=2\cdot \gcd(u,v)} : 2 {\displaystyle 2} is a common divisor. gcd ( u , 2 v ) = gcd ( u , v ) {\displaystyle \gcd(u,2v)=\gcd(u,v)} if u {\displaystyle u} is odd: 2 {\displaystyle 2} is then not a common divisor. gcd ( u , v ) = gcd ( u , v − u ) {\displaystyle \gcd(u,v)=\gcd(u,v-u)} if u , v {\displaystyle u,v} odd and u ≤ v {\displaystyle u\leq v} . As GCD is commutative ( gcd ( u , v ) = gcd ( v , u ) {\displaystyle \gcd(u,v)=\gcd(v,u)} ), those identities still apply if the operands are swapped: gcd ( 0 , v ) = v {\displaystyle \gcd(0,v)=v} , gcd ( 2 u , v ) = gcd ( u , v ) {\displaystyle \gcd(2u,v)=\gcd(u,v)} if v {\displaystyle v} is odd, etc. 
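The four identities (together with their swapped forms) translate directly into a recursive procedure. The following sketch, in Rust since the article's implementation notes refer to Rust, is a literal, unoptimized transcription and is illustrative only:

```rust
// Recursive binary GCD, one match arm per identity.
fn gcd(u: u64, v: u64) -> u64 {
    match (u, v) {
        (u, 0) => u,                                           // gcd(u, 0) = u
        (0, v) => v,                                           // gcd(0, v) = v
        (u, v) if u % 2 == 0 && v % 2 == 0 => 2 * gcd(u / 2, v / 2), // 2 is common
        (u, v) if v % 2 == 0 => gcd(u, v / 2),                 // u odd: 2 not common
        (u, v) if u % 2 == 0 => gcd(u / 2, v),                 // v odd: 2 not common
        (u, v) if u <= v => gcd(u, v - u),                     // both odd, u <= v
        (u, v) => gcd(v, u - v),                               // both odd, u > v
    }
}
```

Since every subtraction of two odd numbers yields an even number, each pair of steps removes at least one factor of two, which is where the O(n) step bound below comes from.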
== Implementation == While the above description of the algorithm is mathematically correct, performant software implementations typically differ from it in a few notable ways: eschewing trial division by 2 {\displaystyle 2} in favour of a single bitshift and the count trailing zeros primitive; this is functionally equivalent to repeatedly applying identity 3, but much faster; expressing the algorithm iteratively rather than recursively: the resulting implementation can be laid out to avoid repeated work, invoking identity 2 at the start and maintaining as invariant that both numbers are odd upon entering the loop, which only needs to implement identities 3 and 4; making the loop's body branch-free except for its exit condition ( v = 0 {\displaystyle v=0} ): in the example below, the exchange of u {\displaystyle u} and v {\displaystyle v} (ensuring u ≤ v {\displaystyle u\leq v} ) compiles down to conditional moves; hard-to-predict branches can have a large, negative impact on performance. The following is an implementation of the algorithm in Rust exemplifying those differences, adapted from uutils: Note: The implementation above accepts unsigned (non-negative) integers; given that gcd ( u , v ) = gcd ( ± u , ± v ) {\displaystyle \gcd(u,v)=\gcd(\pm {}u,\pm {}v)} , the signed case can be handled as follows: == Complexity == Asymptotically, the algorithm requires O ( n ) {\displaystyle O(n)} steps, where n {\displaystyle n} is the number of bits in the larger of the two numbers, as every two steps reduce at least one of the operands by at least a factor of 2 {\displaystyle 2} . Each step involves only a few arithmetic operations ( O ( 1 ) {\displaystyle O(1)} with a small constant); when working with word-sized numbers, each arithmetic operation translates to a single machine operation, so the number of machine operations is on the order of n {\displaystyle n} , i.e. log 2 ⁡ ( max ( u , v ) ) {\displaystyle \log _{2}(\max(u,v))} . 
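The uutils implementation referenced in the Implementation section above is not reproduced in this text; the following is a sketch along the lines just described (identity 2 applied once up front, a count-trailing-zeros primitive instead of repeated halving, and a loop whose body keeps both operands odd), not the original code:

```rust
// Iterative binary GCD in the style described above.
fn binary_gcd(mut u: u64, mut v: u64) -> u64 {
    if u == 0 {
        return v;
    }
    if v == 0 {
        return u;
    }
    // Identity 2: factor out the common power of two once, up front.
    let shift = (u | v).trailing_zeros();
    u >>= u.trailing_zeros();
    loop {
        // Invariant: u is odd. Identity 3: strip v's factors of two.
        v >>= v.trailing_zeros();
        // Ensure u <= v (gcd is commutative); on many targets this
        // compiles to conditional moves rather than a branch.
        if u > v {
            std::mem::swap(&mut u, &mut v);
        }
        // Identity 4: both operands are odd here, so v - u is even.
        v -= u;
        if v == 0 {
            break;
        }
    }
    u << shift
}
```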
For arbitrarily large numbers, the asymptotic complexity of this algorithm is O ( n 2 ) {\displaystyle O(n^{2})} , as each arithmetic operation (subtract and shift) involves a linear number of machine operations (one per word in the numbers' binary representation). If the numbers can be represented in the machine's memory, i.e. each number's size can be represented by a single machine word, this bound is reduced to: O ( n 2 log 2 ⁡ n ) {\displaystyle O\left({\frac {n^{2}}{\log _{2}n}}\right)} This is the same as for the Euclidean algorithm, though a more precise analysis by Akhavi and Vallée proved that binary GCD uses about 60% fewer bit operations. == Extensions == The binary GCD algorithm can be extended in several ways, either to output additional information, deal with arbitrarily large integers more efficiently, or to compute GCDs in domains other than the integers. The extended binary GCD algorithm, analogous to the extended Euclidean algorithm, fits in the first kind of extension, as it provides the Bézout coefficients in addition to the GCD: integers a {\displaystyle a} and b {\displaystyle b} such that a ⋅ u + b ⋅ v = gcd ( u , v ) {\displaystyle a\cdot {}u+b\cdot {}v=\gcd(u,v)} . In the case of large integers, the best asymptotic complexity is O ( M ( n ) log ⁡ n ) {\displaystyle O(M(n)\log n)} , with M ( n ) {\displaystyle M(n)} the cost of n {\displaystyle n} -bit multiplication; this is near-linear and vastly smaller than the binary GCD algorithm's O ( n 2 ) {\displaystyle O(n^{2})} , though concrete implementations only outperform older algorithms for numbers larger than about 64 kilobits (i.e. greater than 8×10^19265). This is achieved by extending the binary GCD algorithm using ideas from the Schönhage–Strassen algorithm for fast integer multiplication. The binary GCD algorithm has also been extended to domains other than natural numbers, such as Gaussian integers, Eisenstein integers, quadratic rings, and integer rings of number fields.
== Historical description == An algorithm for computing the GCD of two numbers was known in ancient China, under the Han dynasty, as a method to reduce fractions: If possible halve it; otherwise, take the denominator and the numerator, subtract the lesser from the greater, and do that alternately to make them the same. Reduce by the same number. The phrase "if possible halve it" is ambiguous: if this applies when either of the numbers becomes even, the algorithm is the binary GCD algorithm; if it applies only when both numbers are even, the algorithm is similar to the Euclidean algorithm. == See also == Euclidean algorithm Extended Euclidean algorithm Least common multiple == References == == Further reading == Knuth, Donald (1998). "§4.5 Rational arithmetic". Seminumerical Algorithms. The Art of Computer Programming. Vol. 2 (3rd ed.). Addison-Wesley. pp. 330–417. ISBN 978-0-201-89684-8. Covers the extended binary GCD, and a probabilistic analysis of the algorithm. Cohen, Henri (1993). "Chapter 1 : Fundamental Number-Theoretic Algorithms". A Course In Computational Algebraic Number Theory. Graduate Texts in Mathematics. Vol. 138. Springer-Verlag. pp. 12–24. ISBN 0-387-55640-0. Covers a variety of topics, including the extended binary GCD algorithm which outputs Bézout coefficients, efficient handling of multi-precision integers using a variant of Lehmer's GCD algorithm, and the relationship between GCD and continued fraction expansions of real numbers. Vallée, Brigitte (September–October 1998). "Dynamics of the Binary Euclidean Algorithm: Functional Analysis and Operators". Algorithmica. 22 (4): 660–685. doi:10.1007/PL00009246. S2CID 27441335. Archived from the original (PS) on 13 May 2011. An analysis of the algorithm in the average case, through the lens of functional analysis: the algorithms' main parameters are cast as a dynamical system, and their average value is related to the invariant measure of the system's transfer operator.
== External links == NIST Dictionary of Algorithms and Data Structures: binary GCD algorithm Cut-the-Knot: Binary Euclid's Algorithm at cut-the-knot Analysis of the Binary Euclidean Algorithm (1976), a paper by Richard P. Brent, including a variant using left shifts
Wikipedia/Binary_GCD_algorithm
In group theory, the Pohlig–Hellman algorithm, sometimes credited as the Silver–Pohlig–Hellman algorithm, is a special-purpose algorithm for computing discrete logarithms in a finite abelian group whose order is a smooth integer. The algorithm was introduced by Roland Silver, but first published by Stephen Pohlig and Martin Hellman, who credit Silver with its earlier independent but unpublished discovery. Pohlig and Hellman also list Richard Schroeppel and H. Block as having found the same algorithm, later than Silver, but again without publishing it. == Groups of prime-power order == As an important special case, which is used as a subroutine in the general algorithm (see below), the Pohlig–Hellman algorithm applies to groups whose order is a prime power. The basic idea of this algorithm is to iteratively compute the p {\displaystyle p} -adic digits of the logarithm by repeatedly "shifting out" all but one unknown digit in the exponent, and computing that digit by elementary methods. (Note that for readability, the algorithm is stated for cyclic groups — in general, G {\displaystyle G} must be replaced by the subgroup ⟨ g ⟩ {\displaystyle \langle g\rangle } generated by g {\displaystyle g} , which is always cyclic.) Input. A cyclic group G {\displaystyle G} of order n = p e {\displaystyle n=p^{e}} with generator g {\displaystyle g} and an element h ∈ G {\displaystyle h\in G} . Output. The unique integer x ∈ { 0 , … , n − 1 } {\displaystyle x\in \{0,\dots ,n-1\}} such that g x = h {\displaystyle g^{x}=h} . Initialize x 0 := 0. {\displaystyle x_{0}:=0.} Compute γ := g p e − 1 {\displaystyle \gamma :=g^{p^{e-1}}} . By Lagrange's theorem, this element has order p {\displaystyle p} . For all k ∈ { 0 , … , e − 1 } {\displaystyle k\in \{0,\dots ,e-1\}} , do: Compute h k := ( g − x k h ) p e − 1 − k {\displaystyle h_{k}:=(g^{-x_{k}}h)^{p^{e-1-k}}} . 
By construction, the order of this element must divide p {\displaystyle p} , hence h k ∈ ⟨ γ ⟩ {\displaystyle h_{k}\in \langle \gamma \rangle } . Using the baby-step giant-step algorithm, compute d k ∈ { 0 , … , p − 1 } {\displaystyle d_{k}\in \{0,\dots ,p-1\}} such that γ d k = h k {\displaystyle \gamma ^{d_{k}}=h_{k}} . It takes time O ( p ) {\displaystyle O({\sqrt {p}})} . Set x k + 1 := x k + p k d k {\displaystyle x_{k+1}:=x_{k}+p^{k}d_{k}} . Return x e {\displaystyle x_{e}} . The algorithm computes discrete logarithms in time complexity O ( e p ) {\displaystyle O(e{\sqrt {p}})} , far better than the baby-step giant-step algorithm's O ( p e ) {\displaystyle O({\sqrt {p^{e}}})} when e {\displaystyle e} is large. == The general algorithm == In this section, we present the general case of the Pohlig–Hellman algorithm. The core ingredients are the algorithm from the previous section (to compute a logarithm modulo each prime power in the group order) and the Chinese remainder theorem (to combine these to a logarithm in the full group). (Again, we assume the group to be cyclic, with the understanding that a non-cyclic group must be replaced by the subgroup generated by the logarithm's base element.) Input. A cyclic group G {\displaystyle G} of order n {\displaystyle n} with generator g {\displaystyle g} , an element h ∈ G {\displaystyle h\in G} , and a prime factorization n = ∏ i = 1 r p i e i {\textstyle n=\prod _{i=1}^{r}p_{i}^{e_{i}}} . Output. The unique integer x ∈ { 0 , … , n − 1 } {\displaystyle x\in \{0,\dots ,n-1\}} such that g x = h {\displaystyle g^{x}=h} . For each i ∈ { 1 , … , r } {\displaystyle i\in \{1,\dots ,r\}} , do: Compute g i := g n / p i e i {\displaystyle g_{i}:=g^{n/p_{i}^{e_{i}}}} . By Lagrange's theorem, this element has order p i e i {\displaystyle p_{i}^{e_{i}}} . Compute h i := h n / p i e i {\displaystyle h_{i}:=h^{n/p_{i}^{e_{i}}}} . By construction, h i ∈ ⟨ g i ⟩ {\displaystyle h_{i}\in \langle g_{i}\rangle } . 
Using the algorithm above in the group ⟨ g i ⟩ {\displaystyle \langle g_{i}\rangle } , compute x i ∈ { 0 , … , p i e i − 1 } {\displaystyle x_{i}\in \{0,\dots ,p_{i}^{e_{i}}-1\}} such that g i x i = h i {\displaystyle g_{i}^{x_{i}}=h_{i}} . Solve the simultaneous congruence x ≡ x i ( mod p i e i ) ∀ i ∈ { 1 , … , r } . {\displaystyle x\equiv x_{i}{\pmod {p_{i}^{e_{i}}}}\quad \forall i\in \{1,\dots ,r\}{\text{.}}} The Chinese remainder theorem guarantees there exists a unique solution x ∈ { 0 , … , n − 1 } {\displaystyle x\in \{0,\dots ,n-1\}} . Return x {\displaystyle x} . The correctness of this algorithm can be verified via the classification of finite abelian groups: Raising g {\displaystyle g} and h {\displaystyle h} to the power of n / p i e i {\displaystyle n/p_{i}^{e_{i}}} can be understood as the projection to the factor group of order p i e i {\displaystyle p_{i}^{e_{i}}} . == Complexity == The worst-case input for the Pohlig–Hellman algorithm is a group of prime order: In that case, it degrades to the baby-step giant-step algorithm, hence the worst-case time complexity is O ( n ) {\displaystyle {\mathcal {O}}({\sqrt {n}})} . However, it is much more efficient if the order is smooth: Specifically, if ∏ i p i e i {\displaystyle \prod _{i}p_{i}^{e_{i}}} is the prime factorization of n {\displaystyle n} , then the algorithm's complexity is O ( ∑ i e i ( log ⁡ n + p i ) ) {\displaystyle {\mathcal {O}}\left(\sum _{i}{e_{i}(\log n+{\sqrt {p_{i}}})}\right)} group operations. == Notes == == References == Mollin, Richard (2006-09-18). An Introduction To Cryptography (2nd ed.). Chapman and Hall/CRC. p. 344. ISBN 978-1-58488-618-1. Pohlig, S.; Hellman, M. (1978). "An Improved Algorithm for Computing Logarithms over GF(p) and its Cryptographic Significance" (PDF). IEEE Transactions on Information Theory (24): 106–110. doi:10.1109/TIT.1978.1055817. Menezes, Alfred J.; van Oorschot, Paul C.; Vanstone, Scott A. (1997). "Number-Theoretic Reference Problems" (PDF). 
Handbook of Applied Cryptography. CRC Press. pp. 107–109. ISBN 0-8493-8523-7.
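The two routines above (the prime-power subroutine and the CRT combination) can be combined into a short program. In this sketch all names and the small worked numbers are ours; for brevity, naive brute force stands in for baby-step giant-step in the digit search, and the CRT step is a naive search:

```rust
// Modular exponentiation by squaring.
fn pow_mod(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut r = 1 % m;
    b %= m;
    while e > 0 {
        if e & 1 == 1 { r = r * b % m; }
        b = b * b % m;
        e >>= 1;
    }
    r
}

// Discrete log in the subgroup of prime-power order p^e generated by g (mod m):
// recover the base-p digits of x one at a time, as in the article.
fn dlog_prime_power(g: u64, h: u64, p: u64, e: u32, m: u64) -> u64 {
    let gamma = pow_mod(g, p.pow(e - 1), m); // element of order p
    let mut x = 0u64;
    for k in 0..e {
        let g_inv_x = pow_mod(g, p.pow(e) - x, m); // g^(-x) as g^(order - x)
        let hk = pow_mod(g_inv_x * h % m, p.pow(e - 1 - k), m);
        // digit d with gamma^d = hk (brute force; BSGS gives the O(sqrt(p)) bound)
        let d = (0..p).find(|&d| pow_mod(gamma, d, m) == hk).unwrap();
        x += p.pow(k) * d;
    }
    x
}

// General Pohlig–Hellman for a cyclic group of order n = prod p_i^{e_i}.
fn pohlig_hellman(g: u64, h: u64, n: u64, factors: &[(u64, u32)], m: u64) -> u64 {
    let residues: Vec<(u64, u64)> = factors
        .iter()
        .map(|&(p, e)| {
            let pe = p.pow(e);
            let gi = pow_mod(g, n / pe, m); // projection to the order-p^e factor
            let hi = pow_mod(h, n / pe, m);
            (dlog_prime_power(gi, hi, p, e, m), pe)
        })
        .collect();
    // Chinese remainder step (naive search; a real implementation would use
    // the extended Euclidean algorithm).
    (0..n).find(|&x| residues.iter().all(|&(xi, pe)| x % pe == xi)).unwrap()
}
```

As a small made-up check: 3 is a generator of the multiplicative group mod 31 (order 30 = 2 · 3 · 5), and 3^17 ≡ 22 (mod 31), so the general routine recovers 17.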
Wikipedia/Pohlig–Hellman_algorithm
In computational number theory, Williams's p + 1 algorithm is an integer factorization algorithm, one of the family of algebraic-group factorisation algorithms. It was invented by Hugh C. Williams in 1982. It works well if the number N to be factored contains one or more prime factors p such that p + 1 is smooth, i.e. p + 1 contains only small factors. It uses Lucas sequences to perform exponentiation in a quadratic field. It is analogous to Pollard's p − 1 algorithm. == Algorithm == Choose some integer A greater than 2 which characterizes the Lucas sequence: V 0 = 2 , V 1 = A , V j = A V j − 1 − V j − 2 {\displaystyle V_{0}=2,V_{1}=A,V_{j}=AV_{j-1}-V_{j-2}} where all operations are performed modulo N. Then any odd prime p divides gcd ( N , V M − 2 ) {\displaystyle \gcd(N,V_{M}-2)} whenever M is a multiple of p − ( D / p ) {\displaystyle p-(D/p)} , where D = A 2 − 4 {\displaystyle D=A^{2}-4} and ( D / p ) {\displaystyle (D/p)} is the Jacobi symbol. We require that ( D / p ) = − 1 {\displaystyle (D/p)=-1} , that is, D should be a quadratic non-residue modulo p. But as we don't know p beforehand, more than one value of A may be required before finding a solution. If ( D / p ) = + 1 {\displaystyle (D/p)=+1} , this algorithm degenerates into a slow version of Pollard's p − 1 algorithm. So, for different values of M we calculate gcd ( N , V M − 2 ) {\displaystyle \gcd(N,V_{M}-2)} , and when the result is not equal to 1 or to N, we have found a non-trivial factor of N. The values of M used are successive factorials, and V M {\displaystyle V_{M}} is the M-th value of the sequence characterized by V M − 1 {\displaystyle V_{M-1}} . 
To find the M-th element V of the sequence characterized by B, we proceed in a manner similar to left-to-right exponentiation:

 x := B
 y := (B ^ 2 − 2) mod N
 for each bit of M to the right of the most significant bit do
     if the bit is 1 then
         x := (x × y − B) mod N
         y := (y ^ 2 − 2) mod N
     else
         y := (x × y − B) mod N
         x := (x ^ 2 − 2) mod N
 V := x

== Example == With N=112729 and A=5, successive values of V M {\displaystyle V_{M}} are:

 V1 of seq(5)     = V1! of seq(5) = 5
 V2 of seq(5)     = V2! of seq(5) = 23
 V3 of seq(23)    = V3! of seq(5) = 12098
 V4 of seq(12098) = V4! of seq(5) = 87680
 V5 of seq(87680) = V5! of seq(5) = 53242
 V6 of seq(53242) = V6! of seq(5) = 27666
 V7 of seq(27666) = V7! of seq(5) = 110229.

At this point, gcd(110229−2, 112729) = 139, so 139 is a non-trivial factor of 112729. Notice that p+1 = 140 = 2^2 × 5 × 7. The number 7! is the lowest factorial which is a multiple of 140, so the proper factor 139 is found in this step. Using another initial value, say A = 9, we get:

 V1 of seq(9)     = V1! of seq(9) = 9
 V2 of seq(9)     = V2! of seq(9) = 79
 V3 of seq(79)    = V3! of seq(9) = 41886
 V4 of seq(41886) = V4! of seq(9) = 79378
 V5 of seq(79378) = V5! of seq(9) = 1934
 V6 of seq(1934)  = V6! of seq(9) = 10582
 V7 of seq(10582) = V7! of seq(9) = 84241
 V8 of seq(84241) = V8! of seq(9) = 93973
 V9 of seq(93973) = V9! of seq(9) = 91645.

At this point gcd(91645−2, 112729) = 811, so 811 is a non-trivial factor of 112729. Notice that p−1 = 810 = 2 × 5 × 3^4. The number 9! is the lowest factorial which is a multiple of 810, so the proper factor 811 is found in this step. The factor 139 is not found this time because p−1 = 138 = 2 × 3 × 23, which is not a divisor of 9! As can be seen in these examples, we do not know in advance whether the prime that will be found has a smooth p+1 or p−1.
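The ladder pseudocode and the worked example above can be checked with a short program. This sketch assumes u64 arithmetic, which suffices for N = 112729 since N^2 fits comfortably in 64 bits (names are ours):

```rust
// Ordinary Euclidean GCD, used on V_M - 2 and N.
fn gcd(mut a: u64, mut b: u64) -> u64 {
    while b != 0 {
        let t = a % b;
        a = b;
        b = t;
    }
    a
}

/// V_m of the Lucas sequence characterized by b, modulo n: the left-to-right
/// ladder from the pseudocode above, maintaining (x, y) = (V_k, V_{k+1}).
fn lucas_v(b: u64, m: u64, n: u64) -> u64 {
    let mut x = b % n;
    let mut y = (x * x + n * n - 2) % n; // + n*n keeps subtractions nonnegative
    let bits = 64 - m.leading_zeros();
    for i in (0..bits - 1).rev() {
        if (m >> i) & 1 == 1 {
            x = (x * y + n * n - b % n) % n; // V_{2k+1} = V_k V_{k+1} - B
            y = (y * y + n * n - 2) % n;     // V_{2k+2} = V_{k+1}^2 - 2
        } else {
            y = (x * y + n * n - b % n) % n;
            x = (x * x + n * n - 2) % n;     // V_{2k} = V_k^2 - 2
        }
    }
    x
}

/// Williams's p + 1 method: replacing v by V_k(v) at step k makes v equal to
/// V_{k!} of the original sequence, as in the factorial chain above.
fn williams_p_plus_1(n: u64, a: u64, max_k: u64) -> Option<u64> {
    let mut v = a % n;
    for k in 2..=max_k {
        v = lucas_v(v, k, n);
        let d = gcd((v + n - 2) % n, n);
        if d != 1 && d != n {
            return Some(d);
        }
    }
    None
}
```

Starting from A = 5 the program finds 139 at k = 7, and starting from A = 9 it finds 811 at k = 9, reproducing both runs of the example.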
== Generalization == Based on Pollard's p − 1 and Williams's p+1 factoring algorithms, Eric Bach and Jeffrey Shallit developed techniques to factor n efficiently provided that it has a prime factor p such that any kth cyclotomic polynomial Φk(p) is smooth. The first few cyclotomic polynomials are given by the sequence Φ1(p) = p−1, Φ2(p) = p+1, Φ3(p) = p^2+p+1, and Φ4(p) = p^2+1. == References == Williams, H. C. (1982), "A p+1 method of factoring", Mathematics of Computation, 39 (159): 225–234, doi:10.2307/2007633, JSTOR 2007633, MR 0658227 == External links == P + 1 factorization method at Prime Wiki
Wikipedia/Williams's_p_+_1_algorithm
The Lenstra–Lenstra–Lovász (LLL) lattice basis reduction algorithm is a polynomial time lattice reduction algorithm invented by Arjen Lenstra, Hendrik Lenstra and László Lovász in 1982. Given a basis B = { b 1 , b 2 , … , b d } {\displaystyle \mathbf {B} =\{\mathbf {b} _{1},\mathbf {b} _{2},\dots ,\mathbf {b} _{d}\}} with n-dimensional integer coordinates, for a lattice L (a discrete subgroup of R^n) with d ≤ n {\displaystyle d\leq n} , the LLL algorithm calculates an LLL-reduced (short, nearly orthogonal) lattice basis in time O ( d 5 n log 3 ⁡ B ) {\displaystyle {\mathcal {O}}(d^{5}n\log ^{3}B)} where B {\displaystyle B} is the largest length of b i {\displaystyle \mathbf {b} _{i}} under the Euclidean norm, that is, B = max ( ‖ b 1 ‖ 2 , ‖ b 2 ‖ 2 , … , ‖ b d ‖ 2 ) {\displaystyle B=\max \left(\|\mathbf {b} _{1}\|_{2},\|\mathbf {b} _{2}\|_{2},\dots ,\|\mathbf {b} _{d}\|_{2}\right)} . The original applications were to give polynomial-time algorithms for factorizing polynomials with rational coefficients, for finding simultaneous rational approximations to real numbers, and for solving the integer linear programming problem in fixed dimensions. == LLL reduction == The precise definition of LLL-reduced is as follows: Given a basis B = { b 1 , b 2 , … , b n } , {\displaystyle \mathbf {B} =\{\mathbf {b} _{1},\mathbf {b} _{2},\dots ,\mathbf {b} _{n}\},} define its Gram–Schmidt orthogonal basis B ∗ = { b 1 ∗ , b 2 ∗ , … , b n ∗ } , {\displaystyle \mathbf {B} ^{*}=\{\mathbf {b} _{1}^{*},\mathbf {b} _{2}^{*},\dots ,\mathbf {b} _{n}^{*}\},} and the Gram–Schmidt coefficients μ i , j = ⟨ b i , b j ∗ ⟩ ⟨ b j ∗ , b j ∗ ⟩ , {\displaystyle \mu _{i,j}={\frac {\langle \mathbf {b} _{i},\mathbf {b} _{j}^{*}\rangle }{\langle \mathbf {b} _{j}^{*},\mathbf {b} _{j}^{*}\rangle }},} for any 1 ≤ j < i ≤ n {\displaystyle 1\leq j<i\leq n} .
Then the basis B {\displaystyle B} is LLL-reduced if there exists a parameter δ {\displaystyle \delta } in (0.25, 1] such that the following holds: (size-reduced) For 1 ≤ j < i ≤ n : | μ i , j | ≤ 0.5 {\displaystyle 1\leq j<i\leq n\colon \left|\mu _{i,j}\right|\leq 0.5} . By definition, this property guarantees the length reduction of the ordered basis. (Lovász condition) For k = 2,3,..,n : δ ‖ b k − 1 ∗ ‖ 2 ≤ ‖ b k ∗ ‖ 2 + μ k , k − 1 2 ‖ b k − 1 ∗ ‖ 2 {\displaystyle \colon \delta \Vert \mathbf {b} _{k-1}^{*}\Vert ^{2}\leq \Vert \mathbf {b} _{k}^{*}\Vert ^{2}+\mu _{k,k-1}^{2}\Vert \mathbf {b} _{k-1}^{*}\Vert ^{2}} . Here, estimating the value of the δ {\displaystyle \delta } parameter, we can conclude how well the basis is reduced. Greater values of δ {\displaystyle \delta } lead to stronger reductions of the basis. Initially, A. Lenstra, H. Lenstra and L. Lovász demonstrated the LLL-reduction algorithm for δ = 3 4 {\displaystyle \delta ={\frac {3}{4}}} . Note that although LLL-reduction is well-defined for δ = 1 {\displaystyle \delta =1} , the polynomial-time complexity is guaranteed only for δ {\displaystyle \delta } in ( 0.25 , 1 ) {\displaystyle (0.25,1)} . The LLL algorithm computes LLL-reduced bases. There is no known efficient algorithm to compute a basis in which the basis vectors are as short as possible for lattices of dimensions greater than 4. However, an LLL-reduced basis is nearly as short as possible, in the sense that there are absolute bounds c i > 1 {\displaystyle c_{i}>1} such that the first basis vector is no more than c 1 {\displaystyle c_{1}} times as long as a shortest vector in the lattice, the second basis vector is likewise within c 2 {\displaystyle c_{2}} of the second successive minimum, and so on. == Applications == An early successful application of the LLL algorithm was its use by Andrew Odlyzko and Herman te Riele in disproving Mertens conjecture. 
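The size-reduction and Lovász conditions defined above can be checked mechanically for a concrete basis. The following sketch (names are ours; it uses f64 Gram–Schmidt, which is adequate for the small examples in this article, though serious implementations need exact rationals or carefully managed floating point; the basis is assumed linearly independent) verifies both conditions:

```rust
// Inner product of two vectors of equal length.
fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// Check whether `basis` (one vector per row) is delta-LLL-reduced.
fn is_lll_reduced(basis: &[Vec<f64>], delta: f64) -> bool {
    let n = basis.len();
    // Gram–Schmidt: b*_i = b_i - sum_{j<i} mu_{i,j} b*_j (no normalization)
    let mut star: Vec<Vec<f64>> = Vec::new();
    let mut mu = vec![vec![0.0; n]; n];
    for i in 0..n {
        let mut v = basis[i].clone();
        for j in 0..i {
            mu[i][j] = dot(&basis[i], &star[j]) / dot(&star[j], &star[j]);
            for (vk, sk) in v.iter_mut().zip(&star[j]) {
                *vk -= mu[i][j] * sk;
            }
        }
        star.push(v);
    }
    let eps = 1e-9; // tolerance: the boundary |mu| = 1/2 is allowed
    // size-reduced: |mu_{i,j}| <= 1/2 for all j < i
    let size_reduced = (0..n).all(|i| (0..i).all(|j| mu[i][j].abs() <= 0.5 + eps));
    // Lovász condition for each consecutive pair
    let lovasz = (1..n).all(|k| {
        let prev = dot(&star[k - 1], &star[k - 1]);
        delta * prev <= dot(&star[k], &star[k]) + mu[k][k - 1].powi(2) * prev + eps
    });
    size_reduced && lovasz
}
```

On the 3-dimensional integer example given later in this article, the reduced basis (0,1,0), (1,0,1), (−1,0,2) passes both checks with δ = 3/4, while the original basis (1,1,1), (−1,0,2), (3,5,6) fails size reduction.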
The LLL algorithm has found numerous other applications in MIMO detection algorithms and cryptanalysis of public-key encryption schemes: knapsack cryptosystems, RSA with particular settings, NTRUEncrypt, and so forth. The algorithm can be used to find integer solutions to many problems. In particular, the LLL algorithm forms a core of one of the integer relation algorithms. For example, if it is believed that r=1.618034 is a (slightly rounded) root to an unknown quadratic equation with integer coefficients, one may apply LLL reduction to the lattice in R 4 {\displaystyle \mathbf {R} ^{4}} spanned by [ 1 , 0 , 0 , 10000 r 2 ] , [ 0 , 1 , 0 , 10000 r ] , {\displaystyle [1,0,0,10000r^{2}],[0,1,0,10000r],} and [ 0 , 0 , 1 , 10000 ] {\displaystyle [0,0,1,10000]} . The first vector in the reduced basis will be an integer linear combination of these three, thus necessarily of the form [ a , b , c , 10000 ( a r 2 + b r + c ) ] {\displaystyle [a,b,c,10000(ar^{2}+br+c)]} ; but such a vector is "short" only if a, b, c are small and a r 2 + b r + c {\displaystyle ar^{2}+br+c} is even smaller. Thus the first three entries of this short vector are likely to be the coefficients of the integral quadratic polynomial which has r as a root. In this example the LLL algorithm finds the shortest vector to be [1, -1, -1, 0.00025] and indeed x 2 − x − 1 {\displaystyle x^{2}-x-1} has a root equal to the golden ratio, 1.6180339887.... == Properties of LLL-reduced basis == Let B = { b 1 , b 2 , … , b n } {\displaystyle \mathbf {B} =\{\mathbf {b} _{1},\mathbf {b} _{2},\dots ,\mathbf {b} _{n}\}} be a δ {\displaystyle \delta } -LLL-reduced basis of a lattice L {\displaystyle {\mathcal {L}}} . From the definition of LLL-reduced basis, we can derive several other useful properties about B {\displaystyle \mathbf {B} } . 
The first vector in the basis cannot be much larger than the shortest non-zero vector: ‖ b 1 ‖ ≤ ( 2 / ( 4 δ − 1 ) ) n − 1 ⋅ λ 1 ( L ) {\displaystyle \Vert \mathbf {b} _{1}\Vert \leq (2/({\sqrt {4\delta -1}}))^{n-1}\cdot \lambda _{1}({\mathcal {L}})} . In particular, for δ = 3 / 4 {\displaystyle \delta =3/4} , this gives ‖ b 1 ‖ ≤ 2 ( n − 1 ) / 2 ⋅ λ 1 ( L ) {\displaystyle \Vert \mathbf {b} _{1}\Vert \leq 2^{(n-1)/2}\cdot \lambda _{1}({\mathcal {L}})} . The first vector in the basis is also bounded by the determinant of the lattice: ‖ b 1 ‖ ≤ ( 2 / ( 4 δ − 1 ) ) ( n − 1 ) / 2 ⋅ ( det ( L ) ) 1 / n {\displaystyle \Vert \mathbf {b} _{1}\Vert \leq (2/({\sqrt {4\delta -1}}))^{(n-1)/2}\cdot (\det({\mathcal {L}}))^{1/n}} . In particular, for δ = 3 / 4 {\displaystyle \delta =3/4} , this gives ‖ b 1 ‖ ≤ 2 ( n − 1 ) / 4 ⋅ ( det ( L ) ) 1 / n {\displaystyle \Vert \mathbf {b} _{1}\Vert \leq 2^{(n-1)/4}\cdot (\det({\mathcal {L}}))^{1/n}} . The product of the norms of the vectors in the basis cannot be much larger than the determinant of the lattice: let δ = 3 / 4 {\displaystyle \delta =3/4} , then ∏ i = 1 n ‖ b i ‖ ≤ 2 n ( n − 1 ) / 4 ⋅ det ( L ) {\textstyle \prod _{i=1}^{n}\Vert \mathbf {b} _{i}\Vert \leq 2^{n(n-1)/4}\cdot \det({\mathcal {L}})} . == LLL algorithm pseudocode == The following description is based on (Hoffstein, Pipher & Silverman 2008, Theorem 6.68), with the corrections from the errata.

 INPUT
     a lattice basis b1, b2, ..., bn in Z^m
     a parameter δ with 1/4 < δ < 1, most commonly δ = 3/4
 PROCEDURE
     B* <- GramSchmidt({b1, ..., bn}) = {b1*, ..., bn*};  and do not normalize
     μi,j <- InnerProduct(bi, bj*)/InnerProduct(bj*, bj*);  using the most current values of bi and bj*
     k <- 2;
     while k <= n do
         for j from k−1 to 1 do
             if |μk,j| > 1/2 then
                 bk <- bk − ⌊μk,j⌉ bj;
                 Update B* and the related μi,j's as needed.
                 (The naive method is to recompute B* <- GramSchmidt({b1, ..., bn}) = {b1*, ..., bn*} whenever bi changes.)
             end if
         end for
         if InnerProduct(bk*, bk*) > (δ − (μk,k−1)^2) InnerProduct(bk−1*, bk−1*) then
             k <- k + 1;
         else
             Swap bk and bk−1;
             Update B* and the related μi,j's as needed.
             k <- max(k−1, 2);
         end if
     end while
     return B, the LLL reduced basis of {b1, ..., bn}
 OUTPUT
     the reduced basis b1, b2, ..., bn in Z^m

== Examples == === Example from Z^3 === Let a lattice basis b 1 , b 2 , b 3 ∈ Z 3 {\displaystyle \mathbf {b} _{1},\mathbf {b} _{2},\mathbf {b} _{3}\in \mathbf {Z} ^{3}} be given by the columns of [ 1 − 1 3 1 0 5 1 2 6 ] {\displaystyle {\begin{bmatrix}1&-1&3\\1&0&5\\1&2&6\end{bmatrix}}} ; then the reduced basis is [ 0 1 − 1 1 0 0 0 1 2 ] , {\displaystyle {\begin{bmatrix}0&1&-1\\1&0&0\\0&1&2\end{bmatrix}},} which is size-reduced, satisfies the Lovász condition, and is hence LLL-reduced, as described above. See W. Bosma for details of the reduction process. === Example from Z[i]^4 === Likewise, for the basis over the complex integers given by the columns of the matrix below, [ − 2 + 2 i 7 + 3 i 7 + 3 i − 5 + 4 i 3 + 3 i − 2 + 4 i 6 + 2 i − 1 + 4 i 2 + 2 i − 8 + 0 i − 9 + 1 i − 7 + 5 i 8 + 2 i − 9 + 0 i 6 + 3 i − 4 + 4 i ] , {\displaystyle {\begin{bmatrix}-2+2i&7+3i&7+3i&-5+4i\\3+3i&-2+4i&6+2i&-1+4i\\2+2i&-8+0i&-9+1i&-7+5i\\8+2i&-9+0i&6+3i&-4+4i\end{bmatrix}},} then the columns of the matrix below give an LLL-reduced basis. [ − 6 + 3 i − 2 + 2 i 2 − 2 i − 3 + 6 i 6 − 1 i 3 + 3 i 5 − 5 i 2 + 1 i 2 − 2 i 2 + 2 i − 3 − 1 i − 5 + 3 i − 2 + 1 i 8 + 2 i 7 + 1 i − 2 − 4 i ] .
{\displaystyle {\begin{bmatrix}-6+3i&-2+2i&2-2i&-3+6i\\6-1i&3+3i&5-5i&2+1i\\2-2i&2+2i&-3-1i&-5+3i\\-2+1i&8+2i&7+1i&-2-4i\\\end{bmatrix}}.} == Implementations == LLL is implemented in Arageli as the function lll_reduction_int fpLLL as a stand-alone implementation FLINT as the function fmpz_lll GAP as the function LLLReducedBasis Macaulay2 as the function LLL in the package LLLBases Magma as the functions LLL and LLLGram (taking a gram matrix) Maple as the function IntegerRelations[LLL] Mathematica as the function LatticeReduce Number Theory Library (NTL) as the function LLL PARI/GP as the function qflll Pymatgen as the function analysis.get_lll_reduced_lattice SageMath as the method LLL driven by fpLLL and NTL Isabelle/HOL in the 'archive of formal proofs' entry LLL_Basis_Reduction. This code exports to efficiently executable Haskell. == See also == Coppersmith method == Notes == == References == Napias, Huguette (1996). "A generalization of the LLL algorithm over euclidean rings or orders". Journal de Théorie des Nombres de Bordeaux. 8 (2): 387–396. doi:10.5802/jtnb.176. Cohen, Henri (2000). A course in computational algebraic number theory. GTM. Vol. 138. Springer. ISBN 3-540-55640-0. Borwein, Peter (2002). Computational Excursions in Analysis and Number Theory. Springer. ISBN 0-387-95444-9. Luk, Franklin T.; Qiao, Sanzheng (2011). "A pivoted LLL algorithm". Linear Algebra and Its Applications. 434 (11): 2296–2307. doi:10.1016/j.laa.2010.04.003. Hoffstein, Jeffrey; Pipher, Jill; Silverman, J.H. (2008). An Introduction to Mathematical Cryptography. Springer. ISBN 978-0-387-77993-5.
Wikipedia/Lenstra–Lenstra–Lovász_lattice_basis_reduction_algorithm
A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Numerous algorithms are known and there has been much research into the topic. The oldest and simplest method, known since antiquity as long multiplication or grade-school multiplication, consists of multiplying every digit in the first number by every digit in the second and adding the results. This has a time complexity of O ( n 2 ) {\displaystyle O(n^{2})} , where n is the number of digits. When done by hand, this may also be reframed as grid method multiplication or lattice multiplication. In software, this may be called "shift and add" due to bitshifts and addition being the only two operations needed. In 1960, Anatoly Karatsuba discovered Karatsuba multiplication, unleashing a flood of research into fast multiplication algorithms. This method uses three multiplications rather than four to multiply two two-digit numbers. (A variant of this can also be used to multiply complex numbers quickly.) Done recursively, this has a time complexity of O ( n log 2 ⁡ 3 ) {\displaystyle O(n^{\log _{2}3})} . Splitting numbers into more than two parts results in Toom-Cook multiplication; for example, using three parts results in the Toom-3 algorithm. Using many parts can set the exponent arbitrarily close to 1, but the constant factor also grows, making it impractical. In 1968, the Schönhage-Strassen algorithm, which makes use of a Fourier transform over a modulus, was discovered. It has a time complexity of O ( n log ⁡ n log ⁡ log ⁡ n ) {\displaystyle O(n\log n\log \log n)} . In 2007, Martin Fürer proposed an algorithm with complexity O ( n log ⁡ n 2 Θ ( log ∗ ⁡ n ) ) {\displaystyle O(n\log n2^{\Theta (\log ^{*}n)})} . 
In 2014, Harvey, Joris van der Hoeven, and Lecerf proposed one with complexity O ( n log ⁡ n 2 3 log ∗ ⁡ n ) {\displaystyle O(n\log n2^{3\log ^{*}n})} , thus making the implicit constant explicit; this was improved to O ( n log ⁡ n 2 2 log ∗ ⁡ n ) {\displaystyle O(n\log n2^{2\log ^{*}n})} in 2018. Lastly, in 2019, Harvey and van der Hoeven came up with a galactic algorithm with complexity O ( n log ⁡ n ) {\displaystyle O(n\log n)} . This matches a guess by Schönhage and Strassen that this would be the optimal bound, although this remains a conjecture today. Integer multiplication algorithms can also be used to multiply polynomials by means of the method of Kronecker substitution. == Long multiplication == If a positional numeral system is used, a natural way of multiplying numbers is taught in schools as long multiplication, sometimes called grade-school multiplication, sometimes called the Standard Algorithm: multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted results. It requires memorization of the multiplication table for single digits. This is the usual algorithm for multiplying larger numbers by hand in base 10. A person doing long multiplication on paper will write down all the products and then add them together; an abacus-user will sum the products as soon as each one is computed. === Example === This example uses long multiplication to multiply 23,958,233 (multiplicand) by 5,830 (multiplier) and arrives at 139,676,498,390 for the result (product). 
23958233
× 5830
———————————————
00000000 ( = 23,958,233 × 0)
71874699 ( = 23,958,233 × 30)
191665864 ( = 23,958,233 × 800)
+ 119791165 ( = 23,958,233 × 5,000)
———————————————
139676498390 ( = 139,676,498,390)

==== Other notations ====
In some countries such as Germany, the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:

23958233 · 5830
———————————————
119791165
191665864
71874699
00000000
———————————————
139676498390

The pseudocode below describes the process of the above multiplication. It keeps only one row to maintain the sum, which finally becomes the result. Note that the '+=' operator is used to denote adding to an existing value and storing the result (akin to languages such as Java and C) for compactness.

=== Usage in computers ===
Some chips implement long multiplication, in hardware or in microcode, for various integer and floating-point word sizes. In arbitrary-precision arithmetic, it is common to use long multiplication with the base set to 2^w, where w is the number of bits in a word, for multiplying relatively small numbers. To multiply two numbers with n digits using this method, one needs about n^2 operations. More formally, multiplying two n-digit numbers using long multiplication requires Θ(n^2) single-digit operations (additions and multiplications). When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. A typical solution is to represent the number in a small base, b, such that, for example, 8b is a representable machine integer. Several additions can then be performed before an overflow occurs. When the number becomes too large, we add part of it to the result, or we carry and map the remaining part back to a number that is less than b. This process is called normalization. Richard Brent used this approach in his Fortran package, MP.
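The single-row accumulation described above can be sketched in Python as follows (the function name, the little-endian digit-list representation, and the default base are illustrative assumptions, not part of the original presentation):

```python
def long_multiply(a, b, base=10):
    """Multiply two numbers given as little-endian digit lists in `base`.

    Keeps one running row of partial sums, normalizing carries as it goes.
    """
    product = [0] * (len(a) + len(b))
    for j, b_digit in enumerate(b):
        carry = 0
        for i, a_digit in enumerate(a):
            total = product[i + j] + carry + a_digit * b_digit
            product[i + j] = total % base   # keep this digit below the base
            carry = total // base           # propagate the excess leftward
        product[j + len(a)] += carry
    return product

# 23,958,233 × 5,830, digits stored least-significant first
a = [int(d) for d in reversed("23958233")]
b = [int(d) for d in reversed("5830")]
digits = long_multiply(a, b)
print(int("".join(map(str, reversed(digits)))))  # prints 139676498390
```

Normalization happens inside the inner loop: each slot is reduced modulo the base and the excess is carried, so no intermediate value grows beyond base².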
Computers initially used a very similar algorithm to long multiplication in base 2, but modern processors have optimized circuitry for fast multiplications using more efficient algorithms, at the price of a more complex hardware realization. In base two, long multiplication is sometimes called "shift and add", because the algorithm simplifies and just consists of shifting left (multiplying by powers of two) and adding. Most currently available microprocessors implement this or other similar algorithms (such as Booth encoding) for various integer and floating-point sizes in hardware multipliers or in microcode. On currently available processors, a bit-wise shift instruction is usually (but not always) faster than a multiply instruction and can be used to multiply (shift left) and divide (shift right) by powers of two. Multiplication by a constant and division by a constant can be implemented using a sequence of shifts and adds or subtracts. For example, there are several ways to multiply by 10 using only bit-shift and addition. In some cases such sequences of shifts and adds or subtracts will outperform hardware multipliers and especially dividers. A division by a number of the form 2 n {\displaystyle 2^{n}} or 2 n ± 1 {\displaystyle 2^{n}\pm 1} often can be converted to such a short sequence. == Algorithms for multiplying by hand == In addition to the standard long multiplication, there are several other methods used to perform multiplication by hand. Such algorithms may be devised for speed, ease of calculation, or educational value, particularly when computers or multiplication tables are unavailable. === Grid method === The grid method (or box method) is an introductory method for multiple-digit multiplication that is often taught to pupils at primary school or elementary school. It has been a standard part of the national primary school mathematics curriculum in England and Wales since the late 1990s. 
Both factors are broken up ("partitioned") into their hundreds, tens and units parts, and the products of the parts are then calculated explicitly in a relatively simple multiplication-only stage, before these contributions are then totalled to give the final answer in a separate addition stage. The calculation 34 × 13, for example, could be computed using the grid:

  ×   30    4
 10  300   40
  3   90   12

followed by addition to obtain 442, either in a single sum (see right), or through forming the row-by-row totals (300 + 40) + (90 + 12) = 340 + 102 = 442. This calculation approach (though not necessarily with the explicit grid arrangement) is also known as the partial products algorithm. Its essence is the calculation of the simple multiplications separately, with all addition being left to the final gathering-up stage. The grid method can in principle be applied to factors of any size, although the number of sub-products becomes cumbersome as the number of digits increases. Nevertheless, it is seen as a usefully explicit method to introduce the idea of multiple-digit multiplications; and, in an age when most multiplication calculations are done using a calculator or a spreadsheet, it may in practice be the only multiplication algorithm that some students will ever need.

=== Lattice multiplication ===
Lattice, or sieve, multiplication is algorithmically equivalent to long multiplication. It requires the preparation of a lattice (a grid drawn on paper) which guides the calculation and separates all the multiplications from the additions. It was introduced to Europe in 1202 in Fibonacci's Liber Abaci. Fibonacci described the operation as mental, using his right and left hands to carry the intermediate calculations. Matrakçı Nasuh presented 6 different variants of this method in his 16th-century book, Umdet-ul Hisab. It was widely used in Enderun schools across the Ottoman Empire. Napier's bones, or Napier's rods, also used this method, as published by Napier in 1617, the year of his death.
As shown in the example, the multiplicand and multiplier are written above and to the right of a lattice, or a sieve. It is found in Muhammad ibn Musa al-Khwarizmi's "Arithmetic", one of Leonardo's sources mentioned by Sigler, author of "Fibonacci's Liber Abaci", 2002. During the multiplication phase, the lattice is filled in with two-digit products of the corresponding digits labeling each row and column: the tens digit goes in the top-left corner. During the addition phase, the lattice is summed on the diagonals. Finally, if a carry phase is necessary, the answer as shown along the left and bottom sides of the lattice is converted to normal form by carrying ten's digits as in long addition or multiplication. ==== Example ==== The pictures on the right show how to calculate 345 × 12 using lattice multiplication. As a more complicated example, consider the picture below displaying the computation of 23,958,233 multiplied by 5,830 (multiplier); the result is 139,676,498,390. Notice 23,958,233 is along the top of the lattice and 5,830 is along the right side. The products fill the lattice and the sum of those products (on the diagonal) are along the left and bottom sides. Then those sums are totaled as shown. === Russian peasant multiplication === The binary method is also known as peasant multiplication, because it has been widely used by people who are classified as peasants and thus have not memorized the multiplication tables required for long multiplication. The algorithm was in use in ancient Egypt. Its main advantages are that it can be taught quickly, requires no memorization, and can be performed using tokens, such as poker chips, if paper and pencil aren't available. The disadvantage is that it takes more steps than long multiplication, so it can be unwieldy for large numbers. 
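The halve-and-double procedure (detailed in the description that follows) can be sketched in Python; the function name is an illustrative choice:

```python
def peasant_multiply(a, b):
    """Russian peasant multiplication: halve `a`, double `b`,
    and sum the doublings on the rows where the halved value is odd."""
    total = 0
    while a > 0:
        if a % 2 == 1:      # row not crossed out: keep this doubling
            total += b
        a //= 2             # halve, discarding the remainder
        b *= 2              # double
    return total

print(peasant_multiply(11, 3))  # prints 33
```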
==== Description ====
On paper, write down in one column the numbers you get when you repeatedly halve the multiplier, ignoring the remainder; in a column beside it repeatedly double the multiplicand. Cross out each row in which the last digit of the first number is even, and add the remaining numbers in the second column to obtain the product.

==== Examples ====
This example uses peasant multiplication to multiply 11 by 3 to arrive at a result of 33.

Decimal:       Binary:
11    3        1011      11
 5    6         101     110
 2   12          10    1100
 1   24           1   11000
     ——              ——————
     33              100001

Describing the steps explicitly:
11 and 3 are written at the top.
11 is halved (5.5) and 3 is doubled (6). The fractional portion is discarded (5.5 becomes 5).
5 is halved (2.5) and 6 is doubled (12). The fractional portion is discarded (2.5 becomes 2). The figure in the left column (2) is even, so the figure in the right column (12) is discarded.
2 is halved (1) and 12 is doubled (24).
All not-scratched-out values are summed: 3 + 6 + 24 = 33.

The method works because multiplication is distributive, so: 3 × 11 = 3 × ( 1 × 2 0 + 1 × 2 1 + 0 × 2 2 + 1 × 2 3 ) = 3 × ( 1 + 2 + 8 ) = 3 + 6 + 24 = 33.
{\displaystyle {\begin{aligned}3\times 11&=3\times (1\times 2^{0}+1\times 2^{1}+0\times 2^{2}+1\times 2^{3})\\&=3\times (1+2+8)\\&=3+6+24\\&=33.\end{aligned}}}

A more complicated example, using the figures from the earlier examples (23,958,233 and 5,830):

Decimal:            Binary:
5830  23958233      1011011000110  1011011011001001011011001
2915  47916466      101101100011   10110110110010010110110010
1457  95832932      10110110001    101101101100100101101100100
728   191665864     1011011000     1011011011001001011011001000
364   383331728     101101100      10110110110010010110110010000
182   766663456     10110110       101101101100100101101100100000
91    1533326912    1011011        1011011011001001011011001000000
45    3066653824    101101         10110110110010010110110010000000
22    6133307648    10110          101101101100100101101100100000000
11    12266615296   1011           1011011011001001011011001000000000
5     24533230592   101            10110110110010010110110010000000000
2     49066461184   10             101101101100100101101100100000000000
1     98132922368   1              1011011011001001011011001000000000000
      ————————————                 —————————————————————————————————————
      139676498390                 1022143253354344244353353243222210110 (before carry)
                                   10000010000101010111100011100111010110

=== Quarter square multiplication ===
This formula can in some cases be used to make multiplication tasks easier to complete: ( x + y ) 2 4 − ( x − y ) 2 4 = 1 4 ( ( x 2 + 2 x y + y 2 ) − ( x 2 − 2 x y + y 2 ) ) = 1 4 ( 4 x y ) = x y . {\displaystyle {\frac {\left(x+y\right)^{2}}{4}}-{\frac {\left(x-y\right)^{2}}{4}}={\frac {1}{4}}\left(\left(x^{2}+2xy+y^{2}\right)-\left(x^{2}-2xy+y^{2}\right)\right)={\frac {1}{4}}\left(4xy\right)=xy.} In the case where x {\displaystyle x} and y {\displaystyle y} are integers, we have that ( x + y ) 2 ≡ ( x − y ) 2 mod 4 {\displaystyle (x+y)^{2}\equiv (x-y)^{2}{\bmod {4}}} because x + y {\displaystyle x+y} and x − y {\displaystyle x-y} are either both even or both odd.
This means that x y = 1 4 ( x + y ) 2 − 1 4 ( x − y ) 2 = ( ( x + y ) 2 div 4 ) − ( ( x − y ) 2 div 4 ) {\displaystyle {\begin{aligned}xy&={\frac {1}{4}}(x+y)^{2}-{\frac {1}{4}}(x-y)^{2}\\&=\left((x+y)^{2}{\text{ div }}4\right)-\left((x-y)^{2}{\text{ div }}4\right)\end{aligned}}} and it is sufficient to (pre-)compute the integral part of squares divided by 4, as in the following example.

==== Examples ====
Below is a lookup table of quarter squares with the remainder discarded for the digits 0 through 18; this allows for the multiplication of numbers up to 9×9. If, for example, you wanted to multiply 9 by 3, you observe that the sum and difference are 12 and 6 respectively. Looking both those values up on the table yields 36 and 9, the difference of which is 27, which is the product of 9 and 3.

==== History of quarter square multiplication ====
Quarter square multiplication, which relies on the floor function, is attributed by some sources to Babylonian mathematics (2000–1600 BC). Antoine Voisin published a table of quarter squares from 1 to 1000 in 1817 as an aid in multiplication. A larger table of quarter squares from 1 to 100000 was published by Samuel Laundy in 1856, and a table from 1 to 200000 by Joseph Blater in 1888. Quarter square multipliers were used in analog computers to form an analog signal that was the product of two analog input signals. In this application, the sum and difference of two input voltages are formed using operational amplifiers. The square of each of these is approximated using piecewise linear circuits. Finally the difference of the two squares is formed and scaled by a factor of one fourth using yet another operational amplifier. In 1980, Everett L. Johnson proposed using the quarter square method in a digital multiplier.
To form the product of two 8-bit integers, for example, the digital device forms the sum and difference, looks both quantities up in a table of squares, takes the difference of the results, and divides by four by shifting two bits to the right. For 8-bit integers the table of quarter squares will have 2⁹−1 = 511 entries (one entry for the full range 0..510 of possible sums, the differences using only the first 256 entries in range 0..255) or 2⁹−1 = 511 entries (using for negative differences the technique of two's complement and 9-bit masking, which avoids testing the sign of differences), each entry being 16 bits wide (the entry values are from (0²/4)=0 to (510²/4)=65025). The quarter square multiplier technique has benefited 8-bit systems that do not have any support for a hardware multiplier. Charles Putney implemented this for the 6502.

== Computational complexity of multiplication ==
A line of research in theoretical computer science is about the number of single-bit arithmetic operations necessary to multiply two n {\displaystyle n} -bit integers. This is known as the computational complexity of multiplication. Usual algorithms done by hand have asymptotic complexity of O ( n 2 ) {\displaystyle O(n^{2})} , but in 1960 Anatoly Karatsuba discovered that better complexity was possible (with the Karatsuba algorithm). Currently, the algorithm with the best computational complexity is a 2019 algorithm of David Harvey and Joris van der Hoeven, which uses the strategies of using number-theoretic transforms introduced with the Schönhage–Strassen algorithm to multiply integers using only O ( n log ⁡ n ) {\displaystyle O(n\log n)} operations. This is conjectured to be the best possible algorithm, but lower bounds of Ω ( n log ⁡ n ) {\displaystyle \Omega (n\log n)} are not known.

=== Karatsuba multiplication ===
Karatsuba multiplication is an O(n^(log₂ 3)) ≈ O(n^1.585) divide-and-conquer algorithm that uses recursion to merge together sub-calculations.
Rewriting the formula makes it possible to compute the product from smaller sub-products, and applying the rewriting recursively yields a fast algorithm. Let x {\displaystyle x} and y {\displaystyle y} be represented as n {\displaystyle n} -digit strings in some base B {\displaystyle B} . For any positive integer m {\displaystyle m} less than n {\displaystyle n} , one can write the two given numbers as x = x 1 B m + x 0 , {\displaystyle x=x_{1}B^{m}+x_{0},} y = y 1 B m + y 0 , {\displaystyle y=y_{1}B^{m}+y_{0},} where x 0 {\displaystyle x_{0}} and y 0 {\displaystyle y_{0}} are less than B m {\displaystyle B^{m}} . The product is then x y = ( x 1 B m + x 0 ) ( y 1 B m + y 0 ) = x 1 y 1 B 2 m + ( x 1 y 0 + x 0 y 1 ) B m + x 0 y 0 = z 2 B 2 m + z 1 B m + z 0 , {\displaystyle {\begin{aligned}xy&=(x_{1}B^{m}+x_{0})(y_{1}B^{m}+y_{0})\\&=x_{1}y_{1}B^{2m}+(x_{1}y_{0}+x_{0}y_{1})B^{m}+x_{0}y_{0}\\&=z_{2}B^{2m}+z_{1}B^{m}+z_{0},\\\end{aligned}}} where z 2 = x 1 y 1 , {\displaystyle z_{2}=x_{1}y_{1},} z 1 = x 1 y 0 + x 0 y 1 , {\displaystyle z_{1}=x_{1}y_{0}+x_{0}y_{1},} z 0 = x 0 y 0 . {\displaystyle z_{0}=x_{0}y_{0}.} These formulae require four multiplications and were known to Charles Babbage. Karatsuba observed that x y {\displaystyle xy} can be computed in only three multiplications, at the cost of a few extra additions. With z 0 {\displaystyle z_{0}} and z 2 {\displaystyle z_{2}} as before one can observe that z 1 = x 1 y 0 + x 0 y 1 = x 1 y 0 + x 0 y 1 + x 1 y 1 − x 1 y 1 + x 0 y 0 − x 0 y 0 = x 1 y 0 + x 0 y 0 + x 0 y 1 + x 1 y 1 − x 1 y 1 − x 0 y 0 = ( x 1 + x 0 ) y 0 + ( x 0 + x 1 ) y 1 − x 1 y 1 − x 0 y 0 = ( x 1 + x 0 ) ( y 0 + y 1 ) − x 1 y 1 − x 0 y 0 = ( x 1 + x 0 ) ( y 1 + y 0 ) − z 2 − z 0 .
{\displaystyle {\begin{aligned}z_{1}&=x_{1}y_{0}+x_{0}y_{1}\\&=x_{1}y_{0}+x_{0}y_{1}+x_{1}y_{1}-x_{1}y_{1}+x_{0}y_{0}-x_{0}y_{0}\\&=x_{1}y_{0}+x_{0}y_{0}+x_{0}y_{1}+x_{1}y_{1}-x_{1}y_{1}-x_{0}y_{0}\\&=(x_{1}+x_{0})y_{0}+(x_{0}+x_{1})y_{1}-x_{1}y_{1}-x_{0}y_{0}\\&=(x_{1}+x_{0})(y_{0}+y_{1})-x_{1}y_{1}-x_{0}y_{0}\\&=(x_{1}+x_{0})(y_{1}+y_{0})-z_{2}-z_{0}.\\\end{aligned}}} Because of the overhead of recursion, Karatsuba's multiplication is slower than long multiplication for small values of n; typical implementations therefore switch to long multiplication for small values of n.

==== General case with multiplication of N numbers ====
By expanding the product and examining the pattern, one sees the following: ( x 1 B m + x 0 ) ( y 1 B m + y 0 ) ( z 1 B m + z 0 ) ( a 1 B m + a 0 ) = a 1 x 1 y 1 z 1 B 4 m + a 1 x 1 y 1 z 0 B 3 m + a 1 x 1 y 0 z 1 B 3 m + a 1 x 0 y 1 z 1 B 3 m + a 0 x 1 y 1 z 1 B 3 m + a 1 x 1 y 0 z 0 B 2 m + a 1 x 0 y 1 z 0 B 2 m + a 0 x 1 y 1 z 0 B 2 m + a 1 x 0 y 0 z 1 B 2 m + a 0 x 1 y 0 z 1 B 2 m + a 0 x 0 y 1 z 1 B 2 m + a 1 x 0 y 0 z 0 B m 1 + a 0 x 1 y 0 z 0 B m 1 + a 0 x 0 y 1 z 0 B m 1 + a 0 x 0 y 0 z 1 B m 1 + a 0 x 0 y 0 z 0 B 1 m {\displaystyle {\begin{alignedat}{5}(x_{1}B^{m}+x_{0})(y_{1}B^{m}+y_{0})(z_{1}B^{m}+z_{0})(a_{1}B^{m}+a_{0})&=a_{1}x_{1}y_{1}z_{1}B^{4m}&+a_{1}x_{1}y_{1}z_{0}B^{3m}&+a_{1}x_{1}y_{0}z_{1}B^{3m}&+a_{1}x_{0}y_{1}z_{1}B^{3m}\\&+a_{0}x_{1}y_{1}z_{1}B^{3m}&+a_{1}x_{1}y_{0}z_{0}B^{2m}&+a_{1}x_{0}y_{1}z_{0}B^{2m}&+a_{0}x_{1}y_{1}z_{0}B^{2m}\\&+a_{1}x_{0}y_{0}z_{1}B^{2m}&+a_{0}x_{1}y_{0}z_{1}B^{2m}&+a_{0}x_{0}y_{1}z_{1}B^{2m}&+a_{1}x_{0}y_{0}z_{0}B^{m{\phantom {1}}}\\&+a_{0}x_{1}y_{0}z_{0}B^{m{\phantom {1}}}&+a_{0}x_{0}y_{1}z_{0}B^{m{\phantom {1}}}&+a_{0}x_{0}y_{0}z_{1}B^{m{\phantom {1}}}&+a_{0}x_{0}y_{0}z_{0}{\phantom {B^{1m}}}\end{alignedat}}} Each summand is associated with a unique binary number from 0 to 2 N + 1 − 1 {\displaystyle 2^{N+1}-1} , for example a 1 x 1 y 1 z 1 ⟷ 1111 , a 1 x 0 y 1 z 0 ⟷ 1010 {\displaystyle
a_{1}x_{1}y_{1}z_{1}\longleftrightarrow 1111,\ a_{1}x_{0}y_{1}z_{0}\longleftrightarrow 1010} etc. Furthermore, B is raised to the number of 1s in this binary string, multiplied by m. If we express this in fewer terms, we get: ∏ j = 1 N ( x j , 1 B m + x j , 0 ) = ∑ i = 1 2 N + 1 − 1 ∏ j = 1 N x j , c ( i , j ) B m ∑ j = 1 N c ( i , j ) = ∑ j = 0 N z j B j m {\displaystyle \prod _{j=1}^{N}(x_{j,1}B^{m}+x_{j,0})=\sum _{i=1}^{2^{N+1}-1}\prod _{j=1}^{N}x_{j,c(i,j)}B^{m\sum _{j=1}^{N}c(i,j)}=\sum _{j=0}^{N}z_{j}B^{jm}} , where c ( i , j ) {\displaystyle c(i,j)} denotes the digit of number i at position j. Notice that c ( i , j ) ∈ { 0 , 1 } {\displaystyle c(i,j)\in \{0,1\}} z 0 = ∏ j = 1 N x j , 0 z N = ∏ j = 1 N x j , 1 z N − 1 = ∏ j = 1 N ( x j , 0 + x j , 1 ) − ∑ i ≠ N − 1 N z i {\displaystyle {\begin{aligned}z_{0}&=\prod _{j=1}^{N}x_{j,0}\\z_{N}&=\prod _{j=1}^{N}x_{j,1}\\z_{N-1}&=\prod _{j=1}^{N}(x_{j,0}+x_{j,1})-\sum _{i\neq N-1}^{N}z_{i}\end{aligned}}}

==== History ====
Karatsuba's algorithm was the first known algorithm for multiplication that is asymptotically faster than long multiplication, and can thus be viewed as the starting point for the theory of fast multiplications.

=== Toom–Cook ===
Another method of multiplication is called Toom–Cook or Toom-3. The Toom–Cook method splits each number to be multiplied into multiple parts. The Toom–Cook method is one of the generalizations of the Karatsuba method. A three-way Toom–Cook can do a size-3N multiplication for the cost of five size-N multiplications. This accelerates the operation by a factor of 9/5, while the Karatsuba method accelerates it by 4/3. Although using more and more parts can reduce the time spent on recursive multiplications further, the overhead from additions and digit management also grows. For this reason, the method of Fourier transforms is typically faster for numbers with several thousand digits, and asymptotically faster for even larger numbers.
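The two-part Karatsuba recursion described above, which Toom–Cook generalizes to more parts, can be sketched in Python for nonnegative integers (the base-10 split and the single-digit base case are illustrative choices):

```python
def karatsuba(x, y):
    """Karatsuba multiplication of nonnegative integers:
    three recursive multiplications instead of four."""
    if x < 10 or y < 10:               # base case: a single-digit operand
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    B = 10 ** m                         # split both numbers around B^m
    x1, x0 = divmod(x, B)
    y1, y0 = divmod(y, B)
    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0   # the Karatsuba trick
    return z2 * B * B + z1 * B + z0

print(karatsuba(23958233, 5830))  # prints 139676498390
```

In practice (as noted above) implementations switch to long multiplication below some cutoff instead of recursing all the way to single digits.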
=== Schönhage–Strassen ===
Every number in base B can be written as a polynomial: X = ∑ i = 0 N x i B i {\displaystyle X=\sum _{i=0}^{N}{x_{i}B^{i}}} Furthermore, multiplication of two numbers can be thought of as a product of two polynomials: X Y = ( ∑ i = 0 N x i B i ) ( ∑ j = 0 N y j B j ) {\displaystyle XY=(\sum _{i=0}^{N}{x_{i}B^{i}})(\sum _{j=0}^{N}{y_{j}B^{j}})} Because, for B k {\displaystyle B^{k}} : c k = ∑ ( i , j ) : i + j = k a i b j = ∑ i = 0 k a i b k − i {\displaystyle c_{k}=\sum _{(i,j):i+j=k}{a_{i}b_{j}}=\sum _{i=0}^{k}{a_{i}b_{k-i}}} , we have a convolution. By using the FFT (fast Fourier transform) with the convolution rule, we get f ^ ( a ∗ b ) = f ^ ( ∑ i = 0 k a i b k − i ) = f ^ ( a ) ∙ f ^ ( b ) {\displaystyle {\hat {f}}(a*b)={\hat {f}}(\sum _{i=0}^{k}{a_{i}b_{k-i}})={\hat {f}}(a)\bullet {\hat {f}}(b)} . That is, C k = a k ∙ b k {\displaystyle C_{k}=a_{k}\bullet b_{k}} , where C k {\displaystyle C_{k}} is the corresponding coefficient in Fourier space. This can also be written as: f f t ( a ∗ b ) = f f t ( a ) ∙ f f t ( b ) {\displaystyle \mathrm {fft} (a*b)=\mathrm {fft} (a)\bullet \mathrm {fft} (b)} . We have the same coefficient due to linearity under the Fourier transform, and because these polynomials consist of only one unique term per coefficient: f ^ ( x n ) = ( i 2 π ) n δ ( n ) {\displaystyle {\hat {f}}(x^{n})=\left({\frac {i}{2\pi }}\right)^{n}\delta ^{(n)}} and f ^ ( a X ( ξ ) + b Y ( ξ ) ) = a X ^ ( ξ ) + b Y ^ ( ξ ) {\displaystyle {\hat {f}}(a\,X(\xi )+b\,Y(\xi ))=a\,{\hat {X}}(\xi )+b\,{\hat {Y}}(\xi )} Convolution rule: f ^ ( X ∗ Y ) = f ^ ( X ) ∙ f ^ ( Y ) {\displaystyle {\hat {f}}(X*Y)=\ {\hat {f}}(X)\bullet {\hat {f}}(Y)} Through the FFT, we have thus reduced the convolution problem to a pointwise product problem. By applying the inverse FFT (polynomial interpolation), one obtains the desired coefficients c k {\displaystyle c_{k}} . The algorithm uses a divide-and-conquer strategy to split the problem into subproblems.
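The convolution picture above can be illustrated with a toy radix-2 FFT over the complex numbers. This is only a sketch of the idea: the actual Schönhage–Strassen algorithm uses an exact number-theoretic transform over a modular ring rather than floating-point arithmetic, and all names here are illustrative:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT over the complex numbers (len(a) a power of 2)."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def multiply(x_digits, y_digits, base=10):
    """Multiply via pointwise product in Fourier space, then carry."""
    n = 1
    while n < len(x_digits) + len(y_digits):
        n *= 2                                     # pad to a power of 2
    fa = fft(x_digits + [0] * (n - len(x_digits)))
    fb = fft(y_digits + [0] * (n - len(y_digits)))
    fc = [a * b for a, b in zip(fa, fb)]           # convolution rule
    coeffs = [round((v / n).real) for v in fft(fc, invert=True)]
    result, carry = [], 0
    for c in coeffs:                               # normalize to base digits
        carry, digit = divmod(c + carry, base)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        result.append(digit)
    return result                                  # little-endian digits

digits = multiply([3, 3, 2, 8, 5, 9, 3, 2], [0, 3, 8, 5])  # 23958233 × 5830
print(int("".join(map(str, reversed(digits)))))  # prints 139676498390
```

The final carry loop is the normalization step: convolution coefficients can exceed the base, so they must be reduced to digits.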
It has a time complexity of O(n log(n) log(log(n))). ==== History ==== The algorithm was invented by Strassen (1968). It was made practical and theoretical guarantees were provided in 1971 by Schönhage and Strassen resulting in the Schönhage–Strassen algorithm. === Further improvements === In 2007 the asymptotic complexity of integer multiplication was improved by the Swiss mathematician Martin Fürer of Pennsylvania State University to O ( n log ⁡ n ⋅ 2 Θ ( log ∗ ⁡ ( n ) ) ) {\textstyle O(n\log n\cdot {2}^{\Theta (\log ^{*}(n))})} using Fourier transforms over complex numbers, where log* denotes the iterated logarithm. Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi gave a similar algorithm using modular arithmetic in 2008 achieving the same running time. In context of the above material, what these latter authors have achieved is to find N much less than 23k + 1, so that Z/NZ has a (2m)th root of unity. This speeds up computation and reduces the time complexity. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs. In 2014, Harvey, Joris van der Hoeven and Lecerf gave a new algorithm that achieves a running time of O ( n log ⁡ n ⋅ 2 3 log ∗ ⁡ n ) {\displaystyle O(n\log n\cdot 2^{3\log ^{*}n})} , making explicit the implied constant in the O ( log ∗ ⁡ n ) {\displaystyle O(\log ^{*}n)} exponent. They also proposed a variant of their algorithm which achieves O ( n log ⁡ n ⋅ 2 2 log ∗ ⁡ n ) {\displaystyle O(n\log n\cdot 2^{2\log ^{*}n})} but whose validity relies on standard conjectures about the distribution of Mersenne primes. In 2016, Covanov and Thomé proposed an integer multiplication algorithm based on a generalization of Fermat primes that conjecturally achieves a complexity bound of O ( n log ⁡ n ⋅ 2 2 log ∗ ⁡ n ) {\displaystyle O(n\log n\cdot 2^{2\log ^{*}n})} . 
This matches the 2015 conditional result of Harvey, van der Hoeven, and Lecerf but uses a different algorithm and relies on a different conjecture. In 2018, Harvey and van der Hoeven used an approach based on the existence of short lattice vectors guaranteed by Minkowski's theorem to prove an unconditional complexity bound of O ( n log ⁡ n ⋅ 2 2 log ∗ ⁡ n ) {\displaystyle O(n\log n\cdot 2^{2\log ^{*}n})} . In March 2019, David Harvey and Joris van der Hoeven announced their discovery of an O(n log n) multiplication algorithm. It was published in the Annals of Mathematics in 2021. Because Schönhage and Strassen predicted that n log(n) is the "best possible" result, Harvey said: "... our work is expected to be the end of the road for this problem, although we don't know yet how to prove this rigorously." === Lower bounds === There is a trivial lower bound of Ω(n) for multiplying two n-bit numbers on a single processor; no matching algorithm (on conventional machines, that is on Turing equivalent machines) nor any sharper lower bound is known. Multiplication lies outside of AC0[p] for any prime p, meaning there is no family of constant-depth, polynomial (or even subexponential) size circuits using AND, OR, NOT, and MODp gates that can compute a product. This follows from a constant-depth reduction of MODq to multiplication. Lower bounds for multiplication are also known for some classes of branching programs. == Complex number multiplication == Complex multiplication normally involves four multiplications and two additions. ( a + b i ) ( c + d i ) = ( a c − b d ) + ( b c + a d ) i . {\displaystyle (a+bi)(c+di)=(ac-bd)+(bc+ad)i.} Or × a b i c a c b c i d i a d i − b d {\displaystyle {\begin{array}{c|c|c}\times &a&bi\\\hline c&ac&bci\\\hline di&adi&-bd\end{array}}} As observed by Peter Ungar in 1963, one can reduce the number of multiplications to three, using essentially the same computation as Karatsuba's algorithm. 
The product (a + bi) · (c + di) can be calculated in the following way.

k1 = c · (a + b)
k2 = a · (d − c)
k3 = b · (c + d)
Real part = k1 − k3
Imaginary part = k1 + k2.

This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two. If a multiply is more expensive than three adds or subtracts, as when calculating by hand, then there is a gain in speed. On modern computers a multiply and an add can take about the same time so there may be no speed gain. There is a trade-off in that there may be some loss of precision when using floating point. For fast Fourier transforms (FFTs) (or any linear transformation) the complex multiplies are by constant coefficients c + di (called twiddle factors in FFTs), in which case two of the additions (d−c and c+d) can be precomputed. Hence, only three multiplies and three adds are required. However, trading off a multiplication for an addition in this way may no longer be beneficial with modern floating-point units.

== Polynomial multiplication ==
All the above multiplication algorithms can also be expanded to multiply polynomials. Alternatively the Kronecker substitution technique may be used to convert the problem of multiplying polynomials into a single binary multiplication. Long multiplication methods can be generalised to allow the multiplication of algebraic formulae: 14ac - 3ab + 2 multiplied by ac - ab + 1

   14ac   -3ab      2
     ac    -ab      1
————————————————————
 14a2c2  -3a2bc    2ac
-14a2bc   3a2b2   -2ab
   14ac   -3ab      2
———————————————————————————————————————
 14a2c2  -17a2bc  16ac  3a2b2  -5ab  +2
=======================================

As a further example of column based multiplication, consider multiplying 23 long tons (t), 12 hundredweight (cwt) and 2 quarters (qtr) by 47. This example uses avoirdupois measures: 1 t = 20 cwt, 1 cwt = 4 qtr.
   t  cwt  qtr
  23   12    2
             47 x
————————————————
 141   94   94
 940  470
  29   23
————————————————
1110  587   94
————————————————
1110    7    2
================
Answer: 1110 ton 7 cwt 2 qtr

First multiply the quarters by 47, the result 94 is written into the first workspace. Next, multiply cwt 12*47 = (2 + 10)*47 but don't add up the partial results (94, 470) yet. Likewise multiply 23 by 47 yielding (141, 940). The quarters column is totaled and the result placed in the second workspace (a trivial move in this case). 94 quarters is 23 cwt and 2 qtr, so place the 2 in the answer and put the 23 in the next column left. Now add up the three entries in the cwt column giving 587. This is 29 t 7 cwt, so write the 7 into the answer and the 29 in the column to the left. Now add up the tons column. There is no adjustment to make, so the result is just copied down. The same layout and methods can be used for any traditional measurements and non-decimal currencies such as the old British £sd system.

== See also ==
Binary multiplier
Dadda multiplier
Division algorithm
Horner scheme for evaluating a polynomial
Logarithm
Matrix multiplication algorithm
Mental calculation
Number-theoretic transform
Prosthaphaeresis
Slide rule
Trachtenberg system
Residue number system § Multiplication for another fast multiplication algorithm, specially efficient when many operations are done in sequence, such as in linear algebra
Wallace tree

== References ==

== Further reading ==
Warren Jr., Henry S. (2013). Hacker's Delight (2 ed.). Addison Wesley - Pearson Education, Inc. ISBN 978-0-321-84268-8.
Savard, John J. G. (2018) [2006]. "Advanced Arithmetic Techniques". quadibloc. Archived from the original on 2018-07-03. Retrieved 2018-07-16.
Johansson, Kenny (2008). Low Power and Low Complexity Shift-and-Add Based Computations (PDF) (Dissertation thesis). Linköping Studies in Science and Technology (1 ed.). Linköping, Sweden: Department of Electrical Engineering, Linköping University. ISBN 978-91-7393-836-5.
ISSN 0345-7524. No. 1201. Archived (PDF) from the original on 2017-08-13. Retrieved 2021-08-23. (x+268 pages) == External links == === Basic arithmetic === The Many Ways of Arithmetic in UCSMP Everyday Mathematics A Powerpoint presentation about ancient mathematics Lattice Multiplication Flash Video === Advanced algorithms === Multiplication Algorithms used by GMP
Wikipedia/Fürer's_algorithm
In computational number theory, Cipolla's algorithm is a technique for solving a congruence of the form x 2 ≡ n ( mod p ) , {\displaystyle x^{2}\equiv n{\pmod {p}},} where x , n ∈ F p {\displaystyle x,n\in \mathbf {F} _{p}} , so n is the square of x, and where p {\displaystyle p} is an odd prime. Here F p {\displaystyle \mathbf {F} _{p}} denotes the finite field with p {\displaystyle p} elements, { 0 , 1 , … , p − 1 } {\displaystyle \{0,1,\dots ,p-1\}} . The algorithm is named after Michele Cipolla, an Italian mathematician who discovered it in 1907. Apart from prime moduli, Cipolla's algorithm is also able to take square roots modulo prime powers.

== Algorithm ==
Inputs: p {\displaystyle p} , an odd prime, and n ∈ F p {\displaystyle n\in \mathbf {F} _{p}} , which is a square.
Outputs: x ∈ F p {\displaystyle x\in \mathbf {F} _{p}} , satisfying x 2 = n . {\displaystyle x^{2}=n.}
Step 1 is to find an a ∈ F p {\displaystyle a\in \mathbf {F} _{p}} such that a 2 − n {\displaystyle a^{2}-n} is not a square. There is no known deterministic algorithm for finding such an a {\displaystyle a} , but the following trial and error method can be used. Simply pick an a {\displaystyle a} and by computing the Legendre symbol ( a 2 − n p ) {\displaystyle \left({\frac {a^{2}-n}{p}}\right)} one can see whether a {\displaystyle a} satisfies the condition. The chance that a random a {\displaystyle a} will satisfy the condition is ( p − 1 ) / 2 p {\displaystyle (p-1)/2p} . With p {\displaystyle p} large enough this is about 1 / 2 {\displaystyle 1/2} . Therefore, the expected number of trials before finding a suitable a {\displaystyle a} is about 2.
Step 2 is to compute x by computing x = ( a + a 2 − n ) ( p + 1 ) / 2 {\displaystyle x=\left(a+{\sqrt {a^{2}-n}}\right)^{(p+1)/2}} within the field extension F p 2 = F p ( a 2 − n ) {\displaystyle \mathbf {F} _{p^{2}}=\mathbf {F} _{p}({\sqrt {a^{2}-n}})} . This x will be the one satisfying x 2 = n .
{\displaystyle x^{2}=n.} If x 2 = n {\displaystyle x^{2}=n} , then ( − x ) 2 = n {\displaystyle (-x)^{2}=n} also holds. And since p is odd, x ≠ − x {\displaystyle x\neq -x} . So whenever a solution x is found, there's always a second solution, -x. == Example == (Note: All elements before step two are considered as an element of F 13 {\displaystyle \mathbf {F} _{13}} and all elements in step two are considered as elements of F 13 2 {\displaystyle \mathbf {F} _{13^{2}}} .) Find all x such that x 2 = 10. {\displaystyle x^{2}=10.} Before applying the algorithm, it must be checked that 10 {\displaystyle 10} is indeed a square in F 13 {\displaystyle \mathbf {F} _{13}} . Therefore, the Legendre symbol ( 10 | 13 ) {\displaystyle (10|13)} has to be equal to 1. This can be computed using Euler's criterion: ( 10 | 13 ) ≡ 10 6 ≡ 1 ( mod 13 ) . {\textstyle (10|13)\equiv 10^{6}\equiv 1{\pmod {13}}.} This confirms 10 being a square and hence the algorithm can be applied. Step 1: Find an a such that a 2 − n {\displaystyle a^{2}-n} is not a square. As stated, this has to be done by trial and error. Choose a = 2 {\displaystyle a=2} . Then a 2 − n {\displaystyle a^{2}-n} becomes 7. The Legendre symbol ( 7 | 13 ) {\displaystyle (7|13)} has to be −1. Again this can be computed using Euler's criterion: 7 6 = 343 2 ≡ 5 2 ≡ 25 ≡ − 1 ( mod 13 ) . {\textstyle 7^{6}=343^{2}\equiv 5^{2}\equiv 25\equiv -1{\pmod {13}}.} So a = 2 {\displaystyle a=2} is a suitable choice for a. 
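Step 1's trial-and-error search can be sketched in Python, using Euler's criterion via modular exponentiation (the function name is an illustrative choice):

```python
def find_a(n, p):
    """Find a such that a^2 - n is a quadratic non-residue mod p,
    testing candidates with Euler's criterion: t^((p-1)/2) ≡ -1 (mod p)."""
    for a in range(1, p):
        # Legendre symbol of a^2 - n is -1 (i.e. p - 1 mod p) for a non-residue
        if pow(a * a - n, (p - 1) // 2, p) == p - 1:
            return a
    raise ValueError("no suitable a found (is n a square mod p?)")

print(find_a(10, 13))  # prints 2, matching the choice a = 2 above
```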
Step 2: Compute x = ( a + a 2 − n ) ( p + 1 ) / 2 = ( 2 + − 6 ) 7 {\displaystyle x=\left(a+{\sqrt {a^{2}-n}}\right)^{(p+1)/2}=\left(2+{\sqrt {-6}}\right)^{7}} in F 13 ( − 6 ) {\displaystyle \mathbf {F} _{13}({\sqrt {-6}})} : ( 2 + − 6 ) 2 = 4 + 4 − 6 − 6 = − 2 + 4 − 6 {\displaystyle \left(2+{\sqrt {-6}}\right)^{2}=4+4{\sqrt {-6}}-6=-2+4{\sqrt {-6}}} ( 2 + − 6 ) 4 = ( − 2 + 4 − 6 ) 2 = − 1 − 3 − 6 {\displaystyle \left(2+{\sqrt {-6}}\right)^{4}=\left(-2+4{\sqrt {-6}}\right)^{2}=-1-3{\sqrt {-6}}} ( 2 + − 6 ) 6 = ( − 2 + 4 − 6 ) ( − 1 − 3 − 6 ) = 9 + 2 − 6 {\displaystyle \left(2+{\sqrt {-6}}\right)^{6}=\left(-2+4{\sqrt {-6}}\right)\left(-1-3{\sqrt {-6}}\right)=9+2{\sqrt {-6}}} ( 2 + − 6 ) 7 = ( 9 + 2 − 6 ) ( 2 + − 6 ) = 6 {\displaystyle \left(2+{\sqrt {-6}}\right)^{7}=\left(9+2{\sqrt {-6}}\right)\left(2+{\sqrt {-6}}\right)=6} So x = 6 {\displaystyle x=6} is a solution, as well as x = − 6 {\displaystyle x=-6} . Indeed, 6 2 ≡ 10 ( mod 13 ) . {\textstyle 6^{2}\equiv 10{\pmod {13}}.} == Proof == The first part of the proof is to verify that F p 2 = F p ( a 2 − n ) = { x + y a 2 − n : x , y ∈ F p } {\displaystyle \mathbf {F} _{p^{2}}=\mathbf {F} _{p}({\sqrt {a^{2}-n}})=\{x+y{\sqrt {a^{2}-n}}:x,y\in \mathbf {F} _{p}\}} is indeed a field. For the sake of notation simplicity, ω {\displaystyle \omega } is defined as a 2 − n {\displaystyle {\sqrt {a^{2}-n}}} . Of course, a 2 − n {\displaystyle a^{2}-n} is a quadratic non-residue, so there is no square root in F p {\displaystyle \mathbf {F} _{p}} . This ω {\displaystyle \omega } can roughly be seen as analogous to the complex number i. The field arithmetic is quite obvious. Addition is defined as ( x 1 + y 1 ω ) + ( x 2 + y 2 ω ) = ( x 1 + x 2 ) + ( y 1 + y 2 ) ω {\displaystyle \left(x_{1}+y_{1}\omega \right)+\left(x_{2}+y_{2}\omega \right)=\left(x_{1}+x_{2}\right)+\left(y_{1}+y_{2}\right)\omega } . Multiplication is also defined as usual. 
Keeping in mind that ω 2 = a 2 − n {\displaystyle \omega ^{2}=a^{2}-n} , it becomes ( x 1 + y 1 ω ) ( x 2 + y 2 ω ) = x 1 x 2 + x 1 y 2 ω + y 1 x 2 ω + y 1 y 2 ω 2 = ( x 1 x 2 + y 1 y 2 ( a 2 − n ) ) + ( x 1 y 2 + y 1 x 2 ) ω {\displaystyle \left(x_{1}+y_{1}\omega \right)\left(x_{2}+y_{2}\omega \right)=x_{1}x_{2}+x_{1}y_{2}\omega +y_{1}x_{2}\omega +y_{1}y_{2}\omega ^{2}=\left(x_{1}x_{2}+y_{1}y_{2}\left(a^{2}-n\right)\right)+\left(x_{1}y_{2}+y_{1}x_{2}\right)\omega } . Now the field properties have to be checked. The properties of closure under addition and multiplication, associativity, commutativity and distributivity are easily seen. This is because the field F p 2 {\displaystyle \mathbf {F} _{p^{2}}} somewhat resembles the field of complex numbers (with ω {\displaystyle \omega } being the analogue of i). The additive identity is 0 {\displaystyle 0} , or more formally 0 + 0 ω {\displaystyle 0+0\omega } : Let α ∈ F p 2 {\displaystyle \alpha \in \mathbf {F} _{p^{2}}} , then α + 0 = ( x + y ω ) + ( 0 + 0 ω ) = ( x + 0 ) + ( y + 0 ) ω = x + y ω = α {\displaystyle \alpha +0=(x+y\omega )+(0+0\omega )=(x+0)+(y+0)\omega =x+y\omega =\alpha } . The multiplicative identity is 1 {\displaystyle 1} , or more formally 1 + 0 ω {\displaystyle 1+0\omega } : α ⋅ 1 = ( x + y ω ) ( 1 + 0 ω ) = ( x ⋅ 1 + 0 ⋅ y ( a 2 − n ) ) + ( x ⋅ 0 + 1 ⋅ y ) ω = x + y ω = α {\displaystyle \alpha \cdot 1=(x+y\omega )(1+0\omega )=\left(x\cdot 1+0\cdot y\left(a^{2}-n\right)\right)+(x\cdot 0+1\cdot y)\omega =x+y\omega =\alpha } . The only thing left to show that F p 2 {\displaystyle \mathbf {F} _{p^{2}}} is a field is the existence of additive and multiplicative inverses. It is easily seen that the additive inverse of x + y ω {\displaystyle x+y\omega } is − x − y ω {\displaystyle -x-y\omega } , which is an element of F p 2 {\displaystyle \mathbf {F} _{p^{2}}} , because − x , − y ∈ F p {\displaystyle -x,-y\in \mathbf {F} _{p}} . In fact, those are the additive inverses of x and y.
For showing that every non-zero element α {\displaystyle \alpha } has a multiplicative inverse, write down α = x 1 + y 1 ω {\displaystyle \alpha =x_{1}+y_{1}\omega } and α − 1 = x 2 + y 2 ω {\displaystyle \alpha ^{-1}=x_{2}+y_{2}\omega } . In other words, ( x 1 + y 1 ω ) ( x 2 + y 2 ω ) = ( x 1 x 2 + y 1 y 2 ( a 2 − n ) ) + ( x 1 y 2 + y 1 x 2 ) ω = 1 {\displaystyle (x_{1}+y_{1}\omega )(x_{2}+y_{2}\omega )=\left(x_{1}x_{2}+y_{1}y_{2}\left(a^{2}-n\right)\right)+\left(x_{1}y_{2}+y_{1}x_{2}\right)\omega =1} . So the two equalities x 1 x 2 + y 1 y 2 ( a 2 − n ) = 1 {\displaystyle x_{1}x_{2}+y_{1}y_{2}(a^{2}-n)=1} and x 1 y 2 + y 1 x 2 = 0 {\displaystyle x_{1}y_{2}+y_{1}x_{2}=0} must hold. Working out the details gives expressions for x 2 {\displaystyle x_{2}} and y 2 {\displaystyle y_{2}} , namely x 2 = − y 1 − 1 x 1 ( y 1 ( a 2 − n ) − x 1 2 y 1 − 1 ) − 1 {\displaystyle x_{2}=-y_{1}^{-1}x_{1}\left(y_{1}\left(a^{2}-n\right)-x_{1}^{2}y_{1}^{-1}\right)^{-1}} , y 2 = ( y 1 ( a 2 − n ) − x 1 2 y 1 − 1 ) − 1 {\displaystyle y_{2}=\left(y_{1}\left(a^{2}-n\right)-x_{1}^{2}y_{1}^{-1}\right)^{-1}} . The inverse elements which are shown in the expressions of x 2 {\displaystyle x_{2}} and y 2 {\displaystyle y_{2}} do exist, because these are all elements of F p {\displaystyle \mathbf {F} _{p}} . This completes the first part of the proof, showing that F p 2 {\displaystyle \mathbf {F} _{p^{2}}} is a field. The second and middle part of the proof is showing that for every element x + y ω ∈ F p 2 : ( x + y ω ) p = x − y ω {\displaystyle x+y\omega \in \mathbf {F} _{p^{2}}:(x+y\omega )^{p}=x-y\omega } . By definition, ω 2 = a 2 − n {\displaystyle \omega ^{2}=a^{2}-n} is not a square in F p {\displaystyle \mathbf {F} _{p}} . Euler's criterion then says that ω p − 1 = ( ω 2 ) p − 1 2 = − 1 {\displaystyle \omega ^{p-1}=\left(\omega ^{2}\right)^{\frac {p-1}{2}}=-1} . Thus ω p = − ω {\displaystyle \omega ^{p}=-\omega } . 
This, together with Fermat's little theorem (which says that x p = x {\displaystyle x^{p}=x} for all x ∈ F p {\displaystyle x\in \mathbf {F} _{p}} ) and the knowledge that in fields of characteristic p the equation ( a + b ) p = a p + b p {\displaystyle \left(a+b\right)^{p}=a^{p}+b^{p}} holds, a relationship sometimes called the Freshman's dream, shows the desired result ( x + y ω ) p = x p + y p ω p = x − y ω {\displaystyle (x+y\omega )^{p}=x^{p}+y^{p}\omega ^{p}=x-y\omega } . The third and last part of the proof is to show that if x 0 = ( a + ω ) p + 1 2 ∈ F p 2 {\displaystyle x_{0}=\left(a+\omega \right)^{\frac {p+1}{2}}\in \mathbf {F} _{p^{2}}} , then x 0 2 = n ∈ F p {\displaystyle x_{0}^{2}=n\in \mathbf {F} _{p}} . Compute x 0 2 = ( a + ω ) p + 1 = ( a + ω ) ( a + ω ) p = ( a + ω ) ( a − ω ) = a 2 − ω 2 = a 2 − ( a 2 − n ) = n {\displaystyle x_{0}^{2}=\left(a+\omega \right)^{p+1}=(a+\omega )(a+\omega )^{p}=(a+\omega )(a-\omega )=a^{2}-\omega ^{2}=a^{2}-\left(a^{2}-n\right)=n} . Note that this computation took place in F p 2 {\displaystyle \mathbf {F} _{p^{2}}} , so this x 0 ∈ F p 2 {\displaystyle x_{0}\in \mathbf {F} _{p^{2}}} . But with Lagrange's theorem, stating that a non-zero polynomial of degree n has at most n roots in any field K, and the knowledge that x 2 − n {\displaystyle x^{2}-n} has 2 roots in F p {\displaystyle \mathbf {F} _{p}} , these roots must be all of the roots in F p 2 {\displaystyle \mathbf {F} _{p^{2}}} . It was just shown that x 0 {\displaystyle x_{0}} and − x 0 {\displaystyle -x_{0}} are roots of x 2 − n {\displaystyle x^{2}-n} in F p 2 {\displaystyle \mathbf {F} _{p^{2}}} , so it must be that x 0 , − x 0 ∈ F p {\displaystyle x_{0},-x_{0}\in \mathbf {F} _{p}} . 
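The algorithm and the worked example above translate directly into Python. This is a minimal sketch, not an optimized implementation; the helper `mul` implements multiplication in F_{p²} exactly as described in the proof, with pairs (x, y) representing x + yω.

```python
def cipolla(n, p):
    """Square root of n modulo an odd prime p (n must be a quadratic residue)."""
    n %= p
    if n == 0:
        return 0
    # Euler's criterion: verify n really is a square mod p.
    assert pow(n, (p - 1) // 2, p) == 1, "n is not a square mod p"
    # Step 1: find a such that a^2 - n is a non-square (about 2 tries expected).
    a = 0
    while pow((a * a - n) % p, (p - 1) // 2, p) != p - 1:
        a += 1
    w = (a * a - n) % p              # omega^2; omega plays the role of i
    # Multiplication in F_{p^2}: pairs (x, y) represent x + y*omega.
    def mul(u, v):
        return ((u[0] * v[0] + u[1] * v[1] * w) % p,
                (u[0] * v[1] + u[1] * v[0]) % p)
    # Step 2: compute (a + omega)^((p+1)/2) by square-and-multiply.
    r, base, e = (1, 0), (a, 1), (p + 1) // 2
    while e:
        if e & 1:
            r = mul(r, base)
        base = mul(base, base)
        e >>= 1
    return r[0]                       # the omega-component of the result is 0

print(cipolla(10, 13))                # → 6, matching the example (other root: 7)
```

Note that the trial loop simply tries a = 0, 1, 2, …; a randomized choice works equally well and matches the expected-two-tries analysis above.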
== Speed == After finding a suitable a, the number of operations required for the algorithm is 4 m + 2 k − 4 {\displaystyle 4m+2k-4} multiplications, 4 m − 2 {\displaystyle 4m-2} sums, where m is the number of digits in the binary representation of p and k is the number of ones in this representation. To find a by trial and error, the expected number of computations of the Legendre symbol is 2, though one may be lucky on the first try or may need more than two tries. In the field F p 2 {\displaystyle \mathbf {F} _{p^{2}}} , the following two equalities hold ( x + y ω ) 2 = ( x 2 + y 2 ω 2 ) + ( ( x + y ) 2 − x 2 − y 2 ) ω , {\displaystyle (x+y\omega )^{2}=\left(x^{2}+y^{2}\omega ^{2}\right)+\left(\left(x+y\right)^{2}-x^{2}-y^{2}\right)\omega ,} where ω 2 = a 2 − n {\displaystyle \omega ^{2}=a^{2}-n} is known in advance. This computation needs 4 multiplications and 4 sums. ( x + y ω ) 2 ( a + ω ) = ( a d 2 − b ( x + d ) ) + ( d 2 − b y ) ω , {\displaystyle \left(x+y\omega \right)^{2}\left(a+\omega \right)=\left(ad^{2}-b\left(x+d\right)\right)+\left(d^{2}-by\right)\omega ,} where d = ( x + y a ) {\displaystyle d=(x+ya)} and b = n y {\displaystyle b=ny} . This operation needs 6 multiplications and 4 sums. Assuming that p ≡ 1 ( mod 4 ) , {\displaystyle p\equiv 1{\pmod {4}},} (in the case p ≡ 3 ( mod 4 ) {\displaystyle p\equiv 3{\pmod {4}}} , the direct computation x ≡ ± n p + 1 4 {\displaystyle x\equiv \pm n^{\frac {p+1}{4}}} is much faster) the binary expression of ( p + 1 ) / 2 {\displaystyle (p+1)/2} has m − 1 {\displaystyle m-1} digits, of which k are ones. So for computing a ( p + 1 ) / 2 {\displaystyle (p+1)/2} power of ( a + ω ) {\displaystyle \left(a+\omega \right)} , the first formula has to be used m − k − 1 {\displaystyle m-k-1} times and the second k − 1 {\displaystyle k-1} times.
For this, Cipolla's algorithm is better than the Tonelli–Shanks algorithm if and only if S ( S − 1 ) > 8 m + 20 {\displaystyle S(S-1)>8m+20} , with 2 S {\displaystyle 2^{S}} being the maximum power of 2 which divides p − 1 {\displaystyle p-1} . == Prime power moduli == According to Dickson's History of the Theory of Numbers, the following formula of Cipolla finds square roots modulo powers of an odd prime: 2 − 1 q t ( ( k + k 2 − q ) s + ( k − k 2 − q ) s ) mod p λ {\displaystyle 2^{-1}q^{t}((k+{\sqrt {k^{2}-q}})^{s}+(k-{\sqrt {k^{2}-q}})^{s}){\bmod {p^{\lambda }}}} where t = ( p λ − 2 p λ − 1 + 1 ) / 2 {\displaystyle t=(p^{\lambda }-2p^{\lambda -1}+1)/2} and s = p λ − 1 ( p + 1 ) / 2 {\displaystyle s=p^{\lambda -1}(p+1)/2} . Taking q = 10 {\displaystyle q=10} and k = 2 {\displaystyle k=2} as in the example above, the formula does indeed produce the square root of 10 modulo 13 3 {\displaystyle 13^{3}} : 10 mod 13 3 ≡ 1046 {\displaystyle {\sqrt {10}}{\bmod {13^{3}}}\equiv 1046} . First compute 2 − 1 q t {\displaystyle 2^{-1}q^{t}} via: 2 − 1 10 ( 13 3 − 2 ⋅ 13 2 + 1 ) / 2 mod 13 3 ≡ 1086 {\displaystyle 2^{-1}10^{(13^{3}-2\cdot 13^{2}+1)/2}{\bmod {13^{3}}}\equiv 1086} . Next compute the conjugate powers ( 2 + 2 2 − 10 ) 13 2 ⋅ 7 mod 13 3 {\displaystyle (2+{\sqrt {2^{2}-10}})^{13^{2}\cdot 7}{\bmod {13^{3}}}} and ( 2 − 2 2 − 10 ) 13 2 ⋅ 7 mod 13 3 {\displaystyle (2-{\sqrt {2^{2}-10}})^{13^{2}\cdot 7}{\bmod {13^{3}}}} , keeping in mind that the arithmetic involved closely resembles complex modular arithmetic. This gives: ( 2 + 2 2 − 10 ) 13 2 ⋅ 7 mod 13 3 ≡ 1540 {\displaystyle (2+{\sqrt {2^{2}-10}})^{13^{2}\cdot 7}{\bmod {13^{3}}}\equiv 1540} and ( 2 − 2 2 − 10 ) 13 2 ⋅ 7 mod 13 3 ≡ 1540 {\displaystyle (2-{\sqrt {2^{2}-10}})^{13^{2}\cdot 7}{\bmod {13^{3}}}\equiv 1540} and the final equation is: 1086 ( 1540 + 1540 ) mod 13 3 ≡ 1046 {\displaystyle 1086(1540+1540){\bmod {13^{3}}}\equiv 1046} which is the answer.
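The prime-power computation above can be reproduced with the same pair arithmetic as in the prime case, now taken modulo p^λ. This is a sketch following the formula directly (`pow(2, -1, mod)` for the modular inverse requires Python 3.8+):

```python
def cipolla_prime_power(q, k, p, lam):
    """Square root of q modulo p^lam via Cipolla's prime-power formula,
    given k such that k^2 - q is a quadratic non-residue mod p."""
    mod = p ** lam
    w2 = (k * k - q) % mod                # omega^2 = k^2 - q
    def mul(u, v):                        # multiply x + y*omega pairs mod p^lam
        return ((u[0] * v[0] + u[1] * v[1] * w2) % mod,
                (u[0] * v[1] + u[1] * v[0]) % mod)
    def pw(base, e):                      # square-and-multiply on pairs
        r = (1, 0)
        while e:
            if e & 1:
                r = mul(r, base)
            base = mul(base, base)
            e >>= 1
        return r
    s = p ** (lam - 1) * (p + 1) // 2
    t = (p ** lam - 2 * p ** (lam - 1) + 1) // 2
    plus = pw((k, 1), s)                  # (k + omega)^s
    minus = pw((k, mod - 1), s)           # (k - omega)^s
    half = pow(2, -1, mod)                # 2^(-1) mod p^lam (Python 3.8+)
    return half * pow(q, t, mod) * (plus[0] + minus[0]) % mod

print(cipolla_prime_power(10, 2, 13, 3))  # → 1046, and 1046^2 ≡ 10 (mod 2197)
```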
== References == == Sources == E. Bach, J.O. Shallit Algorithmic Number Theory: Efficient algorithms MIT Press, (1996)
Wikipedia/Cipolla's_algorithm
An integer relation between a set of real numbers x1, x2, ..., xn is a set of integers a1, a2, ..., an, not all 0, such that a 1 x 1 + a 2 x 2 + ⋯ + a n x n = 0. {\displaystyle a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}=0.\,} An integer relation algorithm is an algorithm for finding integer relations. Specifically, given a set of real numbers known to a given precision, an integer relation algorithm will either find an integer relation between them, or will determine that no integer relation exists with coefficients whose magnitudes are less than a certain upper bound. == History == For the case n = 2, an extension of the Euclidean algorithm can find any integer relation that exists between any two real numbers x1 and x2. The algorithm generates successive terms of the continued fraction expansion of x1/x2; if there is an integer relation between the numbers, then their ratio is rational and the algorithm eventually terminates. The Ferguson–Forcade algorithm was published in 1979 by Helaman Ferguson and R.W. Forcade. Although the paper treats general n, it is not clear if the paper fully solves the problem, because it lacks the detailed steps, proofs, and precision bound that are crucial for a reliable implementation. The first algorithm with complete proofs was the LLL algorithm, developed by Arjen Lenstra, Hendrik Lenstra and László Lovász in 1982. The HJLS algorithm was developed by Johan Håstad, Bettina Just, Jeffrey Lagarias, and Claus-Peter Schnorr in 1986; the PSOS algorithm by Ferguson in 1988; and the PSLQ algorithm by Ferguson and Bailey in 1992, substantially simplified by Ferguson, Bailey, and Arno in 1999. In 2000 the PSLQ algorithm was selected as one of the "Top Ten Algorithms of the Century" by Jack Dongarra and Francis Sullivan, even though it is considered essentially equivalent to HJLS. The LLL algorithm has been improved by numerous authors. Modern LLL implementations can solve integer relation problems with n above 500.
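The two-number case described above, via the continued fraction of x1/x2, can be sketched as follows. This is a toy illustration assuming positive inputs; `tol` and `max_terms` are arbitrary choices, and floating-point input limits the attainable precision.

```python
def relation_via_cf(x1, x2, max_terms=32, tol=1e-10):
    """If x1/x2 is (nearly) rational, return integers (a1, a2) != (0, 0)
    with a1*x1 + a2*x2 ~= 0, via the continued fraction of x1/x2."""
    r = x1 / x2
    # Track convergents p/q of the continued fraction of x1/x2.
    p0, q0 = 1, 0
    p1, q1 = int(r), 1
    for _ in range(max_terms):
        if abs(q1 * x1 - p1 * x2) < tol * max(abs(x1), abs(x2)):
            return q1, -p1               # q1*x1 - p1*x2 = 0
        frac = r - int(r)
        if frac == 0:
            break                        # ratio is exactly rational
        r = 1 / frac                     # next continued-fraction step
        a = int(r)
        p0, p1 = p1, a * p1 + p0         # standard convergent recurrences
        q0, q1 = q1, a * q1 + q0
    return None                          # no relation at this precision/bound

print(relation_via_cf(3.25, 6.5))        # → (2, -1): 2*3.25 - 1*6.5 = 0
```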
== Applications == Integer relation algorithms have numerous applications. The first application is to determine whether a given real number x is likely to be algebraic, by searching for an integer relation between a set of powers of x {1, x, x2, ..., xn}. The second application is to search for an integer relation between a real number x and a set of mathematical constants such as e, π and ln(2), which will lead to an expression for x as a linear combination of these constants. A typical approach in experimental mathematics is to use numerical methods and arbitrary precision arithmetic to find an approximate value for an infinite series, infinite product or an integral to a high degree of precision (usually at least 100 significant figures), and then use an integer relation algorithm to search for an integer relation between this value and a set of mathematical constants. If an integer relation is found, this suggests a possible closed-form expression for the original series, product or integral. This conjecture can then be validated by formal algebraic methods. The higher the precision to which the inputs to the algorithm are known, the greater the level of confidence that any integer relation that is found is not just a numerical artifact. A notable success of this approach was the use of the PSLQ algorithm to find the integer relation that led to the Bailey–Borwein–Plouffe formula for the value of π. PSLQ has also helped find new identities involving multiple zeta functions and their appearance in quantum field theory; and in identifying bifurcation points of the logistic map. For example, where B4 is the logistic map's fourth bifurcation point, the constant α = −B4(B4 − 2) is a root of a 120th-degree polynomial whose largest coefficient is 25730. Integer relation algorithms are combined with tables of high precision mathematical constants and heuristic search methods in applications such as the Inverse Symbolic Calculator or Plouffe's Inverter. 
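As a toy stand-in for PSLQ or LLL, the first application above can be illustrated by exhaustive search over small coefficient vectors; this is only feasible for tiny bounds, and the function name and parameters here are illustrative.

```python
from itertools import product

def small_relation(xs, bound=3, tol=1e-9):
    """Exhaustive search for integers a_i in [-bound, bound], not all zero,
    with a_1*x_1 + ... + a_n*x_n ~= 0 (a toy stand-in for PSLQ/LLL)."""
    for coeffs in product(range(-bound, bound + 1), repeat=len(xs)):
        if any(coeffs) and abs(sum(a * x for a, x in zip(coeffs, xs))) < tol:
            return coeffs
    return None

# Test whether the golden ratio looks algebraic of degree 2:
# search for a relation among the powers {1, phi, phi^2}.
phi = (1 + 5 ** 0.5) / 2
print(small_relation([1.0, phi, phi * phi]))
# → (-3, -3, 3), a multiple of the relation 1 + phi - phi^2 = 0
```

Real integer relation algorithms achieve the same goal in polynomial time and with provable coefficient bounds, which is what makes the experimental-mathematics workflow above practical.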
Integer relation finding can be used to factor polynomials of high degree. == References == == External links == Recognizing Numerical Constants by David H. Bailey and Simon Plouffe Ten Problems in Experimental Mathematics Archived 2011-06-10 at the Wayback Machine by David H. Bailey, Jonathan M. Borwein, Vishaal Kapoor, and Eric W. Weisstein
Wikipedia/Integer_relation_algorithm
Fermat's factorization method, named after Pierre de Fermat, is based on the representation of an odd integer as the difference of two squares: N = a 2 − b 2 . {\displaystyle N=a^{2}-b^{2}.} That difference is algebraically factorable as ( a + b ) ( a − b ) {\displaystyle (a+b)(a-b)} ; if neither factor equals one, it is a proper factorization of N. Each odd number has such a representation. Indeed, if N = c d {\displaystyle N=cd} is a factorization of N, then N = ( c + d 2 ) 2 − ( c − d 2 ) 2 . {\displaystyle N=\left({\frac {c+d}{2}}\right)^{2}-\left({\frac {c-d}{2}}\right)^{2}.} Since N is odd, then c and d are also odd, so those halves are integers. (A multiple of four is also a difference of squares: let c and d be even.) In its simplest form, Fermat's method might be even slower than trial division (worst case). Nonetheless, the combination of trial division and Fermat's is more effective than either by itself. == Basic method == One tries various values of a, hoping that a 2 − N = b 2 {\displaystyle a^{2}-N=b^{2}} , a square.

FermatFactor(N):    // N should be odd
    a ← ceiling(sqrt(N))
    b2 ← a*a - N
    repeat until b2 is a square:
        a ← a + 1
        b2 ← a*a - N
        // equivalently:
        //     b2 ← b2 + 2*a + 1
        //     a ← a + 1
    return a - sqrt(b2)    // or a + sqrt(b2)

For example, to factor N = 5959 {\displaystyle N=5959} , the first try for a is the square root of 5959 rounded up to the next integer, which is 78. Then b 2 = 78 2 − 5959 = 125 {\displaystyle b^{2}=78^{2}-5959=125} . Since 125 is not a square, a second try is made by increasing the value of a by 1. The second attempt also fails, because 282 is again not a square. The third try produces the perfect square of 441. Thus, a = 80 {\displaystyle a=80} , b = 21 {\displaystyle b=21} , and the factors of 5959 are a − b = 59 {\displaystyle a-b=59} and a + b = 101 {\displaystyle a+b=101} . Suppose N has more than two prime factors. That procedure first finds the factorization with the least values of a and b.
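The basic method above translates directly into Python, using `math.isqrt` to test for perfect squares:

```python
from math import isqrt

def fermat_factor(n):
    """Fermat's method for odd n: search for a with a^2 - n a perfect square."""
    a = isqrt(n)
    if a * a < n:
        a += 1                      # ceiling(sqrt(n))
    b2 = a * a - n
    while isqrt(b2) ** 2 != b2:     # repeat until b2 is a square
        a += 1
        b2 = a * a - n              # or incrementally: b2 += 2*a - 1
    b = isqrt(b2)
    return a - b, a + b

print(fermat_factor(5959))          # → (59, 101), as in the example above
```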
That is, a + b {\displaystyle a+b} is the smallest factor ≥ the square-root of N, and so a − b = N / ( a + b ) {\displaystyle a-b=N/(a+b)} is the largest factor ≤ root-N. If the procedure finds N = 1 ⋅ N {\displaystyle N=1\cdot N} , that shows that N is prime. For N = c d {\displaystyle N=cd} , let c be the largest subroot factor. a = ( c + d ) / 2 {\displaystyle a=(c+d)/2} , so the number of steps is approximately ( c + d ) / 2 − N = ( d − c ) 2 / 2 = ( N − c ) 2 / 2 c {\displaystyle (c+d)/2-{\sqrt {N}}=({\sqrt {d}}-{\sqrt {c}})^{2}/2=({\sqrt {N}}-c)^{2}/2c} . If N is prime (so that c = 1 {\displaystyle c=1} ), one needs O ( N ) {\displaystyle O(N)} steps. This is a bad way to prove primality. But if N has a factor close to its square root, the method works quickly. More precisely, if c differs less than ( 4 N ) 1 / 4 {\displaystyle {\left(4N\right)}^{1/4}} from N {\displaystyle {\sqrt {N}}} , the method requires only one step; this is independent of the size of N. == Fermat's and trial division == Consider trying to factor the prime number N = 2,345,678,917, but also compute b and a − b throughout. Going up from N {\displaystyle {\sqrt {N}}} rounded up to the next integer, which is 48,433, one can tabulate a, b2 = a2 − N, b and a − b for each successive value of a. In practice, one wouldn't bother computing a − b until b is an integer. But observe that if N had a subroot factor above a − b = 47830.1 {\displaystyle a-b=47830.1} , Fermat's method would have found it already. Trial division would normally try up to 48,432; but after only four Fermat steps, we need only divide up to 47830, to find a factor or prove primality. This all suggests a combined factoring method. Choose some bound a m a x > N {\displaystyle a_{\mathrm {max} }>{\sqrt {N}}} ; use Fermat's method for factors between N {\displaystyle {\sqrt {N}}} and a m a x {\displaystyle a_{\mathrm {max} }} . This gives a bound for trial division which is a m a x − a m a x 2 − N {\displaystyle a_{\mathrm {max} }-{\sqrt {a_{\mathrm {max} }^{2}-N}}} .
In the above example, with a m a x = 48436 {\displaystyle a_{\mathrm {max} }=48436} the bound for trial division is 47830. A reasonable choice could be a m a x = 55000 {\displaystyle a_{\mathrm {max} }=55000} , giving a bound of 28937. In this regard, Fermat's method gives diminishing returns, and one would surely stop well before that point. == Sieve improvement == Considering the successive values of b 2 = a 2 − N {\displaystyle b^{2}=a^{2}-N} for N = 2345678917 {\displaystyle N=2345678917} , one can quickly tell that none of them are squares. It is not necessary to compute all the square-roots of a 2 − N {\displaystyle a^{2}-N} , nor even examine all the values for a. Squares are always congruent to 0, 1, 4, 5, 9, 16 modulo 20. The values repeat with each increase of a by 10. In this example, N is 17 mod 20, so subtracting 17 mod 20 (or adding 3), a 2 − N {\displaystyle a^{2}-N} produces 3, 4, 7, 8, 12, and 19 modulo 20 for these values. It is apparent that only the 4 from this list can be a square. Thus, a 2 {\displaystyle a^{2}} must be 1 mod 20, which means that a is 1, 9, 11 or 19 mod 20; it will produce a b 2 {\displaystyle b^{2}} which ends in 4 mod 20 and, if square, b will end in 2 or 8 mod 10. This can be performed with any modulus; one generally chooses a power of a different prime for each modulus. Given a sequence of a-values (start, end, and step) and a modulus, one can proceed thus:

FermatSieve(N, astart, aend, astep, modulus)
    a ← astart
    do modulus times:
        b2 ← a*a - N
        if b2 is a square, modulo modulus:
            FermatSieve(N, a, aend, astep * modulus, NextModulus)
        endif
        a ← a + astep
    enddo

The recursion is stopped when few a-values remain; that is, when (aend − astart)/astep is small. Also, because a's step-size is constant, one can compute successive b2's with additions. == Multiplier improvement == Fermat's method works best when there is a factor near the square-root of N.
If the approximate ratio of two factors ( d / c {\displaystyle d/c} ) is known, then a rational number v / u {\displaystyle v/u} can be picked near that value. N u v = c v ⋅ d u {\displaystyle Nuv=cv\cdot du} , and Fermat's method, applied to Nuv, will find the factors c v {\displaystyle cv} and d u {\displaystyle du} quickly. Then gcd ( N , c v ) = c {\displaystyle \gcd(N,cv)=c} and gcd ( N , d u ) = d {\displaystyle \gcd(N,du)=d} . (Unless c divides u or d divides v.) If the ratio is not known, various u / v {\displaystyle u/v} values can be tried, factoring each resulting Nuv. R. Lehman devised a systematic way to do this, so that Fermat's plus trial division can factor N in O ( N 1 / 3 ) {\displaystyle O(N^{1/3})} time. == Other improvements == The fundamental ideas of Fermat's factorization method are the basis of the quadratic sieve and general number field sieve, the best-known algorithms for factoring large semiprimes, which are the "worst case" for factoring. The primary improvement that quadratic sieve makes over Fermat's factorization method is that instead of simply finding a square in the sequence of a 2 − n {\displaystyle a^{2}-n} , it finds a subset of elements of this sequence whose product is a square, and it does this in a highly efficient manner. The end result is the same: a difference of squares mod n that, if nontrivial, can be used to factor n. == See also == Completing the square Factorization of polynomials Factor theorem FOIL rule Monoid factorisation Pascal's triangle Prime factor Factorization Euler's factorization method Integer factorization Program synthesis Table of Gaussian integer factorizations Unique factorization == Notes == == References == Fermat (1894), Oeuvres de Fermat, vol. 2, p. 256 McKee, J (1999). "Speeding Fermat's factoring method". Mathematics of Computation. 68 (228): 1729–1737. doi:10.1090/S0025-5718-99-01133-3.
== External links == Fermat's factorization running time, at blogspot.in Fermat's Factorization Online Calculator, at windowspros.ru
Wikipedia/Fermat's_factorization_method
Pollard's p − 1 algorithm is a number theoretic integer factorization algorithm, invented by John Pollard in 1974. It is a special-purpose algorithm, meaning that it is only suitable for integers with specific types of factors; it is the simplest example of an algebraic-group factorisation algorithm. The factors it finds are ones for which the number preceding the factor, p − 1, is powersmooth; the essential observation is that, by working in the multiplicative group modulo a composite number N, we are also working in the multiplicative groups modulo all of N's factors. The existence of this algorithm leads to the concept of safe primes, being primes for which p − 1 is two times a Sophie Germain prime q and thus minimally smooth. These primes are sometimes construed as "safe for cryptographic purposes", but they might be unsafe — in current recommendations for cryptographic strong primes (e.g. ANSI X9.31), it is necessary but not sufficient that p − 1 has at least one large prime factor. Most sufficiently large primes are strong; if a prime used for cryptographic purposes turns out to be non-strong, it is much more likely to be through malice than through an accident of random number generation. This terminology is considered obsolete by the cryptography industry: the ECM factorization method is more efficient than Pollard's algorithm and finds safe prime factors just as quickly as it finds non-safe prime factors of similar size, thus the size of p is the key security parameter, not the smoothness of p − 1. == Base concepts == Let n be a composite integer with prime factor p. By Fermat's little theorem, we know that for all integers a coprime to p and for all positive integers K: a K ( p − 1 ) ≡ 1 ( mod p ) {\displaystyle a^{K(p-1)}\equiv 1{\pmod {p}}} If a number x is congruent to 1 modulo a factor of n, then the gcd(x − 1, n) will be divisible by that factor. 
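The observation above already yields the whole method, which is formalized in the next section: raise a base a to a highly composite exponent M built from small prime powers, then take gcd(a^M − 1, n). A minimal sketch follows; the trial-division primality test and the default bound B are illustrative simplifications.

```python
from math import gcd

def pollard_p_minus_1(n, B=100, a=2):
    """Basic Pollard p-1: return a nontrivial factor of n, or None
    (meaning: retry with a different bound B or base a)."""
    x = a
    for q in range(2, B + 1):
        # Trial-division primality test (fine at this toy scale).
        if all(q % d for d in range(2, int(q ** 0.5) + 1)):
            pk = q                      # largest power of q not exceeding B
            while pk * q <= B:
                pk *= q
            x = pow(x, pk, n)           # accumulate a^M mod n, factor by factor
    g = gcd(x - 1, n)
    return g if 1 < g < n else None

print(pollard_p_minus_1(299, B=5))      # → 13: 13 - 1 = 12 = 2^2 * 3 is 5-smooth
```

Raising x to each prime power in turn is equivalent to computing a^M for M the product of all those prime powers, so M never has to be written out explicitly.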
The idea is to make the exponent a large multiple of p − 1 by making it a number with very many prime factors; generally, we take the product of all prime powers less than some limit B. Start with a random x, and repeatedly replace it by x w mod n {\displaystyle x^{w}{\bmod {n}}} as w runs through those prime powers. Check at each stage, or once at the end if you prefer, whether gcd(x − 1, n) is not equal to 1. == Multiple factors == It is possible that for all the prime factors p of n, p − 1 is divisible by small primes, at which point the Pollard p − 1 algorithm simply returns n. == Algorithm and running time == The basic algorithm can be written as follows: Inputs: n, a composite number. Output: a nontrivial factor of n, or failure.
1. select a smoothness bound B
2. define M = ∏ primes q ≤ B q ⌊ log q ⁡ B ⌋ {\displaystyle M=\prod _{{\text{primes}}~q\leq B}q^{\lfloor \log _{q}{B}\rfloor }} (note: explicitly evaluating M may not be necessary)
3. randomly pick a positive integer a which is coprime to n (note: we can actually fix a; e.g. if n is odd, then we can always select a = 2, random selection here is not imperative)
4. compute g = gcd(a^M − 1, n) (note: the exponentiation can be done modulo n)
5. if 1 < g < n then return g
6. if g = 1 then select a larger B and go to step 2, or return failure
7. if g = n then select a smaller B and go to step 2, or return failure
If g = 1 in step 6, this indicates there are no prime factors p for which p − 1 is B-powersmooth. If g = n in step 7, this usually indicates that all factors were B-powersmooth, but in rare cases it could indicate that a had a small order modulo n. Additionally, in rare cases this algorithm will fail when the largest prime factor of p − 1 is the same for every prime factor p of n. The running time of this algorithm is O(B × log B × log^2 n); larger values of B make it run slower, but are more likely to produce a factor. === Example === Suppose we want to factor the number n = 299. We select B = 5. Thus M = 2^2 × 3^1 × 5^1 = 60.
We select a = 2. Then g = gcd(a^M − 1, n) = 13, and since 1 < 13 < 299, we return 13. Indeed, 299 / 13 = 23 is prime, so the factorization is complete: 299 = 13 × 23. == Methods of choosing B == Since the algorithm is incremental, it is able to keep running with the bound constantly increasing. Assume that p − 1, where p is the smallest prime factor of n, can be modelled as a random number of size less than √n. By the Dickman function, the probability that the largest factor of such a number is less than (p − 1)^(1/ε) is roughly ε^(−ε); so there is a probability of about 3^(−3) = 1/27 that a B value of n^(1/6) will yield a factorisation. In practice, the elliptic curve method is faster than the Pollard p − 1 method once the factors are at all large; running the p − 1 method up to B = 2^32 will find a quarter of all 64-bit factors and 1/27 of all 96-bit factors. == Two-stage variant == A variant of the basic algorithm is sometimes used; instead of requiring that p − 1 has all its factors less than B, we require it to have all but one of its factors less than some B1, and the remaining factor less than some B2 ≫ B1. After completing the first stage, which is the same as the basic algorithm, instead of computing a new M ′ = ∏ primes q ≤ B 2 q ⌊ log q ⁡ B 2 ⌋ {\displaystyle M'=\prod _{{\text{primes }}q\leq B_{2}}q^{\lfloor \log _{q}B_{2}\rfloor }} for B2 and checking gcd(a^M′ − 1, n), we compute Q = ∏ primes q ∈ ( B 1 , B 2 ] ( H q − 1 ) {\displaystyle Q=\prod _{{\text{primes }}q\in (B_{1},B_{2}]}(H^{q}-1)} where H = a^M and check whether gcd(Q, n) produces a nontrivial factor of n. As before, exponentiations can be done modulo n. Let {q1, q2, …} be successive prime numbers in the interval (B1, B2] and dn = qn − qn−1 the difference between consecutive prime numbers. Since typically B1 > 2, the dn are even numbers. The distribution of prime numbers is such that the dn will all be relatively small; it is suggested that dn ≤ ln^2 B2.
Hence, the values of H^2, H^4, H^6, … (mod n) can be stored in a table, and each H^(qn) can be computed from H^(qn−1) ⋅ H^(dn), saving the need for exponentiations. == Implementations == The GMP-ECM package includes an efficient implementation of the p − 1 method. Prime95 and MPrime, the official clients of the Great Internet Mersenne Prime Search, use a modified version of the p − 1 algorithm to eliminate potential candidates. == See also == Williams's p + 1 algorithm == References == Pollard, J. M. (1974). "Theorems of factorization and primality testing". Proceedings of the Cambridge Philosophical Society. 76 (3): 521–528. Bibcode:1974PCPS...76..521P. doi:10.1017/S0305004100049252. S2CID 122817056. Montgomery, P. L.; Silverman, R. D. (1990). "An FFT extension to the P − 1 factoring algorithm". Mathematics of Computation. 54 (190): 839–854. Bibcode:1990MaCom..54..839M. doi:10.1090/S0025-5718-1990-1011444-3. Samuel S. Wagstaff, Jr. (2013). The Joy of Factoring. Providence, RI: American Mathematical Society. pp. 138–141. ISBN 978-1-4704-1048-3.
Wikipedia/Pollard's_p_−_1_algorithm
In mathematics the Function Field Sieve is one of the most efficient algorithms to solve the Discrete Logarithm Problem (DLP) in a finite field. It has heuristic subexponential complexity. Leonard Adleman developed it in 1994 and then elaborated it together with M. D. Huang in 1999. Previous work includes the work of D. Coppersmith about the DLP in fields of characteristic two. The discrete logarithm problem in a finite field consists of solving the equation a x = b {\displaystyle a^{x}=b} for a , b ∈ F p n {\displaystyle a,b\in \mathbb {F} _{p^{n}}} , p {\displaystyle p} a prime number and n {\displaystyle n} an integer. The function f : F p n → F p n , a ↦ a x {\displaystyle f:\mathbb {F} _{p^{n}}\to \mathbb {F} _{p^{n}},a\mapsto a^{x}} for a fixed x ∈ N {\displaystyle x\in \mathbb {N} } is a one-way function used in cryptography. Several cryptographic methods are based on the DLP such as the Diffie-Hellman key exchange, the El Gamal cryptosystem and the Digital Signature Algorithm. == Number theoretical background == === Function Fields === Let C ( x , y ) {\displaystyle C(x,y)} be a polynomial defining an algebraic curve over a finite field F p {\displaystyle \mathbb {F} _{p}} . A function field may be viewed as the field of fractions of the affine coordinate ring F p [ x , y ] / ( C ( x , y ) ) {\displaystyle \mathbb {F} _{p}[x,y]/(C(x,y))} , where ( C ( x , y ) ) {\displaystyle (C(x,y))} denotes the ideal generated by C ( x , y ) {\displaystyle C(x,y)} . This is a special case of an algebraic function field. It is defined over the finite field F p {\displaystyle \mathbb {F} _{p}} and has transcendence degree one. The transcendent element will be denoted by x {\displaystyle x} . There exist bijections between valuation rings in function fields and equivalence classes of places, as well as between valuation rings and equivalence classes of valuations. This correspondence is frequently used in the Function Field Sieve algorithm. 
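For concreteness, the problem a^x = b can be solved at toy scale in a prime field by baby-step giant-step. This illustrates only the DLP itself, not the Function Field Sieve; the point of index-calculus methods like the FFS is to beat such generic square-root-time algorithms in large fields.

```python
from math import isqrt

def discrete_log(a, b, p):
    """Solve a^x = b in F_p by baby-step giant-step (O(sqrt(p)) time/space)."""
    m = isqrt(p) + 1
    baby = {pow(a, j, p): j for j in range(m)}    # a^j for 0 <= j < m
    c = pow(a, (p - 1 - m) % (p - 1), p)          # a^(-m) mod p, via Fermat
    gamma = b % p
    for i in range(m):                            # giant steps: b * a^(-im)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * c % p
    return None                                   # b is not a power of a

print(discrete_log(3, 13, 17))                    # → 4: 3^4 = 81 ≡ 13 (mod 17)
```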
=== Divisors === A discrete valuation of the function field K / F p {\displaystyle K/\mathbb {F} _{p}} , namely a discrete valuation ring F p ⊂ O ⊂ K {\displaystyle \mathbb {F} _{p}\subset O\subset K} , has a unique maximal ideal P {\displaystyle P} called a prime of the function field. The degree of P {\displaystyle P} is deg ⁡ ( P ) = [ O / P : F p ] {\displaystyle \deg(P)=[O/P:\mathbb {F} _{p}]} and we also define f O = [ O / P : F p ] {\displaystyle f_{O}=[O/P:\mathbb {F} _{p}]} . A divisor is a Z {\displaystyle \mathbb {Z} } -linear combination over all primes, so d = ∑ α P P {\textstyle d=\sum \alpha _{P}P} where α P ∈ Z {\displaystyle \alpha _{P}\in \mathbb {Z} } and only finitely many elements of the sum are non-zero. The divisor of an element x ∈ K {\displaystyle x\in K} is defined as div ( x ) = ∑ v P ( x ) P {\textstyle {\text{div}}(x)=\sum v_{P}(x)P} , where v P {\displaystyle v_{P}} is the valuation corresponding to the prime P {\displaystyle P} . The degree of a divisor is deg ⁡ ( d ) = ∑ α P deg ⁡ ( P ) {\textstyle \deg(d)=\sum \alpha _{P}\deg(P)} . == Method == The Function Field Sieve algorithm consists of a precomputation, where the discrete logarithms of irreducible polynomials of small degree are found, and a reduction step, where they are combined into the logarithm of b {\displaystyle b} . Functions that decompose into irreducible functions of degree smaller than some bound B {\displaystyle B} are called B {\displaystyle B} -smooth. This is analogous to the definition of a smooth number, and such functions are useful because their decomposition can be found relatively quickly. The set of those functions S = { g ( x ) ∈ F p [ x ] ∣ irreducible with deg ⁡ ( g ) < B } {\displaystyle S=\{g(x)\in \mathbb {F} _{p}[x]\mid {\text{ irreducible with }}\deg(g)<B\}} is called the factor base.
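Since the method hinges on recognizing B-smooth polynomials, a small self-contained sketch may help. The Python below (naive trial division over a tiny prime and bound chosen purely for illustration; real implementations sieve rather than trial-divide) builds the factor base S of monic irreducibles of degree below B over F_p and tests smoothness by dividing out factor-base elements until only a unit remains:

```python
from itertools import product

p = 5  # a small prime, chosen only for illustration

def trim(a):
    """Drop trailing zero coefficients (coefficient lists are low-degree first)."""
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_divmod(a, b):
    """Quotient and remainder of a by b over F_p (b nonzero)."""
    a = trim(a[:])
    inv = pow(b[-1], -1, p)
    q = [0] * max(len(a) - len(b) + 1, 0)
    while len(a) >= len(b):
        c = a[-1] * inv % p          # chosen so the leading term cancels
        d = len(a) - len(b)
        q[d] = c
        for i, bi in enumerate(b):
            a[d + i] = (a[d + i] - c * bi) % p
        trim(a)
    return q, a

def monic_polys(deg):
    for coeffs in product(range(p), repeat=deg):
        yield list(coeffs) + [1]

def factor_base(B):
    """All monic irreducibles of degree < B: the set S of the text."""
    irr = []
    for d in range(1, B):
        for f in monic_polys(d):
            # f is irreducible iff no irreducible of degree <= d//2 divides it
            if all(poly_divmod(f, g)[1] for g in irr if len(g) - 1 <= d // 2):
                irr.append(f)
    return irr

def is_B_smooth(f, B):
    """Divide out factor-base elements; f is B-smooth iff only a unit remains."""
    for g in factor_base(B):
        q, r = poly_divmod(f, g)
        while not r:
            f = q
            q, r = poly_divmod(f, g)
    return len(f) <= 1

# x^2 - 1 = (x - 1)(x + 1) splits into degree-1 factors, so it is 2-smooth;
# x^2 + 2 has no root mod 5, hence is irreducible over F_5, and is not.
assert is_B_smooth([4, 0, 1], 2) and not is_B_smooth([2, 0, 1], 2)
```

Dividing out each g as many times as possible and checking whether a unit is left is the same reduce-to-one test the sieving step described below performs on candidate functions.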
A pair of functions ( r , s ) {\displaystyle (r,s)} is doubly-smooth if r m + s {\displaystyle rm+s} and N ( r y + s ) {\displaystyle N(ry+s)} are both smooth, where N ( ⋅ ) {\displaystyle N(\cdot )} is the norm of an element of K {\displaystyle K} over F p {\displaystyle \mathbb {F} _{p}} , m ∈ F p [ x ] {\displaystyle m\in \mathbb {F} _{p}[x]} is some parameter and r y + s {\displaystyle ry+s} is viewed as an element of the function field of C {\displaystyle C} . The sieving step of the algorithm consists of finding doubly-smooth pairs of functions. In the subsequent step we use them to find linear relations involving the logarithms of the functions in the decompositions. By solving a linear system we then calculate those logarithms. In the reduction step we express log a ⁡ ( b ) {\displaystyle \log _{a}(b)} as a combination of the logarithms found before and thus solve the DLP. === Precomputation === ==== Parameter selection ==== The algorithm requires the following parameters: an irreducible function f {\displaystyle f} of degree n {\displaystyle n} , a function m ∈ F p [ x ] {\displaystyle m\in \mathbb {F} _{p}[x]} and a curve C ( x , y ) {\displaystyle C(x,y)} of given degree d {\displaystyle d} such that C ( x , m ) ≡ 0 mod f {\displaystyle C(x,m)\equiv 0{\text{ mod }}f} . Here n {\displaystyle n} is the power in the order of the base field F p n {\displaystyle \mathbb {F} _{p^{n}}} . Let K {\displaystyle K} denote the function field defined by C {\displaystyle C} . This leads to an isomorphism F p n ≃ F p [ x ] / f {\displaystyle \mathbb {F} _{p^{n}}\simeq \mathbb {F} _{p}[x]/f} and a homomorphism ϕ : F p [ x , y ] / C → F p [ x ] / f , y ↦ m . {\displaystyle \phi :\mathbb {F} _{p}[x,y]/C\to \mathbb {F} _{p}[x]/f,y\mapsto m.} Using the isomorphism each element of F p n {\displaystyle \mathbb {F} _{p^{n}}} can be considered as a polynomial in F p [ x ] / f {\displaystyle \mathbb {F} _{p}[x]/f} .
One also needs to set a smoothness bound B {\displaystyle B} for the factor base S {\displaystyle S} . ==== Sieving ==== In this step doubly-smooth pairs of functions ( r , s ) ∈ F p [ x ] × F p [ x ] {\displaystyle (r,s)\in \mathbb {F} _{p}[x]\times \mathbb {F} _{p}[x]} are found. One considers functions of the form f = ( r m + s ) N ( r y + s ) {\displaystyle f=(rm+s)N(ry+s)} , then divides f {\displaystyle f} by any g ∈ S {\displaystyle g\in S} as many times as possible. Any f {\displaystyle f} that is reduced to one in this process is B {\displaystyle B} -smooth. To implement this, Gray code can be used to efficiently step through multiples of a given polynomial. This is completely analogous to the sieving step in other sieving algorithms such as the Number Field Sieve or the index calculus algorithm. Instead of numbers one sieves through functions in F p [ x ] {\displaystyle \mathbb {F} _{p}[x]} but those functions can be factored into irreducible polynomials just as numbers can be factored into primes. ==== Finding linear relations ==== This is the most difficult part of the algorithm, involving function fields, places and divisors as defined above. The goal is to use the doubly-smooth pairs of functions to find linear relations involving the discrete logarithms of elements in the factor base. For each irreducible function in the factor base we find places v 1 , v 2 , . . . {\displaystyle v_{1},v_{2},...} of K {\displaystyle K} that lie over them and surrogate functions α 1 , α 2 , . . . {\displaystyle \alpha _{1},\alpha _{2},...} that correspond to the places. A surrogate function α i ∈ K {\displaystyle \alpha _{i}\in K} corresponding to a place v i {\displaystyle v_{i}} satisfies div ( α i ) = h ( v i − f v i u ) {\displaystyle {\text{div}}(\alpha _{i})=h(v_{i}-f_{v_{i}}u)} where h {\displaystyle h} is the class number of K {\displaystyle K} and u {\displaystyle u} is any fixed discrete valuation with f u = 1 {\displaystyle f_{u}=1} . 
The function defined this way is unique up to a constant in F p {\displaystyle \mathbb {F} _{p}} . By the definition of a divisor div ( r y + s ) = ∑ a i v i {\textstyle {\text{div}}(ry+s)=\sum a_{i}v_{i}} for a i = v i ( r y + s ) {\displaystyle a_{i}=v_{i}(ry+s)} . Using this and the fact that ∑ a i f v i = deg ⁡ ( div ( r y + s ) ) = 0 {\textstyle \sum a_{i}f_{v_{i}}=\deg({\text{div}}(ry+s))=0} we get the following expression: div ( ( r y + s ) h ) = ∑ h a i v i = ∑ h a i v i − ∑ h a i f v i v + h v ∑ a i f v i = ∑ a i h ( v i − f v i v ) = div ( ∏ α i a i ) {\displaystyle {\text{div}}((ry+s)^{h})=\sum ha_{i}v_{i}=\sum ha_{i}v_{i}-\sum ha_{i}f_{v_{i}}v+hv\sum a_{i}f_{v_{i}}=\sum a_{i}h(v_{i}-f_{v_{i}}v)={\text{div}}(\prod \alpha _{i}^{a_{i}})} where v = u {\displaystyle v=u} is the fixed valuation with f v = 1 {\displaystyle f_{v}=1} used in the definition of the surrogate functions. Then, using the fact that the divisor of a surrogate function is unique up to a constant, one gets ( r y + s ) h = c ∏ α i a i for some c ∈ F p ∗ {\displaystyle (ry+s)^{h}=c\prod \alpha _{i}^{a_{i}}{\text{ for some }}c\in \mathbb {F} _{p}^{*}} ⟹ ϕ ( ( r y + s ) h ) = ϕ ( c ) ∏ ϕ ( α i ) a i {\displaystyle \implies \phi ((ry+s)^{h})=\phi (c)\prod \phi (\alpha _{i})^{a_{i}}} We now use the fact that ϕ ( r y + s ) = r m + s {\displaystyle \phi (ry+s)=rm+s} and the known decomposition of this expression into irreducible polynomials. Let e g {\displaystyle e_{g}} be the power of g ∈ S {\displaystyle g\in S} in this decomposition. Then ∏ g ∈ S g h e g ≡ ϕ ( c ) ∏ ϕ ( α i ) a i mod f {\displaystyle \prod _{g\in S}g^{he_{g}}\equiv \phi (c)\prod \phi (\alpha _{i})^{a_{i}}{\text{ mod }}f} Here we can take the discrete logarithm of the equation up to a unit. This is called the restricted discrete logarithm log ∗ ⁡ ( x ) {\displaystyle \log _{*}(x)} . It is defined by the equation a log ∗ ⁡ ( x ) = u x {\displaystyle a^{\log _{*}(x)}=ux} for some unit u ∈ F p {\displaystyle u\in \mathbb {F} _{p}} .
∑ g ∈ S e g log ∗ ⁡ g ≡ ∑ a i h 1 log ∗ ⁡ ( ϕ ( α i ) ) mod ( p n − 1 ) / ( p − 1 ) , {\displaystyle \sum _{g\in S}e_{g}\log _{*}g\equiv \sum a_{i}h_{1}\log _{*}(\phi (\alpha _{i})){\text{ mod }}(p^{n}-1)/(p-1),} where h 1 {\displaystyle h_{1}} is the inverse of h {\displaystyle h} modulo ( p n − 1 ) / ( p − 1 ) {\displaystyle (p^{n}-1)/(p-1)} . The expressions h 1 log ∗ ⁡ ( ϕ ( α i ) ) {\displaystyle h_{1}\log _{*}(\phi (\alpha _{i}))} and the logarithms log ∗ ⁡ ( g ) {\displaystyle \log _{*}(g)} are unknown. Once enough equations of this form are found, a linear system can be solved to find log ∗ ⁡ ( g ) {\displaystyle \log _{*}(g)} for all g ∈ S {\displaystyle g\in S} . Taking the whole expression h 1 log ∗ ⁡ ( ϕ ( α i ) ) {\displaystyle h_{1}\log _{*}(\phi (\alpha _{i}))} as a single unknown saves time, since h {\displaystyle h} , h 1 {\displaystyle h_{1}} , α i {\displaystyle \alpha _{i}} and ϕ ( α i ) {\displaystyle \phi (\alpha _{i})} do not have to be computed. Eventually for each g ∈ S {\displaystyle g\in S} the unit corresponding to the restricted discrete logarithm can be calculated, which then gives log a ⁡ ( g ) = log ∗ ⁡ ( g ) − log a ⁡ ( u ) {\displaystyle \log _{a}(g)=\log _{*}(g)-\log _{a}(u)} . === Reduction step === First a l b {\displaystyle a^{l}b} mod f {\displaystyle f} is computed for a random l < n {\displaystyle l<n} . With sufficiently high probability this is n B {\displaystyle {\sqrt {nB}}} -smooth, so one can factor it as a l b = ∏ b i {\displaystyle a^{l}b=\prod b_{i}} for b i ∈ F p [ x ] {\displaystyle b_{i}\in \mathbb {F} _{p}[x]} with deg ⁡ ( b i ) < n B {\displaystyle \deg(b_{i})<{\sqrt {nB}}} . Each of these polynomials b i {\displaystyle b_{i}} can be reduced to polynomials of smaller degree using a generalization of the Coppersmith method. We can reduce the degree until we get a product of B {\displaystyle B} -smooth polynomials.
Then, taking the logarithm to the base a {\displaystyle a} , we can eventually compute log a ⁡ ( b ) = ∑ g i ∈ S log a ⁡ ( g i ) − l {\displaystyle \log _{a}(b)=\sum _{g_{i}\in S}\log _{a}(g_{i})-l} , which solves the DLP. == Complexity == The Function Field Sieve is thought to run in subexponential time exp ⁡ ( ( 32 9 3 + o ( 1 ) ) ( ln ⁡ p ) 1 3 ( ln ⁡ ln ⁡ p ) 2 3 ) = L p [ 1 3 , 32 9 3 ] {\displaystyle \exp \left(\left({\sqrt[{3}]{\frac {32}{9}}}+o(1)\right)(\ln p)^{\frac {1}{3}}(\ln \ln p)^{\frac {2}{3}}\right)=L_{p}\left[{\frac {1}{3}},{\sqrt[{3}]{\frac {32}{9}}}\right]} using the L-notation. There is no rigorous proof of this complexity, since it relies on some heuristic assumptions. For example, in the sieving step we assume that numbers of the form ( r m + s ) N ( r y + s ) {\displaystyle (rm+s)N(ry+s)} behave like random numbers in a given range. == Comparison with other methods == There are two other well-known algorithms that solve the discrete logarithm problem in sub-exponential time: the index calculus algorithm and a version of the Number Field Sieve. In their easiest forms both solve the DLP in a finite field of prime order, but they can be expanded to solve the DLP in F p n {\displaystyle \mathbb {F} _{p^{n}}} as well. The Number Field Sieve for the DLP in F p n {\displaystyle \mathbb {F} _{p^{n}}} has a complexity of L p [ 1 / 3 , ( 64 / 9 ) 1 / 3 + o ( 1 ) ] {\displaystyle L_{p}[1/3,(64/9)^{1/3}+o(1)]} and is therefore slightly slower than the best performance of the Function Field Sieve. However, it is faster than the Function Field Sieve when n ≪ ( log ⁡ ( p ) ) 1 / 2 {\displaystyle n\ll (\log(p))^{1/2}} . It is not surprising that there exist two similar algorithms, one with number fields and the other one with function fields. In fact there is an extensive analogy between these two kinds of global fields.
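The reduction step described above has a direct integer analogue in the classic index calculus: multiply b by random powers of a until the result factors completely over the factor base, then read log_a(b) off the precomputed factor-base logarithms. A toy Python sketch follows; the prime, generator and factor base are chosen purely for illustration, and the factor-base logs are found here by brute force rather than by the linear-algebra phase:

```python
import random

p, a = 107, 2        # 2 generates the multiplicative group of F_107
FB = [2, 3, 5, 7]    # factor base

# Stand-in for the precomputation: log_a(q) for each factor-base element,
# found by brute force (only feasible because p is tiny).
known_logs = {q: next(x for x in range(p - 1) if pow(a, x, p) == q) for q in FB}

def factor_over_base(n):
    """Exponent vector of n over FB, or None if n is not FB-smooth."""
    fac = {}
    for q in FB:
        while n % q == 0:
            fac[q] = fac.get(q, 0) + 1
            n //= q
    return fac if n == 1 else None

def dlog(b):
    while True:
        l = random.randrange(p - 1)
        fac = factor_over_base(pow(a, l, p) * b % p)
        if fac is not None:
            # a^l * b = prod q^e_q  =>  log(b) = sum e_q*log(q) - l  (mod p-1)
            return (sum(e * known_logs[q] for q, e in fac.items()) - l) % (p - 1)

x = dlog(13)
assert pow(a, x, p) == 13
```

The randomize-until-smooth loop mirrors the choice of a random l in the reduction step, and the final subtraction of l matches the formula above.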
The index calculus algorithm is much easier to state than the Function Field Sieve and the Number Field Sieve since it does not involve any advanced algebraic structures. It is asymptotically slower with a complexity of L p [ 1 / 2 , 2 ] {\displaystyle L_{p}[1/2,{\sqrt {2}}]} . The main reason why the Number Field Sieve and the Function Field Sieve are faster is that these algorithms can run with a smaller smoothness bound B {\displaystyle B} , so most of the computations can be done with smaller numbers. == See also == Algebraic function field Number field sieve Index calculus algorithm == References ==
Wikipedia/Function_field_sieve
The Fast Library for Number Theory (FLINT) is a C library for number theory applications. The two major areas of functionality currently implemented in FLINT are polynomial arithmetic over the integers and a quadratic sieve. The library is designed to be compiled with the GNU Multi-Precision Library (GMP) and is released under the GNU General Public License. It is developed by William Hart of the University of Kaiserslautern (formerly University of Warwick) and David Harvey of the University of New South Wales (formerly Harvard University) to address the speed limitations of the PARI and NTL libraries. == Design Philosophy == Asymptotically fast algorithms; implementations as fast as or faster than alternatives; written in pure C; reliance on GMP; extensively tested; extensively profiled; support for parallel computation. == Functionality == Polynomial arithmetic over the integers; quadratic sieve. == References == == Further reading == FLINT 1.0.9: Fast Library for Number Theory by William Hart and David Harvey
Wikipedia/Fast_Library_for_Number_Theory
In automatic control, a regulator is a device which has the function of maintaining a designated characteristic. It performs the activity of managing or maintaining a range of values in a machine. The measurable property of a device is managed closely by specified conditions or an advance set value; or it can be a variable according to a predetermined arrangement scheme. It can be used generally to connote any set of various controls or devices for regulating or controlling items or objects. Examples are a voltage regulator (which can be a transformer whose voltage ratio of transformation can be adjusted, or an electronic circuit that produces a defined voltage), a pressure regulator, such as a diving regulator, which maintains its output at a fixed pressure lower than its input, and a fuel regulator (which controls the supply of fuel). Regulators can be designed to control anything from gases or fluids to light or electricity. Speed can be regulated by electronic, mechanical, or electro-mechanical means. Such instances include: electronic regulators, as used in modern railway sets, where the voltage is raised or lowered to control the speed of the engine; mechanical systems such as valves, as used in fluid control systems (purely mechanical pre-automotive systems included designs such as the Watt centrifugal governor, whereas modern systems may have electronic fluid-speed-sensing components directing solenoids to set the valve to the desired rate); complex electro-mechanical speed control systems used to maintain speed in modern cars (cruise control), often including hydraulic components; and an aircraft engine's constant speed unit, which changes the propeller pitch to maintain engine speed. == See also == Controller (control theory) Governor (device) Process control
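As a concrete, heavily simplified illustration of a regulator holding a designated value, the Python sketch below applies a proportional correction against a constant load; every gain and value is invented for the example. Note the small steady-state offset it settles into, a well-known limitation of purely proportional regulation:

```python
# Minimal sketch of a proportional regulator: the output is corrected in
# proportion to its deviation from a set value, the way a voltage regulator
# holds its output near a reference. All constants are illustrative.

def regulate(setpoint=12.0, steps=100, gain=0.5):
    output = 0.0
    for _ in range(steps):
        error = setpoint - output   # deviation from the designated value
        output += gain * error      # proportional correction
        output -= 0.05 * output     # disturbance: load drags the output down
    return output

v = regulate()
# settles near 10.9, short of the 12.0 set point: the steady-state error
# characteristic of proportional-only regulation under a constant load
assert 10.5 < v < 11.2
```

Adding an integral term is the standard way to remove that residual offset, which is why practical regulators are often PI or PID rather than purely proportional.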
Wikipedia/Regulator_(automatic_control)
Automation describes a wide range of technologies that reduce human intervention in processes, mainly by predetermining decision criteria, subprocess relationships, and related actions, as well as embodying those predeterminations in machines. Automation has been achieved by various means including mechanical, hydraulic, pneumatic, electrical, electronic devices, and computers, usually in combination. Complicated systems, such as modern factories, airplanes, and ships typically use combinations of all of these techniques. The benefits of automation include labor savings, reducing waste, savings in electricity costs, savings in material costs, and improvements to quality, accuracy, and precision. Automation includes the use of various equipment and control systems such as machinery, processes in factories, boilers, and heat-treating ovens, switching on telephone networks, steering, stabilization of ships, aircraft and other applications and vehicles with reduced human intervention. Examples range from a household thermostat controlling a boiler to a large industrial control system with tens of thousands of input measurements and output control signals. Automation has also found a home in the banking industry. In terms of control complexity, it can range from simple on-off control to multi-variable high-level algorithms. In the simplest type of an automatic control loop, a controller compares a measured value of a process with a desired set value and processes the resulting error signal to change some input to the process, in such a way that the process stays at its set point despite disturbances. This closed-loop control is an application of negative feedback to a system. The mathematical basis of control theory was begun in the 18th century and advanced rapidly in the 20th. The term automation, inspired by the earlier word automatic (coming from automaton), was not widely used before 1947, when Ford established an automation department.
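The closed-loop idea in the paragraph above can be sketched in a few lines of Python; the thermal model and all constants below are invented purely to show the compare-and-correct structure of the thermostat-and-boiler example:

```python
# Minimal closed-loop sketch: an on-off (bang-bang) controller keeps a
# simulated room temperature near a set point despite heat loss toward a
# cooler ambient, as a household thermostat does with a boiler.

def simulate(setpoint=20.0, steps=200):
    temp, history = 15.0, []
    for _ in range(steps):
        error = setpoint - temp              # compare measurement to set value
        heater_on = error > 0                # on-off control law
        temp += 0.5 if heater_on else 0.0    # heating input
        temp -= 0.1 * (temp - 10.0) / 10.0   # disturbance: loss toward ambient
        history.append(temp)
    return history

hist = simulate()
# after settling, the temperature cycles in a narrow band around the set point
assert all(19.5 < t < 20.5 for t in hist[50:])
```

The persistent small oscillation around the set point is the hallmark of on-off control; the more sophisticated controllers mentioned later in the article (proportional and beyond) make graded corrections instead.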
It was during this time that the industry was rapidly adopting feedback controllers, which were introduced in the 1930s. The World Bank's World Development Report of 2019 shows evidence that the new industries and jobs in the technology sector outweigh the economic effects of workers being displaced by automation. Job losses and downward mobility blamed on automation have been cited as one of many factors in the resurgence of nationalist, protectionist and populist politics in the US, UK and France, among other countries since the 2010s. == History == === Early history === It was a preoccupation of the Greeks and Arabs (in the period between about 300 BC and about 1200 AD) to keep accurate track of time. In Ptolemaic Egypt, about 270 BC, Ctesibius described a float regulator for a water clock, a device not unlike the ball and cock in a modern flush toilet. This was the earliest feedback-controlled mechanism. The appearance of the mechanical clock in the 14th century made the water clock and its feedback control system obsolete. The Persian Banū Mūsā brothers, in their Book of Ingenious Devices (850 AD), described a number of automatic controls. Two-step level controls for fluids, a form of discontinuous variable structure controls, were developed by the Banu Musa brothers. They also described a feedback controller. The design of feedback control systems up through the Industrial Revolution was by trial-and-error, together with a great deal of engineering intuition. It was not until the mid-19th century that the stability of feedback control systems was analyzed using mathematics, the formal language of automatic control theory. The centrifugal governor was invented by Christiaan Huygens in the seventeenth century, and used to adjust the gap between millstones.
=== Industrial Revolution in Western Europe === The introduction of prime movers, or self-driven machines, advanced grain mills, furnaces, boilers, and the steam engine created a new requirement for automatic control systems including temperature regulators (invented in 1624; see Cornelius Drebbel), pressure regulators (1681), float regulators (1700) and speed control devices. Another control mechanism was used to tent the sails of windmills. It was patented by Edmund Lee in 1745. Also in 1745, Jacques de Vaucanson invented the first automated loom. Around 1800, Joseph Marie Jacquard created a punch-card system to program looms. In 1771 Richard Arkwright invented the first fully automated spinning mill driven by water power, known at the time as the water frame. An automatic flour mill was developed by Oliver Evans in 1785, making it the first completely automated industrial process. A centrifugal governor was used by Mr. Bunce of England in 1784 as part of a model steam crane. The centrifugal governor was adopted by James Watt for use on a steam engine in 1788 after Watt's partner Boulton saw one at a flour mill Boulton & Watt were building. The governor could not actually hold a set speed; the engine would assume a new constant speed in response to load changes. The governor was able to handle smaller variations such as those caused by fluctuating heat load to the boiler. Also, there was a tendency for oscillation whenever there was a speed change. As a consequence, engines equipped with this governor were not suitable for operations requiring constant speed, such as cotton spinning. Several improvements to the governor, plus improvements to valve cut-off timing on the steam engine, made the engine suitable for most industrial uses before the end of the 19th century. Advances in the steam engine stayed well ahead of science, both thermodynamics and control theory.
The governor received relatively little scientific attention until James Clerk Maxwell published a paper that established the beginning of a theoretical basis for understanding control theory. === 20th century === Relay logic was introduced with factory electrification, which underwent rapid adoption from 1900 through the 1920s. Central electric power stations were also undergoing rapid growth and the operation of new high-pressure boilers, steam turbines and electrical substations created a large demand for instruments and controls. Central control rooms became common in the 1920s, but as late as the early 1930s, most process controls were on-off. Operators typically monitored charts drawn by recorders that plotted data from instruments. To make corrections, operators manually opened or closed valves or turned switches on or off. Control rooms also used color-coded lights to send signals to workers in the plant to manually make certain changes. The development of the electronic amplifier during the 1920s, which was important for long-distance telephony, required a higher signal-to-noise ratio, which was solved by negative feedback noise cancellation. This and other telephony applications contributed to control theory. In the 1940s and 1950s, German mathematician Irmgard Flügge-Lotz developed the theory of discontinuous automatic controls, which found military applications during the Second World War in fire control systems and aircraft navigation systems. Controllers, which were able to make calculated changes in response to deviations from a set point rather than on-off control, began being introduced in the 1930s. Controllers allowed manufacturing to continue showing productivity gains to offset the declining influence of factory electrification. Factory productivity was greatly increased by electrification in the 1920s. U.S. manufacturing productivity growth fell from 5.2%/yr 1919–29 to 2.76%/yr 1929–41.
Alexander Field notes that spending on non-medical instruments increased significantly from 1929 to 1933 and remained strong thereafter. The First and Second World Wars saw major advancements in the field of mass communication and signal processing. Other key advances in automatic controls include differential equations, stability theory and system theory (1938), frequency domain analysis (1940), ship control (1950), and stochastic analysis (1941). Starting in 1958, various systems based on solid-state digital logic modules for hard-wired programmed logic controllers (the predecessors of programmable logic controllers [PLC]) emerged to replace electro-mechanical relay logic in industrial control systems for process control and automation, including early Telefunken/AEG Logistat, Siemens Simatic, Philips/Mullard/Valvo Norbit, BBC Sigmatronic, ACEC Logacec, Akkord Estacord, Krone Mibakron, Bistat, Datapac, Norlog, SSR, or Procontic systems. In 1959 Texaco's Port Arthur Refinery became the first chemical plant to use digital control. Conversion of factories to digital control began to spread rapidly in the 1970s as the price of computer hardware fell. === Significant applications === The automatic telephone switchboard was introduced in 1892 along with dial telephones. By 1929, 31.9% of the Bell system was automatic. Automatic telephone switching originally used vacuum tube amplifiers and electro-mechanical switches, which consumed a large amount of electricity. Call volume eventually grew so fast that it was feared the telephone system would consume all electricity production, prompting Bell Labs to begin research on the transistor. The logic performed by telephone switching relays was the inspiration for the digital computer. The first commercially successful glass bottle-blowing machine was an automatic model introduced in 1905.
The machine, operated by a two-man crew working 12-hour shifts, could produce 17,280 bottles in 24 hours, compared to 2,880 bottles made by a crew of six men and boys working in a shop for a day. The cost of making bottles by machine was 10 to 12 cents per gross compared to $1.80 per gross by the manual glassblowers and helpers. Sectional electric drives were developed using control theory. Sectional electric drives are used on different sections of a machine where a precise differential must be maintained between the sections. In steel rolling, the metal elongates as it passes through pairs of rollers, which must run at successively faster speeds. In paper making, the paper sheet shrinks as it passes around steam-heated drying cylinders arranged in groups, which must run at successively slower speeds. The first application of a sectional electric drive was on a paper machine in 1919. One of the most important developments in the steel industry during the 20th century was continuous wide strip rolling, developed by Armco in 1928. Before automation, many chemicals were made in batches. In 1930, with the widespread use of instruments and the emerging use of controllers, the founder of Dow Chemical Co. was advocating continuous production. Self-acting machine tools that displaced hand dexterity so they could be operated by boys and unskilled laborers were developed by James Nasmyth in the 1840s. Machine tools were automated with Numerical control (NC) using punched paper tape in the 1950s. This soon evolved into computerized numerical control (CNC). Today extensive automation is practiced in practically every type of manufacturing and assembly process.
Some of the larger processes include electrical power generation, oil refining, chemicals, steel mills, plastics, cement plants, fertilizer plants, pulp and paper mills, automobile and truck assembly, aircraft production, glass manufacturing, natural gas separation plants, food and beverage processing, canning and bottling and manufacture of various kinds of parts. Robots are especially useful in hazardous applications like automobile spray painting. Robots are also used to assemble electronic circuit boards. Automotive welding is done with robots and automatic welders are used in applications like pipelines. === Space/computer age === With the advent of the space age in 1957, controls design, particularly in the United States, turned away from the frequency-domain techniques of classical control theory and back to the differential equation techniques of the late 19th century, which were couched in the time domain. During the 1940s and 1950s, German mathematician Irmgard Flügge-Lotz developed the theory of discontinuous automatic control, which became widely used in hysteresis control systems such as navigation systems, fire-control systems, and electronics. Through Flügge-Lotz and others, the modern era saw time-domain design for nonlinear systems (1961), navigation (1960), optimal control and estimation theory (1962), nonlinear control theory (1969), digital control and filtering theory (1974), and the personal computer (1983). == Advantages, disadvantages, and limitations == Perhaps the most cited advantage of automation in industry is that it is associated with faster production and cheaper labor costs. Another benefit could be that it replaces hard, physical, or monotonous work. Additionally, tasks that take place in hazardous environments or that are otherwise beyond human capabilities can be done by machines, as machines can operate even under extreme temperatures or in atmospheres that are radioactive or toxic.
They can also be maintained with simple quality checks. However, at present not all tasks can be automated, and some tasks are more expensive to automate than others. Initial costs of installing the machinery in factory settings are high, and failure to maintain a system could result in the loss of the product itself. Moreover, some studies seem to indicate that industrial automation could impose ill effects beyond operational concerns, including worker displacement due to systemic loss of employment and compounded environmental damage; however, these findings are both convoluted and controversial in nature, and could potentially be circumvented. The main advantages of automation are: increased throughput or productivity; improved quality; increased predictability; improved robustness (consistency) of processes or product; increased consistency of output; reduced direct human labor costs and expenses; reduced cycle time; increased accuracy; relieving humans of monotonously repetitive work; required work in development, deployment, maintenance, and operation of automated processes, often structured as "jobs"; and increased human freedom to do other things. Automation primarily describes machines replacing human action, but it is also loosely associated with mechanization, machines replacing human labor. Coupled with mechanization, extending human capabilities in terms of size, strength, speed, endurance, visual range and acuity, hearing frequency and precision, electromagnetic sensing and effecting, etc., advantages include: relieving humans of dangerous work stresses and occupational injuries (e.g., fewer strained backs from lifting heavy objects); and removing humans from dangerous environments (e.g., fire, space, volcanoes, nuclear facilities, underwater). The main disadvantages of automation are: high initial cost; faster production without human intervention, which can mean faster unchecked production of defects where automated processes are defective; scaled-up capacities, which can mean scaled-up problems when systems fail, releasing dangerous toxins, forces, energies, etc., at scaled-up rates; human adaptiveness that is often poorly understood by automation initiators; the difficulty of anticipating every contingency and developing fully preplanned automated responses for every situation; the discoveries inherent in automating processes, which can require unanticipated iterations to resolve, causing unanticipated costs and delays; and the disruption of people anticipating employment income when others deploy automation where no similar income is readily available. === Paradox of automation === The paradox of automation says that the more efficient the automated system, the more crucial the human contribution of the operators. Humans are less involved, but their involvement becomes more critical. Lisanne Bainbridge, a cognitive psychologist, identified these issues notably in her widely cited paper "Ironies of Automation." If an automated system has an error, it will multiply that error until it is fixed or shut down. This is where human operators come in. A fatal example of this was Air France Flight 447, where a failure of automation put the pilots into a manual situation they were not prepared for. === Limitations === Current technology is unable to automate all the desired tasks. Many operations using automation have large amounts of invested capital and produce high volumes of products, making malfunctions extremely costly and potentially hazardous. Therefore, some personnel are needed to ensure that the entire system functions properly and that safety and product quality are maintained. As a process becomes increasingly automated, there is less and less labor to be saved or quality improvement to be gained. This is an example of both diminishing returns and the logistic function. As more and more processes become automated, there are fewer remaining non-automated processes.
This is an example of the exhaustion of opportunities. New technological paradigms may, however, set new limits that surpass the previous limits.

==== Current limitations ====
Many roles for humans in industrial processes presently lie beyond the scope of automation. Human-level pattern recognition, language comprehension, and language production ability are well beyond the capabilities of modern mechanical and computer systems (but see Watson computer). Tasks requiring subjective assessment or synthesis of complex sensory data, such as scents and sounds, as well as high-level tasks such as strategic planning, currently require human expertise. In many cases, the use of humans is more cost-effective than mechanical approaches even where the automation of industrial tasks is possible. Therefore, algorithmic management, the digital rationalization of human labor rather than its substitution, has emerged as an alternative technological strategy. Overcoming these obstacles is a theorized path to post-scarcity economics.

=== Societal impact and unemployment ===
Increased automation often causes workers to feel anxious about losing their jobs as technology renders their skills or experience unnecessary. Early in the Industrial Revolution, when inventions like the steam engine were making some job categories expendable, workers forcefully resisted these changes. The Luddites, for instance, were English textile workers who protested the introduction of weaving machines by destroying them. More recently, some residents of Chandler, Arizona, have slashed the tires of and thrown rocks at self-driving cars, in protest over the cars' perceived threat to human safety and job prospects. The relative anxiety about automation reflected in opinion polls seems to correlate closely with the strength of organized labor in a given region or nation.
For example, while a study by the Pew Research Center indicated that 72% of Americans are worried about increasing automation in the workplace, 80% of Swedes see automation and artificial intelligence (AI) as a good thing, due to the country's still-powerful unions and a more robust national safety net. According to one estimate, 47% of all current jobs in the US have the potential to be fully automated by 2033. Furthermore, wages and educational attainment appear to be strongly negatively correlated with an occupation's risk of being automated. Erik Brynjolfsson and Andrew McAfee argue that "there's never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value. However, there's never been a worse time to be a worker with only 'ordinary' skills and abilities to offer, because computers, robots, and other digital technologies are acquiring these skills and abilities at an extraordinary rate." Others, however, argue that highly skilled professional jobs such as lawyer, doctor, engineer, and journalist are also at risk of automation. According to a 2020 study in the Journal of Political Economy, automation has robust negative effects on employment and wages: "One more robot per thousand workers reduces the employment-to-population ratio by 0.2 percentage points and wages by 0.42%." A 2025 study in the American Economic Journal found that the introduction of industrial robots between 1993 and 2014 reduced the employment of men and women by 3.7 and 1.6 percentage points, respectively. Research by Carl Benedikt Frey and Michael Osborne of the Oxford Martin School argued that employees engaged in "tasks following well-defined procedures that can easily be performed by sophisticated algorithms" are at risk of displacement, and that 47% of jobs in the US were at risk.
The study, released as a working paper in 2013 and published in 2017, predicted that automation would put low-paid physical occupations most at risk, based on a survey of colleagues' opinions. However, according to a study published in the McKinsey Quarterly in 2015, the impact of computerization in most cases is not the replacement of employees but the automation of portions of the tasks they perform. The methodology of the McKinsey study has been heavily criticized for lacking transparency and relying on subjective assessments. The methodology of Frey and Osborne has likewise been criticized as lacking evidence, historical awareness, or credible methodology. Additionally, the Organisation for Economic Co-operation and Development (OECD) found that across the 21 OECD countries, 9% of jobs are automatable. Based on a formula by Gilles Saint-Paul, an economist at Toulouse 1 University, the demand for unskilled human capital declines at a slower rate than the demand for skilled human capital increases. In the long run and for society as a whole, automation has led to cheaper products, lower average work hours, and new industries forming (i.e., robotics industries, computer industries, design industries). These new industries provide many high-salary, skill-based jobs to the economy. By 2030, between 3 and 14 percent of the global workforce will be forced to switch job categories due to automation eliminating jobs in entire sectors. While the number of jobs lost to automation is often offset by jobs gained from technological advances, the jobs gained are not of the same type as those lost, leading to increasing unemployment in the lower-middle class. This occurs largely in the US and other developed countries, where technological advances contribute to higher demand for highly skilled labor while demand for middle-wage labor continues to fall.
Economists call this trend "income polarization", in which unskilled labor wages are driven down and skilled labor wages are driven up, and it is predicted to continue in developed economies.

== Lights-out manufacturing ==
Lights-out manufacturing is a production system with no human workers, intended to eliminate labor costs. It grew in popularity in the U.S. when General Motors in 1982 implemented a "hands-off" manufacturing strategy to "replace risk-averse bureaucracy with automation and robots". However, the factory never reached full "lights out" status. The expansion of lights-out manufacturing requires:
Reliability of equipment
Long-term mechanic capabilities
Planned preventive maintenance
Commitment from the staff

== Health and environment ==
The costs of automation to the environment differ depending on the technology, product, or engine automated. Some automated engines consume more of the Earth's energy resources than the engines they replace, and vice versa. Hazardous operations, such as oil refining, the manufacturing of industrial chemicals, and all forms of metal working, were always early contenders for automation. The automation of vehicles could prove to have a substantial impact on the environment, although the nature of this impact could be beneficial or harmful depending on several factors. Because automated vehicles are much less likely to get into accidents compared to human-driven vehicles, some precautions built into current models (such as anti-lock brakes or laminated glass) would not be required for self-driving versions. Removal of these safety features reduces the weight of the vehicle, and coupled with more precise acceleration and braking, as well as fuel-efficient route mapping, can increase fuel economy and reduce emissions.
Despite this, some researchers theorize that an increase in the production of self-driving cars could lead to a boom in vehicle ownership and usage, which could potentially negate any environmental benefits if the cars are used more frequently. Automation of homes and home appliances is also thought to impact the environment. A study of energy consumption of automated homes in Finland showed that smart homes could reduce energy consumption by monitoring levels of consumption in different areas of the home and adjusting consumption to reduce energy leaks (e.g., automatically reducing consumption during the nighttime when activity is low). This study, along with others, indicated that the smart home's ability to monitor and adjust consumption levels would reduce unnecessary energy usage. However, some research suggests that smart homes might not be as efficient as non-automated homes. A more recent study has indicated that, while monitoring and adjusting consumption levels does decrease unnecessary energy use, this process requires monitoring systems that themselves consume energy. The energy required to run these systems sometimes negates their benefits, resulting in little to no ecological benefit.

== Convertibility and turnaround time ==
Another major shift in automation is the increased demand for flexibility and convertibility in manufacturing processes. Manufacturers are increasingly demanding the ability to easily switch from manufacturing Product A to manufacturing Product B without having to completely rebuild the production lines. Flexibility and distributed processes have led to the introduction of Automated Guided Vehicles with Natural Features Navigation. Digital electronics helped too. Analog-based instrumentation was replaced by digital equivalents, which can be more accurate and flexible and offer greater scope for more sophisticated configuration, parametrization, and operation.
This was accompanied by the fieldbus revolution, which provided a networked (i.e., single-cable) means of communicating between control systems and field-level instrumentation, eliminating hard-wiring. Discrete manufacturing plants adopted these technologies quickly. The more conservative process industries, with their longer plant life cycles, have been slower to adopt them, and analog-based measurement and control still dominate. The growing use of Industrial Ethernet on the factory floor is pushing these trends still further, enabling manufacturing plants to be integrated more tightly within the enterprise, via the internet if necessary. Global competition has also increased demand for Reconfigurable Manufacturing Systems.

== Automation tools ==
Engineers can now have numerical control over automated devices. The result has been a rapidly expanding range of applications and human activities. Computer-aided technologies (or CAx) now serve as the basis for mathematical and organizational tools used to create complex systems. Notable examples of CAx include computer-aided design (CAD software) and computer-aided manufacturing (CAM software). The improved design, analysis, and manufacture of products enabled by CAx has been beneficial for industry. Information technology, together with industrial machinery and processes, can assist in the design, implementation, and monitoring of control systems. One example of an industrial control system is a programmable logic controller (PLC). PLCs are specialized, hardened computers that are frequently used to synchronize the flow of inputs from (physical) sensors and events with the flow of outputs to actuators and events. Human-machine interfaces (HMI) or computer-human interfaces (CHI), formerly known as man-machine interfaces, are usually employed to communicate with PLCs and other computers. Service personnel who monitor and control through HMIs can be called by different names.
In the industrial process and manufacturing environments, they are called operators or something similar. In boiler houses and central utility departments, they are called stationary engineers. Different types of automation tools exist:
ANN – Artificial neural network
DCS – Distributed control system
HMI – Human-machine interface
RPA – Robotic process automation
SCADA – Supervisory control and data acquisition
PLC – Programmable logic controller
Instrumentation
Motion control
Robotics

Host simulation software (HSS) is a commonly used testing tool that is used to test the equipment software. HSS is used to test equipment performance with respect to factory automation standards (timeouts, response time, processing time).

== Cognitive automation ==
Cognitive automation, as a subset of AI, is an emerging genus of automation enabled by cognitive computing. Its primary concern is the automation of clerical tasks and workflows that consist of structuring unstructured data. Cognitive automation relies on multiple disciplines: natural language processing, real-time computing, machine learning algorithms, big data analytics, and evidence-based learning. According to Deloitte, cognitive automation enables the replication of human tasks and judgment "at rapid speeds and considerable scale." Such tasks include:
Document redaction
Data extraction and document synthesis / reporting
Contract management
Natural language search
Customer, employee, and stakeholder onboarding
Manual activities and verifications
Follow-up and email communications

== Recent and emerging applications ==

=== CAD AI ===
Artificially intelligent computer-aided design (CAD) can use text-to-3D, image-to-3D, and video-to-3D to automate 3D modeling. AI CAD libraries could also be developed using linked open data of schematics and diagrams. AI CAD assistants are used as tools to help streamline workflow.
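The "structuring unstructured data" workflows described under cognitive automation above are, in practice, driven by natural language processing and machine learning. As a deliberately simplified, rule-based sketch of the same idea, the snippet below pulls structured fields out of free text; the document layout, field names, and patterns are hypothetical, invented for the example:

```python
import re

# Hypothetical invoice text; the layout and labels are illustrative only.
document = """
Invoice INV-2024-0117
Date: 2024-03-05
Total due: $1,249.50
"""

def extract_fields(text):
    """Pull structured fields out of unstructured text with simple patterns."""
    fields = {}
    m = re.search(r"Invoice\s+(\S+)", text)
    if m:
        fields["invoice_id"] = m.group(1)
    m = re.search(r"Date:\s*(\d{4}-\d{2}-\d{2})", text)
    if m:
        fields["date"] = m.group(1)
    m = re.search(r"Total due:\s*\$([\d,]+\.\d{2})", text)
    if m:
        fields["total"] = float(m.group(1).replace(",", ""))
    return fields

print(extract_fields(document))
# {'invoice_id': 'INV-2024-0117', 'date': '2024-03-05', 'total': 1249.5}
```

Real cognitive automation replaces the hand-written patterns with learned models, but the end product is the same: a structured record extracted from an unstructured document.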
=== Automated power production ===
Technologies like solar panels, wind turbines, and other renewable energy sources, together with smart grids, micro-grids, and battery storage, can automate power production.

=== Agricultural production ===
Many agricultural operations are automated with machinery and equipment that improve diagnosis, decision-making, and/or performance. Agricultural automation can relieve the drudgery of agricultural work, improve the timeliness and precision of agricultural operations, raise productivity and resource-use efficiency, build resilience, and improve food quality and safety. Increased productivity can free up labour, allowing agricultural households to spend more time elsewhere. The technological evolution in agriculture has resulted in progressive shifts to digital equipment and robotics. Motorized mechanization using engine power automates the performance of agricultural operations such as ploughing and milking. With digital automation technologies, it also becomes possible to automate the diagnosis and decision-making of agricultural operations. For example, autonomous crop robots can harvest and seed crops, while drones can gather information to help automate input application. Precision agriculture often employs such automation technologies. Motorized mechanization has generally increased in recent years. Sub-Saharan Africa is the only region where the adoption of motorized mechanization has stalled over the past decades. Automation technologies are increasingly used for managing livestock, though evidence on adoption is lacking. Global automatic milking system sales have increased over recent years, but adoption is likely concentrated in Northern Europe and almost absent in low- and middle-income countries. Automated feeding machines for both cows and poultry also exist, but data and evidence regarding their adoption trends and drivers are likewise scarce.
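The automated diagnosis and decision-making described for agricultural operations can be illustrated with a toy irrigation controller; the sensor inputs, thresholds, and decision rules below are hypothetical assumptions, not drawn from any real precision-agriculture system:

```python
# Toy sketch of automated irrigation decision-making (illustrative only):
# irrigate when the soil is dry, unless meaningful rain is forecast.
def irrigation_decision(soil_moisture_pct, rain_forecast_mm, threshold_pct=30.0):
    """Decide whether to irrigate from a soil-moisture reading and a forecast."""
    if soil_moisture_pct >= threshold_pct:
        return "skip: soil moisture adequate"
    if rain_forecast_mm >= 5.0:
        return "skip: rain expected"
    return "irrigate"

print(irrigation_decision(22.0, 0.0))  # irrigate
print(irrigation_decision(22.0, 8.0))  # skip: rain expected
print(irrigation_decision(45.0, 0.0))  # skip: soil moisture adequate
```

In a deployed system, the moisture reading would come from field sensors and the forecast from a weather service, with the decision actuating valves rather than returning a string.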
=== Retail ===
Many supermarkets and even smaller stores are rapidly introducing self-checkout systems, reducing the need for checkout workers. In the U.S., the retail industry employed 15.9 million people as of 2017 (around 1 in 9 Americans in the workforce). Globally, an estimated 192 million workers could be affected by automation, according to research by Eurasia Group. Online shopping could be considered a form of automated retail, as payment and checkout are handled through an automated online transaction processing system, with the share of online retail jumping from 5.1% in 2011 to 8.3% in 2016. Two-thirds of books, music, and films are now purchased online. In addition, automation and online shopping could reduce demand for shopping malls and retail property, which in the United States is currently estimated to account for 31% of all commercial property, or around 7 billion square feet (650 million square metres). Amazon has gained much of the growth in recent years for online shopping, accounting for half of the growth in online retail in 2016. Other forms of automation can also be an integral part of online shopping, for example, the deployment of automated warehouse robotics such as that applied by Amazon using Kiva Systems.

=== Food and drink ===
The food retail industry has started to apply automation to the ordering process; McDonald's has introduced touch-screen ordering and payment systems in many of its restaurants, reducing the need for as many cashier employees. The University of Texas at Austin has introduced fully automated cafe retail locations. Some cafes and restaurants have utilized mobile and tablet "apps" to make the ordering process more efficient, with customers ordering and paying on their own devices. Some restaurants have automated food delivery to customers' tables using a conveyor belt system. The use of robots is sometimes employed to replace waiting staff.
=== Construction ===
Automation in construction is the combination of methods, processes, and systems that allow for greater machine autonomy in construction activities. Construction automation may have multiple goals, including but not limited to reducing jobsite injuries, decreasing activity completion times, and assisting with quality control and quality assurance.

=== Mining ===
Automated mining involves the removal of human labor from the mining process. The mining industry is currently in transition towards automation. It can still require a large amount of human capital, particularly in the third world, where labor costs are low and there is therefore less incentive to increase efficiency through automation.

=== Video surveillance ===
The Defense Advanced Research Projects Agency (DARPA) started the research and development of the automated visual surveillance and monitoring (VSAM) program, between 1997 and 1999, and airborne video surveillance (AVS) programs, from 1998 to 2002. Currently, there is a major effort underway in the vision community to develop a fully automated tracking surveillance system. Automated video surveillance monitors people and vehicles in real time within a busy environment. Existing automated surveillance systems are classified by the environment they are primarily designed to observe (i.e., indoor, outdoor, or airborne), the number of sensors that the automated system can handle, and the mobility of the sensors (i.e., stationary camera vs. mobile camera). The purpose of a surveillance system is to record the properties and trajectories of objects in a given area and to generate warnings or notify the designated authorities when particular events occur.

=== Highway systems ===
As demands for safety and mobility have grown and technological possibilities have multiplied, interest in automation has grown. Seeking to accelerate the development and introduction of fully automated vehicles and highways, the U.S.
Congress authorized more than $650 million over six years for intelligent transport systems (ITS) and demonstration projects in the 1991 Intermodal Surface Transportation Efficiency Act (ISTEA). Congress legislated in ISTEA that:

[T]he Secretary of Transportation shall develop an automated highway and vehicle prototype from which future fully automated intelligent vehicle-highway systems can be developed. Such development shall include research in human factors to ensure the success of the man-machine relationship. The goal of this program is to have the first fully automated highway roadway or an automated test track in operation by 1997. This system shall accommodate the installation of equipment in new and existing motor vehicles.

Full automation is commonly defined as requiring no control, or very limited control, by the driver; such automation would be accomplished through a combination of sensor, computer, and communications systems in vehicles and along the roadway. Fully automated driving would, in theory, allow closer vehicle spacing and higher speeds, which could enhance traffic capacity in places where additional road building is physically impossible, politically unacceptable, or prohibitively expensive. Automated controls also might enhance road safety by reducing the opportunity for driver error, which causes a large share of motor vehicle crashes. Other potential benefits include improved air quality (as a result of more-efficient traffic flows), increased fuel economy, and spin-off technologies generated during research and development related to automated highway systems.

=== Waste management ===
Automated waste collection trucks reduce the need for workers and ease the level of labor required to provide the service.

=== Business process ===
Business process automation (BPA) is the technology-enabled automation of complex business processes.
It can help to streamline a business for simplicity, achieve digital transformation, increase service quality, improve service delivery, or contain costs. BPA consists of integrating applications, restructuring labor resources, and using software applications throughout the organization. Robotic process automation (RPA; or RPAAI for self-guided RPA 2.0) is an emerging field within BPA that uses AI. BPA can be implemented in a number of business areas including marketing, sales, and workflow.

=== Home ===
Home automation (also called domotics) designates an emerging practice of increased automation of household appliances and features in residential dwellings, particularly through electronic means that allow for things that would have been impracticable, overly expensive, or simply not possible in recent past decades. The growing use of home automation solutions reflects people's increasing reliance on them, while the added comfort these solutions provide is considerable.

=== Laboratory ===
Automation is essential for many scientific and clinical applications, and it has therefore been extensively employed in laboratories. Fully automated laboratories have been in operation since as early as 1980. However, automation has not become widespread in laboratories due to its high cost. This may change with the ability to integrate low-cost devices with standard laboratory equipment. Autosamplers are common devices used in laboratory automation.

=== Logistics automation ===
Logistics automation is the application of computer software or automated machinery to improve the efficiency of logistics operations. Typically this refers to operations within a warehouse or distribution center, with broader tasks undertaken by supply chain engineering systems and enterprise resource planning systems.
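One small, self-contained flavor of the warehouse-level logistics automation described above is pick-route planning. The sketch below orders a pick list so that a picker or robot traverses the aisles in a single S-shaped pass; the (aisle, shelf) location scheme and the routing rule are simplified assumptions, not a real warehouse-management algorithm:

```python
# Illustrative sketch of pick-route planning in logistics automation:
# sort picks by aisle, alternating shelf direction per aisle so the
# route forms one S-shaped pass through the warehouse.
def plan_pick_route(picks):
    """Order (aisle, shelf) picks for an S-shaped traversal.

    Even aisles are walked shelf-low to shelf-high, odd aisles in reverse.
    """
    def key(loc):
        aisle, shelf = loc
        return (aisle, shelf if aisle % 2 == 0 else -shelf)
    return sorted(picks, key=key)

route = plan_pick_route([(3, 5), (1, 2), (2, 7), (1, 9), (2, 1), (3, 1)])
print(route)  # [(1, 9), (1, 2), (2, 1), (2, 7), (3, 5), (3, 1)]
```

Production systems solve much richer versions of this problem (congestion, batching, robot fleets), but the core idea of replacing a human routing decision with a computed one is the same.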
=== Industrial automation ===
Industrial automation deals primarily with the automation of manufacturing, quality control, and material handling processes. General-purpose controllers for industrial processes include programmable logic controllers, stand-alone I/O modules, and computers. Industrial automation replaces human action and manual command-response activities with mechanized equipment and logical programming commands. One trend is the increased use of machine vision to provide automatic inspection and robot guidance functions; another is a continuing increase in the use of robots. Industrial automation has become essential across industries.

==== Industrial Automation and Industry 4.0 ====
The rise of industrial automation is directly tied to the "Fourth Industrial Revolution", better known now as Industry 4.0. Originating in Germany, Industry 4.0 encompasses numerous devices, concepts, and machines, as well as the advancement of the industrial internet of things (IIoT). The Internet of Things has been described as "a seamless integration of diverse physical objects in the Internet through a virtual representation." These revolutionary advancements have drawn attention to the world of automation in an entirely new light and shown ways for it to grow to increase productivity and efficiency in machinery and manufacturing facilities. Industry 4.0 works with the IIoT and software/hardware to connect them in ways that, through communication technologies, add enhancements and improve manufacturing processes. Creating smarter, safer, and more advanced manufacturing is now possible with these new technologies, opening up a manufacturing platform that is more reliable, consistent, and efficient than before. Implementation of systems such as SCADA is an example of software used in industrial automation today. SCADA is supervisory control and data acquisition software, just one of many tools used in industrial automation.
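As a rough illustration of the supervisory data-collection role that SCADA software plays, the sketch below polls a batch of field readings and raises high alarms; the tag names and alarm limits are invented for the example, and a real SCADA system would also handle historian logging, operator displays, and control commands:

```python
# Minimal sketch of a SCADA-style supervisory scan: compare polled field
# readings against configured high limits and report alarms.
# Tag names and limits are hypothetical.
HIGH_LIMITS = {"boiler_temp_C": 180.0, "line_pressure_bar": 6.0}

def scan(readings):
    """Return alarm messages for any reading above its configured high limit."""
    alarms = []
    for tag, value in readings.items():
        limit = HIGH_LIMITS.get(tag)
        if limit is not None and value > limit:
            alarms.append(f"HIGH alarm: {tag}={value} (limit {limit})")
    return alarms

print(scan({"boiler_temp_C": 185.2, "line_pressure_bar": 5.1}))
# ['HIGH alarm: boiler_temp_C=185.2 (limit 180.0)']
```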
Industry 4.0 covers many areas of manufacturing and will continue to do so as time goes on.

==== Industrial robotics ====
Industrial robotics is a sub-branch of industrial automation that aids in various manufacturing processes, including machining, welding, painting, assembling, and material handling, to name a few. Industrial robots use various mechanical, electrical, and software systems to achieve high precision, accuracy, and speed that far exceed any human performance. The birth of industrial robots came shortly after World War II, as the U.S. saw the need for a quicker way to produce industrial and consumer goods. Servos, digital logic, and solid-state electronics allowed engineers to build better and faster systems, and over time these systems were improved and revised to the point where a single robot is capable of running 24 hours a day with little or no maintenance. In 1997, there were 700,000 industrial robots in use; by 2017, the number had risen to 1.8 million. In recent years, AI has also been combined with robotics to create automatic labeling solutions, using robotic arms as label applicators and AI to learn and detect the products to be labelled.

==== Programmable Logic Controllers ====
Industrial automation incorporates programmable logic controllers (PLCs) in the manufacturing process. PLCs use a processing system which allows for variation of control of inputs and outputs using simple programming. PLCs make use of programmable memory, storing instructions and functions like logic, sequencing, timing, and counting. Using a logic-based language, a PLC can receive a variety of inputs and return a variety of logical outputs, the input devices being sensors and the output devices being motors, valves, etc. PLCs are similar to computers; however, while computers are optimized for calculations, PLCs are optimized for control tasks and use in industrial environments.
They are built so that only basic logic-based programming knowledge is needed and to handle vibrations, high temperatures, humidity, and noise. The greatest advantage PLCs offer is their flexibility. With the same basic controllers, a PLC can operate a range of different control systems. PLCs make it unnecessary to rewire a system to change the control system. This flexibility leads to a cost-effective system for complex and varied control systems. PLCs can range from small "building brick" devices with tens of I/O in a housing integral with the processor, to large rack-mounted modular devices with a count of thousands of I/O, and which are often networked to other PLC and SCADA systems. They can be designed for multiple arrangements of digital and analog inputs and outputs (I/O), extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory. It was from the automotive industry in the United States that the PLC was born. Before the PLC, control, sequencing, and safety interlock logic for manufacturing automobiles was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop controllers. Since these could number in the hundreds or even thousands, the process for updating such facilities for the yearly model change-over was very time-consuming and expensive, as electricians needed to individually rewire the relays to change their operational characteristics. When digital computers became available, being general-purpose programmable devices, they were soon applied to control sequential and combinatorial logic in industrial processes. However, these early computers required specialist programmers and stringent operating environmental control for temperature, cleanliness, and power quality. To meet these challenges, the PLC was developed with several key attributes. 
It would tolerate the shop-floor environment, it would support discrete (bit-form) input and output in an easily extensible manner, it would not require years of training to use, and it would permit its operation to be monitored. Since many industrial processes have timescales easily addressed by millisecond response times, modern (fast, small, reliable) electronics greatly facilitate building reliable controllers, and performance can be traded off for reliability.

==== Agent-assisted automation ====
Agent-assisted automation refers to automation used by call center agents to handle customer inquiries. The key benefit of agent-assisted automation is compliance and error-proofing. Agents are sometimes not fully trained, or they forget or ignore key steps in the process. The use of automation ensures that what is supposed to happen on the call actually does, every time. There are two basic types: desktop automation and automated voice solutions.

== Control ==

=== Open-loop and closed-loop ===

=== Discrete control (on/off) ===
One of the simplest types of control is on-off control. An example is a thermostat used on household appliances, which either opens or closes an electrical contact. (Thermostats were originally developed as true feedback-control mechanisms rather than the on-off common household appliance thermostat.) In sequence control, a programmed sequence of discrete operations is performed, often based on system logic that involves system states. An elevator control system is an example of sequence control.

=== PID controller ===
A proportional–integral–derivative controller (PID controller) is a control loop feedback mechanism (controller) widely used in industrial control systems.
In a PID loop, the controller continuously calculates an error value e(t) as the difference between a desired setpoint and a measured process variable, and applies a correction based on proportional, integral, and derivative terms (sometimes denoted P, I, and D), which give their name to the controller type. The theoretical understanding and application date from the 1920s, and they are implemented in nearly all analog control systems; originally in mechanical controllers, then using discrete electronics, and latterly in industrial process computers.

=== Sequential control and logical sequence or system state control ===
Sequential control may follow either a fixed sequence or a logical one that performs different actions depending on various system states. An example of an adjustable but otherwise fixed sequence is a timer on a lawn sprinkler. States refer to the various conditions that can occur in a use or sequence scenario of the system. An example is an elevator, which uses logic based on the system state to perform certain actions in response to its state and operator input. For example, if the operator presses the floor n button, the system will respond depending on whether the elevator is stopped or moving, going up or down, or whether the door is open or closed, among other conditions. Early development of sequential control was relay logic, in which electrical relays engage electrical contacts which either start or interrupt power to a device. Relays were first used in telegraph networks before being developed for controlling other devices, such as starting and stopping industrial-sized electric motors or opening and closing solenoid valves. Using relays for control purposes allowed event-driven control, where actions could be triggered out of sequence in response to external events. These were more flexible in their response than the rigid single-sequence cam timers.
More complicated examples involved maintaining safe sequences for devices such as swing bridge controls, where a lock bolt needed to be disengaged before the bridge could be moved, and the lock bolt could not be released until the safety gates had already been closed. The total number of relays and cam timers can run into the hundreds or even thousands in some factories. Early programming techniques and languages were needed to make such systems manageable, one of the first being ladder logic, where diagrams of the interconnected relays resembled the rungs of a ladder. Special computers called programmable logic controllers were later designed to replace these collections of hardware with a single, more easily re-programmed unit. In a typical hard-wired motor start and stop circuit (called a control circuit) a motor is started by pushing a "Start" or "Run" button that activates a pair of electrical relays. The "lock-in" relay locks in contacts that keep the control circuit energized when the push-button is released. (The start button is a normally open contact and the stop button is a normally closed contact.) Another relay energizes a switch that powers the device that throws the motor starter switch (three sets of contacts for three-phase industrial power) in the main power circuit. Large motors use high voltage and experience high in-rush current, making speed important in making and breaking contact; this makes manual switches dangerous for personnel and property. The "lock-in" contacts in the start circuit and the main power contacts for the motor are held engaged by their respective electromagnets until a "stop" or "off" button is pressed, which de-energizes the lock-in relay. Commonly interlocks are added to a control circuit. Suppose that the motor in the example is powering machinery that has a critical need for lubrication. In this case, an interlock could be added to ensure that the oil pump is running before the motor starts.
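The start/stop circuit just described maps directly onto boolean logic, which is essentially how a ladder-logic program would express it. The sketch below is illustrative; the oil-pump interlock is the lubrication example from the text.

```python
def motor_circuit(start_pressed, stop_pressed, oil_pump_running, motor_running):
    """One evaluation of the control circuit.

    start_pressed: the normally open "Start" contact (True while held).
    stop_pressed:  the normally closed "Stop" contact (True while held).
    motor_running: the current state of the lock-in relay.
    """
    # Either the start button or the lock-in contact energizes the circuit;
    # the stop button breaks it, and the interlock requires the oil pump.
    return ((start_pressed or motor_running)
            and not stop_pressed
            and oil_pump_running)

state = motor_circuit(True, False, True, False)    # press Start: motor latches on
state = motor_circuit(False, False, True, state)   # release Start: still running
state = motor_circuit(False, True, True, state)    # press Stop: motor drops out
```

Evaluating the same expression on every scan reproduces the latching behaviour of the hard-wired relays: the motor's own state holds the circuit closed after the start button is released.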
Timers, limit switches, and electric eyes are other common elements in control circuits. Solenoid valves are widely used on compressed air or hydraulic fluid for powering actuators on mechanical components. While motors are used to supply continuous rotary motion, actuators are typically a better choice for intermittently creating a limited range of movement for a mechanical component, such as moving various mechanical arms, opening or closing valves, raising heavy press-rolls, or applying pressure to presses. === Computer control === Computers can perform both sequential control and feedback control, and typically a single computer will do both in an industrial application. Programmable logic controllers (PLCs) are a type of special-purpose microprocessor that replaced many hardware components such as timers and drum sequencers used in relay logic–type systems. General-purpose process control computers have increasingly replaced stand-alone controllers, with a single computer able to perform the operations of hundreds of controllers. Process control computers can process data from a network of PLCs, instruments, and controllers to implement typical control (such as PID) of many individual variables or, in some cases, to implement complex control algorithms using multiple inputs and mathematical manipulations. They can also analyze data and create real-time graphical displays for operators and run reports for operators, engineers, and management. Control of an automated teller machine (ATM) is an example of an interactive process in which a computer will perform a logic-derived response to a user selection based on information retrieved from a networked database. The ATM process has similarities with other online transaction processes. The different logical responses are called scenarios. Such processes are typically designed with the aid of use cases and flowcharts, which guide the writing of the software code.
The earliest feedback control mechanism was the water clock invented by Greek engineer Ctesibius (285–222 BC). == See also == == References == === Citations === === Sources === This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 (license statement/permission). Text taken from In Brief to The State of Food and Agriculture 2022 – Leveraging automation in agriculture for transforming agrifood systems, FAO. == Further reading ==
Linear parameter-varying control (LPV control) deals with the control of linear parameter-varying systems, a class of nonlinear systems which can be modelled as parametrized linear systems whose parameters change with their state. == Gain scheduling == In designing feedback controllers for dynamical systems a variety of modern, multivariable controllers are used. In general, these controllers are often designed at various operating points using linearized models of the system dynamics and are scheduled as a function of a parameter or parameters for operation at intermediate conditions. Gain scheduling is an approach to the control of non-linear systems that uses a family of linear controllers, each of which provides satisfactory control for a different operating point of the system. One or more observable variables, called the scheduling variables, are used to determine the current operating region of the system and to enable the appropriate linear controller. For example, in the case of aircraft control, a set of controllers is designed at different gridded locations of corresponding parameters such as angle of attack (AoA), Mach number, dynamic pressure, and centre of gravity (CG). In brief, gain scheduling is a control design approach that constructs a nonlinear controller for a nonlinear plant by patching together a collection of linear controllers. These linear controllers are blended in real time via switching or interpolation. Scheduling multivariable controllers can be a very tedious and time-consuming task. A newer paradigm is the linear parameter-varying (LPV) technique, which synthesizes an automatically scheduled multivariable controller. === Drawbacks of classical gain scheduling === An important drawback of the classical gain scheduling approach is that adequate performance, and in some cases even stability, is not guaranteed at operating conditions other than the design points.
Scheduling multivariable controllers is often a tedious and time-consuming task, especially in the field of aerospace control, where the parameter dependency of controllers is large due to increased operating envelopes and more demanding performance requirements. It is also important that the selected scheduling variables reflect changes in plant dynamics as operating conditions change. It is possible in gain scheduling to incorporate linear robust control methodologies into nonlinear control design; however, the global stability, robustness and performance properties are not addressed explicitly in the design process. Though the approach is simple and the computational burden of linearization scheduling approaches is often much less than for other nonlinear design approaches, its inherent drawbacks sometimes outweigh its advantages and necessitate a new paradigm for the control of dynamical systems. Newer methodologies such as adaptive control based on artificial neural networks (ANN), fuzzy logic, reinforcement learning, etc. try to address such problems; however, the lack of proof of stability and performance of such approaches over the entire operating parameter regime requires the design of a parameter-dependent controller with guaranteed properties, for which a linear parameter-varying controller could be an ideal candidate. == Linear parameter-varying systems == LPV systems are a very special class of nonlinear systems which appear to be well suited for control of dynamical systems with parameter variations. In general, LPV techniques provide a systematic design procedure for gain-scheduled multivariable controllers. This methodology allows performance, robustness and bandwidth limitations to be incorporated into a unified framework. A brief introduction to LPV systems and an explanation of terminology are given below.
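The switching-or-interpolation step of classical gain scheduling described above can be sketched directly. The gain table, the scheduling variable, and all numbers below are illustrative assumptions, not a real flight-control schedule.

```python
import bisect

# (scheduling variable, controller gain) pairs designed at gridded operating
# points, e.g. single loop gains tuned at several Mach numbers.
SCHEDULE = [(0.3, 4.0), (0.6, 2.5), (0.9, 1.2)]

def scheduled_gain(sigma):
    """Interpolate the controller gain at scheduling variable sigma."""
    points = [p for p, _ in SCHEDULE]
    sigma = min(max(sigma, points[0]), points[-1])   # clamp to the envelope
    i = bisect.bisect_right(points, sigma)
    if i == len(points):
        return SCHEDULE[-1][1]                       # at the upper design point
    if i == 0:
        return SCHEDULE[0][1]                        # at the lower design point
    (p0, k0), (p1, k1) = SCHEDULE[i - 1], SCHEDULE[i]
    t = (sigma - p0) / (p1 - p0)
    return k0 + t * (k1 - k0)                        # linear blend of neighbours
```

At a design point the scheduled gain equals the designed one; in between, the two neighbouring linear controllers are blended, which is exactly the interpolation variant of gain scheduling (switching would instead pick the nearest design point).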
=== Parameter dependent systems === In control engineering, a state-space representation is a mathematical model of a physical system as a set of input u, output y and state variables x related by first-order differential equations. The dynamic evolution of a nonlinear, non-autonomous system is represented by ẋ = f(x, u, t). If the system is time variant, this is written ẋ = f(x(t), u(t), t) with initial conditions x(t₀) = x₀ and u(t₀) = u₀. The state variables describe the mathematical "state" of a dynamical system, and in modeling large complex nonlinear systems, if such state variables are chosen to be compact for the sake of practicality and simplicity, then parts of the dynamic evolution of the system are missing. The state space description will involve other variables, called exogenous variables, whose evolution is not understood or is too complicated to be modeled, but which affect the state variables' evolution in a known manner and are measurable in real time using sensors. When a large number of sensors are used, some of these sensors measure outputs in the system-theoretic sense, as known, explicit nonlinear functions of the modeled states and time, while other sensors are accurate estimates of the exogenous variables. Hence, the model will be a time-varying, nonlinear system, with the future time variation unknown, but measured by the sensors in real time. In this case, if w(t) denotes the exogenous variable vector and x(t) denotes the modeled state, then the state equations are written as ẋ = f(x(t), w(t), ẇ(t), u(t)). The parameter w is not known in advance, but its evolution is measured in real time and used for control.
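A toy simulation of such a parameter-dependent system can be sketched in a few lines. The scalar dynamics, the parameter trajectory, and all numbers are illustrative assumptions; the scheduling parameter w(t) is simply read from a function standing in for a real-time sensor.

```python
import math

def A(w):
    # Parameter-dependent dynamics: stable for every w in [0, 1].
    return -(1.0 + w)

def B(w):
    return 1.0

def simulate_lpv(x0, u, w_of_t, dt=0.001, t_end=5.0):
    """Euler-integrate xdot = A(w(t)) x + B(w(t)) u with measured w(t)."""
    x, t = x0, 0.0
    while t < t_end:
        w = w_of_t(t)                       # parameter measured in real time
        x += (A(w) * x + B(w) * u) * dt
        t += dt
    return x

# The parameter sweeps sinusoidally inside [0, 1]; the state stays bounded
# because A(w) is negative over the whole admissible parameter range.
x_final = simulate_lpv(x0=2.0, u=0.5, w_of_t=lambda t: 0.5 + 0.5 * math.sin(t))
```

Only the current value of w is used at each step, mirroring the assumption above that the parameter's future variation is unknown but its present value is measurable.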
If the above parameter-dependent system is linear in the state and input, it is called a linear parameter-dependent system. It is written in a form similar to the linear time-invariant form, albeit with the inclusion of a time-varying parameter: ẋ = A(w(t))x(t) + B(w(t))u(t), y = C(w(t))x(t) + D(w(t))u(t). Parameter-dependent systems are linear systems whose state-space descriptions are known functions of time-varying parameters. The time variation of each of the parameters is not known in advance, but is assumed to be measurable in real time. The controller is restricted to be a linear system whose state-space entries depend causally on the parameter's history. There exist three different methodologies to design an LPV controller, namely: linear fractional transformations, which rely on the small gain theorem for bounds on performance and robustness; the single quadratic Lyapunov function (SQLF); and the parameter-dependent quadratic Lyapunov function (PDQLF), which bound the achievable level of performance. These problems are solved by reformulating the control design into finite-dimensional convex feasibility problems, which can be solved exactly, and infinite-dimensional convex feasibility problems, which can be solved approximately. This formulation constitutes a type of gain scheduling problem; in contrast to classical gain scheduling, this approach addresses the effect of parameter variations with assured stability and performance. == References == == Further reading ==
Feedback occurs when outputs of a system are routed back as inputs as part of a chain of cause and effect that forms a circuit or loop. The system can then be said to feed back into itself. The notion of cause-and-effect has to be handled carefully when applied to feedback systems: simple causal reasoning about a feedback system is difficult because the first system influences the second and the second system influences the first, leading to a circular argument. This makes reasoning based upon cause and effect tricky, and it is necessary to analyze the system as a whole. As provided by Webster, feedback in business is the transmission of evaluative or corrective information about an action, event, or process to the original or controlling source. == History == Self-regulating mechanisms have existed since antiquity, and the idea of feedback started to enter economic theory in Britain by the 18th century, but it was not at that time recognized as a universal abstraction and so did not have a name. The earliest known artificial feedback device was a float valve, for maintaining water at a constant level, invented in 270 BC in Alexandria, Egypt. This device illustrated the principle of feedback: a low water level opens the valve, the rising water then provides feedback into the system, closing the valve when the required level is reached. This cycle recurs as the water level fluctuates. Centrifugal governors had been used to regulate the distance and pressure between millstones in windmills since the 17th century. In 1788, James Watt designed his first centrifugal governor following a suggestion from his business partner Matthew Boulton, for use in the steam engines they produced. Early steam engines employed a purely reciprocating motion, and were used for pumping water – an application that could tolerate variations in the working speed, but the use of steam engines for other applications called for more precise control of the speed.
In 1868, James Clerk Maxwell wrote a famous paper, "On governors", that is widely considered a classic in feedback control theory. This was a landmark paper on control theory and the mathematics of feedback. The verb phrase to feed back, in the sense of returning to an earlier position in a mechanical process, was in use in the US by the 1860s, and in 1909, Nobel laureate Karl Ferdinand Braun used the term "feed-back" as a noun to refer to (undesired) coupling between components of an electronic circuit. By the end of 1912, researchers using early electronic amplifiers (audions) had discovered that deliberately coupling part of the output signal back to the input circuit would boost the amplification (through regeneration), but would also cause the audion to howl or sing. This action of feeding the signal back from output to input gave rise to the use of the term "feedback" as a distinct word by 1920. The development of cybernetics from the 1940s onwards was centred on the study of circular causal feedback mechanisms. Over the years there has been some dispute as to the best definition of feedback. According to cybernetician Ashby (1956), mathematicians and theorists interested in the principles of feedback mechanisms prefer the definition of "circularity of action", which keeps the theory simple and consistent. For those with more practical aims, feedback should be a deliberate effect via some more tangible connection. [Practical experimenters] object to the mathematician's definition, pointing out that this would force them to say that feedback was present in the ordinary pendulum ... between its position and its momentum—a "feedback" that, from the practical point of view, is somewhat mystical.
To this the mathematician retorts that if feedback is to be considered present only when there is an actual wire or nerve to represent it, then the theory becomes chaotic and riddled with irrelevancies.: 54  Focusing on uses in management theory, Ramaprasad (1983) defines feedback generally as "...information about the gap between the actual level and the reference level of a system parameter" that is used to "alter the gap in some way". He emphasizes that the information by itself is not feedback unless translated into action. == Types == === Positive and negative feedback === Positive feedback: if the signal fed back from the output is in phase with the input signal, the feedback is called positive feedback. Negative feedback: if the signal fed back is out of phase by 180° with respect to the input signal, the feedback is called negative feedback. As an example of negative feedback, the diagram might represent a cruise control system in a car that matches a target speed such as the speed limit. The controlled system is the car; its input includes the combined torque from the engine and from the changing slope of the road (the disturbance). The car's speed (status) is measured by a speedometer. The error signal is the difference of the speed as measured by the speedometer from the target speed (set point). The controller interprets the speed to adjust the accelerator, commanding the fuel flow to the engine (the effector). The resulting change in engine torque, the feedback, combines with the torque exerted by the change of road grade to reduce the error in speed, counteracting the effect of the changing slope. The terms "positive" and "negative" were first applied to feedback prior to WWII. The idea of positive feedback already existed in the 1920s when the regenerative circuit was made.
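The cruise-control loop described above can be simulated with a few lines of purely proportional negative feedback. The plant model, gain, and disturbance torque are illustrative assumptions; the point is that the fed-back error drives the controller so as to counteract the grade disturbance.

```python
def cruise_sim(target=25.0, grade_torque=-2.0, kp=20.0, steps=5000, dt=0.01):
    """Proportional cruise control against a constant road-grade disturbance."""
    speed = 20.0
    for _ in range(steps):
        error = target - speed            # speedometer reading vs. set point
        throttle = kp * error             # controller commands the effector
        # Plant: engine torque plus grade disturbance, minus speed-proportional drag.
        speed += (throttle + grade_torque - 0.5 * speed) * dt
    return speed

steady = cruise_sim()   # settles near, but slightly below, the 25.0 set point
```

Note the small residual error: proportional action alone reduces, but does not eliminate, the effect of the disturbance; adding integral action, as in a PID controller, would remove it.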
Friis and Jensen (1924) described this circuit in a set of electronic amplifiers as a case where the "feed-back" action is positive in contrast to negative feed-back action, which they mentioned only in passing. Harold Stephen Black's classic 1934 paper first details the use of negative feedback in electronic amplifiers. According to Black: Positive feed-back increases the gain of the amplifier, negative feed-back reduces it. According to Mindell (2002) confusion in the terms arose shortly after this: ... Friis and Jensen had made the same distinction Black used between "positive feed-back" and "negative feed-back", based not on the sign of the feedback itself but rather on its effect on the amplifier's gain. In contrast, Nyquist and Bode, when they built on Black's work, referred to negative feedback as that with the sign reversed. Black had trouble convincing others of the utility of his invention in part because confusion existed over basic matters of definition.: 121  Even before these terms were being used, James Clerk Maxwell had described their concept through several kinds of "component motions" associated with the centrifugal governors used in steam engines. He distinguished those that lead to a continued increase in a disturbance or the amplitude of a wave or oscillation, from those that lead to a decrease of the same quality. ==== Terminology ==== The terms positive and negative feedback are defined in different ways within different disciplines: (1) the change of the gap between reference and actual values of a parameter or trait, based on whether the gap is widening (positive) or narrowing (negative); (2) the valence of the action or effect that alters the gap, based on whether it makes the recipient or observer happy (positive) or unhappy (negative). The two definitions may be confusing, as when an incentive (reward) is used to boost poor performance (narrow a gap).
Referring to definition 1, some authors use alternative terms, replacing positive and negative with self-reinforcing and self-correcting, reinforcing and balancing, discrepancy-enhancing and discrepancy-reducing or regenerative and degenerative respectively. And for definition 2, some authors promote describing the action or effect as positive and negative reinforcement or punishment rather than feedback. Yet even within a single discipline an example of feedback can be called either positive or negative, depending on how values are measured or referenced. This confusion may arise because feedback can be used to provide information or motivate, and often has both a qualitative and a quantitative component. As Connellan and Zemke (1993) put it: Quantitative feedback tells us how much and how many. Qualitative feedback tells us how good, bad or indifferent.: 102  ==== Limitations of negative and positive feedback ==== While simple systems can sometimes be described as one or the other type, many systems with feedback loops cannot be shoehorned into either type, and this is especially true when multiple loops are present. When there are only two parts joined so that each affects the other, the properties of the feedback give important and useful information about the properties of the whole. But when the parts rise to even as few as four, if every one affects the other three, then twenty circuits can be traced through them; and knowing the properties of all the twenty circuits does not give complete information about the system.: 54  === Other types of feedback === In general, feedback systems can have many signals fed back, and the feedback loop frequently contains mixtures of positive and negative feedback, where positive and negative feedback can dominate at different frequencies or different points in the state space of a system.
The term bipolar feedback has been coined to refer to biological systems where positive and negative feedback systems can interact, the output of one affecting the input of another, and vice versa. Some systems with feedback can have very complex behaviors, such as chaotic behaviors in non-linear systems, while others have much more predictable behaviors, such as those that are used to make and design digital systems. Feedback is used extensively in digital systems. For example, binary counters and similar devices employ feedback where the current state and inputs are used to calculate a new state which is then fed back and clocked back into the device to update it. == Applications == === Mathematics and dynamical systems === By using feedback properties, the behavior of a system can be altered to meet the needs of an application; systems can be made stable, responsive or held constant. It has been shown that dynamical systems with feedback can experience an adaptation to the edge of chaos. === Physics === Physical systems present feedback through the mutual interactions of their parts. Feedback is also relevant for the regulation of experimental conditions, noise reduction, and signal control. The thermodynamics of feedback-controlled systems has intrigued physicists since Maxwell's demon, with recent advances on the consequences for entropy reduction and performance increase. === Biology === In biological systems such as organisms, ecosystems, or the biosphere, most parameters must stay under control within a narrow range around a certain optimal level under certain environmental conditions. Deviation from the optimal value of the controlled parameter can result from changes in internal and external environments. A change of some of the environmental conditions may also require that range to change for the system to function.
The value of the parameter to maintain is recorded by a reception system and conveyed to a regulation module via an information channel. An example of this is insulin oscillations. Biological systems contain many types of regulatory circuits, both positive and negative. As in other contexts, positive and negative do not imply that the feedback causes good or bad effects. A negative feedback loop is one that tends to slow down a process, whereas the positive feedback loop tends to accelerate it. Mirror neurons are part of a social feedback system, in which an observed action is "mirrored" by the brain—like a self-performed action. Normal tissue integrity is preserved by feedback interactions between diverse cell types, mediated by adhesion molecules and secreted signalling molecules; failure of key feedback mechanisms in cancer disrupts tissue function. In an injured or infected tissue, inflammatory mediators elicit feedback responses in cells, which alter gene expression, and change the groups of molecules expressed and secreted, including molecules that induce diverse cells to cooperate and restore tissue structure and function. This type of feedback is important because it enables coordination of immune responses and recovery from infections and injuries. During cancer, key elements of this feedback fail. This disrupts tissue function and immunity. Mechanisms of feedback were first elucidated in bacteria, where a nutrient elicits changes in some of their metabolic functions. Feedback is also central to the operations of genes and gene regulatory networks. Repressor (see Lac repressor) and activator proteins are used to create genetic operons, which were identified by François Jacob and Jacques Monod in 1961 as feedback loops. These feedback loops may be positive (as in the case of the coupling between a sugar molecule and the proteins that import sugar into a bacterial cell), or negative (as is often the case in metabolic consumption).
On a larger scale, feedback can have a stabilizing effect on animal populations even when profoundly affected by external changes, although time lags in feedback response can give rise to predator-prey cycles. In zymology, feedback serves as regulation of activity of an enzyme by its direct product(s) or downstream metabolite(s) in the metabolic pathway (see Allosteric regulation). The hypothalamic–pituitary–adrenal axis is largely controlled by positive and negative feedback, much of which is still unknown. In psychology, the body receives a stimulus from the environment or internally that causes the release of hormones. Release of hormones then may cause more of those hormones to be released, causing a positive feedback loop. This cycle is also found in certain behaviour. For example, "shame loops" occur in people who blush easily. When they realize that they are blushing, they become even more embarrassed, which leads to further blushing, and so on. === Climate science === The climate system is characterized by strong positive and negative feedback loops between processes that affect the state of the atmosphere, ocean, and land. A simple example is the ice–albedo positive feedback loop whereby melting snow exposes more dark ground (of lower albedo), which in turn absorbs heat and causes more snow to melt. === Control theory === Feedback is extensively used in control theory, using a variety of methods including state space (controls), full state feedback, and so forth. In the context of control theory, "feedback" is traditionally assumed to specify "negative feedback". The most common general-purpose controller using a control-loop feedback mechanism is a proportional-integral-derivative (PID) controller. 
Heuristically, the terms of a PID controller can be interpreted as corresponding to time: the proportional term depends on the present error, the integral term on the accumulation of past errors, and the derivative term is a prediction of future error, based on the current rate of change. === Education === For feedback in the educational context, see corrective feedback. === Mechanical engineering === In ancient times, the float valve was used to regulate the flow of water in Greek and Roman water clocks; similar float valves are used to regulate fuel in a carburettor and also to regulate tank water level in the flush toilet. The Dutch inventor Cornelius Drebbel (1572–1633) built thermostats (c. 1620) to control the temperature of chicken incubators and chemical furnaces. In 1745, the windmill was improved by blacksmith Edmund Lee, who added a fantail to keep the face of the windmill pointing into the wind. In 1787, Tom Mead regulated the rotation speed of a windmill by using a centrifugal pendulum to adjust the distance between the bedstone and the runner stone (i.e., to adjust the load). The use of the centrifugal governor by James Watt in 1788 to regulate the speed of his steam engine was one factor leading to the Industrial Revolution. Steam engines also use float valves and pressure release valves as mechanical regulation devices. A mathematical analysis of Watt's governor was done by James Clerk Maxwell in 1868. The Great Eastern was one of the largest steamships of its time and employed a steam-powered rudder with feedback mechanism designed in 1866 by John McFarlane Gray. Joseph Farcot coined the word servo in 1873 to describe steam-powered steering systems. Hydraulic servos were later used to position guns. Elmer Ambrose Sperry of the Sperry Corporation designed the first autopilot in 1912. Nicolas Minorsky published a theoretical analysis of automatic ship steering in 1922 and described the PID controller.
Internal combustion engines of the late 20th century employed mechanical feedback mechanisms such as the vacuum timing advance but mechanical feedback was replaced by electronic engine management systems once small, robust and powerful single-chip microcontrollers became affordable. === Electronic engineering === The use of feedback is widespread in the design of electronic components such as amplifiers, oscillators, and stateful logic circuit elements such as flip-flops and counters. Electronic feedback systems are also very commonly used to control mechanical, thermal and other physical processes. If the signal is inverted on its way round the control loop, the system is said to have negative feedback; otherwise, the feedback is said to be positive. Negative feedback is often deliberately introduced to increase the stability and accuracy of a system by correcting or reducing the influence of unwanted changes. This scheme can fail if the input changes faster than the system can respond to it. When this happens, the lag in arrival of the correcting signal can result in over-correction, causing the output to oscillate or "hunt". While often an unwanted consequence of system behaviour, this effect is used deliberately in electronic oscillators. Harry Nyquist at Bell Labs derived the Nyquist stability criterion for determining the stability of feedback systems. An easier method, but less general, is to use Bode plots developed by Hendrik Bode to determine the gain margin and phase margin. Design to ensure stability often involves frequency compensation to control the location of the poles of the amplifier. Electronic feedback loops are used to control the output of electronic devices, such as amplifiers. A feedback loop is created when all or some portion of the output is fed back to the input. A device is said to be operating open loop if no output feedback is being employed and closed loop if feedback is being used. 
When two or more amplifiers are cross-coupled using positive feedback, complex behaviors can be created. These multivibrators are widely used and include: astable circuits, which act as oscillators; monostable circuits, which can be pushed into a state and will return to the stable state after some time; and bistable circuits, which have two stable states between which the circuit can be switched. ==== Negative feedback ==== Negative feedback occurs when the fed-back output signal has a relative phase of 180° with respect to the input signal (upside down). This situation is sometimes referred to as being out of phase, but that term also is used to indicate other phase separations, as in "90° out of phase". Negative feedback can be used to correct output errors or to desensitize a system to unwanted fluctuations. In feedback amplifiers, this correction is generally for waveform distortion reduction or to establish a specified gain level. A general expression for the gain of a negative feedback amplifier is the asymptotic gain model. ==== Positive feedback ==== Positive feedback occurs when the fed-back signal is in phase with the input signal. Under certain gain conditions, positive feedback reinforces the input signal to the point where the output of the device oscillates between its maximum and minimum possible states. Positive feedback may also introduce hysteresis into a circuit. This can cause the circuit to ignore small signals and respond only to large ones. It is sometimes used to eliminate noise from a digital signal. Under some circumstances, positive feedback may cause a device to latch, i.e., to reach a condition in which the output is locked to its maximum or minimum state. This fact is very widely used in digital electronics to make bistable circuits for volatile storage of information. The loud squeals that sometimes occur in audio systems, PA systems, and rock music are known as audio feedback.
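The hysteresis behaviour described above, ignoring small signals while responding to large ones, can be sketched as a software Schmitt trigger. The thresholds and sample values are illustrative assumptions.

```python
def schmitt(samples, low=0.3, high=0.7):
    """Threshold with hysteresis: the switching point depends on the state."""
    out, state = [], False
    for x in samples:
        if state and x < low:         # only a large downward swing switches off
            state = False
        elif not state and x > high:  # only a large upward swing switches on
            state = True
        out.append(state)
    return out

# Small noise around 0.5 never toggles the output; large swings do.
noisy = [0.5, 0.55, 0.45, 0.52, 0.8, 0.75, 0.65, 0.2]
states = schmitt(noisy)
```

Because the on and off thresholds differ, fluctuations smaller than the gap between them are ignored, which is exactly how positive feedback is used to clean noise from a digital signal.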
If a microphone is in front of a loudspeaker that it is connected to, sound that the microphone picks up comes out of the speaker, and is picked up by the microphone and re-amplified. If the loop gain is sufficient, howling or squealing at the maximum power of the amplifier is possible. ==== Oscillator ==== An electronic oscillator is an electronic circuit that produces a periodic, oscillating electronic signal, often a sine wave or a square wave. Oscillators convert direct current (DC) from a power supply to an alternating current signal. They are widely used in many electronic devices. Common examples of signals generated by oscillators include signals broadcast by radio and television transmitters, clock signals that regulate computers and quartz clocks, and the sounds produced by electronic beepers and video games. Oscillators are often characterized by the frequency of their output signal: A low-frequency oscillator (LFO) is an electronic oscillator that generates a frequency below ≈20 Hz. This term is typically used in the field of audio synthesizers, to distinguish it from an audio frequency oscillator. An audio oscillator produces frequencies in the audio range, about 16 Hz to 20 kHz. An RF oscillator produces signals in the radio frequency (RF) range of about 100 kHz to 100 GHz. Oscillators designed to produce a high-power AC output from a DC supply are usually called inverters. There are two main types of electronic oscillator: the linear or harmonic oscillator and the nonlinear or relaxation oscillator. ==== Latches and flip-flops ==== A latch or a flip-flop is a circuit that has two stable states and can be used to store state information. They are typically constructed using feedback that crosses over between two arms of the circuit to provide the circuit with a state. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element in sequential logic. 
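The cross-coupled feedback that gives a latch its state can be sketched with two NOR gates whose outputs feed each other's inputs, iterated until the loop settles. This is a toy logical model of an SR latch, not a timing-accurate circuit simulation:

```python
def sr_latch(s, r, q, qn):
    """Settle a cross-coupled NOR SR latch (a feedback loop between
    two gates). Active-high set (s) and reset (r) inputs."""
    for _ in range(4):          # iterate the feedback until it settles
        q = int(not (r or qn))  # NOR gate 1
        qn = int(not (s or q))  # NOR gate 2
    return q, qn

q, qn = 0, 1                    # start in the reset state
q, qn = sr_latch(1, 0, q, qn)   # pulse S: latch sets
print(q, qn)                    # -> 1 0
q, qn = sr_latch(0, 0, q, qn)   # inputs released: state is remembered
print(q, qn)                    # -> 1 0
q, qn = sr_latch(0, 1, q, qn)   # pulse R: latch resets
print(q, qn)                    # -> 0 1
```

The middle call is the point of the exercise: with both inputs inactive, the feedback loop alone holds the stored bit, which is exactly the "volatile storage" role described above.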
Latches and flip-flops are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems. Latches and flip-flops are used as data storage elements. Such data storage can be used for storage of state, and such a circuit is described as sequential logic. When used in a finite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting pulses, and for synchronizing variably-timed input signals to some reference timing signal. Flip-flops can be either simple (transparent or opaque) or clocked (synchronous or edge-triggered). Although the term flip-flop has historically referred generically to both simple and clocked circuits, in modern usage it is common to reserve the term flip-flop exclusively for discussing clocked circuits; the simple ones are commonly called latches. Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive. That is, when a latch is enabled it becomes transparent, while a flip-flop's output only changes on a single type (positive going or negative going) of clock edge. === Software === Feedback loops provide generic mechanisms for controlling the running, maintenance, and evolution of software and computing systems. Feedback loops are important models in the engineering of adaptive software, as they define the behaviour of the interactions among the control elements over the adaptation process, to guarantee system properties at run-time. Feedback loops and foundations of control theory have been successfully applied to computing systems. In particular, they have been applied to the development of products such as IBM Db2 and IBM Tivoli. 
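As a sketch of such a software feedback loop, consider a controller that resizes a worker pool from the observed queue backlog. The function name, target, gain, and bounds are all hypothetical, chosen for illustration, and not taken from any of the products mentioned above:

```python
def adapt_pool_size(current_size, queue_len, target=10, gain=0.2,
                    lo=1, hi=64):
    """One iteration of a software feedback loop: proportional control
    on queue length, clamped to a sane range of pool sizes."""
    error = queue_len - target          # measured deviation from goal
    new_size = current_size + gain * error
    return max(lo, min(hi, round(new_size)))

# The backlog shrinks as capacity rises; the loop settles once the
# queue length sits at its target.
size = 8
for backlog in (40, 25, 12, 10):
    size = adapt_pool_size(size, backlog)
    print(size)                         # grows to 14, then 17, then holds
```

The same compare-and-correct cycle underlies the control-theoretic approaches to computing systems cited above; only the sensed variable and the actuator differ.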
From a software perspective, the autonomic (MAPE, monitor analyze plan execute) loop proposed by researchers of IBM is another valuable contribution to the application of feedback loops to the control of dynamic properties and the design and evolution of autonomic software systems. ==== User interface design ==== Feedback is also a useful design principle for designing user interfaces. === Video feedback === Video feedback is the video equivalent of acoustic feedback. It involves a loop between a video camera input and a video output, e.g., a television screen or monitor. Aiming the camera at the display produces a complex video image based on the feedback. == See also == == References == == Further reading == Katie Salen and Eric Zimmerman. Rules of Play. MIT Press. 2004. ISBN 0-262-24045-9. Chapter 18: Games as Cybernetic Systems. Korotayev A., Malkov A., Khaltourina D. Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends. Moscow: URSS, 2006. ISBN 5-484-00559-0 Dijk, E., Cremer, D.D., Mulder, L.B., and Stouten, J. "How Do We React to Feedback in Social Dilemmas?" In Biel, Eek, Garling & Gustafsson, (eds.), New Issues and Paradigms in Research on Social Dilemmas, New York: Springer, 2008. == External links == Media related to Feedback at Wikimedia Commons
Wikipedia/Feedback_control
A control panel is a flat, often vertical, area where control or monitoring instruments are displayed or it is an enclosed unit that is the part of a system that users can access, such as the control panel of a security system (also called control unit). They are found in factories to monitor and control machines or production lines and in places such as nuclear power plants, ships, aircraft and mainframe computers. Older control panels are most often equipped with push buttons and analog instruments, whereas nowadays in many cases touchscreens are used for monitoring and control purposes. == Gallery == === Flat panels === === Enclosed control unit === == See also == == References ==
Wikipedia/Control_panel_(engineering)
An industrial control system (ICS) is an electronic control system and associated instrumentation used for industrial process control. Control systems can range in size from a few modular panel-mounted controllers to large interconnected and interactive distributed control systems (DCSs) with many thousands of field connections. Control systems receive data from remote sensors measuring process variables (PVs), compare the collected data with desired setpoints (SPs), and derive command functions that are used to control a process through the final control elements (FCEs), such as control valves. Larger systems are usually implemented by supervisory control and data acquisition (SCADA) systems, or DCSs, and programmable logic controllers (PLCs), though SCADA and PLC systems are scalable down to small systems with few control loops. Such systems are extensively used in industries such as chemical processing, pulp and paper manufacture, power generation, oil and gas processing, and telecommunications. == Discrete controllers == The simplest control systems are based around small discrete controllers with a single control loop each. These are usually panel-mounted, which allows direct viewing of the front panel and provides means of manual intervention by the operator, either to manually control the process or to change control setpoints. Originally these would be pneumatic controllers, a few of which are still in use, but nearly all are now electronic. Quite complex systems can be created with networks of these controllers communicating using industry-standard protocols. Networking allows the use of local or remote SCADA operator interfaces, and enables the cascading and interlocking of controllers. However, as the number of control loops increases for a system design, there is a point where the use of a programmable logic controller (PLC) or distributed control system (DCS) is more manageable or cost-effective. 
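The sense, compare, actuate cycle described above (PV compared against SP, driving an FCE) can be sketched as a PI controller scanned against a toy first-order process. The gains and the plant model are illustrative, not tuned for any real loop:

```python
def pi_step(sp, pv, integ, kp=2.0, ki=0.5, dt=1.0):
    """One scan of a PI controller: compare the setpoint (SP) with the
    process variable (PV) and compute a command for the final control
    element (FCE). Gains are illustrative."""
    error = sp - pv
    integ += error * dt         # integral action removes steady offset
    return kp * error + ki * integ, integ

# Toy first-order process: the command slowly drags the PV toward it.
pv, integ = 20.0, 0.0
for _ in range(200):
    out, integ = pi_step(sp=50.0, pv=pv, integ=integ)
    pv += 0.05 * (out - pv)
print(round(pv, 1))             # the PV settles at the 50.0 setpoint
```

A discrete single-loop controller of the kind described above runs essentially this scan continuously, with the setpoint adjustable from its front panel.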
== Distributed control systems == A distributed control system (DCS) is a digital process control system (PCS) for a process or plant, wherein controller functions and field connection modules are distributed throughout the system. As the number of control loops grows, a DCS becomes more cost-effective than discrete controllers. Additionally, a DCS provides supervisory viewing and management over large industrial processes. In a DCS, a hierarchy of controllers is connected by communication networks, allowing centralized control rooms and local on-plant monitoring and control. A DCS enables easy configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other computer systems such as production control. It also enables more sophisticated alarm handling, introduces automatic event logging, removes the need for physical records such as chart recorders and allows the control equipment to be networked and thereby located locally to the equipment being controlled to reduce cabling. A DCS typically uses custom-designed processors as controllers and uses either proprietary interconnections or standard protocols for communication. Input and output modules form the peripheral components of the system. The processors receive information from input modules, process the information and decide control actions to be performed by the output modules. The input modules receive information from sensing instruments in the process (or field) and the output modules transmit instructions to the final control elements, such as control valves. The field inputs and outputs can either be continuously changing analog signals, e.g., a current loop, or two-state signals that switch either on or off, such as relay contacts or a semiconductor switch. 
Distributed control systems can normally also support Foundation Fieldbus, PROFIBUS, HART, Modbus and other digital communication buses that carry not only input and output signals but also advanced messages such as error diagnostics and status signals. == SCADA systems == Supervisory control and data acquisition (SCADA) is a control system architecture that uses computers, networked data communications and graphical user interfaces for high-level process supervisory management. The operator interfaces which enable monitoring and the issuing of process commands, such as controller setpoint changes, are handled through the SCADA supervisory computer system. However, the real-time control logic or controller calculations are performed by networked modules which connect to other peripheral devices such as programmable logic controllers and discrete PID controllers which interface to the process plant or machinery. The SCADA concept was developed as a universal means of remote access to a variety of local control modules, which could be from different manufacturers allowing access through standard automation protocols. In practice, large SCADA systems have grown to become very similar to distributed control systems in function, but using multiple means of interfacing with the plant. They can control large-scale processes that can include multiple sites, and work over large distances. This is a commonly used architecture in industrial control systems; however, there are concerns about SCADA systems being vulnerable to cyberwarfare or cyberterrorism attacks. The SCADA software operates on a supervisory level as control actions are performed automatically by RTUs or PLCs. SCADA control functions are usually restricted to basic overriding or supervisory level intervention. A feedback control loop is directly controlled by the RTU or PLC, but the SCADA software monitors the overall performance of the loop. 
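The division of labour just described, with the PLC running the closed loop every scan and SCADA only supervising it, can be sketched as follows. The class names, rates, and alarm limit are illustrative, not taken from any product:

```python
class Plc:
    """Toy PLC: runs the local closed loop on every scan."""
    def __init__(self, sp):
        self.sp, self.pv = sp, 0.0
    def scan(self):
        self.pv += 0.2 * (self.sp - self.pv)   # local control action

def scada_poll(plc, high_alarm=80.0):
    """Supervisory layer: monitoring and alarm checks only; the
    real-time control stays in the PLC."""
    return {"pv": round(plc.pv, 1), "alarm": plc.pv > high_alarm}

plc = Plc(sp=50.0)
for _ in range(40):
    plc.scan()
print(scada_poll(plc))          # steady near 50, no alarm
plc.sp = 90.0                   # operator changes the setpoint via SCADA
for _ in range(40):
    plc.scan()
print(scada_poll(plc))          # steady near 90, high-temperature alarm
```

Even if the supervisory link drops, the `Plc` object keeps controlling to its last setpoint, which mirrors the SCADA/PLC split described above.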
For example, a PLC may control the flow of cooling water through part of an industrial process to a set point level, but the SCADA system software will allow operators to change the set points for the flow. The SCADA also enables alarm conditions, such as loss of flow or high temperature, to be displayed and recorded. == Programmable logic controllers == PLCs can range from small modular devices with tens of inputs and outputs (I/O) in a housing integral with the processor, to large rack-mounted modular devices with thousands of I/O points, which are often networked to other PLC and SCADA systems. They can be designed for multiple arrangements of digital and analog inputs and outputs, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory. == History == Process control of large industrial plants has evolved through many stages. Initially, control was from panels local to the process plant. However, this required personnel to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Often the controllers were behind the control room panels, and all automatic and manual control outputs were individually transmitted back to plant in the form of pneumatic or electrical signals. Effectively this was the centralisation of all the localised panels, with the advantages of reduced manpower requirements and consolidated overview of the process. However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware so system changes required reconfiguration of signals by re-piping or re-wiring. It also required continual operator movement within a large control room in order to monitor the whole process. 
With the coming of electronic processors, high-speed electronic signalling networks and electronic graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant and would communicate with the graphic displays in the control room. The concept of distributed control was realised. The introduction of distributed control allowed flexible interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high-level overviews of plant status and production levels. For large control systems, the general commercial name distributed control system (DCS) was coined to refer to proprietary modular systems from many manufacturers which integrated high-speed networking and a full suite of displays and control racks. While the DCS was tailored to meet the needs of large continuous industrial processes, in industries where combinatorial and sequential logic was the primary requirement, the PLC evolved out of a need to replace racks of relays and timers used for event-driven control. The old controls were difficult to re-configure and debug, and PLC control enabled networking of signals to a central control area with electronic displays. PLCs were first developed for the automotive industry on vehicle production lines, where sequential logic was becoming very complex. They were soon adopted in a large number of other event-driven applications as varied as printing presses and water treatment plants. 
SCADA's history is rooted in distribution applications, such as power, natural gas, and water pipelines, where there is a need to gather remote data through potentially unreliable or intermittent low-bandwidth and high-latency links. SCADA systems use open-loop control with sites that are widely separated geographically. A SCADA system uses remote terminal units (RTUs) to send supervisory data back to a control centre. Most RTU systems always had some capacity to handle local control while the master station is not available. However, over the years RTU systems have grown more and more capable of handling local control. The boundaries between DCS and SCADA/PLC systems are blurring as time goes on. The technical limits that drove the designs of these various systems are no longer as much of an issue. Many PLC platforms can now perform quite well as a small DCS, using remote I/O and are sufficiently reliable that some SCADA systems actually manage closed-loop control over long distances. With the increasing speed of today's processors, many DCS products have a full line of PLC-like subsystems that weren't offered when they were initially developed. In 1993, with the release of IEC-1131, later to become IEC-61131-3, the industry moved towards increased code standardization with reusable, hardware-independent control software. For the first time, object-oriented programming (OOP) became possible within industrial control systems. This led to the development of both programmable automation controllers (PAC) and industrial PCs (IPC). These are platforms programmed in the five standardized IEC languages: ladder logic, structured text, function block, instruction list and sequential function chart. They can also be programmed in modern high-level languages such as C or C++. Additionally, they accept models developed in analytical tools such as MATLAB and Simulink. Unlike traditional PLCs, which use proprietary operating systems, IPCs utilize Windows IoT. 
IPCs have the advantage of powerful multi-core processors with much lower hardware costs than traditional PLCs and fit well into multiple form factors such as DIN rail mount, combined with a touch-screen as a panel PC, or as an embedded PC. New hardware platforms and technology have contributed significantly to the evolution of DCS and SCADA systems, further blurring the boundaries and changing definitions. == Security == SCADA and PLCs are vulnerable to cyber attack. The U.S. Government Joint Capability Technology Demonstration (JCTD) known as MOSAICS (More Situational Awareness for Industrial Control Systems) is the initial demonstration of cybersecurity defensive capability for critical infrastructure control systems. MOSAICS addresses the Department of Defense (DOD) operational need for cyber defense capabilities to defend critical infrastructure control systems from cyber attack, such as power, water and wastewater, and safety controls, which affect the physical environment. The MOSAICS JCTD prototype will be shared with commercial industry through Industry Days for further research and development, an approach intended to lead to innovative, game-changing capabilities for cybersecurity for critical infrastructure control systems. == See also == Automation Plant process and emergency shutdown systems MTConnect OPC Foundation Safety instrumented system (SIS) Control system security Operational Technology == References == == Further reading == Guide to Industrial Control Systems (ICS) Security, SP800-82 Rev2, National Institute of Standards and Technology, May 2015. Walker, Mark John (2012-09-08). The Programmable Logic Controller: its prehistory, emergence and application (PDF) (PhD thesis). Department of Communication and Systems Faculty of Mathematics, Computing and Technology: The Open University. Archived (PDF) from the original on 2018-06-20. Retrieved 2018-06-20. == External links == "New Age of Industrial Controllers". 
Archived from the original on 2016-03-03. Proview, an open source process control system "10 Reasons to choose PC Based Control". Manufacturing Automation. February 2015.
Wikipedia/Industrial_control_system
Temperature control is a process in which change of temperature of a space (and the objects within it), or of a substance, is measured or otherwise detected, and the passage of heat energy into or out of the space or substance is adjusted to achieve a desired temperature. Thermoregulation is the act of keeping the body at a static and regulated temperature that is suitable for the host despite the external temperature conditions. == See also == Heat exchanger Moving bed heat exchanger Thermal Control System Thermodynamic equilibrium Industrial automation Spacecraft thermal control == External links == Media related to Temperature control at Wikimedia Commons Article about PID control by Bob Pease (from archive.org) == References ==
Wikipedia/Temperature_control
A flow control valve regulates the flow or pressure of a fluid. Control valves normally respond to signals generated by independent devices such as flow meters or temperature gauges. == Operation == Control valves are normally fitted with actuators and positioners. Pneumatically-actuated globe valves and diaphragm valves are widely used for control purposes in many industries, although quarter-turn types such as (modified) ball and butterfly valves are also used. Control valves can also work with hydraulic actuators (also known as hydraulic pilots). These types of valves are also known as automatic control valves. The hydraulic actuators respond to changes of pressure or flow and will open or close the valve. Automatic control valves do not require an external power source, meaning that the fluid pressure is enough to open and close them. Flow control valves, a type of automatic control valve, regulate fluid flow by maintaining a predetermined flow rate, independent of variations in system pressure. These valves achieve this using pressure-compensated mechanisms, which automatically adjust the valve opening to ensure a steady flow rate. Some designs incorporate a dual-chamber configuration that enhances regulation at lower pressures, improving stability in applications such as irrigation, industrial water systems, and municipal water distribution. Additionally, pilot-operated flow control valves are used in more advanced systems to provide precise flow adjustments while optimizing energy efficiency. These configurations allow the valve to respond dynamically to changing conditions, ensuring efficient fluid management. Automatic control valves include pressure reducing valves, flow control valves, back-pressure sustaining valves, altitude valves, and relief valves. == Application == Process plants consist of hundreds, or even thousands, of control loops all networked together to produce a product to be offered for sale. 
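The pressure-compensated behaviour described in the Operation section can be sketched with the common sizing approximation Q = Cv × opening × √ΔP: the compensator adjusts the valve opening so that the flow stays at its set value as the pressure drop varies. This is a toy model with illustrative units and coefficient, not a sizing calculation for any real valve:

```python
import math

def compensated_opening(q_set, dp, cv=1.0):
    """Opening fraction a pressure-compensated flow valve needs so
    that Q = cv * opening * sqrt(dp) equals the set flow q_set.
    Clamped to the physical range [0, 1]."""
    if dp <= 0:
        return 0.0
    return min(1.0, q_set / (cv * math.sqrt(dp)))

# As the pressure drop rises, the valve closes down, holding Q steady.
for dp in (4.0, 9.0, 16.0):
    opening = compensated_opening(q_set=1.0, dp=dp)
    q = 1.0 * opening * math.sqrt(dp)
    print(round(opening, 2), round(q, 2))   # opening falls, Q stays 1.0
```

In a real valve this adjustment is done mechanically by the pressure-compensating element rather than by computation, but the relationship between opening, pressure drop, and flow is the same.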
Each of these control loops is designed to keep some important process variable, such as pressure, flow, level, or temperature, within a required operating range to ensure the quality of the end product. Each loop receives and internally creates disturbances that detrimentally affect the process variable, and interaction from other loops in the network provides disturbances that influence the process variable. To reduce the effect of these load disturbances, sensors and transmitters collect information about the process variable and its relationship to some desired set point. A controller then processes this information and decides what must be done to get the process variable back to where it should be after a load disturbance occurs. When all the measuring, comparing, and calculating are done, some type of final control element must implement the strategy selected by the controller. The most common final control element in the process control industries is the control valve. The control valve manipulates a flowing fluid, such as gas, steam, water, or chemical compounds, to compensate for the load disturbance and keep the regulated process variable as close as possible to the desired set point. == Images == == See also == == References ==
Wikipedia/Flow_control_valve
A distributed control system (DCS) is a computerized control system for a process or plant usually with many control loops, in which autonomous controllers are distributed throughout the system, but there is central operator supervisory control. This is in contrast to systems that use centralized controllers; either discrete controllers located at a central control room or within a central computer. The DCS concept increases reliability and reduces installation costs by localizing control functions near the process plant, with remote monitoring and supervision. Distributed control systems first emerged in large, high value, safety critical process industries, and were attractive because the DCS manufacturer would supply both the local control level and central supervisory equipment as an integrated package, thus reducing design integration risk. Today the functionality of supervisory control and data acquisition (SCADA) and DCS systems are very similar, but DCS tends to be used on large continuous process plants where high reliability and security is important, and the control room is not necessarily geographically remote. Many machine control systems exhibit properties similar to those of plant and process control systems. == Structure == The key attribute of a DCS is its reliability due to the distribution of the control processing around nodes in the system. This mitigates the effect of a single processor failure. If a processor fails, it will only affect one section of the plant process, as opposed to a failure of a central computer which would affect the whole process. This distribution of computing power local to the field Input/Output (I/O) connection racks also ensures fast controller processing times by removing possible network and central processing delays. The accompanying diagram is a general model which shows functional manufacturing levels using computerised control. 
Referring to the diagram: Level 0 contains the field devices, such as flow and temperature sensors, and final control elements, such as control valves; Level 1 contains the industrialised Input/Output (I/O) modules and their associated distributed electronic processors; Level 2 contains the supervisory computers, which collect information from processor nodes on the system and provide the operator control screens; Level 3 is the production control level, which does not directly control the process but is concerned with monitoring production and monitoring targets; Level 4 is the production scheduling level. Levels 1 and 2 are the functional levels of a traditional DCS, in which all equipment is part of an integrated system from a single manufacturer. Levels 3 and 4 are not strictly process control in the traditional sense, but are where production control and scheduling take place. === Technical points === The processor nodes and operator graphical displays are connected over proprietary or industry standard networks, and network reliability is increased by dual redundancy cabling over diverse routes. This distributed topology also reduces the amount of field cabling by siting the I/O modules and their associated processors close to the process plant. The processors receive information from input modules, process the information and decide control actions to be signalled by the output modules. The field inputs and outputs can be analog signals, e.g., 4–20 mA DC current loop, or two-state signals that switch either "on" or "off", such as relay contacts or a semiconductor switch. DCSs are connected to sensors and actuators and use setpoint control to control the flow of material through the plant. A typical application is a PID controller fed by a flow meter and using a control valve as the final control element. 
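A cascaded arrangement of the kind a DCS makes easy to configure, where an outer loop's output becomes an inner flow loop's setpoint driving the valve, can be sketched with two proportional controllers. The gains and the two first-order process models are illustrative toys, not tuned for any plant:

```python
def p_controller(sp, pv, kp):
    """Proportional-only controller (gains are illustrative)."""
    return kp * (sp - pv)

# Cascade: an outer level loop sets the setpoint of an inner flow
# loop, which drives the valve. Both processes are first-order lags.
level, flow = 2.0, 0.0
for _ in range(400):
    flow_sp = p_controller(sp=5.0, pv=level, kp=1.0)    # outer loop
    valve = p_controller(sp=flow_sp, pv=flow, kp=2.0)   # inner loop
    flow += 0.5 * (valve - flow)     # fast inner process
    level += 0.05 * (flow - 1.0)     # slow tank; 1.0 is outflow demand
# P-only control leaves a steady-state offset: the level settles near
# 3.5 rather than 5.0, which is why real loops add integral action.
print(round(level, 1), round(flow, 2))
```

The cascade lets the fast inner loop reject flow disturbances before they reach the slow level loop, which is the usual motivation for this structure.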
The DCS sends the setpoint required by the process to the controller which instructs a valve to operate so that the process reaches and stays at the desired setpoint. (see 4–20 mA schematic for example). Large oil refineries and chemical plants have several thousand I/O points and employ very large DCSs. Processes are not limited to fluidic flow through pipes, however, and can also include things like paper machines and their associated quality controls, variable speed drives and motor control centers, cement kilns, mining operations, ore processing facilities, and many others. DCSs in very high reliability applications can have dual redundant processors with "hot" switch over on fault, to enhance the reliability of the control system. Although 4–20 mA has been the main field signalling standard, modern DCS systems can also support fieldbus digital protocols, such as Foundation Fieldbus, PROFIBUS, HART, Modbus, PC Link, etc. Modern DCSs also support neural networks and fuzzy logic applications. Recent research focuses on the synthesis of optimal distributed controllers, which optimize a certain H-infinity or H2 control criterion. == Typical applications == Distributed control systems (DCS) are dedicated systems used in manufacturing processes that are continuous or batch-oriented. Processes where a DCS might be used include: Chemical plants Petrochemical plants, refineries, Oil platforms, FPSOs and LNG plants Pulp and paper mills (see also: quality control system QCS) Boiler controls and power plant systems Nuclear power plants Environmental control systems Water management systems Water treatment plants Sewage treatment plants Food and food processing Agrochemical and fertilizer Metal and mines Automobile manufacturing Metallurgical process plants Pharmaceutical manufacturing Sugar refining plants Agriculture applications == History == === Evolution of process control operations === Process control of large industrial plants has evolved through many stages. 
Initially, control would be from panels local to the process plant. However this required a large amount of human oversight to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Effectively this was the centralisation of all the localised panels, with the advantages of lower manning levels and easier overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to plant. However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process. With the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around plant, and communicate with the graphic display in the control room or rooms. The distributed control system was born. The introduction of DCSs allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels. === Origins === Early minicomputers were used in the control of industrial processes since the beginning of the 1960s. 
The IBM 1800, for example, was an early computer that had input/output hardware to gather process signals in a plant for conversion from field contact levels (for digital points) and analog signals to the digital domain. The first industrial control computer system was built in 1959 at the Texaco Port Arthur, Texas, refinery with an RW-300 of the Ramo-Wooldridge Company. In 1975, both Yamatake-Honeywell and Japanese electrical engineering firm Yokogawa introduced their own independently produced DCSs, the TDC 2000 and CENTUM systems, respectively. US-based Bristol also introduced their UCS 3000 universal controller in 1975. In 1978 Valmet introduced their own DCS system called Damatic (latest web-based generation Valmet DNAe). In 1980, Bailey (now part of ABB) introduced the NETWORK 90 system, Fisher Controls (now part of Emerson Electric) introduced the PROVoX system, and Fischer & Porter Company (now also part of ABB) introduced the DCI-4000 (DCI stands for Distributed Control Instrumentation). The DCS largely came about due to the increased availability of microcomputers and the proliferation of microprocessors in the world of process control. Computers had already been applied to process automation for some time in the form of both direct digital control (DDC) and setpoint control. In the early 1970s Taylor Instrument Company (now part of ABB) developed the 1010 system, Foxboro the FOX1 system, Fisher Controls the DC2 system and Bailey Controls the 1055 systems. All of these were DDC applications implemented within minicomputers (DEC PDP-11, Varian Data Machines, MODCOMP etc.) and connected to proprietary Input/Output hardware. Sophisticated (for the time) continuous as well as batch control was implemented in this way. A more conservative approach was setpoint control, where process computers supervised clusters of analog process controllers. A workstation provided visibility into the process using text and crude character graphics. 
Availability of a fully functional graphical user interface was still some way off. === Development === Central to the DCS model was the inclusion of control function blocks. Function blocks evolved from early, more primitive DDC concepts of "table-driven" software. One of the first embodiments of object-oriented software, function blocks were self-contained "blocks" of code that emulated analog hardware control components and performed tasks that were essential to process control, such as execution of PID algorithms. Function blocks continue to endure as the predominant method of control for DCS suppliers, and are supported by key technologies such as Foundation Fieldbus today. Midac Systems, of Sydney, Australia, developed an object-oriented distributed direct digital control system in 1982. The central system ran 11 microprocessors sharing tasks and common memory and connected to a serial communication network of distributed controllers, each running two Z80s. The system was installed at the University of Melbourne. Digital communication between distributed controllers, workstations and other computing elements (peer-to-peer access) was one of the primary advantages of the DCS. Attention was duly focused on the networks, which provided the all-important lines of communication that, for process applications, had to incorporate specific functions such as determinism and redundancy. As a result, many suppliers embraced the IEEE 802.4 networking standard. This decision set the stage for the wave of migrations necessary when information technology moved into process automation and IEEE 802.3 rather than IEEE 802.4 prevailed as the control LAN. === The network-centric era of the 1980s === In the 1980s, users began to look at DCSs as more than just basic process control. A very early example of a direct digital control DCS was completed by the Australian business Midac in 1981–82 using Australian-designed R-Tec hardware.
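The idea of a function block that emulates an analog controller can be sketched in a few lines. This is a generic illustration only, not any vendor's API: the class name, parameters and scan model are hypothetical, assuming a textbook positional PID algorithm executed once per control-processor scan cycle.

```python
class PIDFunctionBlock:
    """A self-contained "block" emulating an analog PID controller,
    executed once per control-processor scan cycle."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt                # scan interval in seconds
        self.integral = 0.0
        self.prev_error = 0.0

    def execute(self, setpoint, measurement):
        """One scan: read inputs, update internal state, return the output."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# One scan of a purely proportional block: output = 2.0 * (10 - 7) = 6.0
block = PIDFunctionBlock(kp=2.0, ki=0.0, kd=0.0, dt=1.0)
print(block.execute(setpoint=10.0, measurement=7.0))
```

Because each block carries its own state and tuning, blocks like this can be wired together (cascades, interlocks) and reassigned between control processors, which is the flexibility the DCS introduced.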
The system installed at the University of Melbourne used a serial communications network, connecting campus buildings back to a control room "front end". Each remote unit ran two Z80 microprocessors, while the front end ran eleven Z80s in a parallel processing configuration with paged common memory to share tasks, and could run up to 20,000 concurrent control objects. It was believed that if openness could be achieved and greater amounts of data could be shared throughout the enterprise, then even greater things could be achieved. The first attempts to increase the openness of DCSs resulted in the adoption of the predominant operating system of the day: UNIX. UNIX and its companion networking technology TCP/IP were developed by the US Department of Defense for openness, which was precisely the issue the process industries were looking to resolve. As a result, suppliers also began to adopt Ethernet-based networks with their own proprietary protocol layers. The full TCP/IP standard was not implemented, but the use of Ethernet made it possible to implement the first instances of object management and global data access technology. The 1980s also witnessed the first PLCs integrated into the DCS infrastructure. Plant-wide historians also emerged to capitalize on the extended reach of automation systems. The first DCS supplier to adopt UNIX and Ethernet networking technologies was Foxboro, who introduced the I/A Series system in 1987. === The application-centric era of the 1990s === The drive toward openness in the 1980s gained momentum through the 1990s with the increased adoption of commercial off-the-shelf (COTS) components and IT standards. Probably the biggest transition undertaken during this time was the move from the UNIX operating system to the Windows environment.
While the realm of the real-time operating system (RTOS) for control applications remains dominated by real-time commercial variants of UNIX or proprietary operating systems, everything above real-time control has made the transition to Windows. The introduction of Microsoft at the desktop and server layers resulted in the development of technologies such as OLE for Process Control (OPC), which is now a de facto industry connectivity standard. Internet technology also began to make its mark in automation and the wider world, with most DCS HMIs supporting Internet connectivity. The 1990s were also known for the "Fieldbus Wars", where rival organizations competed to define what would become the IEC fieldbus standard for digital communication with field instrumentation instead of 4–20 milliamp analog communications. The first fieldbus installations occurred in the 1990s. Towards the end of the decade, the technology began to develop significant momentum, with the market consolidated around EtherNet/IP, Foundation Fieldbus and Profibus PA for process automation applications. Some suppliers built new systems from the ground up to maximize functionality with fieldbus, such as Rockwell with the PlantPAx system, Honeywell with the Experion and Plantscape SCADA systems, ABB with System 800xA, Emerson Process Management with the DeltaV control system, Siemens with the SPPA-T3000 or Simatic PCS 7, Forbes Marshall with the Microcon+ control system and Azbil Corporation with the Harmonas-DEO system. Fieldbus techniques have been used to integrate machine, drive, quality and condition monitoring applications into one DCS with the Valmet DNA system. The impact of COTS, however, was most pronounced at the hardware layer. For years, the primary business of DCS suppliers had been the supply of large amounts of hardware, particularly I/O and controllers.
The initial proliferation of DCSs required the installation of prodigious amounts of this hardware, most of it manufactured from the bottom up by DCS suppliers. Standard computer components from manufacturers such as Intel and Motorola, however, made it cost prohibitive for DCS suppliers to continue making their own components, workstations, and networking hardware. As the suppliers made the transition to COTS components, they also discovered that the hardware market was shrinking fast. COTS not only resulted in lower manufacturing costs for the supplier, but also in steadily decreasing prices for the end users, who were becoming increasingly vocal over what they perceived to be unduly high hardware costs. Some suppliers that were previously stronger in the PLC business, such as Rockwell Automation and Siemens, were able to leverage their expertise in manufacturing control hardware to enter the DCS marketplace with cost-effective offerings, though the stability, scalability, reliability and functionality of these emerging systems were still improving. The traditional DCS suppliers introduced new-generation systems based on the latest communication and IEC standards, resulting in a trend of combining the traditional concepts and functionalities of the PLC and the DCS into a single all-in-one solution, named the "Process Automation System" (PAS). Gaps among the various systems remain in areas such as database integrity, pre-engineering functionality, system maturity, communication transparency and reliability. While the cost ratio is expected to stay roughly the same (the more powerful the systems are, the more expensive they will be), automation business decisions are in practice often made strategically, case by case. The current next evolution step is called Collaborative Process Automation Systems. To compound the issue, suppliers were also realizing that the hardware market was becoming saturated.
The life cycle of hardware components such as I/O and wiring is also typically in the range of 15 to over 20 years, making for a challenging replacement market. Many of the older systems that were installed in the 1970s and 1980s are still in use today, and there is a considerable installed base of systems in the market that are approaching the end of their useful life. Developed industrial economies in North America, Europe, and Japan already had many thousands of DCSs installed, and with few if any new plants being built, the market for new hardware was shifting rapidly to smaller, albeit faster growing regions such as China, Latin America, and Eastern Europe. Because of the shrinking hardware business, suppliers began to make the challenging transition from a hardware-based business model to one based on software and value-added services. It is a transition that is still being made today. The applications portfolio offered by suppliers expanded considerably in the '90s to include areas such as production management, model-based control, real-time optimization, plant asset management (PAM), Real-time performance management (RPM) tools, alarm management, and many others. To obtain the true value from these applications, however, often requires a considerable service content, which the suppliers also provide. === Modern systems (2010 onwards) === The latest developments in DCS include the following new technologies: Wireless systems and protocols Remote transmission, logging and data historian Mobile interfaces and controls Embedded web-servers Increasingly, and ironically, DCS are becoming centralised at plant level, with the ability to log into the remote equipment. This enables operator to control both at enterprise level ( macro ) and at the equipment level (micro), both within and outside the plant, because the importance of the physical location drops due to interconnectivity primarily thanks to wireless and remote access. 
The more wireless protocols are developed and refined, the more they are included in DCSs. DCS controllers are now often equipped with embedded servers and provide on-the-go web access. Whether DCS will lead the Industrial Internet of Things (IIoT) or borrow key elements from it remains to be seen. Many vendors provide the option of a mobile HMI, ready for both Android and iOS. With these interfaces, the threat of security breaches and possible damage to plant and process is now very real. == See also == Annunciator panel Building automation EPICS Industrial control system Plant process and emergency shutdown systems Safety instrumented system (SIS) TANGO == References ==
Wikipedia/Distributed_Control_System
A fuzzy control system is a control system based on fuzzy logic – a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively). Fuzzy logic is widely used in machine control. The term "fuzzy" refers to the fact that the logic involved can deal with concepts that cannot be expressed as "true" or "false" but rather as "partially true". Although alternative approaches such as genetic algorithms and neural networks can perform just as well as fuzzy logic in many cases, fuzzy logic has the advantage that the solution to the problem can be cast in terms that human operators can understand, so that their experience can be used in the design of the controller. This makes it easier to mechanize tasks that are already successfully performed by humans. == History and applications == Fuzzy logic was proposed by Lotfi A. Zadeh of the University of California at Berkeley in a 1965 paper. He elaborated on his ideas in a 1973 paper that introduced the concept of "linguistic variables", which in this article equates to a variable defined as a fuzzy set. Other research followed, with the first industrial application, a cement kiln built in Denmark, coming on line in 1976. Fuzzy systems were initially implemented in Japan. Interest in fuzzy systems was sparked by Seiji Yasunobu and Soji Miyamoto of Hitachi, who in 1985 provided simulations that demonstrated the feasibility of fuzzy control systems for the Sendai Subway. Their ideas were adopted, and fuzzy systems were used to control accelerating, braking, and stopping when the Namboku Line opened in 1987. In 1987, Takeshi Yamakawa demonstrated the use of fuzzy control, through a set of simple dedicated fuzzy logic chips, in an "inverted pendulum" experiment.
This is a classic control problem, in which a vehicle tries to keep a pole, mounted on its top by a hinge, upright by moving back and forth. Yamakawa subsequently made the demonstration more sophisticated by mounting a wine glass containing water, and even a live mouse, on top of the pendulum: the system maintained stability in both cases. Yamakawa eventually went on to organize his own fuzzy-systems research lab to help exploit his patents in the field. Japanese engineers subsequently developed a wide range of fuzzy systems for both industrial and consumer applications. In 1988 Japan established the Laboratory for International Fuzzy Engineering (LIFE), a cooperative arrangement between 48 companies to pursue fuzzy research. The automotive company Volkswagen was the only foreign corporate member of LIFE, dispatching a researcher for a duration of three years. Japanese consumer goods often incorporate fuzzy systems. Matsushita vacuum cleaners use microcontrollers running fuzzy algorithms to interrogate dust sensors and adjust suction power accordingly. Hitachi washing machines use fuzzy controllers to interrogate load-weight, fabric-mix, and dirt sensors and automatically set the wash cycle for the best use of power, water, and detergent. Canon developed an autofocusing camera that uses a charge-coupled device (CCD) to measure the clarity of the image in six regions of its field of view and uses the information provided to determine if the image is in focus. It also tracks the rate of change of lens movement during focusing, and controls its speed to prevent overshoot. The camera's fuzzy control system uses 12 inputs: 6 to obtain the current clarity data provided by the CCD and 6 to measure the rate of change of lens movement. The output is the position of the lens. The fuzzy control system uses 13 rules and requires 1.1 kilobytes of memory. An industrial air conditioner designed by Mitsubishi uses 25 heating rules and 25 cooling rules.
A temperature sensor provides input, with control outputs fed to an inverter, a compressor valve, and a fan motor. Compared to the previous design, the fuzzy controller heats and cools five times faster, reduces power consumption by 24%, increases temperature stability by a factor of two, and uses fewer sensors. Other applications investigated or implemented include: character and handwriting recognition; optical fuzzy systems; robots, including one for making Japanese flower arrangements; voice-controlled robot helicopters (hovering is a "balancing act" rather similar to the inverted pendulum problem); rehabilitation robotics to provide patient-specific solutions (e.g. to control heart rate and blood pressure ); control of flow of powders in film manufacture; elevator systems; and so on. Work on fuzzy systems is also proceeding in North America and Europe, although on a less extensive scale than in Japan. The US Environmental Protection Agency has investigated fuzzy control for energy-efficient motors, and NASA has studied fuzzy control for automated space docking: simulations show that a fuzzy control system can greatly reduce fuel consumption. Firms such as Boeing, General Motors, Allen-Bradley, Chrysler, Eaton, and Whirlpool have worked on fuzzy logic for use in low-power refrigerators, improved automotive transmissions, and energy-efficient electric motors. In 1995 Maytag introduced an "intelligent" dishwasher based on a fuzzy controller and a "one-stop sensing module" that combines a thermistor, for temperature measurement; a conductivity sensor, to measure detergent level from the ions present in the wash; a turbidity sensor that measures scattered and transmitted light to measure the soiling of the wash; and a magnetostrictive sensor to read spin rate. The system determines the optimum wash cycle for any load to obtain the best results with the least amount of energy, detergent, and water. 
It even adjusts for dried-on foods by tracking the last time the door was opened, and estimates the number of dishes by the number of times the door was opened. Xiera Technologies Inc. has developed the first auto-tuner for the fuzzy logic controller's knowledge base, known as edeX. This technology was tested by Mohawk College and was able to solve non-linear 2x2 and 3x3 multi-input multi-output problems. Research and development is also continuing on fuzzy applications in software, as opposed to firmware, design, including fuzzy expert systems and integration of fuzzy logic with neural-network and so-called adaptive "genetic" software systems, with the ultimate goal of building "self-learning" fuzzy-control systems. These systems can be employed to control complex, nonlinear dynamic plants, for example, the human body. == Fuzzy sets == The input variables in a fuzzy control system are in general mapped by sets of membership functions known as "fuzzy sets". The process of converting a crisp input value to a fuzzy value is called "fuzzification". For example, a fuzzy-logic-based approach to vehicle control might be designed as two fuzzy systems, one for heading-angle error and the other for velocity control. A control system may also have various types of switch, or "ON-OFF", inputs along with its analog inputs, and such switch inputs of course will always have a truth value equal to either 1 or 0, but the scheme can deal with them as simplified fuzzy functions that happen to take either one value or the other. Given "mappings" of input variables into membership functions and truth values, the microcontroller then makes decisions for what action to take, based on a set of "rules", each of the form: IF brake temperature IS warm AND speed IS not very fast THEN brake pressure IS slightly decreased. In this example, the two input variables are "brake temperature" and "speed", which have values defined as fuzzy sets.
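Fuzzification of a crisp input can be sketched with the common piecewise-linear triangular membership function. This is a minimal illustration; the temperature sets and breakpoints chosen here are hypothetical, not taken from any particular controller.

```python
def triangle(x, left, peak, right):
    """Triangular membership function: 0 outside [left, right], 1 at peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Hypothetical temperature fuzzy sets (degrees Celsius)
def fuzzify(temp):
    return {
        "cold": triangle(temp, -10, 0, 15),
        "warm": triangle(temp, 5, 20, 35),
        "hot":  triangle(temp, 25, 40, 55),
    }

print(fuzzify(10))  # a 10-degree reading is partly "cold", partly "warm", not at all "hot"
```

Note that a single crisp reading can belong to two overlapping sets at once with partial truth values; that overlap is what makes the later rule evaluation smooth rather than switch-like.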
The output variable, "brake pressure" is also defined by a fuzzy set that can have values like "static" or "slightly increased" or "slightly decreased" etc. === Fuzzy control in detail === Fuzzy controllers are very simple conceptually. They consist of an input stage, a processing stage, and an output stage. The input stage maps sensor or other inputs, such as switches, thumbwheels, and so on, to the appropriate membership functions and truth values. The processing stage invokes each appropriate rule and generates a result for each, then combines the results of the rules. Finally, the output stage converts the combined result back into a specific control output value. The most common shape of membership functions is triangular, although trapezoidal and bell curves are also used, but the shape is generally less important than the number of curves and their placement. From three to seven curves are generally appropriate to cover the required range of an input value, or the "universe of discourse" in fuzzy jargon. As discussed earlier, the processing stage is based on a collection of logic rules in the form of IF-THEN statements, where the IF part is called the "antecedent" and the THEN part is called the "consequent". Typical fuzzy control systems have dozens of rules. Consider a rule for a thermostat: IF (temperature is "cold") THEN turn (heater is "high") This rule uses the truth value of the "temperature" input, which is some truth value of "cold", to generate a result in the fuzzy set for the "heater" output, which is some value of "high". This result is used with the results of other rules to finally generate the crisp composite output. Obviously, the greater the truth value of "cold", the higher the truth value of "high", though this does not necessarily mean that the output itself will be set to "high" since this is only one rule among many. In some cases, the membership functions can be modified by "hedges" that are equivalent to adverbs. 
Common hedges include "about", "near", "close to", "approximately", "very", "slightly", "too", "extremely", and "somewhat". These operations may have precise definitions, though the definitions can vary considerably between different implementations. "Very", for one example, squares membership functions; since the membership values are always less than 1, this narrows the membership function. "Extremely" cubes the values to give greater narrowing, while "somewhat" broadens the function by taking the square root. In practice, the fuzzy rule sets usually have several antecedents that are combined using fuzzy operators, such as AND, OR, and NOT, though again the definitions tend to vary: AND, in one popular definition, simply uses the minimum weight of all the antecedents, while OR uses the maximum value. There is also a NOT operator that subtracts a membership function from 1 to give the "complementary" function. There are several ways to define the result of a rule, but one of the most common and simplest is the "max-min" inference method, in which the output membership function is given the truth value generated by the premise. Rules can be solved in parallel in hardware, or sequentially in software. The results of all the rules that have fired are "defuzzified" to a crisp value by one of several methods. There are dozens, in theory, each with various advantages or drawbacks. The "centroid" method is very popular, in which the "center of mass" of the result provides the crisp value. Another approach is the "height" method, which takes the value of the biggest contributor. The centroid method favors the rule with the output of greatest area, while the height method obviously favors the rule with the greatest output value. The diagram below demonstrates max-min inferencing and centroid defuzzification for a system with input variables "x", "y", and "z" and an output variable "n". 
Note that "mu" is standard fuzzy-logic nomenclature for "truth value": Notice how each rule provides a result as a truth value of a particular membership function for the output variable. In centroid defuzzification the values are OR'd, that is, the maximum value is used and values are not added, and the results are then combined using a centroid calculation. Fuzzy control system design is based on empirical methods, basically a methodical approach to trial-and-error. The general process is as follows:

Document the system's operational specifications and inputs and outputs.
Document the fuzzy sets for the inputs.
Document the rule set.
Determine the defuzzification method.
Run through a test suite to validate the system, adjusting details as required.
Complete the documentation and release to production.

As a general example, consider the design of a fuzzy controller for a steam turbine. The block diagram of this control system appears as follows: The input and output variables map into the following fuzzy set, where:

N3: Large negative.
N2: Medium negative.
N1: Small negative.
Z: Zero.
P1: Small positive.
P2: Medium positive.
P3: Large positive.

The rule set includes such rules as:

rule 1: IF temperature IS cool AND pressure IS weak, THEN throttle is P3.
rule 2: IF temperature IS cool AND pressure IS low, THEN throttle is P2.
rule 3: IF temperature IS cool AND pressure IS ok, THEN throttle is Z.
rule 4: IF temperature IS cool AND pressure IS strong, THEN throttle is N2.

In practice, the controller accepts the inputs and maps them into their membership functions and truth values. These mappings are then fed into the rules. If the rule specifies an AND relationship between the mappings of the two input variables, as the examples above do, the minimum of the two is used as the combined truth value; if an OR is specified, the maximum is used. The appropriate output state is selected and assigned a membership value at the truth level of the premise.
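The AND-as-minimum combination just described can be sketched for the four steam-turbine rules. The truth values below are hypothetical, chosen only to illustrate how the minimum operator decides which rules fire.

```python
# Hypothetical truth values after fuzzifying one sample of the two inputs
temperature = {"cool": 0.8}
pressure = {"weak": 0.0, "low": 0.6, "ok": 0.4, "strong": 0.0}

# AND between antecedents -> take the minimum of the two truth values
rules = [
    ("P3", min(temperature["cool"], pressure["weak"])),    # rule 1
    ("P2", min(temperature["cool"], pressure["low"])),     # rule 2
    ("Z",  min(temperature["cool"], pressure["ok"])),      # rule 3
    ("N2", min(temperature["cool"], pressure["strong"])),  # rule 4
]

for output_set, strength in rules:
    print(output_set, strength)
# With these inputs only rules 2 and 3 fire (non-zero strength):
# the throttle output is P2 at truth 0.6 and Z at truth 0.4
```

An OR between antecedents would use `max` instead; rules whose strength evaluates to zero contribute nothing to the later defuzzification step.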
The truth values are then defuzzified. For example, assume the temperature is in the "cool" state, and the pressure is in the "low" and "ok" states. The pressure values ensure that only rules 2 and 3 fire. The two outputs are then defuzzified through centroid defuzzification. (The original figure shows the Z and P2 output membership functions clipped at their rules' truth values; the centroid of the combined area, at about +150, is the crisp output.) The output value will adjust the throttle, and then the control cycle will begin again to generate the next value. === Building a fuzzy controller === Consider implementing with a microcontroller chip a simple feedback controller. A fuzzy set is defined for the input error variable "e", and the derived change in error, "delta", as well as the "output", as follows:

LP: large positive
SP: small positive
ZE: zero
SN: small negative
LN: large negative

If the error ranges from -1 to +1, with the analog-to-digital converter used having a resolution of 0.25, then the input variable's fuzzy set (which, in this case, also applies to the output variable) can be described very simply as a table, with the error / delta / output values in the top row and the truth values for each membership function arranged in rows beneath:

_______________________________________________________________________
         -1    -0.75  -0.5   -0.25   0     0.25   0.5   0.75    1
_______________________________________________________________________
mu(LP)    0     0      0      0      0     0      0.3   0.7     1
mu(SP)    0     0      0      0      0.3   0.7    1     0.7     0.3
mu(ZE)    0     0      0.3    0.7    1     0.7    0.3   0       0
mu(SN)    0.3   0.7    1      0.7    0.3   0      0     0       0
mu(LN)    1     0.7    0.3    0      0     0      0     0       0
_______________________________________________________________________

—or, in graphical form (where each "X" has a value of 0.1):

        LN           SN           ZE           SP           LP
-1.0    XXXXXXXXXX   XXX           :            :            :
-0.75   XXXXXXX      XXXXXXX       :            :            :
-0.5    XXX          XXXXXXXXXX   XXX           :            :
-0.25    :           XXXXXXX      XXXXXXX       :            :
 0.0     :           XXX          XXXXXXXXXX   XXX           :
 0.25    :            :           XXXXXXX      XXXXXXX       :
 0.5     :            :           XXX          XXXXXXXXXX   XXX
 0.75    :            :            :           XXXXXXX      XXXXXXX
 1.0     :            :            :           XXX          XXXXXXXXXX

Suppose this fuzzy system has the following rule base:

rule 1: IF e = ZE AND delta = ZE THEN output = ZE
rule 2: IF e = ZE AND delta = SP THEN output = SN
rule 3: IF e = SN AND delta = SN THEN output = LP
rule 4: IF e = LP OR delta = LP THEN output = LN

These rules are typical for control applications in that the antecedents consist of the logical combination of the error and error-delta signals, while the consequent is a control command output. The rule outputs can be defuzzified using a discrete centroid computation:

SUM( I = 1 TO 4 OF ( mu(I) * output(I) ) ) / SUM( I = 1 TO 4 OF mu(I) )

Now, suppose that at a given time: e = 0.25 and delta = 0.5. Then this gives:

________________________
         e     delta
________________________
mu(LP)   0     0.3
mu(SP)   0.7   1
mu(ZE)   0.7   0.3
mu(SN)   0     0
mu(LN)   0     0
________________________

Plugging this into rule 1 gives:

rule 1: IF e = ZE AND delta = ZE THEN output = ZE
        mu(1) = MIN( 0.7, 0.3 ) = 0.3
        output(1) = 0

—where:

mu(1): Truth value of the result membership function for rule 1. In terms of a centroid calculation, this is the "mass" of this result for this discrete case.
output(1): Value (for rule 1) where the result membership function (ZE) is maximum over the output variable fuzzy set range. That is, in terms of a centroid calculation, the location of the "center of mass" for this individual result. This value is independent of the value of "mu". It simply identifies the location of ZE along the output range.
The other rules give:

rule 2: IF e = ZE AND delta = SP THEN output = SN
        mu(2) = MIN( 0.7, 1 ) = 0.7
        output(2) = -0.5

rule 3: IF e = SN AND delta = SN THEN output = LP
        mu(3) = MIN( 0.0, 0.0 ) = 0
        output(3) = 1

rule 4: IF e = LP OR delta = LP THEN output = LN
        mu(4) = MAX( 0.0, 0.3 ) = 0.3
        output(4) = -1

The centroid computation yields:

( mu(1)*output(1) + mu(2)*output(2) + mu(3)*output(3) + mu(4)*output(4) ) / ( mu(1) + mu(2) + mu(3) + mu(4) )
= ( (0.3 * 0) + (0.7 * -0.5) + (0 * 1) + (0.3 * -1) ) / ( 0.3 + 0.7 + 0 + 0.3 )
= -0.65 / 1.3
= -0.5

—for the final control output. Simple. Of course the hard part is figuring out what rules actually work correctly in practice. If you have problems figuring out the centroid equation, remember that a centroid is defined by summing all the moments (location times mass) around the center of gravity and equating the sum to zero.
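The hand computation above can be reproduced in a few lines. This is a sketch that encodes the membership table from the text directly and takes the output locations for ZE, SN, LP and LN (0, -0.5, 1 and -1) from the rule results:

```python
# Truth-value table from the text: one row of memberships per fuzzy set,
# sampled at the nine quantized error/delta/output values
points = [-1, -0.75, -0.5, -0.25, 0, 0.25, 0.5, 0.75, 1]
table = {
    "LP": [0, 0, 0, 0, 0, 0, 0.3, 0.7, 1],
    "SP": [0, 0, 0, 0, 0.3, 0.7, 1, 0.7, 0.3],
    "ZE": [0, 0, 0.3, 0.7, 1, 0.7, 0.3, 0, 0],
    "SN": [0.3, 0.7, 1, 0.7, 0.3, 0, 0, 0, 0],
    "LN": [1, 0.7, 0.3, 0, 0, 0, 0, 0, 0],
}

def mu(name, value):
    """Membership of a quantized input value in the named fuzzy set."""
    return table[name][points.index(value)]

e, delta = 0.25, 0.5

# (firing strength, output location) per rule: min for AND, max for OR
fired = [
    (min(mu("ZE", e), mu("ZE", delta)),  0.0),   # rule 1 -> ZE
    (min(mu("ZE", e), mu("SP", delta)), -0.5),   # rule 2 -> SN
    (min(mu("SN", e), mu("SN", delta)),  1.0),   # rule 3 -> LP
    (max(mu("LP", e), mu("LP", delta)), -1.0),   # rule 4 -> LN
]

# Discrete centroid defuzzification
crisp = sum(m * out for m, out in fired) / sum(m for m, _ in fired)
print(round(crisp, 6))  # -0.5, matching the hand computation
```

The four firing strengths come out as 0.3, 0.7, 0 and 0.3, exactly as in the worked example, and the centroid of the weighted output locations is -0.5.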
So if X_0 is the center of gravity, X_i is the location of each mass, and M_i is each mass, this gives:

0 = (X_1 - X_0)*M_1 + (X_2 - X_0)*M_2 + ... + (X_n - X_0)*M_n
0 = (X_1*M_1 + X_2*M_2 + ... + X_n*M_n) - X_0*(M_1 + M_2 + ... + M_n)
X_0*(M_1 + M_2 + ... + M_n) = X_1*M_1 + X_2*M_2 + ... + X_n*M_n
X_0 = (X_1*M_1 + X_2*M_2 + ... + X_n*M_n) / (M_1 + M_2 + ... + M_n)

In our example, the values of mu correspond to the masses, and the values of X to the locations of the masses. (mu, however, only corresponds to the masses if the initial "masses" of the output functions are all the same. If they are not, i.e. some are narrow triangles while others are wide trapezoids or shouldered triangles, then the mass or area of each output function must be known or calculated. It is this mass that is then scaled by mu and multiplied by its location X_i.) This system can be implemented on a standard microprocessor, but dedicated fuzzy chips are now available. For example, Adaptive Logic INC of San Jose, California, sells a "fuzzy chip", the AL220, that can accept four analog inputs and generate four analog outputs.
A block diagram of the chip is shown below:

analog in --4--> analog mux --> ADC / latch --8--> fuzzifier --> rule processor (50 rules) --> defuzzifier --> DAC --> mux / SH --4--> analog out
                                                                        ^
                                                                        |
                                                             parameter memory (256 x 8)

ADC: analog-to-digital converter
DAC: digital-to-analog converter
SH: sample/hold

== Antilock brakes == As an example, consider an anti-lock braking system, directed by a microcontroller chip. The microcontroller has to make decisions based on brake temperature, speed, and other variables in the system. The variable "temperature" in this system can be subdivided into a range of "states": "cold", "cool", "moderate", "warm", "hot", "very hot". The transition from one state to the next is hard to define. An arbitrary static threshold might be set to divide "warm" from "hot". For example, at exactly 90 degrees, warm ends and hot begins. But this would result in a discontinuous change when the input value passed over that threshold. The transition wouldn't be smooth, as would be required in braking situations. The way around this is to make the states fuzzy. That is, allow them to change gradually from one state to the next. In order to do this, there must be a dynamic relationship established between different factors. Start by defining the input temperature states using "membership functions": With this scheme, the input variable's state no longer jumps abruptly from one state to the next. Instead, as the temperature changes, it loses value in one membership function while gaining value in the next. In other words, its ranking in the category of cold decreases as it becomes more highly ranked in the warmer category.
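The gradual hand-off between adjacent states can be sketched as a pair of overlapping ramps. The 80-to-100-degree overlap used here is hypothetical, chosen only to show that nothing abrupt happens at the old 90-degree threshold.

```python
def ramp_pair(x, lo, hi):
    """Membership falling in one set and rising in the next as x
    crosses the overlap region [lo, hi]."""
    t = (x - lo) / (hi - lo)
    t = max(0.0, min(1.0, t))      # clamp outside the overlap
    return 1.0 - t, t              # (lower set, e.g. "warm"; upper set, e.g. "hot")

# Hypothetical warm/hot overlap between 80 and 100 degrees:
# membership shifts smoothly instead of jumping at 90
for temp in (80, 85, 90, 95, 100):
    warm, hot = ramp_pair(temp, 80, 100)
    print(temp, warm, hot)
```

At 90 degrees the reading is exactly half "warm" and half "hot", and small temperature changes produce proportionally small changes in both memberships, which is the smoothness the braking application needs.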
At any sampled time, the "truth value" of the brake temperature will almost always belong, to some degree, to two membership functions at once: e.g., '0.6 nominal and 0.4 warm', or '0.7 nominal and 0.3 cool', and so on. The above example demonstrates a simple application in which crisp values are abstracted into overlapping fuzzy states; it covers only one kind of data, in this case temperature. The braking system could be made more sophisticated by adding further factors such as traction, speed, and inertia, each set up in its own membership functions according to the designed fuzzy system. == Logical interpretation of fuzzy control == Despite appearances, it is difficult to give a rigorous logical interpretation of the IF-THEN rules. As an example, interpret the rule IF (temperature is "cold") THEN (heater is "high") as the first-order formula Cold(x)→High(y), and assume that r is an input such that Cold(r) is false. Then the formula Cold(r)→High(t) is true for any t, and therefore any t gives a correct control given r. A rigorous logical justification of fuzzy control is given in Hájek's book (see Chapter 7), where fuzzy control is represented as a theory of Hájek's basic logic. In Gerla 2005 another logical approach to fuzzy control is proposed, based on fuzzy logic programming: denote by f the fuzzy function arising from an IF-THEN system of rules. Then this system can be translated into a fuzzy program P containing a series of rules whose head is "Good(x,y)". The interpretation of this predicate in the least fuzzy Herbrand model of P coincides with f. This gives further useful tools for fuzzy control. == Fuzzy qualitative simulation == Before an artificial intelligence system can plan an action sequence, some kind of model is needed. For video games, the model is simply the game rules.
From the programming perspective, the game rules are implemented as a physics engine that accepts an action from a player and determines whether the action is valid. After the action is executed, the game is in a follow-up state. If the aim is not only to play mathematical games but to determine actions for real-world applications, the most obvious bottleneck is that no game rules are available. The first step is therefore to model the domain. System identification can be realized with precise mathematical equations or with fuzzy rules. Using fuzzy logic and ANFIS (adaptive network-based fuzzy inference system) models to create the forward model for a domain has notable drawbacks. A qualitative simulation cannot determine the exact follow-up state; the system can only estimate what will happen if the action is taken. Fuzzy qualitative simulation cannot predict exact numerical values; instead it uses imprecise natural-language categories to speculate about the future. It takes the current situation plus past actions and generates the expected follow-up state of the game. The output of the ANFIS system is not exact information but a fuzzy set notation, for example [0, 0.2, 0.4, 0], and converting this notation back into numerical values loses further accuracy. This makes fuzzy qualitative simulation a poor choice for many practical applications. == Applications == Fuzzy control systems are suitable when the process complexity is high, including uncertainty and nonlinear behavior, and no precise mathematical models are available. Successful applications of fuzzy control systems have been reported worldwide, mainly in Japan, with pioneering solutions since the 1980s. Some applications reported in the literature are: Air conditioners Automatic focus systems in cameras Domestic appliances (refrigerators, washing machines...)
Control and optimization of industrial processes and systems Writing systems Fuel efficiency in engines Environment Expert systems Decision trees Robotics Autonomous vehicles == See also == Dynamic logic Bayesian inference Function approximation Fuzzy concept Fuzzy markup language Hysteresis Neuro-fuzzy Fuzzy control language Type-2 fuzzy sets and systems == References == == Further reading == Kevin M. Passino and Stephen Yurkovich, Fuzzy Control, Addison Wesley Longman, Menlo Park, CA, 1998 (522 pages) Archived 2008-12-15 at the Wayback Machine Kazuo Tanaka; Hua O. Wang (2001). Fuzzy control systems design and analysis: a linear matrix inequality approach. John Wiley and Sons. ISBN 978-0-471-32324-2. Cox, E. (Oct. 1992). Fuzzy fundamentals. IEEE Spectrum, 29:10. pp. 58–61. Cox, E. (Feb. 1993). Adaptive fuzzy systems. IEEE Spectrum, 30:2. pp. 7–31. Jan Jantzen, "Tuning Of Fuzzy PID Controllers", Technical University of Denmark, report 98-H 871, September 30, 1998. [1] Jan Jantzen, Foundations of Fuzzy Control. Wiley, 2007 (209 pages) (Table of contents) Computational Intelligence: A Methodological Introduction by Kruse, Borgelt, Klawonn, Moewes, Steinbrecher, Held, 2013, Springer, ISBN 9781447150121 == External links == Robert Babuska and Ebrahim Mamdani, ed. (2008). "Fuzzy control". Scholarpedia. Retrieved 31 December 2022. Introduction to Fuzzy Control Archived 2010-08-05 at the Wayback Machine Fuzzy Logic in Embedded Microcomputers and Control Systems IEC 1131-7 CD1 Archived 2021-03-04 at the Wayback Machine IEC 1131-7 CD1 PDF Online interactive demonstration of a system with 3 fuzzy rules Data driven fuzzy systems
Wikipedia/Fuzzy_control_system
A control valve is a valve used to control fluid flow by varying the size of the flow passage as directed by a signal from a controller. This enables the direct control of flow rate and the consequential control of process quantities such as pressure, temperature, and liquid level. In automatic control terminology, a control valve is termed a "final control element". == Operation == The opening or closing of automatic control valves is usually done by electrical, hydraulic or pneumatic actuators. Normally with a modulating valve, which can be set to any position between fully open and fully closed, valve positioners are used to ensure the valve attains the desired degree of opening. Air-actuated valves are commonly used because of their simplicity, as they only require a compressed air supply, whereas electrically operated valves require additional cabling and switchgear, and hydraulically actuated valves require high-pressure supply and return lines for the hydraulic fluid. The pneumatic control signals are traditionally based on a pressure range of 3–15 psi (0.2–1.0 bar), or more commonly now, an electrical signal of 4–20 mA for industry, or 0–10 V for HVAC systems. Electrical control now often includes a "smart" communication signal superimposed on the 4–20 mA control current, such that the health and verification of the valve position can be signalled back to the controller; HART, Foundation Fieldbus, and Profibus are the most common protocols. An automatic control valve consists of three main parts, each of which exists in several types and designs: Valve actuator – which moves the valve's modulating element, such as a ball or butterfly. Valve positioner – which ensures the valve has reached the desired degree of opening. This overcomes the problems of friction and wear. Valve body – in which the modulating element, a plug, globe, ball or butterfly, is contained.
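The 4–20 mA signalling described above maps linearly onto valve opening. A minimal sketch of that scaling (the function name and the clamping behavior are assumptions for illustration, not a standard from the text):

```python
# Linear scaling from a 4-20 mA control signal to percent valve opening,
# assuming an "air/current to open" convention. Signals below 4 mA or
# above 20 mA are clamped to the 0-100 % range.
def current_to_position(ma, lo=4.0, hi=20.0):
    """Map a lo-hi mA signal to 0-100 % opening, clamped to the range."""
    pct = (ma - lo) / (hi - lo) * 100.0
    return max(0.0, min(100.0, pct))

print(current_to_position(12.0))  # mid-scale signal -> 50.0 % open
```

The live-zero at 4 mA is what lets the controller distinguish a fully closed command from a broken signal wire (0 mA).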
== Control action == Taking the example of an air-operated valve, there are two control actions possible: "Air or current to open" – The flow restriction decreases with increased control signal value. "Air or current to close" – The flow restriction increases with increased control signal value. There can also be failure to safety modes: "Air or control signal failure to close" – On failure of compressed air to the actuator, the valve closes under spring pressure or by backup power. "Air or control signal failure to open" – On failure of compressed air to the actuator, the valve opens under spring pressure or by backup power. The modes of failure operation are requirements of the failure to safety process control specification of the plant. In the case of cooling water it may be to fail open, and in the case of delivering a chemical it may be to fail closed. == Valve positioners == The fundamental function of a positioner is to deliver pressurized air to the valve actuator, such that the position of the valve stem or shaft corresponds to the set point from the control system. Positioners are typically used when a valve requires throttling action. A positioner requires position feedback from the valve stem or shaft and delivers pneumatic pressure to the actuator to open and close the valve. The positioner must be mounted on or near the control valve assembly. There are three main categories of positioners, depending on the type of control signal, the diagnostic capability, and the communication protocol: pneumatic, analog, and digital. === Pneumatic positioners === Processing units may use pneumatic pressure signaling as the control set point to the control valves. Pressure is typically modulated between 20.7 and 103 kPa (3 to 15 psig) to move the valve from 0 to 100% position. In a common pneumatic positioner, the position of the valve stem or shaft is compared with the position of a bellows that receives the pneumatic control signal.
When the input signal increases, the bellows expands and moves a beam. The beam pivots about an input axis, which moves a flapper closer to the nozzle. The nozzle pressure increases, which increases the output pressure to the actuator through a pneumatic amplifier relay. The increased output pressure to the actuator causes the valve stem to move. Stem movement is fed back to the beam by means of a cam. As the cam rotates, the beam pivots about the feedback axis to move the flapper slightly away from the nozzle. The nozzle pressure decreases and reduces the output pressure to the actuator. Stem movement continues, backing the flapper away from the nozzle until equilibrium is reached. When the input signal decreases, the bellows contracts (aided by an internal range spring) and the beam pivots about the input axis to move the flapper away from the nozzle. Nozzle pressure decreases and the relay permits the release of diaphragm casing pressure to the atmosphere, which allows the actuator stem to move upward. Through the cam, stem movement is fed back to the beam to reposition the flapper closer to the nozzle. When equilibrium conditions are obtained, stem movement stops and the flapper is positioned to prevent any further decrease in actuator pressure.
Otherwise, the design is the same as the pneumatic positioner. === Digital positioners === While pneumatic positioners and analog I/P positioners provide basic valve position control, digital valve controllers add another dimension to positioner capabilities. This type of positioner is a microprocessor-based instrument. The microprocessor enables diagnostics and two-way communication to simplify setup and troubleshooting. In a typical digital valve controller, the control signal is read by the microprocessor, processed by a digital algorithm, and converted into a drive current signal to the I/P converter. The microprocessor performs the position control algorithm rather than a mechanical beam, cam, and flapper assembly. As the control signal increases, the drive signal to the I/P converter increases, increasing the output pressure from the I/P converter. This pressure is routed to a pneumatic amplifier relay and provides two output pressures to the actuator. With increasing control signal, one output pressure always increases and the other output pressure decreases. Double-acting actuators use both outputs, whereas single-acting actuators use only one output. The changing output pressure causes the actuator stem or shaft to move. Valve position is fed back to the microprocessor. The stem continues to move until the correct position is attained. At this point, the microprocessor stabilizes the drive signal to the I/P converter until equilibrium is obtained. In addition to the function of controlling the position of the valve, a digital valve controller has two additional capabilities: diagnostics and two-way digital communication. Widely used communication protocols include HART, FOUNDATION fieldbus, and PROFIBUS. Advantages of placing a smart positioner on a control valve: Automatic calibration and configuration of positioner. Real time diagnostics. Reduced cost of loop commissioning, including installation and calibration.
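The read-signal, compute, drive-I/P loop of a digital positioner can be caricatured in a few lines. This is a toy sketch only: the gains, the simple proportional algorithm, and the first-order actuator dynamics are all invented, and real controllers use more elaborate algorithms.

```python
# Toy digital-positioner loop: adjust the I/P drive signal from the
# position error until measured position matches the setpoint.
def positioner_step(setpoint, position, drive, kp=0.5):
    """One control iteration: accumulate drive in proportion to error."""
    error = setpoint - position
    return drive + kp * error

position, drive = 0.0, 0.0
for _ in range(100):
    drive = positioner_step(75.0, position, drive)
    position += 0.2 * (drive - position)   # invented actuator lag model
print(abs(position - 75.0) < 0.5)  # True: settles near the 75 % setpoint
```

The feedback structure, not the particular gains, is the point: position is measured, compared with the set point, and the drive to the I/P converter is corrected until equilibrium.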
Use of diagnostics to maintain loop performance levels. Improved process control accuracy that reduces process variability. == Types of control valve == Control valves are classified by attributes and features. === Based on the pressure drop profile === High recovery valve: These valves recover, by the outlet, most of the static pressure drop that occurs between the inlet and the vena contracta. They are characterised by a lower recovery coefficient. Examples: butterfly valve, ball valve, plug valve, gate valve Low recovery valve: These valves recover, by the outlet, little of the static pressure drop that occurs between the inlet and the vena contracta. They are characterised by a higher recovery coefficient. Examples: globe valve, angle valve === Based on the movement profile of the controlling element === Sliding stem: The valve stem / plug moves in a linear (straight-line) motion. Examples: Globe valve, angle valve, wedge type gate valve Rotary valve: The valve disc rotates. Examples: Butterfly valve, ball valve === Based on the functionality === Control valve: Controls flow parameters proportional to an input signal received from the central control system. Examples: Globe valve, angle valve, ball valve Shut-off / On-off valve: These valves are either completely open or closed. Examples: Gate valve, ball valve, globe valve, angle valve, pinch valve, diaphragm valve Check valve: Allows flow only in a single direction Steam conditioning valve: Regulates the pressure and temperature of inlet media to required parameters at outlet.
Examples: Turbine bypass valve, process steam letdown station Spring-loaded safety valve: Closed by the force of a spring, which retracts to open when the inlet pressure is equal to the spring force === Based on the actuating medium === Manual valve: Actuated by hand wheel Pneumatic valve: Actuated using a compressible medium like air, hydrocarbon, or nitrogen, with a spring diaphragm, piston cylinder or piston-spring type actuator Hydraulic valve: Actuated by a non-compressible medium such as water or oil Electric valve: Actuated by an electric motor A wide variety of valve types and control operation exist. However, there are two main forms of action, the sliding stem and the rotary. The most common and versatile types of control valves are sliding-stem globe, V-notch ball, butterfly and angle types. Their popularity derives from rugged construction and the many options available that make them suitable for a variety of process applications. Control valve bodies may be categorized as below: === List of common types of control valve === Sliding stem Globe valve – Flow control device Angle body valve Angle seat piston valve Axial Flow valve Rotary Butterfly valve – Flow control device Ball valve – Flow control device Other Pinch valve – Valve closed by squeezing a tube Diaphragm valve – Flow control device == See also == == References == == External links == [1] Control Valve Handbook [2] Fluid Control Research Institute [3] Valve World Magazine [4] New era of valve design and engineering [5] Machine learning based Valve Design Application
Wikipedia/Control_valve
In Diophantine approximation, a subfield of number theory, the Oppenheim conjecture concerns representations of numbers by real quadratic forms in several variables. It was formulated in 1929 by Alexander Oppenheim, and later the conjectured property was further strengthened by Harold Davenport and Oppenheim. Initial research on this problem took the number n of variables to be large, and applied a version of the Hardy–Littlewood circle method. The definitive work of Margulis, settling the conjecture in the affirmative, used methods arising from ergodic theory and the study of discrete subgroups of semisimple Lie groups. == Overview == Meyer's theorem states that an indefinite integral quadratic form Q in n variables, n ≥ 5, nontrivially represents zero, i.e. there exists a non-zero vector x with integer components such that Q(x) = 0. The Oppenheim conjecture can be viewed as an analogue of this statement for forms Q that are not multiples of a rational form. It states that in this case, the set of values of Q on integer vectors is a dense subset of the real line. == History == Several versions of the conjecture were formulated by Oppenheim and Harold Davenport. Let Q be a real nondegenerate indefinite quadratic form in n variables. Suppose that n ≥ 3 and Q is not a multiple of a form with rational coefficients. Then for any ε > 0 there exists a non-zero vector x with integer components such that |Q(x)| < ε. For n ≥ 5 this was conjectured by Oppenheim in 1929; the stronger version is due to Davenport in 1946. Let Q and n have the same meaning as before. Then for any ε > 0 there exists a non-zero vector x with integer components such that 0 < |Q(x)| < ε. This was conjectured by Oppenheim in 1953 and proved by Birch, Davenport, and Ridout for n at least 21, and by Davenport and Heilbronn for diagonal forms in five variables.
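The density statement can be made concrete by a brute-force search. The form Q(x, y, z) = x² + y² − √2·z² below is an assumed example of an irrational indefinite form (it is not taken from the sources cited here), and the search merely illustrates small nonzero values; it proves nothing.

```python
# Brute-force illustration: an indefinite quadratic form with an
# irrational coefficient takes values close to 0 on nonzero integer
# vectors. The particular form is an invented example.
import math

def Q(x, y, z):
    return x * x + y * y - math.sqrt(2) * z * z

best = min(
    (abs(Q(x, y, z)), (x, y, z))
    for x in range(0, 31)
    for y in range(0, 31)
    for z in range(1, 31)
)
print(best)  # smallest |Q| found and the vector attaining it
```

Enlarging the search range produces ever smaller values of |Q|, in line with the conjecture (now Margulis's theorem) that the values are dense in the real line.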
Other partial results are due to Oppenheim (for forms in four variables, but under the strong restriction that the form represents zero over Z), Watson, Iwaniec, and Baker–Schlickewei. Early work used analytic number theory and the reduction theory of quadratic forms. The conjecture was proved in 1987 by Margulis in complete generality using methods of ergodic theory. The geometry of actions of certain unipotent subgroups of the orthogonal group on the homogeneous space of the lattices in R3 plays a decisive role in this approach. It is sufficient to establish the case n = 3. The idea to derive the Oppenheim conjecture from a statement about homogeneous group actions is usually attributed to M. S. Raghunathan, who observed in the 1970s that the conjecture for n = 3 is equivalent to the following property of the space of lattices: Any relatively compact orbit of SO(2, 1) in SL(3, R)/SL(3, Z) is compact. However, Margulis later remarked that in an implicit form this equivalence occurred already in a 1955 paper of Cassels and H. P. F. Swinnerton-Dyer, albeit in a different language. Shortly after Margulis's breakthrough, the proof was simplified and generalized by Dani and Margulis. Quantitative versions of the Oppenheim conjecture were later proved by Eskin–Margulis–Mozes. Borel and Prasad established some S-arithmetic analogues. The study of the properties of unipotent and quasiunipotent flows on homogeneous spaces remains an active area of research, with applications to further questions in the theory of Diophantine approximation. == See also == Ratner's theorems == References == Borel, Armand (1995). "Values of indefinite quadratic forms at integral points and flows on spaces of lattices". Bull. Amer. Math. Soc. 32 (2): 184–204. arXiv:math/9504223. Bibcode:1995math......4223B. doi:10.1090/S0273-0979-1995-00587-2. MR 1302785. S2CID 17947810. Davenport, Harold (2005) [1963]. T. D. Browning (ed.). Analytic methods for Diophantine equations and Diophantine inequalities.
Cambridge Mathematical Library. With a preface by R. C. Vaughan, D. R. Heath-Brown and D. E. Freeman (2nd ed.). Cambridge University Press. ISBN 0-521-60583-0. MR 2152164. Zbl 1125.11018. Margulis, Grigory (1997). "Oppenheim conjecture". In Atiyah, Michael; Iagolnitzer, Daniel (eds.). Fields Medallists' lectures. World Scientific Series in 20th Century Mathematics. Vol. 5. River Edge, NJ: World Scientific Publishing Co, Inc. pp. 272–327. doi:10.1142/9789812385215_0035. ISBN 981-02-3117-2. MR 1622909. Oppenheim, Alexander (1929). "The minima of indefinite quaternary quadratic forms". Proc. Natl. Acad. Sci. U.S.A. 15 (9): 724–727. Bibcode:1929PNAS...15..724O. doi:10.1073/pnas.15.9.724. PMC 522544. PMID 16577226.
Wikipedia/Oppenheim_conjecture
In number theory, specifically the study of Diophantine approximation, the lonely runner conjecture is a conjecture about the long-term behavior of runners on a circular track. It states that n {\displaystyle n} runners on a track of unit length, with constant speeds all distinct from one another, will each be lonely at some time—at least 1 / n {\displaystyle 1/n} units away from all others. The conjecture was first posed in 1967 by German mathematician Jörg M. Wills, in purely number-theoretic terms, and independently in 1974 by T. W. Cusick; its illustrative and now-popular formulation dates to 1998. The conjecture is known to be true for seven runners or fewer, but the general case remains unsolved. Implications of the conjecture include solutions to view-obstruction problems and bounds on properties, related to chromatic numbers, of certain graphs. == Formulation == Consider n {\displaystyle n} runners on a circular track of unit length. At the initial time t = 0 {\displaystyle t=0} , all runners are at the same position and start to run; the runners' speeds are constant, all distinct, and may be negative. A runner is said to be lonely at time t {\displaystyle t} if they are at a distance (measured along the circle) of at least 1 / n {\displaystyle 1/n} from every other runner. The lonely runner conjecture states that each runner is lonely at some time, no matter the choice of speeds. This visual formulation of the conjecture was first published in 1998. In many formulations, including the original by Jörg M. Wills, some simplifications are made. The runner to be lonely is stationary at 0 (with zero speed), and therefore n − 1 {\displaystyle n-1} other runners, with nonzero speeds, are considered. The moving runners may be further restricted to positive speeds only: by symmetry, runners with speeds x {\displaystyle x} and − x {\displaystyle -x} have the same distance from 0 at all times, and so are essentially equivalent. 
Proving the result for any stationary runner implies the general result for all runners, since they can be made stationary by subtracting their speed from all runners, leaving them with zero speed. The conjecture then states that, for any collection v 1 , v 2 , … , v n − 1 {\displaystyle v_{1},v_{2},\dots ,v_{n-1}} of positive, distinct speeds, there exists some time t > 0 {\displaystyle t>0} such that 1 n ≤ frac ⁡ ( v i t ) ≤ 1 − 1 n ( i = 1 , … , n − 1 ) , {\displaystyle {\frac {1}{n}}\leq \operatorname {frac} (v_{i}t)\leq 1-{\frac {1}{n}}\qquad (i=1,\dots ,n-1),} where frac ⁡ ( x ) {\displaystyle \operatorname {frac} (x)} denotes the fractional part of x {\displaystyle x} . Interpreted visually, if the runners are running counterclockwise, the middle term of the inequality is the distance from the origin to the i {\displaystyle i} th runner at time t {\displaystyle t} , measured counterclockwise. This convention is used for the rest of this article. Wills' conjecture was part of his work in Diophantine approximation, the study of how closely fractions can approximate irrational numbers. == Implications == Suppose C {\displaystyle C} is a n-hypercube of side length s {\displaystyle s} in n-dimensional space ( n ≥ 2 {\displaystyle n\geq 2} ). Place a centered copy of C {\displaystyle C} at every point with half-integer coordinates. A ray from the origin may either miss all of the copies of C {\displaystyle C} , in which case there is a (infinitesimal) gap, or hit at least one copy. Cusick (1973) made an independent formulation of the lonely runner conjecture in this context; the conjecture implies that there are gaps if and only if s < ( n − 1 ) / ( n + 1 ) {\displaystyle s<(n-1)/(n+1)} , ignoring rays lying in one of the coordinate hyperplanes. 
For example, placed in 2-dimensional space, squares any smaller than 1 / 3 {\displaystyle 1/3} in side length will leave gaps, as shown, and squares with side length 1 / 3 {\displaystyle 1/3} or greater will obstruct every ray that is not parallel to an axis. The conjecture generalizes this observation into any number of dimensions. In graph theory, a distance graph G {\displaystyle G} on the set of integers, and using some finite set D {\displaystyle D} of positive integer distances, has an edge between x , y {\displaystyle x,y} if and only if | x − y | ∈ D {\displaystyle |x-y|\in D} . For example, if D = { 2 } {\displaystyle D=\{2\}} , every consecutive pair of even integers, and of odd integers, is adjacent, all together forming two connected components. A k-regular coloring of the integers with step λ ∈ R {\displaystyle \lambda \in \mathbb {R} } assigns to each integer n {\displaystyle n} one of k {\displaystyle k} colors based on the residue of ⌊ λ n ⌋ {\displaystyle \lfloor \lambda n\rfloor } modulo k {\displaystyle k} . For example, if λ = 0.5 {\displaystyle \lambda =0.5} , the coloring repeats every 2 k {\displaystyle 2k} integers and each pair of integers 2 m , 2 m + 1 {\displaystyle 2m,2m+1} are the same color. Taking k = | D | + 1 {\displaystyle k=|D|+1} , the lonely runner conjecture implies G {\displaystyle G} admits a proper k-regular coloring (i.e., each node is colored differently than its adjacencies) for some step value. For example, ( k , λ ) = ( 2 , 0.5 ) {\displaystyle (k,\lambda )=(2,0.5)} generates a proper coloring on the distance graph generated by D = { 2 } {\displaystyle D=\{2\}} . ( k {\displaystyle k} is known as the regular chromatic number of D {\displaystyle D} .) Given a directed graph G {\displaystyle G} , a nowhere-zero flow on G {\displaystyle G} associates a positive value f ( e ) {\displaystyle f(e)} to each edge e {\displaystyle e} , such that the flow outward from each node is equal to the flow inward. 
The lonely runner conjecture implies that, if G {\displaystyle G} has a nowhere-zero flow with at most k {\displaystyle k} distinct integer values, then G {\displaystyle G} has a nowhere-zero flow with values only in { 1 , 2 , … , k } {\displaystyle \{1,2,\ldots ,k\}} (possibly after reversing the directions of some arcs of G {\displaystyle G} ). This result was proven for k ≥ 5 {\displaystyle k\geq 5} with separate methods, and because the smaller cases of the lonely runner conjecture are settled, the full theorem is proven. == Known results == For a given setup of runners, let δ {\displaystyle \delta } denote the smallest of the runners' maximum distances of loneliness, and the gap of loneliness δ n {\displaystyle \delta _{n}} denote the minimum δ {\displaystyle \delta } across all setups with n {\displaystyle n} runners. In this notation, the conjecture asserts that δ n ≥ 1 / n {\displaystyle \delta _{n}\geq 1/n} , a bound which, if correct, cannot be improved. For example, if the runner to be lonely is stationary and speeds v i = i {\displaystyle v_{i}=i} are chosen, then there is no time at which they are strictly more than 1 / n {\displaystyle 1/n} units away from all others, showing that δ n ≤ 1 / n {\displaystyle \delta _{n}\leq 1/n} . Alternatively, this conclusion can be quickly derived from the Dirichlet approximation theorem. For n ≥ 2 {\displaystyle n\geq 2} a simple lower bound δ n ≥ 1 / ( 2 n − 2 ) {\displaystyle \delta _{n}\geq 1/(2n-2)} may be obtained via a probability argument. The conjecture can be reduced to restricting the runners' speeds to positive integers: If the conjecture is true for n {\displaystyle n} runners with integer speeds, it is true for n {\displaystyle n} runners with real speeds. === Tighter bounds === Slight improvements on the lower bound 1 / ( 2 n − 2 ) {\displaystyle 1/(2n-2)} are known. 
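The tightness example above, a stationary runner against speeds v_i = i, can be checked numerically. A rough sampling sketch (function names and the sampling resolution are invented; for integer speeds the configuration is periodic with period 1, so sampling t in (0, 1) suffices):

```python
# Estimate the stationary runner's maximum loneliness for given positive
# integer speeds by sampling times in (0, 1).
def frac(x):
    return x - int(x)

def max_loneliness(speeds, samples=20000):
    """Max over sampled t of the min circular distance from 0 to any runner."""
    best = 0.0
    for k in range(1, samples):
        t = k / samples
        d = min(min(frac(v * t), 1 - frac(v * t)) for v in speeds)
        best = max(best, d)
    return best

# With n = 4 runners (speeds 0, 1, 2, 3) the bound 1/n = 0.25 is attained
# but never exceeded, matching delta_n <= 1/n:
print(max_loneliness([1, 2, 3]))  # 0.25, at t = 1/4 and t = 3/4
```

Sampling cannot prove the conjecture, of course; it only illustrates why the bound 1/n cannot be improved for these speeds.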
Chen & Cusick (1999) showed for n ≥ 5 {\displaystyle n\geq 5} that if 2 n − 5 {\displaystyle 2n-5} is prime, then δ n ≥ 1 2 n − 5 {\displaystyle \delta _{n}\geq {\tfrac {1}{2n-5}}} , and if 4 n − 9 {\displaystyle 4n-9} is prime, then δ n ≥ 2 4 n − 9 {\displaystyle \delta _{n}\geq {\tfrac {2}{4n-9}}} . Perarnau & Serra (2016) showed unconditionally for sufficiently large n {\displaystyle n} that δ n ≥ 1 2 n − 4 + o ( 1 ) . {\displaystyle \delta _{n}\geq {\frac {1}{2n-4+o(1)}}.} Tao (2018) proved the current best known asymptotic result: for sufficiently large n {\displaystyle n} , δ n ≥ 1 2 n − 2 + c log ⁡ n n 2 ( log ⁡ log ⁡ n ) 2 {\displaystyle \delta _{n}\geq {\frac {1}{2n-2}}+{\frac {c\log n}{n^{2}(\log \log n)^{2}}}} for some constant c > 0 {\displaystyle c>0} . He also showed that the full conjecture is implied by proving the conjecture for integer speeds of size n O ( n 2 ) {\displaystyle n^{O(n^{2})}} (see big O notation). This implication theoretically allows proving the conjecture for a given n {\displaystyle n} by checking a finite set of cases, but the number of cases grows too quickly to be practical. The conjecture has been proven under specific assumptions on the runners' speeds. For sufficiently large n {\displaystyle n} , it holds true if v i + 1 v i ≥ 1 + 22 log ⁡ ( n − 1 ) n − 1 ( i = 1 , … , n − 2 ) . {\displaystyle {\frac {v_{i+1}}{v_{i}}}\geq 1+{\frac {22\log(n-1)}{n-1}}\qquad (i=1,\dots ,n-2).} In other words, the conjecture holds true for large n {\displaystyle n} if the speeds grow quickly enough. If the constant 22 is replaced with 33, then the conjecture holds true for n ≥ 16343 {\displaystyle n\geq 16343} . A similar result for sufficiently large n {\displaystyle n} only requires a similar assumption for i = ⌊ n / 22 ⌋ − 1 , … , n − 2 {\displaystyle i=\lfloor n/22\rfloor -1,\dots ,n-2} . Unconditionally on n {\displaystyle n} , the conjecture is true if v i + 1 / v i ≥ 2 {\displaystyle v_{i+1}/v_{i}\geq 2} for all i {\displaystyle i} . 
=== For specific n === The conjecture is true for n ≤ 7 {\displaystyle n\leq 7} runners. The proofs for n ≤ 3 {\displaystyle n\leq 3} are elementary; the n = 4 {\displaystyle n=4} case was established in 1972. The n = 5 {\displaystyle n=5} , n = 6 {\displaystyle n=6} , and n = 7 {\displaystyle n=7} cases were settled in 1984, 2001 and 2008, respectively. The first proof for n = 5 {\displaystyle n=5} was computer-assisted, but all cases have since been proved with elementary methods. For some n {\displaystyle n} , there exist sporadic examples with a maximum separation of 1 / n {\displaystyle 1/n} besides the example of v i = i {\displaystyle v_{i}=i} given above. For n = 5 {\displaystyle n=5} , the only known example (up to shifts and scaling) is { 0 , 1 , 3 , 4 , 7 } {\displaystyle \{0,1,3,4,7\}} ; for n = 6 {\displaystyle n=6} the only known example is { 0 , 1 , 3 , 4 , 5 , 9 } {\displaystyle \{0,1,3,4,5,9\}} ; and for n = 8 {\displaystyle n=8} the known examples are { 0 , 1 , 4 , 5 , 6 , 7 , 11 , 13 } {\displaystyle \{0,1,4,5,6,7,11,13\}} and { 0 , 1 , 2 , 3 , 4 , 5 , 7 , 12 } {\displaystyle \{0,1,2,3,4,5,7,12\}} . There exists an explicit infinite family of such sporadic cases. Kravitz (2021) formulated a sharper version of the conjecture that addresses near-equality cases. More specifically, he conjectures that for a given set of speeds v i {\displaystyle v_{i}} , either δ = s / ( s ( n − 1 ) + 1 ) {\displaystyle \delta =s/(s(n-1)+1)} for some positive integer s {\displaystyle s} , or δ ≥ 1 / ( n − 1 ) {\displaystyle \delta \geq 1/(n-1)} , where δ {\displaystyle \delta } is that setup's gap of loneliness. He confirmed this conjecture for n ≤ 4 {\displaystyle n\leq 4} and a few special cases. Rifford (2022) addressed the question of the size of the time required for a runner to get lonely. 
He formulated a stronger conjecture stating that for every integer n ≥ 3 {\displaystyle n\geq 3} there is a positive integer N {\displaystyle N} such that for any collection v 1 , v 2 , … , v n − 1 {\displaystyle v_{1},v_{2},\dots ,v_{n-1}} of positive, distinct speeds, there exists some time t > 0 {\displaystyle t>0} such that frac ⁡ ( v i t ) ∈ [ 1 / n , 1 − 1 / n ] {\displaystyle \operatorname {frac} (v_{i}t)\in [1/n,1-1/n]} for i = 1 , … , n − 1 {\displaystyle i=1,\dots ,n-1} with t ≤ N min ⁡ ( v 1 , … , v n − 1 ) . {\displaystyle t\leq {\frac {N}{\operatorname {min} (v_{1},\dots ,v_{n-1})}}.} Rifford confirmed this conjecture for n = 3 , 4 , 5 , 6 {\displaystyle n=3,4,5,6} and showed that the minimal N {\displaystyle N} in each case is given by N = 1 {\displaystyle N=1} for n = 3 , 4 , 5 {\displaystyle n=3,4,5} and N = 2 {\displaystyle N=2} for n = 6 {\displaystyle n=6} . The latter result ( N = 2 {\displaystyle N=2} for n = 6 {\displaystyle n=6} ) shows that if we consider six runners starting from 0 {\displaystyle 0} at time t = 0 {\displaystyle t=0} with constant speeds v 0 , v 1 , … , v 5 {\displaystyle v_{0},v_{1},\dots ,v_{5}} with v 0 = 0 {\displaystyle v_{0}=0} and v 1 , … , v 5 {\displaystyle v_{1},\dots ,v_{5}} distinct and positive, then the static runner is separated by a distance of at least 1 / 6 {\displaystyle 1/6} from the others during the first two rounds of the slowest non-static runner (but not necessarily during the first round). === Other results === A much stronger result exists for randomly chosen speeds: using the stationary-runner convention, if n {\displaystyle n} and ε > 0 {\displaystyle \varepsilon >0} are fixed and n − 1 {\displaystyle n-1} runners with nonzero speeds are chosen uniformly at random from { 1 , 2 , … , k } {\displaystyle \{1,2,\ldots ,k\}} , then P ( δ ≥ 1 / 2 − ε ) → 1 {\displaystyle P(\delta \geq 1/2-\varepsilon )\to 1} as k → ∞ {\displaystyle k\to \infty } .
In other words, runners with random speeds are likely at some point to be "very lonely"—nearly 1 / 2 {\displaystyle 1/2} units from the nearest other runner. The full conjecture is true if "loneliness" is replaced with "almost aloneness", meaning at most one other runner is within 1 / n {\displaystyle 1/n} of a given runner. The conjecture has been generalized to an analog in algebraic function fields. == Notes and references == === Notes === === Citations === === Works cited === == External links == Article in the Open Problem Garden
Wikipedia/Lonely_runner_conjecture
In mathematics, the Littlewood conjecture is an open problem (as of April 2024) in Diophantine approximation, proposed by John Edensor Littlewood around 1930. It states that for any two real numbers α and β, lim inf n → ∞ n ‖ n α ‖ ‖ n β ‖ = 0 , {\displaystyle \liminf _{n\to \infty }\ n\,\Vert n\alpha \Vert \,\Vert n\beta \Vert =0,} where ‖ x ‖ := min ( | x − ⌊ x ⌋ | , | x − ⌈ x ⌉ | ) {\displaystyle \Vert x\Vert :=\min(|x-\lfloor x\rfloor |,|x-\lceil x\rceil |)} is the distance to the nearest integer. == Formulation and explanation == This means the following: take a point (α, β) in the plane, and then consider the sequence of points (2α, 2β), (3α, 3β), ... . For each of these, multiply the distance to the closest line with integer x-coordinate by the distance to the closest line with integer y-coordinate. This product will certainly be at most 1/4. The conjecture makes no statement about whether this sequence of values will converge; it typically does not, in fact. The conjecture states something about the limit inferior, and says that there is a subsequence for which the distances decay faster than the reciprocal, i.e. o(1/n) in the little-o notation. == Connection to further conjectures == It is known that this would follow from a result in the geometry of numbers, about the minimum on a non-zero lattice point of a product of three linear forms in three real variables: the implication was shown in 1955 by Cassels and Swinnerton-Dyer. This can be formulated another way, in group-theoretic terms. There is now another conjecture, expected to hold for n ≥ 3: it is stated in terms of G = SLn(R), Γ = SLn(Z), and the subgroup D of diagonal matrices in G. Conjecture: for any g in G/Γ such that Dg is relatively compact (in G/Γ), then Dg is closed. This in turn is a special case of a general conjecture of Margulis on Lie groups. 
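The quantity in the conjecture can be probed numerically. The sketch below (purely illustrative; it proves nothing about the liminf) tracks the running minimum of n‖nα‖‖nβ‖ for α = √2, β = √3:

```python
import math

def dist_to_int(x):
    """||x||: distance from x to the nearest integer."""
    return abs(x - round(x))

def littlewood_running_min(alpha, beta, n_max):
    """Running minimum of n * ||n alpha|| * ||n beta|| for n = 1 .. n_max."""
    best = float("inf")
    mins = []
    for n in range(1, n_max + 1):
        best = min(best, n * dist_to_int(n * alpha) * dist_to_int(n * beta))
        mins.append(best)
    return mins

mins = littlewood_running_min(math.sqrt(2), math.sqrt(3), 100_000)
print(mins[0], mins[-1])  # the running minimum only ever decreases
```

Since the conjecture concerns the limit inferior, only the running minimum of the sequence is tracked; for any finite range of n this can only suggest, never establish, the conjectured decay to zero.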
== Partial results == Borel showed in 1909 that the exceptional set of real pairs (α,β) violating the statement of the conjecture is of Lebesgue measure zero. Manfred Einsiedler, Anatole Katok and Elon Lindenstrauss have shown that it must have Hausdorff dimension zero; and in fact is a union of countably many compact sets of box-counting dimension zero. The result was proved by using a measure classification theorem for diagonalizable actions of higher-rank groups, and an isolation theorem proved by Lindenstrauss and Barak Weiss. These results imply that non-trivial pairs satisfying the conjecture exist: indeed, given a real number α such that inf n ≥ 1 n ⋅ | | n α | | > 0 {\displaystyle \inf _{n\geq 1}n\cdot ||n\alpha ||>0} , it is possible to construct an explicit β such that (α,β) satisfies the conjecture. == See also == Littlewood polynomial == References == Adamczewski, Boris; Bugeaud, Yann (2010). "8. Transcendence and diophantine approximation". In Berthé, Valérie; Rigo, Michael (eds.). Combinatorics, automata, and number theory. Encyclopedia of Mathematics and its Applications. Vol. 135. Cambridge: Cambridge University Press. pp. 410–451. ISBN 978-0-521-51597-9. Zbl 1271.11073. == Further reading == Akshay Venkatesh (2007-10-29). "The work of Einsiedler, Katok, and Lindenstrauss on the Littlewood conjecture". Bull. Amer. Math. Soc. (N.S.). 45 (1): 117–134. doi:10.1090/S0273-0979-07-01194-9. MR 2358379. Zbl 1194.11075.
Wikipedia/Littlewood_conjecture
In number theory, Hurwitz's theorem, named after Adolf Hurwitz, gives a bound on a Diophantine approximation. The theorem states that for every irrational number ξ there are infinitely many relatively prime integers m, n such that | ξ − m n | < 1 5 n 2 . {\displaystyle \left|\xi -{\frac {m}{n}}\right|<{\frac {1}{{\sqrt {5}}\,n^{2}}}.} The condition that ξ is irrational cannot be omitted. Moreover, the constant 5 {\displaystyle {\sqrt {5}}} is the best possible; if we replace 5 {\displaystyle {\sqrt {5}}} by any number A > 5 {\displaystyle A>{\sqrt {5}}} and we let ξ = ( 1 + 5 ) / 2 {\displaystyle \xi =(1+{\sqrt {5}})/2} (the golden ratio) then there exist only finitely many relatively prime integers m, n such that the formula above holds. The theorem is equivalent to the claim that the Markov constant of every number is larger than 5 {\displaystyle {\sqrt {5}}} . == See also == Dirichlet's approximation theorem Lagrange number == References == Hurwitz, A. (1891). "Ueber die angenäherte Darstellung der Irrationalzahlen durch rationale Brüche" [On the approximate representation of irrational numbers by rational fractions]. Mathematische Annalen (in German). 39 (2): 279–284. doi:10.1007/BF01206656. JFM 23.0222.02. S2CID 119535189. G. H. Hardy, Edward M. Wright, Roger Heath-Brown, Joseph Silverman, Andrew Wiles (2008). "Theorem 193". An introduction to the Theory of Numbers (6th ed.). Oxford science publications. p. 209. ISBN 978-0-19-921986-5.{{cite book}}: CS1 maint: multiple names: authors list (link) LeVeque, William Judson (1956). Topics in number theory. Addison-Wesley Publishing Co., Inc., Reading, Mass. MR 0080682. Ivan Niven (2013). Diophantine Approximations. Courier Corporation. ISBN 978-0486462677.
Wikipedia/Hurwitz's_theorem_(number_theory)
For historical reasons and in order to have application to the solution of Diophantine equations, results in number theory have been scrutinised more than in other branches of mathematics to see if their content is effectively computable. Where it is asserted that some list of integers is finite, the question is whether in principle the list could be printed out after a machine computation. == Littlewood's result == An early example of an ineffective result was J. E. Littlewood's theorem of 1914, that in the prime number theorem the differences of both ψ(x) and π(x) with their asymptotic estimates change sign infinitely often. In 1933 Stanley Skewes obtained an effective upper bound for the first sign change, now known as Skewes' number. In more detail, writing for a numerical sequence f (n), an effective result about its changing sign infinitely often would be a theorem including, for every value of N, a value M > N such that f (N) and f (M) have different signs, and such that M could be computed with specified resources. In practical terms, M would be computed by taking values of n from N onwards, and the question is 'how far must you go?' A special case is to find the first sign change. The interest of the question was that the numerical evidence known showed no change of sign: Littlewood's result guaranteed that this evidence was just a small number effect, but 'small' here included values of n up to a billion. The requirement of computability is reflected in and contrasts with the approach used in the analytic number theory to prove the results. It for example brings into question any use of Landau notation and its implied constants: are assertions pure existence theorems for such constants, or can one recover a version in which 1000 (say) takes the place of the implied constant? 
In other words, if it were known that there was M > N with a change of sign and such that M = O(G(N)) for some explicit function G, say built up from powers, logarithms and exponentials, that means only M < A.G(N) for some absolute constant A. The value of A, the so-called implied constant, may also need to be made explicit, for computational purposes. One reason Landau notation was a popular introduction is that it hides exactly what A is. In some indirect forms of proof it may not be at all obvious that the implied constant can be made explicit. == The 'Siegel period' == Many of the principal results of analytic number theory that were proved in the period 1900–1950 were in fact ineffective. The main examples were: The Thue–Siegel–Roth theorem Siegel's theorem on integral points, from 1929 The 1934 theorem of Hans Heilbronn and Edward Linfoot on the class number 1 problem The 1935 result on the Siegel zero The Siegel–Walfisz theorem based on the Siegel zero. The concrete information that was left theoretically incomplete included lower bounds for class numbers (ideal class groups for some families of number fields grow); and bounds for the best rational approximations to algebraic numbers in terms of denominators. These latter could be read quite directly as results on Diophantine equations, after the work of Axel Thue. The result used for Liouville numbers in the proof is effective in the way it applies the mean value theorem: but improvements (to what is now the Thue–Siegel–Roth theorem) were not. == Later work == Later results, particularly of Alan Baker, changed the position. Qualitatively speaking, Baker's theorems look weaker, but they have explicit constants and can actually be applied, in conjunction with machine computation, to prove that lists of solutions (suspected to be complete) are actually the entire solution set. 
== Theoretical issues == The difficulties here were met by radically different proof techniques, taking much more care about proofs by contradiction. The logic involved is closer to proof theory than to that of computability theory and computable functions. It is rather loosely conjectured that the difficulties may lie in the realm of computational complexity theory. Ineffective results are still being proved in the shape A or B, where we have no way of telling which. == References == == External links == Sprindzhuk, V.G. (2001) [1994], "Diophantine approximations", Encyclopedia of Mathematics, EMS Press
Wikipedia/Effective_results_in_number_theory
The Koukoulopoulos–Maynard theorem, also known as the Duffin–Schaeffer conjecture, is a theorem in mathematics, specifically in Diophantine approximation. It was proposed as a conjecture by R. J. Duffin and A. C. Schaeffer in 1941 and proven in 2019 by Dimitris Koukoulopoulos and James Maynard. It states that if f : N → R + {\displaystyle f:\mathbb {N} \rightarrow \mathbb {R} ^{+}} is a real-valued function taking on positive values, then for almost all α {\displaystyle \alpha } (with respect to Lebesgue measure), the inequality | α − p q | < f ( q ) q {\displaystyle \left|\alpha -{\frac {p}{q}}\right|<{\frac {f(q)}{q}}} has infinitely many solutions in coprime integers p , q {\displaystyle p,q} with q > 0 {\displaystyle q>0} if and only if ∑ q = 1 ∞ φ ( q ) f ( q ) q = ∞ , {\displaystyle \sum _{q=1}^{\infty }\varphi (q){\frac {f(q)}{q}}=\infty ,} where φ ( q ) {\displaystyle \varphi (q)} is Euler's totient function. A higher-dimensional analogue of this conjecture was resolved by Vaughan and Pollington in 1990. == Introduction == That the existence of the rational approximations implies divergence of the series follows from the Borel–Cantelli lemma. The converse implication is the crux of the conjecture. Many partial results toward the Duffin–Schaeffer conjecture were established before the full proof. Paul Erdős established in 1970 that the conjecture holds if there exists a constant c > 0 {\displaystyle c>0} such that for every integer n {\displaystyle n} we have either f ( n ) = c / n {\displaystyle f(n)=c/n} or f ( n ) = 0 {\displaystyle f(n)=0} . This was strengthened by Jeffrey Vaaler in 1978 to the case f ( n ) = O ( n − 1 ) {\displaystyle f(n)=O(n^{-1})} . More recently, this was strengthened to the conjecture being true whenever there exists some ε > 0 {\displaystyle \varepsilon >0} such that the series ∑ n = 1 ∞ ( f ( n ) n ) 1 + ε φ ( n ) = ∞ .
{\displaystyle \sum _{n=1}^{\infty }\left({\frac {f(n)}{n}}\right)^{1+\varepsilon }\varphi (n)=\infty .} This was done by Haynes, Pollington, and Velani. In 2006, Beresnevich and Velani proved that a Hausdorff measure analogue of the Duffin–Schaeffer conjecture is equivalent to the original Duffin–Schaeffer conjecture, which is a priori weaker. This result was published in the Annals of Mathematics. == See also == Khinchin's theorem == Notes == == References == Harman, Glyn (1998). Metric number theory. London Mathematical Society Monographs. New Series. Vol. 18. Oxford: Clarendon Press. ISBN 978-0-19-850083-4. Zbl 1081.11057. Harman, Glyn (2002). "One hundred years of normal numbers". In Bennett, M. A.; Berndt, B.C.; Boston, N.; Diamond, H.G.; Hildebrand, A.J.; Philipp, W. (eds.). Surveys in number theory: Papers from the millennial conference on number theory. Natick, MA: A K Peters. pp. 57–74. ISBN 978-1-56881-162-8. Zbl 1062.11052. == External links == Quanta magazine article about Duffin-Schaeffer conjecture. Numberphile interview with James Maynard about the proof.
Wikipedia/Duffin–Schaeffer_conjecture
Theoretical computer science is a subfield of computer science and mathematics that focuses on the abstract and mathematical foundations of computation. It is difficult to circumscribe the theoretical areas precisely. The ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) provides the following description: TCS covers a wide variety of topics including algorithms, data structures, computational complexity, parallel and distributed computation, probabilistic computation, quantum computation, automata theory, information theory, cryptography, program semantics and verification, algorithmic game theory, machine learning, computational biology, computational economics, computational geometry, and computational number theory and algebra. Work in this field is often distinguished by its emphasis on mathematical technique and rigor. == History == While logical inference and mathematical proof had existed previously, in 1931 Kurt Gödel proved with his incompleteness theorem that there are fundamental limitations on what statements could be proved or disproved. Information theory was added to the field with a 1948 mathematical theory of communication by Claude Shannon. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin, proved that there exist practically relevant problems that are NP-complete – a landmark result in computational complexity theory. Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed, as shown below: == Topics == === Algorithms === An algorithm is a step-by-step procedure for calculations. 
Algorithms are used for calculation, data processing, and automated reasoning. An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. === Automata theory === Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science, under discrete mathematics (a section of mathematics and also of computer science). The term automata comes from the Greek word αὐτόματα, meaning "self-acting". Automata theory studies self-operating virtual machines as a way to reason rigorously about the relationship between inputs and outputs, with or without intermediate stages of computation. === Coding theory === Coding theory is the study of the properties of codes and their fitness for a specific application. Codes are used for data compression, cryptography, error correction and more recently also for network coding. Codes are studied by various scientific disciplines – such as information theory, electrical engineering, mathematics, and computer science – for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction (or detection) of errors in the transmitted data.
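As a toy instance of adding redundancy for error detection (a minimal sketch, far simpler than the codes used in practice), a single even-parity bit lets a receiver detect any single flipped bit, though it can neither correct the error nor detect an even number of flips:

```python
def add_parity(bits):
    """Append an even-parity bit so the codeword has an even number of 1s."""
    return bits + [sum(bits) % 2]

def check_parity(codeword):
    """True when the codeword still has an even number of 1s."""
    return sum(codeword) % 2 == 0

data = [1, 0, 1, 1]
word = add_parity(data)      # [1, 0, 1, 1, 1]
assert check_parity(word)    # passes: nothing corrupted

corrupted = word[:]
corrupted[2] ^= 1            # a single bit flipped in transit
assert not check_parity(corrupted)   # the flip is detected
```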
=== Computational complexity theory === Computational complexity theory is a branch of the theory of computation that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. === Computational geometry === Computational geometry is a branch of computer science devoted to the study of algorithms that can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization. 
Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), computer vision (3D reconstruction). === Computational learning theory === Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples including the samples that have never been previously seen by the algorithm. The goal of the supervised learning algorithm is to optimize some measure of performance such as minimizing the number of mistakes made on new samples. === Computational number theory === Computational number theory, also known as algorithmic number theory, is the study of algorithms for performing number theoretic computations. The best known problem in the field is integer factorization. === Cryptography === Cryptography is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analyzing protocols that overcome the influence of adversaries and that are related to various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation. Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce. 
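A classic protocol built on a computational hardness assumption (here, the difficulty of computing discrete logarithms) is Diffie–Hellman key exchange. This sketch uses deliberately tiny, insecure parameters purely to show the mechanics:

```python
# Toy Diffie-Hellman key exchange. The numbers are illustrative only:
# real deployments use groups of ~2048 bits or elliptic curves.
p, g = 2087, 5          # public prime modulus and base (toy values)

a = 123                 # Alice's private exponent
b = 456                 # Bob's private exponent

A = pow(g, a, p)        # Alice publishes A = g^a mod p
B = pow(g, b, p)        # Bob publishes B = g^b mod p

shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob = pow(A, b, p)     # (g^a)^b mod p
assert shared_alice == shared_bob   # both sides derive g^(ab) mod p
print(shared_alice == shared_bob)   # True
```

An eavesdropper sees p, g, A, and B, but recovering the shared secret from them is the discrete-logarithm problem, believed infeasible at realistic parameter sizes.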
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that provably cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms. === Data structures === A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, databases use B-tree indexes for small percentages of data retrieval and compilers and databases use dynamic hash tables as look up tables. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and in secondary memory. === Distributed computation === Distributed computing studies distributed systems. A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. 
The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications, and blockchain networks like Bitcoin. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many alternatives for the message passing mechanism, including RPC-like connectors and message queues. An important goal and challenge of distributed systems is location transparency. === Information-based complexity === Information-based complexity (IBC) studies optimal algorithms and computational complexity for continuous problems. IBC has studied continuous problems as path integration, partial differential equations, systems of ordinary differential equations, nonlinear equations, integral equations, fixed points, and very-high-dimensional integration. === Formal methods === Formal methods are a particular kind of mathematics based techniques for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification. 
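One elementary formal-methods technique is explicit-state model checking: enumerate every reachable state of a finite model and check a safety property on each. A toy sketch (the two-light controller and its guard are hypothetical; real model checkers handle vastly larger state spaces):

```python
CYCLE = {"red": "green", "green": "yellow", "yellow": "red"}

def successors(state):
    """States reachable in one step; a light may only turn green while the other is red."""
    a, b = state
    succs = []
    if CYCLE[a] != "green" or b == "red":
        succs.append((CYCLE[a], b))
    if CYCLE[b] != "green" or a == "red":
        succs.append((a, CYCLE[b]))
    return succs

def reachable(initial):
    """Exhaustive exploration of the finite state space."""
    seen, frontier = {initial}, [initial]
    while frontier:
        for t in successors(frontier.pop()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

states = reachable(("red", "red"))
# Safety property: the two lights are never green simultaneously.
assert all(s != ("green", "green") for s in states)
print(sorted(states))
```

Because the exploration is exhaustive, the final assertion is a proof of the property for this model, which is exactly the guarantee that distinguishes verification from testing.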
=== Information theory === Information theory is a branch of applied mathematics, electrical engineering, and computer science involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Since its inception it has broadened to find applications in many other areas, including statistical inference, natural language processing, cryptography, neurobiology, the evolution and function of molecular codes, model selection in statistics, thermal physics, quantum computing, linguistics, plagiarism detection, pattern recognition, anomaly detection and other forms of data analysis. Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPEGs), and channel coding (e.g. for Digital Subscriber Line (DSL)). The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. Important sub-fields of information theory are source coding, channel coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, and measures of information. === Machine learning === Machine learning is a scientific discipline that deals with the construction and study of algorithms that can learn from data. Such algorithms operate by building a model based on inputs and using that to make predictions or decisions, rather than following only explicitly programmed instructions.
Machine learning can be considered a subfield of computer science and statistics. It has strong ties to artificial intelligence and optimization, which deliver methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR), search engines and computer vision. Machine learning is sometimes conflated with data mining, although that focuses more on exploratory data analysis. Machine learning and pattern recognition "can be viewed as two facets of the same field." === Natural computation === === Parallel computation === Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved "in parallel". There are several different forms of parallel computing: bit-level, instruction level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance. The maximum possible speed-up of a single program as a result of parallelization is known as Amdahl's law.
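Amdahl's law has a simple closed form: if a fraction p of a program's work parallelizes perfectly over n processors while the rest stays sequential, the overall speed-up is 1 / ((1 − p) + p/n). A quick sketch:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speed-up when a fraction p of the work is
    spread over n processors and the remaining (1 - p) stays sequential."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

# A 5% sequential part caps the speed-up at 1/0.05 = 20x, no matter how
# many processors are added:
print(amdahl_speedup(0.95, 10))      # about 6.9
print(amdahl_speedup(0.95, 10**9))   # approaches the 20x ceiling
```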
=== Programming language theory and program semantics === Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of theoretical computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals. In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically legal strings defined by a specific programming language, showing the computation involved. If a string is syntactically illegal, its evaluation yields no computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or by explaining how the program will execute on a certain platform, hence creating a model of computation. === Quantum computation === A quantum computer is a computation system that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously.
The field of quantum computing was first introduced by Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968. Experiments have been carried out in which quantum computational operations were executed on a very small number of qubits. Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis. === Symbolic computation === Computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although, properly speaking, computer algebra should be a subfield of scientific computing, they are generally considered distinct fields because scientific computing is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are thus manipulated as symbols (hence the name symbolic computation). Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications. These include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, and a large set of routines to perform usual operations, like simplification of expressions, differentiation using the chain rule, polynomial factorization, and indefinite integration.
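A toy illustration of symbolic computation: the differentiator below manipulates expression trees exactly, by the sum and product rules, rather than evaluating numbers. The tuple encoding is an illustrative assumption; real computer algebra systems use far richer representations:

```python
# Expressions as data: 'x' is the variable, numbers are constants,
# ('+', a, b) is a sum and ('*', a, b) is a product.
def diff(e):
    if e == 'x':
        return 1                      # d/dx x = 1
    if isinstance(e, (int, float)):
        return 0                      # d/dx c = 0
    op, a, b = e
    if op == '+':                     # sum rule
        return ('+', diff(a), diff(b))
    if op == '*':                     # product rule: (ab)' = a'b + ab'
        return ('+', ('*', diff(a), b), ('*', a, diff(b)))
    raise ValueError(f"unknown operator {op!r}")

# d/dx (x*x + 3): the result is itself an (unsimplified) expression tree.
print(diff(('+', ('*', 'x', 'x'), 3)))
```

The output still contains trivial terms like multiplication by 1; a real system would follow differentiation with a simplification pass, which is one of the "usual operations" listed above.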
=== Very-large-scale integration === Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI allows IC makers to add all of these circuits into one chip. == Organizations == European Association for Theoretical Computer Science SIGACT Simons Institute for the Theory of Computing == Journals and newsletters == Discrete Mathematics and Theoretical Computer Science Information and Computation Theory of Computing (open access journal) Formal Aspects of Computing Journal of the ACM SIAM Journal on Computing (SICOMP) SIGACT News Theoretical Computer Science Theory of Computing Systems TheoretiCS (open access journal) International Journal of Foundations of Computer Science Chicago Journal of Theoretical Computer Science (open access journal) Foundations and Trends in Theoretical Computer Science Journal of Automata, Languages and Combinatorics Acta Informatica Fundamenta Informaticae ACM Transactions on Computation Theory Computational Complexity Journal of Complexity ACM Transactions on Algorithms Information Processing Letters Open Computer Science (open access journal) == Conferences == Annual ACM Symposium on Theory of Computing (STOC) Annual IEEE Symposium on Foundations of Computer Science (FOCS) Innovations in Theoretical Computer Science (ITCS) Mathematical Foundations of Computer Science (MFCS) International Computer Science Symposium in Russia (CSR) ACM–SIAM Symposium on Discrete Algorithms (SODA) IEEE Symposium on Logic in Computer Science (LICS) Computational Complexity Conference (CCC) International Colloquium on Automata, Languages and Programming
(ICALP) Annual Symposium on Computational Geometry (SoCG) ACM Symposium on Principles of Distributed Computing (PODC) ACM Symposium on Parallelism in Algorithms and Architectures (SPAA) Annual Conference on Learning Theory (COLT) International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM) Symposium on Theoretical Aspects of Computer Science (STACS) European Symposium on Algorithms (ESA) Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX) Workshop on Randomization and Computation (RANDOM) International Symposium on Algorithms and Computation (ISAAC) International Symposium on Fundamentals of Computation Theory (FCT) International Workshop on Graph-Theoretic Concepts in Computer Science (WG) == See also == Formal science Unsolved problems in computer science Sun–Ni law == Notes == == Further reading == Martin Davis, Ron Sigal, Elaine J. Weyuker, Computability, complexity, and languages: fundamentals of theoretical computer science, 2nd ed., Academic Press, 1994, ISBN 0-12-206382-1. Covers theory of computation, but also program semantics and quantification theory. Aimed at graduate students. == External links == SIGACT directory of additional theory links (archived 15 July 2017) Theory Matters Wiki Theoretical Computer Science (TCS) Advocacy Wiki List of academic conferences in the area of theoretical computer science at confsearch Theoretical Computer Science – StackExchange, a Question and Answer site for researchers in theoretical computer science Computer Science Animated Theory of computation at the Massachusetts Institute of Technology
Wikipedia/Theoretical_Computer_Science
Programming languages are typically created by designing a form of representation of a computer program, and writing an implementation for the developed concept, usually an interpreter or compiler. Interpreters are designed to read programs, usually in some variation of a text format, and perform actions based on what they read, whereas compilers convert code to a lower-level form, such as object code. == Design == In programming language design, there are a wide variety of factors to consider. Some factors may be mutually exclusive (e.g. security versus speed). It may be necessary to consider whether a programming language will perform better interpreted or compiled, whether a language should be dynamically or statically typed, whether inheritance will be included, and the general syntax of the language. Many factors involved with the design of a language can be decided on by the goals behind the language. It is important to consider the target audience of a language, its unique features and its purpose. It is good practice to look at what existing languages lack, or make difficult, to make sure a language serves a purpose. Various experts have suggested useful design principles: As the last paragraph of an article published in 1972, Tony Hoare provided some general advice for any software project: "So my advice to the designers and implementer of software of the future is in a nutshell: do not decide exactly what you are going to do until you know how to do it; and do not decide how to do it until you have evaluated your plan against all the desired criteria of quality. And if you cannot do that, simplify your design until you can." At a SIGPLAN symposium in 1973, Tony Hoare discussed various language aspects in some detail. He also identified a number of shortcomings in (then) current programming languages. "a programming language is a tool which should assist the programmer in the most difficult aspects of his art, namely program design, documentation, and debugging."
"objective criteria for good language design may be summarized in five catch phrases: simplicity, security, fast translation, efficient object code, and readability." "It is absurd to make elaborate security checks on debugging runs, when no trust is put in the results, and then remove them in production runs, when an erroneous result could be expensive or disastrous. What would we think of a sailing enthusiast who wears his life-jacket when training on dry land but takes it off as soon as he goes to sea?" At IFIP Congress 1974, Niklaus Wirth, designer of Pascal, presented a paper "On the design of programming languages". Wirth listed a number of competing suggestions, most notably that a language should be easy to learn and use, it should be usable without new features being added, the compiler should generate efficient code, a compiler should be fast, and that a language should be compatible with libraries, the system it is running on, and programs written in other languages. == Implementation == === Interpreters === An interpreter is a program that reads another program, typically as text, as seen in languages like Python. Interpreters read code and produce the result directly. Interpreters typically read code line by line and parse it to convert and execute the code as operations and actions. === Compilers === Compilers are programs that read programs, also usually as some form of text, and convert the code into lower-level machine code or operations. Compiled formats generated by compilers store the lower-level actions in a file. Compiled languages, converted to machine code, tend to be much faster, as lower-level operations are easier to run and outcomes can be predicted and compiled ahead of time.
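The interpreter side of this contrast can be illustrated with a minimal sketch: the program below reads source as text and produces the result directly, with no separate compilation step. The prefix-notation language is a made-up example:

```python
# A tiny interpreter for prefix arithmetic: "+ 2 * 3 4" means 2 + (3 * 4).
# It parses and executes in a single recursive pass over the token stream.
def interpret(tokens):
    tok = tokens.pop(0)
    if tok == '+':
        return interpret(tokens) + interpret(tokens)
    if tok == '*':
        return interpret(tokens) * interpret(tokens)
    return int(tok)  # anything else is a numeric literal

program = "+ 2 * 3 4"
print(interpret(program.split()))  # -> 14
```

A compiler for the same toy language would instead walk the same token stream and emit instructions (e.g. a stack-machine sequence) to be executed later, trading a translation pass for faster repeated execution.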
== Process == Processes of making a programming language may differ from developer to developer; however, here is a general process of how one might create a programming language, which includes common concepts: Design: Design aspects are considered, such as types, syntax, semantics, and library usage to develop a language. Consideration: Syntax, implementation, and other factors are considered. Languages like Python interpret code at runtime, whereas languages like C++ follow an approach of basing their compilers on C's compiler. Create an implementation: A first implementation is written. Compilers will convert to other formats, usually ending up as low-level as assembly, even down to binary. Improve your implementation: Implementations should be improved upon. Expand the programming language, aiming for it to have enough functionality to bootstrap, where a programming language is capable of writing an implementation of itself. Bootstrapping: If using a compiler, a developer may use the process of bootstrapping, where a compiler for a programming language is rewritten in itself. This is useful for bug checking and demonstrates the language's capability. Bootstrapping also comes with the benefit of only needing to program the language in itself from then on. == References ==
Wikipedia/Programming_language_design
Prototype theory is a theory of categorization in cognitive science, particularly in psychology and cognitive linguistics, in which there is a graded degree of belonging to a conceptual category, and some members are more central than others. It emerged in 1971 with the work of psychologist Eleanor Rosch, and it has been described as a "Copernican Revolution" in the theory of categorization for its departure from the traditional Aristotelian categories. It has been criticized by those who still endorse the traditional theory of categories, like linguist Eugenio Coseriu and other proponents of the structural semantics paradigm. In this prototype theory, any given concept in any given language has a real world example that best represents this concept. For example, when asked to give an example of the concept furniture, a couch is more frequently cited than, say, a wardrobe. Prototype theory has also been applied in linguistics, as part of the mapping from phonological structure to semantics. In formulating prototype theory, Rosch drew in part from previous insights, in particular the formulation of a category model based on family resemblance by Wittgenstein (1953), and from Roger Brown's How shall a thing be called? (1958). == Overview and terminology == The term prototype, as defined in psychologist Eleanor Rosch's study "Natural Categories", was initially defined as denoting a stimulus, which takes a salient position in the formation of a category, due to the fact that it is the first stimulus to be associated with that category. Rosch later defined it as the most central member of a category. Rosch and others developed prototype theory as a response to, and radical departure from, the classical theory of concepts, which defines concepts by necessary and sufficient conditions. Necessary conditions refers to the set of features every instance of a concept must present, and sufficient conditions are those that no other entity possesses.
Rather than defining concepts by features, the prototype theory defines categories based on either a specific artifact of that category or by a set of entities within the category that represent a prototypical member. The prototype of a category can be understood in lay terms by the object or member of a class most often associated with that class. The prototype is the center of the class, with all other members moving progressively further from the prototype, which leads to the gradation of categories. Every member of the class is not equally central in human cognition. As in the example of furniture above, couch is more central than wardrobe. Contrary to the classical view, prototypes and gradations lead to an understanding of category membership not as an all-or-nothing approach, but as more of a web of interlocking categories which overlap. Further development of prototype theory by psychologist James Hampton and others replaced the notion of prototypes being the most typical exemplar with the proposal that a prototype is a bundle of correlated features. These features may or may not be true of all members of the class (necessary or defining features), but they will all be associated with being a typical member of the class. By this means, two aspects of concept structure can be explained. Some exemplars are more typical of a category than others, because they are a better fit to the concept prototype, having more of the features. Importantly, Hampton's prototype model explains the vagueness that can occur at the boundary of conceptual categories. While some may think of pictures, telephones or cookers as atypical furniture, others will say they are not furniture at all. Membership of a category can be a matter of degree, and the same features that give rise to typicality structure are also responsible for graded degrees of category membership.
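Hampton's feature-bundle proposal suggests a simple way to compute graded typicality: count how many prototype features an exemplar shares. The sketch below uses invented feature sets purely for illustration, not experimental data:

```python
# A prototype as a bundle of correlated features (illustrative assumptions).
FURNITURE_PROTOTYPE = {"has_legs", "sit_on", "indoors", "movable", "flat_surface"}

def typicality(exemplar_features, prototype=FURNITURE_PROTOTYPE):
    # Fraction of prototype features the exemplar possesses: graded, not all-or-nothing.
    return len(exemplar_features & prototype) / len(prototype)

couch    = {"has_legs", "sit_on", "indoors", "movable"}
wardrobe = {"indoors", "flat_surface"}

# Couch scores higher: it is the more central, more typical member.
print(typicality(couch), typicality(wardrobe))
```

Because the score is continuous, the same machinery accommodates both typicality gradients within a category and the fuzzy boundary cases (is a telephone furniture?) that Hampton's model is meant to explain.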
In cognitive linguistics it has been argued that linguistic categories also have a prototype structure, like categories of common words in a language. == Categories == === Basic level categories === The other notion related to prototypes is that of a basic level in cognitive categorization. Basic categories are relatively homogeneous in terms of sensory-motor affordances — a chair is associated with bending of one's knees, a fruit with picking it up and putting it in your mouth, etc. At the subordinate level (e.g. [dentist's chairs], [kitchen chairs] etc.) few significant features can be added to those of the basic level; whereas at the superordinate level, these conceptual similarities are hard to pinpoint. A picture of a chair is easy to draw (or visualize), but drawing furniture would be more difficult. Psychologists Eleanor Rosch, Carolyn Mervis and colleagues defined the basic level as that level that has the highest degree of cue validity and category validity. Thus, a category like [animal] may have a prototypical member, but no cognitive visual representation. On the other hand, basic categories in [animal], i.e. [dog], [bird], [fish], are full of informational content and can easily be categorized in terms of Gestalt and semantic features. Basic level categories tend to have the same parts and recognizable images. Clearly, semantic models based on attribute-value pairs fail to identify privileged levels in the hierarchy. Functionally, it is thought that basic level categories are a decomposition of the world into maximally informative categories. Thus, they maximize the number of attributes shared by members of the category, and minimize the number of attributes shared with other categories. However, the notion of basicness as a level can be problematic.
Linguistically, types of bird (swallow, robin, gull) are basic level - they have mono-morphemic nouns, which fall under the superordinate BIRD, and have subordinates expressed by noun phrases (herring gull, male robin). Yet in psychological terms, bird behaves as a basic level term. At the same time, atypical birds such as ostrich and penguin are themselves basic level terms, having very distinct outlines and not sharing obvious parts with other birds. More problems arise when the notion of a prototype is applied to lexical categories other than the noun. Verbs, for example, seem to defy a clear prototype: [to run] is hard to split up in more or less central members. In her 1975 paper, Rosch asked 200 American college students to rate, on a scale of 1 to 7, whether they regarded certain items as good examples of the category furniture. These items ranged from chair and sofa, ranked number 1, to a love seat (number 10), to a lamp (number 31), all the way to a telephone, ranked number 60. While one may differ from this list in terms of cultural specifics, the point is that such a graded categorization is likely to be present in all cultures. Further evidence that some members of a category are more privileged than others came from experiments involving: 1. Response Times: in which queries involving prototypical members (e.g. is a robin a bird) elicited faster response times than for non-prototypical members. 2. Priming: When primed with the higher-level (superordinate) category, subjects were faster in identifying if two words are the same. Thus, after flashing furniture, the equivalence of chair-chair is detected more rapidly than stove-stove. 3. Exemplars: When asked to name a few exemplars, the more prototypical items came up more frequently. Subsequent to Rosch's work, prototype effects have been investigated widely in areas such as colour cognition, and also for more abstract notions: subjects may be asked, e.g. 
"to what degree is this narrative an instance of telling a lie?". Similar work has been done on actions (verbs like look, kill, speak, walk [Pulman:83]), adjectives like "tall", etc. Another aspect in which Prototype Theory departs from traditional Aristotelian categorization is that there do not appear to be natural kind categories (bird, dog) vs. artifacts (toys, vehicles). A common comparison is the use of prototype or the use of exemplars in category classification. Medin, Altom, and Murphy found that participants who used a mixture of prototype and exemplar information were more accurately able to judge categories. Participants who were presented with prototype values classified based on similarity to stored prototypes and stored exemplars, whereas participants who had experience only with exemplars relied only on similarity to stored exemplars. Smith and Minda looked at the use of prototypes and exemplars in dot-pattern category learning. They found that participants used more prototypes than they used exemplars, with the prototypes being the center of the category, and exemplars surrounding it. == Distance between concepts == The notion of prototypes is related to Wittgenstein's (later) discomfort with the traditional notion of category. This influential theory has resulted in a view of semantic components more as possible rather than necessary contributors to the meaning of texts. His discussion on the category game is particularly incisive: Consider for example the proceedings that we call 'games'. I mean board games, card games, ball games, Olympic games, and so on. What is common to them all? Don't say, "There must be something common, or they would not be called 'games'"--but look and see whether there is anything common to all. For if you look at them you will not see something common to all, but similarities, relationships, and a whole series of them at that. To repeat: don't think, but look!
Look for example at board games, with their multifarious relationships. Now pass to card games; here you find many correspondences with the first group, but many common features drop out, and others appear. When we pass next to ball games, much that is common is retained, but much is lost. Are they all 'amusing'? Compare chess with noughts and crosses. Or is there always winning and losing, or competition between players? Think of patience. In ball games there is winning and losing; but when a child throws his ball at the wall and catches it again, this feature has disappeared. Look at the parts played by skill and luck; and at the difference between skill in chess and skill in tennis. Think now of games like ring-a-ring-a-roses; here is the element of amusement, but how many other characteristic features have disappeared! And we can go through the many, many other groups of games in the same way; can see how similarities crop up and disappear. And the result of this examination is: we see a complicated network of similarities overlapping and criss-crossing: sometimes overall similarities, sometimes similarities of detail. Wittgenstein's theory of family resemblance describes the phenomenon when people group concepts based on a series of overlapping features, rather than by one feature which exists throughout all members of the category. For example, basketball and baseball share the use of a ball, and baseball and chess share the feature of a winner, etc., rather than one defining feature of "games". Therefore, there is a distance between focal, or prototypical members of the category, and those that continue outwards from them, linked by shared features. Peter Gärdenfors has elaborated a possible partial explanation of prototype theory in terms of multi-dimensional feature spaces called conceptual spaces, where a category is defined in terms of a conceptual distance. More central members of a category are "between" the peripheral members. 
He postulates that most natural categories exhibit a convexity in conceptual space, in that if x and y are elements of a category, and if z is between x and y, then z is also likely to belong to the category. == Combining categories == Within language we find instances of combined categories, such as tall man or small elephant. Combining categories was a problem for extensional semantics, where the semantics of a word such as red is to be defined as the set of objects having this property. This does not apply as well to modifiers such as small; a small mouse is very different from a small elephant. These combinations pose a lesser problem in terms of prototype theory. In situations involving adjectives (e.g. tall), one encounters the question of whether or not the prototype of [tall] is a 6-foot-tall man or a 400-foot skyscraper. The solution emerges by contextualizing the notion of prototype in terms of the object being modified. This extends even more radically in compounds such as red wine or red hair which are hardly red in the prototypical sense, but the red indicates merely a shift from the prototypical colour of wine or hair respectively. The addition of red shifts the prototype from the one of hair to that of red hair. The prototype is changed by additional specific information, and combines features from the prototype of red and wine. == Dynamic structure and distance == Mario Mikulincer, Dov Paz, and Perry Kedem focused on the dynamic nature of prototypes and how represented semantic categories actually change due to emotional states. The four-part study assessed the relationships between situational stress and trait anxiety and the way people organize the hierarchical level at which semantic stimuli are categorized, the way people categorize natural objects, the narrowing of the breadth of categories and the proneness to use less inclusive levels of categorization instead of more inclusive ones.
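Gärdenfors's convexity criterion, stated above, can be sketched geometrically: if x and y belong to a category modelled as a convex region of conceptual space, any point between them belongs too. The coordinates and region below are illustrative assumptions only:

```python
# A two-dimensional "conceptual space" where a category is a convex region.
def between(x, y, t):
    """The point a fraction t of the way along the segment from x to y."""
    return tuple(xi + t * (yi - xi) for xi, yi in zip(x, y))

def in_category(point, center, radius):
    # A disc is convex, so it can model a convex category region.
    return sum((p - c) ** 2 for p, c in zip(point, center)) <= radius ** 2

center, radius = (0.0, 0.0), 1.0
x, y = (0.6, 0.0), (0.0, 0.6)          # two category members
z = between(x, y, 0.5)                 # a point midway between them
print(in_category(z, center, radius))  # True: convexity holds for this region
```

The prototype corresponds to the most central point of the region, and distance from it yields the graded typicality discussed earlier.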
== Critique == Prototype theory has been criticized by those who still endorse the classic theory of categories, like linguist Eugenio Coseriu and other proponents of the structural semantics paradigm. === Exemplar theory === Douglas L. Medin and Marguerite M. Schaffer showed by experiment that a context theory of classification which derives concepts purely from exemplars (cf. exemplar theory) worked better than a class of theories that included prototype theory. === Graded categorization === Linguists, including Stephen Laurence writing with Eric Margolis, have suggested problems with the prototype theory. In their 1999 paper, they raise several issues. One is that prototype theory does not intrinsically guarantee graded categorization. When subjects were asked to rank how well certain members exemplify the category, they rated some members above others. For example, robins were seen as being "birdier" than ostriches, but when asked whether these categories are "all-or-nothing" or have fuzzier boundaries, the subjects stated that they were defined, "all-or-nothing" categories. Laurence and Margolis concluded that "prototype structure has no implication for whether subjects represent a category as being graded" (p. 33). === Compound concepts === Daniel Osherson and Edward Smith raised the issue of pet fish for which the prototype might be a guppy kept in a bowl in someone's house. The prototype for pet might be a dog or cat, and the prototype for fish might be trout or salmon. However, the features of these prototypes are not present in the prototype for pet fish, therefore this prototype must be generated from something other than its constituent parts. James Hampton found that prototypes for conjunctive concepts such as pet fish are produced by a compositional function operating on the features of each concept. Initially all features of each concept are added to the prototype of the conjunction.
There is then a consistency check - for example pets are warm and cuddly but fish cannot be. Fish are often eaten for dinner, but pets never are. Hence the conjunctive prototype fails to inherit features of either concept that are incompatible with the other concept. A final stage in the process looks for knowledge of the class in long term memory, and if the class is familiar may add extra features - a process called "extensional feedback". The model was tested by showing how apparently logical syntactic conjunctions or disjunctions, such as "A sport which is also a game" or "Vehicles that are not Machines", or "Fruits or Vegetables" fail to conform to Boolean set logic. Chess is considered to be a sport which is a game, but is not considered to be a sport. Mushrooms are considered to be either a fruit or a vegetable, but when asked separately very few people consider them to be a vegetable and no one considers them to be a fruit. Antonio Lieto and Gian Luca Pozzato have proposed a typicality-based compositional logic (TCL) that is able to account for both complex human-like concept combinations (like the PET-FISH problem) and conceptual blending. Their framework shows how concepts expressed as prototypes can account for the phenomenon of prototypical compositionality in concept combination.
== See also == Composite photography – technique of combining multiple photographic images into one Composite portrait – compositing of images such as faces to produce an ideal type Exemplar theory – Psychological categorization proposal Family resemblance – Philosophical idea popularized by Ludwig Wittgenstein Folksonomy – Classification based on users' tags Frame semantics – Linguistic theory Intuitive statistics – cognitive phenomenon where organisms use data to make generalizations and predictions about the world Platonic ideal – Philosophical theory attributed to Plato Semantic feature-comparison model Similarity (philosophy) – Relation of resemblance between objects == Footnotes == == References ==
Wikipedia/Prototype_theory
In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program. In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner. Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming that treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification. Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Lean is a functional programming language commonly used for verifying mathematical theorems. 
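The contrast between pure functions and impure procedures can be sketched in Python; the shopping-cart example is illustrative:

```python
# Pure: depends only on its arguments and returns a new value.
def add_item_pure(cart, item):
    return cart + [item]          # the input list is never modified

# Impure: mutates shared state, so its effect depends on prior calls.
state = []
def add_item_impure(item):
    state.append(item)            # side effect visible to the rest of the program

cart = ["apple"]
new_cart = add_item_pure(cart, "bread")
add_item_impure("milk")

print(cart)      # ['apple'] - untouched, so callers can rely on it
print(new_cart)  # ['apple', 'bread']
print(state)     # ['milk'] - hidden coupling through mutable state
```

The pure version is trivially testable and safe to call from concurrent code, which is exactly the kind of benefit proponents of restricting side effects claim.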
Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values. In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java (since Java 8). == History == The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation, showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s. Church later developed a weaker system, the simply typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms. This forms the basis for statically typed functional programming. The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT). Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions. Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. 
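The lambda calculus's expressive power can be glimpsed in Python, whose lambda syntax is a direct descendant: Church numerals encode natural numbers purely as functions, with arithmetic as function application. This is the standard textbook construction, sketched minimally:

```python
# A Church numeral n is a function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Interpret a Church numeral by counting applications of "add one".
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(add(two)(two)))  # -> 4: arithmetic done entirely with functions
```

That numbers, booleans, and data structures can all be encoded this way is the concrete content of the claim that the lambda calculus is Turing complete.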
Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced. Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of a generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on the mutating list structure and similar imperative features. Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language (ISBN 9780471430148). APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q. In the mid-1960s, Peter Landin invented the SECD machine, the first abstract machine for a functional programming language, described a correspondence between ALGOL 60 and the lambda calculus, and proposed the ISWIM programming language. John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs". He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality. 
Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming. The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL. NPL was based on Kleene recursion equations and was first introduced in their work on program transformation. Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope. ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML. In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming. In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types. This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages. The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. Because Miranda was proprietary, a consensus formed in 1987 to create Haskell as an open standard for functional programming research; implementation releases have been ongoing since 1990. 
More recently it has found use in niches such as parametric CAD in the OpenSCAD language built on the CGAL framework, although its restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept. Functional programming continues to be used in commercial settings. == Concepts == A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts. === First-class and higher-order functions === Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator d/dx, which returns the derivative of a function f. Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values). Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. 
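Currying and partial application can be sketched in Python, both by hand and with the standard library's functools.partial (the names curried_add and successor are illustrative):

```python
from functools import partial

def add(a, b):
    return a + b

# Manual currying: applying add to one argument yields a new function
# that waits for the remaining argument.
def curried_add(a):
    return lambda b: add(a, b)

successor = curried_add(1)        # addition partially applied to 1
assert successor(41) == 42

# The standard library expresses the same idea via functools.partial.
successor2 = partial(add, 1)
assert successor2(7) == 8
```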
This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one. === Pure functions === Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code: If the result of a pure expression is not used, it can be removed without affecting other expressions. If a pure function is called with arguments that cause no side effects, the result is constant with respect to that argument list (sometimes called referential transparency or idempotence), i.e., calling the pure function again with the same arguments returns the same result. (This can enable caching optimizations such as memoization.) If there is no data dependency between two pure expressions, their order can be reversed, or they can be performed in parallel and they cannot interfere with one another (in other terms, the evaluation of any pure expression is thread-safe). If the entire language does not allow side effects, then any evaluation strategy can be used; this gives the compiler freedom to reorder or combine the evaluation of expressions in a program (for example, using deforestation). While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure. C++11 added the constexpr keyword with similar semantics. === Recursion === Iteration (looping) in functional languages is usually accomplished via recursion. 
Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space proportional to the depth of recursion. This could make recursion prohibitively expensive to use instead of imperative loops. However, a special form of recursion known as tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail recursion optimization can be implemented by transforming the program into continuation passing style during compilation, among other approaches. The Scheme language standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls. Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop and doing so would be safe-for-space. Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example, the CHICKEN Scheme implementation intentionally maintains a stack and lets the stack overflow. However, when this happens, its garbage collector will reclaim space, allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop. Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages. 
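As an illustration, summing a list can be written with plain recursion, with tail recursion, or with a fold; a Python sketch (note that CPython does not perform tail-call optimization, so the tail-recursive form is shown only for its shape):

```python
from functools import reduce

# Plain recursion: repeat until the base case (the empty list) is reached.
def sum_rec(xs):
    if not xs:
        return 0                      # base case
    return xs[0] + sum_rec(xs[1:])    # recursive case

# Tail-recursive form: the recursive call is the last operation,
# carrying an accumulator.  A compiler with tail-call optimization
# can turn this into a loop.
def sum_tail(xs, acc=0):
    if not xs:
        return acc
    return sum_tail(xs[1:], acc + xs[0])

# The same recursion pattern abstracted as a fold (a catamorphism).
def sum_fold(xs):
    return reduce(lambda acc, x: acc + x, xs, 0)

data = [1, 2, 3, 4]
assert sum_rec(data) == sum_tail(data) == sum_fold(data) == 10
```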
Most general purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming. === Strict versus non-strict evaluation === Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression: print length([2+1, 3*2, 1/0, 5-4]) fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself. 
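The list-length example above can be mimicked in Python, a strict language, by wrapping each element in a parameterless lambda (a "thunk") so that its evaluation is deferred until explicitly forced; a sketch:

```python
# Under strict evaluation, building the list itself fails on 1/0.
try:
    length = len([2 + 1, 3 * 2, 1 / 0, 5 - 4])
except ZeroDivisionError:
    length = None
assert length is None

# Deferring each element behind a thunk imitates lazy evaluation:
# the length can be computed without evaluating any element.
thunks = [lambda: 2 + 1, lambda: 3 * 2, lambda: 1 / 0, lambda: 5 - 4]
assert len(thunks) == 4          # no element is evaluated

# Forcing a thunk evaluates it on demand, as a lazy language would.
assert thunks[0]() == 3
```

Forcing `thunks[2]()` would raise the division-by-zero error, which is exactly the failing subterm that strict evaluation hits immediately.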
The usual implementation strategy for lazy evaluation in functional languages is graph reduction. Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell. Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams. Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis. Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them. === Type systems === Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, which rejects all invalid programs at compilation time at the risk of false positive errors (rejecting some valid programs). By contrast, Lisp and its variants (such as Scheme) use the untyped lambda calculus, which accepts all valid programs at compilation time at the risk of false negative errors (accepting some invalid programs), since it rejects invalid programs only at runtime, when enough information is available to distinguish them from valid programs. The use of algebraic data types makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in the absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases. Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with. 
But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the language C that is written in Coq and formally verified. A limited form of dependent types called generalized algebraic data types (GADTs) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience. GADTs are available in the Glasgow Haskell Compiler, in OCaml and in Scala, and have been proposed as additions to other languages including Java and C#. === Referential transparency === Functional programs do not have assignment statements; that is, the value of a variable in a functional program never changes once defined. This eliminates any chance of side effects, because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent. Consider the C assignment statement x = x * 10: it changes the value assigned to the variable x. If the initial value of x is 1, then two consecutive executions of the statement leave x with the values 10 and 100, respectively. Clearly, replacing x = x * 10 with either 10 or 100 gives a program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent. By contrast, a function such as int plusone(int x) { return x + 1; } is referentially transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent. 
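The same contrast can be sketched in Python rather than C (variable names are illustrative):

```python
# Not referentially transparent: executing x = x * 10 changes the
# meaning of x, so successive executions leave different values.
x = 1
x = x * 10
first = x          # 10
x = x * 10
second = x         # 100
assert (first, second) == (10, 100)

# Referentially transparent: plusone depends only on its argument and
# can be replaced by its value anywhere without changing the program.
def plusone(n):
    return n + 1

assert plusone(1) == 2
assert plusone(1) == 2     # same argument, same result, no hidden state
```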
=== Data structures === Purely functional data structures are often represented differently from their imperative counterparts. For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating. Calling the insert method results in only some of the tree's nodes being created anew, while the rest are shared with the previous version. == Comparison to imperative programming == Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side effects and provides referential transparency. Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order "map" function that takes a function and a list, generating and returning a new list by applying the function to each list item. === Imperative vs. functional programming === The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable result. 
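The same computation can be sketched in Python (rather than the article's JavaScript) to show the two styles side by side:

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5, 6]

# Imperative style: a loop that mutates an accumulator variable.
result = 0
for n in numbers:
    if n % 2 == 0:
        result += n * 10
assert result == 120

# Functional style: filter the evens, map each to ten times itself,
# and fold the results into a sum; no variable is mutated.
result_fp = reduce(
    lambda acc, n: acc + n,
    map(lambda n: n * 10, filter(lambda n: n % 2 == 0, numbers)),
    0,
)
assert result_fp == 120
```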
One uses a traditional imperative loop; the other uses functional programming with higher-order functions. Sometimes the abstractions offered by functional programming might lead to development of more robust code that avoids certain issues that might arise when building upon large amounts of complex imperative code, such as off-by-one errors (see Greenspun's tenth rule). === Simulating state === There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way. The pure functional programming language Haskell implements them using monads, derived from category theory. Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries). Functional languages also simulate state by passing around immutable state values. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged. Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations. Alternative methods such as Hoare logic and uniqueness types have been developed to track side effects in programs. 
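The state-passing technique described above can be sketched for the bank-account example in Python (the function names and amounts are illustrative):

```python
# The balance is never mutated: each operation takes the current state
# and returns a (result, new_state) pair, leaving the old state intact.
def deposit(balance, amount):
    return amount, balance + amount

def withdraw(balance, amount):
    if amount > balance:
        return 0, balance          # reject overdraft, state unchanged
    return amount, balance - amount

state0 = 100
_, state1 = deposit(state0, 50)
taken, state2 = withdraw(state1, 30)

assert state0 == 100               # earlier states remain valid
assert state1 == 150
assert (taken, state2) == (30, 120)
```

Because every intermediate state is preserved, the history of the account is available for free, which is the persistence property discussed in the data-structures section.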
Some modern research languages use effect systems to make the presence of side effects explicit. === Efficiency issues === Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree). However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations. Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion. Although the copying implicit in dealing with persistent immutable data structures might seem computationally costly, some functional programming languages, like Clojure, solve this issue by implementing mechanisms for safe memory sharing between formally immutable data. Rust distinguishes itself by its approach to data immutability, which involves immutable references and a concept called lifetimes. 
Immutable data with separation of identity and state, together with shared-nothing schemes, can also potentially be better suited for concurrent and parallel programming, by virtue of reducing or eliminating the risk of certain concurrency hazards, since concurrent operations are usually atomic and this allows eliminating the need for locks. This is how, for example, the java.util.concurrent classes are implemented, where some of them are immutable variants of the corresponding classes that are not suitable for concurrent use. Functional programming languages often have a concurrency model that, instead of shared state and synchronization, leverages message passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue). This approach is common in Erlang/Elixir and Akka. Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993 discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008 give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation, making extensive use of dereferenced code and data, perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles). ==== Abstraction cost ==== Some functional programming languages might not optimize abstractions such as higher-order functions like "map" or "filter" as efficiently as the underlying imperative operations. 
Consider, as an example, two ways to check if 5 is an even number in Clojure: one using the standard even? predicate, and one invoking the underlying Java .equals method directly. When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation has a mean execution time of 4.76 ms, while the second has a mean execution time of 2.8 μs – roughly 1700 times faster. Part of that difference can be attributed to the type checking and exception handling involved in the implementation of even?. The overhead is not universal, however: the lo library for Go, for instance, implements various higher-order functions common in functional programming languages using generics. In a benchmark provided by the library's author, calling map is 4% slower than an equivalent for loop and has the same allocation profile, which can be attributed to various compiler optimizations, such as inlining. One distinguishing feature of Rust is its zero-cost abstractions. This means that using them imposes no additional runtime overhead. This is achieved thanks to the compiler using loop unrolling, where each iteration of a loop, be it imperative or using iterators, is converted into a standalone assembly instruction, without the overhead of the loop controlling code. If an iterative operation writes to an array, the resulting array's elements will be stored in specific CPU registers, allowing for constant-time access at runtime. === Functional programming in non-functional languages === It is possible to use a functional style of programming in languages that are not traditionally considered functional languages. For example, both D and Fortran 95 explicitly support pure functions. JavaScript, Lua, Python and Go had first-class functions from their inception. 
Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2, though Python 3 relegated "reduce" to the functools standard library module. First-class functions have been introduced into other mainstream languages such as Perl 5.0 in 1994, PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin. In Perl, lambda, map, reduce, filter, and closures are fully supported and frequently used. The book Higher-Order Perl, released in 2005, was written to provide an expansive guide on using Perl for functional programming. In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style. In Java, anonymous classes can sometimes be used to simulate closures; however, anonymous classes are not always proper replacements for closures because they have more limited capabilities. Java 8 supports lambda expressions as a replacement for some anonymous classes. In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#. Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold. Similarly, the idea of immutable data from functional programming is often included in imperative programming languages, for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript. == Comparison to logic programming == Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations. 
For example, the function mother(X) = Y (every X has only one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. A logic program defining such a relation can be queried, like a functional program, to generate mothers from children; but it can also be queried backwards, to generate children; it can even be used to generate all instances of the mother relation. Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother can be written in nested form in functional syntax, whereas the same definition in relational notation must be written in unnested form, where :- means if and the comma means and. However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming; Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy. == Applications == === Text editors === Emacs, a highly extensible text editor family, uses its own Lisp dialect for writing plugins. Richard Stallman, the original author of the most popular Emacs implementation, GNU Emacs, and of Emacs Lisp, considers Lisp one of his favorite programming languages. Helix, since version 24.03, supports previewing the AST as S-expressions, which are also the core feature of the Lisp programming language family. === Spreadsheets === Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system. However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. 
Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far these remain primarily academic in nature. === Microservices === Due to their composability, functional programming paradigms can be suitable for microservices-based architectures. === Academia === Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming. === Industry === Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems, but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp. Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers and has been applied to problems such as training-simulation software and telescope control. OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis, driver verification, industrial robot programming and static analysis of embedded software. Haskell, though initially intended as a research language, has also been applied in areas such as aerospace systems, hardware design and web programming. Other functional programming languages that have seen use in industry include Scala, F#, Wolfram Language, Lisp, Standard ML and Clojure. Scala has been widely used in data science, while ClojureScript, Elm, and PureScript are among the functional frontend programming languages used in production. 
Elixir's Phoenix framework is also used by some relatively popular commercial projects, such as Font Awesome and Allegro Lokalnie, the classified-ads platform of Allegro, one of the biggest e-commerce platforms in Poland. Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner to Gröbner basis optimizations but also for regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory. === Education === Many universities teach functional programming. Some treat it as an introductory programming concept while others first teach imperative programming methods. Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts. It has also been used to teach classical mechanics, as in the book Structure and Interpretation of Classical Mechanics. In particular, Scheme has been a relatively popular choice for teaching programming for years. == See also == Eager evaluation Functional reactive programming Inductive functional programming List of functional programming languages List of functional programming topics Nested function Purely functional programming == Notes and references == == Further reading == Abelson, Hal; Sussman, Gerald Jay (1985). Structure and Interpretation of Computer Programs. MIT Press. Bibcode:1985sicp.book.....A. Cousineau, Guy and Michel Mauny. The Functional Approach to Programming. Cambridge, UK: Cambridge University Press, 1998. Curry, Haskell Brooks and Feys, Robert and Craig, William. Combinatory Logic. Volume I. North-Holland Publishing Company, Amsterdam, 1958. Curry, Haskell B.; Hindley, J. 
Roger; Seldin, Jonathan P. (1972). Combinatory Logic. Vol. II. Amsterdam: North Holland. ISBN 978-0-7204-2208-5. Dominus, Mark Jason. Higher-Order Perl. Morgan Kaufmann. 2005. Felleisen, Matthias; Findler, Robert; Flatt, Matthew; Krishnamurthi, Shriram (2018). How to Design Programs. MIT Press. Graham, Paul. ANSI Common LISP. Englewood Cliffs, New Jersey: Prentice Hall, 1996. MacLennan, Bruce J. Functional Programming: Practice and Theory. Addison-Wesley, 1990. Michaelson, Greg (10 April 2013). An Introduction to Functional Programming Through Lambda Calculus. Courier Corporation. ISBN 978-0-486-28029-5. O'Sullivan, Brian; Stewart, Don; Goerzen, John (2008). Real World Haskell. O'Reilly. Pratt, Terrence W. and Marvin Victor Zelkowitz. Programming Languages: Design and Implementation. 3rd ed. Englewood Cliffs, New Jersey: Prentice Hall, 1996. Salus, Peter H. Functional and Logic Programming Languages. Vol. 4 of Handbook of Programming Languages. Indianapolis, Indiana: Macmillan Technical Publishing, 1998. Thompson, Simon. Haskell: The Craft of Functional Programming. Harlow, England: Addison-Wesley Longman Limited, 1996. == External links == Ford, Neal. "Functional thinking". Retrieved 2021-11-10. Akhmechet, Slava (2006-06-19). "defmacro – Functional Programming For The Rest of Us". Retrieved 2013-02-24. An introduction Functional programming in Python (by David Mertz): part 1, part 2, part 3
Wikipedia/Functional_programming_language
Predicate transformer semantics were introduced by Edsger Dijkstra in his seminal paper "Guarded commands, nondeterminacy and formal derivation of programs". They define the semantics of an imperative programming paradigm by assigning to each statement in this language a corresponding predicate transformer: a total function between two predicates on the state space of the statement. In this sense, predicate transformer semantics are a kind of denotational semantics. Actually, in guarded commands, Dijkstra uses only one kind of predicate transformer: the well-known weakest preconditions (see below). Moreover, predicate transformer semantics are a reformulation of Floyd–Hoare logic. Whereas Hoare logic is presented as a deductive system, predicate transformer semantics (either by weakest-preconditions or by strongest-postconditions, see below) are complete strategies to build valid deductions of Hoare logic. In other words, they provide an effective algorithm to reduce the problem of verifying a Hoare triple to the problem of proving a first-order formula. Technically, predicate transformer semantics perform a kind of symbolic execution of statements into predicates: execution runs backward in the case of weakest-preconditions, or runs forward in the case of strongest-postconditions. == Weakest preconditions == === Definition === For a statement S and a postcondition R, a weakest precondition is a predicate Q such that for any precondition P, { P } S { R } {\displaystyle \{P\}S\{R\}} if and only if P ⇒ Q {\displaystyle P\Rightarrow Q} . In other words, it is the "loosest" or least restrictive requirement needed to guarantee that R holds after S.
Uniqueness follows easily from the definition: If both Q and Q' are weakest preconditions, then by the definition { Q ′ } S { R } {\displaystyle \{Q'\}S\{R\}} so Q ′ ⇒ Q {\displaystyle Q'\Rightarrow Q} and { Q } S { R } {\displaystyle \{Q\}S\{R\}} so Q ⇒ Q ′ {\displaystyle Q\Rightarrow Q'} , and thus Q = Q ′ {\displaystyle Q=Q'} . We often use w p ( S , R ) {\displaystyle wp(S,R)} to denote the weakest precondition for statement S with respect to a postcondition R. === Conventions === We use T to denote the predicate that is everywhere true and F to denote the one that is everywhere false. These predicates should not be confused, at least conceptually, with the Boolean expressions defined by some language syntax, which may also contain true and false as Boolean scalars. For such scalars a type coercion is needed, so that T = predicate(true) and F = predicate(false). This promotion is often carried out casually, so people tend to take T as true and F as false. === Skip === w p ( skip , R ) = R {\displaystyle wp({\texttt {skip}},R)\ =\ R} === Abort === w p ( abort , R ) = F {\displaystyle wp({\texttt {abort}},R)\ =\ {\texttt {F}}} === Assignment === We give below two equivalent weakest-preconditions for the assignment statement. In these formulas, R [ x ← E ] {\displaystyle R[x\leftarrow E]} is a copy of R where free occurrences of x are replaced by E. Hence, here, expression E is implicitly coerced into a valid term of the underlying logic: it is thus a pure expression, totally defined, terminating and without side effect. version 1: w p ( x := E , R ) = ∀ y , ( y = E ) ⇒ R [ x ← y ] {\displaystyle wp(x:=E,R)\ =\ \forall y,(y=E)\Rightarrow R[x\leftarrow y]} where y is a fresh variable, not free in E or R. version 2: w p ( x := E , R ) = R [ x ← E ] {\displaystyle wp(x:=E,R)\ =\ R[x\leftarrow E]} Provided that E is well defined, we just apply the so-called one-point rule on version 1; then version 1 reduces to version 2. The first version avoids a potential duplication of x in R, whereas the second version is simpler when there is at most a single occurrence of x in R. The first version also reveals a deep duality between weakest-precondition and strongest-postcondition (see below).
An example of a valid calculation of wp (using version 2) for assignments with integer valued variable x is: w p ( x := x − 5 , x > 10 ) = x − 5 > 10 ⇔ x > 15 {\displaystyle {\begin{array}{rcl}wp(x:=x-5,x>10)&=&x-5>10\\&\Leftrightarrow &x>15\end{array}}} This means that in order for the postcondition x > 10 to be true after the assignment, the precondition x > 15 must be true before the assignment. This is also the "weakest precondition", in that it is the "weakest" restriction on the value of x which makes x > 10 true after the assignment. === Sequence === The weakest precondition of a sequence is obtained by composing the transformers: w p ( S 1 ; S 2 , R ) = w p ( S 1 , w p ( S 2 , R ) ) {\displaystyle wp(S1;S2,R)\ =\ wp(S1,wp(S2,R))} For example, w p ( x := x − 5 ; x := x ∗ 2 , x > 20 ) = w p ( x := x − 5 , w p ( x := x ∗ 2 , x > 20 ) ) = w p ( x := x − 5 , x ∗ 2 > 20 ) = ( x − 5 ) ∗ 2 > 20 = x > 15 {\displaystyle {\begin{array}{rcl}wp(x:=x-5;x:=x*2\ ,\ x>20)&=&wp(x:=x-5,wp(x:=x*2,x>20))\\&=&wp(x:=x-5,x*2>20)\\&=&(x-5)*2>20\\&=&x>15\end{array}}} === Conditional === The conditional statement distributes the postcondition over both branches: w p ( if B then S 1 else S 2 end , R ) = ( B ⇒ w p ( S 1 , R ) ) ∧ ( ¬ B ⇒ w p ( S 2 , R ) ) {\displaystyle wp({\texttt {if}}\ B\ {\texttt {then}}\ S1\ {\texttt {else}}\ S2\ {\texttt {end}},R)\ =\ (B\Rightarrow wp(S1,R))\wedge (\neg B\Rightarrow wp(S2,R))} As example: w p ( if x < y then x := y else skip end , x ≥ y ) = ( x < y ⇒ w p ( x := y , x ≥ y ) ) ∧ ( ¬ ( x < y ) ⇒ w p ( skip , x ≥ y ) ) = ( x < y ⇒ y ≥ y ) ∧ ( ¬ ( x < y ) ⇒ x ≥ y ) ⇔ true {\displaystyle {\begin{array}{rcl}wp({\texttt {if}}\ x<y\ {\texttt {then}}\ x:=y\ {\texttt {else}}\;\;{\texttt {skip}}\;\;{\texttt {end}},\ x\geq y)&=&(x<y\Rightarrow wp(x:=y,x\geq y))\ \wedge \ (\neg (x<y)\Rightarrow wp({\texttt {skip}},x\geq y))\\&=&(x<y\Rightarrow y\geq y)\ \wedge \ (\neg (x<y)\Rightarrow x\geq y)\\&\Leftrightarrow &{\texttt {true}}\end{array}}} === While loop === ==== Partial correctness ==== Ignoring termination for a moment, we can define the rule for the weakest liberal precondition, denoted wlp, using a predicate INV, called the Loop INVariant, typically supplied by the programmer: ==== Total correctness ==== To show total correctness, we also have to show that the loop terminates.
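The rules for assignment, sequence and conditional lend themselves to a direct executable sketch. In the following illustrative Python (an assumption of this edit, not part of Dijkstra's calculus: predicates are modelled semantically as callables over a state dictionary, a statement is modelled directly as its predicate transformer, and the helper names assign, seq, cond and skip are hypothetical), the examples above are checked on sample states:

```python
# Illustrative semantic model of weakest preconditions (a sketch, not
# Dijkstra's syntactic calculus): a predicate is a callable state -> bool,
# and a statement is modelled directly as its predicate transformer,
# i.e. a function mapping a postcondition to its weakest precondition.

def assign(var, expr):
    # wp(x := E, R) = R[x <- E], realized semantically by evaluating R
    # in the state updated with the value of E.
    return lambda R: (lambda s: R({**s, var: expr(s)}))

def seq(S1, S2):
    # wp(S1; S2, R) = wp(S1, wp(S2, R))
    return lambda R: S1(S2(R))

def cond(B, S1, S2):
    # wp(if B then S1 else S2 end, R) = (B => wp(S1,R)) and (not B => wp(S2,R))
    return lambda R: (lambda s: (not B(s) or S1(R)(s)) and (B(s) or S2(R)(s)))

skip = lambda R: R  # wp(skip, R) = R

# wp(x := x-5, x > 10)  <=>  x > 15
pre = assign("x", lambda s: s["x"] - 5)(lambda s: s["x"] > 10)
assert pre({"x": 16}) and not pre({"x": 15})

# wp(x := x-5; x := x*2, x > 20)  <=>  x > 15
pre2 = seq(assign("x", lambda s: s["x"] - 5),
           assign("x", lambda s: s["x"] * 2))(lambda s: s["x"] > 20)
assert pre2({"x": 16}) and not pre2({"x": 15})

# wp(if x < y then x := y else skip end, x >= y)  <=>  true
pre3 = cond(lambda s: s["x"] < s["y"],
            assign("x", lambda s: s["y"]),
            skip)(lambda s: s["x"] >= s["y"])
assert all(pre3({"x": a, "y": b}) for a in range(-3, 4) for b in range(-3, 4))
```

Because predicates here are evaluated rather than manipulated symbolically, an equivalence such as pre ⇔ x > 15 can only be sampled on states; a syntactic implementation would instead perform the substitution R[x ← E] on formulas.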
For this we define a well-founded relation on the state space denoted as (wfs, <) and define a variant function vf, such that we have: Informally, in the above conjunction of three formulas: the first one means that the variant must be part of the well-founded relation before entering the loop; the second one means that the body of the loop (i.e. statement S) must preserve the invariant and reduce the variant; the last one means that the loop postcondition R must be established when the loop finishes. However, the conjunction of those three is not a necessary condition. More precisely, we have === Non-deterministic guarded commands === Actually, Dijkstra's Guarded Command Language (GCL) is an extension of the simple imperative language given so far with non-deterministic statements. Indeed, GCL aims to be a formal notation to define algorithms. Non-deterministic statements represent choices left to the actual implementation (in an effective programming language): properties proved on non-deterministic statements are ensured for all possible choices of implementation. In other words, weakest-preconditions of non-deterministic statements ensure that there exists a terminating execution (i.e., there exists an implementation), and that the final state of every terminating execution satisfies the postcondition. Notice that the definitions of weakest-precondition given above (in particular for while-loop) preserve this property. ==== Selection ==== Selection is a generalization of the if statement: Here, when two guards E i {\displaystyle E_{i}} and E j {\displaystyle E_{j}} are simultaneously true, then execution of this statement can run either of the associated statements S i {\displaystyle S_{i}} or S j {\displaystyle S_{j}} . ==== Repetition ==== Repetition is a generalization of the while statement in a similar way. === Specification statement === Refinement calculus extends GCL with the notion of specification statement.
Syntactically, we prefer to write a specification statement as x : l [ p r e , p o s t ] {\displaystyle x:l[pre,post]} which specifies a computation that starts in a state satisfying pre and is guaranteed to end in a state satisfying post by changing only x. We call l {\displaystyle l} a logical constant employed to aid in a specification. For example, we can specify a computation that increments x by 1 as x : l [ x = l , x = l + 1 ] {\displaystyle x:l[x=l,x=l+1]} Another example is a computation of a square root of an integer. x : l [ x = l 2 , x = l ] {\displaystyle x:l[x=l^{2},x=l]} The specification statement appears to be a primitive in the sense that it does not contain other statements. However, it is very expressive, as pre and post are arbitrary predicates. Its weakest precondition is as follows. It combines Morgan's syntactic idea with the sharpness idea by Bijlsma, Matthews and Wiltink. A major advantage of this is the capability of defining the wp of goto L and other jump statements. === Goto statement === The formalization of jump statements like goto L was a long and bumpy process. A common belief seems to be that the goto statement can only be argued operationally. This is probably due to a failure to recognize that goto L is actually miraculous (i.e. non-strict) and does not obey what Dijkstra termed the Law of the Excluded Miracle. But it enjoys an extremely simple operational view from the weakest precondition perspective, which was unexpected. We define w p ( goto L , Q ) = w p L {\displaystyle wp({\texttt {goto}}\ L,Q)\ =\ wpL} where wpL is the weakest precondition holding at label L. For goto L, execution transfers control to label L, at which the weakest precondition has to hold. The way that wpL is referred to in the rule should not be taken as a big surprise. It is just ⁠ w p ( L : S , Q ) {\displaystyle wp(L:S,Q)} ⁠ for some Q computed to that point. This is like any other wp rule: constituent statements are used to give wp definitions, even though goto L appears to be primitive.
The rule does not require the uniqueness for locations where wpL holds within a program, so theoretically it allows the same label to appear in multiple locations as long as the weakest precondition at each location is the same wpL. The goto statement can jump to any of such locations. This actually justifies that we could place the same labels at the same location multiple times, as ⁠ S ( L : L : S 1 ) {\displaystyle S(L:L:S1)} ⁠, which is the same as ⁠ S ( L : S 1 ) {\displaystyle S(L:S1)} ⁠. Also, it does not imply any scoping rule, thus allowing a jump into a loop body, for example. Let us calculate wp of the following program S, which has a jump into the loop body. wp(do x > 0 → L: x := x-1 od; if x < 0 → x := -x; goto L ⫿ x ≥ 0 → skip fi, post) = { sequential composition and alternation rules } wp(do x > 0 → L: x := x-1 od, (x<0 ∧ wp(x := -x; goto L, post)) ∨ (x ≥ 0 ∧ post) = { sequential composition, goto, assignment rules } wp(do x > 0 → L: x := x-1 od, x<0 ∧ wpL(x ← -x) ∨ x≥0 ∧ post) = { repetition rule } the strongest solution of Z: [ Z ≡ x > 0 ∧ wp(L: x := x-1, Z) ∨ x < 0 ∧ wpL(x ← -x) ∨ x=0 ∧ post ] = { assignment rule, found wpL = Z(x ← x-1) } the strongest solution of Z: [ Z ≡ x > 0 ∧ Z(x ← x-1) ∨ x < 0 ∧ Z(x ← x-1) (x ← -x) ∨ x=0 ∧ post] = { substitution } the strongest solution of Z:[ Z ≡ x > 0 ∧ Z(x ← x-1) ∨ x < 0 ∧ Z(x ← -x-1) ∨ x=0 ∧ post ] = { solve the equation by approximation } post(x ← 0) Therefore, wp(S, post) = post(x ← 0). == Other predicate transformers == === Weakest liberal precondition === An important variant of the weakest precondition is the weakest liberal precondition w l p ( S , R ) {\displaystyle wlp(S,R)} , which yields the weakest condition under which S either does not terminate or establishes R. It therefore differs from wp in not guaranteeing termination. 
Hence it corresponds to Hoare logic in partial correctness: for the statement language given above, wlp differs from wp only on the while-loop, in not requiring a variant (see above). === Strongest postcondition === Given S a statement and R a precondition (a predicate on the initial state), then s p ( S , R ) {\displaystyle sp(S,R)} is their strongest-postcondition: it implies any postcondition satisfied by the final state of any execution of S, for any initial state satisfying R. In other words, a Hoare triple { P } S { Q } {\displaystyle \{P\}S\{Q\}} is provable in Hoare logic if and only if the predicate below holds: ∀ x , s p ( S , P ) ⇒ Q {\displaystyle \forall x,sp(S,P)\Rightarrow Q} Usually, strongest-postconditions are used in partial correctness. Hence, we have the following relation between weakest-liberal-preconditions and strongest-postconditions: ( ∀ x , P ⇒ w l p ( S , Q ) ) ⇔ ( ∀ x , s p ( S , P ) ⇒ Q ) {\displaystyle (\forall x,P\Rightarrow wlp(S,Q))\ \Leftrightarrow \ (\forall x,sp(S,P)\Rightarrow Q)} For example, on assignment we have: s p ( x := E , R ) = ∃ y , x = E [ x ← y ] ∧ R [ x ← y ] {\displaystyle sp(x:=E,R)\ =\ \exists y,x=E[x\leftarrow y]\wedge R[x\leftarrow y]} Above, the logical variable y represents the initial value of variable x. Hence, s p ( x := x − 5 , x > 15 ) = ∃ y , x = y − 5 ∧ y > 15 ⇔ x > 10 {\displaystyle sp(x:=x-5,x>15)\ =\ \exists y,x=y-5\wedge y>15\ \Leftrightarrow \ x>10} On sequence, it appears that sp runs forward (whereas wp runs backward): s p ( S 1 ; S 2 , R ) = s p ( S 2 , s p ( S 1 , R ) ) {\displaystyle sp(S1;S2,R)\ =\ sp(S2,sp(S1,R))} === Win and sin predicate transformers === Leslie Lamport has suggested win and sin as predicate transformers for concurrent programming. == Predicate transformers properties == This section presents some characteristic properties of predicate transformers. Below, S denotes a predicate transformer (a function between two predicates on the state space) and P a predicate. For instance, S(P) may denote wp(S,P) or sp(S,P). We keep x as the variable of the state space. === Monotonic === Predicate transformers of interest (wp, wlp, and sp) are monotonic.
A predicate transformer S is monotonic if and only if: ( ∀ x : P : Q ) ⇒ ( ∀ x : S ( P ) : S ( Q ) ) {\displaystyle (\forall x:P:Q)\Rightarrow (\forall x:S(P):S(Q))} This property is related to the consequence rule of Hoare logic. === Strict === A predicate transformer S is strict iff: S ( F ) ⇔ F {\displaystyle S({\texttt {F}})\ \Leftrightarrow \ {\texttt {F}}} For instance, wp is artificially made strict, whereas wlp is generally not. In particular, if statement S may not terminate then w l p ( S , F ) {\displaystyle wlp(S,{\texttt {F}})} is satisfiable. We have w l p ( while true do skip done , F ) ⇔ T {\displaystyle wlp({\texttt {while}}\ {\texttt {true}}\ {\texttt {do}}\ {\texttt {skip}}\ {\texttt {done}},{\texttt {F}})\ \Leftrightarrow {\texttt {T}}} Indeed, T is a valid invariant of that loop. The non-strict but monotonic or conjunctive predicate transformers are called miraculous and can also be used to define a class of programming constructs, in particular, jump statements, which Dijkstra cared less about. Those jump statements include straight goto L, break and continue in a loop and return statements in a procedure body, exception handling, etc. It turns out that all jump statements are executable miracles, i.e. they can be implemented but not strict. === Terminating === A predicate transformer S is terminating if: S ( T ) ⇔ T {\displaystyle S({\texttt {T}})\ \Leftrightarrow \ {\texttt {T}}} Actually, this terminology makes sense only for strict predicate transformers: indeed, w p ( S , T ) {\displaystyle wp(S,{\texttt {T}})} is the weakest-precondition ensuring termination of S. It seems that naming this property non-aborting would be more appropriate: in total correctness, non-termination is abortion, whereas in partial correctness, it is not. === Conjunctive === A predicate transformer S is conjunctive iff: S ( P ∧ Q ) ⇔ S ( P ) ∧ S ( Q ) {\displaystyle S(P\wedge Q)\ \Leftrightarrow \ S(P)\wedge S(Q)} This is the case for w p ( S , . 
) {\displaystyle wp(S,.)} , even if statement S is non-deterministic as a selection statement or a specification statement. === Disjunctive === A predicate transformer S is disjunctive iff: S ( P ∨ Q ) ⇔ S ( P ) ∨ S ( Q ) {\displaystyle S(P\vee Q)\ \Leftrightarrow \ S(P)\vee S(Q)} This is generally not the case of w p ( S , . ) {\displaystyle wp(S,.)} when S is non-deterministic. Indeed, consider a non-deterministic statement S choosing an arbitrary Boolean. This statement is given here as the following selection statement: S = if true → x := 0 [ ] true → x := 1 fi {\displaystyle S\ =\ {\texttt {if}}\ {\texttt {true}}\rightarrow x:=0\ [\!]\ {\texttt {true}}\rightarrow x:=1\ {\texttt {fi}}} Then, w p ( S , R ) {\displaystyle wp(S,R)} reduces to the formula R [ x ← 0 ] ∧ R [ x ← 1 ] {\displaystyle R[x\leftarrow 0]\wedge R[x\leftarrow 1]} . Hence, w p ( S , x = 0 ∨ x = 1 ) {\displaystyle wp(S,\ x=0\vee x=1)} reduces to the tautology ( 0 = 0 ∨ 0 = 1 ) ∧ ( 1 = 0 ∨ 1 = 1 ) {\displaystyle (0=0\vee 0=1)\wedge (1=0\vee 1=1)} Whereas, the formula w p ( S , x = 0 ) ∨ w p ( S , x = 1 ) {\displaystyle wp(S,x=0)\vee wp(S,x=1)} reduces to the wrong proposition ( 0 = 0 ∧ 1 = 0 ) ∨ ( 1 = 0 ∧ 1 = 1 ) {\displaystyle (0=0\wedge 1=0)\vee (1=0\wedge 1=1)} . == Applications == Computations of weakest-preconditions are largely used to statically check assertions in programs using a theorem-prover (like SMT-solvers or proof assistants): see Frama-C or ESC/Java2. Unlike many other semantic formalisms, predicate transformer semantics was not designed as an investigation into foundations of computation. Rather, it was intended to provide programmers with a methodology to develop their programs as "correct by construction" in a "calculation style". This "top-down" style was advocated by Dijkstra and N. Wirth. It has been formalized further by R.-J. Back and others in the refinement calculus. Some tools like B-Method now provide automated reasoning in order to promote this methodology. 
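The asymmetry between the Conjunctive and Disjunctive properties above can also be checked concretely for the Boolean-choosing selection statement. In this illustrative Python sketch (an assumption of this edit: predicates are modelled as callables over a state dictionary, and the helper names assign and choice are hypothetical), demonic choice conjoins the preconditions of its branches:

```python
# wp for a demonic choice between two always-enabled branches:
# wp(if true -> S1 [] true -> S2 fi, R) = wp(S1, R) and wp(S2, R).
# Predicates are modelled as callables over a state dictionary.

def assign(var, expr):
    # wp(x := E, R) = R[x <- E], evaluated semantically.
    return lambda R: (lambda s: R({**s, var: expr(s)}))

def choice(S1, S2):
    # Demonic nondeterminism: the postcondition must hold whichever branch runs.
    return lambda R: (lambda s: S1(R)(s) and S2(R)(s))

# S sets x arbitrarily to 0 or to 1.
S = choice(assign("x", lambda s: 0), assign("x", lambda s: 1))

P = lambda s: s["x"] == 0
Q = lambda s: s["x"] == 1
s0 = {"x": 7}  # the initial value of x is irrelevant for these preconditions

# Conjunctive: wp(S, P and Q) agrees with wp(S, P) and wp(S, Q).
assert S(lambda s: P(s) and Q(s))(s0) == (S(P)(s0) and S(Q)(s0))

# Not disjunctive: wp(S, P or Q) holds everywhere,
# while wp(S, P) or wp(S, Q) holds nowhere.
assert S(lambda s: P(s) or Q(s))(s0)
assert not (S(P)(s0) or S(Q)(s0))
```

The last two assertions mirror the tautology and the false proposition computed in the Disjunctive section: demonic choice guarantees x = 0 ∨ x = 1, but guarantees neither disjunct on its own.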
In the meta-theory of Hoare logic, weakest-preconditions appear as a key notion in the proof of relative completeness. == Beyond predicate transformers == === Weakest-preconditions and strongest-postconditions of imperative expressions === In predicate transformer semantics, expressions are restricted to terms of the logic (see above). However, this restriction seems too strong for most existing programming languages, where expressions may have side effects (a call to a function having a side effect), or may not terminate or may abort (like division by zero). There are many proposals to extend weakest-preconditions or strongest-postconditions for imperative expression languages and in particular for monads. Among them, Hoare Type Theory combines Hoare logic for a Haskell-like language, separation logic and type theory. This system is currently implemented as a Coq library called Ynot. In this language, evaluation of expressions corresponds to computations of strongest-postconditions. === Probabilistic Predicate Transformers === Probabilistic Predicate Transformers are an extension of predicate transformers for probabilistic programs. Indeed, such programs have many applications in cryptography (hiding of information using some randomized noise) and distributed systems (symmetry breaking). == See also == Axiomatic semantics — includes predicate transformer semantics Dynamic logic — where predicate transformers appear as modalities Formal semantics of programming languages — an overview == Notes == == References ==
Wikipedia/Predicate_transformer_semantics
In computer science, an abstract semantic graph (ASG) or term graph is a form of abstract syntax in which an expression of a formal or programming language is represented by a graph whose vertices are the expression's subterms. An ASG is at a higher level of abstraction than an abstract syntax tree (or AST), which is used to express the syntactic structure of an expression or program. ASGs are more complex and concise than ASTs because they may contain shared subterms (also known as "common subexpressions"). Abstract semantic graphs are often used as an intermediate representation by compilers to store the results of performing common subexpression elimination upon abstract syntax trees. ASTs are trees and are thus incapable of representing shared terms. ASGs are usually directed acyclic graphs (DAG), although in some applications graphs containing cycles may be permitted. For example, a graph containing a cycle might be used to represent the recursive expressions that are commonly used in functional programming languages as non-looping iteration constructs. The mutability of these types of graphs is studied in the field of graph rewriting. The nomenclature term graph is associated with the field of term graph rewriting, which involves the transformation and processing of expressions by the specification of rewriting rules, whereas abstract semantic graph is used when discussing linguistics, programming languages, type systems and compilation. Abstract syntax trees are not capable of sharing subexpression nodes because it is not possible for a node in a proper tree to have more than one parent. Although this conceptual simplicity is appealing, it may come at the cost of redundant representation and, in turn, possibly inefficiently duplicating the computation of identical terms. For this reason ASGs are often used as an intermediate language at a subsequent compilation stage to abstract syntax tree construction via parsing.
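The sharing that distinguishes an ASG from an AST can be illustrated with hash-consing, one common way of building such graphs: structurally identical subterms are constructed at most once, so repeated subexpressions become shared nodes of a DAG. The Python sketch below is illustrative only (the names Node and count_nodes are assumptions of this edit, not a reference implementation):

```python
# Hash-consing sketch: structurally equal terms map to one shared node,
# so building (a+b)*(a+b) yields a DAG rather than a tree.

class Node:
    _table = {}  # (op, children) -> unique node

    def __new__(cls, op, *children):
        # Children are themselves hash-consed, so identity-based tuple
        # keys coincide with structural equality.
        key = (op, children)
        if key not in cls._table:
            node = super().__new__(cls)
            node.op, node.children = op, children
            cls._table[key] = node
        return cls._table[key]

a, b = Node("a"), Node("b")
expr = Node("*", Node("+", a, b), Node("+", a, b))  # (a+b)*(a+b)

# The subterm a+b exists exactly once and is shared by both operands.
assert expr.children[0] is expr.children[1]

def count_nodes(root, seen=None):
    # Count distinct nodes reachable in the graph.
    seen = set() if seen is None else seen
    if id(root) in seen:
        return 0
    seen.add(id(root))
    return 1 + sum(count_nodes(c, seen) for c in root.children)

assert count_nodes(expr) == 4  # *, +, a, b; the corresponding AST has 7 nodes
```

An AST for the same expression would need seven nodes, since a proper tree cannot give the duplicated subterm a single shared node; common subexpression elimination over an AST effectively performs this merging.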
An abstract semantic graph is typically constructed from an abstract syntax tree by a process of enrichment and abstraction. The enrichment can for example be the addition of back-pointers, edges from an identifier node (where a variable is being used) to a node representing the declaration of that variable. The abstraction can entail the removal of details which are relevant only in parsing, not for semantics. == Example: Code Refactoring == For example, consider the case of code refactoring. To represent the implementation of a function that takes an input argument, the received parameter is conventionally given an arbitrary, distinct name in the source code so that it can be referenced. The abstract representation of this conceptual entity, a "function argument" instance, will likely be mentioned in the function signature, and also one or more times within the implementation code body. Since the function as a whole is the parent of both its header or "signature" information and its implementation body, an AST would not be able to use the same node to co-identify the multiple uses or appearances of the argument entity. This is solved by the DAG nature of an ASG. A key advantage of having a single, distinct node identity for any given code element is that each element's properties are, by definition, uniquely stored. This simplifies refactoring operations, because there is exactly one existential nexus for any given property instantiation. If the developer decides to change a property value such as the "name" of any code element (the "function argument" in this example), the ASG inherently exposes that value in exactly one place, and it follows that any such property changes are implicitly, trivially, and immediately propagated globally. == See also == Ontology (computer science) Semantic Web Semantic Grid == References == == Further reading == Dean, Tom. "CPPX — C/C++ Fact Extractor". Devanbu, Premkumar T.; Rosenblum, David S.; Wolf, Alexander L.
"Generating Testing and Analysis Tools with Aria". Archived from the original on 2006-05-27. Mamas, Evan; Kontogiannis, Kostas (2000). Towards Portable Source Code Representations Using XML. Seventh Working Conference on Reverse Engineering. pp. 172–182. CiteSeerX 10.1.1.88.6173. Raghavan, Shruti; Rohana, Rosanne; Leon, David; Podgurski, Andy; Augustine, Vinay (2004). "Dex: a semantic-graph differencing tool for studying changes in large code bases". 20th IEEE International Conference on Software Maintenance, 2004. Proceedings. IEEE International Conference on Software Maintenance. pp. 188–197. CiteSeerX 10.1.1.228.9292. doi:10.1109/icsm.2004.1357803. ISBN 0-7695-2213-0. Archived from the original on 2008-01-17. Retrieved 2007-05-01.
Wikipedia/Abstract_semantic_graph
Force dynamics is a semantic category that describes the way in which entities interact with reference to force. Force dynamics gained a good deal of attention in cognitive linguistics due to its claims of psychological plausibility and the elegance with which it generalizes ideas not usually considered in the same context. The semantic category of force dynamics pervades language on several levels. Not only does it apply to expressions in the physical domain like leaning on or dragging, but it also plays an important role in expressions involving psychological forces (e.g. wanting or being urged). Furthermore, the concept of force dynamics can be extended to discourse. For example, the situation in which speakers A and B argue, after which speaker A gives in to speaker B, exhibits a force dynamic pattern. == Context == Introduced by cognitive linguist Leonard Talmy in 1981, force dynamics started out as a generalization of the traditional notion of the causative, dividing causation into finer primitives and considering the notions of letting, hindering, and helping. Talmy further developed the field in his 1985, 1988 and 2000 works. Talmy places force dynamics within the broader context of cognitive semantics. In his view, a general idea underlying this discipline is the existence of a fundamental distinction in language between closed-class (grammatical) and open-class (lexical) categories. This distinction is motivated by the fact that language uses certain categories of notions to structure and organize meaning, while other categories are excluded from this function. For example, Talmy remarks that many languages mark the number of nouns in a systematic way, but that nouns are not marked in the same way for color. Force dynamics is considered to be one of the closed-class notional categories, together with such generally recognized categories as number, aspect, mood, and evidentiality.
Aspects of force dynamics have been incorporated into the theoretical frameworks of Mark Johnson (1987), Steven Pinker (1997) and Ray Jackendoff (1990) (see Deane 1996 for a critical review of Jackendoff’s version of force dynamics). Force dynamics plays an important role in several recent accounts of modal verbs in various languages (including Brandt 1992, Achard 1996, Boye 2001, and Vandenberghe 2002). Other applications of force dynamics include use in discourse analysis (Talmy 1988, 2000), lexical semantics (Deane 1992, Da Silva 2003) and morphosyntactical analysis (Chun & Zubin 1990, Langacker 1999:352-4). == Theoretical outline == === Basic concepts === Expressions can exhibit a force dynamic pattern or can be force-dynamically neutral. A sentence like The door is closed is force-dynamically neutral, because there are no forces opposing each other. The sentence The door cannot open, on the other hand, exhibits a force dynamic pattern: apparently the door has some tendency toward opening, but there is some other force preventing it from being opened (e.g., it may be jammed). A basic feature of a force-dynamic expression is the presence of two force-exerting elements. Languages make a distinction between these two forces based on their roles. The force entity that is in focus is called the agonist and the force entity opposing it is the antagonist (see a, figure 1). In the example, the door is the agonist and the force preventing the door from being opened is the antagonist. Force entities have an intrinsic force tendency, either toward action or toward rest. For the agonist, this tendency is marked with an arrowhead (action) or with a large dot (rest) (see b, figure 1). Since the antagonist by definition has an opposing tendency, it need not be marked. In the example, the door has a tendency toward action. A third relevant factor is the balance between the two forces.
The forces are out of balance by definition; if the two forces are equally strong, the situation is not interesting from a force-dynamic point of view. One force is therefore stronger or weaker than the other. A stronger force is marked with a plus sign, a weaker force with a minus sign (c, figure 1). In the example, the antagonist is stronger, since it actually holds back the door. The outcome of the force-dynamic scenario depends on both the intrinsic tendency and the balance between the forces. The result is represented by a line beneath the agonist and antagonist. The line has an arrowhead if the outcome is action and a large dot if the outcome is rest (d, figure 1). In the example, the door stays closed; the antagonist succeeds in preventing it from being opened. The sentence 'The door cannot open' can be force-dynamically represented by the diagram at the top of this page. Using these basic concepts, several generalizations can be made. The force dynamic situations in which the agonist is stronger are expressed in sentences like ‘X happened despite Y’, while situations in which the antagonist is stronger are expressed in the form of ‘X happened because of Y’. In the latter, a form of causation that Talmy termed extended causation is captured. === More complexity === More possibilities arise when another variable is introduced: change over time. This variable is exemplified by such expressions as A gust of wind made the pages of my book turn. In force dynamic terms, the situation can be described as the entering of an antagonist (the wind) that is stronger in force than the agonist (the pages) and changes the force tendency of the pages from a state of rest to a state of action (turning). In force dynamic diagrams, this motion (‘change over time’) of the antagonist is represented by an arrow. The diagrams in Figure 2 to the right combine a shifting antagonist with agonists of varying force tendencies. The following sentences are examples for these patterns: a.
A gust of wind made the pages of my book turn. b. The appearance of the headmaster made the pupils calm down. c. The breaking of the dam let the water flow from the storage lake. d. The abating of the wind let the sailboat slow down. In this series of scenarios, various kinds of causation are described. Furthermore, a basic relationship between the concepts of ‘causing something to happen’ and ‘letting something happen’ emerges, definable in terms of the balance between the force entities and the resultants of the interaction. Force entities do not have to be physical entities. Force dynamics is directly applicable to terms involving psychological forces like to persuade and to urge. The force dynamic aspect of the sentence Herbie did not succeed in persuading Diana to sing another song can be graphically represented as easily as the earlier example sentence The door cannot open (and, incidentally, by the same diagram). In addition, force entities do not have to be physically separate. A case in point is reflexive force dynamic constructions of the type Chet was dragging himself instead of walking. It is perfectly possible to represent this in a force-dynamic diagram (representing Chet’s will as the agonist keeping the body, the antagonist, in motion). Thus, even though Chet is one person, his will and his body are conceptualized separately. === Psychological basis === The key elements of force dynamics are very basic to human cognition. Deane (1996:56) commented that “[f]rom a cognitive perspective, Talmy’s theory is a striking example of a psychologically plausible theory of causation. Its key elements are such concepts as the (amount of) force exerted by an entity, the balance between two such forces, and the force vector which results from their interaction.
Such concepts have an obvious base in ordinary motor activities: the brain must be able to calculate the force vector produced by muscular exertion, and calculate the probable outcome when that force is exerted against an object in the outside world.” In cognitive linguistic terms, force dynamic expressions reflect a conceptual archetype because of their conceptual basality (Langacker 1999:24). In this view, expressions involving psychological forces reflect an extension of the category of force dynamics from the physical domain to the psychological domain. == Limitations and criticism == From the perspective of lexical semantics, some people have argued that force dynamics fails to be explanatory. For example, Goddard (1998:262–266) raised the objection that "a visual representation cannot — in and of itself — convey a meaning. (…) From a semiotic point of view, a diagram never stands alone; it always depends on a system of verbal captions, whether these are explicit or implied." He goes on to attack the verbal definition of causation Talmy provides, claiming that it is circular and obscure. Furthermore, Goddard objects to the use of the "semantically obscure concept of force". However, Goddard's objections lose some of their strength in light of the fact that force dynamics does not present itself as a complete semantic description of the constructions involving force-dynamic concepts. Another objection regarding force dynamics is the question, raised by Goddard (1998:81), of how different representational devices are supposed to interact with one another. As the field of cognitive linguistics is still in a state of theoretical flux, no systematic account addresses this issue yet. However, it is an objection many cognitive linguists are aware of.
Some cognitive linguists have replied to such objections by pointing out that the goal of Cognitive Linguistics is not to construct a formal system in which theorems are proved, but rather to better understand the cognitive basis of language (cf. Newman 1996:xii). Jackendoff (1990, 1996:120–3), in the process of incorporating aspects of force dynamics into his theory of conceptual semantics, has proposed a reconfiguration of some of its basic notions. In Jackendoff's view, this reconfiguration "conforms better to the syntax of force-dynamic verbs" (1996:121). == References == === Primary sources === Talmy, Leonard (2000) ‘Force Dynamics in Language and Cognition’ Chapter 7 of Talmy, Toward a cognitive semantics vol I: Concept structuring systems. Cambridge: MIT Press. [This chapter is a modestly rewritten version of:] Talmy, Leonard (1988a) ‘Force Dynamics in language and cognition’ In Cognitive Science, 12, 1, 49–100. [This article is a moderately rewritten version of:] Talmy, Leonard (1985a) ‘Force Dynamics in language and thought’ In Papers from the Regional Meetings, Chicago Linguistic Society, 21, 293–337. === Secondary sources === Achard, Michel (1996) ‘French modals and speaker control’ In Goldberg, Adele (ed.), Conceptual Structure, Discourse and Language. Stanford, CA: CSLI. Boye, Kasper (2001) ‘The Force-Dynamic core meaning of Danish modal verbs’ In Acta Linguistica Hafniensia, 33, 19–66. Brandt, Per Aage (1989) ‘Agonistique et analyse dynamique catastrophiste du modal et de l’aspectuel: quelques remarques sur la linguistique cognitive de L. Talmy’ In Semiotica, 77, 1–3, 151–162. Brandt, Per Aage (1992) La charpente modale du sens: Pour une sémio-linguistique morphogénétique et dynamique. Amsterdam: John Benjamins. Chun, Soon Ae & David A. Zubin (1990) ‘Experiential vs. Agentive Constructions in Korean Narrative’. In Proceedings of the Berkeley Linguistics Society, 16, 81–93.
Deane, Paul D (1992) ‘Polysemy as the consequence of internal conceptual complexity: the case of over’ In Proceedings of the Eastern States Conference on Linguistics (ESCOL), 9, 32–43. Deane, Paul D (1996) ‘On Jackendoff’s conceptual semantics’ In Cognitive Linguistics, 7, 1, 35–91. Goddard, Cliff (1998) Semantic Analysis: A Practical Introduction. New York: Oxford University Press. (esp. pp. 262–266) Jackendoff, Ray (1990) Semantic Structures. Cambridge, Mass.: MIT Press. Jackendoff, Ray (1996) ‘Conceptual semantics and cognitive linguistics’. In Cognitive Linguistics, 7, 1, 93–129. Johnson, Mark (1987) The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. Chicago: University of Chicago Press. Langacker, Ronald W. (1999) Grammar and Conceptualization. Cognitive Linguistics Research vol. 14. Berlin/New York: Mouton de Gruyter. Pinker, Steven (1997) How the Mind Works. New York: Norton. Silva, Augusto Soares da (2003) ‘Image schemas and category coherence: the case of the Portuguese verb deixar’. In Cognitive Approaches to Lexical Semantics, Cuyckens, Dirven & Taylor (eds.), 281–322. Sweetser, Eve (1982) ‘A proposal for uniting deontic and epistemic modals’. In Proceedings of the Eighth Annual Meeting of the Berkeley Linguistics Society. Berkeley, California: Berkeley Linguistics Society. Sweetser, Eve (1984) ‘Semantic structure and semantic change: A cognitive linguistic study of modality, perception, speech acts, and logical relations’. Doctoral dissertation, University of California, Berkeley. Talmy, Leonard (1976a) ‘Semantic causative types’ In Shibatani (ed.), Syntax and Semantics (vol. 6): The Grammar of Causative Constructions. New York: Academic Press. Talmy, Leonard (1981) ‘Force Dynamics’. Paper presented at the conference on Language and Mental Imagery, May 1981, University of California, Berkeley. Talmy, Leonard (1985b) ‘Force Dynamics as a generalization over causative’ In Georgetown University Round Table on Languages and Linguistics, 67–85.
Vandenberghe, Wim (2002) ‘Instigative Setting-Constructions: Force Dynamic Research on ‘New’ Types of Agency’ In Leuvense Bijdragen, 90, 4, 365–390. == External links == Presentation of Force Dynamics on the CogSci index. Toward a Cognitive Semantics — read-only online version of Talmy (2000) Toward a Cognitive Semantics. Force Dynamics in Language and Cognition — direct link to the chapter on Force Dynamics on the above webpage (PDF).
Wikipedia/Force_dynamics
In computer science, algebraic semantics is a formal approach to programming language theory that uses algebraic methods for defining, specifying, and reasoning about the behavior of programs. It is a form of axiomatic semantics that provides a mathematical framework for analyzing programs through the use of algebraic structures and equational logic. Algebraic semantics represents programs and data types as algebras—mathematical structures consisting of sets equipped with operations that satisfy certain equational laws. This approach enables rigorous formal verification of software by treating program properties as algebraic properties that can be proven through mathematical reasoning. A key advantage of algebraic semantics is its ability to separate the specification of what a program does from how it is implemented, supporting abstraction and modularity in software design. == Syntax == The syntax of an algebraic specification is formulated in two steps: (1) defining a formal signature of data types and operation symbols, and (2) interpreting the signature through sets and functions. === Definition of a signature === The signature of an algebraic specification defines its formal syntax. The word "signature" is used like the concept of "key signature" in musical notation. A signature consists of a set S of data types, known as sorts, together with a family Σ of sets, each set containing operation symbols (or simply symbols) that relate the sorts. We use Σ_{s1 s2 ... sn, s} to denote the set of operation symbols relating the sorts s1, s2, ..., sn ∈ S to the sort s ∈ S.
For example, for the signature of integer stacks, we define two sorts, namely, int and stack, and the following family of operation symbols:

Σ_{Λ, stack} = {new}
Σ_{int stack, stack} = {push}
Σ_{stack, stack} = {pop}
Σ_{stack, int} = {depth, top}

where Λ denotes the empty string. === Set-theoretic interpretation of signature === An algebra A interprets the sorts and operation symbols as sets and functions. Each sort s is interpreted as a set A_s, which is called the carrier of A of sort s, and each symbol σ in Σ_{s1 s2 ... sn, s} is mapped to a function σ_A : A_{s1} × A_{s2} × ... × A_{sn} → A_s, which is called an operation of A. With respect to the signature of integer stacks, we interpret the sort int as the set Z of integers, and interpret the sort stack as the set Stack of integer stacks.
We further interpret the family of operation symbols as the following functions:

new : → Stack
push : Z × Stack → Stack
pop : Stack → Stack
depth : Stack → Z
top : Stack → Z

== Semantics == Semantics refers to the meaning or behavior. An algebraic specification provides both the meaning and behavior of the object in question. === Equational axioms === The semantics of an algebraic specification is defined by axioms in the form of conditional equations. With respect to the signature of integer stacks, we have the following axioms: For any z ∈ Z and s ∈ Stack,

A1: pop(push(z, s)) = s
A2: depth(push(z, s)) = depth(s) + 1
A3: top(push(z, s)) = z
A4: pop(new) = new
A5: depth(new) = 0
A6: top(s) = −404 if depth(s) = 0

where "−404" indicates "not found". === Mathematical semantics === The mathematical semantics (also known as denotational semantics) of a specification refers to its mathematical meaning. The mathematical semantics of an algebraic specification is the class of all algebras that satisfy the specification. In particular, the classic approach by Goguen et al. takes the initial algebra (unique up to isomorphism) as the "most representative" model of the algebraic specification.
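One concrete algebra satisfying this specification can be written down directly. The sketch below is an illustrative Python rendering (my encoding choice, not part of the formalism): the carrier of sort stack is the set of tuples of integers, each operation symbol is interpreted as a function, and the axioms A1–A6 are spot-checked on sample values.

```python
# One concrete algebra for the integer-stack signature:
# carrier of sort `stack` = tuples of ints, carrier of sort `int` = int.

def new():                 # new : -> Stack
    return ()

def push(z, s):            # push : Z x Stack -> Stack
    return (z,) + s

def pop(s):                # pop : Stack -> Stack
    return s[1:] if s else ()   # empty case realizes A4: pop(new) = new

def depth(s):              # depth : Stack -> Z
    return len(s)

def top(s):                # top : Stack -> Z
    return s[0] if s else -404  # empty case realizes A6: "not found"

# Spot-check the axioms on a few sample values.
for z in (0, 7, -3):
    for s in (new(), push(1, new()), push(2, push(1, new()))):
        assert pop(push(z, s)) == s                # A1
        assert depth(push(z, s)) == depth(s) + 1   # A2
        assert top(push(z, s)) == z                # A3
assert pop(new()) == new()                         # A4
assert depth(new()) == 0                           # A5
assert top(new()) == -404                          # A6
```

Since all six axioms hold for these interpretations, this algebra belongs to the class of models that constitutes the specification's mathematical semantics.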
=== Operational semantics === The operational semantics of a specification means how to interpret it as a sequence of computational steps. We define a ground term as an algebraic expression without variables. The operational semantics of an algebraic specification refers to how ground terms can be transformed using the given equational axioms as left-to-right rewrite rules, until such terms reach their normal forms, where no more rewriting is possible. Consider the axioms for integer stacks. Let "⇛" denote "rewrites to".

top(pop(pop(push(1, push(2, push(3, pop(new)))))))
⇛ top(pop(pop(push(1, push(2, push(3, new))))))   (by Axiom A4)
⇛ top(pop(push(2, push(3, new))))                 (by Axiom A1)
⇛ top(push(3, new))                               (by Axiom A1)
⇛ 3                                               (by Axiom A3)

=== Canonical property === An algebraic specification is said to be confluent (also known as Church–Rosser) if the rewriting of any ground term leads to the same normal form. It is said to be terminating if the rewriting of any ground term will lead to a normal form after a finite number of steps. The algebraic specification is said to be canonical (also known as convergent) if it is both confluent and terminating.
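This rewriting process is easy to mechanize. The following is a minimal illustrative sketch (the nested-tuple encoding and innermost-first strategy are my choices, not prescribed by the formalism): ground terms are nested tuples, and the stack axioms are applied as left-to-right rules until a normal form is reached.

```python
def rewrite(t):
    """Normalize a ground stack term by applying axioms A1-A6
    as left-to-right rewrite rules, innermost subterms first."""
    if not isinstance(t, tuple):            # integer literal: already normal
        return t
    op, *args = t
    args = [rewrite(a) for a in args]       # innermost-first strategy
    if op == 'pop':
        s = args[0]
        if s == ('new',):                   # A4: pop(new) = new
            return ('new',)
        if s[0] == 'push':                  # A1: pop(push(z, s)) = s
            return s[2]
    if op == 'depth':
        s = args[0]
        if s == ('new',):                   # A5: depth(new) = 0
            return 0
        if s[0] == 'push':                  # A2: depth(push(z, s)) = depth(s) + 1
            return rewrite(('depth', s[2])) + 1
    if op == 'top':
        s = args[0]
        if s == ('new',):                   # A6: top(s) = -404 if depth(s) = 0
            return -404
        if s[0] == 'push':                  # A3: top(push(z, s)) = z
            return s[1]
    return (op, *args)                      # no rule applies: normal form

term = ('top', ('pop', ('pop', ('push', 1,
        ('push', 2, ('push', 3, ('pop', ('new',))))))))
assert rewrite(term) == 3                   # matches the derivation above
```

The example term is exactly the one in the derivation, and the rewriter reaches the same normal form, 3, illustrating how the operational semantics agrees with the equational axioms for this canonical specification.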
In other words, it is canonical if the rewriting of any ground term leads to a unique normal form after a finite number of steps. Given any canonical algebraic specification, the mathematical semantics agrees with the operational semantics. As a result, canonical algebraic specifications have been widely applied to address program correctness issues. For example, numerous researchers have applied such specifications to the testing of observational equivalence of objects in object-oriented programming. See Chen and Tse as a secondary source that provides a historical review of prominent research from 1981 to 2013. == See also == Algebraic semantics (mathematical logic) OBJ (programming language) Joseph Goguen == References ==
Wikipedia/Algebraic_semantics_(computer_science)
In computer science, model checking or property checking is a method for checking whether a finite-state model of a system meets a given specification (also known as correctness). This is typically associated with hardware or software systems, where the specification contains liveness requirements (such as avoidance of livelock) as well as safety requirements (such as avoidance of states representing a system crash). In order to solve such a problem algorithmically, both the model of the system and its specification are formulated in some precise mathematical language. To this end, the problem is formulated as a task in logic, namely to check whether a structure satisfies a given logical formula. This general concept applies to many kinds of logic and many kinds of structures. A simple model-checking problem consists of verifying whether a formula in the propositional logic is satisfied by a given structure. == Overview == Property checking is used for verification when two descriptions are not equivalent. During refinement, the specification is complemented with details that are unnecessary in the higher-level specification. There is no need to verify the newly introduced properties against the original specification since this is not possible. Therefore, the strict bi-directional equivalence check is relaxed to a one-way property check. The implementation or design is regarded as a model of the system, whereas the specifications are properties that the model must satisfy. An important class of model-checking methods has been developed for checking models of hardware and software designs where the specification is given by a temporal logic formula. Pioneering work in temporal logic specification was done by Amir Pnueli, who received the 1996 Turing Award for "seminal work introducing temporal logic into computing science". Model checking began with the pioneering work of E. M. Clarke and E. A. Emerson, and of J. P. Queille and J. Sifakis.
Clarke, Emerson, and Sifakis shared the 2007 Turing Award for their seminal work founding and developing the field of model checking. Model checking is most often applied to hardware designs. For software, because of undecidability (see computability theory) the approach cannot be fully algorithmic, apply to all systems, and always give an answer; in the general case, it may fail to prove or disprove a given property. In embedded-systems hardware, it is possible to validate a specification delivered, e.g., by means of UML activity diagrams or control-interpreted Petri nets. The structure is usually given as a source code description in an industrial hardware description language or a special-purpose language. Such a program corresponds to a finite-state machine (FSM), i.e., a directed graph consisting of nodes (or vertices) and edges. A set of atomic propositions is associated with each node, typically stating which memory elements are one. The nodes represent states of a system, the edges represent possible transitions that may alter the state, while the atomic propositions represent the basic properties that hold at a point of execution. Formally, the problem can be stated as follows: given a desired property, expressed as a temporal logic formula p, and a structure M with initial state s, decide if M, s ⊨ p. If M is finite, as it is in hardware, model checking reduces to a graph search.
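For a simple safety property ("no reachable state is bad"), the graph search over a finite M can be sketched directly. The sketch below is illustrative only; the toy counter model and predicate names are invented, not drawn from any particular tool.

```python
from collections import deque

def check_safety(initial, transitions, bad):
    """Explicit-state BFS: return None if no reachable state satisfies
    `bad`, otherwise a counterexample path from `initial` to a bad state."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if bad(state):
            path = []                      # reconstruct the counterexample
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in transitions(state):
            if nxt not in parent:          # visit each state at most once
                parent[nxt] = state
                queue.append(nxt)
    return None                            # property holds on all reachable states

# Toy model: a modulo-8 counter that must never reach 6.
succ = lambda s: [(s + 1) % 8]
cex = check_safety(0, succ, lambda s: s == 6)
assert cex == [0, 1, 2, 3, 4, 5, 6]        # shortest counterexample found
```

Returning a concrete counterexample path, rather than just "fails", mirrors a defining feature of practical model checkers: a violated property comes with an execution trace that exhibits the violation.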
Historically, the first symbolic methods used BDDs. After the success of propositional satisfiability in solving the planning problem in artificial intelligence (see satplan) in 1996, the same approach was generalized to model checking for linear temporal logic (LTL): the planning problem corresponds to model checking for safety properties. This method is known as bounded model checking. The success of Boolean satisfiability solvers in bounded model checking led to the widespread use of satisfiability solvers in symbolic model checking. === Example === One example of such a system requirement: Between the time an elevator is called at a floor and the time it opens its doors at that floor, the elevator can arrive at that floor at most twice. The authors of "Patterns in Property Specification for Finite-State Verification" translate this requirement into the following LTL formula:

□( (call ∧ ◇open) → ( (¬atfloor ∧ ¬open) U ( open ∨ ( (atfloor ∧ ¬open) U
   ( open ∨ ( (¬atfloor ∧ ¬open) U ( open ∨ ( (atfloor ∧ ¬open) U
   ( open ∨ (¬atfloor U open) ) ) ) ) ) ) ) ) )

Here, □ should be read as "always", ◇ as "eventually", U as "until" and the other symbols are standard logical symbols, ∨ for "or", ∧ for "and" and ¬ for "not".
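For intuition only, the three temporal operators can be evaluated over a finite trace of states. Note that this finite-trace reading is a deliberate simplification of standard LTL, which is defined over infinite paths; the trace and predicates below are invented for illustration.

```python
# Finite-trace evaluation of the core temporal operators.
# A "trace" is a list of states; p and q are predicates on one state.

def holds_always(p, trace):                # "box p": p at every position
    return all(p(s) for s in trace)

def holds_eventually(p, trace):            # "diamond p": p at some position
    return any(p(s) for s in trace)

def holds_until(p, q, trace):              # "p U q": q holds at some position,
    for s in trace:                        # and p holds at every position
        if q(s):                           # strictly before it
            return True
        if not p(s):
            return False
    return False

# Toy elevator trace: states are (atfloor, open) pairs.
trace = [(False, False), (True, False), (True, True)]
door_open = lambda s: s[1]
assert holds_eventually(door_open, trace)
assert holds_until(lambda s: not door_open(s), door_open, trace)
```

The nested structure of the elevator formula above is just these operators composed: each "open ∨ (... U ...)" layer counts one more arrival at the floor before the doors must open.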
== Techniques == Model-checking tools face a combinatorial blow up of the state-space, commonly known as the state explosion problem, that must be addressed to solve most real-world problems. There are several approaches to combat this problem. Symbolic algorithms avoid ever explicitly constructing the graph for the FSM; instead, they represent the graph implicitly using a formula in quantified propositional logic. The use of binary decision diagrams (BDDs) was made popular by the work of Ken McMillan, as well as of Olivier Coudert and Jean-Christophe Madre, and the development of open-source BDD manipulation libraries such as CUDD and BuDDy. Bounded model-checking algorithms unroll the FSM for a fixed number of steps, k, and check whether a property violation can occur in k or fewer steps. This typically involves encoding the restricted model as an instance of SAT. The process can be repeated with larger and larger values of k until all possible violations have been ruled out (cf. iterative deepening depth-first search). Abstraction attempts to prove properties of a system by first simplifying it. The simplified system usually does not satisfy exactly the same properties as the original one so that a process of refinement may be necessary. Generally, one requires the abstraction to be sound (the properties proved on the abstraction are true of the original system); however, sometimes the abstraction is not complete (not all true properties of the original system are true of the abstraction). An example of abstraction is to ignore the values of non-Boolean variables and to only consider Boolean variables and the control flow of the program; such an abstraction, though it may appear coarse, may, in fact, be sufficient to prove e.g. properties of mutual exclusion. Counterexample-guided abstraction refinement (CEGAR) begins checking with a coarse (i.e. imprecise) abstraction and iteratively refines it.
When a violation (i.e. counterexample) is found, the tool analyzes it for feasibility (i.e., is the violation genuine or the result of an incomplete abstraction?). If the violation is feasible, it is reported to the user. If it is not, the proof of infeasibility is used to refine the abstraction and checking begins again. Model-checking tools were initially developed to reason about the logical correctness of discrete state systems, but have since been extended to deal with real-time and limited forms of hybrid systems. == First-order logic == Model checking is also studied in the field of computational complexity theory. Specifically, a first-order logical formula is fixed without free variables and the following decision problem is considered: Given a finite interpretation, for instance, one described as a relational database, decide whether the interpretation is a model of the formula. This problem is in the circuit class AC0. It is tractable when imposing some restrictions on the input structure: for instance, requiring that it has treewidth bounded by a constant (which more generally implies the tractability of model checking for monadic second-order logic), bounding the degree of every domain element, and more general conditions such as bounded expansion, locally bounded expansion, and nowhere-dense structures. These results have been extended to the task of enumerating all solutions to a first-order formula with free variables. 
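For a fixed first-order sentence, model checking over a finite interpretation reduces to nested loops over the domain, one loop per quantifier. The sketch below is illustrative (the domain and relation are invented); it checks the sentence ∀x ∃y E(x, y), "every element has an outgoing edge", against a relational-database-style interpretation.

```python
# A finite interpretation in the style of a relational database:
# a domain plus one binary relation E.
domain = {1, 2, 3}
E = {(1, 2), (2, 3), (3, 1)}

def models_forall_exists(domain, E):
    """Decide whether the interpretation satisfies "forall x exists y: E(x, y)".
    The loop-nesting depth mirrors the quantifier depth of the fixed formula,
    which is why the data complexity of this problem is so low."""
    return all(any((x, y) in E for y in domain) for x in domain)

assert models_forall_exists(domain, E) is True       # every node has a successor
assert models_forall_exists(domain, {(1, 2)}) is False  # 2 and 3 have none
```

Because the formula is fixed and only the interpretation varies, the running time is polynomial in the size of the structure, consistent with the AC0 data-complexity bound mentioned above.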
== Tools == Here is a list of significant model-checking tools:

Afra: a model checker for Rebeca, an actor-based language for modeling concurrent and reactive systems
Alloy (Alloy Analyzer)
BLAST (Berkeley Lazy Abstraction Software Verification Tool)
CADP (Construction and Analysis of Distributed Processes): a toolbox for the design of communication protocols and distributed systems
CPAchecker: an open-source software model checker for C programs, based on the CPA framework
ECLAIR: a platform for the automatic analysis, verification, testing, and transformation of C and C++ programs
FDR2: a model checker for verifying real-time systems modelled and specified as CSP processes
FizzBee: an easier-to-use alternative to TLA+ that uses a Python-like specification language, with both behavioral modeling like TLA+ and probabilistic modeling like PRISM
ISP: a code-level verifier for MPI programs
Java Pathfinder: an open-source model checker for Java programs
Libdmc: a framework for distributed model checking
mCRL2 toolset: based on ACP, distributed under the Boost Software License
NuSMV: a new symbolic model checker
PAT: an enhanced simulator, model checker and refinement checker for concurrent and real-time systems
PRISM: a probabilistic symbolic model checker
Roméo: an integrated tool environment for modelling, simulation, and verification of real-time systems modelled as parametric, time, and stopwatch Petri nets
SPIN: a general tool for verifying the correctness of distributed software models in a rigorous and mostly automated fashion
Storm: a model checker for probabilistic systems
TAPAs: a tool for the analysis of process algebra
TAPAAL: an integrated tool environment for modelling, validation, and verification of Timed-Arc Petri Nets
TLA+ model checker by Leslie Lamport
UPPAAL: an integrated tool environment for modelling, validation, and verification of real-time systems modelled as networks of timed automata
Zing: an experimental tool from Microsoft to validate state models of software at various levels: high-level protocol descriptions, work-flow specifications, web services, device drivers, and protocols in the core of the operating system; Zing is currently being used for developing drivers for Windows.

== See also == == References == == Further reading ==
Wikipedia/Model_checking
The theory of descriptions is the philosopher Bertrand Russell's most significant contribution to the philosophy of language. It is also known as Russell's theory of descriptions (commonly abbreviated as RTD). In short, Russell argued that the syntactic form of descriptions (phrases that take the form of "The flytrap" and "A flytrap") is misleading, as it does not correlate with their logical and/or semantic architecture. While descriptions may seem like fairly uncontroversial phrases, Russell argued that providing a satisfactory analysis of the linguistic and logical properties of a description is vital to clarity in important philosophical debates, particularly in semantics, epistemology and metaphysics. Since the first development of the theory in Russell's 1905 paper "On Denoting", RTD has been hugely influential and well received within the philosophy of language. However, it has not been without its critics. In particular, the philosophers P. F. Strawson and Keith Donnellan have given notable, well-known criticisms of the theory. Most recently, RTD has been defended by various philosophers and even developed in promising ways to bring it into harmony with generative grammar in Noam Chomsky's sense, particularly by Stephen Neale. Such developments have themselves been criticised, and debate continues. Russell viewed his theory of descriptions as a kind of analysis that is now called propositional analysis (not to be confused with propositional calculus). == Overview == Bertrand Russell's theory of descriptions was initially put forth in his 1905 essay "On Denoting", published in the journal of philosophy Mind. Russell's theory is focused on the logical form of expressions involving denoting phrases, which he divides into three groups: Denoting phrases which do not denote anything, for example "the current Emperor of Kentucky". Phrases which denote one definite object, for example "the present President of the U.S.A."
We need not know which object the phrase refers to for it to be unambiguous, for example "the cutest kitten" is a unique individual but his or her actual identity is unknown. Phrases which denote ambiguously, for example, "a flytrap". Indefinite descriptions constitute Russell's third group. Descriptions most frequently appear in the standard subject–predicate form. Russell put forward his theory of descriptions to solve a number of problems in the philosophy of language. The two major problems are (1) co-referring expressions and (2) non-referring expressions. The problem of co-referring expressions originated primarily with Gottlob Frege as the problem of informative identities. For example, if the morning star and the evening star are the same planet in the sky seen at different times of day (indeed, they are both the planet Venus: the morning star is the planet Venus seen in the morning sky and the evening star is the planet Venus seen in the evening sky), how is it that someone can think that the morning star rises in the morning but the evening star does not? This is apparently problematic because although the two expressions seem to denote the same thing, one cannot substitute one for the other, which one ought to be able to do with identical or synonymous expressions. The problem of non-referring expressions is that certain expressions that are meaningful do not truly refer to anything. For example, by "any dog is annoying" it is not meant that there is a particular individual dog, namely any dog, that has the property of being annoying (similar considerations go for "some dog", "every dog", "a dog", and so on). Likewise, by "the current Emperor of Kentucky is gray" it is not meant that there is some individual, namely the current Emperor of Kentucky, who has the property of being gray; Kentucky was never a monarchy, so there is currently no Emperor. Thus, what Russell wants to avoid is admitting mysterious non-existent objects into his ontology.
Furthermore, the law of the excluded middle requires that one of the following propositions, for example, must be true: either "the current Emperor of Kentucky is gray" or "it is not the case that the current Emperor of Kentucky is gray". Normally, propositions of the subject-predicate form are said to be true if and only if the subject is in the extension of the predicate. But, there is currently no Emperor of Kentucky. So, since the subject does not exist, it is not in the extension of either predicate (it is not on the list of gray people or non-gray people). Thus, it appears that this is a case in which the law of excluded middle is violated, which is also an indication that something has gone wrong. == Definite descriptions == Russell analyzes definite descriptions similarly to indefinite descriptions, except that the individual is now uniquely specified. Take as an example of a definite description the sentence "the current Emperor of Kentucky is gray". Russell analyses this phrase into the following component parts (with 'x' and 'y' representing variables):

there is an x such that x is an emperor of Kentucky.
for every x and every y, if both x and y are emperors of Kentucky, then y is x (i.e. there is at most one emperor of Kentucky).
anything that is an emperor of Kentucky is gray.

Thus, a definite description (of the general form 'the F is G') becomes the following existentially quantified phrase in classic symbolic logic (where 'x' and 'y' are variables and 'F' and 'G' are predicates – in the example above, F would be "is an emperor of Kentucky", and G would be "is gray"):

∃x((Fx ∧ ∀y(Fy → x = y)) ∧ Gx)

Informally, this reads as follows: something exists with the property F, there is only one such thing, and this unique thing also has the property G.
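Russell's truth conditions can be evaluated mechanically over a finite domain. The sketch below is illustrative only (the domain and predicates are invented); it encodes the quantified formula for "the F is G" and shows that the no-emperor case comes out simply false, rather than truth-valueless.

```python
# Evaluate "the F is G" a la Russell over a finite domain:
#   exists x ((Fx and forall y (Fy -> x == y)) and Gx)
def the_F_is_G(domain, F, G):
    return any(
        F(x) and all((not F(y)) or x == y for y in domain) and G(x)
        for x in domain
    )

# The indefinite "some D is A" is just: exists x (Dx and Ax)
def some_D_is_A(domain, D, A):
    return any(D(x) and A(x) for x in domain)

domain = {'alice', 'bob', 'carol'}
is_emperor = lambda x: x == 'bob'          # exactly one emperor...
is_gray = lambda x: x in {'bob', 'carol'}  # ...who happens to be gray
assert the_F_is_G(domain, is_emperor, is_gray) is True

# No emperor at all (the "Emperor of Kentucky" case): the existence
# conjunct fails, so the sentence is false -- Russell's resolution.
no_one = lambda x: False
assert the_F_is_G(domain, no_one, is_gray) is False
```

Because the analysed sentence is an ordinary quantified proposition, it always receives a truth value, so the apparent violation of excluded middle dissolves: "the F is G" is false, and its (wide-scope) negation is true.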
This analysis, according to Russell, solves the two problems noted above as related to definite descriptions: "The morning star rises in the morning" no longer needs to be thought of as having the subject-predicate form. It is instead analysed as "there is one unique thing such that it is the morning star and it rises in the morning". Thus, strictly speaking, the two expressions "the morning star..." and "the evening star..." are not synonymous, so it makes sense that they cannot be substituted (the analysed description of the evening star is "there is one unique thing such that it is the evening star and it rises in the evening"). This solves Gottlob Frege's problem of informative identities. Since the phrase "the current Emperor of Kentucky is gray" is not a referring expression, according to Russell's theory it need not refer to a mysterious non-existent entity. Russell says that if there are no entities X with property F, the proposition "X has property G" is false for all values of X. Russell says that all propositions in which the Emperor of Kentucky has a primary occurrence are false. The denials of such propositions are true, but in these cases the Emperor of Kentucky has a secondary occurrence (the truth value of the proposition is not a function of the truth of the existence of the Emperor of Kentucky). == Indefinite descriptions == Take as an example of an indefinite description the sentence "some dog is annoying". Russell analyses this phrase into the following component parts (with 'x' representing a variable):

There is an x such that: x is a dog; and x is annoying.

Thus, an indefinite description (of the general form 'a D is A') becomes the following existentially quantified phrase in classic symbolic logic (where 'x' is a variable and 'D' and 'A' are predicates):

∃x(Dx ∧ Ax)

Informally, this reads as follows: there is something such that it is D and A.
This analysis, according to Russell, solves the second problem noted above as related to indefinite descriptions. Since the phrase "some dog is annoying" is not a referring expression, according to Russell's theory, it need not refer to a mysterious non-existent entity. Furthermore, the law of excluded middle need not be violated (i.e. it remains a law), because "some dog is annoying" comes out true: there is a thing that is both a dog and annoying. Thus, Russell's theory seems to be a better analysis insofar as it solves several problems. == Criticism of Russell's analysis == === P. F. Strawson === P. F. Strawson argued that Russell had failed to correctly represent what one means when one says a sentence in the form of "the current Emperor of Kentucky is gray." According to Strawson, this sentence is not contradicted by "No one is the current Emperor of Kentucky", for the former sentence contains not an existential assertion, but attempts to use "the current Emperor of Kentucky" as a referring (or denoting) phrase. Since there is no current Emperor of Kentucky, the phrase fails to refer to anything, and so the sentence is neither true nor false. Another kind of counter-example that Strawson and philosophers since have raised concerns that of "incomplete" definite descriptions, that is sentences which have the form of a definite description but which do not uniquely denote an object. Strawson gives the example "the table is covered with books". Under Russell's theory, for such a sentence to be true there would have to be only one table in all of existence. But by uttering a phrase such as "the table is covered with books", the speaker is referring to a particular table: for instance, one that is in the vicinity of the speaker. Two broad responses have been constructed to this failure: a semantic and a pragmatic approach. The semantic approach of philosophers like Stephen Neale suggests that the sentence does in fact have the appropriate meaning as to make it true. 
Such meaning is added to the sentence by the particular context of the speaker: say, the context of standing next to a table "completes" the sentence. Ernie Lepore suggests that this approach treats "definite descriptions as harboring hidden indexical expressions, so that whatever descriptive meaning alone leaves unfinished its context of use can complete". Pragmatist responses deny this intuition and say instead that the sentence itself, following Russell's analysis, is not true but that the act of uttering the false sentence communicates true information to the listener. === Keith Donnellan === According to Keith Donnellan, there are two distinct ways we may use a definite description such as "the current Emperor of Kentucky is gray", and he thus distinguishes between the referential and the attributive use of a definite description. He argues that both Russell and Strawson make the mistake of attempting to analyse sentences removed from their context. We can mean different and distinct things while using the same sentence in different situations. For example, suppose Smith has been brutally murdered. When the person who discovers Smith's body says, "Smith's murderer is insane", we may understand this as the attributive use of the definite description "Smith's murderer", and analyse the sentence according to Russell. This is because the discoverer might equivalently have worded the assertion, "Whoever killed Smith is insane." Now consider another speaker: suppose Jones, though innocent, has been arrested for the murder of Smith, and is now on trial. When a reporter sees Jones talking to himself outside the courtroom, and describes what she sees by saying, "Smith's murderer is insane", we may understand this as the referential use of the definite description, for we may equivalently reword the reporter's assertion thus: "That person who I see talking to himself, and who I believe murdered Smith, is insane."
In this case, we should not accept Russell's analysis as correctly representing the reporter's assertion. On Russell's analysis, the sentence is to be understood as an existential quantification of the conjunction of three components: There is an x such that: x murdered Smith; there is no y, y not equal to x, such that y murdered Smith; and x is insane. If this analysis of the reporter's assertion were correct, then since Jones is innocent, we should take her to mean what the discoverer of Smith's body meant, that whoever murdered Smith is insane. We should then take her observation of Jones talking to himself to be irrelevant to the truth of her assertion. This clearly misses her point. Thus the same sentence, "Smith's murderer is insane", can be used to mean quite different things in different contexts. There are, accordingly, contexts in which "the current Emperor of Kentucky is not gray" is false because no one is the current Emperor of Kentucky, and contexts in which it is a sentence referring to a person whom the speaker takes to be the current Emperor of Kentucky, true or false according to the hair of the pretender. === Saul Kripke === In Reference and Existence, Saul Kripke argues that while Donnellan is correct to point out two uses of the phrase, it does not follow that the phrase is ambiguous between two meanings. For example, when the reporter finds out that Jones, the person she has been calling Smith's murderer, did not murder Smith, she will admit that her use of the name was incorrect. Kripke defends Russell's analysis of definite descriptions, and argues that Donnellan does not adequately distinguish meaning from use, or speaker's meaning from sentence meaning. === Other objections === The theory of descriptions has been criticized as a redundant and cumbersome method. The theory claims that "The present King of France is bald" means "One and only one entity is the present King of France, and that one is bald". L.
Susan Stebbing suggests that if "that" is used referentially, "that one is bald" is logically equivalent to the entire conjunction. Hence, the conjunction of three propositions is unnecessary, as one proposition is already adequate. P. T. Geach maintains that Russell's theory commits the fallacy of too many questions. A sentence such as "The present President of Sealand is bald" involves two questions: (1) Is anybody at the moment a President of Sealand? (2) Are there at the moment different people each of whom is a President of Sealand? Unless the answer to 1 is affirmative and the answer to 2 negative, the affirmative answer "yes, the President of Sealand is bald" is not false but indeterminate. In addition, Russell's theory involves unnecessary logical complications. Furthermore, Honcques Laus contends that Russell's analysis rests on the erroneous assumption that every sentence must be either true or false. Russell's nonacceptance of multiple-valued logic leaves him unable to assign a proper truth value to unverifiable and unfalsifiable sentences and causes the puzzle of the laws of thought. A third truth value, namely "indeterminate" or "undefined", should be accepted in the event that both truth and falsity are absent or inapplicable. William G. Lycan argues that Russell's theory applies only to one special subclass of singular terms, whereas an adequate solution to the puzzles must be generalized. His theory merely addresses the principal use of the definite article "the", but fails to deal with plural uses or the generic use. Russell also fails to consider anaphoric uses of singular referential expressions. Arthur Pap argues that the theory of descriptions must be rejected because, according to it, "the present king of France is bald" and "the present king of France is not bald" are both false and hence not contradictories; otherwise the law of excluded middle would be violated.
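Russell's quantificational analysis of "Smith's murderer is insane", described in the Donnellan section above, can be stated compactly in first-order notation (writing Mx for "x murdered Smith" and Ix for "x is insane"):

```latex
\exists x \,\bigl( Mx \;\land\; \forall y\,( My \rightarrow y = x ) \;\land\; Ix \bigr)
```

The middle conjunct expresses the uniqueness claim ("there is no y, distinct from x, such that y murdered Smith"), so when no one, or more than one person, murdered Smith, the whole existential sentence comes out false rather than truth-valueless, which is precisely the consequence that Strawson and the many-valued critics dispute.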
== See also == Russellian view Sense and reference == Notes == == References and further reading == Bertolet, Rod. (1999). "Theory of Descriptions", The Cambridge Dictionary of Philosophy, second edition. New York: Cambridge University Press. Donnellan, Keith. (1966). "Reference and Definite Descriptions", Philosophical Review, 75, pp. 281–304. Kripke, Saul. (1977). "Speaker's Reference and Semantic Reference", Midwest Studies in Philosophy, 2, pp. 255–276. Ludlow, Peter. (2005). "Descriptions", The Stanford Encyclopedia of Philosophy, E. Zalta (ed.). Online text Neale, Stephen. (1990). Descriptions. Bradford/MIT Press. Neale, Stephen. (2005). "A Century Later", Mind, 114, pp. 809–871. Ostertag, Gary (ed.). (1998). Definite Descriptions: A Reader. Bradford/MIT Press. (Includes Donnellan (1966), Kripke (1977), Chapter 3 of Neale (1990), Russell (1905), Chapter 16 of Russell (1919), and Strawson (1950).) Russell, Bertrand. (1905). "On Denoting", Mind, 14, pp. 479–493. Online at Wikisource and Augsburg University of Applied Sciences. Russell, Bertrand. (1919). Introduction to Mathematical Philosophy, London: George Allen and Unwin. Strawson, P. F. (1950). "On Referring", Mind, 59, pp. 320–344. == External links == Russell's Theory of Descriptions – section 2 of Ludlow's article on the Stanford Encyclopedia of Philosophy. Russell's Theory of Descriptions – by Thomas C. Ryckman. Russell's theory of descriptions – at Oxford University's Introduction to Logic. Russell's Theory of Descriptions – special issue of Mind celebrating the 100th anniversary of Russell's "On Denoting", in which the theory of descriptions was first presented.
Wikipedia/Theory_of_descriptions
A mobile application or app is a computer program or software application designed to run on a mobile device such as a phone, tablet, or watch. Mobile applications often stand in contrast to desktop applications, which are designed to run on desktop computers, and web applications, which run in mobile web browsers rather than directly on the mobile device. Apps were originally intended for productivity assistance such as email, calendar, and contact databases, but the public demand for apps caused rapid expansion into other areas such as mobile games, factory automation, GPS and location-based services, order-tracking, and ticket purchases, so that there are now millions of apps available. Many apps require Internet access. Apps are generally downloaded from app stores, which are a type of digital distribution platform. The term "app", short for "application", has since become very popular; in 2010, it was listed as "Word of the Year" by the American Dialect Society. Apps are broadly classified into three types: native apps, hybrid apps, and web apps. Native applications are designed specifically for a mobile operating system, typically iOS or Android. Web apps are written in HTML5, CSS, and JavaScript and typically run through a browser. Hybrid apps are built using web technologies such as JavaScript, CSS, and HTML5 and function like web apps disguised in a native container. == Overview == Most mobile devices are sold with several apps bundled as pre-installed software, such as a web browser, email client, calendar, mapping program, and an app for buying music, other media, or more apps. Some pre-installed apps can be removed by an ordinary uninstall process, thus leaving more storage space for desired ones. Where the software does not allow this, some devices can be rooted to eliminate the undesired apps. Apps that are not preinstalled are usually available through distribution platforms called app stores.
These may be operated by the owner of the device's mobile operating system, such as the App Store or Google Play Store; by the device manufacturer, such as the Galaxy Store and Huawei AppGallery; or by third parties, such as the Amazon Appstore and F-Droid. Usually, they are downloaded from the platform to a target device, but sometimes they can be downloaded to laptops or desktop computers. Apps can also be installed manually, for example by running an Android application package on Android devices. Some apps are freeware, while others have a price, which can be upfront or a subscription. Some apps also include microtransactions and/or advertising. In any case, the revenue is usually split between the application's creator and the app store. The same app can, therefore, cost a different price depending on the mobile platform. Mobile apps were originally offered for general productivity and information retrieval, including email, calendar, contacts, the stock market and weather information. However, public demand and the availability of developer tools drove rapid expansion into other categories, such as those handled by desktop application software packages. As with other software, the explosion in number and variety of apps made discovery a challenge, which in turn led to the creation of a wide range of review, recommendation, and curation sources, including blogs, magazines, and dedicated online app-discovery services. In 2014, government regulatory agencies began trying to regulate and curate apps, particularly medical apps. Some companies offer apps as an alternative method to deliver content with certain advantages over an official website. With a growing number of mobile applications available at app stores and the improved capabilities of smartphones, people are downloading more applications to their devices. Usage of mobile apps has become increasingly prevalent across mobile phone users.
A May 2012 comScore study reported that during the previous quarter, more mobile subscribers used apps than browsed the web on their devices: 51.1% vs. 49.8% respectively. Researchers found that usage of mobile apps strongly correlates with user context and depends on the user's location and the time of day. Mobile apps are playing an ever-increasing role within healthcare and, when designed and integrated correctly, can yield many benefits. Market research firm Gartner predicted that 102 billion apps would be downloaded in 2013 (91% of them free), which would generate $26 billion in the US, up 44.4% on 2012's US$18 billion. By Q2 2015, the Google Play and Apple stores alone generated $5 billion. An analyst report estimates that the app economy creates revenues of more than €10 billion per year within the European Union, while over 529,000 jobs have been created in 28 EU states due to the growth of the app market. == Types == Mobile applications may be classified by numerous methods. A common scheme is to distinguish native, web-based, and hybrid apps. === Native app === All apps targeted toward a particular mobile platform are known as native apps. Therefore, an app intended for an Apple device does not run on Android devices. As a result, most businesses develop apps for multiple platforms. While developing native apps, professionals incorporate best-in-class user interface modules. This accounts for better performance, consistency, and good user experience. Users also benefit from wider access to application programming interfaces and can make fuller use of the capabilities of the particular device. Further, they can switch from one app to another effortlessly. The main purpose for creating such apps is to ensure the best performance for a specific mobile operating system. === Web-based app === A web-based app is implemented with the standard web technologies of HTML, CSS, and JavaScript.
Internet access is typically required for these apps to behave properly or to offer all of their features, compared with offline usage. Most, if not all, user data is stored in the cloud. The performance of these apps is similar to a web application running in a browser, which can be noticeably slower than the equivalent native app. They also may not offer the same level of features as native apps. === Hybrid app === The concept of the hybrid app is a mix of native and web-based apps. Apps developed using Apache Cordova, Flutter, Xamarin, React Native, Sencha Touch, and other frameworks fall into this category. These are made to support web and native technologies across multiple platforms. Moreover, these apps are easier and faster to develop. They use a single codebase that works across multiple mobile operating systems. Despite such advantages, hybrid apps exhibit lower performance and often fail to present the same look and feel across different mobile operating systems. == Development == Developing apps for mobile devices requires considering the constraints and features of these devices. Mobile devices run on battery and have less powerful processors than personal computers, but also have more features such as location detection and cameras. Developers also have to consider a wide array of screen sizes, hardware specifications, and configurations because of intense competition in mobile software and changes within each of the platforms (although these issues can be overcome with mobile device detection). Mobile application development requires the use of specialized integrated development environments. Mobile apps are first tested within the development environment using emulators and later subjected to field testing. Emulators provide an inexpensive way to test applications on mobile phones to which developers may not have physical access. Mobile user interface (UI) design is also essential.
Mobile UI design considers constraints and contexts, screen, input, and mobility as outlines for design. The user is often the focus of interaction with their device, and the interface entails components of both hardware and software. User input allows the users to manipulate a system, and the device's output allows the system to indicate the effects of the users' manipulation. Mobile UI design constraints include limited attention and form factors, such as a mobile device's screen size relative to a user's hand. Mobile UI contexts signal cues from user activity, such as location and scheduling, which can be inferred from user interactions within a mobile application. Overall, the goal of mobile UI design is primarily an understandable, user-friendly interface. Mobile UIs, or front-ends, rely on mobile back-ends to support access to enterprise systems. The mobile back-end facilitates data routing, security, authentication, authorization, working off-line, and service orchestration. This functionality is supported by a mix of middleware components including mobile app servers, Mobile Backend as a service (MBaaS), and SOA infrastructure. Conversational interfaces display the computer interface and present interactions through text instead of graphical elements. They emulate conversations with real humans. There are two main types of conversational interfaces: voice assistants (like the Amazon Echo) and chatbots. Conversational interfaces are becoming particularly practical as users are starting to feel overwhelmed with mobile apps (a phenomenon known as "app fatigue"). David Limp, Amazon's senior vice president of devices, says in an interview with Bloomberg, "We believe the next big platform is voice." == Distribution == The three biggest app stores are Google Play for Android, App Store for iOS, and Microsoft Store for Windows 10, Windows 10 Mobile, and Xbox One.
=== Google Play === Google Play (formerly known as the Android Market) is an international online software store developed by Google for Android devices. It opened in October 2008. In July 2013, the number of apps downloaded via the Google Play Store surpassed 50 billion, of the over 1 million apps available. As of September 2016, according to Statista, the number of apps available exceeded 2.4 million. Over 80% of apps in the Google Play Store are free to download. The store generated a revenue of 6 billion U.S. dollars in 2015. === App Store === Apple's App Store for iOS and iPadOS was not the first app distribution service, but it ignited the mobile revolution; it was opened on July 10, 2008, and as of September 2016 reported over 140 billion downloads. The original AppStore was first demonstrated to Steve Jobs in 1993 by Jesse Tayler at NeXTWorld Expo. As of June 6, 2011, there were 425,000 apps available, which had been downloaded by 200 million iOS users. During Apple's 2012 Worldwide Developers Conference, CEO Tim Cook announced that the App Store had 650,000 apps available for download, and that 30 billion apps had been downloaded from the App Store to that date. From an alternative perspective, figures seen in July 2013 by the BBC from tracking service Adeven indicate over two-thirds of apps in the store are "zombies", barely ever installed by consumers. === Microsoft Store === Microsoft Store (formerly known as the Windows Store) was introduced by Microsoft in 2012 for its Windows 8 and Windows RT platforms. While it can also carry listings for traditional desktop programs certified for compatibility with Windows 8, it is primarily used to distribute "Windows Store apps"—which are primarily built for use on tablets and other touch-based devices (but can still be used with a keyboard and mouse, and on desktop computers and laptops). === Others === Amazon Appstore is an alternative application store for the Android operating system.
It was opened in March 2011 and as of June 2015 had nearly 334,000 apps. The Amazon Appstore's Android apps can also be installed and run on BlackBerry 10 devices. BlackBerry World is the application store for BlackBerry 10 and BlackBerry OS devices. It opened in April 2009 as BlackBerry App World. Ovi (Nokia) for Nokia phones was launched internationally in May 2009. In May 2011, Nokia announced plans to rebrand its Ovi product line under the Nokia brand, and Ovi Store was renamed Nokia Store in October 2011. From January 2014, the Nokia Store no longer allowed developers to publish new apps or app updates for its legacy Symbian and MeeGo operating systems. Windows Phone Store was introduced by Microsoft for its Windows Phone platform, which was launched in October 2010. As of October 2012, it had over 120,000 apps available. Samsung Apps was introduced in September 2009. As of October 2011, Samsung Apps reached 10 million downloads. The store is available in 125 countries and it offers apps for the Windows Mobile, Android, and Bada platforms. The Electronic AppWrapper was the first electronic distribution service to collectively provide encryption and purchasing electronically. F-Droid is a free and open-source Android app repository. Opera Mobile Store is a platform-independent app store for iOS, Java, BlackBerry OS, Symbian, Windows Mobile, and Android-based mobile phones. It was launched internationally in March 2011. There are numerous other independent app stores for Android devices. == Enterprise management == Mobile application management (MAM) describes software and services responsible for provisioning and controlling access to internally developed and commercially available mobile apps used in business settings. The strategy is meant to offset the security risk of a Bring Your Own Device (BYOD) work strategy.
When an employee brings a personal device into an enterprise setting, mobile application management enables the corporate IT staff to transfer required applications, control access to business data, and remove locally cached business data from the device if it is lost, or when its owner no longer works with the company. Containerization is an alternate approach to security. Rather than controlling an employee's entire device, containerization apps create isolated pockets separate from personal data. Company control of the device only extends to that separate container. === App wrapping vs. native app management === Especially when employees bring their own devices (BYOD), mobile apps can be a significant security risk for businesses, because they transfer unprotected sensitive data to the Internet without the knowledge and consent of the users. Reports of stolen corporate data show how quickly corporate and personal data can fall into the wrong hands. Data theft is not just the loss of confidential information, but makes companies vulnerable to attack and blackmail. Professional mobile application management helps companies protect their data. One option for securing corporate data is app wrapping. But there are also some disadvantages, like copyright infringement or the loss of warranty rights. Functionality, productivity, and user experience are particularly limited under app wrapping. The policies of a wrapped app cannot be changed. If required, it must be recreated from scratch, adding cost. An app wrapper is a mobile app made wholly from an existing website or platform, with few or no changes made to the underlying application. The "wrapper" is essentially a new management layer that allows developers to set up usage policies appropriate for app use. Examples of these policies include whether or not authentication is required, allowing data to be stored on the device, and enabling/disabling file sharing between users.
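Usage policies of the kind just described are, in essence, key-value settings that the wrapper consults before permitting an action. The following is a rough, hypothetical sketch of that idea; the policy names and the helper function are illustrative and do not correspond to any real MAM product's API:

```python
# Hypothetical wrapped-app policy, mirroring the examples in the text:
# authentication, local data storage, and file sharing between users.
DEFAULT_POLICY = {
    "require_authentication": True,
    "allow_local_storage": False,
    "allow_file_sharing": False,
}

def is_action_allowed(policy, action, authenticated):
    """Decide whether the wrapped app may perform `action` for this user."""
    # Deny everything until the user has authenticated, if required.
    if policy["require_authentication"] and not authenticated:
        return False
    if action == "store_locally":
        return policy["allow_local_storage"]
    if action == "share_file":
        return policy["allow_file_sharing"]
    # Actions the policy does not mention are permitted by default.
    return True
```

Under this default policy, an authenticated user may open the app but may neither cache business data locally nor share files; and, as noted above, changing such rules for a real wrapped app typically means re-wrapping it from scratch.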
Because app wrappers are often websites first, they frequently do not align with iOS or Android developer guidelines. Alternatively, it is possible to offer native apps securely through enterprise mobility management. This enables more flexible IT management, as apps can be easily implemented and policies adjusted at any time. == See also == Appbox Pro (2009) App store optimization Enterprise mobile application Mobile commerce Super-app Unified Remote WideAngle (2013) == References == == External links == Media related to Mobile phone software at Wikimedia Commons
Wikipedia/Mobile_phone_application
A cover model is a person whose photograph appears on the front cover of a magazine. The cover model is generally a fashion model, celebrity, or contest winner. Generally, cover models are depicted solitarily; however, on occasion magazines will present a front cover with multiple cover models. Female cover models are often referred to as cover girls. Cover models generally take part in a fashion or portrait photography photo shoot for the magazine. When a magazine uses a candid or stock image as the main cover image, the subject is referred to as the magazine "cover" rather than "cover model" or "cover girl". == See also == List of Allure cover models List of Marie Claire cover models List of Sports Illustrated Swimsuit Issue cover models List of Vogue cover models == References ==
Wikipedia/Cover_model
Digital photograph restoration is the practice of restoring the appearance of a digital copy of a physical photograph that has been damaged by natural, man-made, or environmental causes, or affected by age or neglect. Digital photograph restoration uses image editing techniques to remove undesired visible features, such as dirt, scratches, or signs of aging. People use raster graphics editors to repair digital images, or to add or replace torn or missing pieces of the physical photograph. Unwanted color casts are removed, and the image's contrast or sharpening may be altered to restore the contrast range or detail believed to have been in the original physical image. Digital image processing techniques included in image enhancement and image restoration software are also applied to digital photograph restoration. == Background == === Agents of deterioration === Photographic material is susceptible to physical, chemical, and biological damage caused by physical forces, thieves and vandals, fire, water, pests, pollutants, light, incorrect temperature, incorrect relative humidity, and dissociation (custodial neglect). Traditionally, preservation efforts focused on physical photographs, but preservation of a photograph's digital surrogates has become of equal importance. === Handling practices === Fragile or valuable originals are protected when digital surrogates replace them, and severely damaged photographs that cannot be repaired physically are revitalized when a digital copy is made. Creation of digital surrogates allows originals to be preserved. However, the digitization process itself contributes to the object's wear and tear. It is considered important to ensure the original photograph is minimally damaged by environmental changes or careless handling. === Permissible uses === Digitally scanned or captured image files, both unaltered and restored, are protected under copyright law.
Courts agree that by its basic nature digitization involves reproduction—an act exclusively reserved for copyright owners. The ownership of an artwork does not inherently carry with it the rights of reproduction. Images that are digitally reproduced and restored often reflect the intentions of the photographer of the original photograph. It is not recommended that conservators change or add information based on personal or institutional bias or opinion. Even without copyright permission, museums can digitally copy and restore images for conservation or informational purposes. == Gallery == == See also == Infrared cleaning Media preservation Photo manipulation Photograph preservation == References == == External links == Media related to Restoration of photographs at Wikimedia Commons
Wikipedia/Digital_photograph_restoration
Deepfake pornography, or simply fake pornography, is a type of synthetic pornography that is created by altering already-existing photographs or video, applying deepfake technology to the images of the participants. The use of deepfake pornography has sparked controversy because it involves the making and sharing of realistic videos featuring non-consenting individuals and is sometimes used for revenge porn. Efforts are being made to combat these ethical concerns through legislation and technology-based solutions. == History == The term "deepfake" was coined in 2017 on a Reddit forum where users shared altered pornographic videos created using machine learning algorithms. It is a combination of "deep learning", which refers to the type of program used to create the videos, and "fake", meaning the videos are not real. Deepfake pornography was originally created on a small individual scale using a combination of machine learning algorithms, computer vision techniques, and AI software. The process began by gathering a large amount of source material (including both images and videos) of a person's face, and then using a deep learning model to train a generative adversarial network to create a fake video that convincingly swaps the face of the source material onto the body of a pornographic performer. However, the production process has significantly evolved since 2018, with the advent of several public apps that have largely automated the process. Deepfake pornography is sometimes confused with fake nude photography, but the two are mostly different. Fake nude photography typically uses non-sexual images and merely makes it appear that the people in them are nude. == Notable cases == Deepfake technology has been used to create non-consensual pornographic images and videos of famous women. One of the earliest examples occurred in 2017 when a deepfake pornographic video of Gal Gadot was created by a Reddit user and quickly spread online.
Since then, there have been numerous instances of similar deepfake content targeting other female celebrities, such as Emma Watson, Natalie Portman, and Scarlett Johansson. Johansson spoke publicly on the issue in December 2018, condemning the practice but also declining legal action because she views the harassment as inevitable. === Rana Ayyub === In 2018, Rana Ayyub, an Indian investigative journalist, was the target of an online hate campaign stemming from her condemnation of the Indian government, specifically her speaking out against the rape of an eight-year-old Kashmiri girl. Ayyub was bombarded with rape and death threats, and had a doctored pornographic video of her circulated online. In a Huffington Post article, Ayyub discussed the long-lasting psychological and social effects this experience has had on her. She explained that she continued to struggle with her mental health and that the images and videos continued to resurface whenever she took a high-profile case. === Atrioc controversy === In 2023, Twitch streamer Atrioc stirred controversy when he accidentally revealed deepfake pornographic material featuring female Twitch streamers during a livestream. The influencer has since admitted to paying for AI-generated porn, and apologized to the women and his fans. === Taylor Swift === In January 2024, AI-generated sexually explicit images of American singer Taylor Swift were posted on X (formerly Twitter), and spread to other platforms such as Facebook, Reddit and Instagram. One tweet with the images was viewed over 45 million times before being removed. A report from 404 Media found that the images appeared to have originated from a Telegram group, whose members used tools such as Microsoft Designer to generate the images, using misspellings and keyword hacks to work around Designer's content filters. After the material was posted, Swift's fans posted concert footage and images to bury the deepfake images, and reported the accounts posting the deepfakes.
Searches for Swift's name were temporarily disabled on X, returning an error message instead. Graphika, a disinformation research firm, traced the creation of the images back to a 4chan community. A source close to Swift told the Daily Mail that she would be considering legal action, saying, "Whether or not legal action will be taken is being decided, but there is one thing that is clear: These fake AI-generated images are abusive, offensive, exploitative, and done without Taylor's consent and/or knowledge." The controversy drew condemnation from White House Press Secretary Karine Jean-Pierre, Microsoft CEO Satya Nadella, the Rape, Abuse & Incest National Network, and SAG-AFTRA. Several US politicians called for federal legislation against deepfake pornography. Later in the month, US senators Dick Durbin, Lindsey Graham, Amy Klobuchar and Josh Hawley introduced a bipartisan bill that would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or those who received the material knowing it was made non-consensually. === 2024 Telegram deepfake scandal === In August 2024, it emerged in South Korea that many teachers and female students were victims of deepfake images created by users who utilized AI technology. Journalist Ko Narin of The Hankyoreh uncovered the deepfake images through Telegram chats. On Telegram, group chats were created specifically for image-based sexual abuse of women, including middle and high school students, teachers, and even family members. Women with photos on social media platforms like KakaoTalk, Instagram, and Facebook are often targeted as well. Perpetrators use AI bots to generate fake images, which are then sold or widely shared, along with the victims' social media accounts, phone numbers, and KakaoTalk usernames. One Telegram group reportedly drew around 220,000 members, according to a Guardian report.
Investigations revealed numerous chat groups on Telegram where users, mainly teenagers, create and share explicit deepfake images of classmates and teachers. The issue came in the wake of a troubling history of digital sex crimes, notably the notorious Nth Room case in 2019. The Korean Teachers Union estimated that more than 200 schools had been affected by these incidents. Activists called for a "national emergency" declaration to address the problem. South Korean police reported over 800 deepfake sex crime cases by the end of September 2024, a stark rise from just 156 cases in 2021, with most victims and offenders being teenagers. On September 21, 6,000 people gathered at Marronnier Park in northeastern Seoul to demand stronger legal action against deepfake crimes targeting women. On September 26, following widespread outrage over the Telegram scandal, South Korean lawmakers passed a bill criminalizing the possession or viewing of sexually explicit deepfake images and videos, imposing penalties that include prison terms and fines. Under the new law, those caught buying, saving, or watching such material could face up to three years in prison or fines up to 30 million won ($22,600). At the time the bill was proposed, creating sexually explicit deepfakes for distribution carried a maximum penalty of five years, but the new legislation would increase this to seven years, regardless of intent. By October 2024, it was estimated that "nudify" deepfake bots on Telegram had up to four million monthly users. == Ethical considerations == === Deepfake child pornography === Deepfake technology has made the creation of child pornography faster and easier than it has ever been. Deepfakes can be used to produce new child pornography from already existing material, or to create pornography depicting children who have not been subjected to sexual abuse.
Deepfake child pornography can, however, have real and direct implications for children, including defamation, grooming, extortion, and bullying. === Differences from generative AI pornography === While both deepfake pornography and generative AI pornography utilize synthetic media, they differ in approach and ethical implications. Generative AI pornography is created entirely through algorithms, producing hyper-realistic content unlinked to real individuals. In contrast, deepfake pornography alters existing footage of real individuals, often without consent, by superimposing faces or modifying scenes. Hany Farid, a digital image analysis expert, has emphasized these distinctions. === Consent === Most deepfake pornography is made using the faces of people who did not consent to their image being used in such a sexual way. In 2023, Sensity, an identity verification company, found that "96% of deepfakes are sexually explicit and feature women who didn't consent to the creation of the content." == Combatting deepfake pornography == === Technical approach === Deepfake detection has become an increasingly important area of research in recent years as the spread of fake videos and images has become more prevalent. One promising approach to detecting deepfakes is through the use of Convolutional Neural Networks (CNNs), which have shown high accuracy in distinguishing between real and fake images. One CNN-based algorithm that has been developed specifically for deepfake detection is DeepRhythm, which has demonstrated an impressive accuracy score of 0.98 (i.e. successful at detecting deepfake images 98% of the time). This algorithm utilizes a pre-trained CNN to extract features from facial regions of interest and then applies a novel attention mechanism to identify discrepancies between the original and manipulated images.
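The detection pipeline described above (convolve over a face crop, pool the responses into features, then classify) can be illustrated with a deliberately tiny, NumPy-only sketch. Everything here is an illustrative stand-in: the fixed edge kernels, the weights, and the function names are hypothetical placeholders for parameters a real detector learns from labeled data, and this is not an implementation of DeepRhythm itself.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def features(image):
    """Pool simple edge responses into a feature vector, a stand-in for
    the learned convolutional features extracted from facial regions."""
    edge_x = np.array([[-1.0, 0.0, 1.0]] * 3)  # horizontal-gradient kernel
    edge_y = edge_x.T                          # vertical-gradient kernel
    fx = conv2d(image, edge_x)
    fy = conv2d(image, edge_y)
    return np.array([fx.mean(), fx.std(), fy.mean(), fy.std()])

def fake_probability(image, weights, bias):
    """Logistic score in (0, 1): higher means 'more likely manipulated'."""
    z = features(image) @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))

# Toy usage: score a random 32x32 "face crop" with illustrative weights.
rng = np.random.default_rng(0)
crop = rng.random((32, 32))
w = np.array([0.5, -1.0, 0.5, -1.0])  # in practice learned from labeled data
p = fake_probability(crop, w, bias=0.0)
```

A production detector replaces the hand-set kernels and weights with millions of trained parameters and adds attention over facial regions, but the convolve-pool-classify skeleton is the same.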
While the development of more sophisticated deepfake technology presents ongoing challenges to detection efforts, the high accuracy of algorithms like DeepRhythm offers a promising tool for identifying and mitigating the spread of harmful deepfakes. Aside from detection models, there are also video authenticating tools available to the public. In 2019, Deepware launched the first publicly available detection tool, which allowed users to easily scan and detect deepfake videos. Similarly, in 2020 Microsoft released a free and user-friendly video authenticator. Users upload a suspected video or input a link, and receive a confidence score to assess the level of manipulation in a deepfake. === Legal approach === As of 2023, there is a lack of legislation that specifically addresses deepfake pornography. Instead, the harm caused by its creation and distribution is being addressed by the courts through existing criminal and civil laws. Victims of deepfake pornography often have claims for revenge porn, tort claims, and harassment. The legal consequences for revenge porn vary from state to state and country to country. For instance, in Canada, the penalty for publishing non-consensual intimate images is up to 5 years in prison, whereas in Malta it is a fine of up to €5,000. The "Deepfake Accountability Act" was introduced to the United States Congress in 2019 but died in 2020. It aimed to make it a criminal offense to produce and distribute digitally altered visual media that was not disclosed as such. The bill specified that anyone making sexual, non-consensual altered media with the intent of humiliating or otherwise harming the participants could be fined, imprisoned for up to 5 years, or both. A newer version of the bill was introduced in 2021, which would have required any "advanced technological false personation records" to contain a watermark and an audiovisual disclosure to identify and explain any altered audio and visual elements.
The bill also provided that anyone failing to disclose this information with intent to harass or humiliate a person with an "advanced technological false personation record" containing sexual content "shall be fined under this title, imprisoned for not more than 5 years, or both." However, this bill also died, in 2023. In the United Kingdom, the Law Commission for England and Wales recommended reform to criminalise sharing of deepfake pornography in 2022. In 2023, the government announced amendments to the Online Safety Bill to that end. The Online Safety Act 2023 amends the Sexual Offences Act 2003 to criminalise sharing an intimate image that shows or "appears to show" another person (thus including deepfake images) without consent. In 2024, the Government announced that an offence criminalising the production of deepfake pornographic images would be included in the Criminal Justice Bill of 2024. The Bill did not pass before Parliament was dissolved ahead of the general election. In South Korea, the creation, distribution, or possession of deepfake pornography is classified as a sex crime, with a mandatory prison sentence of three to seven years as part of the country's Special Act on Sexual Violence Crimes. ==== Controlling the distribution ==== While the legal landscape remains undeveloped, victims of deepfake pornography have several tools available to contain and remove content, including securing removal through a court order, intellectual property tools like the DMCA takedown, reporting for terms and conditions violations of the hosting platform, and removal by reporting the content to search engines. Several major online platforms have taken steps to ban deepfake pornography. As of 2018, Gfycat, Reddit, Twitter, Discord, and Pornhub have all prohibited the uploading and sharing of deepfake pornographic content on their platforms.
In September of that same year, Google also added "involuntary synthetic pornographic imagery" to its ban list, allowing individuals to request the removal of such content from search results. == See also == Fake nude photography Revenge porn Another Body, 2023 documentary film about a student's quest for justice after finding deepfake pornography of herself online == References ==
Wikipedia/Deepfake_pornography
The visual arts are art forms such as painting, drawing, printmaking, sculpture, ceramics, photography, video, image, filmmaking, design, crafts, and architecture. Many artistic disciplines such as performing arts, conceptual art, and textile arts, also involve aspects of the visual arts, as well as arts of other types. Within the visual arts, the applied arts, such as industrial design, graphic design, fashion design, interior design, and decorative art are also included. Current usage of the term "visual arts" includes fine art as well as applied or decorative arts and crafts, but this was not always the case. Before the Arts and Crafts Movement in Britain and elsewhere at the turn of the 20th century, the term 'artist' had for some centuries often been restricted to a person working in the fine arts (such as painting, sculpture, or printmaking) and not the decorative arts, crafts, or applied visual arts media. The distinction was emphasized by artists of the Arts and Crafts Movement, who valued vernacular art forms as much as high forms. Art schools made a distinction between the fine arts and the crafts, maintaining that a craftsperson could not be considered a practitioner of the arts. The increasing tendency to privilege painting, and to a lesser degree sculpture, above other arts has been a feature of Western art as well as East Asian art. In both regions, painting has been seen as relying to the highest degree on the imagination of the artist and being the furthest removed from manual labour – in Chinese painting, the most highly valued styles were those of "scholar-painting", at least in theory practiced by gentleman amateurs. The Western hierarchy of genres reflected similar attitudes. == Education and training == Training in the visual arts has generally been through variations of the apprentice and workshop systems. 
In Europe, the Renaissance movement to increase the prestige of the artist led to the academy system for training artists, and today most of the people who are pursuing a career in the arts train in art schools at tertiary levels. Visual arts have now become an elective subject in most education systems. In East Asia, arts education for nonprofessional artists typically focused on brushwork; calligraphy was numbered among the Six Arts of gentlemen in the Chinese Zhou dynasty, and calligraphy and Chinese painting were numbered among the four arts of scholar-officials in imperial China. Argentina, a leading country in the development of the arts in Latin America, in 1875 created the National Society of the Stimulus of the Arts, founded by painters Eduardo Schiaffino, Eduardo Sívori, and other artists. Their guild was rechartered as the National Academy of Fine Arts in 1905 and, in 1923, on the initiative of painter and academic Ernesto de la Cárcova, as a department in the University of Buenos Aires, the Superior Art School of the Nation. Currently, the leading educational organization for the arts in the country is the UNA Universidad Nacional de las Artes. == Drawing == Drawing is a means of making an image, illustration or graphic using any of a wide variety of tools and techniques. It generally involves making marks on a surface by applying pressure from a tool, or moving a tool across a surface, using dry media such as graphite pencils, pen and ink, inked brushes, wax color pencils, crayons, charcoals, pastels, and markers. Digital tools, such as pens and styluses that simulate the effects of these media, are also used. The main techniques used in drawing are: line drawing, hatching, crosshatching, random hatching, shading, scribbling, stippling, and blending. An artist who excels at drawing is referred to as a draftsman or draughtsman. Drawing and painting go back tens of thousands of years.
Art of the Upper Paleolithic includes figurative art beginning at least 40,000 years ago. Non-figurative cave paintings, consisting of hand stencils and simple geometric shapes, are even older. Paleolithic cave representations of animals are found in areas such as Lascaux, France; Altamira, Spain; Maros, Sulawesi, in Asia; and Gabarnmung, Australia. In ancient Egypt, ink drawings on papyrus, often depicting people, were used as models for painting or sculpture. Drawings on Greek vases, initially geometric, later developed into the human form with black-figure pottery during the 6th century BC. With paper becoming more common in Europe by the 14th century, drawing was adopted by masters such as Sandro Botticelli, Raphael, Michelangelo, and Leonardo da Vinci, who sometimes treated drawing as an art in its own right, rather than a preparatory stage for painting or sculpture. == Painting == Painting taken literally is the practice of applying pigment suspended in a carrier (or medium) and a binding agent (a glue) to a surface (support) such as paper, canvas or a wall. However, when used in an artistic sense it means the use of this activity in combination with drawing, composition, or other aesthetic considerations in order to manifest the expressive and conceptual intention of the practitioner. Painting is also used to express spiritual motifs and ideas; sites of this kind of painting range from artwork depicting mythological figures on pottery, to the Sistine Chapel, to the human body itself. === History === ==== Origins and early history ==== Like drawing, painting has its documented origins in caves and on rock faces. The earliest known cave paintings, dating to between 32,000 and 30,000 years ago, are found in the Chauvet cave in southern France; the celebrated polychrome murals of Lascaux date to around 17,000–15,500 years ago. In shades of red, brown, yellow and black, the paintings on the walls and ceilings depict bison, cattle (aurochs), horses and deer.
Paintings of human figures can be found in the tombs of ancient Egypt. In the great temple of Ramesses II, Nefertari, his queen, is depicted being led by Isis. The Greeks contributed to painting, but much of their work has been lost. Among the best remaining representations are the Hellenistic Fayum mummy portraits. Another example is the mosaic of the Battle of Issus at Pompeii, which was probably based on a Greek painting. Greek and Roman art contributed to Byzantine art in the 4th century AD, which initiated a tradition in icon painting. ==== The Renaissance ==== Apart from the illuminated manuscripts produced by monks during the Middle Ages, the next significant contribution to European art was from Italy's Renaissance painters. From Giotto in the 13th century to Leonardo da Vinci and Raphael at the beginning of the 16th century, this was the richest period in Italian art, as chiaroscuro techniques were used to create the illusion of 3-D space. Painters in northern Europe, too, were influenced by the Italian school. Jan van Eyck from Belgium, Pieter Bruegel the Elder from the Netherlands and Hans Holbein the Younger from Germany are among the most successful painters of the times. They used the glazing technique with oils to achieve depth and luminosity. ==== Dutch masters ==== The 17th century witnessed the emergence of the great Dutch masters, such as the versatile Rembrandt, who was especially remembered for his portraits and Bible scenes, and Vermeer, who specialized in interior scenes of Dutch life. ==== Baroque ==== The Baroque started after the Renaissance, from the late 16th century to the late 17th century. Main artists of the Baroque included Caravaggio, who made heavy use of tenebrism. Peter Paul Rubens, a Flemish painter who studied in Italy, worked for local churches in Antwerp and also painted a series for Marie de' Medici. Annibale Carracci took influences from the Sistine Chapel and created the genre of illusionistic ceiling painting.
Much of the development that happened in the Baroque was because of the Protestant Reformation and the resulting Counter-Reformation. Much of what defines the Baroque is dramatic lighting and overall visuals. ==== Impressionism ==== Impressionism began in France in the 19th century with a loose association of artists including Claude Monet, Pierre-Auguste Renoir and Paul Cézanne, who brought a new freely brushed style to painting, often choosing to paint realistic scenes of modern life outside rather than in the studio. This was achieved through a new expression of aesthetic features demonstrated by brush strokes and the impression of reality. They achieved intense color vibration by using pure, unmixed colors and short brush strokes. The movement treated art as dynamic, moving through time and adjusting to newfound techniques and perceptions of art. Attention to detail became less of a priority; instead, artists explored landscapes and nature as seen through their own subjective view. ==== Post-impressionism ==== Towards the end of the 19th century, several young painters took impressionism a stage further, using geometric forms and unnatural color to depict emotions while striving for deeper symbolism. Of particular note are Paul Gauguin, who was strongly influenced by Asian, African and Japanese art, Vincent van Gogh, a Dutchman who moved to France, where he drew on the strong sunlight of the south, and Toulouse-Lautrec, remembered for his vivid paintings of night life in the Paris district of Montmartre. ==== Symbolism, expressionism and cubism ==== Edvard Munch, a Norwegian artist, developed his symbolistic approach at the end of the 19th century, inspired by the French painter Manet. The Scream (1893), his most famous work, is widely interpreted as representing the universal anxiety of modern man.
Partly as a result of Munch's influence, the German expressionist movement originated in Germany at the beginning of the 20th century as artists such as Ernst Ludwig Kirchner and Erich Heckel began to distort reality for an emotional effect. In parallel, the style known as cubism developed in France as artists focused on the volume and space of sharp structures within a composition. Pablo Picasso and Georges Braque were the leading proponents of the movement. Objects are broken up, analyzed, and re-assembled in an abstracted form. By the 1920s, the style had developed into surrealism with Dalí and Magritte. == Printmaking == Printmaking is creating, for artistic purposes, an image on a matrix that is then transferred to a two-dimensional (flat) surface by means of ink or other form of pigmentation. Except in the case of a monotype, the same matrix can be used to produce many examples of the print. Historically, the major techniques (also called media) involved are woodcut, line engraving, etching, lithography, and screen printing (serigraphy, silk screening), and there are many others, including digital techniques. Normally, the print is printed on paper, but other mediums range from cloth and vellum to more modern materials. === European history === Prints in the Western tradition produced before about 1830 are known as old master prints. In Europe, from around 1400 AD woodcut was used for master prints on paper, using printing techniques developed in the Byzantine and Islamic worlds. Michael Wolgemut improved German woodcut from about 1475, and Erhard Reuwich, a Dutchman, was the first to use cross-hatching. At the end of the century Albrecht Dürer brought the Western woodcut to a stage that has never been surpassed, increasing the status of the single-leaf woodcut. === Chinese origin and practice === In China, the art of printmaking developed some 1,100 years ago as illustrations alongside text cut in woodblocks for printing on paper.
Initially images were mainly religious but in the Song dynasty, artists began to cut landscapes. During the Ming (1368–1644) and Qing (1616–1911) dynasties, the technique was perfected for both religious and artistic engravings. === Development in Japan 1603–1867 === Woodblock printing in Japan (Japanese: 木版画, moku hanga) is a technique best known for its use in the ukiyo-e artistic genre; however, it was also used very widely for printing illustrated books in the same period. Woodblock printing had been used in China for centuries to print books, long before the advent of movable type, but was only widely adopted in Japan during the Edo period (1603–1867). Although similar to woodcut in western printmaking in some regards, moku hanga differs greatly in that water-based inks are used (as opposed to western woodcut, which uses oil-based inks), allowing for a wide range of vivid color, glazes and color transparency. After the decline of ukiyo-e and introduction of modern printing technologies, woodblock printing continued as a method for printing texts as well as for producing art, both within traditional modes such as ukiyo-e and in a variety of more radical or Western forms that might be construed as modern art. In the early 20th century, shin-hanga that fused the tradition of ukiyo-e with the techniques of Western paintings became popular, and the works of Hasui Kawase and Hiroshi Yoshida gained international popularity. Institutes such as the "Adachi Institute of Woodblock Prints" and "Takezasado" continue to produce ukiyo-e prints with the same materials and methods as used in the past. == Photography == Photography is the process of making pictures by means of the action of light. The light patterns reflected or emitted from objects are recorded onto a sensitive medium, or storage chip, through a timed exposure. 
The process is done through mechanical shutters or electronically-timed exposure of photons into chemical processing or digitizing devices known as cameras. The word comes from the Greek φῶς (phōs, "light") and γραφή (graphê, "drawing" or "writing"), literally meaning "drawing with light". Traditionally, the product of photography has been called a photograph; the term "photo" is an abbreviation. Though many call them "pictures," the term "image" has increasingly replaced "photograph," reflecting electronic capture and the broader concept of graphical representation in optics and computing. == Architecture == Architecture is the process and the product of planning, designing, and constructing buildings or any other structures. Architectural works, in the material form of buildings, are often perceived as cultural symbols and works of art. Historical civilizations are often identified with their surviving architectural achievements. The earliest surviving written work on architecture is De architectura, by the Roman architect Vitruvius in the early 1st century AD. According to Vitruvius, a good building should satisfy three principles: firmitas, utilitas, venustas, translated as firmness, commodity, and delight. An equivalent in modern English would be: Durability – a building should stand up robustly and remain in good condition. Utility – it should be suitable for the purposes for which it is used. Beauty – it should be aesthetically pleasing. Building first evolved out of the dynamics between needs (shelter, security, worship, etc.) and means (available building materials and attendant skills). As cultures developed and knowledge began to be formalized through oral traditions and practices, building became a craft, and "architecture" is the name given to the most highly formalized versions of that craft.
== Filmmaking == Filmmaking is the process of making a motion picture, from an initial conception and research, through scriptwriting, shooting and recording, animation or other special effects, editing, and sound and music work, to final distribution to an audience. It refers broadly to the creation of all types of films, embracing documentary, strains of theatre and literature in film, and poetic or experimental practices, and is often used to refer to video-based processes as well. == Computer art == Visual artists are no longer limited to traditional visual arts media. Computers have been used in the visual arts since the 1960s. Uses include the capturing or creating of images and forms, the editing of those images (including exploring multiple compositions) and the final rendering or printing (including 3D printing). Computer art is any art in which computers play a role in production or display. Such art can be an image, sound, animation, video, CD-ROM, DVD, video game, website, algorithm, performance or gallery installation. Many traditional disciplines now integrate digital technologies, so the lines between traditional works of art and new media works created using computers have been blurred. For instance, an artist may combine traditional painting with algorithmic art and other digital techniques. As a result, defining computer art by its end product can be difficult. Nevertheless, this type of art appears in art museum exhibits, though the computer is often seen more as a tool than as a form in its own right, as painting is. On the other hand, there are computer-based artworks which belong to a new conceptual and postdigital strand, taking the same technologies, and their social impact, as an object of inquiry. Computer usage has blurred the distinctions between illustrators, photographers, photo editors, 3-D modelers, and handicraft artists. Sophisticated rendering and editing software has led to multi-skilled image developers. Photographers may become digital artists.
Illustrators may become animators. Handicraft may be computer-aided or use computer-generated imagery as a template. Computer clip art usage has made the distinction between visual arts and page layout less obvious due to the easy access and editing of clip art in the process of paginating a document. == Plastic arts == Plastic arts is a term for art forms that involve physical manipulation of a plastic medium by moulding or modeling, such as sculpture or ceramics. The term has also been applied to all the visual (non-literary, non-musical) arts. Materials that can be carved or shaped, such as stone, wood, concrete, or steel, have also been included in the narrower definition, since, with appropriate tools, such materials are also capable of modulation. This use of the term "plastic" in the arts is different from Piet Mondrian's use of the word, and from the movement he termed "Neoplasticism." === Sculpture === Sculpture is three-dimensional artwork created by shaping or combining hard or plastic material, sound, text and/or light, commonly stone (either rock or marble), clay, metal, glass, or wood. Some sculptures are created directly by finding or carving; others are assembled, built together and fired, welded, molded, or cast. Sculptures are often painted. A person who creates sculptures is called a sculptor. The earliest undisputed examples of sculpture belong to the Aurignacian culture, which was located in Europe and southwest Asia and active at the beginning of the Upper Paleolithic. As well as producing some of the earliest known cave art, the people of this culture developed finely-crafted stone tools, manufacturing pendants, bracelets, ivory beads, and bone-flutes, as well as three-dimensional figurines. Because sculpture involves the use of materials that can be moulded or modulated, it is considered one of the plastic arts. The majority of public art is sculpture. Many sculptures together in a garden setting may be referred to as a sculpture garden.
Sculptors do not always make sculptures by hand. With increasing technology in the 20th century and the popularity of conceptual art over technical mastery, more sculptors turned to art fabricators to produce their artworks. With fabrication, the artist creates a design and pays a fabricator to produce it. This allows sculptors to create larger and more complex sculptures out of materials like cement, metal and plastic, that they would not be able to create by hand. Sculptures can also be made with 3-D printing technology. == US copyright definition of visual art == In the United States, the law protecting the copyright over a piece of visual art gives a more restrictive definition of "visual art". A "work of visual art" is —
(1) a painting, drawing, print or sculpture, existing in a single copy, in a limited edition of 200 copies or fewer that are signed and consecutively numbered by the author, or, in the case of a sculpture, in multiple cast, carved, or fabricated sculptures of 200 or fewer that are consecutively numbered by the author and bear the signature or other identifying mark of the author; or
(2) a still photographic image produced for exhibition purposes only, existing in a single copy that is signed by the author, or in a limited edition of 200 copies or fewer that are signed and consecutively numbered by the author.
A work of visual art does not include —
(A)(i) any poster, map, globe, chart, technical drawing, diagram, model, applied art, motion picture or other audiovisual work, book, magazine, newspaper, periodical, data base, electronic information service, electronic publication, or similar publication; (ii) any merchandising item or advertising, promotional, descriptive, covering, or packaging material or container; (iii) any portion or part of any item described in clause (i) or (ii);
(B) any work made for hire; or
(C) any work not subject to copyright protection under this title.
== See also == == References == == External links == ArtLex – online dictionary of visual art terms (archived 24 April 2005) Calendar for Artists – calendar listing of visual art festivals. Art History Timeline by the Metropolitan Museum of Art.
Wikipedia/Graphic_image_developer
A composograph is a retouched photographic collage, a forerunner of modern photo manipulation, popularized by publisher and physical culture advocate Bernarr Macfadden in his New York Evening Graphic in 1924. The Graphic was dubbed "The Porno-Graphic" by critics of the time and has been called "one of the low points in the history of American journalism". Exploitative and mendacious, in its short life (it closed operations in 1932) the Graphic defined "tabloid journalism" and launched the careers of Ed Sullivan and Walter Winchell, who developed the modern gossip column there. Film director Sam Fuller worked for the Evening Graphic as a crime reporter. "Composographic" images were literally cut and pasted together using images of the heads or faces of current celebrities, glued onto staged images created in Macfadden's in-house studio, often using newspaper staffers as body doubles. Composite photographs, or photomontages, had been used in the nineteenth century by such photographers as William Notman to capture indoor scenes that would not have been otherwise possible before the flashbulb was developed. Macfadden used them to represent events that were inconvenient to photograph, particularly with the equipment of the day: private bedrooms and bathtubs, Rudolph Valentino's unsuccessful surgery, Valentino's funeral, and notably on March 17, 1927, a full-page image of Valentino meeting Enrico Caruso in heaven. One early faked photograph—that of Alice Jones Rhinelander baring her breast in court (part of the Kip Rhinelander divorce trial)—is said to have boosted the Graphic's circulation by 100,000 copies. Apart from their sensational subject matter, composographs have relevance as a historical reference point in the current debate over staged and doctored news photos. Some of the Graphic composographs have an unforgettable eerie visual impact.
In a 1997 academic paper called "Staged, faked and mostly naked: Photographic innovations at the Evening Graphic, 1924–1932" and a shorter online essay, Radford University professor Bob Stepno points out that the Graphic was published before improvements in photojournalism technology and standards that made possible the photorealism of Magnum Photos, Black Star and others during World War II. == References == == External links == The Composograph of Alice Rhinelander
Wikipedia/Composograph
Heliography is an early photographic process, based on the hardening of bitumen in sunlight. It was invented by Nicéphore Niépce around 1822. Niépce used the process to make the earliest known surviving photograph from nature, View from the Window at Le Gras (1826 or 1827), and the first realisation of photoresist as a means to reproduce artworks through his inventions of photolithography and photogravure. == Invention == Nicéphore Niépce began experiments with the aim of achieving a photo-etched printmaking technique in 1811. He knew that the acid-resistant Bitumen of Judea used in etching hardened with exposure to light. In experiments he coated it on plates of glass, zinc, copper and silver-surfaced copper, pewter and limestone (lithography), and found that the surface exposed to the most light resisted dissolution in oil of lavender and petroleum, so that the uncoated shadow areas could be traditionally treated through acid etching and aquatint to print black ink. By 1822 he had made the first light-resistant heliographic copy of an engraving, made without a lens by placing the print in contact with the light-sensitive plate. From 1826 he increasingly used pewter plates because their reflective surface made the image more clearly visible. Niépce prepared a synopsis of his experiments in November 1829: On Heliography, or a method of automatically fixing by the action of light the image formed in the camera obscura, which outlines his intention to use his "Heliographic" method of photogravure or photolithography as a means of making lithographic, intaglio or relief master plates for multiple printed reproductions in ink. Although heliography did not achieve his intentions during Niépce's lifetime, it was further developed by his cousin Claude Félix Abel Niépce de Saint-Victor; in 1855, with the help of the copper engraver Lemaître, he succeeded in etching the heliographs and producing prints from them, laying the foundation for later photoengraving processes.
== Camera pictures == After his return from London, Niépce concentrated on making camera images, which, aware of their commercial potential, he ambiguously called “points de vue” in his letters to his brother. In 1816 he had limited success with light-sensitive paper coated with muriate (or chloride) of silver placed in a homemade camera obscura, producing impressions of views out of his workroom window. However, the images were not permanent. It is certain that in the summer of 1826 Niépce succeeded for the first time in creating permanent photographic images projected by a lens onto the plate inside a camera obscura. Georges Potonniée asserts, based on the Niépce brothers' correspondence, that the first such image was produced as early as 1822. The process used was low in sensitivity; Helmut Gernsheim estimated the exposure time might be eight hours, while Marignier, based on his attempts to recreate the technique, as well as evidence from Niépce’s letters, considered three or more days more likely. == Precursor to the daguerreotype == The exposed and solvent-treated plate itself, as in the case of View from the Window at Le Gras, rediscovered by Gernsheim, presents a negative or positive image depending on ambient reflection off the 20.3 × 16.5 centimetre pewter plate. By viewing the plate at an appropriate angle the viewer sees the shadow areas reflecting dark in contrast to the lighter film of bitumen, producing a legible, if elusive, positive picture of buildings, a tree, and the landscape beyond. In this regard it was not unlike the daguerreotype, which itself was based on Niépce's discoveries taken up by Daguerre, who in 1826 had heard through the Parisian opticians Charles and Vincent Chevalier that Niépce, who purchased sophisticated lenses from them, had been using bitumen of Judea to print images on pewter. By then, Niépce had begun using iodine vapors to darken the light parts of camera images produced on silver plates, rendering a positive image. 
Daguerre and Niépce corresponded, each hesitant to divulge the extent of his progress to the other. == Partnership with Daguerre == After both felt they could develop their work more quickly in collaboration, they formed a company on 14 December 1829. Daguerre preferred the “negative” image obtained on bitumen, and together they invented a new process that rendered a single, unique image, the physautotype, which exploited the photosensitivity of the residue from oil of lavender dissolved in alcohol, resulting in an image that, like the daguerreotype, appeared either positive or negative depending on the angle of reflected light. Daguerre continued to perfect the process to render a unique image using iodine, not to intensify the image, but because of its photosensitivity when applied to silver plates as a vapor. This led Daguerre to the daguerreotype process, in which mercury fumes brought out the latent image in the silver iodide on plates exposed to light in a camera. Daguerre probably produced his first successful daguerreotypes as early as 1834 and, after Niépce’s death, entered a new partnership with Niépce’s son, Isidore, on 9 May 1835, changing the name from “Niépce-Daguerre” to “Daguerre and Isidore Niépce.” On 27 September 1835 he announced the invention as his own in the Journal des artistes. Daguerre’s highly successful eponymous process, in the specific chemicals and materials used, thus emerged directly out of his partnership with Niépce, whose own discoveries, never fully realised, sank into relative obscurity. == Chemistry == Bitumen has a complex and varied structure of polycyclic aromatic hydrocarbons (linked benzene rings), containing a small proportion of nitrogen and sulphur; its hardening in proportion to its exposure to light is understood to be due to further cross-linking of the rings, as is the hardening of tree resins (colophony, or abietic acid) by light, first noted by Jean Senebier in 1782. 
The photochemistry of these processes, which has been studied by Jean-Louis Marignier of Université Paris-Sud since the 1990s, is still to be fully understood. == Alternative meanings == The word has also been used to refer to other phenomena: for description of the sun (cf. geography), for photography in general, for signalling by heliograph (a device less commonly called a heliotrope or helio-telegraph), and for photography of the sun. Although named “héliographie” by Niépce, in the later 19th century “heliography” was used generally for all “sun-printing”, with “heliographic processes” coming to mean specifically the reprographic copying of line, rather than continuous-tone, images. The abbreviations héliog. or héliogr., found on old reproductions, may stand for the French word héliogravure, and can then refer to any form of photogravure. == Other early photographic procedures == Physautotype (around 1832) Daguerreotype (around 1835) Calotype (also Talbotype, around 1835) Ambrotype (around 1850) Ferrotype (tintype; around 1850) Collodion wet plate (around 1850) Wothlytype (1864) == Notes == == References == === Other Sources === Art & Architecture Thesaurus, s.v. "heliography". Accessed 10 December 2007. Harry Ransom Center. The University of Texas at Austin. The First Photograph. Accessed 10 December 2007. An Improved Method in the Art of Signalling for Military & Scientific Purposes (1887). Accessed 1 June 2008.
Wikipedia/Heliography
Heinrich Hoffmann (12 September 1885 – 16 December 1957) was Adolf Hitler's official photographer, and a Nazi politician and publisher, who was a member of Hitler's inner circle. Hoffmann's photographs were a significant part of Hitler's propaganda campaign to present himself and the Nazi Party as a significant mass phenomenon. He received royalties from all uses of Hitler's image, which made him a millionaire over the course of Hitler's rule. After the Second World War he was tried and sentenced to 10 years in prison for war profiteering. He was classified by the Allies' Art Looting Investigators as a "major offender" in Nazi art plundering of Jews, as both art dealer and collector, and his art collection, which contained many artworks looted from Jews, was ordered confiscated by the Allies. Hoffmann's sentence was reduced to 4 years on appeal, and he was released from prison in 1950. In 1956, the Bavarian State ordered all art under its control and formerly possessed by Hoffmann to be returned to him. == Early life == Hoffmann was born in Fürth and grew up in Regensburg. He trained as a photographer from 1901 to 1903, in the studio of his father Robert (born 1860) and his uncle Heinrich (1862–1928). Until 1909, he found employment in Heidelberg, Frankfurt am Main, Bad Homburg, Switzerland, France and England. In 1909 he founded a photographic studio on Schellingstraße in Munich and started to work as a press photographer. In 1913, he founded the image agency Photobericht Hoffmann. In 1917, Hoffmann was conscripted into the German Army and served in France as a photo correspondent with the Bavarian Fliegerersatz-Abteilung I. In 1919, he joined the Bavarian Einwohnerwehren, a right-wing citizens' militia. That year he witnessed the short-lived post-war Bavarian Soviet Republic in Munich, and published a collection of photographs he had taken as Ein Jahr Bayrische Revolution im Bilde ("One Year of Bavarian Revolution in Pictures"). 
The accompanying text, by Emil Herold, suggested a connection between the "Jewish features" shown in the photographs and the subjects' left-wing policies. === Odeonsplatz picture === A noted photograph, taken by Hoffmann in Munich's Odeonsplatz on 2 August 1914, apparently shows a young Hitler among the crowd cheering the outbreak of World War I. The photo was later used in Nazi propaganda, although its authenticity has been questioned. Hoffmann claimed that he only discovered Hitler in the photograph in 1929, after the Nazi leader had visited the photographer's studio. Learning that Hoffmann had photographed the crowd in the Odeonsplatz, Hitler told Hoffmann that he had been there, and Hoffmann said he then searched the glass negative of the image until he found Hitler. The photograph was published in the 12 March 1932 issue of the Illustrierter Beobachter ("Illustrated Observer"), a Nazi newspaper. After the war, the glass negative could not be found. Footage of the event from a similar angle has also been claimed to show Hitler, but there is no evidence he adopted a toothbrush moustache before the war. In 2010, historian Gerd Krumeich, a German expert on the First World War, came to the conclusion that Hoffmann had doctored the image. Krumeich examined other images of the rally and was unable to find Hitler in the place where Hoffmann's photograph placed him. Also, in a different version of Hoffmann's photo in the Bavarian State Archive, Hitler looks like a different man than in the published image. As a result of the doubt raised by those considerations, the curators of a 2010 Berlin exhibition about Hitler's influence inserted a notice saying that the image's authenticity could not be verified. == Serving Hitler's regime == Hoffmann met Hitler in 1919 and joined the Nazi Party on 6 April 1920. He participated in the Beer Hall Putsch as a photographic correspondent. 
While the Nazi Party was banned in 1923, Hoffmann joined the ephemeral Großdeutsche Volksgemeinschaft then rejoined the Nazi Party in 1925. The following year he co-founded the Illustrierter Beobachter. In November 1929, he represented the Nazi Party in the district assembly of Upper Bavaria and, from December 1929 to December 1933, he served as a city councillor of Munich. After Hitler had taken control of the party in 1921, he named Hoffmann his official photographer, a post he held for over a quarter-century. No other photographer but Hoffmann was allowed to take pictures of Hitler. Hoffmann himself was forbidden to take candid shots. Once, at the Berghof, Hitler's mountain retreat, Hoffmann took a picture of Hitler playing with his mistress Eva Braun's terrier. Hitler told Hoffmann that he could not publish the picture, because "a statesman does not permit himself to be photographed with a little dog. A German sheepdog is the only dog worthy of a real man". Hitler strictly controlled his public image in all respects, having himself photographed in any new suit before he would wear it in public, according to Hoffmann, and ordering in 1933 that all images of himself wearing lederhosen be withdrawn from circulation. He also expressed his disapproval of Benito Mussolini allowing himself to be photographed in his bathing suit. The attempt by Hoffmann to portray Hitler as the epitome of the German people was difficult because Hitler lacked the 'racial profile' of the supposed Nordic race (i.e. tall with blonde hair), which the Nazi New Order sought to impose. Hoffmann tried to portray Hitler in the best light by focusing more on his eyes, which many found dreamy and hypnotic. Hoffmann's photographs were a significant part of Hitler's propaganda campaign to present himself and the Nazi Party as a significant mass phenomenon. 
In 1926, Hoffmann's images of the Party's rally in Weimar in Thuringia – one of the few German states in which Hitler was not banned from speaking at the time – showed the impressive march-past of 5,000 stormtroopers, saluted by Hitler for the first time with the straight-armed "Roman" or Fascist salute. Those pictures were printed in the main Nazi newspaper, the Völkischer Beobachter, and distributed by the thousands throughout Germany. That rally was the progenitor of the Party's mass rallies, which were staged quasi-annually in Nuremberg. Later, Hoffmann's book, The Hitler Nobody Knows (1933), was an important part of Hitler's strenuous effort to manipulate and control his public image. Hitler and Hoffmann became close friends, a friendship cemented by Hoffmann's absolute loyalty and lack of political ambition. Historian Alan Bullock succinctly described Hoffmann as an "earthy Bavarian with a weakness for drinking parties and hearty jokes", who "enjoyed the licence of a court jester" with Hitler. Hoffmann later recalled that his lack of rank preserved his access to Hitler. Hoffmann was part of the small party which drove to Landsberg Prison to meet Hitler when he was released from prison on parole on 20 December 1924, and he took Hitler's picture. Later, Hoffmann often dined with Hitler at the Berghof or at the Führer's favorite restaurant in Munich, the Osteria Bavaria, gossiping with him and sharing stories about the painters from Schwabing that Hoffmann knew. He accompanied Hitler on his unprecedented election campaign by air during the presidential election against Field Marshal Paul von Hindenburg in 1932. In the autumn of 1929, Hoffmann and his second wife Erna introduced his Munich studio assistant, Eva Braun, to Hitler. 
According to Hoffmann, Hitler thought she was "an attractive little thing" – Hitler preferred women to be seen and not heard – but Braun actively pursued him, telling her friends that Hitler was in love with her and claiming she would get him to marry her. Hoffmann reported, however, that even though Braun eventually became a resident of the Berghof – after the death of Geli Raubal (see below) – and was then constantly at Hitler's side during the times he was with his private entourage, she was not immediately his mistress. He believed that this did happen at some point, even though Hitler's outward attitude to her never changed. Ultimately, to the surprise of his intimate circle, Hitler married Braun in the Führerbunker in Berlin on 29 April 1945, and the couple committed suicide together the following day. On 17 September 1931, Hitler was with Hoffmann on a trip from Munich to Hamburg when the Führer got word that his niece, Geli Raubal – whom he adored and who accompanied him to almost all social events – had committed suicide by shooting herself. In his post-war memoir, Hitler Was My Friend, Hoffmann expressed the opinion that Raubal killed herself because she was in love with someone other than Hitler, and could not take Hitler's rabidly jealous control of her life, especially after he found out that she had had an affair with Emil Maurice, Hitler's old comrade and chauffeur. When Hitler became the dictator of Germany, Hoffmann was the only person authorized to take official photographs of him. He adopted the title Reichsbildberichterstatter (Reich Picture Reporter) and his company "Heinrich Hoffmann, Verlag Nationalsozialistischer Bilder" (Publisher of National Socialist Pictures) became the largest private company of its kind, after the existing press agencies were nationalized. The company had two divisions, one which supplied editorial photographs, and the other which published photo-propaganda books. 
The manager of the company was Michael Bauer (born 1883) of Munich, but Hoffmann was the sole shareholder. The company steadily expanded, opening multiple branches. Hoffmann's photographs were published as postage stamps, postcards, posters and picture books, making him a millionaire. Hoffmann's companies, which employed 300 people at their peak, had a turnover of 1 million Reichsmark in 1935, and 15 million or 58 million Reichsmark in 1943 (equivalent to €237,000,000 in 2021). Hitler received a royalty on all postage stamps featuring his image, which went to his Cultural Fund, instituted in 1937. This amounted to at least 75 million marks over the course of Hitler's reign. When photographing other subjects, Hoffmann was represented by Schostal Photo Agency (Agentur Schostal). During the Third Reich Hoffmann assembled many photo-books on Hitler, such as The Hitler Nobody Knows (1933) – a book that Ron Rosenbaum calls "central to Hitler's extremely shrewd, extremely well-controlled effort to manipulate his image ... to turn his notoriously non-Nordic-looking foreignness, his much-remarked-upon strangeness, into assets to his charisma" – and Jugend um Hitler ("Youth Around Hitler") in 1934. In 1938 Hoffmann wrote three books, Hitler in Italy, Hitler befreit Sudetenland ("Hitler Liberates Sudetenland") and Hitler in seiner Heimat ("Hitler in his Homeland"). His Mit Hitler im Westen ("With Hitler in the West") was published in 1940. His final book of this period, Das Antlitz des Führers ("The Face of the Führer"), was written shortly before the outbreak of the Second World War. In 1936 he had effectively seized control of stereographer Otto Schönstein's publishing house, Raumbild-Verlag, which put him in charge of all mass-market stereoscopic (3D) photography in Germany until the end of the Second World War. 
The personal esteem Hitler held for Hoffmann is indicated by the fact that, in 1935, he allowed the photographer to issue a limited edition of a portfolio of seven paintings Hitler had made during World War I, even though since becoming Chancellor he had downplayed his desire to become a painter in his youth. In later years, Hitler forbade any publication of or commentary about his work as a painter. Also in 1935, for Hoffmann's 50th birthday, Hitler gave the photographer one of his own paintings of the courtyard of the Alte Residenz ("Old Royal Palace") in Munich, a favorite subject of Hitler's, and one he had painted many times when he was a struggling artist. Hoffmann came to own at least four of Hitler's watercolors. One was purchased in 1944, which provoked Hitler to remark that it would have been "insane" to have paid more than 150 or 200 marks for it, at most. The pictures were seized by the U.S. Army at the end of the war, and were never returned to Germany. In 1937, after the selection jury had outraged and angered Hitler with their choices for the first Great German Art Exhibition to inaugurate the opening of the House of German Art in Munich, he dismissed the panel and put Hoffmann in charge. That dismayed the artistic community, who felt that Hoffmann was unqualified for the role. Frederic Spotts, in Hitler and the Power of Aesthetics, describes Hoffmann as "an alcoholic and cretin who knew little more about painting than did the average plumber". Hoffmann's answer to his critics was that he knew what Hitler wanted and what would appeal to him. Nevertheless, even some of Hoffmann's choices were dismissed from the exhibition by Hitler. A room full of somewhat more modern paintings which Hoffmann had selected as possibilities was angrily dismissed by Hitler with a gesture. Hoffmann remained in charge for subsequent annual Great German Art Exhibitions, making the preliminary selections which were then hung for Hitler to approve or veto. 
Hoffmann preferred the conventional work of painters from southern Germany, what Propaganda Minister Joseph Goebbels called in his diary "Munich-school kitsch", over that of the more experimental painters from the north. In May 1938, when Hitler decreed the "Law for the Confiscation of the Products of Degenerate Art" – which retroactively justified the Nazis' confiscation, without payment, of modern art from museums and galleries for the exhibition of "Degenerate Art" mounted in Munich in July 1937, and allowed for the further unpaid removal of such art from institutions and individuals – Hoffmann was one of the commissioners named to centralize the condemnation and confiscation process, along with chairman Adolf Ziegler, President of the Reich Chamber for Visual Arts, the art dealer Karl Haberstock, and others. A year later, Goebbels brought the commission into his Ministry and restaffed it to include more art dealers, since the sale of the confiscated works internationally was a source of hard currency for the Nazi regime – although not as much as was expected, since the knowledge that the Nazis were putting large numbers of the artworks up for sale depressed their market value. When auctions were halted as war approached, there were still over 12,000 works stored in warehouses which the commission Hoffmann sat on had condemned as artistically worthless. Hitler personally inspected these, and refused to allow them to be returned to the collections from which they had been confiscated. The result was the burning of 1,004 oil paintings and 3,825 other works in the courtyard of Berlin's central fire station, on 20 March 1939. Along with sculptor Arno Breker, stage designer Benno von Arent, architect Gerdy Troost, and museum director Hans Posse, Hoffmann was one of the few people whose artistic judgment Hitler trusted. 
He bestowed the honorific title of "Professor" on Hoffmann in 1938, something he did for many of his favorites in the arts, such as architects Albert Speer and Hermann Giesler, and sculptors Breker and Josef Thorak. Hoffmann accompanied Hitler on his state visit to Italy in 1938, in which the Führer was much taken by the beauty of the Italian cities of Rome, Naples and Florence and the artworks and architecture they contained. Hoffmann (with von Ribbentrop's photographer Helmut Laux) was in the party that went to the Soviet Union when Foreign Minister Joachim von Ribbentrop secretly negotiated the Non-Aggression Treaty with Vyacheslav Molotov in 1939, which enabled Hitler to invade Poland. Hitler specifically asked Hoffmann to take a close-up photograph of Stalin's earlobes, by which he thought he could determine if the Soviet leader was Jewish or not. Earlobes that were "attached" would indicate Jewish blood, while those that were "separate" would be Aryan. Hoffmann took the requisite image, and Hitler determined, to his own satisfaction, that Stalin was not Jewish. Hitler would not allow Hoffmann to publish photographs of Stalin if he was smoking a cigarette, deeming it inappropriate for a leader of Stalin's status to be shown in that way. Besides introducing him to Eva Braun, Hoffmann also introduced Hitler to art dealer Maria Almas Dietrich, who used that connection to sell hundreds of paintings to Hitler himself, for the collection of Hitler's planned Führermuseum in his hometown of Linz, Austria, as well as to other high-ranking Nazis, and to various German museums. In 1941, Hoffmann was chief among the many Nazi chieftains who took advantage of the occupation of the Netherlands to buy paintings and other artworks from Dutch dealers, sometimes at inflated prices. That drove the art market up, much to the consternation of Hans Posse, who had been commissioned by Hitler to assemble a collection for the planned museum. 
Posse appealed to Hitler to put a stop to it, but Hitler refused the request. Hoffmann was also the person who recommended Dr. Theodor Morell to Hitler for treatment of his eczema. Morell, who was a member of the Nazi Party, became Hitler's personal physician and treated him for numerous complaints with a panoply of drugs, including amphetamines, cocaine, oxycodone, barbiturates, morphine, strychnine and testosterone, which may have contributed to Hitler's degraded physical condition by the end of the war. In January 1940, Hoffmann was appointed as a member of the Nazi German Reichstag for electoral constituency 22, Düsseldorf East. After the passage of the Enabling Act of 1933, the Reichstag had become a powerless entity with little function except to serve as a stage setting for some of Hitler's policy speeches. After about 1941, Hoffmann began to lose favor with Hitler, primarily because Martin Bormann, Hitler's personal secretary, did not like him. Bormann increasingly controlled access to Hitler, and fed him misinformation and innuendo about any rivals for Hitler's attention, such as Hoffmann. Another reason for Hitler's disfavour was Hoffmann's increasing reliance on alcohol. By 1945, Hoffmann was an alcoholic. == Later life == Hoffmann was arrested by the United States Army on 10 May 1945. He was tried by a denazification court for war profiteering. Hoffmann was classified as a "major offender" in January 1947 by the Munich Spruchkammer, sentenced to 10 years in prison, and had his entire fortune confiscated. Werner Friedman called him one of the "greediest parasites of the Hitler plague." On appeal, Hoffmann's sentence was reduced to four years, because of his lack of official position within the Third Reich. Hoffmann figures prominently in the OSS Art Looting Investigation Unit's Reports of 1945–46; Detailed Intelligence Report DIR N°1 carries his name. Hoffmann was released from prison on 31 May 1950, and some of his assets were returned to him. 
He settled in the small village of Epfach in southern Bavaria. In 1954 a ten-part autobiographical series, "Hoffmann's Tales", was published in the "Münchner Illustrierte", the result of interviews by journalist Joe Heydecker, later collected as a book in 2008. Hoffmann published his memoirs in London in 1955 under the title Hitler Was My Friend. In 1956, the Bavarian State ordered all art under its control and formerly possessed by Hoffmann to be returned to him. He died in 1957 at the age of 72. Hoffmann's widow, Erna, continued to live in Epfach together with the former silent-movie star Wera Engels. == Family == Hoffmann married Therese "Lelly" Baumann (1886–1928), who was very fond of Hitler, in 1911. Their daughter Henriette ("Henny") was born on 2 February 1913 and was followed by a son, Heinrich ("Heini"), on 24 October 1916. Henriette married National Hitler Youth Leader Baldur von Schirach, who provided introductions to many of Hoffmann's picture books, in 1932. Therese Hoffmann died a sudden and unexpected death in 1928. Hoffmann remarried shortly afterwards in 1929; his second wife was composer Erna Gröbke (1904–1996). == Photographic archive == The central image archive of Heinrich Hoffmann's company was seized by the US Army at the end of the war. At this point, the archive comprised about 500,000 photographs (an often-quoted figure of 2.5 million is probably too high). In 1950, most of the archive was taken by the US Army's historical division to the United States, where it was given to the US National Archives and Records Administration. The collection of 280,000 images remains an important source for scholars of Nazi Germany. These photographs are in the public domain in the US owing to their status as seized Nazi property; otherwise their copyrights would first expire on 1 January 2028. These photos were later the subject of a lawsuit, Price v. United States. A smaller part of the photo archive remained in the possession of the Hoffmann family. 
Hoffmann's son Heinrich Jr sold some photographs through the "Contemporary Image Archive" which he founded. The remaining collection was sold to the Bavarian State Library (Bayerische Staatsbibliothek) in Munich, in 1993. Other smaller collections exist, controlled by Getty Images, the archive of the Austrian Resistance in Vienna, the German National Museum in Nuremberg, the "Library for Contemporary History" in Stuttgart, the German Historical Museum in Berlin, and the German Federal Archives. === Secret photos of Hitler === Nine photographs taken by Hoffmann reveal how Adolf Hitler rehearsed poses and hand gestures for his public speeches. He asked Hoffmann to take these shots so he could see what he would look like to his audience, then used them to help shape his performances, which he was constantly refining. Hitler asked that the photographs be destroyed, a request which Hoffmann did not honor. == Postwar claims for Nazi-looted art == Many artworks looted from persecuted Jewish collectors passed through Hoffmann's hands. Restitution claims were met with resistance. In 2020, following years of negotiations, Jan van der Heyden's painting View of a Dutch Square was restituted to the heirs of Gottlieb and Mathilde Kraus, who fled Vienna in March 1938. Hoffmann had received it as a gift under the Nazis. After the war Bavaria made no attempt to return the work to the Kraus family, instead selling it for little money in 1962 to Hoffmann's daughter, Henriette Hoffmann-von Schirach. == References == Informational notes Citations Bibliography Bullock, Alan (1962). Hitler: A Study in Tyranny. London: Penguin. LCCN 63005065. Bullock, Alan (1992). Hitler and Stalin: Parallel Lives. New York: Knopf. ISBN 0-394-58601-8. Evans, Richard J. (2005). The Third Reich in Power. Penguin Books. ISBN 0-14-303790-0. Fest, Joachim C. (1970) [1963]. The Face of the Third Reich. Translated by Bullock, Michael. New York: Penguin. ISBN 978-0201407143. Fest, Joachim C. (1975) [1973]. Hitler. 
Translated by Winston, Richard; Winston, Clara. New York: Vintage Books. ISBN 0-394-72023-7. Joachimsthaler, Anton (1999) [1995]. The Last Days of Hitler: The Legends, the Evidence, the Truth. Bögler, Helmut (trans.). London: Brockhampton Press. ISBN 978-1-86019-902-8. Kershaw, Ian (2008). Hitler: A Biography. New York: W. W. Norton & Company. ISBN 978-0-393-06757-6. Lilla, Joachim (2004). Statisten in Uniform: Die Mitglieder des Reichstags 1933–1945. Ein biographisches Handbuch unter Einbeziehung der völkischen und nationalsozialistischen Reichstagsabgeordneten ab Mai 1924 [Extras in Uniform: The Members of the Reichstag 1933–1945: A Biographical Handbook: including the Völkisch and National Socialist Deputies from May 1924] (in German). Dusseldorf, Germany: Droste Verlag. ISBN 978-3-7700-5254-7. Rosenbaum, Ron (1998). Explaining Hitler. New York: HarperPerennial. ISBN 0-06-095339-X. Spotts, Frederic (2002). Hitler and the Power of Aesthetics. Woodstock, New York: Overlook Press. ISBN 1-58567-345-5. Stockhorst, Erich (1985). 5000 Köpfe: Wer War Was im 3. Reich. Arndt. ISBN 978-3-88741-116-9. == External links == Newspaper clippings about Heinrich Hoffmann in the 20th Century Press Archives of the ZBW Fotoarchiv Heinrich Hoffmann in the Bavarian State Library (database with 70,000 indexed and digitized images)
Wikipedia/Heinrich_Hoffmann_(photographer)
User-generated content (UGC), alternatively known as user-created content (UCC), emerged from the rise of web services which allow a system's users to create content, such as images, videos, audio, text, testimonials, and software (e.g. video game mods), and to interact with other users. Online content aggregation platforms such as social media, discussion forums and wikis, by their interactive and social nature, no longer merely distribute multimedia content but provide tools to produce, collaborate on, and share a variety of content, which can affect the attitudes and behaviors of the audience in various ways. This transforms the role of consumers from passive spectators to active participants. User-generated content is used for a wide range of applications, including problem processing, news, entertainment, customer engagement, advertising, gossip, research and more. It is an example of the democratization of content production and the flattening of traditional media hierarchies. The BBC adopted a user-generated content platform for its websites in 2005, and TIME Magazine named "You" as the Person of the Year in 2006, referring to the rise in the production of UGC on Web 2.0 platforms. CNN also developed a similar user-generated content platform, known as iReport. There are other examples of news channels implementing similar protocols, especially in the immediate aftermath of a catastrophe or terrorist attack. Social media users can provide key eyewitness content and information that may otherwise have been inaccessible. Since 2020, an increasing number of businesses have been using user-generated content to promote their products and services. Several factors significantly influence how UGC is received, including the quality of the content, the credibility of the creator, and viewer engagement. These elements can impact users' perceptions of and trust in the brand, as well as influence the buying intentions of potential customers. 
UGC has proven to be an effective method for brands to connect with consumers, drawing their attention through the sharing of experiences and information on social media platforms. Due to new media and technology affordances, such as low cost and low barriers to entry, the Internet is an easy platform on which to create and dispense user-generated content, allowing the dissemination of information at a rapid pace in the wake of an event. == Definition == The advent of user-generated content marked a shift among media organizations from creating online content to providing facilities for amateurs to publish their own content. User-generated content has also been characterized as citizen media, as opposed to the "packaged goods media" of the past century. Citizen media is audience-generated feedback and news coverage. People give their reviews and share stories in the form of user-generated and user-uploaded audio and user-generated video. The former is a two-way process, in contrast to the one-way distribution of the latter. Conversational or two-way media is a key characteristic of so-called Web 2.0, which encourages the publishing of one's own content and commenting on other people's content. The role of the passive audience, therefore, has shifted since the birth of new media, and an ever-growing number of participatory users are taking advantage of these interactive opportunities, especially on the Internet, to create independent content. Grassroots experimentation then generated innovations in sounds, artists, techniques, and associations with audiences, which are then used in mainstream media. The active, participatory, and creative audience is prevailing today with relatively accessible media, tools, and applications, and its culture is in turn affecting mass media corporations and global audiences. 
The Organisation for Economic Co-operation and Development (OECD) has defined three core variables for UGC:

Accessible content: User-generated content is publicly produced through platforms located on the Internet and is available to any individual browsing such a publicly accessible website or a public social media account. In other contexts, users must belong to a community or closed group to access and publish on such platforms (for example, wikis). This distinction reflects the fact that although the content is accessible to the audience, there can be restrictions on the users who generate it.

Creative effort: Creative effort was put into creating the work or adapting existing works to construct a new one; i.e. users must add their own value to the work. UGC often also has a collaborative element to it, as is the case with websites that users can edit collaboratively. For example, merely copying a portion of a television show and posting it to an online video website (an activity frequently seen on UGC sites) would not be considered UGC, whereas uploading photographs, expressing one's thoughts in a blog post or creating a new music video could be. The minimum amount of creative effort is hard to define and depends on the context.

Creation outside of professional routines and practices: User-generated content is generally created outside of professional routines and practices. It often does not have an institutional or commercial market context. In extreme cases, UGC may be produced by non-professionals without the expectation of profit or remuneration. Motivating factors include connecting with peers, achieving a certain level of fame, notoriety, or prestige, and the desire to express oneself.

== Media pluralism ==
According to Cisco, in 2016 an average of 96,000 petabytes was transferred monthly over the Internet, more than twice as much as in 2012.
In 2016, the number of active websites surpassed 1 billion, up from approximately 700 million in 2012. This means the content that we and others currently have access to is more diverse and distinctive than ever before. Reaching 1.66 billion daily active users in Q4 2019, Facebook has emerged as the most popular social media platform globally. Other social media platforms are also dominant at the regional level, such as Twitter in Japan, Naver in the Republic of Korea, Instagram (owned by Facebook) and LinkedIn (owned by Microsoft) in Africa, VKontakte (VK) and Odnoklassniki (eng. Classmates) in Russia and other countries in Central and Eastern Europe, and WeChat and QQ in China. However, a concentration phenomenon is occurring globally, giving dominance to a few online platforms. Other platforms have become popular for unique features they provide, most commonly the added privacy they offer users through disappearing messages or end-to-end encryption (e.g. WhatsApp, Snapchat, Signal, and Telegram), but they have tended to occupy niches and to facilitate exchanges of information that remain largely invisible to wider audiences. Production of freely accessible information has been increasing since 2012. In January 2017, Wikipedia had more than 43 million articles, almost twice as many as in January 2012. This corresponded to a progressive diversification of content and an increase in contributions in languages other than English. In 2017, less than 12 percent of Wikipedia content was in English, down from 18 percent in 2012. Graham, Straumann, and Hogan say that the increase in the availability and diversity of content has not radically changed the structures and processes for the production of knowledge. For example, while content on Africa has dramatically increased, a significant portion of this content has continued to be produced by contributors operating from North America and Europe, rather than from Africa itself.
== History ==
The massive, multi-volume Oxford English Dictionary was composed largely of user-generated content. In 1857, Richard Chenevix Trench of the London Philological Society sought public contributions throughout the English-speaking world for the creation of the first edition of the OED. As Simon Winchester recounts: So what we're going to do, if I have your agreement that we're going to produce such a dictionary, is that we're going to send out invitations, we're going to send these invitations to every library, every school, every university, every book shop that we can identify throughout the English-speaking world... everywhere where English is spoken or read with any degree of enthusiasm, people will be invited to contribute words. And the point is, the way they do it, the way they will be asked and instructed to do it, is to read voraciously and whenever they see a word, whether it's a preposition or a sesquipedalian monster, they are to... if it interests them and if where they read it, they see it in a sentence that illustrates the way that that word is used, offers the meaning of the day to that word, then they are to write it on a slip of paper... the top left-hand side you write the word, the chosen word, the catchword, which in this case is 'twilight'. Then the quotation, the quotation illustrates the meaning of the word. And underneath it, the citation, where it came from, whether it was printed or whether it was in manuscript... and then the reference, the volume, the page and so on... and send these slips of paper, these slips are the key to the making of this dictionary, into the headquarters of the dictionary. In the following decades, hundreds of thousands of contributions were sent to the editors. In the 1990s several electronic bulletin board systems were based on user-generated content. Some of these systems have been converted into websites, including the film information site IMDb, which started as rec.arts.movies in 1990.
With the growth of the World Wide Web the focus moved to websites, several of which were based on user-generated content, including Wikipedia (2001) and Flickr (2004). User-generated Internet video was popularized by YouTube, an online video platform founded by Chad Hurley, Jawed Karim and Steve Chen in April 2005. It enabled the video streaming of MPEG-4 AVC (H.264) user-generated content from anywhere on the World Wide Web. The BBC set up a pilot user-generated content team in April 2005 with three staff. In the wake of the 7 July 2005 London bombings and the Buncefield oil depot fire, the team was made permanent and expanded, reflecting the arrival in the mainstream of the citizen journalist. After the Buncefield disaster the BBC received over 5,000 photos from viewers. The BBC does not normally pay for content generated by its viewers. In 2006, CNN launched CNN iReport, a project designed to bring user-generated news content to CNN. Its rival Fox News Channel launched a similarly titled project to bring in user-generated news, "uReport". This was typical of major television news organizations in 2005–2006, who realized, particularly in the wake of the London 7 July bombings, that citizen journalism could now become a significant part of broadcast news. Sky News, for example, regularly solicits photographs and videos from its viewers. User-generated content was featured in Time magazine's 2006 Person of the Year, in which the person of the year was "you", meaning all of the people who contribute to user-generated media, including YouTube, Wikipedia and Myspace. A precursor to user-generated content uploaded on YouTube was America's Funniest Home Videos.

== Motivation for creating UGC ==
The benefits derived from user-generated content for the content host are clear: they include low-cost promotion, a positive impact on product sales, and fresh content. However, the benefit to the contributor is less direct.
There are various theories behind the motivation for contributing user-generated content, ranging from altruistic to social to materialistic. Due to the high value of user-generated content, a number of sites use incentives to encourage its creation. These incentives can be broadly categorized into implicit incentives and explicit incentives. Sometimes, users are also given monetary incentives to encourage them to create captivating and inspiring UGC.

Implicit incentives: Implicit incentives are not based on anything tangible and relate to users' motivations for creating and sharing content. Value motives are extrinsic purposes directly linked to sharing useful information and exchanging opinions about something relevant to the community. Likewise, users are motivated to solve a specific problem with the help of the shared knowledge of other users interacting on platforms such as YouTube, Instagram, TikTok, and Twitter. For example, a user creates a video on TikTok asking how to use a product, and other users respond by sharing their experiences. On the other hand, users can be socially motivated, through a social reward such as badges within social platforms. These badges are earned when users reach a certain level of participation and may or may not come with additional privileges. Yahoo! Answers is an example of this type of social incentive. The desire for social recognition, such as popularity or respect within a community, is closely tied to personal fulfillment and the enhancement of one's social standing. Social incentives cost the host site very little and can catalyze vital growth; however, their very nature requires a sizable existing community before they can function. Naver Knowledge-iN is another example of this type of social incentive: it uses a point system to encourage users to answer more questions by awarding points.
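A point-and-badge scheme like the one the text attributes to Naver Knowledge-iN can be sketched in a few lines. This is a minimal illustration only; the point values and badge thresholds below are hypothetical, not Naver's actual rules:

```python
# Hypothetical point values and badge thresholds for a Q&A site.
POINTS_PER_ANSWER = 10
POINTS_PER_ACCEPTED_ANSWER = 25   # bonus when the asker accepts an answer
BADGE_THRESHOLDS = [(0, "newcomer"), (100, "contributor"), (500, "expert")]

def badge_for(points: int) -> str:
    """Return the highest badge whose point threshold has been reached."""
    earned = BADGE_THRESHOLDS[0][1]
    for threshold, name in BADGE_THRESHOLDS:
        if points >= threshold:
            earned = name
    return earned

# A user with 12 answers, 2 of them accepted, holds 170 points
# and has therefore earned the "contributor" badge.
points = 12 * POINTS_PER_ANSWER + 2 * POINTS_PER_ACCEPTED_ANSWER
```

Schemes like this work because the next badge is always visible and reachable, turning continued participation itself into the reward.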
The identification motivation has strong external standardization and internalization of behavioral goals, such as social identity; that is, users will follow certain subjective norms and images to constrain and guide their behavior. Integration has the strongest external standardization and goal internalization, and the agent often integrates its actual actions with the subjective norms of the environment, so it has the effect of self-restraint and self-realization, such as a sense of belonging.

Explicit incentives: These incentives refer to tangible rewards. Explicit incentives can be split into externality and projection. External motivation is more inclined toward economic and material incentives, such as a reward for engaging in a task, which has little internalization and lacks relevant external norms and constraints. Examples include financial payment, entry into a contest, a voucher, a coupon, or frequent traveler miles. Direct explicit incentives are easily understood by most users and have immediate value regardless of the community size; sites such as the Canadian shopping platform Wishabi and Amazon Mechanical Turk both use this type of financial incentive in slightly different ways to encourage user participation. The projective agent has some external norms, but the degree of internalization is not enough; that is, the norms have not been fully internalized by the actor. The drawback of explicit incentives is that they may subject the user to the overjustification effect, eventually believing the only reason for participating is the explicit incentive. This reduces the influence of social or altruistic motivation, making it increasingly costly for the content host to retain long-term contributors.
== Paid content ==
A growing subset of user-generated content in this field is paid UGC. It is primarily used by brands and businesses looking for organic content that leverages the authenticity, customer perspective, and trust associated with user-generated content for marketing purposes. According to several studies, a large percentage of millennials and younger consumers look up information on products through social media and see UGC before making a purchase decision. Research suggests 78% of millennials and 70% of Gen-Z rely on UGC to determine their purchasing decisions. Paid UGC is distinct from normal UGC in how it is created. It is created by a UGC creator, someone who creates authentic-looking content about a product or service at a brand's request. In return, they receive compensation in the form of monetary rewards, free products, discounts, exclusive access or other valuable incentives. It is not to be confused with influencer marketing. Unlike influencers, UGC creators focus on creating organic product reviews, and the content is not shared on their personal pages but on the company's page instead. Influencers, on the other hand, have a strong connection with their audience, showcasing branded content on their social media feeds and directly engaging with their followers. The structure of the work also differs, since influencer deals are more comprehensive and agreements include creating and distributing content across the influencer's personal platforms. However, it is possible for UGC creators to function as macro-influencers if they have 100k+ followers. In this case, they can accept influencer deals, where they post on their personal page in exchange for money, or UGC deals, where the brands post on their own page.
There are several ways in which paid UGC differs from non-paid UGC:

Incentive: paid UGC creators receive compensation for their contributions, while non-paid UGC is created voluntarily by customers.
Control: brands can give specific guidelines and request the content they want the user to make, ensuring it aligns with the marketing objective.
Posting channel: unpaid UGC is posted unsolicited by customers on their own profiles, while paid UGC is posted directly on the brand's profile.

Companies leveraging paid UGC see increased credibility on their platforms, as customers connect with creators who feel like everyday people facing similar challenges. By showcasing the product as a real solution to a relatable problem, UGC makes brands more trustworthy and authentic. With commercial ads, customers cannot put a face behind the high-production edits and do not connect with them. One survey suggests that UGC is 85% more effective at increasing conversion rates than studio content, which may help explain why companies increasingly use it in their social media strategies. Nevertheless, there are concerns about the authenticity of content published on social media, particularly with the increasing prevalence of paid user-generated content. Additionally, legal considerations such as copyright laws, privacy regulations and trademark protection play a role in content dissemination. As this field of work grows, there is potential for increased liability, particularly regarding disclosure requirements for paid content, and the rules will continue to evolve over time.

== Ranking and assessment ==
The distribution of UGC across the Web provides a high-volume data source that is accessible for analysis, and offers utility in enhancing the experiences of end users. Social science research can benefit from having access to the opinions of a population of users, and use this data to make inferences about their traits.
Applications in information technology seek to mine end-user data to support and improve machine-based processes, such as information retrieval and recommendation. However, processing the high volumes of data offered by UGC necessitates the ability to automatically sort and filter these data points according to their value. Determining the value of user contributions for assessment and ranking can be difficult due to the variation in the quality and structure of this data. The quality and structure of the data provided by UGC are application-dependent, and can include items such as tags, reviews, or comments that may or may not be accompanied by useful metadata. Additionally, the value of this data depends on the specific task for which it will be utilized and the available features of the application domain. Value can ultimately be defined and assessed according to whether the application will provide service to a crowd of humans, a single end user, or a platform designer. The variation of data and specificity of value have resulted in various approaches and methods for assessing and ranking UGC. The performance of each method essentially depends on the features and metrics that are available for analysis. Consequently, it is critical to understand the task objective and its relation to how the data is collected, structured, and represented in order to choose the most appropriate approach. The methods of assessment and ranking can be categorized into two classes: human-centered and machine-centered. Methods emphasizing human-centered utility consider the ranking and assessment problem in terms of the users and their interactions with the system, whereas machine-centered methods consider the problem in terms of machine learning and computation. The various methods of assessment and ranking can be classified into one of four approaches: community-based, user-based, designer-based, and hybrid.
Community-based approaches rely on establishing ground truth based on the wisdom of the crowd regarding the content of interest. The assessments provided by the community of end users are utilized to directly rank content within the system in human-centered methods. The machine-centered method applies these community judgments in training algorithms to automatically assess and rank UGC. User-based approaches emphasize the differences between individual users so that ranking and assessment can interactively adapt or be personalized given the particular requirements of each user. The human-centered approach accentuates interactive interfaces where users can define and redefine their preferences as their interests shift. On the other hand, machine-centered approaches model the individual user according to explicit and implicit knowledge gathered through system interactions. Designer-based approaches primarily use machine-centered methods to maximize the diversity of content presented to users in order to avoid constraining the space of topic selections or perspectives. The diversity of content can be assessed with respect to various dimensions, such as authorship, topics, sentiments, and named entities. Hybrid approaches seek to combine methods from the various frameworks in order to develop a more robust approach for assessing and ranking UGC. Approaches are most often combined in one of two ways: a crowd-based approach is used to identify hyperlocal content for a user-based approach, or a user-based approach is used to maintain the intent of a designer-based approach.
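Community-based, human-centered ranking of the kind described above is often implemented with a confidence-adjusted vote score. A minimal Python sketch (the item names and vote counts are hypothetical) uses the lower bound of the Wilson score interval, a common heuristic for ranking content by up/down votes without letting a handful of early votes dominate:

```python
import math

def wilson_lower_bound(upvotes: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the fraction of
    positive votes; items with few votes are penalized accordingly."""
    if total == 0:
        return 0.0
    phat = upvotes / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# Hypothetical items: (item id, upvotes, total votes).
items = [("a", 60, 100), ("b", 3, 3), ("c", 550, 1000)]
ranked = sorted(items, key=lambda i: wilson_lower_bound(i[1], i[2]), reverse=True)
# "b" has a perfect score but only 3 votes, so it ranks below the
# heavily voted "a" and "c".
```

Ranking by the interval's lower bound rather than the raw average is one way of encoding the "wisdom of the crowd" while remaining robust to contributions that have received very few judgments.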
== Types ==
There are a number of types of user-generated content: Internet forums, where people talk about different topics; blogs, services where users can post about multiple topics; product reviews on a supplier website or in social media; and wikis such as Wikipedia and Fandom, which allow users, sometimes including anonymous users, to edit the content. Another type of user-generated content is social networking sites like Facebook, Instagram, Tumblr, Twitter, Snapchat, Twitch, TikTok or VK, where users interact with other people via chatting, writing messages, posting images or links, and sharing content. Media hosting sites such as YouTube and Vimeo allow users to post content. Some forms of user-generated content, such as a social commentary blog, can be considered a form of citizen journalism.

=== Blogs ===
Blogs are websites created by individuals, groups, and associations. They mostly consist of journal-style text and enable interaction between a blogger and reader in the form of online comments. Self-hosted blogs can be created by professional entities such as entrepreneurs and small businesses. Blog hosting platforms include WordPress, Blogger, and Medium; Typepad is often used by media companies; Weebly is geared toward online shopping. Social networking blogging platforms include Tumblr, LiveJournal, and Weibo. Among the many blogs on the web, Boing Boing is a group blog with themes including technology and science fiction; HuffPost blogs include opinions on subjects such as politics, entertainment, and technology. There are also travel blogs such as Head for Points, Adventurous Kate, and an early form of The Points Guy.

=== Websites ===
Entertainment social media and information sharing websites include Reddit, 9gag, 4chan, Upworthy and Newgrounds. Sites like 9Gag allow users to create memes and quick video clips. Sites like Tech in Asia and Buzzfeed engage readers with professional communities by posting articles with user-generated comment sections.
Other websites include fanfiction sites such as FanFiction.Net; imageboards; artwork communities like DeviantArt; mobile photo and video sharing sites such as Picasa and Flickr; audio social networks such as SoundCloud; crowdfunding or crowdsourcing sites like Kickstarter, Indiegogo, and ArtistShare; and customer review sites such as Yelp. After launching in the mid-2000s, major UGC-based adult websites like Pornhub, YouPorn and xHamster became the dominant mode of consumption and distribution of pornographic content on the internet. The appearance of pornographic content on sites like Wikipedia and Tumblr led moderators and site owners to institute stricter limits on uploads. The travel industry, in particular, has begun utilizing user-generated content to show authentic traveler experiences. Travel-related companies such as Busabout have relaunched their websites featuring UGC images and social content posted in real time by their millennial and Gen Z customers. TripAdvisor includes reviews and recommendations by travelers about hotels, restaurants, and activities. The restaurant industry has also been altered by a review system that places more emphasis on online reviews and content from peers than on traditional media reviews. In 2011, Yelp contained 70% of reviews for restaurants in the Seattle area, compared to less than 5% in Food & Wine Magazine.

=== Video games ===
Video games can have fan-made content in the form of mods, fan patches, fan translations or server emulators. Some games come with level editor programs to aid in their creation. A few massively multiplayer online games, including Star Trek Online, Dota 2, and EverQuest 2, have UGC systems integrated into the game itself. A metaverse can be a user-generated world, such as Second Life. Second Life is a 3-D virtual world which provides its users with tools to modify the game world and participate in an economy, trading user-created content for virtual currency.
=== Advertising ===
A popular use of UGC involves collaboration between a brand and a user. An example is the "Elf Yourself" videos by JibJab that come back every year around Christmas. The JibJab website lets people upload photos of friends and family and paste their faces onto animated dancing elves to create a holiday video to share across the internet. Some brands also use UGC images to boost the performance of their paid social ads. For example, Toyota leveraged UGC for its "Feeling the Streets" Facebook ad campaign and was able to increase total ad engagement by 440%.

=== Retailers ===
Some bargain-hunting websites feature user-generated content, such as eBay, Dealsplus, and FatWallet, which allow users to post, discuss, and control which bargains get promoted within the community. Because of this dependence on social interaction, these sites fall into the category of social commerce.

=== Educational ===
Wikipedia, a free encyclopedia, is one of the largest user-generated content databases in the world. Platforms such as YouTube have frequently been used as an instructional aid. Organizations such as the Khan Academy and the Green brothers have used the platform to upload series of videos on topics such as math, science, and history to help viewers master or better understand the basics. Educational podcasts have also helped in teaching through an audio platform. Personal websites and messaging systems like Yahoo Messenger have also been used to transmit user-generated educational content. There have also been web forums where users give advice to each other. Students can also manipulate digital images or video clips to their advantage, tag them with easy-to-find keywords, and then share them with friends and family worldwide. The category of "student performance content" has risen in the form of discussion boards and chat logs.
Students can also write reflective journals and diaries that may help others. The websites SparkNotes and Shmoop are used to summarize and analyze books so that they are more accessible to readers.

=== Photo sharing ===
Photo sharing websites are another popular form of UGC. Flickr is a site on which users can upload personal photos they have taken and label them with regard to their "motivation". Flickr not only hosts images but makes them publicly available for reuse and for reuse with modification. Instagram is a social media platform that allows users to edit, upload and include location information with the photos they post. Panoramio.com and Flickr use metadata, such as GPS coordinates, to allow geographic placement of images. In 1995, Webshots was one of the first online photo sharing platforms. Webshots offered an easy-to-use interface and basic photo editing tools. In 2002, SmugMug was founded, focusing on providing a high-quality photo sharing experience for professional photographers. SmugMug offers features such as custom photo galleries and e-commerce options. In 2003, Yahoo! Photos was one of the most popular photo sharing platforms thanks to its integration with Yahoo's email and search services.

=== Video sharing ===
Video sharing websites are another popular form of UGC. YouTube and TikTok allow users to create and upload videos.

== Effect on journalism ==
The incorporation of user-generated content into mainstream journalism outlets is considered to have begun in 2005 with the BBC's creation of a user-generated content team, which was expanded and made permanent in the wake of the 7 July 2005 London bombings.
The incorporation of Web 2.0 technologies into news websites allowed user-generated content online to move from more social platforms such as MySpace, LiveJournal, and personal blogs into the mainstream of online journalism, in the form of comments on news articles written by professional journalists, but also through surveys, content sharing, and other forms of citizen journalism. Since the mid-2000s, journalists and publishers have had to consider the effects that user-generated content has had on how news gets published, read, and shared. A 2016 study on publisher business models suggests that readers of online news sources value articles written by professional journalists as well as by users, provided that those users are experts in a field relevant to the content they create. In response, it is suggested that online news sites must consider themselves not only a source for articles and other types of journalism but also a platform for engagement and feedback from their communities. The ongoing engagement with a news site that is possible due to the interactive nature of user-generated content is considered a source of sustainable revenue for publishers of online journalism going forward. Journalists are increasingly sourcing UGC from platforms such as Facebook and TikTok as news shifts to a digital space. This form of crowdsourcing can include using user content to support claims and using social media platforms to contact witnesses and obtain relevant images and videos for articles.

== Use in marketing ==
The use of user-generated content has been prominent in the efforts of marketing online, especially among millennials. A likely reason is that 86% of consumers say authenticity is important when deciding which brands they support, and 60% believe user-generated content is not only the most authentic form of content, but also the most influential in making purchasing decisions.
Companies can leverage user-generated content to improve their products and services through feedback obtained from users. Additionally, UGC can improve decision-making processes by empowering potential consumers and guiding them toward purchasing and consumption decisions. An increasing number of companies have been employing UGC techniques in their marketing efforts, such as Starbucks with its "White Cup Contest" campaign, in which customers competed to create the best doodle on their cups. The effectiveness of UGC in marketing has been shown to be significant as well. For instance, the "Share a Coke" campaign by Coca-Cola, in which customers uploaded images of themselves with bottles to social media, contributed to a two percent increase in revenue. Among millennials, UGC can influence purchase decisions up to fifty-nine percent of the time, and eighty-four percent say that UGC on company websites has at least some influence on what they buy, typically in a positive way. As a whole, consumers place peer recommendations and reviews above those of professionals. User-generated content can enhance marketing strategies by gathering relevant information from users and directing social media advertising efforts toward UGC marketing, which functions similarly to influencer marketing. However, each serves different purposes and plays distinct roles. The distinction between UGC creators and influencers lies primarily in their approaches to content creation. UGC creators are a varied range of individuals who share content based on their personal experiences with a product, service, or brand. They typically do not collaborate with specific brands, which lends authenticity to their posts and makes them relatable to their audience. In contrast, influencers have a significant and engaged following. They create branded content through sponsorships and paid partnerships with companies.
Their role is to influence their followers' purchasing decisions, and their content is usually more polished and aligns closely with the branding and messaging of the companies they work with. User-generated content used in a marketing context has been known to help brands in a number of ways:

It encourages more engagement with users and doubles the likelihood that the content will be shared.
It builds trust with consumers. With a majority of consumers trusting user-generated content over brand-provided information, such content can improve consumer relations.
It provides SEO value for brands: more traffic is driven to the brands' websites and more content is linked back to them.
It supports purchasing decisions, which keeps customers shopping. With user-generated content, the conversion rate increases by as much as 4.6%.
It increases follower counts on various social media platforms.
It integrates with traditional marketing and promotional techniques, which in turn drives more conversions for companies.
It helps increase profit with a significant reduction in costs, since content is provided by customers for free.

=== Facts and statistics ===
86% of companies leverage user-generated content in their marketing techniques.
92% of potential customers seek reviews from existing customers.
64% of customers look for reviews and ratings before they start the checkout process.
90% of brands have seen an evident increase in click-through rates when their ads feature user-generated content.
Keeping emails authentic and genuine can help increase the click-through rate by 73%.
35% of Generation Z trusts user-generated content.
A 74% increase in conversion rates has been found simply because of user-generated content used on product pages.

== Opportunities ==
There are a number of opportunities in user-generated content.
The advantage of user-generated content is that it is a quick, easy way to reach the general public. Here are some examples: Companies could use social media for branding, and set up contests for the audience to submit their own creations. Consumers and other general audience members like to engage; some have used a storytelling platform to both share and converse with others. To raise awareness, whether it be for an organization, company, or event. Reviews play a major role in a customer's decision making. Gain perspectives from members that one would not otherwise get to engage with. Personalization of the content put out; 71% of consumers like personalized ads. Efforts to encourage participation, however, can be weakened by company content ownership claims. == Criticism == The term "user-generated content" has received some criticism. The criticism to date has addressed issues of fairness, quality, privacy, the sustainable availability of creative work and effort, and legal issues, namely those related to intellectual property rights such as copyright. Some commentators assert that the term "user" implies an illusory or unproductive distinction between different kinds of "publishers", with the term "users" exclusively used to characterize publishers who operate on a much smaller scale than traditional mass-media outlets or who operate for free. Such classification is said to perpetuate an unfair distinction that some argue is diminishing because of the prevalence and affordability of the means of production and publication. A better response might be to offer optional expressions that better capture the spirit and nature of such work, such as EGC, Entrepreneurial Generated Content (see external reference below). Sometimes creative works made by individuals are lost because there are limited or no ways to precisely preserve creations when a UGC Web site service closes down. One example of such loss is the closing of the Disney massively multiplayer online game "VMK".
VMK, like most games, has items that are traded from user to user. A number of these items are rare within the game. Users are able to use these items to create their own rooms, avatars and pin lanyards. The site shut down at 10 pm CDT on 21 May 2008. There are ways to preserve the essence, if not the entirety, of such work: users can copy text and media to applications on their personal computers or record live action or animated scenes using screen capture software, and then upload them elsewhere. Long before the Web, creative works were simply lost or went out of publication and disappeared from history unless individuals found ways to keep them in personal collections. Another criticized aspect is the vast array of user-generated product and service reviews that can at times be misleading for consumers on the web. A study conducted at Cornell University found that an estimated 1 to 6 percent of positive user-generated online hotel reviews are fake. Another concern with platforms that rely heavily on user-generated content, such as Twitter and Facebook, is how easy it is to find people who hold the same opinions and interests, in addition to how well they facilitate the creation of networks or closed groups. While the strength of these services is that users can broaden their horizons by sharing their knowledge and connecting with other people from around the world, these platforms also make it very easy to connect with only a restricted sample of people who hold similar opinions (see Filter bubble). There is also criticism regarding whether or not those who contribute to a platform should be paid for their content. In 2015, a group of 18 famous content creators on Vine attempted to negotiate a deal with Vine representatives to secure a $1.2 million contract for a guaranteed 12 videos a month. This negotiation was not successful.
== Legal issues == The ability for services to accept user-generated content opens up a number of legal concerns, from the broader sense to specific local laws. In general, knowing who committed an online crime is difficult because many users employ pseudonyms or remain anonymous. Sometimes the activity can be traced back, but in a case such as a public coffee shop, there is no way of pinpointing the exact user. There is also a problem with issues surrounding acts that are extremely harmful but not illegal, for example, the posting of content that instigates a person's suicide. It is a criminal offense if there is proof beyond reasonable doubt, but different situations may produce different outcomes. Depending on the country, there are certain laws that apply to Web 2.0 services. In the United States, the "Section 230" exemptions of the Communications Decency Act state that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This clause effectively provides a general immunity for websites that host user-generated content that is defamatory, deceptive or otherwise harmful, even if the operator knows that the third-party content is harmful and refuses to take it down. An exception to this general rule may exist if a website promises to take down the content and then fails to do so. === Copyright laws === Copyright laws also play a factor in relation to user-generated content, as users may use such services to upload works—particularly videos—that they do not have sufficient rights to distribute. In multiple cases, the use of these materials may be covered by local "fair use" laws, especially if the use of the material submitted is transformative.
Local laws also vary on who is liable for any resulting copyright infringements caused by user-generated content; in the United States, the Online Copyright Infringement Liability Limitation Act (OCILLA)—a portion of the Digital Millennium Copyright Act (DMCA)—dictates safe harbor provisions for "online service providers" as defined under the act, which grant immunity from secondary liability for the copyright-infringing actions of their users, as long as the providers promptly remove access to allegedly infringing materials upon the receipt of a notice from a copyright holder or registered agent, and they do not have actual knowledge that their service is being used for infringing activities. In the UK, the Defamation Act 1996 says that if a person is not the author, editor or publisher of a statement and did not know about it, they are not liable. Furthermore, ISPs are not considered authors, editors, or publishers, and they cannot be held responsible for people over whom they have no "effective control". As with the DMCA, once the ISP learns about the content, it must delete it immediately. The European Union's approach is horizontal by nature, which means that civil and criminal liability issues are addressed under the Electronic Commerce Directive; Section 4 deals with the liability of the ISP while conducting "mere conduit" services, caching and web hosting services. == Research == A study analyzing YouTube as a video-on-demand system was conducted in 2007. It found that UGC videos were about half the length of non-UGC content but were produced at a much faster rate, and that user behavior is what perpetuates UGC. The study also found that peer-to-peer (P2P) distribution could greatly benefit the system, and it examined the impact of content aliasing, the sharing of multiple copies, and illegal uploads.
A 2012 study from York University in Ontario proposed a framework for comparing brand-related UGC and for understanding how the strategy used by a company could influence brand sentiment across different social media channels, including YouTube, Twitter and Facebook. The three scholars of this study examined two clothing brands, Lulu Lemon and American Apparel. The difference between these two brands is that Lulu Lemon had a social media following while American Apparel had none. Unsurprisingly, Lulu Lemon received far more positive contributions than American Apparel: on Twitter it had three times the share of positive contributions (64 percent vs 22 percent), while on Facebook and YouTube the two had roughly an equal number of contributions. This suggests that social media can influence how a brand is perceived, usually in a more positive light. A study by Dhar and Chang, published in 2007, found that the volume of blog posts about a music album was positively correlated with future sales of that album. == See also == == General sources == This article incorporates text from a free content work. Licensed under CC BY SA 3.0 IGO (license statement/permission). Text taken from World Trends in Freedom of Expression and Media Development Global Report 2017/2018​, 202, University of Oxford, UNESCO. == Citations == == External links == OECD study on the Participative Web: User Generated Content A Bigger Bang – an overview of the UGC trend on the Web in 2006 Branding in the Age of Social Media UGC & Examples - Information and statistics on User-Generated Content (UGC) in marketing.
Wikipedia/User-generated_content
Hand-colouring (or hand-coloring) refers to any method of manually adding colour to a monochrome photograph, generally either to heighten the realism of the image or for artistic purposes. Hand-colouring is also known as hand painting or overpainting. Typically, watercolours, oils, crayons or pastels, and other paints or dyes are applied to the image surface using brushes, fingers, cotton swabs or airbrushes. Hand-coloured photographs were most popular in the mid- to late-19th century before the invention of colour photography and some firms specialised in producing hand-coloured photographs. == History == === Pre-1900 === Monochrome (black and white) photography was first exemplified by the daguerreotype in 1839 and later improved by other methods including: calotype, ambrotype, tintype, albumen print, and gelatin silver print. The majority of photography remained monochrome until the mid-20th century, although experiments were producing colour photography as early as 1855 and some photographic processes produced images with an inherent overall colour like the blue of cyanotypes. In an attempt to create more realistic images, photographers and artists would hand-colour monochrome photographs. The first hand-coloured daguerreotypes are attributed to Swiss painter and printmaker Johann Baptist Isenring, who used a mixture of gum arabic and pigments to colour daguerreotypes soon after their invention in 1839. Coloured powder was fixed on the delicate surface of the daguerreotype by the application of heat. Variations of this technique were patented in England by Richard Beard in 1842 and in France by Étienne Lecchi in 1842 and Léotard de Leuze in 1845. Later, hand-colouring was used with successive photographic innovations, from albumen and gelatine silver prints to lantern slides and transparency photography. Parallel efforts to produce coloured photographic images affected the popularity of hand-colouring. In 1842 Daniel Davis Jr. 
patented a method for colouring daguerreotypes through electroplating, and his work was refined by Warren Thompson the following year. The results of the work of Davis and Thompson were only partially successful in creating colour photographs and the electroplating method was soon abandoned. In 1850 Levi L. Hill announced his invention of a process of daguerreotyping in natural colours in his Treatise on Daguerreotype. Sales of conventional uncoloured and hand-coloured daguerreotypes fell in anticipation of this new technology. Hill delayed publication of the details of his process for several years, however, and his claims soon came to be considered fraudulent. When he finally did publish his treatise in 1856, the process – whether bona fide or not – was certainly impractical and dangerous. With the advent of photographic emulsions on glass came the potential to make enlargements from them, but this was hampered by the lack of a light source strong enough to project them onto the receiving emulsion as prints on paper, canvas or other supports. The solar camera, employing the focussed light of the sun, addressed the problem in a repurposing of the solar microscope by American portrait artist David Acheson Woodward in 1857, and others, before being superseded by enlargers employing artificial light sources from the 1880s. Life-size portraits made by this means were hand coloured in crayon or overpainted in oils and were popular into the 1910s. Hand-colouring remained the easiest and most effective method to produce full-colour photographic images until the mid-20th century, when the American company Kodak introduced Kodachrome colour film. ==== Japanese hand-coloured photographs (circa 1860–1899) ==== Though the hand-colouring of photographs was introduced in Europe, the technique gained considerable popularity in Japan, where the practice became a respected and refined art form beginning in the 1860s.
It is possible that photographer Charles Parker and his artist partner William Parke Andrew were the first to produce such works in Japan, but the first to consistently employ hand-colouring in the country were the photographer Felice Beato and his partner, The Illustrated London News artist and colourist Charles Wirgman. In Beato's studio the refined skills of Japanese watercolourists and woodblock printmakers were successfully applied to European photography, as evidenced in Beato's volume of hand-coloured portraits, Native Types. Another notable early photographer in Japan to use hand-colouring was Yokoyama Matsusaburō. Yokoyama had trained as a painter and lithographer as well as a photographer, and he took advantage of his extensive repertoire of skills and techniques to create what he called shashin abura-e (写真油絵) or "photographic oil paintings", in which the paper support of a photograph was cut away and oil paints then applied to the remaining emulsion. Later practitioners of hand-colouring in Japan included the firm of Stillfried & Andersen, which acquired Beato's studio in 1877 and hand-coloured many of his negatives in addition to its own. The Austrian Baron Raimund von Stillfried und Ratenitz trained Japanese photographer and colourist Kusakabe Kimbei, and together they created hand-coloured images of Japanese daily life that were very popular as souvenirs. Hand-coloured photographs were also produced by Kusakabe Kimbei, Tamamura Kozaburō, Adolfo Farsari, Uchida Kuichi, Ogawa Kazumasa and others. Many high-quality hand-coloured photographs continued to be made in Japan well into the 20th century. === Post-1900 === The so-called golden age of hand-coloured photography in the western hemisphere occurred between 1900 and 1940. The increased demand for hand-coloured landscape photography at the beginning of the 20th century is attributed to the work of Wallace Nutting.
Nutting, a New England minister, pursued hand-coloured landscape photography as a hobby until 1904, when he opened a professional studio. He spent the next 35 years creating hand-coloured photographs, and became the best-selling hand-coloured photographer of all time. Between 1915 and 1925 hand-coloured photographs were popular among the middle classes in the United States, Canada, Bermuda and the Bahamas as affordable and stylish wedding gifts, shower gifts, holiday gifts, friendship gifts, and vacation souvenirs. With the start of the Great Depression in 1929, and the subsequent decrease in the numbers of the middle class, sales of hand-coloured photographs sharply diminished. Despite their downturn in popularity, skilled photographers continued to create beautifully hand-coloured photographs. Hans Bellmer's hand-coloured photographs of his own doll sculptures from the 1930s provide an example of continued hand-colouring of photographs in Europe during this time. In Poland, the Monidło is an example of popular hand-coloured wedding photographs. Another hand-colour photographer, Luis Márquez (1899–1978), was the official photographer for and art adviser of the Mexican Pavilion at the 1939-40 World's Fair. In 1937 he presented Texas Governor James V. Allred a collection of hand-coloured photographs. The National Autonomous University of Mexico in Mexico City has an extensive Luis Márquez photographic archive, as does the University of Houston in Texas. By the 1950s, the availability of colour film stopped the production of hand-coloured photographs. The upsurge in popularity of antiques and collectibles in the 1960s, however, increased interest in hand-coloured photographs. Since about 1970 there has been something of a revival of hand-colouring, as seen in the work of such artist-photographers as Robin Renee Hix, Elizabeth Lennard, Jan Saudek, Kathy Vargas, and Rita Dibert. 
Robert Rauschenberg's and others' use of combined photographic and painting media in their art represents a precursor to this revival. In spite of the availability of high-quality colour processes, hand-coloured photographs (often combined with sepia toning) are still popular for aesthetic reasons and because the pigments used have great permanence. In many countries where colour film was rare or expensive, or where colour processing was unavailable, hand-colouring continued to be used and sometimes preferred into the 1980s. More recently, digital image processing has been used – particularly in advertising – to recreate the appearance and effects of hand-colouring. Colourization is now available to the amateur photographer using image manipulation software such as Adobe Photoshop or Gimp. == Materials and techniques == === Dyes === Basic dyes are used in the hand-colouring of photographs. Dyes are soluble colouring substances, either natural or synthetic, carried in an aqueous solution, as opposed to pigments, which are generally insoluble colouring substances held in an aqueous suspension. Aniline dyes, the first synthetically produced dyes originally used for the dyeing of textiles, were first used to dye albumen prints and glass transparency photographs in Germany in the 1860s. When hand-colouring with dyes, a weak solution of dye in water is preferred, and colours are often built up with repeated washes rather than being applied all at once. The approach is to stain or dye the print rather than to paint it, as too much paint will obscure photographic details. Blotting paper is used to control the amount of dye on the surface by absorbing any excess. === Watercolours === Watercolour paint has the virtue of being more permanent than dyes, but is less transparent and so more likely to obscure details. Hand-colouring with watercolours requires the use of a medium to prevent the colours from drying with a dull and lifeless finish.
Before the paint can be applied, the surface of the print must be primed so that the colours are not repelled. This often includes prepping the print with a thin coating of shellac, then adding grit before colouring. Watercolour paint used in photographic hand-colouring consists of four ingredients: pigments (natural or synthetic), a binder (traditionally gum arabic), additives to improve plasticity (such as glycerine), and a solvent to dilute the paint (i.e. water) that evaporates when the paint dries. The paint is typically applied to prints using a soft brush. Watercolours often "leave a darker edge of colour at the boundaries of the painted area." Since different pigments have varying degrees of transparency, the choice of colours must be considered carefully. More transparent pigments are preferred, since they ensure greater visibility of the photographic image. === Oils === Oil paint contains particles of pigment applied using a drying oil, such as linseed oil. The conventions and techniques of using oils demand a knowledge of drawing and painting, so they are often used in professional practice. When hand-colouring with oils, the approach is more often to use the photographic image simply as a base for a painted image. The ability to create accurate oil portraits using a photographic base lent itself to art crime, with some artists claiming to paint traditional oil portraits (for a higher price) when actually tracing a photograph base in oils. Therefore, the choice of oil colours is governed by the relative transparency of the pigments to allow for authentication of the photographic base. It is necessary to size the print first to prevent absorption of the colours into the paper. In the past, photographic lantern slides were often coloured by the manufacturer, though sometimes by the user, with variable results.
Usually, oil colours were used for such slides, though in the collodion era – from 1848 to the end of the 19th century – sometimes watercolours were used as well. === Crayons and pastels === The use of crayon or pastel sticks of ground pigments in various levels of saturation is also considered a highly skilled colourist's domain, as it requires knowledge of drawing techniques. Like oils, crayons and pastels generally obscure the original photograph, which produces portraits more akin to traditional paintings. The Photo-crayotype, Chromotypes and Crayon Collotypes were all used to colourize photographs by the application of crayons and pigments over a photographic impression. Charcoal and coloured pencils are also used in hand-colouring of photographs and the terms crayon, pastel, charcoal, and pencil were often used interchangeably by colourists. Hand-coloured photographs sometimes include the combined use of dyes, water-colours, oils, and other pigments to create varying effects on the printed image. Regardless of which medium is used, the main tools to apply colour are the brush and fingertip. Often the dabbing finger is covered to ensure that no fingerprints are left on the image. == Preservation and storage == In general, the preservation of hand-coloured photographs is similar to that of colour and monochrome photography. Optimal storage conditions include an environmentally controlled climate with low relative humidity (approximately 30-40% RH), temperatures under 68 degrees Fahrenheit (20 degrees Celsius), and a low concentration of particulate pollution, such as sulfuric acid, nitric acid, and ozone. The storage area must also be clean and free of pests and mould. Because hand-coloured photographs, like colour photographs, are more sensitive to light and UV radiation, storage should be in a dark location. 
The storage area should be secure and monitored for internal threats – such as changes in temperature or humidity due to HVAC malfunction – as well as external threats, such as theft or natural disaster. A disaster plan should be created and maintained for all materials. When handling cased photographs such as daguerreotypes, albumen prints, and tintypes, especially ones that have been hand-coloured, caution is required. They are fragile and even minimal efforts to clean them can irreparably damage the image. Hand-coloured cased photographs should be stored horizontally, in a single layer, preferably face down. Cases can be wrapped with alkaline or buffered tissue paper. If the photograph has become separated from its case, a mat and backing board can be cut from alkaline buffered museum board. The mat is placed between the image and a newly cut glass plate while the backing board supports the image from behind. This "sandwich" is then sealed with Filmoplast tape. Commercial glass cleaners should not be used on new glass plates. Loose hand-coloured tintypes can be placed between mat boards. If bent, no attempt should be made to straighten them, as this could cause the emulsion to crack and/or lift. Ideally, all photographic prints should be stored horizontally, although prints under 11"x14" and on stable mounts can be safely stored vertically. Prints should be stored away from light and water sources in acid-free, lignin-free boxes manufactured in conformance with International Organization for Standardization (ISO) Standards 14523 (superseded in 2007 by ISO 18916) and 10214. Storage materials should pass the American National Standards Institute (ANSI) Photographic Activity Test (PAT), or similar standards, to ensure archival quality. If a photograph exhibits flaking or chipping emulsion it should not be stored in a plastic enclosure, as static electricity could further damage the image.
Clean cotton gloves should be worn when handling photographs to prevent skin oils and salts from damaging the surfaces. In some cases, it may be necessary to contact a professional conservator. In the United States, the American Institute for Conservation of Historic and Artistic Works (AIC) provides a tool that helps identify local conservation services. In the United Kingdom and Ireland, the Conservation Register provides a similar tool that searches by specialization, business, and surname. To locate other conservation services internationally, Conservation OnLine (CoOL) Resources for Conservation Professionals provides a tool that searches by country. === Colouring materials === Dyes and watercolours require similar preservation measures when applied to hand-coloured photographs. Like the photographs themselves, watercolours and dyes applied by hand to photographs are susceptible to light damage and must be housed in dark storage or displayed under dim, indirect light. Common particulate pollutants can cause watercolour pigments to fade, but the paint surface can be cleaned by lightly dusting with a soft brush to remove dirt. Oil paint was often applied to tintypes, daguerreotypes, and ambrotypes. As with all photographs, the materials respond negatively to direct light sources, which can cause pigments to fade and darken, and frequent changes in relative humidity and temperature, which can cause the oil paint to crack. For photographs with substantial damage, the expertise of an oil paintings conservator might be required for treatment. Crayon and pastel hand-coloured photographs have a powdery surface which must be protected for preservation purposes. Historically, crayon and pastel coloured photographs were sold in a frame under a protective layer of glass, which was often successful in reducing the amount of handling and smudging of the photograph surface. 
Any conservation work on crayon or pastel colour-photographs must retain these original frames and original glass to maintain the authenticity and value of the object. If the photograph is separated from its original enclosure, it can be stored in an archival quality folder until it is framed or cased. === Auxiliary materials === In the United States, many commercially sold, hand-coloured photographs were packaged and framed for retail sale. Early 20th century hand-coloured photographs were often mounted on mat-board, placed behind a glass frame, and backed by wood panel slats, cardboard, or heavy paperboard. A backing sheet was often glued to the back of the mat-board. Unfortunately, the paper products produced and used during the late-19th and early-20th centuries are highly acidic and will cause yellowing, embrittlement and degradation of hand-coloured photographs. Metallic inclusions in the paper can also oxidize, which may cause foxing in paper materials. Wood panel slats will also off-gas, causing further degradation of the photographs. Simple conservation of these fragile materials can be carried out by the adventurous amateur. A hand-coloured photograph should be removed from the frame, retaining any original screws or nails holding the frame together. Wood panels, acidic cardboard slats, and acidic backing paper can be removed from the frame and mat-board and discarded, retaining any identifying information such as stamps or writing on the backing paper. The mat-board on which the photograph is mounted, even though acidic in nature, cannot be removed and replaced due to the intrinsic value of this original mounting. Often the artist's signature and the title of the photograph are inscribed on the mat-board. The best way to limit degradation is to store the photograph in a cool, dry atmosphere with low light.
The hand-coloured photograph should be replaced in its original frame, held in place with archival quality acid-free paperboard, and closed with the original nails or screws. == Related techniques == Hand-colouring should be distinguished from tinting, toning, retouching, and crystoleum. Tinted photographs are made with dyed printing papers produced by commercial manufacturers. A single overall colour underlies the image and is most apparent in the highlights and mid-tones. From the 1870s albumen printing papers were available in pale pink or blue, and from the 1890s gelatine-silver printing-out papers in pale mauve or pink were available. There were other kinds of tinted papers as well. Over time such colouration often becomes very faded. Toning refers to a variety of methods for altering the overall colour of the photographic image itself. Compounds of gold, platinum or other metals are used in combination with variations in development time, temperature and other factors to produce a range of tones, including warm browns, purples, sepias, blues, olives, red-browns and blue-blacks. A well-known type of toning is sepia tone. Besides adding colour to a monochromatic print, toning often improves image stability and increases contrast. Retouching uses many of the same tools and techniques as hand-colouring, but with the intent of covering damage, hiding unwanted features, accentuating details, or adding missing elements in a photographic print. In a portrait, retouching could be used to improve a sitter's appearance, for instance, by removing facial blemishes, and in a landscape with an overexposed sky, clouds could be painted into the image. Water-colours, inks, dyes, and chemical reducers are used with such tools as scalpels, pointed brushes, airbrushes, and retouching pencils. The crystoleum process, from "crystal" + "oleum" (oil), was yet another method of applying colour to albumen prints.
The print was pasted face down to the inside of a concave piece of glass. Once the adhesive (usually starch paste or gelatine) was dry, the paper backing of the print was rubbed away, leaving only the transparent emulsion on the glass. The image was then coloured by hand. Another piece of glass was added to the back and this could also be coloured by hand. Both pieces of glass were bound together, creating a detailed, albeit fragile, image. == See also == == References == == Further reading == Baldwin, G. (1991). Looking at photographs: A guide to technical terms. Malibu, Calif: J. Paul Getty Museum in association with British Museum Press, p. 7, 35, 55, 58, 74, 80-82. Jones, B. E. (1974). Encyclopedia of photography: With a new picture portfolio. New York: Arno Press, p. 132-134. Lavédrine, B. (2009). Photographs of the past: Process and preservation. Los Angeles: Getty Conservation Institute. Miki, Tamon. (1997). Concerning the arrival of photography in Japan. The advent of photography in Japan. Tokyo: Tokyo Metropolitan Foundation for History and Culture, Tokyo Metropolitan Museum of Photography, p. 11. Nadeau, L. (1994). Encyclopedia of printing, photographic, and photomechanical processes: A comprehensive reference to reproduction technologies: Vols. 1 & 2, A-Z. New Brunswick: Atelier Luis Nadeau, p. 33. Reilly, J. M. (2009). Care and identification of 19th century photographs. Rochester, NY: Eastman Kodak Co. Ruggles, M. (1985). Paintings on a photographic base. Journal of the American Institute for Conservation 24(2), p. 92-103. == External links == Brooklyn Museum Flickr Collection The George Eastman House Flickr Collection The Field Museum Flickr Collection (archived 31 December 2010 at the Wayback Machine) Nagasaki University Library; Japanese Old Photographs in Bakumatsu-Meiji Period.
National Science and Media Museum Flickr Collection Collection of hand-colored photographs by Luis Marquez in the 1930s at the University of Houston Digital Library
Wikipedia/Hand-colouring_of_photographs
Bhaskara's Lemma is an identity used as a lemma during the chakravala method. It states that:

$$Nx^{2}+k=y^{2}\implies N\left(\frac{mx+y}{k}\right)^{2}+\frac{m^{2}-N}{k}=\left(\frac{my+Nx}{k}\right)^{2}$$

for integers $m,\,x,\,y,\,N$, and non-zero integer $k$.

== Proof ==
The proof follows from simple algebraic manipulation: multiply both sides of the equation by $m^{2}-N$, add $N^{2}x^{2}+2Nmxy+Ny^{2}$, factor, and divide by $k^{2}$:

$$Nx^{2}+k=y^{2}\implies Nm^{2}x^{2}-N^{2}x^{2}+k(m^{2}-N)=m^{2}y^{2}-Ny^{2}$$
$$\implies Nm^{2}x^{2}+2Nmxy+Ny^{2}+k(m^{2}-N)=m^{2}y^{2}+2Nmxy+N^{2}x^{2}$$
$$\implies N(mx+y)^{2}+k(m^{2}-N)=(my+Nx)^{2}$$
$$\implies N\left(\frac{mx+y}{k}\right)^{2}+\frac{m^{2}-N}{k}=\left(\frac{my+Nx}{k}\right)^{2}.$$

So long as neither $k$ nor $m^{2}-N$ is zero, the implication goes in both directions. (The lemma holds for real or complex numbers as well as integers.)

== References ==
C. O. Selenius, "Rationale of the chakravala process of Jayadeva and Bhaskara II", Historia Mathematica, 2 (1975), 167-184.
C. O. Selenius, Kettenbruchtheoretische Erklärung der zyklischen Methode zur Lösung der Bhaskara-Pell-Gleichung, Acta Acad. Abo. Math. Phys. 23 (10) (1963).
George Gheverghese Joseph, The Crest of the Peacock: Non-European Roots of Mathematics (1991).

== External links ==
Introduction to chakravala
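The identity can be checked numerically. The sketch below (plain Python with exact rational arithmetic; the sample values N = 61, (x, y, k) = (1, 8, 3) and multiplier m = 7 correspond to the classic chakravala computation for N = 61) applies one step of the lemma:

```python
from fractions import Fraction

def bhaskara_step(N, x, y, k, m):
    """Apply Bhaskara's Lemma: from N*x**2 + k == y**2 produce a new
    triple (x2, y2, k2) with N*x2**2 + k2 == y2**2.  The new values are
    rational in general, and integers when k divides m*x + y and m*m - N."""
    x2 = Fraction(m * x + y, k)
    y2 = Fraction(m * y + N * x, k)
    k2 = Fraction(m * m - N, k)
    return x2, y2, k2

# 61*1**2 + 3 == 8**2; choosing m = 7 yields the next triple.
x2, y2, k2 = bhaskara_step(61, 1, 8, 3, 7)
assert 61 * x2**2 + k2 == y2**2
print(x2, y2, k2)   # 5 39 -4
```

Here 61·5² − 4 = 1521 = 39², confirming the lemma for this choice of m.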
Wikipedia/Bhaskara's_lemma
In algebra, the content of a nonzero polynomial with integer coefficients (or, more generally, with coefficients in a unique factorization domain) is the greatest common divisor of its coefficients. The primitive part of such a polynomial is the quotient of the polynomial by its content. Thus a polynomial is the product of its primitive part and its content, and this factorization is unique up to the multiplication of the content by a unit of the ring of the coefficients (and the multiplication of the primitive part by the inverse of the unit). A polynomial is primitive if its content equals 1. Thus the primitive part of a polynomial is a primitive polynomial. Gauss's lemma for polynomials states that the product of primitive polynomials (with coefficients in the same unique factorization domain) is also primitive. This implies that the content and the primitive part of the product of two polynomials are, respectively, the product of the contents and the product of the primitive parts. As the computation of greatest common divisors is generally much easier than polynomial factorization, the first step of a polynomial factorization algorithm is generally the computation of its primitive part–content factorization (see Factorization of polynomials § Primitive part–content factorization). The factorization problem is then reduced to factoring the content and the primitive part separately. Content and primitive part may be generalized to polynomials over the rational numbers, and, more generally, to polynomials over the field of fractions of a unique factorization domain. This makes the problems of computing greatest common divisors and factoring polynomials over the integers essentially equivalent to the same problems over the rational numbers. == Over the integers == For a polynomial with integer coefficients, the content may be either the greatest common divisor of the coefficients or its additive inverse.
The choice is arbitrary, and may depend on a further convention, which is commonly that the leading coefficient of the primitive part be positive. For example, the content of $-12x^{3}+30x-20$ may be either 2 or −2, since 2 is the greatest common divisor of −12, 30, and −20. If one chooses 2 as the content, the primitive part of this polynomial is
$$-6x^{3}+15x-10=\frac{-12x^{3}+30x-20}{2},$$
and thus the primitive-part-content factorization is
$$-12x^{3}+30x-20=2(-6x^{3}+15x-10).$$
For aesthetic reasons, one often prefers choosing a negative content, here −2, giving the primitive-part-content factorization
$$-12x^{3}+30x-20=-2(6x^{3}-15x+10).$$
== Properties == In the remainder of this article, we consider polynomials over a unique factorization domain R, which can typically be the ring of integers, or a polynomial ring over a field. In R, greatest common divisors are well defined, and are unique up to multiplication by a unit of R. The content c(P) of a polynomial P with coefficients in R is the greatest common divisor of its coefficients, and, as such, is defined up to multiplication by a unit. The primitive part pp(P) of P is the quotient P/c(P) of P by its content; it is a polynomial with coefficients in R, which is unique up to multiplication by a unit. If the content is changed by multiplication by a unit u, then the primitive part must be changed by dividing it by the same unit, in order to keep the equality
$$P=c(P)\operatorname{pp}(P),$$
which is called the primitive-part-content factorization of P.
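For integer coefficient lists, the content and primitive part take only a few lines of Python. A minimal sketch (the function names are mine), assuming coefficients are listed from the leading term down and adopting the convention mentioned above that the primitive part has a positive leading coefficient:

```python
from math import gcd   # math.gcd ignores signs and returns a nonnegative value

def content(coeffs):
    """Content of an integer polynomial given as a coefficient list,
    signed so that the primitive part's leading coefficient is positive."""
    c = 0
    for a in coeffs:
        c = gcd(c, a)
    return -c if coeffs[0] < 0 else c   # coeffs[0] is the leading coefficient

def primitive_part(coeffs):
    c = content(coeffs)
    return [a // c for a in coeffs]     # exact: c divides every coefficient

# -12x^3 + 30x - 20, coefficients from the leading term down:
p = [-12, 0, 30, -20]
print(content(p))          # -2
print(primitive_part(p))   # [6, 0, -15, 10]
```

This reproduces the factorization −12x³ + 30x − 20 = −2(6x³ − 15x + 10) from the example above.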
The main properties of the content and the primitive part are results of Gauss's lemma, which asserts that the product of two primitive polynomials is primitive, where a polynomial is primitive if 1 is the greatest common divisor of its coefficients. This implies:
The content of a product of polynomials is the product of their contents: $c(P_{1}P_{2})=c(P_{1})c(P_{2}).$
The primitive part of a product of polynomials is the product of their primitive parts: $\operatorname{pp}(P_{1}P_{2})=\operatorname{pp}(P_{1})\operatorname{pp}(P_{2}).$
The content of a greatest common divisor of polynomials is the greatest common divisor (in R) of their contents: $c(\gcd(P_{1},P_{2}))=\gcd(c(P_{1}),c(P_{2})).$
The primitive part of a greatest common divisor of polynomials is the greatest common divisor (in R) of their primitive parts: $\operatorname{pp}(\gcd(P_{1},P_{2}))=\gcd(\operatorname{pp}(P_{1}),\operatorname{pp}(P_{2})).$
The complete factorization of a polynomial over R is the product of the factorization (in R) of the content and of the factorization (in the polynomial ring) of the primitive part.
The last property implies that the computation of the primitive-part-content factorization of a polynomial reduces the computation of its complete factorization to the separate factorization of the content and the primitive part. This is generally interesting, because the computation of the primitive-part-content factorization involves only greatest common divisor computation in R, which is usually much easier than factorization.
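The multiplicativity of the content (the first property, a restatement of Gauss's lemma) is easy to check experimentally. A minimal sketch in Python, using the nonnegative-gcd convention for the content and a naive convolution for polynomial multiplication:

```python
from math import gcd
from functools import reduce

def content(coeffs):
    """Content with the nonnegative convention: gcd of all coefficients."""
    return reduce(gcd, coeffs, 0)

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (naive convolution)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

p1 = [6, 10]              # 6x + 10, content 2
p2 = [15, 9]              # 15x + 9, content 3
prod = poly_mul(p1, p2)   # [90, 204, 90]
assert content(prod) == content(p1) * content(p2)   # 6 == 2 * 3
```

Up to units, the same check holds for any pair of integer polynomials, which is exactly what Gauss's lemma guarantees.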
== Over the rationals == The primitive-part-content factorization may be extended to polynomials with rational coefficients as follows. Given a polynomial P with rational coefficients, by rewriting its coefficients with the same common denominator d, one may rewrite P as
$$P=\frac{Q}{d},$$
where Q is a polynomial with integer coefficients. The content of P is the quotient by d of the content of Q, that is
$$c(P)=\frac{c(Q)}{d},$$
and the primitive part of P is the primitive part of Q:
$$\operatorname{pp}(P)=\operatorname{pp}(Q).$$
It is easy to show that this definition does not depend on the choice of the common denominator, and that the primitive-part-content factorization remains valid:
$$P=c(P)\operatorname{pp}(P).$$
This shows that every polynomial over the rationals is associated with a unique primitive polynomial over the integers, and that the Euclidean algorithm allows the computation of this primitive polynomial. A consequence is that factoring polynomials over the rationals is equivalent to factoring primitive polynomials over the integers. As polynomials with coefficients in a field are more common than polynomials with integer coefficients, it may seem that this equivalence may be used for factoring polynomials with integer coefficients. In fact, the truth is exactly the opposite: every known efficient algorithm for factoring polynomials with rational coefficients uses this equivalence for reducing the problem modulo some prime number p (see Factorization of polynomials). This equivalence is also used for computing greatest common divisors of polynomials, although the Euclidean algorithm is defined for polynomials with rational coefficients.
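This construction mirrors directly into code. A sketch using Python's `fractions.Fraction` (and `math.lcm`, available from Python 3.9): clear denominators with a common denominator d, take the integer content, and divide back by d:

```python
from fractions import Fraction
from math import gcd, lcm   # math.lcm requires Python 3.9+

def rational_content(coeffs):
    """Content of a polynomial with Fraction coefficients: write P = Q/d
    with Q integral, then return c(Q)/d."""
    d = 1
    for a in coeffs:
        d = lcm(d, a.denominator)
    q = [int(a * d) for a in coeffs]   # integer coefficients of Q
    c = 0
    for a in q:
        c = gcd(c, a)
    return Fraction(c, d)

# P = (3/4)x^2 + (9/2)x + 3/2  ->  content 3/4, primitive part x^2 + 6x + 2
p = [Fraction(3, 4), Fraction(9, 2), Fraction(3, 2)]
c = rational_content(p)
print(c)                    # 3/4
print([a / c for a in p])   # [Fraction(1, 1), Fraction(6, 1), Fraction(2, 1)]
```

The primitive part obtained by dividing out the content has integer coefficients, as the text asserts.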
In fact, in this case, the Euclidean algorithm requires one to compute the reduced form of many fractions, and this makes the Euclidean algorithm less efficient than algorithms which work only with polynomials over the integers (see Polynomial greatest common divisor). == Over a field of fractions == The results of the preceding section remain valid if the ring of integers and the field of rationals are respectively replaced by any unique factorization domain R and its field of fractions K. This is typically used for factoring multivariate polynomials, and for proving that a polynomial ring over a unique factorization domain is also a unique factorization domain. === Unique factorization property of polynomial rings === A polynomial ring over a field is a unique factorization domain. The same is true for a polynomial ring over a unique factorization domain. To prove this, it suffices to consider the univariate case, as the general case may be deduced by induction on the number of indeterminates. The unique factorization property is a direct consequence of Euclid's lemma: If an irreducible element divides a product, then it divides one of the factors. For univariate polynomials over a field, this results from Bézout's identity, which itself results from the Euclidean algorithm. So, let R be a unique factorization domain, which is not a field, and R[X] the univariate polynomial ring over R. An irreducible element r in R[X] is either an irreducible element in R or an irreducible primitive polynomial. If r is in R and divides a product $P_{1}P_{2}$ of two polynomials, then it divides the content $c(P_{1}P_{2})=c(P_{1})c(P_{2})$. Thus, by Euclid's lemma in R, it divides one of the contents, and therefore one of the polynomials. If r is not in R, it is a primitive polynomial (because it is irreducible).
Then Euclid's lemma in R[X] results immediately from Euclid's lemma in K[X], where K is the field of fractions of R. === Factorization of multivariate polynomials === For factoring a multivariate polynomial over a field or over the integers, one may consider it as a univariate polynomial with coefficients in a polynomial ring with one less indeterminate. Then the factorization is reduced to factorizing separately the primitive part and the content. As the content has one less indeterminate, it may be factorized by applying the method recursively. For factorizing the primitive part, the standard method consists of substituting integers for the indeterminates of the coefficients in a way that does not change the degree in the remaining variable, factorizing the resulting univariate polynomial, and lifting the result to a factorization of the primitive part. == See also == Rational root theorem == References == B. Hartley; T.O. Hawkes (1970). Rings, modules and linear algebra. Chapman and Hall. ISBN 0-412-09810-5. Page 181 of Lang, Serge (1993), Algebra (Third ed.), Reading, Mass.: Addison-Wesley, ISBN 978-0-201-55540-0, Zbl 0848.13001 David Sharpe (1987). Rings and factorization. Cambridge University Press. pp. 68–69. ISBN 0-521-33718-6.
Wikipedia/Primitive_polynomial_(ring_theory)
In theoretical computer science, a certifying algorithm is an algorithm that outputs, together with a solution to the problem it solves, a proof that the solution is correct. A certifying algorithm is said to be efficient if the combined runtime of the algorithm and a proof checker is slower by at most a constant factor than the best known non-certifying algorithm for the same problem. The proof produced by a certifying algorithm should be in some sense simpler than the algorithm itself, for otherwise any algorithm could be considered certifying (with its output verified by running the same algorithm again). Sometimes this is formalized by requiring that a verification of the proof take less time than the original algorithm, while for other problems (in particular those for which the solution can be found in linear time) simplicity of the output proof is considered in a less formal sense. For instance, the validity of the output proof may be more apparent to human users than the correctness of the algorithm, or a checker for the proof may be more amenable to formal verification. Implementations of certifying algorithms that also include a checker for the proof generated by the algorithm may be considered to be more reliable than non-certifying algorithms. Whenever the algorithm is run, one of three things happens: it produces a correct output (the desired case), it detects a bug in the algorithm or its implementation (undesired, but generally preferable to continuing without detecting the bug), or both the algorithm and the checker are faulty in a way that masks the bug and prevents it from being detected (undesired, but unlikely as it depends on the existence of two independent bugs). == Examples == Many examples of problems with certifying algorithms come from graph theory. For instance, a classical algorithm for testing whether a graph is bipartite would simply output a Boolean value: true if the graph is bipartite, false otherwise.
In contrast, a certifying algorithm might output a 2-coloring of the graph in the case that it is bipartite, or a cycle of odd length if it is not. Any graph is bipartite if and only if it can be 2-colored, and non-bipartite if and only if it contains an odd cycle. Both checking whether a 2-coloring is valid and checking whether a given odd-length sequence of vertices is a cycle may be performed more simply than testing bipartiteness. Analogously, it is possible to test whether a given directed graph is acyclic by a certifying algorithm that outputs either a topological order or a directed cycle. It is possible to test whether an undirected graph is a chordal graph by a certifying algorithm that outputs either an elimination ordering (an ordering of all vertices such that, for every vertex, the neighbors that are later in the ordering form a clique) or a chordless cycle. And it is possible to test whether a graph is planar by a certifying algorithm that outputs either a planar embedding or a Kuratowski subgraph. The extended Euclidean algorithm for the greatest common divisor of two integers x and y is certifying: it outputs three integers g (the greatest common divisor), a, and b, such that ax + by = g. This equation can only be true of multiples of the greatest common divisor, so testing that g is the greatest common divisor may be performed by checking that g divides both x and y and that this equation is correct. == See also == Sanity check, a simple test of the correctness of an output or intermediate result that is not required to be a complete proof of correctness == References ==
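The extended Euclidean algorithm and its checker can be sketched as follows (a minimal illustration for positive integers; the function names are mine, not from the literature). Note how much simpler the checker is than the algorithm it certifies:

```python
def certified_gcd(x, y):
    """Extended Euclidean algorithm as a certifying algorithm:
    returns (g, a, b) with g = gcd(x, y) and a*x + b*y == g."""
    old_r, r = x, y
    old_a, a = 1, 0
    old_b, b = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_a, a = a, old_a - q * a
        old_b, b = b, old_b - q * b
    return old_r, old_a, old_b

def check_certificate(x, y, g, a, b):
    """The simple checker: g divides both inputs and a*x + b*y == g,
    which together force g to be the greatest common divisor."""
    return g != 0 and x % g == 0 and y % g == 0 and a * x + b * y == g

g, a, b = certified_gcd(240, 46)
assert check_certificate(240, 46, g, a, b)
print(g, a, b)   # 2 -9 47
```

Any common divisor of x and y divides ax + by = g, so g is a multiple of the gcd; the divisibility checks show g is itself a common divisor, hence g is the gcd.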
Wikipedia/Certifying_algorithm
Computer and network surveillance is the monitoring of computer activity and data stored locally on a computer, or of data being transferred over computer networks such as the Internet. This monitoring is often carried out covertly, whether by governments, corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent government agencies. Computer and network surveillance programs are widespread today, and almost all Internet traffic can be monitored. Surveillance allows governments and other agencies to maintain social control, recognize and monitor threats or any suspicious or abnormal activity, and prevent and investigate criminal activities. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens. Many civil rights and privacy groups, such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union, have expressed concern that increasing surveillance of citizens will result in a mass surveillance society, with limited political and/or personal freedoms. Such fear has led to numerous lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance". == Network surveillance == The vast majority of computer surveillance involves the monitoring of personal data and traffic on the Internet. For example, in the United States, the Communications Assistance For Law Enforcement Act mandates that all phone calls and broadband internet traffic (emails, web traffic, instant messaging, etc.) be available for unimpeded, real-time monitoring by Federal law enforcement agencies.
Packet capture (also known as "packet sniffing") is the monitoring of data traffic on a network. Data sent between computers over the Internet or between any networks takes the form of small chunks called packets, which are routed to their destination and assembled back into a complete message. A packet capture appliance intercepts these packets, so that they may be examined and analyzed. Computer technology is needed to perform traffic analysis and sift through intercepted data to look for important/useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install such packet capture technology so that Federal law enforcement and intelligence agencies are able to intercept all of their customers' broadband Internet and voice over Internet protocol (VoIP) traffic. These technologies can be used both by intelligence agencies and for illegal activities. There is far too much data gathered by these packet sniffers for human investigators to manually search through. Thus, automated Internet surveillance computers sift through the vast amount of intercepted Internet traffic, filtering out and reporting to investigators those bits of information which are "interesting", for example, the use of certain words or phrases, visiting certain types of web sites, or communicating via email or chat with a certain individual or group. Billions of dollars per year are spent by agencies such as the Information Awareness Office, NSA, and the FBI, for the development, purchase, implementation, and operation of systems which intercept and analyze this data, extracting only the information that is useful to law enforcement and intelligence agencies. Similar systems are now used by the Iranian security services to more easily distinguish between peaceful citizens and terrorists. The technology has allegedly been installed by Germany's Siemens AG and Finland's Nokia.
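The keyword-based filtering step described above can be illustrated with a toy sketch. The payload strings and watch-listed terms here are hypothetical stand-ins for reassembled packet data; real systems operate on live captures and at vastly larger scale:

```python
# Hypothetical watch-list; real systems use far richer rules than substrings.
WATCHLIST = {"example-keyword", "another-term"}

def flag_payloads(payloads):
    """Return (index, text) pairs for payloads containing a watch-listed term."""
    flagged = []
    for i, text in enumerate(payloads):
        lowered = text.lower()
        if any(term in lowered for term in WATCHLIST):
            flagged.append((i, text))
    return flagged

captured = [
    "routine status update",
    "message mentioning example-keyword in passing",
]
print(flag_payloads(captured))   # [(1, 'message mentioning example-keyword in passing')]
```

The point of the sketch is the volume reduction: only flagged items are surfaced to investigators, while the bulk of traffic is discarded automatically.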
With its rapid development, the Internet has become a primary form of communication, and more people are potentially subject to Internet surveillance. There are advantages and disadvantages to network monitoring. For instance, systems described as "Web 2.0" have greatly impacted modern society. Tim O'Reilly, who first explained the concept of "Web 2.0", stated that Web 2.0 provides communication platforms that are "user generated", with self-produced content, motivating more people to communicate with friends online. However, Internet surveillance also has disadvantages. One researcher from Uppsala University said "Web 2.0 surveillance is directed at large user groups who help to hegemonically produce and reproduce surveillance by providing user-generated (self-produced) content. We can characterize Web 2.0 surveillance as mass self-surveillance". Surveillance companies monitor people while they are focused on work or entertainment. Yet employers themselves also monitor their employees. They do so in order to protect the company's assets and to control public communications but, most importantly, to make sure that their employees are actively working and being productive. Such monitoring can affect people emotionally, because it can provoke emotions like jealousy. A research group states "...we set out to test the prediction that feelings of jealousy lead to 'creeping' on a partner through Facebook, and that women are particularly likely to engage in partner monitoring in response to jealousy". The study shows that women can become jealous of other people when they are in an online group. Virtual assistants have become socially integrated into many people's lives. Currently, virtual assistants such as Amazon's Alexa or Apple's Siri cannot call 911 or local services. They are constantly listening for commands and recording parts of conversations that will help improve algorithms.
If law enforcement could be called using a virtual assistant, it would then be able to access all of the information saved on the device. Because the device is connected to the home's internet, law enforcement would also know the exact location of the individual calling. While virtual assistant devices are popular, many debate their lack of privacy. The devices listen to every conversation the owner is having; even if the owner is not talking to a virtual assistant, the device is still listening to the conversation in hopes that the owner will need assistance, as well as to gather data. == Corporate surveillance == Corporate surveillance of computer activity is very common. The data collected is most often used for marketing purposes or sold to other corporations, but is also regularly shared with government agencies. It can be used as a form of business intelligence, which enables the corporation to better tailor their products and/or services to be desirable to their customers. The data can also be sold to other corporations so that they can use it for the aforementioned purpose, or it can be used for direct marketing purposes, such as targeted advertisements, where ads are targeted to the user of the search engine by analyzing their search history and emails (if they use free webmail services), which are kept in a database. Such surveillance is also used to establish the business purposes of monitoring, which may include the following: Preventing misuse of resources. Companies can discourage unproductive personal activities such as online shopping or web surfing on company time. Monitoring employee performance is one way to reduce unnecessary network traffic and reduce the consumption of network bandwidth. Promoting adherence to policies. Online surveillance is one means of verifying employee observance of company networking policies. Preventing lawsuits.
Firms can be held liable for discrimination or employee harassment in the workplace. Organizations can also be involved in infringement suits through employees that distribute copyrighted material over corporate networks. Safeguarding records. Federal legislation requires organizations to protect personal information. Monitoring can determine the extent of compliance with company policies and programs overseeing information security. Monitoring may also deter unlawful appropriation of personal information, and potential spam or viruses. Safeguarding company assets. The protection of intellectual property, trade secrets, and business strategies is a major concern. The ease of information transmission and storage makes it imperative to monitor employee actions as part of a broader policy. The second component of prevention is determining the ownership of technology resources. The ownership of the firm's networks, servers, computers, files, and e-mail should be explicitly stated. There should be a distinction between an employee's personal electronic devices, which should be limited and proscribed, and those owned by the firm. For instance, Google Search stores identifying information for each web search. An IP address and the search phrase used are stored in a database for up to 18 months. Google also scans the content of emails of users of its Gmail webmail service in order to create targeted advertising based on what people are talking about in their personal email correspondences. Google is, by far, the largest Internet advertising agency—millions of sites place Google's advertising banners and links on their websites in order to earn money from visitors who click on the ads. Each page containing Google advertisements adds, reads, and modifies "cookies" on each visitor's computer. 
These cookies track the user across all of these sites and gather information about their web surfing habits, keeping track of which sites they visit, and what they do when they are on these sites. This information, along with the information from their email accounts, and search engine histories, is stored by Google to use to build a profile of the user to deliver better-targeted advertising. The United States government often gains access to these databases, either by producing a warrant for it, or by simply asking. The Department of Homeland Security has openly stated that it uses data collected from consumer credit and direct marketing agencies for augmenting the profiles of individuals that it is monitoring. == Malicious software == In addition to monitoring information sent over a computer network, there is also a way to examine data stored on a computer's hard drive, and to monitor the activities of a person using the computer. A surveillance program installed on a computer can search the contents of the hard drive for suspicious data, can monitor computer use, collect passwords, and/or report back activities in real-time to its operator through the Internet connection. A keylogger is an example of this type of program. Normal keylogging programs store their data on the local hard drive, but some are programmed to automatically transmit data over the network to a remote computer or Web server. There are multiple ways of installing such software. The most common is remote installation, using a backdoor created by a computer virus or trojan. This tactic has the advantage of potentially subjecting multiple computers to surveillance. Viruses often spread to thousands or millions of computers, and leave "backdoors" which are accessible over a network connection, and enable an intruder to remotely install software and execute commands. These viruses and trojans are sometimes developed by government agencies, such as CIPAV and Magic Lantern. 
More often, however, viruses created by other people or spyware installed by marketing agencies can be used to gain access through the security breaches that they create. Another method is "cracking" into the computer to gain access over a network. An attacker can then install surveillance software remotely. Servers and computers with permanent broadband connections are most vulnerable to this type of attack. Another source of security cracking is employees giving out information or users using brute force tactics to guess their password. One can also physically place surveillance software on a computer by gaining entry to the place where the computer is stored and install it from a compact disc, floppy disk, or thumbdrive. This method shares a disadvantage with hardware devices in that it requires physical access to the computer. One well-known worm that uses this method of spreading itself is Stuxnet. == Social network analysis == One common form of surveillance is to create maps of social networks based on data from social networking sites as well as from traffic analysis information from phone call records such as those in the NSA call database, and internet traffic data gathered under CALEA. These social network "maps" are then data mined to extract useful information such as personal interests, friendships and affiliations, wants, beliefs, thoughts, and activities. Many U.S. government agencies such as the Defense Advanced Research Projects Agency (DARPA), the National Security Agency (NSA), and the Department of Homeland Security (DHS) are currently investing heavily in research involving social network analysis. The intelligence community believes that the biggest threat to the U.S. comes from decentralized, leaderless, geographically dispersed groups. These types of threats are most easily countered by finding important nodes in the network, and removing them. To do this requires a detailed map of the network. 
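The strategy of "finding important nodes in the network, and removing them" can be illustrated with a small sketch (the network and node names below are invented for illustration): a node is structurally critical if deleting it fragments the graph into more connected components.

```python
from collections import defaultdict

def components(adj, removed=frozenset()):
    """Count connected components via BFS, optionally ignoring some nodes."""
    seen, count = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(n for n in adj[node] if n not in seen)
    return count

# Toy "social network": two tightly knit cells bridged by a single broker node.
edges = [("a", "b"), ("a", "c"), ("b", "c"),   # cell 1
         ("d", "e"), ("d", "f"), ("e", "f"),   # cell 2
         ("c", "x"), ("x", "d")]               # "x" bridges the two cells
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# A node is "important" here if removing it fragments the network.
critical = [n for n in adj if components(adj, removed={n}) > components(adj)]
print(critical)  # ['c', 'd', 'x'] — the bridge and its two endpoints
```

In real intelligence applications the "importance" measure is usually a centrality score computed on a much larger mapped network, but the underlying idea, that removing well-placed nodes disconnects the rest, is the same.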
Jason Ethier of Northeastern University, in his study of modern social network analysis, said the following of the Scalable Social Network Analysis Program developed by the Information Awareness Office: The purpose of the SSNA algorithms program is to extend techniques of social network analysis to assist with distinguishing potential terrorist cells from legitimate groups of people ... In order to be successful SSNA will require information on the social interactions of the majority of people around the globe. Since the Defense Department cannot easily distinguish between peaceful citizens and terrorists, it will be necessary for them to gather data on innocent civilians as well as on potential terrorists. == Monitoring from a distance == With only commercially available equipment, it has been shown that it is possible to monitor computers from a distance by detecting the radiation emitted by the CRT monitor. This form of computer surveillance, known as TEMPEST, involves reading electromagnetic emanations from computing devices in order to extract data from them at distances of hundreds of meters. IBM researchers have also found that, for most computer keyboards, each key emits a slightly different noise when pressed. The differences are individually identifiable under some conditions, and so it is possible to log key strokes without actually requiring logging software to run on the associated computer. In 2015, lawmakers in California passed the California Electronic Communications Privacy Act, which prohibits investigative personnel in the state from forcing businesses to hand over digital communication without a warrant. At the same time, California state senator Jerry Hill introduced a bill requiring law enforcement agencies to disclose more information on their usage of, and the information obtained from, the Stingray phone tracker device.
The law took effect in January 2016 and requires cities to operate under new guidelines governing how and when law enforcement use this device. Some legislators and public officials have disagreed with the technology because of the warrantless tracking; now, if a city wants to use the device, its use must first be considered at a public hearing. Some jurisdictions, such as Santa Clara County, have stopped using the StingRay. It has also been shown, by Adi Shamir et al., that even the high frequency noise emitted by a CPU includes information about the instructions being executed. == Policeware and govware == In German-speaking countries, spyware used or made by the government is sometimes called govware. Some countries, like Switzerland and Germany, have a legal framework governing the use of such software. Known examples include the Swiss MiniPanzer and MegaPanzer and the German R2D2 (trojan). Policeware is software designed to police citizens by monitoring their discussions and interactions. Within the U.S., Carnivore was the first incarnation of secretly installed e-mail monitoring software, installed in Internet service providers' networks to log computer communication, including transmitted e-mails. Magic Lantern is another such application, this time running on a targeted computer in a trojan style and performing keystroke logging. CIPAV, deployed by the FBI, is a multi-purpose spyware/trojan. The Clipper Chip, formerly known as MYK-78, is a small hardware chip designed in the nineties that the government can install into phones. It was intended to secure private communication and data while still allowing the government to decode the encrypted voice messages.
The Clipper Chip was designed during the Clinton administration to, “…protect personal safety and national security against a developing information anarchy that fosters criminals, terrorists and foreign foes.” The government portrayed it as the solution to the secret codes or cryptographic keys that the age of technology created. The chip raised public controversy, however, because it was seen as the next “Big Brother” tool, and the Clipper proposal ultimately failed despite many attempts to push the agenda. The "Consumer Broadband and Digital Television Promotion Act" (CBDTPA) was a bill proposed in the United States Congress. CBDTPA was known as the "Security Systems and Standards Certification Act" (SSSCA) while in draft form and was killed in committee in 2002. Had CBDTPA become law, it would have prohibited technology that could be used to read digital content under copyright (such as music, video, and e-books) without digital rights management (DRM) that prevented access to this material without the permission of the copyright holder. == Surveillance as an aid to censorship == Surveillance and censorship are different. Surveillance can be performed without censorship, but it is harder to engage in censorship without some form of surveillance. And even when surveillance does not lead directly to censorship, the widespread knowledge or belief that a person, their computer, or their use of the Internet is under surveillance can lead to self-censorship. In March 2013 Reporters Without Borders issued a Special report on Internet surveillance that examines the use of technology that monitors online activity and intercepts electronic communication in order to arrest journalists, citizen-journalists, and dissidents.
The report includes a list of "State Enemies of the Internet", Bahrain, China, Iran, Syria, and Vietnam, countries whose governments are involved in active, intrusive surveillance of news providers, resulting in grave violations of freedom of information and human rights. Computer and network surveillance is on the increase in these countries. The report also includes a second list of "Corporate Enemies of the Internet", including Amesys (France), Blue Coat Systems (U.S.), Gamma (UK and Germany), Hacking Team (Italy), and Trovicor (Germany), companies that sell products that are liable to be used by governments to violate human rights and freedom of information. Neither list is exhaustive and they are likely to be expanded in the future. Protection of sources is no longer just a matter of journalistic ethics. Journalists should equip themselves with a "digital survival kit" if they are exchanging sensitive information online, storing it on a computer hard-drive or mobile phone. Individuals associated with high-profile rights organizations, dissident groups, protest groups, or reform groups are urged to take extra precautions to protect their online identities. == Countermeasures == Countermeasures against surveillance vary based on the type of eavesdropping targeted. Electromagnetic eavesdropping, such as TEMPEST and its derivatives, often requires hardware shielding, such as Faraday cages, to block unintended emissions. To prevent interception of data in transit, encryption is a key defense. When properly implemented with end-to-end encryption, or while using tools such as Tor, and provided the device remains uncompromised and free from direct monitoring via electromagnetic analysis, audio recording, or similar methodologies, the content of communication is generally considered secure. For a number of years, numerous government initiatives have sought to weaken encryption or introduce backdoors for law enforcement access. 
Privacy advocates and the broader technology industry strongly oppose these measures, arguing that any backdoor would inevitably be discovered and exploited by malicious actors. Such vulnerabilities would endanger everyone's private data while failing to hinder criminals, who could switch to alternative platforms or create their own encrypted systems. Surveillance remains effective even when encryption is correctly employed, by exploiting metadata that is often accessible to packet sniffers unless countermeasures are applied. This includes DNS queries, IP addresses, phone numbers, URLs, timestamps, and communication durations, which can reveal significant information about user activity and interactions or associations with a person of interest. == See also == Anonymizer, a software system that attempts to make network activity untraceable Computer surveillance in the workplace Cyber spying Datacasting, a means of broadcasting files and Web pages using radio waves, allowing receivers near total immunity from traditional network surveillance techniques. Differential privacy, a method to maximize the accuracy of queries from statistical databases while minimizing the chances of violating the privacy of individuals. 
ECHELON, a signals intelligence (SIGINT) collection and analysis network operated on behalf of Australia, Canada, New Zealand, the United Kingdom, and the United States, also known as AUSCANNZUKUS and Five Eyes GhostNet, a large-scale cyber spying operation discovered in March 2009 List of government surveillance projects Internet censorship and surveillance by country Mass surveillance China's Golden Shield Project Mass surveillance in Australia Mass surveillance in China Mass surveillance in East Germany Mass surveillance in India Mass surveillance in North Korea Mass surveillance in the United Kingdom Mass surveillance in the United States Surveillance Surveillance by the United States government: 2013 mass surveillance disclosures, reports about NSA and its international partners' mass surveillance of foreign nationals and U.S. citizens Bullrun (code name), a highly classified NSA program to preserve its ability to eavesdrop on encrypted communications by influencing and weakening encryption standards, by obtaining master encryption keys, and by gaining access to data before or after it is encrypted either by agreement, by force of law, or by computer network exploitation (hacking) Carnivore, a U.S. Federal Bureau of Investigation system to monitor email and electronic communications COINTELPRO, a series of covert, and at times illegal, projects conducted by the FBI aimed at U.S. domestic political organizations Communications Assistance For Law Enforcement Act Computer and Internet Protocol Address Verifier (CIPAV), a data gathering tool used by the U.S. Federal Bureau of Investigation (FBI) Dropmire, a secret surveillance program by the NSA aimed at surveillance of foreign embassies and diplomatic staff, including those of NATO allies Magic Lantern, keystroke logging software developed by the U.S. 
Federal Bureau of Investigation Mass surveillance in the United States NSA call database, a database containing metadata for hundreds of billions of telephone calls made in the U.S. NSA warrantless surveillance (2001–07) NSA whistleblowers: William Binney, Thomas Andrews Drake, Mark Klein, Edward Snowden, Thomas Tamm, Russ Tice Spying on United Nations leaders by United States diplomats Stellar Wind (code name), code name for information collected under the President's Surveillance Program Tailored Access Operations, NSA's hacking program Terrorist Surveillance Program, an NSA electronic surveillance program Total Information Awareness, a project of the Defense Advanced Research Projects Agency (DARPA) TEMPEST, codename for studies of unintentional intelligence-bearing signals which, if intercepted and analyzed, may disclose the information transmitted, received, handled, or otherwise processed by any information-processing equipment == References == == External links == "Selected Papers in Anonymity", Free Haven Project, accessed 16 September 2011. Yan, W. (2019) Introduction to Intelligent Surveillance: Surveillance Data Capture, Transmission, and Analytics, Springer.
Wikipedia/Network_surveillance
The Barabási–Albert (BA) model is an algorithm for generating random scale-free networks using a preferential attachment mechanism. Several natural and human-made systems, including the Internet, the World Wide Web, citation networks, and some social networks, are thought to be approximately scale-free: they contain a few nodes (called hubs) with unusually high degree compared to the other nodes of the network. The BA model tries to explain the existence of such nodes in real networks. The algorithm is named for its inventors Albert-László Barabási and Réka Albert. == Concepts == Many observed networks fall, at least approximately, into the class of scale-free networks, meaning that they have power-law (or scale-free) degree distributions, while random graph models such as the Erdős–Rényi (ER) model and the Watts–Strogatz (WS) model do not exhibit power laws. The Barabási–Albert model is one of several proposed models that generate scale-free networks. It incorporates two important general concepts: growth and preferential attachment. Both growth and preferential attachment exist widely in real networks. Growth means that the number of nodes in the network increases over time. Preferential attachment means that the more connected a node is, the more likely it is to receive new links: nodes with higher degree have a stronger ability to grab links added to the network. Intuitively, preferential attachment can be understood by thinking of social networks connecting people. Here a link from A to B means that person A "knows" or "is acquainted with" person B. Heavily linked nodes represent well-known people with many relations. When a newcomer enters the community, they are more likely to become acquainted with one of those more visible people than with a relative unknown. The BA model was proposed by assuming that in the World Wide Web, new pages link preferentially to hubs, i.e.
very well known sites such as Google, rather than to pages that hardly anyone knows. If someone selects a new page to link to by randomly choosing an existing link, the probability of selecting a particular page is proportional to its degree; the BA model takes this as its preferential attachment rule. The later Bianconi–Barabási model refines this picture by introducing a "fitness" parameter, so that attachment depends on more than degree alone. Preferential attachment is an example of a positive feedback cycle in which initially random variations (one node initially having more links, or having started accumulating links earlier than another) are automatically reinforced, greatly magnifying differences. This is also sometimes called the Matthew effect, "the rich get richer". See also autocatalysis. == Algorithm == The only parameter in the BA model is a positive integer m. The network is initialized with m_0 \geq m nodes. At each step, one new node is added and connected to m existing nodes, sampled with probability proportional to the number of links those nodes already have. (The original papers did not specify how to handle cases where the same existing node is chosen multiple times.) Formally, the probability p_i that the new node is connected to node i is p_i = k_i / \sum_j k_j, where k_i is the degree of node i and the sum runs over all pre-existing nodes j (so the denominator equals twice the current number of edges in the network). This step can be performed by first uniformly sampling one edge, then sampling one of the two vertices on that edge.
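The growth step above can be sketched in a few lines of Python. This is a toy implementation, not the authors' code; the path-graph seed and the re-drawing of duplicate targets are conventions chosen here. The trick is to keep a flat list in which every node appears once per incident edge, so that a uniform draw from that list is exactly degree-proportional sampling.

```python
import random

def barabasi_albert(n, m, m0=None, seed=None):
    """Grow a BA network to n nodes; each new node attaches m edges
    preferentially. Sampling a uniform entry from the edge-endpoint list
    picks node i with probability k_i / sum_j k_j."""
    rng = random.Random(seed)
    m0 = m0 or m
    # Start from a connected seed: a path on m0 nodes.
    edges = [(i, i + 1) for i in range(m0 - 1)]
    endpoint_pool = [v for e in edges for v in e]  # node i appears k_i times
    for new in range(m0, n):
        targets = set()
        while len(targets) < m:  # duplicates are re-drawn (one convention;
            targets.add(rng.choice(endpoint_pool))  # the original papers leave this open)
        for t in targets:
            edges.append((new, t))
            endpoint_pool.extend((new, t))
    return edges

edges = barabasi_albert(n=2000, m=2, seed=42)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
# Hubs emerge: the maximum degree far exceeds the average (which is close to 2m).
print(max(degree.values()), sum(degree.values()) / len(degree))
```

Because the pool is updated incrementally, each growth step costs O(m) expected time, so the whole network is generated in roughly O(nm) time.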
Heavily linked nodes ("hubs") tend to quickly accumulate even more links, while nodes with only a few links are unlikely to be chosen as the destination for a new link. The new nodes have a "preference" to attach themselves to the already heavily linked nodes. == Properties == === Degree distribution === The degree distribution resulting from the BA model is scale free; in particular, it is a power law of the form P(k) \sim k^{-3}. === Hirsch index distribution === The h-index or Hirsch index distribution was shown to also be scale free and was proposed as the lobby index, to be used as a centrality measure: H(k) \sim k^{-6}. Furthermore, an analytic result for the density of nodes with h-index 1 can be obtained in the case where m_0 = 1: H(1)|_{m_0=1} = 4 - \pi. === Node degree correlations === Correlations between the degrees of connected nodes develop spontaneously in the BA model because of the way the network evolves. The probability n_{k\ell} of finding a link that connects a node of degree k to an ancestor node of degree \ell in the BA model, for the special case of m = 1 (the BA tree), is given by n_{k\ell} = \frac{4(\ell-1)}{k(k+1)(k+\ell)(k+\ell+1)(k+\ell+2)} + \frac{12(\ell-1)}{k(k+\ell-1)(k+\ell)(k+\ell+1)(k+\ell+2)}. This confirms the existence of degree correlations, because if the distributions were uncorrelated, we would get n_{k\ell} = k^{-3}\ell^{-3}.
For general m, the fraction of links that connect a node of degree k to a node of degree \ell is p(k,\ell) = \frac{2m(m+1)}{k(k+1)\ell(\ell+1)}\left[1 - \frac{\binom{2m+2}{m+1}\binom{k+\ell-2m}{\ell-m}}{\binom{k+\ell+2}{\ell+1}}\right]. Also, the nearest-neighbor degree distribution p(\ell \mid k), that is, the degree distribution of the neighbors of a node with degree k, is given by p(\ell \mid k) = \frac{m(k+2)}{k\ell(\ell+1)}\left[1 - \frac{\binom{2m+2}{m+1}\binom{k+\ell-2m}{\ell-m}}{\binom{k+\ell+2}{\ell+1}}\right]. In other words, if we select a node with degree k, and then select one of its neighbors randomly, the probability that this randomly selected neighbor will have degree \ell is given by the expression p(\ell \mid k) above. === Clustering coefficient === The case m = 1 is trivial: the networks are trees and the clustering coefficient is zero. An analytical result for the clustering coefficient of the BA model was obtained by Klemm and Eguíluz and proven by Bollobás. A mean-field approach to study the clustering coefficient was applied by Fronczak, Fronczak and Holyst. The average clustering coefficient of the Barabási–Albert model depends on the size of the network N: \langle C \rangle \sim (\ln N)^{2}/N. This behavior is distinct from that of small-world networks, where clustering is independent of system size.
The clustering as a function of node degree, C(k), is practically independent of k. === Spectral properties === The spectral density of the BA model has a different shape from the semicircular spectral density of a random graph. It has a triangle-like shape, with the top lying well above the semicircle and edges decaying as a power law. In (Section 5.1), it was proved that the shape of this spectral density is not an exact triangular function, by analyzing the moments of the spectral density as a function of the power-law exponent. === Dynamic scaling === By definition, the BA model describes a time-developing phenomenon and hence, besides its scale-free property, one could also look for its dynamic scaling property. In the BA network nodes can also be characterized by a generalized degree q, the product of the square root of the birth time of each node and its corresponding degree k, instead of the degree k alone, since the time of birth matters in the BA network. We find that the generalized degree distribution F(q,t) has some non-trivial features and exhibits dynamic scaling, F(q,t) \sim t^{-1/2}\phi(q/t^{1/2}). This implies that the distinct plots of F(q,t) vs q would collapse onto a universal curve if we plot F(q,t)\,t^{1/2} vs q/t^{1/2}. == Limiting cases == === Model A === Model A retains growth but does not include preferential attachment. The probability of a new node connecting to any pre-existing node is equal. The resulting degree distribution in this limit is geometric, indicating that growth alone is not sufficient to produce a scale-free structure. === Model B === Model B retains preferential attachment but eliminates growth.
The model begins with a fixed number of disconnected nodes and adds links, preferentially choosing high degree nodes as link destinations. Though the degree distribution early in the simulation looks scale-free, the distribution is not stable, and it eventually becomes nearly Gaussian as the network nears saturation. So preferential attachment alone is not sufficient to produce a scale-free structure. The failure of models A and B to produce a scale-free distribution indicates that growth and preferential attachment are needed simultaneously to reproduce the stationary power-law distribution observed in real networks. == Non-linear preferential attachment == The BA model can be thought of as a specific case of the more general non-linear preferential attachment (NLPA) model. The NLPA algorithm is identical to the BA model, with the attachment probability replaced by the more general form p_i = k_i^{\alpha} / \sum_j k_j^{\alpha}, where \alpha is a constant positive exponent. If \alpha = 1, NLPA reduces to the BA model and is referred to as "linear". If 0 < \alpha < 1, NLPA is referred to as "sub-linear" and the degree distribution of the network tends to a stretched exponential distribution. If \alpha > 1, NLPA is referred to as "super-linear" and a small number of nodes connect to almost all other nodes in the network. For both \alpha < 1 and \alpha > 1, the scale-free property of the network is broken in the limit of infinite system size. However, if \alpha is only slightly larger than 1, NLPA may result in degree distributions which appear to be transiently scale free. == History == Preferential attachment made its first appearance in 1923 in the celebrated urn model of the Hungarian mathematician György Pólya.
The master equation method, which yields a more transparent derivation, was applied to the problem by Herbert A. Simon in 1955 in the course of studies of the sizes of cities and other phenomena. It was first applied to explain citation frequencies by Derek de Solla Price in 1976. Price was interested in the accumulation of citations of scientific papers, and the Price model used "cumulative advantage" (his name for preferential attachment) to generate a fat-tailed distribution. In the language of modern citation networks, Price's model produces a directed network, i.e. a directed version of the Barabási–Albert model. The name "preferential attachment" and the present popularity of scale-free network models are due to the work of Albert-László Barabási and Réka Albert, who discovered that a similar process is present in real networks, and in 1999 applied preferential attachment to explain the numerically observed degree distributions on the web. == See also == Bianconi–Barabási model Chinese restaurant process Complex networks Erdős–Rényi (ER) model Price's model Percolation theory Scale-free network Small-world network Watts and Strogatz model == References == == External links == "This Man Could Rule the World" "A Java Implementation for Barabási–Albert" "Generating Barabási–Albert Model Graphs in Code"
Wikipedia/Barabási–Albert_model
Network controllability concerns the structural controllability of a network. Controllability describes our ability to guide a dynamical system from any initial state to any desired final state in finite time, with a suitable choice of inputs. This definition agrees well with our intuitive notion of control. The controllability of general directed and weighted complex networks has recently been the subject of intense study by a number of groups worldwide, across a wide variety of networks. Recent studies by Sharma et al. on multi-type biological networks (gene–gene, miRNA–gene, and protein–protein interaction networks) identified control targets in phenotypically characterized osteosarcoma, showing the important role of genes and proteins responsible for maintaining the tumor microenvironment. == Background == Consider the canonical linear time-invariant dynamics on a complex network X ˙ ( t ) = A ⋅ X ( t ) + B ⋅ u ( t ) {\displaystyle {\dot {\mathbf {X} }}(t)=\mathbf {A} \cdot \mathbf {X} (t)+\mathbf {B} \cdot \mathbf {u} (t)} where the vector X ( t ) = ( x 1 ( t ) , ⋯ , x N ( t ) ) T {\displaystyle \mathbf {X} (t)=(x_{1}(t),\cdots ,x_{N}(t))^{\mathrm {T} }} captures the state of a system of N {\displaystyle N} nodes at time t {\displaystyle t} . The N × N {\displaystyle N\times N} matrix A {\displaystyle \mathbf {A} } describes the system's wiring diagram and the interaction strength between the components. The N × M {\displaystyle N\times M} matrix B {\displaystyle \mathbf {B} } identifies the nodes controlled by an outside controller. The system is controlled through the time-dependent input vector u ( t ) = ( u 1 ( t ) , ⋯ , u M ( t ) ) T {\displaystyle \mathbf {u} (t)=(u_{1}(t),\cdots ,u_{M}(t))^{\mathrm {T} }} that the controller imposes on the system. To identify the minimum number of driver nodes, denoted by N D {\displaystyle N_{\mathrm {D} }} , whose control is sufficient to fully control the system's dynamics, Liu et al.
combined tools from structural control theory, graph theory and statistical physics. They showed that the minimum number of inputs or driver nodes needed to maintain full control of the network is determined by the maximum-cardinality matching in the network. From this result, an analytical framework, based on the in–out degree distribution, was developed to predict n D = N D / N {\displaystyle n_{\mathrm {D} }=N_{\mathrm {D} }/N} for scale-free and Erdős–Rényi random graphs. However, more recently it has been demonstrated that network controllability (and other structure-only methods that use exclusively the connectivity of a graph, A {\displaystyle \mathbf {A} } , to simplify the underlying dynamics) can both undershoot and overshoot the number of driver nodes, and misidentify which sets of driver nodes best control network dynamics, highlighting the importance of redundancy (e.g. canalization) and non-linear dynamics in determining control. It is also notable that the formulation of Liu et al. would predict the same value of n D {\displaystyle {n_{\mathrm {D} }}} for a chain graph and for a weak densely connected graph, even though these two graphs have very different in- and out-degree distributions. A recent unpublished work questions whether degree, a purely local measure in networks, completely determines controllability, and whether even slightly more distant nodes play a role in deciding network controllability. Indeed, for many real-world networks, namely food webs, neuronal and metabolic networks, the mismatch in values of n D r e a l {\displaystyle {n_{\mathrm {D} }}^{real}} and n D r a n d _ d e g r e e {\displaystyle {n_{\mathrm {D} }}^{\mathrm {rand\_degree} }} calculated by Liu et al. is notable. If controllability is decided mainly by degree, why are n D r e a l {\displaystyle {n_{\mathrm {D} }}^{real}} and n D r a n d _ d e g r e e {\displaystyle {n_{\mathrm {D} }}^{\mathrm {rand\_degree} }} so different for many real-world networks?
They argued (arXiv:1203.5161v1) that this might be due to the effect of degree correlations. However, it has been shown that network controllability can be altered only by using betweenness centrality and closeness centrality, without using degree (graph theory) or degree correlations at all. === Structural controllability === The concept of structural properties was first introduced by Lin (1974) and then extended by Shields and Pearson (1976) and alternatively derived by Glover and Silverman (1976). The main question is whether the lack of controllability or observability is generic with respect to the variable system parameters. In the framework of structural control the system parameters are either independent free variables or fixed zeros. This is consistent for models of physical systems since parameter values are never known exactly, with the exception of zero values which express the absence of interactions or connections. === Maximum matching === In graph theory, a matching is a set of edges without common vertices. Liu et al. extended this definition to directed graphs, where a matching is a set of directed edges that do not share start or end vertices. It is easy to check that a matching of a directed graph consists of a set of vertex-disjoint simple paths and cycles. The maximum matching of a directed network can be efficiently calculated by working in the bipartite representation using the classical Hopcroft–Karp algorithm, which runs in O(E√N) time in the worst case. For undirected graphs, analytical solutions of the size and number of maximum matchings have been studied using the cavity method developed in statistical physics. Liu et al. extended the calculations to directed graphs. By calculating the maximum matchings of a wide range of real networks, Liu et al. asserted that the number of driver nodes is determined mainly by the network's degree distribution P ( k i n , k o u t ) {\displaystyle P(k_{\mathrm {in} },k_{\mathrm {out} })} .
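The maximum-matching result of Liu et al. can be illustrated in code. The sketch below (a hypothetical helper; it uses a simple augmenting-path matching instead of Hopcroft–Karp, for brevity) computes N_D = max(N − |M*|, 1), where |M*| is the size of a maximum matching in the bipartite representation of the directed graph:

```python
def minimum_driver_nodes(n, edges):
    """N_D for a directed graph on nodes 0..n-1, via Liu et al.'s result
    N_D = max(N - |M*|, 1), using a simple augmenting-path matching."""
    adj = [[] for _ in range(n)]          # out-copy u -> reachable in-copies
    for u, v in edges:                    # edge u->v joins u_out and v_in
        adj[u].append(v)
    match_in = [-1] * n                   # in-copy v -> matched out-copy

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_in[v] == -1 or augment(match_in[v], seen):
                    match_in[v] = u
                    return True
        return False

    matched = sum(augment(u, set()) for u in range(n))
    return max(n - matched, 1)            # unmatched nodes are driver nodes

# A directed chain 0->1->2->3->4 is controllable from a single driver node,
# while 5 isolated nodes must each be driven directly:
print(minimum_driver_nodes(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))  # 1
print(minimum_driver_nodes(5, []))                                # 5
```

On the chain every node except the head is matched, so one input suffices; deleting edges leaves nodes unmatched and each unmatched node needs its own input, consistent with the statement that the driver-node count is governed by the matching structure.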
They also calculated the average number of driver nodes for a network ensemble with arbitrary degree distribution using the cavity method. Interestingly, for a chain graph and a weak densely connected graph, which have very different in- and out-degree distributions, the formulation of Liu et al. predicts the same value of n D {\displaystyle {n_{\mathrm {D} }}} . Also, for many real-world networks, namely food webs, neuronal and metabolic networks, the mismatch in values of n D r e a l {\displaystyle {n_{\mathrm {D} }}^{real}} and n D r a n d _ d e g r e e {\displaystyle {n_{\mathrm {D} }}^{\mathrm {rand\_degree} }} calculated by Liu et al. is notable. If controllability is decided purely by degree, why are n D r e a l {\displaystyle {n_{\mathrm {D} }}^{real}} and n D r a n d _ d e g r e e {\displaystyle {n_{\mathrm {D} }}^{\mathrm {rand\_degree} }} so different for many real-world networks? It remains open to scrutiny whether "control robustness" in networks is influenced more by betweenness centrality and closeness centrality than by degree-based metrics. Since sparser graphs are more difficult to control, it would be interesting to determine whether betweenness centrality and closeness centrality or degree heterogeneity plays the more important role in deciding controllability of sparse graphs with similar degree distributions. == Control of composite quantum systems and algebraic graph theory == A control theory of networks has also been developed in the context of universal control for composite quantum systems, where subsystems and their interactions are associated with nodes and links, respectively. This framework permits formulation of Kalman's criterion with tools from algebraic graph theory via the minimum rank of a graph and related notions. == See also == Controllability Gramian == References == == External links == The network controllability project website The video showing network controllability
Wikipedia/Network_controllability
In complex network theory, the fitness model is a model of the evolution of a network: how the links between nodes change over time depends on the fitness of nodes. Fitter nodes attract more links at the expense of less fit nodes. It has been used to model the network structure of the World Wide Web. == Description of the model == The model is based on the idea of fitness, an inherent competitive factor that nodes may have, capable of affecting the network's evolution. According to this idea, the nodes' intrinsic ability to attract links in the network varies from node to node, the most efficient (or "fit") being able to gather more edges at the expense of others. In that sense, not all nodes are identical: each node's degree grows over time according to the fitness it possesses. The fitness factors of all the nodes composing the network may form a distribution ρ(η) characteristic of the system being studied. Ginestra Bianconi and Albert-László Barabási proposed a new model, called the Bianconi–Barabási model, a variant of the Barabási–Albert model (BA model), where the probability for a node to connect to another one is supplied with a term expressing the fitness of the node involved. The fitness parameter is time-independent and is multiplicative to the probability. A fitness model where fitnesses are not coupled to preferential attachment has been introduced by Caldarelli et al. Here a link is created between two vertices i , j {\displaystyle i,j} with a probability given by a linking function f ( η i , η j ) {\displaystyle f(\eta _{i},\eta _{j})} of the fitnesses of the vertices involved.
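A minimal simulation of the Bianconi–Barabási variant described above might look as follows (an illustrative sketch: function and parameter names are assumptions, the complete initial core is one common choice of seed, and the uniform fitness distribution is just an example of ρ(η)):

```python
import random

def bianconi_barabasi(n, m, fitness):
    """Grow a network in which node i attracts a new link with probability
    proportional to fitness[i] * degree[i] (fitness times degree).
    Starts from a complete core of m + 1 nodes; each later node adds m links."""
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    degree = [m] * (m + 1) + [0] * (n - m - 1)
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:            # sample m distinct targets
            weights = [fitness[i] * degree[i] for i in range(new)]
            targets.add(random.choices(range(new), weights=weights)[0])
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return edges, degree

random.seed(1)
fitness = [random.random() + 0.01 for _ in range(200)]  # example fitness draw
edges, degree = bianconi_barabasi(200, 2, fitness)
```

Because the attachment weight is fitness times degree, a late-arriving node with high fitness can overtake older, better-connected nodes, which is the qualitative behavior that distinguishes this model from pure preferential attachment.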
The degree of a vertex i is given by: k ( η i ) = N ∫ 0 ∞ f ( η i , η j ) ρ ( η j ) d η j {\displaystyle k(\eta _{i})=N\int _{0}^{\infty }\!\!\!f(\eta _{i},\eta _{j})\rho (\eta _{j})d\eta _{j}} If k ( η i ) {\displaystyle k(\eta _{i})} is an invertible and increasing function of η i {\displaystyle \eta _{i}} , then the probability distribution P ( k ) {\displaystyle P(k)} is given by P ( k ) = ρ ( η ( k ) ) ⋅ η ′ ( k ) {\displaystyle P(k)=\rho (\eta (k))\cdot \eta '(k)} As a result, if the fitnesses η {\displaystyle \eta } are distributed as a power law, then the node degrees follow a power law as well. Less intuitively, with a fast-decaying probability distribution such as ρ ( η ) = e − η {\displaystyle \rho (\eta )=e^{-\eta }} together with a linking function of the kind f ( η i , η j ) = Θ ( η i + η j − Z ) {\displaystyle f(\eta _{i},\eta _{j})=\Theta (\eta _{i}+\eta _{j}-Z)} with Z {\displaystyle Z} a constant and Θ {\displaystyle \Theta } the Heaviside step function, we also obtain scale-free networks. Such a model has been successfully applied to describe trade between nations, by using GDP as fitness for the various nodes i , j {\displaystyle i,j} and a linking function of the kind δ η i η j 1 + δ η i η j {\displaystyle {\frac {\delta \eta _{i}\eta _{j}}{1+\delta \eta _{i}\eta _{j}}}} == Fitness model and the evolution of the Web == The fitness model has been used to model the network structure of the World Wide Web. In a PNAS article, Kong et al. extended the fitness model to include random node deletion, a common phenomenon on the Web. When the deletion rate of the web pages is accounted for, they found that the overall fitness distribution is exponential. Nonetheless, even this small variance in the fitness is amplified through the preferential attachment mechanism, leading to a heavy-tailed distribution of incoming links on the Web. == See also == Bose–Einstein condensation: a network theory approach == References ==
Wikipedia/Fitness_model_(network_theory)
The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the transport layer of the TCP/IP suite. SSL/TLS often runs on top of TCP. TCP is connection-oriented, meaning that sender and receiver first need to establish a connection based on agreed parameters; they do this through a three-way handshake procedure. The server must be listening (passive open) for connection requests from clients before a connection is established. The three-way handshake (active open), retransmission, and error detection add to reliability but lengthen latency. Applications that do not require reliable data stream service may use the User Datagram Protocol (UDP) instead, which provides a connectionless datagram service that prioritizes time over reliability. TCP employs network congestion avoidance. However, there are vulnerabilities in TCP, including denial of service, connection hijacking, TCP veto, and reset attack. == Historical origin == In May 1974, Vint Cerf and Bob Kahn described an internetworking protocol for sharing resources using packet switching among network nodes. The authors had been working with Gérard Le Lann to incorporate concepts from the French CYCLADES project into the new network. The specification of the resulting protocol, RFC 675 (Specification of Internet Transmission Control Program), was written by Vint Cerf, Yogen Dalal, and Carl Sunshine, and published in December 1974. It contains the first attested use of the term internet, as a shorthand for internetwork.
The Transmission Control Program incorporated both connection-oriented links and datagram services between hosts. In version 4, the monolithic Transmission Control Program was divided into a modular architecture consisting of the Transmission Control Protocol and the Internet Protocol. This resulted in a networking model that became known informally as TCP/IP, although formally it was variously referred to as the DoD internet architecture model (DoD model for short) or DARPA model. Later, it became part of, and synonymous with, the Internet protocol suite. The following Internet Experiment Note (IEN) documents describe the evolution of TCP into the modern version: IEN 5 Specification of Internet Transmission Control Program TCP Version 2 (March 1977). IEN 21 Specification of Internetwork Transmission Control Program TCP Version 3 (January 1978). IEN 27 IEN 40 IEN 44 IEN 55 IEN 81 IEN 112 IEN 124 TCP was standardized in January 1980 as RFC 761. In 2004, Vint Cerf and Bob Kahn received the Turing Award for their foundational work on TCP/IP. == Network function == The Transmission Control Protocol provides a communication service at an intermediate level between an application program and the Internet Protocol. It provides host-to-host connectivity at the transport layer of the Internet model. An application does not need to know the particular mechanisms for sending data via a link to another host, such as the required IP fragmentation to accommodate the maximum transmission unit of the transmission medium. At the transport layer, TCP handles all handshaking and transmission details and presents an abstraction of the network connection to the application typically through a network socket interface. At the lower levels of the protocol stack, due to network congestion, traffic load balancing, or unpredictable network behavior, IP packets may be lost, duplicated, or delivered out of order.
TCP detects these problems, requests re-transmission of lost data, rearranges out-of-order data and even helps minimize network congestion to reduce the occurrence of the other problems. If the data still remains undelivered, the source is notified of this failure. Once the TCP receiver has reassembled the sequence of octets originally transmitted, it passes them to the receiving application. Thus, TCP abstracts the application's communication from the underlying networking details. TCP is used extensively by many internet applications, including the World Wide Web (WWW), email, File Transfer Protocol, Secure Shell, peer-to-peer file sharing, and streaming media. TCP is optimized for accurate delivery rather than timely delivery and can incur relatively long delays (on the order of seconds) while waiting for out-of-order messages or re-transmissions of lost messages. Therefore, it is not particularly suitable for real-time applications such as voice over IP. For such applications, protocols like the Real-time Transport Protocol (RTP) operating over the User Datagram Protocol (UDP) are usually recommended instead. TCP is a reliable byte stream delivery service that guarantees that all bytes received will be identical and in the same order as those sent. Since packet transfer by many networks is not reliable, TCP achieves this using a technique known as positive acknowledgment with re-transmission. This requires the receiver to respond with an acknowledgment message as it receives the data. The sender keeps a record of each packet it sends and maintains a timer from when the packet was sent. The sender re-transmits a packet if the timer expires before receiving the acknowledgment. The timer is needed in case a packet gets lost or corrupted. While IP handles actual delivery of the data, TCP keeps track of segments – the individual units of data transmission that a message is divided into for efficient routing through the network. 
For example, when an HTML file is sent from a web server, the TCP software layer of that server divides the file into segments and forwards them individually to the internet layer in the network stack. The internet layer software encapsulates each TCP segment into an IP packet by adding a header that includes (among other data) the destination IP address. When the client program on the destination computer receives them, the TCP software in the transport layer re-assembles the segments and ensures they are correctly ordered and error-free as it streams the file contents to the receiving application. == TCP segment structure == Transmission Control Protocol accepts data from a data stream, divides it into chunks, and adds a TCP header creating a TCP segment. The TCP segment is then encapsulated into an Internet Protocol (IP) datagram, and exchanged with peers. The term TCP packet appears in both informal and formal usage, whereas in more precise terminology segment refers to the TCP protocol data unit (PDU), datagram to the IP PDU, and frame to the data link layer PDU: Processes transmit data by calling on the TCP and passing buffers of data as arguments. The TCP packages the data from these buffers into segments and calls on the internet module [e.g. IP] to transmit each segment to the destination TCP. A TCP segment consists of a segment header and a data section. The segment header contains 10 mandatory fields, and an optional extension field (Options, pink background in table). The data section follows the header and is the payload data carried for the application. The length of the data section is not specified in the segment header; it can be calculated by subtracting the combined length of the segment header and IP header from the total IP datagram length specified in the IP header. Source Port: 16 bits Identifies the sending port. Destination Port: 16 bits Identifies the receiving port. 
Sequence Number: 32 bits Has a dual role: If the SYN flag is set (1), then this is the initial sequence number. The sequence number of the actual first data byte and the acknowledged number in the corresponding ACK are then this sequence number plus 1. If the SYN flag is unset (0), then this is the accumulated sequence number of the first data byte of this segment for the current session. Acknowledgment Number: 32 bits If the ACK flag is set then the value of this field is the next sequence number that the sender of the ACK is expecting. This acknowledges receipt of all prior bytes (if any). The first ACK sent by each end acknowledges the other end's initial sequence number itself, but no data. Data Offset (DOffset): 4 bits Specifies the size of the TCP header in 32-bit words. The minimum size header is 5 words and the maximum is 15 words thus giving the minimum size of 20 bytes and maximum of 60 bytes, allowing for up to 40 bytes of options in the header. This field gets its name from the fact that it is also the offset from the start of the TCP segment to the actual data. Reserved (Rsrvd): 4 bits For future use and should be set to zero; senders should not set these and receivers should ignore them if set, in the absence of further specification and implementation. From 2003 to 2017, the last bit (bit 103 of the header) was defined as the NS (Nonce Sum) flag by the experimental RFC 3540, ECN-nonce. ECN-nonce never gained widespread use and the RFC was moved to Historic status. Flags: 8 bits Contains 8 1-bit flags (control bits) as follows: CWR: 1 bit Congestion window reduced (CWR) flag is set by the sending host to indicate that it received a TCP segment with the ECE flag set and had responded in congestion control mechanism. ECE: 1 bit ECN-Echo has a dual role, depending on the value of the SYN flag. It indicates: If the SYN flag is set (1), the TCP peer is ECN capable. 
If the SYN flag is unset (0), a packet with the Congestion Experienced flag set (ECN=11) in its IP header was received during normal transmission. This serves as an indication of network congestion (or impending congestion) to the TCP sender. URG: 1 bit Indicates that the Urgent pointer field is significant. ACK: 1 bit Indicates that the Acknowledgment field is significant. All packets after the initial SYN packet sent by the client should have this flag set. PSH: 1 bit Push function. Asks to push the buffered data to the receiving application. RST: 1 bit Reset the connection SYN: 1 bit Synchronize sequence numbers. Only the first packet sent from each end should have this flag set. Some other flags and fields change meaning based on this flag, and some are only valid when it is set, and others when it is clear. FIN: 1 bit Last packet from sender Window: 16 bits The size of the receive window, which specifies the number of window size units that the sender of this segment is currently willing to receive. (See § Flow control and § Window scaling.) Checksum: 16 bits The 16-bit checksum field is used for error-checking of the TCP header, the payload and an IP pseudo-header. The pseudo-header consists of the source IP address, the destination IP address, the protocol number for the TCP protocol (6) and the length of the TCP headers and payload (in bytes). Urgent Pointer: 16 bits If the URG flag is set, then this 16-bit field is an offset from the sequence number indicating the last urgent data byte. Options (TCP Option): Variable 0–320 bits, in units of 32 bits; size(Options) == (DOffset - 5) * 32 The length of this field is determined by the Data Offset field. The TCP header padding is used to ensure that the TCP header ends, and data begins, on a 32-bit boundary. The padding is composed of zeros. Options have up to three fields: Option-Kind (1 byte), Option-Length (1 byte), Option-Data (variable). 
The Option-Kind field indicates the type of option and is the only field that is not optional. Depending on the Option-Kind value, the next two fields may be set. Option-Length indicates the total length of the option, and Option-Data contains data associated with the option, if applicable. For example, an Option-Kind byte of 1 indicates that this is a no-operation option used only for padding, and does not have Option-Length or Option-Data fields following it. An Option-Kind byte of 0 marks the end of options, and is also only one byte. An Option-Kind byte of 2 is used to indicate the Maximum Segment Size option, and will be followed by an Option-Length byte specifying the length of the MSS field. Option-Length is the total length of the given options field, including the Option-Kind and Option-Length fields. So while the MSS value is typically expressed in two bytes, Option-Length will be 4. As an example, an MSS option field with a value of 0x05B4 is coded as (0x02 0x04 0x05B4) in the TCP options section. Some options may only be sent when SYN is set; they are indicated below as [SYN]. Option-Kind and standard lengths are given as (Option-Kind, Option-Length). The remaining Option-Kind values are historical, obsolete, experimental, not yet standardized, or unassigned. Option number assignments are maintained by the Internet Assigned Numbers Authority (IANA). Data: Variable The payload of the TCP packet == Protocol operation == TCP protocol operations may be divided into three phases. Connection establishment is a multi-step handshake process that establishes a connection before entering the data transfer phase. After data transfer is completed, the connection termination closes the connection and releases all allocated resources. A TCP connection is managed by an operating system through a resource that represents the local end-point for communications, the Internet socket.
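In practice the handshake, state machine, and segment handling live inside the operating system; an application only sees the socket interface. A minimal Python sketch of a passive open (server) and an active open (client) over the loopback interface:

```python
import socket
import threading

# Passive open: the server must bind and listen before any client connects.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0 lets the OS pick a free port
server.listen()
host, port = server.getsockname()

def serve_once():
    conn, _ = server.accept()          # returns once the handshake completes
    conn.sendall(conn.recv(1024).upper())   # echo back, upper-cased
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Active open: connect() makes the OS perform the SYN / SYN-ACK / ACK exchange.
client = socket.create_connection((host, port))
client.sendall(b"hello tcp")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)
```

Neither side ever constructs a SYN or ACK itself: `listen`/`accept` correspond to the passive open and `connect` to the active open, with the three-way handshake carried out entirely by the kernel.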
During the lifetime of a TCP connection, the local end-point undergoes a series of state changes: === Connection establishment === Before a client attempts to connect with a server, the server must first bind to and listen at a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may establish a connection by initiating an active open using the three-way (or 3-step) handshake: SYN: The active open is performed by the client sending a SYN to the server. The client sets the segment's sequence number to a random value A. SYN-ACK: In response, the server replies with a SYN-ACK. The acknowledgment number is set to one more than the received sequence number i.e. A+1, and the sequence number that the server chooses for the packet is another random number, B. ACK: Finally, the client sends an ACK back to the server. The sequence number is set to the received acknowledgment value i.e. A+1, and the acknowledgment number is set to one more than the received sequence number i.e. B+1. Steps 1 and 2 establish and acknowledge the sequence number for one direction (client to server). Steps 2 and 3 establish and acknowledge the sequence number for the other direction (server to client). Following the completion of these steps, both the client and server have received acknowledgments and a full-duplex communication is established. === Connection termination === The connection termination phase uses a four-way handshake, with each side of the connection terminating independently. When an endpoint wishes to stop its half of the connection, it transmits a FIN packet, which the other end acknowledges with an ACK. Therefore, a typical tear-down requires a pair of FIN and ACK segments from each TCP endpoint. 
After the side that sent the first FIN has responded with the final ACK, it waits for a timeout before finally closing the connection, during which time the local port is unavailable for new connections; this state lets the TCP client resend the final acknowledgment to the server in case the ACK is lost in transit. The time duration is implementation-dependent, but some common values are 30 seconds, 1 minute, and 2 minutes. After the timeout, the client enters the CLOSED state and the local port becomes available for new connections. It is also possible to terminate the connection by a 3-way handshake, when host A sends a FIN and host B replies with a FIN & ACK (combining two steps into one) and host A replies with an ACK. Some operating systems, such as Linux, implement a half-duplex close sequence. If the host actively closes a connection while still having unread incoming data available, the host sends the signal RST (losing any received data) instead of FIN. This assures the TCP application that there was data loss. A connection can be in a half-open state, in which case one side has terminated the connection, but the other has not. The side that has terminated can no longer send any data into the connection, but the other side can. The terminating side should continue reading the data until the other side terminates as well. === Resource usage === Most implementations allocate an entry in a table that maps a session to a running operating system process. Because TCP packets do not include a session identifier, both endpoints identify the session using the client's address and port. Whenever a packet is received, the TCP implementation must perform a lookup on this table to find the destination process. Each entry in the table is known as a Transmission Control Block or TCB.
It contains information about the endpoints (IP and port), status of the connection, running data about the packets that are being exchanged and buffers for sending and receiving data. The number of sessions in the server side is limited only by memory and can grow as new connections arrive, but the client must allocate an ephemeral port before sending the first SYN to the server. This port remains allocated during the whole conversation and effectively limits the number of outgoing connections from each of the client's IP addresses. If an application fails to properly close unrequired connections, a client can run out of resources and become unable to establish new TCP connections, even from other applications. Both endpoints must also allocate space for unacknowledged packets and received (but unread) data. === Data transfer === The Transmission Control Protocol differs in several key features compared to the User Datagram Protocol: Ordered data transfer: the destination host rearranges segments according to a sequence number Retransmission of lost packets: any cumulative stream not acknowledged is retransmitted Error-free data transfer: corrupted packets are treated as lost and are retransmitted Flow control: limits the rate a sender transfers data to guarantee reliable delivery. The receiver continually hints to the sender how much data can be received. When the receiving host's buffer fills, the next acknowledgment suspends the transfer and allows the data in the buffer to be processed. Congestion control: lost packets (presumed due to congestion) trigger a reduction in data delivery rate ==== Reliable transmission ==== TCP uses a sequence number to identify each byte of data. The sequence number identifies the order of the bytes sent from each computer so that the data can be reconstructed in order, regardless of any out-of-order delivery that may occur.
The sequence number of the first byte is chosen by the transmitter for the first packet, which is flagged SYN. This number can be arbitrary, and should, in fact, be unpredictable to defend against TCP sequence prediction attacks. Acknowledgments (ACKs) are sent with a sequence number by the receiver of data to tell the sender that data has been received up to the specified byte. ACKs do not imply that the data has been delivered to the application; they merely signify that it is now the receiver's responsibility to deliver the data. Reliability is achieved by the sender detecting lost data and retransmitting it. TCP uses two primary techniques to identify loss: retransmission timeout (RTO) and duplicate cumulative acknowledgments (DupAcks). When a TCP segment is retransmitted, it retains the same sequence number as the original delivery attempt. This conflation of delivery and logical data ordering means that, when an acknowledgment is received after a retransmission, the sender cannot tell whether the original transmission or the retransmission is being acknowledged, the so-called retransmission ambiguity. TCP incurs complexity due to retransmission ambiguity. ===== Duplicate-ACK-based retransmission ===== If a single segment (say segment number 100) in a stream is lost, then the receiver cannot acknowledge packets above that segment number (100) because it uses cumulative ACKs. Hence the receiver acknowledges packet 99 again on the receipt of another data packet. This duplicate acknowledgement is used as a signal for packet loss. That is, if the sender receives three duplicate acknowledgments, it retransmits the last unacknowledged packet. A threshold of three is used because the network may reorder segments, causing duplicate acknowledgements. This threshold has been demonstrated to avoid spurious retransmissions due to reordering. Some TCP implementations use selective acknowledgements (SACKs) to provide explicit feedback about the segments that have been received.
Selective acknowledgment greatly improves TCP's ability to retransmit the right segments. Retransmission ambiguity can cause spurious fast retransmissions and congestion avoidance if there is reordering beyond the duplicate acknowledgment threshold. In the last two decades more packet reordering has been observed over the Internet, which has led TCP implementations, such as the one in the Linux kernel, to adopt heuristic methods to scale the duplicate acknowledgment threshold. Recently, there have been efforts to completely phase out duplicate-ACK-based fast retransmissions and replace them with timer-based ones (not to be confused with the classic RTO discussed below). The time-based loss detection algorithm called Recent Acknowledgment (RACK) has been adopted as the default algorithm in Linux and Windows.
===== Timeout-based retransmission =====
When a sender transmits a segment, it initializes a timer with a conservative estimate of the arrival time of the acknowledgment. The segment is retransmitted if the timer expires, with a new timeout threshold of twice the previous value, resulting in exponential backoff behavior. Typically, the initial timer value is smoothed RTT + max(G, 4 × RTT variation), where G is the clock granularity. This guards against excessive transmission traffic due to faulty or malicious actors, such as man-in-the-middle denial of service attackers. Accurate RTT estimates are important for loss recovery, as they allow a sender to assume an unacknowledged packet to be lost after sufficient time has elapsed (i.e., for determining the RTO). Retransmission ambiguity can lead a sender's estimate of RTT to be imprecise. In an environment with variable RTTs, spurious timeouts can occur: if the RTT is under-estimated, then the RTO fires and triggers a needless retransmit and slow-start.
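The timer initialization and backoff described above can be sketched as follows. The smoothing constants (alpha = 1/8, beta = 1/4) follow RFC 6298; the function names are invented for the example.

```python
def rto_estimate(rtt_samples, g=0.001, k=4):
    """Compute a retransmission timeout from a series of RTT samples
    (seconds): srtt and rttvar are exponentially weighted moving
    averages, and RTO = srtt + max(G, 4 * rttvar), G being the clock
    granularity."""
    srtt = rttvar = None
    for rtt in rtt_samples:
        if srtt is None:                        # first measurement
            srtt, rttvar = rtt, rtt / 2
        else:
            rttvar = 0.75 * rttvar + 0.25 * abs(srtt - rtt)
            srtt = 0.875 * srtt + 0.125 * rtt
    return srtt + max(g, k * rttvar)

def backed_off_rto(rto, expiries):
    """Each timer expiry doubles the timeout: exponential backoff."""
    return rto * (2 ** expiries)
```

With a single 100 ms sample, srtt = 0.1 and rttvar = 0.05, giving an RTO of 0.3 s; two successive expiries would raise it to 1.2 s.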
After a spurious retransmission, when the acknowledgments for the original transmissions arrive, the sender may believe them to be acknowledging the retransmission and conclude, incorrectly, that segments sent between the original transmission and retransmission have been lost, causing further needless retransmissions to the extent that the link truly becomes congested; selective acknowledgment can reduce this effect. RFC 6298 specifies that implementations must not use retransmitted segments when estimating RTT. Karn's algorithm ensures that a good RTT estimate will eventually be produced by waiting until there is an unambiguous acknowledgment before adjusting the RTO. After spurious retransmissions, however, it may take significant time before such an unambiguous acknowledgment arrives, degrading performance in the interim. TCP timestamps also resolve the retransmission ambiguity problem in setting the RTO, though they do not necessarily improve the RTT estimate.
==== Error detection ====
Sequence numbers allow receivers to discard duplicate packets and properly sequence out-of-order packets. Acknowledgments allow senders to determine when to retransmit lost packets. To assure correctness, a checksum field is included; see § Checksum computation for details. The TCP checksum is a weak check by modern standards and is normally paired with a CRC integrity check at layer 2, below both TCP and IP, such as is used in PPP or the Ethernet frame. However, the introduction of errors in packets between CRC-protected hops is common, and the 16-bit TCP checksum catches most of these.
==== Flow control ====
TCP uses an end-to-end flow control protocol to avoid having the sender send data too fast for the TCP receiver to receive and process it reliably. Having a mechanism for flow control is essential in an environment where machines of diverse network speeds communicate.
For example, if a PC sends data to a smartphone that is slowly processing received data, the smartphone must be able to regulate the data flow so as not to be overwhelmed. TCP uses a sliding window flow control protocol. In each TCP segment, the receiver specifies in the receive window field the amount of additionally received data (in bytes) that it is willing to buffer for the connection. The sending host can send only up to that amount of data before it must wait for an acknowledgment and receive window update from the receiving host. When a receiver advertises a window size of 0, the sender stops sending data and starts its persist timer. The persist timer is used to protect TCP from a deadlock situation that could arise if a subsequent window size update from the receiver is lost, and the sender cannot send more data until receiving a new window size update from the receiver. When the persist timer expires, the TCP sender attempts recovery by sending a small packet so that the receiver responds by sending another acknowledgment containing the new window size. If a receiver is processing incoming data in small increments, it may repeatedly advertise a small receive window. This is referred to as the silly window syndrome, since it is inefficient to send only a few bytes of data in a TCP segment, given the relatively large overhead of the TCP header. ==== Congestion control ==== The final main aspect of TCP is congestion control. TCP uses a number of mechanisms to achieve high performance and avoid congestive collapse, a gridlock situation where network performance is severely degraded. These mechanisms control the rate of data entering the network, keeping the data flow below a rate that would trigger collapse. They also yield an approximately max-min fair allocation between flows. Acknowledgments for data sent, or the lack of acknowledgments, are used by senders to infer network conditions between the TCP sender and receiver. 
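The receive-window accounting from the flow-control discussion above can be sketched in a few lines. This is a toy model; the variable names are invented for illustration.

```python
def sendable_bytes(next_seq, last_acked, advertised_window):
    """How much more the sender may transmit: the receiver's advertised
    window minus the data already in flight (sent but unacknowledged).
    A zero advertised window stalls the sender until an update arrives."""
    in_flight = next_seq - last_acked
    return max(0, advertised_window - in_flight)

assert sendable_bytes(5000, 4000, 3000) == 2000  # 1000 in flight, 2000 allowed
assert sendable_bytes(5000, 4000, 0) == 0        # zero window: sender stalls
```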
Coupled with timers, TCP senders and receivers can alter the behavior of the flow of data. This is more generally referred to as congestion control or congestion avoidance. Modern implementations of TCP contain four intertwined algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. In addition, senders employ a retransmission timeout (RTO) that is based on the estimated round-trip time (RTT) between the sender and receiver, as well as the variance in this round-trip time. There are subtleties in the estimation of RTT. For example, senders must be careful when calculating RTT samples for retransmitted packets; typically they use Karn's algorithm or TCP timestamps. These individual RTT samples are then averaged over time to create a smoothed round-trip time (SRTT) using Jacobson's algorithm. This SRTT value is what is used as the round-trip time estimate. Enhancing TCP to reliably handle loss, minimize errors, manage congestion and go fast in very high-speed environments are ongoing areas of research and standards development. As a result, there are a number of TCP congestion avoidance algorithm variations.
=== Maximum segment size ===
The maximum segment size (MSS) is the largest amount of data, specified in bytes, that TCP is willing to receive in a single segment. For best performance, the MSS should be set small enough to avoid IP fragmentation, which can lead to packet loss and excessive retransmissions. To accomplish this, the MSS is typically announced by each side using the MSS option when the TCP connection is established. The option value is derived from the maximum transmission unit (MTU) size of the data link layer of the networks to which the sender and receiver are directly attached. TCP senders can use path MTU discovery to infer the minimum MTU along the network path between the sender and receiver, and use this to dynamically adjust the MSS to avoid IP fragmentation within the network.
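A common rule of thumb for deriving the MSS from the link MTU can be sketched as follows, assuming minimal 20-byte IPv4 and 20-byte TCP headers with no options (real stacks also account for header options):

```python
def mss_from_mtu(mtu, ip_header=20, tcp_header=20):
    """Usable TCP payload per packet once the (minimal, option-free)
    IPv4 and TCP headers are subtracted from the link MTU."""
    return mtu - ip_header - tcp_header

assert mss_from_mtu(1500) == 1460   # standard Ethernet MTU
assert mss_from_mtu(9000) == 8960   # jumbo frames
```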
MSS announcement may also be called MSS negotiation but, strictly speaking, the MSS is not negotiated. Two completely independent values of MSS are permitted for the two directions of data flow in a TCP connection, so there is no need to agree on a common MSS configuration for a bidirectional connection.
=== Selective acknowledgments ===
Relying purely on the cumulative acknowledgment scheme employed by the original TCP can lead to inefficiencies when packets are lost. For example, suppose bytes with sequence numbers 1,000 to 10,999 are sent in 10 different TCP segments of equal size, and the second segment (sequence numbers 2,000 to 2,999) is lost during transmission. In a pure cumulative acknowledgment protocol, the receiver can only send a cumulative ACK value of 2,000 (the sequence number immediately following the last sequence number of the received data) and cannot say that it received bytes 3,000 to 10,999 successfully. Thus the sender may then have to resend all data starting with sequence number 2,000. To alleviate this issue, TCP employs the selective acknowledgment (SACK) option, defined in 1996 in RFC 2018, which allows the receiver to acknowledge discontinuous blocks of packets that were received correctly, in addition to the sequence number immediately following the last sequence number of the last contiguous byte received successfully, as in the basic TCP acknowledgment. The acknowledgment can include a number of SACK blocks, where each SACK block is conveyed by the Left Edge of Block (the first sequence number of the block) and the Right Edge of Block (the sequence number immediately following the last sequence number of the block), with a Block being a contiguous range that the receiver correctly received. In the example above, the receiver would send an ACK segment with a cumulative ACK value of 2,000 and a SACK option header with sequence numbers 3,000 and 11,000.
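The example can be traced in code with a toy model; segments are tracked by 0-based index, and all names are invented for illustration.

```python
def ack_and_sack(received, seg_len=1000, first_seq=1000):
    """`received` holds the 0-based indices of segments that arrived,
    each `seg_len` bytes, starting at sequence number `first_seq`.
    Returns the cumulative ACK and the SACK blocks as
    (left edge, right edge) pairs."""
    i = 0
    while i in received:                 # contiguous prefix of the stream
        i += 1
    cum_ack = first_seq + i * seg_len
    blocks, j = [], i + 1
    top = (max(received) + 1) if received else 0
    while j < top:
        if j in received:
            start = j
            while j in received:         # extend this contiguous block
                j += 1
            blocks.append((first_seq + start * seg_len,
                           first_seq + j * seg_len))
        else:
            j += 1
    return cum_ack, blocks

# Ten 1,000-byte segments; the second (bytes 2,000-2,999) is lost:
cum, blocks = ack_and_sack(set(range(10)) - {1})
assert (cum, blocks) == (2000, [(3000, 11000)])
```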
The sender would accordingly retransmit only the second segment, with sequence numbers 2,000 to 2,999. A TCP sender may interpret an out-of-order segment delivery as a lost segment. If it does so, the TCP sender will retransmit the segment previous to the out-of-order packet and slow its data delivery rate for that connection. The duplicate-SACK option, an extension to the SACK option that was defined in May 2000 in RFC 2883, solves this problem. Once the TCP receiver detects a second duplicate packet, it sends a D-SACK to indicate that no segments were lost, allowing the TCP sender to reinstate the higher transmission rate. The SACK option is not mandatory and comes into operation only if both parties support it. This is negotiated when a connection is established. SACK uses a TCP header option (see § TCP segment structure for details). The use of SACK has become widespread; all popular TCP stacks support it. Selective acknowledgment is also used in the Stream Control Transmission Protocol (SCTP). Selective acknowledgments can be 'reneged', where the receiver unilaterally discards the selectively acknowledged data. RFC 2018 discouraged such behavior, but did not prohibit it, to allow receivers the option of reneging if they, for example, run out of buffer space. The possibility of reneging leads to implementation complexity for both senders and receivers, and also imposes memory costs on the sender.
=== Window scaling ===
For more efficient use of high-bandwidth networks, a larger TCP window size may be used. A 16-bit TCP window size field controls the flow of data and its value is limited to 65,535 bytes. Since the size field cannot be expanded beyond this limit, a scaling factor is used. The TCP window scale option, as defined in RFC 1323, is an option used to increase the maximum window size to 1 gigabyte. Scaling up to these larger window sizes is necessary for TCP tuning. The window scale option is used only during the TCP 3-way handshake.
The window scale value represents the number of bits to left-shift the 16-bit window size field when interpreting it. The window scale value can be set from 0 (no shift) to 14 for each direction independently. Both sides must send the option in their SYN segments to enable window scaling in either direction. Some routers and packet firewalls rewrite the window scaling factor during a transmission. This causes sending and receiving sides to assume different TCP window sizes. The result is non-stable traffic that may be very slow. The problem is visible on some sites behind a defective router.
=== TCP timestamps ===
TCP timestamps, defined in RFC 1323 in 1992, can help TCP determine in which order packets were sent. TCP timestamps are not normally aligned to the system clock and start at some random value. Many operating systems will increment the timestamp for every elapsed millisecond; however, the RFC only states that the ticks should be proportional. There are two timestamp fields:
* a 4-byte sender timestamp value (my timestamp)
* a 4-byte echo reply timestamp value (the most recent timestamp received from you)
TCP timestamps are used in an algorithm known as Protection Against Wrapped Sequence numbers, or PAWS. PAWS is used when the receive window crosses the sequence number wraparound boundary. In the case where a packet was potentially retransmitted, it answers the question: "Is this sequence number in the first 4 GB or the second?" The timestamp is used to break the tie. Also, the Eifel detection algorithm uses TCP timestamps to determine if retransmissions are occurring because packets are lost or simply out of order. TCP timestamps are enabled by default in Linux, and disabled by default in Windows Server 2008, 2012 and 2016. Recent statistics show that the level of TCP timestamp adoption has stagnated, at ~40%, owing to Windows Server dropping support since Windows Server 2008.
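The wraparound comparison that PAWS helps disambiguate can be sketched with the usual signed-difference trick for 32-bit sequence numbers. This is an illustrative helper with an invented name, not an implementation of PAWS itself.

```python
MOD = 1 << 32  # sequence numbers wrap modulo 2^32

def seq_before(a, b):
    """True if 32-bit sequence number `a` logically precedes `b`:
    their difference, taken modulo 2^32, is 'negative' (in the upper
    half of the number space)."""
    return (a - b) % MOD > MOD // 2

# Near the wrap boundary, plain integer comparison gives the wrong answer:
assert seq_before(0xFFFFFF00, 0x00000100)       # just before the wrap...
assert not seq_before(0x00000100, 0xFFFFFF00)   # ...precedes just after it
assert 0xFFFFFF00 > 0x00000100                  # naive comparison disagrees
```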
=== Out-of-band data ===
It is possible to interrupt or abort the queued stream instead of waiting for the stream to finish. This is done by specifying the data as urgent. This marks the transmission as out-of-band data (OOB) and tells the receiving program to process it immediately. When finished, TCP informs the application and resumes the stream queue. An example is when TCP is used for a remote login session where the user can send a keyboard sequence that interrupts or aborts the remotely running program without waiting for the program to finish its current transfer. The urgent pointer only alters the processing on the remote host and doesn't expedite any processing on the network itself. The capability is implemented differently or poorly on different systems or may not be supported. Where it is available, it is prudent to assume only single bytes of OOB data will be reliably handled. Since the feature is not frequently used, it is not well tested on some platforms and has been associated with vulnerabilities, WinNuke for instance.
=== Forcing data delivery ===
Normally, TCP waits 200 ms for a full packet of data to send (Nagle's algorithm tries to group small messages into a single packet). This wait creates small but potentially serious delays if repeated constantly during a file transfer. For example, a typical send block would be 4 KB and a typical MSS is 1460 bytes, so 2 packets go out on a 10 Mbit/s Ethernet taking ~1.2 ms each, followed by a third carrying the remaining 1176 bytes after a 197 ms pause because TCP is waiting for a full buffer. In the case of telnet, each user keystroke is echoed back by the server before the user can see it on the screen. This delay would become very annoying. Setting the socket option TCP_NODELAY overrides the default 200 ms send delay. Application programs use this socket option to force output to be sent after writing a character or line of characters.
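Disabling Nagle's algorithm from an application can be sketched with the standard sockets API (Python shown; the option is set per socket and takes effect immediately):

```python
import socket

# Create a TCP socket and turn off Nagle's algorithm so that small
# writes are sent immediately rather than coalesced into larger packets.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
sock.close()
```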
RFC 793 defines the PSH push bit as "a message to the receiving TCP stack to send this data immediately up to the receiving application". There is no way to indicate or control it in user space using Berkeley sockets; it is controlled by the protocol stack only.
== Vulnerabilities ==
TCP may be attacked in a variety of ways. The results of a thorough security assessment of TCP, along with possible mitigations for the identified issues, were published in 2009, and the work was pursued within the IETF through 2012. Notable vulnerabilities include denial of service, connection hijacking, TCP veto and TCP reset attack.
=== Denial of service ===
By using a spoofed IP address and repeatedly sending purposely assembled SYN packets, followed by many ACK packets, attackers can cause the server to consume large amounts of resources keeping track of the bogus connections. This is known as a SYN flood attack. Proposed solutions to this problem include SYN cookies and cryptographic puzzles, though SYN cookies come with their own set of vulnerabilities. Sockstress is a similar attack that might be mitigated with system resource management. An advanced DoS attack involving the exploitation of the TCP persist timer was analyzed in Phrack No. 66. PUSH and ACK floods are other variants.
=== Connection hijacking ===
An attacker who is able to eavesdrop on a TCP session and redirect packets can hijack a TCP connection. To do so, the attacker learns the sequence number from the ongoing communication and forges a false segment that looks like the next segment in the stream. A simple hijack can result in one packet being erroneously accepted at one end. When the receiving host acknowledges the false segment, synchronization is lost. Hijacking may be combined with ARP spoofing or other routing attacks that allow an attacker to take permanent control of the TCP connection.
Impersonating a different IP address was not difficult prior to RFC 1948 when the initial sequence number was easily guessable. The earlier implementations allowed an attacker to blindly send a sequence of packets that the receiver would believe came from a different IP address, without the need to intercept communication through ARP or routing attacks: it is enough to ensure that the legitimate host of the impersonated IP address is down, or bring it to that condition using denial-of-service attacks. This is why the initial sequence number is now chosen at random. === TCP veto === An attacker who can eavesdrop and predict the size of the next packet to be sent can cause the receiver to accept a malicious payload without disrupting the existing connection. The attacker injects a malicious packet with the sequence number and a payload size of the next expected packet. When the legitimate packet is ultimately received, it is found to have the same sequence number and length as a packet already received and is silently dropped as a normal duplicate packet—the legitimate packet is vetoed by the malicious packet. Unlike in connection hijacking, the connection is never desynchronized and communication continues as normal after the malicious payload is accepted. TCP veto gives the attacker less control over the communication but makes the attack particularly resistant to detection. The only evidence to the receiver that something is amiss is a single duplicate packet, a normal occurrence in an IP network. The sender of the vetoed packet never sees any evidence of an attack. == TCP ports == A TCP connection is identified by a four-tuple of the source address, source port, destination address, and destination port. Port numbers are used to identify different services, and to allow multiple connections between hosts. TCP uses 16-bit port numbers, providing 65,536 possible values for each of the source and destination ports. 
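The four-tuple can be observed directly with the sockets API. The sketch below (a hypothetical helper name, using a loopback listener) shows the operating system picking an ephemeral source port for the client automatically:

```python
import socket

def demo_four_tuple():
    """Open a loopback TCP connection and return the client's four-tuple:
    (source address, source port, destination address, destination port)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))            # port 0: let the OS choose
    srv.listen(1)
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()
    four = cli.getsockname() + cli.getpeername()
    for s in (cli, conn, srv):
        s.close()
    return four

src_ip, src_port, dst_ip, dst_port = demo_four_tuple()
# src_port is an ephemeral port chosen by the OS; dst_port is the listener's.
```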
The dependency of connection identity on addresses means that TCP connections are bound to a single network path; TCP cannot use other routes that multihomed hosts have available, and connections break if an endpoint's address changes. Port numbers are categorized into three basic categories: well-known, registered, and dynamic or private. The well-known ports are assigned by the Internet Assigned Numbers Authority (IANA) and are typically used by system-level processes. Well-known applications running as servers and passively listening for connections typically use these ports. Some examples include: FTP (20 and 21), SSH (22), Telnet (23), SMTP (25), HTTP over SSL/TLS (443), and HTTP (80). Registered ports are typically used by end-user applications as ephemeral source ports when contacting servers, but they can also identify named services that have been registered by a third party. Dynamic or private ports can also be used by end-user applications; however, these ports typically do not carry any meaning outside a particular TCP connection. Network Address Translation (NAT) typically uses dynamic port numbers on the public-facing side to disambiguate the flow of traffic that is passing between a public network and a private subnetwork, thereby allowing many IP addresses (and their ports) on the subnet to be serviced by a single public-facing address.
== Development ==
TCP is a complex protocol. However, while significant enhancements have been made and proposed over the years, its most basic operation has not changed significantly since its first specification RFC 675 in 1974 and the v4 specification RFC 793, published in September 1981. RFC 1122, published in October 1989, clarified a number of TCP protocol implementation requirements. A list of the 8 required specifications and over 20 strongly encouraged enhancements is available in RFC 7414.
Among this list is RFC 2581, TCP Congestion Control, one of the most important TCP-related RFCs in recent years; it describes updated algorithms that avoid undue congestion. In 2001, RFC 3168 was written to describe Explicit Congestion Notification (ECN), a congestion avoidance signaling mechanism. The original TCP congestion avoidance algorithm was known as TCP Tahoe, but many alternative algorithms have since been proposed (including TCP Reno, TCP Vegas, FAST TCP, TCP New Reno, and TCP Hybla). Multipath TCP (MPTCP) is an ongoing effort within the IETF that aims at allowing a TCP connection to use multiple paths to maximize resource usage and increase redundancy. The redundancy offered by Multipath TCP in the context of wireless networks enables the simultaneous use of different networks, which brings higher throughput and better handover capabilities. Multipath TCP also brings performance benefits in datacenter environments. The reference implementation of Multipath TCP was developed in the Linux kernel. Multipath TCP is used to support the Siri voice recognition application on iPhones, iPads and Macs. tcpcrypt is an extension proposed in July 2010 to provide transport-level encryption directly in TCP itself. It is designed to work transparently and not require any configuration. Unlike TLS (SSL), tcpcrypt itself does not provide authentication, but provides simple primitives down to the application to do that. The tcpcrypt RFC was published by the IETF in May 2019. TCP Fast Open is an extension to speed up the opening of successive TCP connections between two endpoints. It works by skipping the three-way handshake using a cryptographic cookie. It is similar to an earlier proposal called T/TCP, which was not widely adopted due to security issues. TCP Fast Open was published as RFC 7413 in 2014. Proposed in May 2013, Proportional Rate Reduction (PRR) is a TCP extension developed by Google engineers.
PRR ensures that the TCP window size after recovery is as close to the slow start threshold as possible. The algorithm is designed to improve the speed of recovery and is the default congestion control algorithm in Linux 3.2+ kernels. === Deprecated proposals === TCP Cookie Transactions (TCPCT) is an extension proposed in December 2009 to secure servers against denial-of-service attacks. Unlike SYN cookies, TCPCT does not conflict with other TCP extensions such as window scaling. TCPCT was designed due to necessities of DNSSEC, where servers have to handle large numbers of short-lived TCP connections. In 2016, TCPCT was deprecated in favor of TCP Fast Open. The status of the original RFC was changed to historic. == Hardware implementations == One way to overcome the processing power requirements of TCP is to build hardware implementations of it, widely known as TCP offload engines (TOE). The main problem of TOEs is that they are hard to integrate into computing systems, requiring extensive changes in the operating system of the computer or device. == Wire image and ossification == The wire data of TCP provides significant information-gathering and modification opportunities to on-path observers, as the protocol metadata is transmitted in cleartext. While this transparency is useful to network operators and researchers, information gathered from protocol metadata may reduce the end-user's privacy. This visibility and malleability of metadata has led to TCP being difficult to extend—a case of protocol ossification—as any intermediate node (a 'middlebox') can make decisions based on that metadata or even modify it, breaking the end-to-end principle. One measurement found that a third of paths across the Internet encounter at least one intermediary that modifies TCP metadata, and 6.5% of paths encounter harmful ossifying effects from intermediaries. 
Avoiding extensibility hazards from intermediaries placed significant constraints on the design of MPTCP, and difficulties caused by intermediaries have hindered the deployment of TCP Fast Open in web browsers. Another source of ossification is the difficulty of modification of TCP functions at the endpoints, typically in the operating system kernel or in hardware with a TCP offload engine. == Performance == As TCP provides applications with the abstraction of a reliable byte stream, it can suffer from head-of-line blocking: if packets are reordered or lost and need to be retransmitted (and thus are reordered), data from sequentially later parts of the stream may be received before sequentially earlier parts of the stream; however, the later data cannot typically be used until the earlier data has been received, incurring network latency. If multiple independent higher-level messages are encapsulated and multiplexed onto a single TCP connection, then head-of-line blocking can cause processing of a fully-received message that was sent later to wait for delivery of a message that was sent earlier. Web browsers attempt to mitigate head-of-line blocking by opening multiple parallel connections. This incurs the cost of connection establishment repeatedly, as well as multiplying the resources needed to track those connections at the endpoints. Parallel connections also have congestion control operating independently of each other, rather than being able to pool information together and respond more promptly to observed network conditions; TCP's aggressive initial sending patterns can cause congestion if multiple parallel connections are opened; and the per-connection fairness model leads to a monopolization of resources by applications that take this approach. Connection establishment is a major contributor to latency as experienced by web users. TCP's three-way handshake introduces one RTT of latency during connection establishment before data can be sent. 
For short flows, these delays are very significant. Transport Layer Security (TLS) requires a handshake of its own for key exchange at connection establishment. Because of the layered design, the TCP handshake and the TLS handshake proceed serially; the TLS handshake cannot begin until the TCP handshake has concluded. Two RTTs are required for connection establishment with TLS 1.2 over TCP. TLS 1.3 allows for zero RTT connection resumption in some circumstances, but, when layered over TCP, one RTT is still required for the TCP handshake, and this cannot assist the initial connection; zero RTT handshakes also present cryptographic challenges, as efficient, replay-safe and forward secure non-interactive key exchange is an open research topic. TCP Fast Open allows the transmission of data in the initial (i.e., SYN and SYN-ACK) packets, removing one RTT of latency during connection establishment. However, TCP Fast Open has been difficult to deploy due to protocol ossification; as of 2020, no Web browsers used it by default. TCP throughput is affected by packet reordering. Reordered packets can cause duplicate acknowledgments to be sent, which, if they cross a threshold, will then trigger a spurious retransmission and congestion control. Transmission behavior can also become bursty, as large ranges are acknowledged all at once when a reordered packet at the range's start is received (in a manner similar to how head-of-line blocking affects applications). Blanton & Allman (2002) found that throughput was inversely related to the amount of reordering, up to a threshold where all reordering triggers spurious retransmission. Mitigating reordering depends on a sender's ability to determine that it has sent a spurious retransmission, and hence on resolving retransmission ambiguity. Reducing reordering-induced spurious retransmissions may slow recovery from genuine loss. 
Selective acknowledgment can provide a significant benefit to throughput; Bruyeron, Hemon & Zhang (1998) measured gains of up to 45%. An important factor in the improvement is that selective acknowledgment can more often avoid going into slow start after a loss and can hence better use available bandwidth. However, TCP can only selectively acknowledge a maximum of three blocks of sequence numbers. This can limit the retransmission rate and hence loss recovery or cause needless retransmissions, especially in high-loss environments. TCP was originally designed for wired networks where packet loss is considered to be the result of network congestion and the congestion window size is reduced dramatically as a precaution. However, wireless links are known to experience sporadic and usually temporary losses due to fading, shadowing, hand off, interference, and other radio effects, that are not strictly congestion. After the (erroneous) back-off of the congestion window size, due to wireless packet loss, there may be a congestion avoidance phase with a conservative decrease in window size. This causes the radio link to be underused. Extensive research on combating these harmful effects has been conducted. Suggested solutions can be categorized as end-to-end solutions, which require modifications at the client or server, link layer solutions, such as Radio Link Protocol in cellular networks, or proxy-based solutions which require some changes in the network without modifying end nodes. A number of alternative congestion control algorithms, such as Vegas, Westwood, Veno, and Santa Cruz, have been proposed to help solve the wireless problem. == Acceleration == The idea of a TCP accelerator is to terminate TCP connections inside the network processor and then relay the data to a second connection toward the end system. 
The data packets that originate from the sender are buffered at the accelerator node, which is responsible for performing local retransmissions in the event of packet loss. Thus, in case of losses, the feedback loop between the sender and the receiver is shortened to the one between the acceleration node and the receiver which guarantees a faster delivery of data to the receiver. Since TCP is a rate-adaptive protocol, the rate at which the TCP sender injects packets into the network is directly proportional to the prevailing load condition within the network as well as the processing capacity of the receiver. The prevalent conditions within the network are judged by the sender on the basis of the acknowledgments received by it. The acceleration node splits the feedback loop between the sender and the receiver and thus guarantees a shorter round trip time (RTT) per packet. A shorter RTT is beneficial as it ensures a quicker response time to any changes in the network and a faster adaptation by the sender to combat these changes. Disadvantages of the method include the fact that the TCP session has to be directed through the accelerator; this means that if routing changes so that the accelerator is no longer in the path, the connection will be broken. It also destroys the end-to-end property of the TCP ACK mechanism; when the ACK is received by the sender, the packet has been stored by the accelerator, not delivered to the receiver. == Debugging == A packet sniffer, which taps TCP traffic on a network link, can be useful in debugging networks, network stacks, and applications that use TCP by showing an engineer what packets are passing through a link. Some networking stacks support the SO_DEBUG socket option, which can be enabled on the socket using setsockopt. That option dumps all the packets, TCP states, and events on that socket, which is helpful in debugging. Netstat is another utility that can be used for debugging. 
== Alternatives == For many applications TCP is not appropriate. The application cannot normally access the packets coming after a lost packet until the retransmitted copy of the lost packet is received. This causes problems for real-time applications such as streaming media, real-time multiplayer games and voice over IP (VoIP) where it is generally more useful to get most of the data in a timely fashion than it is to get all of the data in order. For historical and performance reasons, most storage area networks (SANs) use Fibre Channel Protocol (FCP) over Fibre Channel connections. For embedded systems, network booting, and servers that serve simple requests from huge numbers of clients (e.g. DNS servers) the complexity of TCP can be a problem. Tricks such as transmitting data between two hosts that are both behind NAT (using STUN or similar systems) are far simpler without a relatively complex protocol like TCP in the way. Generally, where TCP is unsuitable, the User Datagram Protocol (UDP) is used. This provides the same application multiplexing and checksums that TCP does, but does not handle streams or retransmission, giving the application developer the ability to code them in a way suitable for the situation, or to replace them with other methods such as forward error correction or error concealment. Stream Control Transmission Protocol (SCTP) is another protocol that provides reliable stream-oriented services similar to TCP. It is newer and considerably more complex than TCP, and has not yet seen widespread deployment. However, it is especially designed to be used in situations where reliability and near-real-time considerations are important. Venturi Transport Protocol (VTP) is a patented proprietary protocol that is designed to replace TCP transparently to overcome perceived inefficiencies related to wireless data transport. The TCP congestion avoidance algorithm works very well for ad-hoc environments where the data sender is not known in advance. 
If the environment is predictable, a timing-based protocol such as Asynchronous Transfer Mode (ATM) can avoid TCP's retransmission overhead. UDP-based Data Transfer Protocol (UDT) has better efficiency and fairness than TCP in networks that have a high bandwidth-delay product. Multipurpose Transaction Protocol (MTP/IP) is patented proprietary software that is designed to adaptively achieve high throughput and transaction performance in a wide variety of network conditions, particularly those where TCP is perceived to be inefficient. == Checksum computation == === TCP checksum for IPv4 === When TCP runs over IPv4, the method used to compute the checksum is defined as follows: The checksum field is the 16-bit ones' complement of the ones' complement sum of all 16-bit words in the header and text. The checksum computation needs to ensure the 16-bit alignment of the data being summed. If a segment contains an odd number of header and text octets, alignment can be achieved by padding the last octet with zeros on its right to form a 16-bit word for checksum purposes. The pad is not transmitted as part of the segment. While computing the checksum, the checksum field itself is replaced with zeros. In other words, after appropriate padding, all 16-bit words are added using ones' complement arithmetic. The sum is then bitwise complemented and inserted as the checksum field. A pseudo-header that mimics the IPv4 packet header is used in the checksum computation. The checksum is computed over the following fields:
Source address: 32 bits
The source address in the IPv4 header.
Destination address: 32 bits
The destination address in the IPv4 header.
Zeroes: 8 bits
All zeroes.
Protocol: 8 bits
The protocol value for TCP: 6.
TCP length: 16 bits
The length of the TCP header and data (measured in octets).
For example, suppose we have an IPv4 packet with a Total Length of 200 bytes and an IHL value of 5, which indicates a header length of 5 × 32 bits = 160 bits = 20 bytes.
We can compute the TCP length as (Total Length) − (IPv4 Header Length), i.e. 200 − 20 = 180 bytes. === TCP checksum for IPv6 === When TCP runs over IPv6, the method used to compute the checksum is changed: Any transport or other upper-layer protocol that includes the addresses from the IP header in its checksum computation must be modified for use over IPv6, to include the 128-bit IPv6 addresses instead of 32-bit IPv4 addresses. A pseudo-header that mimics the IPv6 header is used in the computation of the checksum. The checksum is computed over the following fields:
Source address: 128 bits
The address in the IPv6 header.
Destination address: 128 bits
The final destination; if the IPv6 packet doesn't contain a Routing header, TCP uses the destination address in the IPv6 header; otherwise, at the originating node, it uses the address in the last element of the Routing header, and, at the receiving node, it uses the destination address in the IPv6 header.
TCP length: 32 bits
The length of the TCP header and data (measured in octets).
Zeroes: 24 bits
All zeroes.
Next header: 8 bits
The protocol value for TCP: 6.
=== Checksum offload === Many TCP/IP software stack implementations provide options to use hardware assistance to automatically compute the checksum in the network adapter prior to transmission onto the network or upon reception from the network for validation. This may reduce CPU load associated with calculating the checksum, potentially increasing overall network performance. This feature may cause packet analyzers that are unaware or uncertain about the use of checksum offload to report invalid checksums in outbound packets that have not yet reached the network adapter. This will only occur for packets that are intercepted before being transmitted by the network adapter; all packets transmitted by the network adapter on the wire will have valid checksums.
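The ones'-complement procedure over the IPv4 pseudo-header can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation; the function names are my own and callers supply the raw addresses and a segment whose checksum field is zeroed:

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """Sum 16-bit big-endian words with end-around carry, padding odd lengths."""
    if len(data) % 2:
        data += b"\x00"  # pad the last octet with zeros on its right
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def tcp_checksum_ipv4(src: bytes, dst: bytes, segment: bytes) -> int:
    """Checksum a TCP segment (checksum field zeroed) over the IPv4 pseudo-header."""
    # Pseudo-header: source address, destination address, a zero byte,
    # protocol number 6 (TCP), and the TCP length in octets.
    pseudo = src + dst + struct.pack("!BBH", 0, 6, len(segment))
    return ~ones_complement_sum(pseudo + segment) & 0xFFFF
```

Verification of a received segment uses the same sum: with the transmitted checksum left in place, the ones' complement sum over pseudo-header and segment comes out to 0xFFFF.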
This issue can also occur when monitoring packets being transmitted between virtual machines on the same host, where a virtual device driver may omit the checksum calculation (as an optimization), knowing that the checksum will be calculated later by the VM host kernel or its physical hardware. == See also == == Notes == == References == == Bibliography == === Requests for Comments === Cerf, Vint; Dalal, Yogen; Sunshine, Carl (December 1974). Specification of Internet Transmission Control Program, December 1974 Version. doi:10.17487/RFC0675. RFC 675. Postel, Jon (September 1981). Internet Protocol. doi:10.17487/RFC0791. RFC 791. Postel, Jon (September 1981). Transmission Control Protocol. doi:10.17487/RFC0793. RFC 793. Braden, Robert, ed. (October 1989). Requirements for Internet Hosts – Communication Layers. doi:10.17487/RFC1122. RFC 1122. Jacobson, Van; Braden, Bob; Borman, Dave (May 1992). TCP Extensions for High Performance. doi:10.17487/RFC1323. RFC 1323. Bellovin, Steven M. (May 1996). Defending Against Sequence Number Attacks. doi:10.17487/RFC1948. RFC 1948. Mathis, Matt; Mahdavi, Jamshid; Floyd, Sally; Romanow, Allyn (October 1996). TCP Selective Acknowledgment Options. doi:10.17487/RFC2018. RFC 2018. Allman, Mark; Paxson, Vern; Stevens, W. Richard (April 1999). TCP Congestion Control. doi:10.17487/RFC2581. RFC 2581. Floyd, Sally; Mahdavi, Jamshid; Mathis, Matt; Podolsky, Matthew (July 2000). An Extension to the Selective Acknowledgement (SACK) Option for TCP. doi:10.17487/RFC2883. RFC 2883. Ramakrishnan, K. K.; Floyd, Sally; Black, David (September 2001). The Addition of Explicit Congestion Notification (ECN) to IP. doi:10.17487/RFC3168. RFC 3168. Ludwig, Reiner; Meyer, Michael (April 2003). The Eifel Detection Algorithm for TCP. doi:10.17487/RFC3522. RFC 3522. Spring, Neil; Weatherall, David; Ely, David (June 2003). Robust Explicit Congestion Notification (ECN) Signaling with Nonces. doi:10.17487/RFC3540. RFC 3540. 
Allman, Mark; Paxson, Vern; Blanton, Ethan (September 2009). TCP Congestion Control. doi:10.17487/RFC5681. RFC 5681. Simpson, William Allen (January 2011). TCP Cookie Transactions (TCPCT). doi:10.17487/RFC6013. RFC 6013. Ford, Alan; Raiciu, Costin; Handley, Mark; Barre, Sebastien; Iyengar, Janardhan (March 2011). Architectural Guidelines for Multipath TCP Development. doi:10.17487/RFC6182. RFC 6182. Paxson, Vern; Allman, Mark; Chu, H.K. Jerry; Sargent, Matt (June 2011). Computing TCP's Retransmission Timer. doi:10.17487/RFC6298. RFC 6298. Ford, Alan; Raiciu, Costin; Handley, Mark; Bonaventure, Olivier (January 2013). TCP Extensions for Multipath Operation with Multiple Addresses. doi:10.17487/RFC6824. RFC 6824. Mathis, Matt; Dukkipati, Nandita; Cheng, Yuchung (May 2013). Proportional Rate Reduction for TCP. doi:10.17487/RFC6937. RFC 6937. Borman, David; Braden, Bob; Jacobson, Van (September 2014). Scheffenegger, Richard (ed.). TCP Extensions for High Performance. doi:10.17487/RFC7323. RFC 7323. Duke, Martin; Braden, Robert; Eddy, Wesley M.; Blanton, Ethan; Zimmermann, Alexander (February 2015). A Roadmap for Transmission Control Protocol (TCP) Specification Documents. doi:10.17487/RFC7414. RFC 7414. Cheng, Yuchung; Chu, Jerry; Radhakrishnan, Sivasankar; Jain, Arvind (December 2014). TCP Fast Open. doi:10.17487/RFC7413. RFC 7413. Zimmermann, Alexander; Eddy, Wesley M.; Eggert, Lars (April 2016). Moving Outdated TCP Extensions and TCP-Related Documents to Historic or Informational Status. doi:10.17487/RFC7805. RFC 7805. Fairhurst, Gorry; Trammell, Brian; Kuehlewind, Mirja, eds. (March 2017). Services Provided by IETF Transport Protocols and Congestion Control Mechanisms. doi:10.17487/RFC8095. RFC 8095. Cheng, Yuchung; Cardwell, Neal; Dukkipati, Nandita; Jha, Priyaranjan, eds. (February 2021). The RACK-TLP Loss Detection Algorithm for TCP. doi:10.17487/RFC8985. RFC 8985. Deering, Stephen E.; Hinden, Robert M. (July 2017). 
Internet Protocol, Version 6 (IPv6) Specification. doi:10.17487/RFC8200. RFC 8200. Trammell, Brian; Kuehlewind, Mirja (April 2019). The Wire Image of a Network Protocol. doi:10.17487/RFC8546. RFC 8546. Hardie, Ted, ed. (April 2019). Transport Protocol Path Signals. doi:10.17487/RFC8558. RFC 8558. Iyengar, Jana; Swett, Ian, eds. (May 2021). QUIC Loss Detection and Congestion Control. doi:10.17487/RFC9002. RFC 9002. Fairhurst, Gorry; Perkins, Colin (July 2021). Considerations around Transport Header Confidentiality, Network Operations, and the Evolution of Internet Transport Protocols. doi:10.17487/RFC9065. RFC 9065. Thomson, Martin; Pauly, Tommy (December 2021). Long-Term Viability of Protocol Extension Mechanisms. doi:10.17487/RFC9170. RFC 9170. Eddy, Wesley M., ed. (August 2022). Transmission Control Protocol (TCP). doi:10.17487/RFC9293. RFC 9293. === Other documents === Allman, Mark; Paxson, Vern (October 1999). "On estimating end-to-end network path properties". ACM SIGCOMM Computer Communication Review. 29 (4): 263–274. doi:10.1145/316194.316230. hdl:2060/20000004338. Bhat, Divyashri; Rizk, Amr; Zink, Michael (June 2017). "Not so QUIC: A Performance Study of DASH over QUIC". NOSSDAV'17: Proceedings of the 27th Workshop on Network and Operating Systems Support for Digital Audio and Video. pp. 13–18. doi:10.1145/3083165.3083175. S2CID 32671949. Blanton, Ethan; Allman, Mark (January 2002). "On making TCP more robust to packet reordering" (PDF). ACM SIGCOMM Computer Communication Review. 32: 20–30. doi:10.1145/510726.510728. S2CID 15305731. Briscoe, Bob; Brunstrom, Anna; Petlund, Andreas; Hayes, David; Ros, David; Tsang, Ing-Jyh; Gjessing, Stein; Fairhurst, Gorry; Griwodz, Carsten; Welzl, Michael (2016). "Reducing Internet Latency: A Survey of Techniques and Their Merits". IEEE Communications Surveys & Tutorials. 18 (3): 2149–2196. doi:10.1109/COMST.2014.2375213. hdl:2164/8018. S2CID 206576469. Bruyeron, Renaud; Hemon, Bruno; Zhang, Lixia (April 1998).
"Experimentations with TCP selective acknowledgment". ACM SIGCOMM Computer Communication Review. 28 (2): 54–77. doi:10.1145/279345.279350. S2CID 15954837. Chen, Shan; Jero, Samuel; Jagielski, Matthew; Boldyreva, Alexandra; Nita-Rotaru, Cristina (2021). "Secure Communication Channel Establishment: TLS 1.3 (Over TCP Fast Open) versus QUIC". Journal of Cryptology. 34 (3). doi:10.1007/s00145-021-09389-w. S2CID 235174220. Corbet, Jonathan (8 December 2015). "Checksum offloads and protocol ossification". LWN.net. Corbet, Jonathan (29 January 2018). "QUIC as a solution to protocol ossification". LWN.net. Edeline, Korian; Donnet, Benoit (2019). A Bottom-Up Investigation of the Transport-Layer Ossification. 2019 Network Traffic Measurement and Analysis Conference (TMA). doi:10.23919/TMA.2019.8784690. Ghedini, Alessandro (26 July 2018). "The Road to QUIC". The Cloudflare Blog. Cloudflare. Gurtov, Andrei; Floyd, Sally (February 2004). Resolving Acknowledgment Ambiguity in non-SACK TCP (PDF). Next Generation Teletraffic and Wired/Wireless Advanced Networking (NEW2AN'04). Gurtov, Andrei; Ludwig, Reiner (2003). Responding to Spurious Timeouts in TCP (PDF). IEEE INFOCOM 2003. Twenty-second Annual Joint Conference of the IEEE Computer and Communications Societies. doi:10.1109/INFCOM.2003.1209251. Hesmans, Benjamin; Duchene, Fabien; Paasch, Christoph; Detal, Gregory; Bonaventure, Olivier (2013). Are TCP extensions middlebox-proof?. HotMiddlebox '13. CiteSeerX 10.1.1.679.6364. doi:10.1145/2535828.2535830. IETF HTTP Working Group. "HTTP/2 Frequently Asked Questions". Karn, Phil; Partridge, Craig (November 1991). "Improving round-trip time estimates in reliable transport protocols". ACM Transactions on Computer Systems. 9 (4): 364–373. doi:10.1145/118544.118549. Ludwig, Reiner; Katz, Randy Howard (January 2000). "The Eifel algorithm: making TCP robust against spurious retransmissions". ACM SIGCOMM Computer Communication Review. doi:10.1145/505688.505692. Marx, Robin (3 December 2020). 
"Head-of-Line Blocking in QUIC and HTTP/3: The Details". Paasch, Christoph; Bonaventure, Olivier (1 April 2014). "Multipath TCP". Communications of the ACM. 57 (4): 51–57. doi:10.1145/2578901. hdl:2078.1/141195. S2CID 17581886. Papastergiou, Giorgos; Fairhurst, Gorry; Ros, David; Brunstrom, Anna; Grinnemo, Karl-Johan; Hurtig, Per; Khademi, Naeem; Tüxen, Michael; Welzl, Michael; Damjanovic, Dragana; Mangiante, Simone (2017). "De-Ossifying the Internet Transport Layer: A Survey and Future Perspectives". IEEE Communications Surveys & Tutorials. 19: 619–639. doi:10.1109/COMST.2016.2626780. hdl:2164/8317. S2CID 1846371. Rybczyńska, Marta (13 March 2020). "A QUIC look at HTTP/3". LWN.net. Sy, Erik; Mueller, Tobias; Burkert, Christian; Federrath, Hannes; Fischer, Mathias (2020). "Enhanced Performance and Privacy for TLS over TCP Fast Open". Proceedings on Privacy Enhancing Technologies. 2020 (2): 271–287. arXiv:1905.03518. doi:10.2478/popets-2020-0027. Zhang, Lixia (5 August 1986). "Why TCP timers don't work well". ACM SIGCOMM Computer Communication Review. 16 (3): 397–405. doi:10.1145/1013812.18216. == Further reading == Stevens, W. Richard (1994-01-10). TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley Pub. Co. ISBN 978-0-201-63346-7. Stevens, W. Richard; Wright, Gary R (1994). TCP/IP Illustrated, Volume 2: The Implementation. Addison-Wesley. ISBN 978-0-201-63354-2. Stevens, W. Richard (1996). TCP/IP Illustrated, Volume 3: TCP for Transactions, HTTP, NNTP, and the UNIX Domain Protocols. Addison-Wesley. ISBN 978-0-201-63495-2. == External links == Oral history interview with Robert E. Kahn IANA Port Assignments IANA TCP Parameters John Kristoff's Overview of TCP (Fundamental concepts behind TCP and how it is used to transport data between two endpoints) Checksum example
Wikipedia/Transmission_control_protocol
Network planning and design is an iterative process, encompassing topological design, network-synthesis, and network-realization, and is aimed at ensuring that a new telecommunications network or service meets the needs of the subscriber and operator. The process can be tailored according to each new network or service. == A network planning methodology == A traditional network planning methodology in the context of business decisions involves five layers of planning, namely: business planning; long-term and medium-term network planning; short-term network planning; IT asset sourcing; and operations and maintenance. Each of these layers incorporates plans for different time horizons, i.e. the business planning layer determines the planning that the operator must perform to ensure that the network will perform as required for its intended life-span. The Operations and Maintenance layer, however, examines how the network will run on a day-to-day basis. The network planning process begins with the acquisition of external information. This includes: forecasts of how the new network/service will operate; economic information concerning costs; and technical details of the network’s capabilities. Planning a new network/service involves implementing the new system across the first four layers of the OSI Reference Model. Choices must be made for the protocols and transmission technologies. The network planning process involves three main steps: Topological design: This stage involves determining where to place the components and how to connect them. The (topological) optimization methods that can be used in this stage come from an area of mathematics called graph theory. These methods involve determining the costs of transmission and the cost of switching, and thereby determining the optimum connection matrix and location of switches and concentrators.
Network-synthesis: This stage involves determining the size of the components used, subject to performance criteria such as the grade of service (GoS). The method used is known as "Nonlinear Optimisation", and involves determining the topology, required GoS, cost of transmission, etc., and using this information to calculate a routing plan, and the size of the components. Network realization: This stage involves determining how to meet capacity requirements, and ensure reliability within the network. The method used is known as "Multicommodity Flow Optimisation", and involves determining all information relating to demand, costs, and reliability, and then using this information to calculate an actual physical circuit plan. These steps are performed iteratively in parallel with one another. == The role of forecasting == During the process of network planning and design, estimates are made of the expected traffic intensity and traffic load that the network must support. If a network of a similar nature already exists, traffic measurements of such a network can be used to calculate the exact traffic load. If there are no similar networks, then the network planner must use telecommunications forecasting methods to estimate the expected traffic intensity. The forecasting process involves several steps: definition of the problem; data acquisition; choice of forecasting method; analysis/forecasting; and documentation and analysis of results. == Dimensioning == Dimensioning a new network determines the minimum capacity requirements that will still allow the teletraffic grade of service (GoS) requirements to be met. To do this, dimensioning involves planning for peak-hour traffic, i.e. the hour of the day during which traffic intensity is at its peak.
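One classical tool for dimensioning circuits against a GoS target is the Erlang B formula, which gives the blocking probability for a given busy-hour traffic load offered to a given number of channels. A minimal sketch (the function names and the example traffic figures are illustrative, not from any particular planning tool):

```python
def erlang_b(traffic: float, channels: int) -> float:
    """Blocking probability for `traffic` erlangs offered to `channels` circuits.

    Uses the numerically stable recurrence:
    B(0) = 1; B(n) = A*B(n-1) / (n + A*B(n-1)).
    """
    blocking = 1.0
    for n in range(1, channels + 1):
        blocking = traffic * blocking / (n + traffic * blocking)
    return blocking

def channels_for_gos(traffic: float, target_blocking: float) -> int:
    """Smallest number of channels meeting the target blocking probability."""
    n = 1
    while erlang_b(traffic, n) > target_blocking:
        n += 1
    return n

# E.g. dimensioning for 10 erlangs of busy-hour traffic at a 1% blocking GoS:
print(channels_for_gos(10.0, 0.01))
```

The recurrence form avoids the factorials of the textbook formula, so it stays stable for large channel counts.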
The dimensioning process involves determining the network’s topology, routing plan, traffic matrix, and GoS requirements, and using this information to determine the maximum call handling capacity of the switches, and the maximum number of channels required between the switches. This process requires a complex model that simulates the behavior of the network equipment and routing protocols. A dimensioning rule is that the planner must ensure that the traffic load never approaches 100 percent of capacity. To calculate the correct dimensioning to comply with this rule, the planner must take ongoing measurements of the network’s traffic, and continuously maintain and upgrade resources to meet the changing requirements. Another reason for overprovisioning is to make sure that traffic can be rerouted in case a failure occurs in the network. Because of its complexity, network dimensioning is typically done using specialized software tools. Whereas researchers typically develop custom software to study a particular problem, network operators typically make use of commercial network planning software. == Traffic engineering == Compared to network engineering, which adds resources such as links, routers, and switches into the network, traffic engineering changes traffic paths on the existing network to alleviate congestion or accommodate more traffic demand. This technology is critical when the cost of network expansion is prohibitively high and the network load is not optimally balanced: the high cost of expansion provides the financial motivation for traffic engineering, while the load imbalance makes its deployment possible. == Survivability == Network survivability enables the network to maintain maximum connectivity and quality of service under failure conditions. It has been one of the critical requirements in network planning and design. It involves design requirements on topology, protocol, bandwidth allocation, etc.
A topology requirement can be maintaining a minimally two-connected network against the failure of any single link or node. Protocol requirements include using a dynamic routing protocol to reroute traffic against network dynamics during the transition of network dimensioning or equipment failures. Bandwidth allocation requirements proactively allocate extra bandwidth to avoid traffic loss under failure conditions. This topic has been actively studied in conferences, such as the International Workshop on Design of Reliable Communication Networks (DRCN). == Data-driven network design == More recently, with the increasing role of Artificial Intelligence technologies in engineering, the idea of using data to create data-driven models of existing networks has been proposed. By analyzing large amounts of network data, the less desirable behaviors that may occur in real-world networks can also be understood, worked around, and avoided in future designs. Both the design and management of networked systems can be improved by a data-driven paradigm. Data-driven models can also be used at various phases of the service and network management life cycle, such as service instantiation, service provisioning, optimization, monitoring, and diagnostics. == See also == Core-and-pod Network Partition for Optimization Optimal network design - an optimization problem of constructing a network which minimizes the total travel cost. == References ==
Wikipedia/Network_planning_and_design
An intranet is a computer network for sharing information, easier communication, collaboration tools, operational systems, and other computing services within an organization, usually to the exclusion of access by outsiders. The term is used in contrast to public networks, such as the Internet, but uses the same technology based on the Internet protocol suite. An organization-wide intranet can constitute an important focal point of internal communication and collaboration, and provide a single starting point to access internal and external resources. In its simplest form, an intranet is established with the technologies for local area networks (LANs) and wide area networks (WANs). Many modern intranets have search engines, user profiles, blogs, mobile apps with notifications, and events planning within their infrastructure. An intranet is sometimes contrasted to an extranet. While an intranet is generally restricted to employees of the organization, extranets may also be accessed by customers, suppliers, or other approved parties. Extranets extend a private network onto the Internet with special provisions for authentication, authorization and accounting (AAA protocol). == Uses == Intranets are increasingly being used to deliver tools, such as for collaboration (to facilitate working in groups and teleconferencing) or corporate directories, sales and customer relationship management, or project management. Intranets are also used as corporate culture-change platforms. For example, a large number of employees using an intranet forum application to host a discussion about key issues could come up with new ideas related to management, productivity, quality, and other corporate issues. In large intranets, website traffic is often similar to public website traffic and can be better understood by using web metrics software to track overall activity. User surveys also improve intranet website effectiveness. 
Larger businesses allow users within their intranet to access the public internet through firewall servers, which can screen incoming and outgoing messages to keep security intact. When part of an intranet is made accessible to customers and others outside the business, it becomes part of an extranet. Businesses can send private messages through the public network, using special encryption/decryption and other security safeguards to connect one part of their intranet to another. Intranet user-experience, editorial, and technology teams work together to produce in-house sites. Most commonly, intranets are managed by the communications, HR, or CIO departments of large organizations, or some combination of these. Because of the scope and variety of content and the number of system interfaces, the intranets of many organizations are much more complex than their respective public websites. Intranets and the use of intranets are growing rapidly. According to the Intranet Design Annual 2007 from Nielsen Norman Group, the number of pages on participants' intranets averaged 200,000 over the years 2001 to 2003 and has grown to an average of 6 million pages over 2005–2007.
Using hypermedia and Web technology, Web publishing allows for the maintenance of and easy access to cumbersome corporate knowledge, such as employee manuals, benefits documents, company policies, business standards, news feeds, and even training, all of which can be accessed throughout a company using common Internet standards (Acrobat files, Flash files, CGI applications). Because each business unit can update the online copy of a document, the most recent version is usually available to employees using the intranet. Intranets are also used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise. Intranets allow organizations to distribute information to employees on an as-needed basis; employees may link to relevant information at their convenience rather than being distracted indiscriminately by email. The intranet can also be linked to a company's management information system, such as a timekeeping system. Information is easily accessible to all authorised users, enabling collaboration. Being able to communicate in real-time through integrated third-party tools, such as an instant messenger, promotes the sharing of ideas and removes blockages to communication to help boost a business's productivity. Intranets can serve as powerful tools for communicating (such as through chat, email and/or blogs) within a given organization about vertically strategic initiatives that have a global reach throughout said organization. The type of information that can easily be conveyed is the purpose of the initiative and what it is aiming to achieve, who is driving it, results achieved to date, and whom to speak to for more information. By providing this information on the intranet, staff can keep up-to-date with the strategic focus of their organization.
For example, when Nestlé had a number of food processing plants in Scandinavia, their central support system had to deal with a number of queries every day. When Nestlé decided to invest in an intranet, they quickly realized the savings. Gerry McGovern says that the savings from the reduction in query calls was substantially greater than the investment in the intranet. Users can view information and data via a web browser rather than maintaining physical documents such as procedure manuals, internal phone list and requisition forms. This can potentially save the business money on printing, duplicating documents, and the environment, as well as document maintenance overhead. For example, the HRM company PeopleSoft "derived significant cost savings by shifting HR processes to the intranet". McGovern goes on to say the manual cost of enrolling in benefits was found to be US$109.48 per enrollment. "Shifting this process to the intranet reduced the cost per enrollment to $21.79; a saving of 80 percent". Another company that saved money on expense reports was Cisco. "In 1996, Cisco processed 54,000 reports and the amount of dollars processed was USD19 million". Many companies dictate computer specifications which, in turn, may allow Intranet developers to write applications that only have to work on one browser such that there are no cross-browser compatibility issues. Being able to specifically address one's "viewer" is a great advantage. Since intranets are user-specific (requiring database/network authentication prior to access), users know exactly who they are interfacing with and can personalize their intranet based on role (job title, department) or individual ("Congratulations Jane, on your 3rd year with our company!"). Since "involvement in decision making" is one of the main drivers of employee engagement, offering tools (like forums or surveys) that foster peer-to-peer collaboration and employee participation can make employees feel more valued and involved. 
== Planning and creation == Most organizations devote considerable resources to the planning and implementation of their intranet, as it is of strategic importance to the organization's success. Some of the planning would include topics such as determining the purpose and goals of the intranet, identifying persons or departments responsible for implementation and management, and devising functional plans, page layouts, and designs. The appropriate staff would also ensure that implementation schedules and phase-out of existing systems were organized, while defining and implementing security of the intranet and ensuring it lies within legal boundaries and other constraints. In order to produce a high-value end product, systems planners should determine the level of interactivity (e.g. wikis, on-line forms) desired. Planners may also consider whether the input of new data and updating of existing data is to be centrally controlled or devolved. These decisions sit alongside the hardware and software considerations (like content management systems), participation issues (like good taste, harassment, confidentiality), and features to be supported. Intranets are often static sites; they are a shared drive, serving up centrally stored documents alongside internal articles or communications (often one-way communication). By leveraging firms which specialise in 'social' intranets, organisations are beginning to think of how their intranets can become a 'communication hub' for their entire team. The actual implementation would include steps such as securing senior management support and funding, conducting a business requirement analysis, and identifying users' information needs. From the technical perspective, there would need to be a coordinated installation of the web server and user access network, the required user/client applications, and the creation of a document framework (or template) for the content to be hosted.
The end-user should be involved in testing and promoting use of the company intranet, possibly through a parallel adoption methodology or pilot programme. In the long term, the company should carry out ongoing measurement and evaluation, including benchmarking against other company services. == Maintenance == Some aspects of an intranet are not static and require ongoing attention. === Staying current === An intranet needs key personnel committed to maintaining it and keeping content current. For feedback on the intranet, a forum can be provided for users to indicate what they want and what they do not like. === Privacy protection === The European Union's General Data Protection Regulation went into effect in May 2018. Since then, protecting the privacy of employees, customers and other stakeholders (e.g. consultants) has become an increasingly significant concern for most companies, at least those with an interest in markets and countries where privacy regulations are in place. == Enterprise private network == An enterprise private network is a computer network built by a business to interconnect its various company sites (such as production sites, offices and shops) in order to share computer resources. Beginning with the digitalisation of telecommunication networks, which started in the 1970s in the US with AT&T, and propelled by the growth in computer systems availability and demands, enterprise networks were built for decades without the need to append the term private to them. The networks were operated over telecommunications networks and, as for voice communications, a certain amount of security and secrecy was expected and delivered. But with the Internet in the 1990s came a new type of network, the virtual private network, built over this public infrastructure and using encryption to protect the data traffic from eavesdropping.
Enterprise networks are now commonly referred to as enterprise private networks in order to clarify that they are private networks, in contrast to public networks.
== See also ==
eGranary Digital Library
Enterprise portal
Intranet portal
Intranet strategies
Intranet Wiki
Kwangmyong (network)
Virtual workplace
Web portal
== References ==
Wikipedia/Enterprise_private_network
A process control block (PCB), also sometimes called a process descriptor, is a data structure used by a computer operating system to store all the information about a process. When a process is created (initialized or installed), the operating system creates a corresponding process control block, which specifies and tracks the process state (i.e. new, ready, running, waiting or terminated). Because it holds this per-process information, the PCB plays a key role in context switching. An operating system kernel stores PCBs in a process table. The current working directory of a process is one of the properties that the kernel stores in the process's PCB. == Role == The role of the PCB is central in process management: PCBs are accessed and/or modified by most OS utilities, particularly those involved with scheduling and resource management. == Structure == In multitasking operating systems, the PCB stores the data needed for correct and efficient process management. Though the details of these structures are system-dependent, common elements fall into three main categories:
Process identification
Process state
Process control
Status tables exist for each relevant entity, such as memory, I/O devices, files and processes. Memory tables, for example, contain information about the allocation of main and secondary (virtual) memory for each process, authorization attributes for accessing memory areas shared among different processes, etc. I/O tables may have entries stating the availability of a device or its assignment to a process, the status of I/O operations, the location of memory buffers used for them, etc. Process identification data include a unique identifier for the process (almost invariably an integer) and, in a multiuser-multitasking system, data such as the identifier of the parent process, the user identifier, the user group identifier, etc. The process id is particularly relevant since it is often used to cross-reference the tables defined above, e.g.
showing which process is using which I/O devices or memory areas. Process state data define the status of a process when it is suspended, allowing the OS to restart it later. This always includes the contents of the general-purpose CPU registers, the CPU process status word, the stack and frame pointers, etc. During a context switch, the running process is stopped and another process runs: the kernel must stop the execution of the running process, copy out the values in the hardware registers to its PCB, and update the hardware registers with the values from the PCB of the new process. Process control information is used by the OS to manage the process itself. This includes:
Process scheduling state – the state of the process in terms of "ready", "suspended", etc., and other scheduling information, such as a priority value and the amount of time elapsed since the process gained control of the CPU or since it was suspended; for a suspended process, identification data for the event the process is waiting on must also be recorded
Process structuring information – the ids of the process's children, or the ids of other processes related to the current one in some functional way, which may be represented as a queue, a ring or another data structure
Interprocess communication information – flags, signals and messages associated with communication among independent processes
Process privileges – allowed/disallowed access to system resources
Process state – new, ready, running, waiting or dead
Process number (PID) – a unique identification number for each process (also known as the process ID)
Program counter (PC) – a pointer to the address of the next instruction to be executed for this process
CPU registers – the register contents that must be saved when the process leaves the running state and restored when it resumes execution
CPU scheduling information – information used to schedule the process's CPU time
Memory management information – page table, memory limits, segment table
Accounting information – amount of CPU time used, time limits, execution ID, etc.
I/O status information – list of I/O devices allocated to the process
== Location == The PCB must be kept in an area of memory protected from normal process access. In some operating systems the PCB is placed at the bottom of the process stack. == See also ==
Thread control block (TCB)
Process Environment Block (PEB)
Program segment prefix (PSP)
Data segment
Task Control Block for the equivalent in IBM mainframe software
== Notes ==
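The PCB fields and the context-switch procedure described above can be sketched as a small simulation. This is an illustrative Python model, not a real kernel; the field names follow the categories in the article, and the `cpu` dictionary stands in for the hardware register file:

```python
from dataclasses import dataclass, field
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

@dataclass
class PCB:
    pid: int                                        # process identification
    state: State = State.NEW                        # process scheduling state
    program_counter: int = 0                        # address of next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU scheduling information
    open_files: list = field(default_factory=list)  # I/O status information

# The kernel's process table maps each PID to its PCB.
process_table: dict = {}

def create_process(pid: int) -> PCB:
    """Create a PCB for a new process and enter it in the process table."""
    pcb = PCB(pid=pid, state=State.READY)
    process_table[pid] = pcb
    return pcb

def context_switch(cpu: dict, old_pid: int, new_pid: int) -> None:
    """Save the hardware state into the old PCB, then load the new PCB's state."""
    old = process_table[old_pid]
    old.registers = dict(cpu["registers"])   # copy out hardware registers to its PCB
    old.program_counter = cpu["pc"]
    old.state = State.READY
    new = process_table[new_pid]
    cpu["registers"] = dict(new.registers)   # restore registers from the new PCB
    cpu["pc"] = new.program_counter
    new.state = State.RUNNING
```

For example, creating two processes and switching from the first to the second marks the first READY (with its program counter and registers preserved in its PCB) and the second RUNNING, with the CPU state reloaded from the incoming PCB.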
Wikipedia/Process_control_block
In computer networking, a workgroup is a collection of computers connected on a LAN that share common resources and responsibilities. Workgroup is Microsoft's term for a peer-to-peer local area network. Computers running Microsoft operating systems in the same workgroup may share files, printers, or an Internet connection. A workgroup contrasts with a domain, in which computers rely on centralized authentication. == See also ==
Windows for Workgroups – the earliest version of Windows to support workgroups
Windows HomeGroup – a feature introduced in Windows 7 and later removed in Windows 10 (version 1803) that allows workgroups to share content more easily
Browser service – the service that enables 'browsing' all the resources in workgroups
Peer Name Resolution Protocol (PNRP) – IPv6-based dynamic name publication and resolution
== References == == External links == Workgroup Server Protocol Program (WSPP)
Wikipedia/Workgroup_(computer_networking)